Dataset columns:
  author        string   (3–31 chars)
  claps         string   (1–5 chars)
  reading_time  int64    (values 2–31)
  link          string   (92–277 chars)
  title         string   (24–104 chars)
  text          string   (1.35k–44.5k chars)
Michael Jordan
34K
16
https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7?source=tag_archive---------7----------------
Artificial Intelligence — The Revolution Hasn’t Happened Yet
Artificial Intelligence (AI) is the mantra of the current era. The phrase is intoned by technologists, academicians, journalists and venture capitalists alike. As with many phrases that cross over from technical academic fields into general circulation, there is significant misunderstanding accompanying the use of the phrase. But this is not the classical case of the public not understanding the scientists — here the scientists are often as befuddled as the public. The idea that our era is somehow seeing the emergence of an intelligence in silicon that rivals our own entertains all of us — enthralling us and frightening us in equal measure. And, unfortunately, it distracts us. There is a different narrative that one can tell about the current era. Consider the following story, which involves humans, computers, data and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to 1 in 20.” She further let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis. But amniocentesis was risky — the risk of killing the fetus during the procedure was roughly 1 in 300. Being a statistician, I determined to find out where these numbers were coming from. To cut a long story short, I discovered that a statistical analysis had been done a decade previously in the UK, where these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. But I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I went back to tell the geneticist that I believed that the white spots were likely false positives — that they were literally “white noise.” She said “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; it’s when the new machine arrived.” We didn’t do the amniocentesis, and a healthy girl was born a few months later. But the episode troubled me, particularly after a back-of-the-envelope calculation convinced me that many thousands of people had gotten that diagnosis that same day worldwide, that many of them had opted for amniocentesis, and that a number of babies had died needlessly. And this happened day after day until it somehow got fixed. The problem that this episode revealed wasn’t about my individual medical care; it was about a medical system that measured variables and outcomes in various places and times, conducted statistical analyses, and made use of the results in other places and times. The problem had to do not just with data analysis per se, but with what database researchers call “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight. 
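The statistics behind this anecdote are a straightforward application of Bayes' rule, and a few lines of code make the provenance point concrete: the quoted "1 in 20" depends on the marker's false-positive rate, which is exactly what changes when a higher-resolution machine starts flagging calcium specks the original UK study never saw. The numbers below are illustrative assumptions, not clinical rates:

```python
# Minimal Bayes-rule illustration of the anecdote above.
# All numbers are illustrative assumptions, not clinical data.

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | marker observed) via Bayes' rule."""
    p_marker = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_marker

prior = 1 / 700          # assumed baseline risk of the condition
sensitivity = 0.5        # assumed P(marker seen | condition present)

# False-positive rate as calibrated on the older, lower-resolution machine:
print(posterior(prior, sensitivity, 0.012))   # ~0.056, roughly the quoted "1 in 20"

# A higher-resolution machine that flags specks far more often ("white noise")
# dilutes the evidence the marker carries:
print(posterior(prior, sensitivity, 0.12))    # ~0.006, far below 1 in 20
```

With the higher assumed false-positive rate, the same observation moves the risk only modestly above baseline, which is the substance of the "white noise" interpretation.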
I’m also a computer scientist, and it occurred to me that the principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics, and taking into account human utilities, were nowhere to be found in my education. And it occurred to me that the development of such principles — which will be needed not only in the medical domain but also in domains such as commerce, transportation and education — were at least as important as those of building AI systems that can dazzle us with their game-playing or sensorimotor skills. Whether or not we come to understand “intelligence” any time soon, we do have a major challenge on our hands in bringing together computers and humans in ways that enhance human life. While this challenge is viewed by some as subservient to the creation of “artificial intelligence,” it can also be viewed more prosaically — but with no less reverence — as the creation of a new branch of engineering. Much like civil engineering and chemical engineering in decades past, this new discipline aims to corral the power of a few key ideas, bringing new resources and capabilities to people, and doing so safely. Whereas civil engineering and chemical engineering were built on physics and chemistry, this new engineering discipline will be built on ideas that the preceding century gave substance to — ideas such as “information,” “algorithm,” “data,” “uncertainty,” “computing,” “inference,” and “optimization.” Moreover, since much of the focus of the new discipline will be on data from and about humans, its development will require perspectives from the social sciences and humanities. While the building blocks have begun to emerge, the principles for putting these blocks together have not yet emerged, and so the blocks are currently being put together in ad-hoc ways. Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference-and-decision-making systems that involve machines, humans and the environment. Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws. And, unfortunately, we are not very good at anticipating what the next emerging serious flaw will be. What we’re missing is an engineering discipline with its principles of analysis and design. The current public dialog about these issues too often uses “AI” as an intellectual wildcard, one that makes it difficult to reason about the scope and consequences of emerging technology. Let us begin by considering more carefully what “AI” has been used to refer to, both recently and historically. Most of what is being called “AI” today, particularly in the public sphere, is what has been called “Machine Learning” (ML) for the past several decades. ML is an algorithmic field that blends ideas from statistics, computer science and many other disciplines (see below) to design algorithms that process data, make predictions and help make decisions. In terms of impact on the real world, ML is the real thing, and not just recently. 
Indeed, that ML would grow into massive industrial relevance was already clear in the early 1990s, and by the turn of the century forward-looking companies such as Amazon were already using ML throughout their business, solving mission-critical back-end problems in fraud detection and supply-chain prediction, and building innovative consumer-facing services such as recommendation systems. As datasets and computing resources grew rapidly over the ensuing two decades, it became clear that ML would soon power not only Amazon but essentially any company in which decisions could be tied to large-scale data. New business models would emerge. The phrase “Data Science” began to be used to refer to this phenomenon, reflecting the need of ML algorithms experts to partner with database and distributed-systems experts to build scalable, robust ML systems, and reflecting the larger social and environmental scope of the resulting systems. This confluence of ideas and technology trends has been rebranded as “AI” over the past few years. This rebranding is worthy of some scrutiny. Historically, the phrase “AI” was coined in the late 1950’s to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level intelligence. We will use the phrase “human-imitative AI” to refer to this aspiration, emphasizing the notion that the artificially intelligent entity should seem to be one of us, if not physically at least mentally (whatever that might mean). This was largely an academic enterprise. While related academic fields such as operations research, statistics, pattern recognition, information theory and control theory already existed, and were often inspired by human intelligence (and animal intelligence), these fields were arguably focused on “low-level” signals and decisions. The ability of, say, a squirrel to perceive the three-dimensional structure of the forest it lives in, and to leap among its branches, was inspirational to these fields. “AI” was meant to focus on something different — the “high-level” or “cognitive” capability of humans to “reason” and to “think.” Sixty years later, however, high-level reasoning and thought remain elusive. The developments which are now being called “AI” arose mostly in the engineering fields associated with low-level pattern recognition and movement control, and in the field of statistics — the discipline focused on finding patterns in data and on making well-founded predictions, tests of hypotheses and decisions. Indeed, the famous “backpropagation” algorithm that was rediscovered by David Rumelhart in the early 1980s, and which is now viewed as being at the core of the so-called “AI revolution,” first arose in the field of control theory in the 1950s and 1960s. One of its early applications was to optimize the thrusts of the Apollo spaceships as they headed towards the moon. Since the 1960s much progress has been made, but it has arguably not come about from the pursuit of human-imitative AI. Rather, as in the case of the Apollo spaceships, these ideas have often been hidden behind the scenes, and have been the handiwork of researchers focused on specific engineering challenges. 
Although not visible to the general public, research and systems-building in areas such as document retrieval, text classification, fraud detection, recommendation systems, personalized search, social network analysis, planning, diagnostics and A/B testing have been a major success — these are the advances that have powered companies such as Google, Netflix, Facebook and Amazon. One could simply agree to refer to all of this as “AI,” and indeed that is what appears to have happened. Such labeling may come as a surprise to optimization or statistics researchers, who wake up to find themselves suddenly referred to as “AI researchers.” But labeling of researchers aside, the bigger problem is that the use of this single, ill-defined acronym prevents a clear understanding of the range of intellectual and commercial issues at play. The past two decades have seen major progress — in industry and academia — in a complementary aspiration to human-imitative AI that is often referred to as “Intelligence Augmentation” (IA). Here computation and data are used to create services that augment human intelligence and creativity. A search engine can be viewed as an example of IA (it augments human memory and factual knowledge), as can natural language translation (it augments the ability of a human to communicate). Computing-based generation of sounds and images serves as a palette and creativity enhancer for artists. While services of this kind could conceivably involve high-level reasoning and thought, currently they don’t — they mostly perform various kinds of string-matching and numerical operations that capture patterns that humans can make use of. Hoping that the reader will tolerate one last acronym, let us conceive broadly of a discipline of “Intelligent Infrastructure” (II), whereby a web of computation, data and physical entities exists that makes human environments more supportive, interesting and safe. Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce and finance, with vast implications for individual humans and societies. This emergence sometimes arises in conversations about an “Internet of Things,” but that effort generally refers to the mere problem of getting “things” onto the Internet — not to the far grander set of challenges associated with these “things” capable of analyzing those data streams to discover facts about the world, and interacting with humans and other “things” at a far higher level of abstraction than mere bits. For example, returning to my personal anecdote, we might imagine living our lives in a “societal-scale medical system” that sets up data flows, and data-analysis flows, between doctors and devices positioned in and around human bodies, thereby able to aid human intelligence in making diagnoses and providing care. The system would incorporate information from cells in the body, DNA, blood tests, environment, population genetics and the vast scientific literature on drugs and treatments. It would not just focus on a single patient and a doctor, but on relationships among all humans — just as current medical testing allows experiments done on one set of humans (or animals) to be brought to bear in the care of other humans. It would help maintain notions of relevance, provenance and reliability, in the way that the current banking system focuses on such challenges in the domain of finance and payment. 
And, while one can foresee many problems arising in such a system — involving privacy issues, liability issues, security issues, etc — these problems should properly be viewed as challenges, not show-stoppers. We now come to a critical issue: Is working on classical human-imitative AI the best or only way to focus on these larger challenges? Some of the most heralded recent success stories of ML have in fact been in areas associated with human-imitative AI — areas such as computer vision, speech recognition, game-playing and robotics. So perhaps we should simply await further progress in domains such as these. There are two points to make here. First, although one would not know it from reading the newspapers, success in human-imitative AI has in fact been limited — we are very far from realizing human-imitative AI aspirations. Unfortunately the thrill (and fear) of making even limited progress on human-imitative AI gives rise to levels of over-exuberance and media attention that is not present in other areas of engineering. Second, and more importantly, success in these domains is neither sufficient nor necessary to solve important IA and II problems. On the sufficiency side, consider self-driving cars. For such technology to be realized, a range of engineering problems will need to be solved that may have little relationship to human competencies (or human lack-of-competencies). The overall transportation system (an II system) will likely more closely resemble the current air-traffic control system than the current collection of loosely-coupled, forward-facing, inattentive human drivers. It will be vastly more complex than the current air-traffic control system, specifically in its use of massive amounts of data and adaptive statistical modeling to inform fine-grained decisions. It is those challenges that need to be in the forefront, and in such an effort a focus on human-imitative AI may be a distraction. As for the necessity argument, it is sometimes argued that the human-imitative AI aspiration subsumes IA and II aspirations, because a human-imitative AI system would not only be able to solve the classical problems of AI (as embodied, e.g., in the Turing test), but it would also be our best bet for solving IA and II problems. Such an argument has little historical precedent. Did civil engineering develop by envisaging the creation of an artificial carpenter or bricklayer? Should chemical engineering have been framed in terms of creating an artificial chemist? Even more polemically: if our goal was to build chemical factories, should we have first created an artificial chemist who would have then worked out how to build a chemical factory? A related argument is that human intelligence is the only kind of intelligence that we know, and that we should aim to mimic it as a first step. But humans are in fact not very good at some kinds of reasoning — we have our lapses, biases and limitations. Moreover, critically, we did not evolve to perform the kinds of large-scale decision-making that modern II systems must face, nor to cope with the kinds of uncertainty that arise in II contexts. One could argue that an AI system would not only imitate human intelligence, but also “correct” it, and would also scale to arbitrarily large problems. But we are now in the realm of science fiction — such speculative arguments, while entertaining in the setting of fiction, should not be our principal strategy going forward in the face of the critical IA and II problems that are beginning to emerge. 
We need to solve IA and II problems on their own merits, not as a mere corollary to a human-imitative AI agenda. It is not hard to pinpoint algorithmic and infrastructure challenges in II systems that are not central themes in human-imitative AI research. II systems require the ability to manage distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions and they must deal with long-tail phenomena whereby there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries. Finally, and of particular importance, II systems must bring economic ideas such as incentives and pricing into the realm of the statistical and computational infrastructures that link humans to each other and to valued goods. Such II systems can be viewed as not merely providing a service, but as creating markets. There are domains such as music, literature and journalism that are crying out for the emergence of such markets, where data analysis links producers and consumers. And this must all be done within the context of evolving societal, ethical and legal norms. Of course, classical human-imitative AI problems remain of great interest as well. However, the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems that perform natural language processing, the need to infer and represent causality, the need to develop computationally-tractable representations of uncertainty and the need to develop systems that formulate and pursue long-term goals. These are classical goals in human-imitative AI, but in the current hubbub over the “AI revolution,” it is easy to forget that they are not yet solved. IA will also remain quite essential, because for the foreseeable future, computers will not be able to match humans in their ability to reason abstractly about real-world situations. We will need well-thought-out interactions of humans and computers to solve our most pressing problems. And we will want computers to trigger new levels of human creativity, not replace human creativity (whatever that might mean). It was John McCarthy (while a professor at Dartmouth, and soon to take a position at MIT) who coined the term “AI,” apparently to distinguish his budding research agenda from that of Norbert Wiener (then an older professor at MIT). Wiener had coined “cybernetics” to refer to his own vision of intelligent systems — a vision that was closely tied to operations research, statistics, pattern recognition, information theory and control theory. McCarthy, on the other hand, emphasized the ties to logic. In an interesting reversal, it is Wiener’s intellectual agenda that has come to dominate in the current era, under the banner of McCarthy’s terminology. (This state of affairs is surely, however, only temporary; the pendulum swings more in AI than in most fields.) But we need to move beyond the particular historical perspectives of McCarthy and Wiener. 
We need to realize that the current public dialog on AI — which focuses on a narrow subset of industry and a narrow subset of academia — risks blinding us to the challenges and opportunities that are presented by the full scope of AI, IA and II. This scope is less about the realization of science-fiction dreams or nightmares of super-human machines, and more about the need for humans to understand and shape technology as it becomes ever more present and influential in their daily lives. Moreover, in this understanding and shaping there is a need for a diverse set of voices from all walks of life, not merely a dialog among the technologically attuned. Focusing narrowly on human-imitative AI prevents an appropriately wide range of voices from being heard. While industry will continue to drive many developments, academia will also continue to play an essential role, not only in providing some of the most innovative technical ideas, but also in bringing researchers from the computational and statistical disciplines together with researchers from other disciplines whose contributions and perspectives are sorely needed — notably the social sciences, the cognitive sciences and the humanities. On the other hand, while the humanities and the sciences are essential as we go forward, we should also not pretend that we are talking about something other than an engineering effort of unprecedented scale and scope — society is aiming to build new kinds of artifacts. These artifacts should be built to work as claimed. We do not want to build systems that help us with medical treatments, transportation options and commercial opportunities to find out after the fact that these systems don’t really work — that they make errors that take their toll in terms of human lives and happiness. In this regard, as I have emphasized, there is an engineering discipline yet to emerge for the data-focused and learning-focused fields. As exciting as these latter fields appear to be, they cannot yet be viewed as constituting an engineering discipline. Moreover, we should embrace the fact that what we are witnessing is the creation of a new branch of engineering. The term “engineering” is often invoked in a narrow sense — in academia and beyond — with overtones of cold, affectless machinery, and negative connotations of loss of control by humans. But an engineering discipline can be what we want it to be. In the current era, we have a real opportunity to conceive of something historically new — a human-centric engineering discipline. I will resist giving this emerging discipline a name, but if the acronym “AI” continues to be used as placeholder nomenclature going forward, let’s be aware of the very real limitations of this placeholder. Let’s broaden our scope, tone down the hype and recognize the serious challenges ahead. Michael I. Jordan From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Michael I. Jordan is a Professor in the Department of Electrical Engineering and Computer Sciences and the Department of Statistics at UC Berkeley.
Milo Spencer-Harper
7.8K
6
https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1?source=tag_archive---------8----------------
How to build a simple neural network in 9 lines of Python code
As part of my quest to learn about AI, I set myself the goal of building a simple neural network in Python. To ensure I truly understand it, I had to build it from scratch without using a neural network library. Thanks to an excellent blog post by Andrew Trask I achieved my goal. Here it is in just 9 lines of code: In this blog post, I’ll explain how I did it, so you can build your own. I’ll also provide a longer, but more beautiful version of the source code. But first, what is a neural network? The human brain consists of 100 billion cells called neurons, connected together by synapses. If sufficient synaptic inputs to a neuron fire, that neuron will also fire. We call this process “thinking”. We can model this process by creating a neural network on a computer. It’s not necessary to model the biological complexity of the human brain at a molecular level, just its higher level rules. We use a mathematical technique called matrices, which are grids of numbers. To make it really simple, we will just model a single neuron, with three inputs and one output. We’re going to train the neuron to solve the problem below. The first four examples are called a training set. Can you work out the pattern? Should the ‘?’ be 0 or 1? You might have noticed, that the output is always equal to the value of the leftmost input column. Therefore the answer is the ‘?’ should be 1. Training process But how do we teach our neuron to answer the question correctly? We will give each input a weight, which can be a positive or negative number. An input with a large positive weight or a large negative weight, will have a strong effect on the neuron’s output. Before we start, we set each weight to a random number. Then we begin the training process: Eventually the weights of the neuron will reach an optimum for the training set. If we allow the neuron to think about a new situation, that follows the same pattern, it should make a good prediction. This process is called back propagation. Formula for calculating the neuron’s output You might be wondering, what is the special formula for calculating the neuron’s output? First we take the weighted sum of the neuron’s inputs, which is: Next we normalise this, so the result is between 0 and 1. For this, we use a mathematically convenient function, called the Sigmoid function: If plotted on a graph, the Sigmoid function draws an S shaped curve. So by substituting the first equation into the second, the final formula for the output of the neuron is: You might have noticed that we’re not using a minimum firing threshold, to keep things simple. Formula for adjusting the weights During the training cycle (Diagram 3), we adjust the weights. But how much do we adjust the weights by? We can use the “Error Weighted Derivative” formula: Why this formula? First we want to make the adjustment proportional to the size of the error. Secondly, we multiply by the input, which is either a 0 or a 1. If the input is 0, the weight isn’t adjusted. Finally, we multiply by the gradient of the Sigmoid curve (Diagram 4). To understand this last one, consider that: The gradient of the Sigmoid curve, can be found by taking the derivative: So by substituting the second equation into the first equation, the final formula for adjusting the weights is: There are alternative formulae, which would allow the neuron to learn more quickly, but this one has the advantage of being fairly simple. 
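The nine-line program itself is embedded in the original post as an image, so it does not survive in this text. A minimal sketch of a single sigmoid neuron trained with the error-weighted-derivative rule just described (the training set is the one from the table in the post; the seed and iteration count are assumptions) might look like this:

```python
# Sketch of a nine-line, single-neuron network as described above.
# Training data, seed and iteration count are assumptions, not the
# author's exact listing, which is embedded as an image in the post.
from numpy import exp, array, random, dot

training_set_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
training_set_outputs = array([[0, 1, 1, 0]]).T
random.seed(1)
synaptic_weights = 2 * random.random((3, 1)) - 1
for iteration in range(10000):
    output = 1 / (1 + exp(-dot(training_set_inputs, synaptic_weights)))
    # Error-weighted derivative: error * input * gradient of the sigmoid.
    synaptic_weights += dot(training_set_inputs.T,
                            (training_set_outputs - output) * output * (1 - output))
print(1 / (1 + exp(-dot(array([1, 0, 0]), synaptic_weights))))
```

On this data the trained neuron outputs a value very close to 1 for the new input [1, 0, 0], consistent with the result quoted later in the post.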
Constructing the Python code Although we won’t use a neural network library, we will import four methods from a Python mathematics library called numpy. These are: For example we can use the array() method to represent the training set shown earlier: The ‘.T’ function, transposes the matrix from horizontal to vertical. So the computer is storing the numbers like this. Ok. I think we’re ready for the more beautiful version of the source code. Once I’ve given it to you, I’ll conclude with some final thoughts. I have added comments to my source code to explain everything, line by line. Note that in each iteration we process the entire training set simultaneously. Therefore our variables are matrices, which are grids of numbers. Here is a complete working example written in Python: Also available here: https://github.com/miloharper/simple-neural-network Final thoughts Try running the neural network using this Terminal command: python main.py You should get a result that looks like: We did it! We built a simple neural network using Python! First the neural network assigned itself random weights, then trained itself using the training set. Then it considered a new situation [1, 0, 0] and predicted 0.99993704. The correct answer was 1. So very close! Traditional computer programs normally can’t learn. What’s amazing about neural networks is that they can learn, adapt and respond to new situations. Just like the human mind. Of course that was just 1 neuron performing a very simple task. But what if we hooked millions of these neurons together? Could we one day create something conscious? I’ve been inspired by the huge response this article has received. I’m considering creating an online course. Click here to tell me what topic to cover. I’d love to hear your feedback. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Studied Economics at Oxford University. Founder of www.moju.io. Interested in politics and AI. Technology trends and New Invention? Follow this collection to update the latest trend! [UPDATE] As a collection editor, I don’t have any permission to add your articles in the wild. Please submit your article and I will approve. Also, follow this collection, please.
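The "complete working example" referenced above is embedded in the original post (and mirrored at the linked GitHub repository) rather than reproduced in this text. A sketch of the longer, class-based version, with structure and naming guided by the prose rather than copied from the author's listing, might be:

```python
# Class-based sketch of the longer version described above; structure and
# naming are assumptions guided by the prose, not the author's exact code.
from numpy import exp, array, random, dot


class NeuralNetwork:
    def __init__(self):
        # Seed the random number generator so results are reproducible.
        random.seed(1)
        # One neuron with 3 inputs and 1 output: a 3x1 matrix of weights
        # drawn from the range -1 to 1.
        self.synaptic_weights = 2 * random.random((3, 1)) - 1

    def __sigmoid(self, x):
        # Squash the weighted sum to a value between 0 and 1.
        return 1 / (1 + exp(-x))

    def __sigmoid_derivative(self, x):
        # Gradient of the sigmoid, expressed in terms of its output.
        return x * (1 - x)

    def think(self, inputs):
        # Pass inputs through the single neuron.
        return self.__sigmoid(dot(inputs, self.synaptic_weights))

    def train(self, inputs, outputs, iterations):
        for _ in range(iterations):
            prediction = self.think(inputs)
            error = outputs - prediction
            # Error-weighted derivative rule: adjust in proportion to the
            # error, the input, and the slope of the sigmoid curve.
            adjustment = dot(inputs.T, error * self.__sigmoid_derivative(prediction))
            self.synaptic_weights += adjustment


if __name__ == "__main__":
    net = NeuralNetwork()
    training_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
    training_outputs = array([[0, 1, 1, 0]]).T
    net.train(training_inputs, training_outputs, 10000)
    # Consider a new situation the network has never seen.
    print(net.think(array([1, 0, 0])))
```

Run with `python main.py` (as the post suggests); the prediction for [1, 0, 0] comes out very close to 1.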
Greg Fish
1
4
https://worldofweirdthings.com/looking-for-a-ghost-in-the-machine-4c997c4da45b?source=tag_archive---------0----------------
looking for a ghost in the machine – [ weird things ]
A short while ago, I wrote about some of the challenges involved in creating artificial intelligence and raised the question of how exactly a machine would spontaneously attain self-awareness. While I’ve gotten plenty of feedback about how far technology has come so far and how it’s imminent that machines will become much smarter than us, I never got any specifics as to how exactly this would happen. To me, it’s not a philosophical question because I’m used to looking at technology from a design and development standpoint. When I ask for specifics, I’m talking about functional requirements. So far, the closest thing to outlining the requirements for a super-intelligent computer is a paper by University of Oxford philosopher and futurist Nick Bostrom. The first thing Bostrom tries to do is to establish a benchmark by how to grade what he calls a super-intellect and qualifying his definition. According to him, this super-intellect would be smarter than any human mind in every capacity from the scientific to the creative. It’s a pretty lofty goal because designing something smarter than yourself requires that you build something you don’t fully understand. You might have a sudden stroke of luck and succeed, but it’s more than likely that you’ll build a defective product instead. Imagine building a DNA helix from scratch and with no detailed manual to go by. Even if you have all the tools and know where to find some bits of information to guide you, when you don’t know exactly what you’re doing, the task becomes very challenging and you end up making a lot of mistakes along the way. There’s also the question of how exactly we evaluate what the term smarter means. In Bostrom’s projections, when you have an intelligent machine become fully proficient in a certain area of expertise like say, medicine, it could combine with another machine which has an excellent understanding of physics and so on until all this consolidation leads to a device that knows all that we know and can use all that cross-disciplinary knowledge to gain insights we just don’t have yet. Technologically that should be possible, but the question is whether a machine like that would really be smarter than humans per se. It would be far more knowledgeable than any individual human, granted. But it’s not as if there aren’t experts in particular fields coming together to make all sorts of cross-disciplinary connections and discoveries. What Bostrom calls a super-intellect is actually just a massive knowledge base that can mine itself for information. The paper was last revised in 1998 when we didn’t have the enormous digital libraries we take for granted in today’s world. Those libraries seem a fair bit like Bostrom’s super-intellect in their function and if we were to combine them to mine their depths with sophisticated algorithms which look for cross-disciplinary potential, we’d bring his concept to life. But there’s not a whole lot of intelligence there. Just a lot of data, much of which would be subject to change or revision as research and discovery continue. Just like Bostrom says, it would be a very useful tool for scientists and researchers. However, it wouldn’t be thinking on its own and giving the humans advice, even if we put all this data on supercomputers which could live up to the paper’s ambitious hardware requirements. Rev it up to match the estimated capacity of our brain, it says, and watch a new kind of intellect start waking up and take shape with the proper software. 
According to Bostrom, the human brain operates at 100 teraflops, or 100 trillion floating point operations per second. Now, as he predicted, computers have reached this speed by 2004 and went far beyond that. In fact, we have supercomputers which are as much as ten times faster. Supposedly, at these operating speeds, we should be able to write software which allows supercomputers to learn by interacting with humans and sifting through our digitized knowledge. But the reality is that we’d be trying to teach an intimate object made of metal and plastic how to think and solve problems, something we’re already born with and hone over our lifetimes. You can teach someone how to ride a bike and how to balance, but how exactly would you teach someone to understand the purpose of riding a bike? How would you tell someone with no emotion, no desires, no wants and no needs why he should go anywhere? That deep layer of motivation and wiring has taken several billion years to appear and was honed over a 600 million additional years of evolution. When we start trying to make an AI system comparable to ours, we’re effectively way behind from the get-go. To truly create an intelligent computer which doesn’t just act as if it’s thinking or do mechanical actions which are easy to predict and program, we’d need to impart in all that information in trillions of lines of code and trick circuitry into deducing it needs to behave like a living being. And that’s a job that couldn’t be done in less than century, much less in the next 20 to 30 years as projected by Ray Kurzweil and his fans. [ eerie illustration by Neil Blevins ] From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. techie, Rantt staff writer and editor, computer lobotomist science, tech, and other oddities
Oliver Lindberg
1
7
https://medium.com/the-lindberg-interviews/interview-with-googles-alfred-spector-on-voice-search-hybrid-intelligence-and-more-2f6216aa480c?source=tag_archive---------1----------------
Interview with Google’s Alfred Spector on voice search, hybrid intelligence and more
Google’s a pretty good search engine, right? Well, you ain’t seen nothing yet. VP of research Alfred Spector talks to Oliver Lindberg about the technologies emerging from Google Labs — from voice search to hybrid intelligence and beyond This article originally appeared in issue 198 of .net magazine in 2010 and was republished at www.techradar.com. Google has always been tight-lipped about products that haven’t launched yet. It’s no secret, however, that thanks to the company’s bottom-up culture, its engineers are working on tons of new projects at the same time. Following the mantra of ‘release early, release often’, the speed at which the search engine giant is churning out tools is staggering. At the heart of it all is Alfred Spector, Google’s Vice President of Research and Special Initiatives. One of the areas Google is making significant advances in is voice search. Spector is astounded by how rapidly it’s come along. The Google Mobile App features ‘search by voice’ capabilities that are available for the iPhone, BlackBerry, Windows Mobile and Android. All versions understand English (including US, UK, Australian and Indian-English accents) but the latest addition, for Nokia S60 phones, even introduces Mandarin speech recognition, which — because of its many different accents and tonal characteristics — posed a huge engineering challenge. It’s the most spoken language in the world, but as it isn’t exactly keyboard-friendly, voice search could become immensely popular in China. “Voice is one of these grand technology challenges in computer science,” Spector explains. “Can a computer understand the human voice? It’s been worked on for many decades and what we’ve realised over the last couple of years is that search, particularly on handheld devices, is amenable to voice as an import mechanism. “It’s very valuable to be able to use voice. All of us know that no matter how good the keyboard, it’s tricky to type exactly the right thing into a searchbar, while holding your backpack and everything else.” To get a computer to take account of your voice is no mean feat, of course. “One idea is to take all of the voices that the system hears over time into one huge pan-human voice model. So, on the one hand we have a voice that’s higher and with an English accent, and on the other hand my voice, which is deeper and with an American accent. They both go into one model, or it just becomes personalised to the individual; voice scientists are a little unclear as to which is the best approach.” The research department is also making progress in machine translation. Google Translate already features 51 languages, including Swahili and Yiddish. The latest version introduces instant, real-time translation, phonetic input and text-to-speech support (in English). “We’re able to go from any language to any of the others, and there are 51 times 50, so 2,550 possibilities,” Spector explains. “We’re focusing on increasing the number of languages because we’d like to handle even those languages where there’s not an enormous volume of usage. It will make the web far more valuable to more people if they can access the English-or Chinese language web, for example. “But we also continue to focus on quality because almost always the translations are valuable but imperfect. Sometimes it comes from training our translation system over more raw data, so we have, say, EU documents in English and French and can compare them and learn rules for translation. The other approach is to bring more knowledge into translation. 
For example, we’re using more syntactic knowledge today and doing automated parsing with language. It’s been a grand challenge of the field since the late 1950s. Now it’s finally achieved mass usage.” The team, led by scientist Franz Josef Och, has been collecting data for more than 100 languages, and the Google Translator Toolkit, which makes use of the ‘wisdom of the crowds’, now even supports 345 languages, many of which are minority languages. The editor enables users to translate text, correct the automatic translation and publish it. Spector thinks that this approach is the future. As computers become even faster, handling more and more data — a lot of it in the cloud — machines learn from users and thus become smarter. He calls this concept ‘hybrid intelligence’. “It’s very difficult to solve these technological problems without human input,” he says. “It’s hard to create a robot that’s as clever, smart and knowledgeable of the world as we humans are. But it’s not as tough to build a computational system like Google, which extends what we do greatly and gradually learns something about the world from us, but that requires our interpretation to make it really successful. “We need to get computers and people communicating in both directions, so the computer learns from the human and makes the human more effective.” Examples of ‘hybrid intelligence’ are Google Suggest, which instantly offers popular searches as you type a search query, and the ‘did you mean?’ feature in Google search, which corrects you when you misspell a query in the search bar. The more you use it, the better the system gets. Training computers to become seemingly more intelligent poses major hurdles for Google’s engineers. “Computers don’t train as efficiently as people do,” Spector explains. “Let’s take the chess example. If a Kasparov was the educator, we could count on almost anything he says as being accurate. But if you tried to learn from a million chess players, you learn from my children as well, who play chess but they’re 10 and eight. They’ll be right sometimes and not right other times. There’s noise in that, and some of the noise is spam. One also has to have careful regard for privacy issues.” By collecting enormous amounts of data, Google hopes to create a powerful database that eventually will understand the relationship between words (for example, ‘a dog is an animal’ and ‘a dog has four legs’). The challenge is to try to establish these relationships automatically, using tons of information, instead of having experts teach the system. This database would then improve search results and language translations because it would have a better understanding of the meaning of the words. There’s also a lot of research around ‘conceptual search’. “Let’s take a video of a couple in front of the Empire State Building. We watch the video and it’s clear they’re on their honeymoon. But what is the video about? Is it about love or honeymoons, or is it about renting office space? It’s a fundamentally challenging problem.” One example of conceptual search is Google Image Swirl, which was added to Labs in November. Enter a keyword and you get a list of 12 images; clicking on each one brings up a cluster of related pictures. Click on any of them to expand the ‘wonder wheel’ further. Google notes that they’re not just the most relevant images; the algorithm determines the most relevant group of images with similar appearance and meaning. 
To improve the world’s data, Google continues to focus on the importance of the open internet. Another Labs project, Google Fusion Tables facilitates data management in the cloud. It enables users to create tables, filter and aggregate data, merge it with other data sources and visualise it with Google Maps or the Google Visualisation API. The data sets can then be published, shared or kept private and commented on by people around the world. “It’s an example of open collaboration,” Spector says. “If it’s public, we can crawl it to make it searchable and easily visible to people. We hired one of the best database researchers in the world, Alon Halevy, to lead it.” Google is aiming to make more information available more easily across multiple devices, whether it’s images, videos, speech or maps, no matter which language we’re using. Spector calls the impact “totally transparent processing — it revolutionises the role of computation in day-today life. The computer can break down all these barriers to communication and knowledge. No matter what device we’re using, we have access to things. We can do translations, there are books or government documents, and some day we hope to have medical records. Whatever you want, no matter where you are, you can find it.” Spector retired in early 2015 and now serves as the CTO of Two Sigma Investments This article originally appeared in issue 198 of .net magazine in 2010 and was republished at www.techradar.com. Photography by Andy Short From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Independent editor and content consultant. Founder and captain of @pixelpioneers. Co-founder and curator of www.GenerateConf.com. Former editor of @netmag. Interviews with leading tech entrepreneurs and web designers, conducted by @oliverlindberg at @netmag.
Greg Fish
1
4
https://worldofweirdthings.com/the-technical-trouble-with-humanoid-robots-2c712649f3c5?source=tag_archive---------5----------------
the technical trouble with humanoid robots – [ weird things ]
If you’ve been reading this blog long enough, you may recall that I’m not a big fan of humanoid robots. There’s no need to invoke the uncanny valley effect, even though some attempts to build humanoid robots managed to produce rather creepy entities which try to look as human as possible to goad future users into some kind of social bond with them, presumably to gain their trust and get into a perfect position to kill the inferior things made of flesh. No, the reason why I’m not sure that humanoid robots will be invaluable to us in the future is a very pragmatic one. Simply put, emulating bipedalism is a huge computational overhead as well as a major, and unavoidable engineering and maintenance headache. And with the limits on size and weight of would be robot butlers, as well as the patience of its users, humanoid bot designers may be aiming a bit too high... We walk, run, and perform complicated tasks with our hands and feet so easily, we only notice the amount of effort and coordination this takes after an injury that limits our mobility. The reason why we can do that lies in a small, squishy mass of neurons coordinating a firestorm of constant activity. Unlike old-standing urban myths imply, we actually use all of our brainpower, and we need it to help coordinate and execute the same motions that robots struggle to repeat. Of course our brains are cheating when compared to a computer because with tens of billions of neurons and trillions of synapses, our brains are like screaming fast supercomputers. They can calculate what it will take to catch a ball in mid-air in less than a few hundred milliseconds and make the most minute adjustments to our muscles in order to keep us balanced and upright just as quickly. Likewise, our bodies can heal the constant wear and tear on our joints, wear and tear we will accumulate from walking, running, and bumping into things. Bipedal robots navigating our world wouldn’t have these assets. Humanoid machines would need to be constantly maintained just to keep up with us in a mechanical sense, and carry the equivalent of Red Storm in their heads, or at least be linked to something like it, to even hope to coordinate themselves as quickly as we do cognitively and physically. Academically, this is a lofty goal which could yield new algorithms and robotic designs. Practically? Not so much. While last month’s feature in Pop Sci bemoaned the lack of interest in humanoid robots in the U.S., it also failed to demonstrate why such an incredibly complicated machine would be needed for basic household chores that could be done by robotic systems functioning independently, and without the need to move on two legs. Instead, we got the standard Baby Boomers’ caretaker argument which goes somewhat like this... Or, alternatively, a computer could book your appointments via e-mail, or a system that lets patients make an appointment with their doctors on the web, a smart dispenser that gives you the right amount of pills, checks for potential interactions based on public medical databases, and beeps to remind you to take your medicine, and a programmable walker with actuators and a few buttons could do these jobs while costing far less than the tens of millions a humanoid robot would cost by 2025, and requiring much less coordination or learning than a programmable humanoid. 
Why wouldn’t we want to pursue immediate fixes to what’s being described as a looming caretaker shortage choosing instead to invest billions of dollars into E-Jeeves, which may take an entire decade or two just to learn how to go about daily human life, ready to tackle the problem only after it was no longer an issue, even if we started right now? If anything, harping on the need for a robotic hand for Baby Boomers’ future medical woes would only prompt more R&D cash into immediate solutions and rules- based intelligent agents we already employ rather than long-term academic research. There’s a huge gap between human abilities and machinery because we have the benefit of having evolved over hundreds of millions of years of trial and error. Machines, even though they’re advancing at an ever faster pace, only had a few decades by comparison. It will take decades more to build self-repairing machines and computer chips that can boast the same performance as a supercomputer while being small enough to fit in human-sized robots’ heads before robotic butlers become practical and feasible. And even then, we might go with distinctly robotic versions because they’d be cheaper to maintain and operate. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. techie, Rantt staff writer and editor, computer lobotomist science, tech, and other oddities
Frank Diana
50
10
https://medium.com/@frankdiana/the-evolving-role-of-business-analytics-76818e686e39?source=tag_archive---------2----------------
The Evolving Role of Business Analytics – Frank Diana – Medium
An older post that seems to be getting a lot of attention. Appreciation for analytics rising? Business Analytics refers to the skills, technologies, applications and practices for the continuous exploration of data to gain insight that drive business decisions. Business Analytics is multi-faceted. It combines multiple forms of analytics and applies the right method to deliver expected results. It focuses on developing new insights using techniques including, data mining, predictive analytics, natural language processing, artificial intelligence, statistical analysis and quantitative analysis. In addition, domain knowledge is a key component of the business analytics portfolio. Business Analytics can then be viewed as the combination of domain knowledge and all forms of analytics in a way that creates analytic applications focused on enabling specific business outcomes. Analytic applications have a set of business outcomes that they must enable. For fraud, its reducing loss, for quality & safety, it might be avoiding expensive recalls. Understanding how to enable these outcomes is the first step in determining the make-up of each specific application. For example, in the case of insurance fraud, it’s not enough to use statistical analysis to predict fraud. You need a strong focus on text, domain expertise, and the ability to visually portray organized crime rings. Insight gained through this analysis may be used as input for human decisions or may drive fully automated decisions. Database capacity, processor speeds and software enhancements will continue to drive even more sophisticated analytic applications. The key components of business analytics are: There is a massive explosion of data occurring on a number of levels. The notion of data overload was echoed in a previous 2010 IBM CEO study titled “Capitalizing on Complexity”. In this study, a large number of CEOs described their organizations as data rich, but insight poor. Many voiced frustration over their inability to transform available data into feasible action plans. This notion of turning data into insight, and insight to action is a common and growing theme. According to Pricewaterhouse-Coopers, there are approximately 75 to 100 million blogs and 10–20 million Internet discussion boards and forums in the English language alone. As the Forrester diagram describes, more consumers are moving up the ladder and becoming creators of content. In addition, estimates show the volume of unstructured data (email, audio, video, Web pages, etc.) doubles every three months. Effectively managing and harnessing this vast amount of information presents both a great challenge and a great opportunity. Data is flowing through medical devices, scientific devices, sensors, monitors, detectors, other supply chain devices, instrumented cars and roads, instrumented domestic appliances, etc. Everything will be instrumented and from this instrumentation comes data. This data will be analyzed to find insight that drives smarter decisions. The utility sector provides a great example of the growing need for analytics. The smart grid and the gradual installation of intelligent endpoints, smart meters and other devices will generate volumes of data. Smart grid utilities are evolving into brokers of information. The data tsunami that will wash over utilities in the coming years is a formidable IT challenge, but it is also a huge opportunity to move beyond simple meter-to-cash functions and into real-time optimization of their operations. 
This type of instrumentation is playing out in many industries. As this occurs, industry players will be challenged to leverage the data generated by these devices. Inside the enterprise, consider the increasing volumes of emails, Word documents, PDFs, Excel worksheets and free form text fields that contain everything from budgets and forecasts to customer proposals, contracts, call center notes and expense reports. Outside the enterprise, the growth of web-based content, which is primarily unstructured, continues to accelerate –everything from social media, comments in blogs, forums and social networks, to survey verbatim and wiki pages. Most industry analysts estimate more than 80% of the intelligence required to make smarter decisions is contained in unstructured data or text. The survey results in a recent MIT Sloan report support both an aggressive adoption of analytics and a shift in the analytic footprint. According to the report, many traditional forms of analytics will be surpassed in the next 24 months. The authors produced a very effective visual that shows this shift from today’s analytic footprint to the future footprint. Although listed as number one, the authors describe visualization as dashboards and scorecards — the traditional methods of visualization. New and emerging methods help accelerate time-to-insight. These new approaches help us absorb insight from large volumes of data in rapid fashion. The analytics identified as creating the most value in 24 months are: Companies and organizations continue to invest millions of dollars capturing, storing and maintaining all types of business data to drive sales and revenue, optimize operations, manage risk and ensure compliance. Most of this investment has been in technologies and applications that manage structured data — coded information residing in relational data base management systems in the form of rows and columns. Current methods such as traditional business intelligence (BI) are more about querying and reporting and focus on answering questions such as what happened, how many, how often, and what actions are needed. New forms of advanced analytics are required to address the business imperatives described earlier. Business Analytics focuses on answering questions such as why is this happening, what if these trends continue, what will happen next (predict), what is the best that can happen (optimize). There is a growing view that prescribing outcomes is the ultimate role of analytics; that is, identifying those actions that deliver the right business outcomes. Organizations should first define the insights needed to meet business objectives, and then identify data that provides that insight. Too often, companies start with data. The previously mentioned IBM study also revealed that analytics-driven organizations had 33 percent more revenue growth with 32 percent more return on capital invested. Organizations expect value from emerging analytic techniques to soar. The growth of innovative analytic applications will serve as a means to help individuals across the organization consume and act upon insights derived through complex analysis. Some examples of innovative use: A recent MIT Sloan report effectively uses the maturity model concept to describe how organizations typically evolve to analytic excellence. The authors point out that organizations begin with efficiency goals and then address growth objectives after experience is gained. 
The authors believe this is a common practice, but not necessarily a best practice. They see the traditional analytic adoption path starting in data-intensive areas like financial management, operations, and sales and marketing. As companies move up the maturity curve, they branch out into new functions, such as strategy, product research, customer service, and customer experience. In the opinion of the authors, these patterns suggest that success in one area stimulates adoption in others. They suggest that this allows organizations to increase their level of sophistication. The authors of the MIT Sloan special report through their analysis of survey results have created three levels of analytic capabilities: The report provides a very nice matrix that describes these levels in the context of a maturity model. In reviewing business challenges outlined in the matrix, there is one very interesting dynamic: the transition from cost and efficiencies to revenue growth, customer retention and customer acquisition. The authors of the report found that as the value of analytics grows, organizations are likely to seek a wider range of capabilities — and more advanced use of existing ones. The survey indicated that this dynamic is leading some organizations to create a centralized analytics unit that makes it possible to share analytic resources efficiently and effectively. These centralized enterprise units are the primary source of analytics, providing a home for more advanced skills within the organization. This same dynamic will lead to the appointment of Chief Analytics Officers (CAO) starting in 2011. The availability of strong business-focused analytical talent will be the greatest constraint on a company’s analytical capabilities. The Outsourcing of analytics will become an attractive alternative as the need for specialized skills will lead organizations to look for outside help. Outsourcing analytics allows a company to focus on taking action based on insights delivered by the outsourcer. The outsourcer can leverage these specialized resources across multiple clients. As the importance of analytics grows, organizations will have an option to outsource. Expect to see more of this in 2011. We will see more organizations establish enterprise data management functions to coordinate data across business units. We will also see smarter approaches such as information lifecycle management as opposed to the common approach of throwing more hardware at the growing data problem. The information management challenge will grow as millions of next-generation tech-savvy users use feeds and mash-ups to bring data together into usable parts so they can answer their own questions. This gives rise to new challenges, including data security and governance. Originally published at frankdiana.net on March 20, 2011. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. TCS Executive focused on the rapid evolution of society and business. Fascinated by the view of the world in the next decade and beyond https://frankdiana.net/
Paul Christiano
43
31
https://ai-alignment.com/a-formalization-of-indirect-normativity-7e44db640160?source=tag_archive---------0----------------
Formalizing indirect normativity – AI Alignment
This post outlines a formalization of what Nick Bostrom calls “indirect normativity.” I don’t think it’s an adequate solution to the AI control problem; but to my knowledge it was the first precise specification of a goal that meets the “not terrible” bar, i.e. which does not lead to terrible consequences if pursued without any caveats or restrictions. The proposal outlined here was sketched in early 2012 while I was visiting FHI, and was my first serious foray into AI control. When faced with the challenge of writing down precise moral principles, adhering to the standards demanded in mathematics, moral philosophers encounter two serious difficulties: In light of these difficulties, a moral philosopher might simply declare: “It is not my place to aspire to mathematical standards of precision. Ethics as a project inherently requires shared language, understanding, and experience; it becomes impossible or meaningless without them.” This may be a defensible philosophical position, but unfortunately the issue is not entirely philosophical. In the interest of building institutions or machines which reliably pursue what we value, we may one day be forced to describe precisely “what we value” in a way that does not depend on charitable or “common sense” interpretation (in the same way that we today must describe “what we want done” precisely to computers, often with considerable effort). If some aspects of our values cannot be described formally, then it may be more difficult to use institutions or machines to reliably satisfy them. This is not to say that describing our values formally is necessary to satisfying them, merely that it might make it easier. Since we are focusing on finding any precise and satisfactory moral theory, rather than resolving disputes in moral philosophy, we will adopt a consequentialist approach without justification and focus on axiology. Moreover, we will begin from the standpoint of expected utility maximization, and leave aside questions about how or over what space the maximization is performed. We aim to mathematically define a utility function U such that we would be willing to build a hypothetical machine which exceptionlessly maximized U, possibly at the catastrophic expense of any other values. We will assume that the machine has an ability to reason which at least rivals that of humans, and is willing to tolerate arbitrarily complex definitions of U (within its ability to reason about them). We adopt an indirect approach. Rather than specifying what exactly we want, we specify a process for determining what we want. This process is extremely complex, so that any computationally limited agent will always be uncertain about the process’ output. However, by reasoning about the process it is possible to make judgments about which action has the highest expected utility in light of this uncertainty. For example, I might adopt the principle: “a state of affairs is valuable to the extent that I would judge it valuable after a century of reflection.” In general I will be uncertain about what I would say after a century, but I can act on the basis of my best guesses: after a century I will probably prefer worlds with more happiness, and so today I should prefer worlds with more happiness. After a century I have only a small probability of valuing trees’ feelings, and so today I should go out of my way to avoid hurting them if it is either instrumentally useful or extremely easy. 
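To make the flavor of this reasoning concrete, here is a toy sketch, written in Python, of scoring options by averaging over one's uncertainty about what one would value after a century of reflection. All of the credences, utilities, and candidate states of affairs below are illustrative assumptions, not part of the original argument.

```python
# A toy sketch of acting under uncertainty about one's own future judgment.
# All credences, utilities, and states of affairs below are illustrative assumptions.

# Possible "judgments after a century of reflection", with current credences.
judgments = [
    # (credence, utility function over states of affairs)
    (0.95, lambda state: state["happiness"]),                          # probably I still just value happiness
    (0.05, lambda state: state["happiness"] + state["tree_welfare"]),  # small chance I also value trees' feelings
]

def expected_utility(state):
    """Average the utility of a state over my uncertainty about my future values."""
    return sum(credence * u(state) for credence, u in judgments)

# Two made-up candidate states of affairs.
spare_the_trees = {"happiness": 0.80, "tree_welfare": 0.90}
pave_the_forest = {"happiness": 0.82, "tree_welfare": 0.10}

for name, state in [("spare_the_trees", spare_the_trees), ("pave_the_forest", pave_the_forest)]:
    print(name, round(expected_utility(state), 3))
```

With these made-up numbers, sparing the trees narrowly wins precisely because doing so is cheap, which mirrors the heuristic above.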
As I spend more time thinking, my beliefs about what I would say after a century may change, and I will start to pursue different states of affairs even though the formal definition of my values is static. Similarly, I might desire to think about the value of trees’ feelings, if I expect that my opinions are unstable: if I spend a month thinking about trees, my current views will then be a much better predictor of my views after a hundred years, and if I know better whether or not trees’ feelings are valuable, I can make better decisions. This example is quite informal, but it communicates the main idea of the approach. We stress that the value of our contribution, if any, is in the possibility of a precise formulation. (Our proposal itself will be relatively informal; instead it is a description of how you would arrive at a precise formulation.) The use of indirection seems to be necessary to achieve the desired level of precision. Our proposal contains only two explicit steps: Each of these steps requires substantial elaboration, but we must also specify what we expect the human to do with these tools. This proposal is best understood in the context of other fantastic-seeming proposals, such as “my utility is whatever I would write down if I reflected for a thousand years without interruption or biological decay.” The counterfactual events which take place within the definition are far beyond the realm our intuition recognizes as “realistic,” and have no place except in thought experiments. But to the extent that we can reason about these counterfactuals and change our behavior on the basis of that reasoning (if so motivated), we can already see how such fantastic situations could affect our more prosaic reality. The remainder of this document consists of brief elaboration of some of these steps, and a few arguments about why this is a desirable process. The first step of our proposal is a high-fidelity mathematical model of human cognition. We will set aside philosophical troubles, and assume that the human brain is a purely physical system which may be characterized mathematically. Even granting this, it is not clear how we can realistically obtain such a characterization. The most obvious approach to characterizing a brain is to combine measurements of its behavior or architecture with an understanding of biology, chemistry, and physics. This project represents a massive engineering effort which is currently just beginning. Most pessimistically, our proposal could be postponed until this project’s completion. This could still be long before the mathematical characterization of the brain becomes useful for running experiments or automating human activities: because we are interested only in a definition, we do not care about having the computational resources necessary to simulate the brain. An impractical mathematical definition, however, may be much easier to obtain. We can define a model of a brain in terms of exhaustive searches which could never be practically carried out. For example, given some observations of a neuron, we can formally define a brute force search for a model of that neuron. Similarly, given models of individual neurons we may be able to specify a brute force search over all ways of connecting those neurons which account for our observations of the brain (say, some data acquired through functional neuroimaging). 
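The brute-force search being gestured at can be written down compactly even though it could never actually be run at realistic scale. The sketch below is purely illustrative: run_model is a stand-in interpreter for candidate models (an assumption introduced for this example), and the bit budget is kept tiny so the code executes.

```python
from itertools import product

def run_model(model_bits, stimulus):
    """Stand-in interpreter for a candidate model (an assumption for this sketch):
    treat the bit-string as a lookup table indexed by the stimulus."""
    return model_bits[stimulus % len(model_bits)]

def consistent_models(observations, max_bits=12):
    """Enumerate every bit-string of up to max_bits bits as a candidate model and
    keep those that reproduce all recorded (stimulus, response) pairs. This is
    purely definitional: for a real neuron or brain, the space of candidates
    would be far too large to ever enumerate in practice."""
    survivors = []
    for n_bits in range(1, max_bits + 1):
        for bits in product((0, 1), repeat=n_bits):
            if all(run_model(bits, s) == r for s, r in observations):
                survivors.append(bits)
    return survivors

# Toy data: recorded responses of a "neuron" to four stimuli (illustrative only).
observations = [(0, 1), (1, 0), (2, 1), (3, 1)]
print(len(consistent_models(observations)), "candidate models remain consistent")
```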
It may be possible to carry out this definition without exploiting any structural knowledge about the brain, beyond what is necessary to measure it effectively. By collecting imaging data for a human exposed to a wide variety of stimuli, we can recover a large corpus of data which must be explained by any model of a human brain. Moreover, by using our explicit knowledge of human cognition we can algorithmically generate an extensive range of tests which identify a successful simulation, by probing responses to questions or performance on games or puzzles. In fact, this project may be possible using existing resources. The complexity of the human brain is not as unapproachable as it may at first appear: though it may contain 10^14 synapses, each described by many parameters, it can be specified much more compactly. A newborn’s brain can be specified by about 10^9 bits of genetic information, together with a recipe for a physical simulation of development. The human brain appears to form new long-term memories at a rate of 1–2 bits per second, suggesting that it may be possible to specify an adult brain using 10^9 additional bits of experiential information. This suggests that it may require only about 10^10 bits of information to specify a human brain, which is at the limits of what can be reasonably collected by existing technology for functional neuroimaging. This discussion has glossed over at least one question: what do we mean by ‘brain emulation’? Human cognition does not reside in a physical system with sharp boundaries, and it is not clear how you would define or use a simulation of the “input-output” behavior of such an object. We will focus on some system which does have precisely defined input-output behavior, and which captures the important aspects of human cognition. Consider a system containing a human, a keyboard, a monitor, and some auxiliary instruments, well-insulated from the environment except for some wires carrying inputs to the monitor and outputs from the keyboard and auxiliary instruments (and wires carrying power). The inputs to this system are simply screens to be displayed on the monitor (say delivered as a sequence to be displayed one after another at 30 frames per second), while the outputs are the information conveyed from the keyboard and the other measuring apparatuses (also delivered as a sequence of data dumps, each recording activity from the last 30th of a second). This “human in a box” system can be easily formally defined if a precise description of a human brain and coarse descriptions of the human body and the environment are available. Alternatively, the input-output behavior of the human in a box can be directly observed, and a computational model constructed for the entire system. Let H be a mathematical definition of the resulting (randomized) function from input sequences (In(1), In(2), ..., In(K)) to the next output Out(K). H is, by design, a good approximation to what the human “would output” if presented with any particular input sequence. Using H, we can mathematically define what “would happen” if the human interacted with a wide variety of systems. For example, if we deliver Out(K) as the input to an abstract computer running some arbitrary software, and then define In(K+1) as what the screen would next display, we can mathematically define the distribution over transcripts which would have arisen if the human had interacted with the abstract computer. This computer could be running an interactive shell, a video game, or a messaging client. 
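The interaction that defines such a transcript can be spelled out as a simple loop. In the sketch below, both H and the software are stand-in stubs (assumptions introduced for illustration); the point is only the alternation of inputs In(K) and outputs Out(K) described above.

```python
def interaction_transcript(H, software_step, steps):
    """Define the transcript that 'would have' arisen if the human-in-a-box H
    interacted with some abstract software: H maps the whole input history
    (In(1), ..., In(K)) to the next output Out(K); software_step maps Out(K)
    to the next screen In(K+1)."""
    inputs, outputs = [], []
    screen = "initial screen"                 # In(1)
    for _ in range(steps):
        inputs.append(screen)
        out = H(tuple(inputs))                # Out(K) = H(In(1), ..., In(K))
        outputs.append(out)
        screen = software_step(out)           # In(K+1): what the screen displays next
    return list(zip(inputs, outputs))

# Stand-in stubs (assumptions for illustration): a real H would be the
# brain-emulation-based model described above.
toy_H = lambda history: "keystrokes after seeing %d frames" % len(history)
toy_software = lambda out: "screen rendered in response to: %r" % (out,)

for frame in interaction_transcript(toy_H, toy_software, steps=3):
    print(frame)
```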
Note that H reflects the behavior of a particular human, in a particular mental state. This state is determined by the process used to design H, or the data used to learn it. In general, we can control H by choosing an appropriate human and providing appropriate instructions / training. More emulations could be produced by similar measures if necessary. Using only a single human may seem problematic, but we will not rely on this lone individual to make all relevant ethical judgments. Instead, we will try to select a human with the motivational stability to carry out the subsequent steps faithfully, which will define U using the judgment of a community consisting of many humans. This discussion has been brief and has necessarily glossed over several important difficulties. One difficulty is the danger of using computationally unbounded brute force search, given the possibility of short programs which exhibit goal-oriented behavior. Another difficulty is that, unless the emulation project is extremely conservative, the models it produces are not likely to be fully-functional humans. Their thoughts may be blurred in various ways, they may be missing many memories or skills, and they may lack important functionalities such as long-term memory formation or emotional expression. The scope of these issues depends on the availability of data from which to learn the relevant aspects of human cognition. Realistic proposals along these lines will need to accommodate these shortcomings, relying on distorted emulations as a tool to construct increasingly accurate models. For any idealized “software”, with a distinguished instruction return, we can use H to mathematically define the distribution over return values which would result, if the human were to interact with that software. We will informally define a particular program T which provides a rich environment, in which the remainder of our proposal can be implemented. From a technical perspective this will be the last step of our proposal. The remaining steps will be reflected only in the intentions and behavior of the human being simulated in H. Fix a convenient and adequately expressive language (say a dialect of Python designed to run on an abstract machine). T implements a standard interface for an interactive shell in this language: the user can look through all of the past instructions that have been executed and their return values (rendered as strings) or execute a new instruction. We also provide symbols representing H and T themselves (as functions from sequences of K inputs to a value for the Kth output). We also provide some useful information (such as a snapshot of the Internet, and some information about the process used to create H and T), which we encode as a bit string and store in a single environment variable data. We assume that our language of choice has a return instruction, and we have T return whenever the user executes this instruction. Some care needs to be taken to define the behavior if T enters an infinite loop–we want to minimize the probability that the human accidentally hangs the terminal, with catastrophic consequences, but we cannot provide a complete safety-net without running into unresolvable issues with self-reference. We define U to be the value returned by H interacting with T. If H represented an unfortunate mental state, then this interaction could be short and unproductive: the simulated human could just decide to type ‘return 0’ and be done with it. 
However, by choosing an appropriate human to simulate and inculcating an appropriate mental state, we can direct the process further. We intend for H to use the resources in T to initiate a larger deliberative process. For example, the first step of this process may be to instantiate many copies of H, interacting with variants of messaging clients which are in contact with each other. The return value from the original process could then be defined as the value returned by a designated ‘leader’ from this community, or as a majority vote amongst the copies of H, or so on. Another step might be to create appropriate realistic virtual environments for simulated brains, rather than confining them to boxes. For motivational stability, it may be helpful to design various coordination mechanisms, involving frameworks for interaction, “cached” mental states which are frequently re-instantiated, or sanity checks whereby one copy of H monitors the behavior of another. The resulting communities of simulated brains then engage in a protracted planning process, ensuring that subsequent steps can be carried out safely or developing alternative approaches. The main priority of this community is to reduce the probability of errors as far as possible (exactly what constitutes an ‘error’ will be discussed at more length later). At the end of this process, we obtain a formal definition of a new protocol H+, which submits its inputs for consideration to a large community and then produces its outputs using some deliberation mechanism (democratic vote, one leader using the rest of the community as advisors, etc.) The next step requires our community of simulated brains to construct a detailed simulation of Earth which they can observe and manipulate. Once they have such a simulation, they have access to all of the data which would have been available on Earth. In particular, they can now explore many possible futures and construct simulations for each living human. In order to locate Earth, we will again leverage an exhaustive search. First, H+ decides on informal desiderata for an “Earth simulation.” These are likely to be as follows: Once H+ has decided on the desiderata, it uses a brute force search to find a simulation satisfying them: for each possible program it instantiates a new copy of H+ tasked with evaluating whether that program is an acceptable simulation. We then define E to be a uniform distribution over programs which pass this evaluation. We might have doubts about whether this process produces the “real” Earth–perhaps even once we have verified that it is identical according to a laundry list of measures, it may still be different in other important ways. There are two reasons why we might care about such differences. First, if the simulated Earth has a substantially different set of people than the real Earth, then a different set of people will be involved in the subsequent decision making. If we care particularly about the opinions of the people who actually exist (which the reader might well, being amongst such people!) then this may be unsatisfactory. Second, if events transpire significantly differently on the simulated Earth than the real Earth, value judgments designed to guide behavior appropriately in the simulated Earth may lead to less appropriate behaviors in the real Earth. (This will not be a problem if our ultimate definition of U consists of universalizable ethical principles, but we will see that U might take other forms.) 
These concerns are addressed by a few broad arguments. First, checking a detailed but arbitrary ‘laundry list’ actually provides a very strong guarantee. For example, if this laundry list includes verifying a snapshot of the Internet, then every event or person documented on the Internet must exist unchanged, and every keystroke of every person composing a document on the Internet must not be disturbed. If the world is well interconnected, then it may be very difficult to modify parts of the world without having substantial effects elsewhere, and so if a long enough arbitrary list of properties is fixed, we expect nearly all of the world to be the same as well. Second, if the essential character of the world is fixed but details are varied, we should expect the sort of moral judgments reached by consensus to be relatively constant. Finally, if the system whose behavior depends on these moral judgments is identical between the real and simulated worlds, then outputting a U which causes that system to behave a certain way in the simulated world will also cause that system to behave that way in the real world. Once H+ has defined a simulation of the world which permits inspection and intervention, by careful trial and error H+ can inspect a variety of possible futures. In particular, they can find interventions which cause the simulated human society to conduct a real brain emulation project and produce high-fidelity brain scans for all living humans. Once these scans have been obtained, H+ can use them to define U as the output of a new community, H++, which draws on the expertise of all living humans operating under ideal conditions. There are two important degrees of flexibility: how to arrange the community for efficient communication and deliberation, and how to delegate the authority to define U. In terms of organization, the distinction between different approaches is probably not very important. For example, it would probably be perfectly satisfactory to start from a community of humans interacting with each other over something like the existing Internet (but on abstract, secure infrastructure). More important are the safety measures which would be in place, and the mechanism for resolving differences of value between different simulated humans. The basic approach to resolving disputes is to allow each human to independently create a utility function U, each bounded in the interval [0, 1], and then to return their average. This average can either be unweighted, or can be weighted by a measure of each individual’s influence in the real world, in accordance with a game-theoretic notion like the Shapley value applied to abstract games or simulations of the original world. More sophisticated mechanisms are also possible, and may be desirable. Of course these questions can and should be addressed in part by H+ during its deliberation in the previous step. After all, H+ has access to an unlimited length of time to deliberate and has infinitely powerful computational aids. The role of our reasoning at this stage is simply to suggest that we can reasonably expect H+ to discover effective solutions. As in the earlier discussion of discovering a brain simulation by brute force, we have skipped over some critical issues in this section. In general, brute force searches (particularly over programs which we would like to run) are quite dangerous, because such searches will discover many programs with destructive goal-oriented behaviors. 
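As a small illustration of the dispute-resolution mechanism described above (clip each person's utility function to [0, 1], then take a possibly influence-weighted average), here is a sketch in Python. The weights are made up and merely stand in for something like a Shapley-style influence measure.

```python
def aggregate_utility(individual_utilities, weights=None):
    """Combine per-person utility functions into a single U: clip each to [0, 1],
    then take a (possibly influence-weighted) average. The result is again
    bounded in [0, 1]."""
    n = len(individual_utilities)
    if weights is None:
        weights = [1.0 / n] * n                    # unweighted average
    total = float(sum(weights))
    weights = [w / total for w in weights]

    def U(state):
        clipped = [min(1.0, max(0.0, u(state))) for u in individual_utilities]
        return sum(w * v for w, v in zip(weights, clipped))

    return U

# Illustrative only: three people with different valuations, weighted by a
# made-up influence measure.
people = [
    lambda s: s["welfare"],
    lambda s: s["fairness"],
    lambda s: 1.5 * s["welfare"],   # gets clipped if it exceeds 1
]
U = aggregate_utility(people, weights=[0.5, 0.3, 0.2])
print(round(U({"welfare": 0.8, "fairness": 0.4}), 3))   # 0.72
```

Because each component is clipped to [0, 1] and the weights are normalized, the aggregate U is again bounded in [0, 1], which is what the error-bound argument later in the post relies on.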
To deal with these dangers of brute-force search, in both cases, we must rely on patience and powerful safety measures. Once we have a formal description of a community of interacting humans, given as much time as necessary to deliberate and equipped with infinitely powerful computational aids, it becomes increasingly difficult to make coherent predictions about their behavior. Critically, though, we can also become increasingly confident that the outcome of their behavior will reflect their intentions. We sketch some possibilities, to illustrate the degree of flexibility available. Perhaps the most natural possibility is for this community to solve some outstanding philosophical problems and to produce a utility function which directly captures their preferences. However, even if they quickly discovered a formulation which appeared to be attractive, they would still be wise to spend a great length of time and to leverage some of these other techniques to ensure that their proposed solution was really satisfactory. Another natural possibility is to eschew a comprehensive theory of ethics, and define value in terms of the community’s judgment. We can define a utility function in terms of the hypothetical judgments of astronomical numbers of simulated humans, collaboratively evaluating the goodness of a state of affairs by examining its history at the atomic level, understanding the relevant higher-order structure, and applying human intuitions. It seems quite likely that the community will gradually engage in self-modifications, enlarging their cognitive capacity along various dimensions as they come to understand the relevant aspects of cognition and judge such modifications to preserve their essential character. Either independently or as an outgrowth of this process, they may (gradually or abruptly) pass control to machine intelligences which they are suitably confident express their values. This process could be used to acquire the power necessary to define a utility function in one of the above frameworks, or understanding value-preserving self-modification or machine intelligence may itself prove an important ingredient in formalizing what it is we value. Any of these operations would be performed only after considerable analysis, when the original simulated humans were extremely confident in the desirability of the results. Whatever path they take and whatever coordination mechanisms they use, eventually they will output a utility function U’. We then define U = 0 if U’ < 0, U = 1 if U’ > 1, and U = U’ otherwise. At this point we have offered a proposal for formally defining a function U. We have made some general observations about what this definition entails. But now we may wonder to what extent U reflects our values, or more relevantly, to what extent our values are served by the creation of U-maximizers. Concerns may be divided into a few natural categories: We respond to each of these objections in turn. If the process works as intended, we will reach a stage in which a large community of humans reflects on their values, undergoes a process of discovery and potentially self-modification, and then outputs its result. We may be concerned that this dynamic does not adequately capture what we value. 
For example, we may believe that some other extrapolation dynamic captures our values, or that it is morally desirable to act on the basis of our current beliefs without further reflection, or that the presence of realistic disruptions, such as the threat of catastrophe, has an important role in shaping our moral deliberation. The important observation, in the defense of our proposal, is that whatever objections we could think of today, we could think of within the simulation. If, upon reflection, we decide that too much reflection is undesirable, we can simply change our plans appropriately. If we decide that realistic interference is important for moral deliberation, we can construct a simulation in which such interference occurs, or determine our moral principles by observing moral judgments in our own world’s possible futures. There is some chance that this proposal is inadequate for some reason which won’t be apparent upon reflection, but then by definition this is a fact which we cannot possibly hope to learn by deliberating now. It therefore seems quite difficult to maintain objections to the proposal along these lines. One aspect of the proposal does get “locked in,” however, after being considered by only one human rather than by a large civilization: the distribution of authority amongst different humans, and the nature of mechanisms for resolving differing value judgments. Here we have two possible defenses. One is that the mechanism for resolving such disagreements can be reflected on at length by the individual simulated in H. This individual can spend generations of subjective time, and greatly expand her own cognitive capacities, while attempting to determine the appropriate way to resolve such disagreements. However, this defense is not completely satisfactory: we may be able to rely on this individual to produce a very technically sound and generally efficient proposal, but the proposal itself is quite value-laden and relying on one individual to make such a judgment is in some sense begging the question. A second, more compelling, defense is that the structure of our world has already provided a mechanism for resolving value disagreements. By assigning decision-making weight in a way that depends on current influence (for example, as determined by the simulated ability of various coalitions to achieve various goals), we can generate a class of proposals which are at a minimum no worse than the status quo. Of course, these considerations will also be shaped by the conditions surrounding the creation or maintenance of systems which will be guided by U–for example, if a nation were to create a U-maximizer, they might first adopt an internal policy for assigning influence on U. By performing this decision making in an idealized environment, we can also reduce the likelihood of destructive conflict and increase the opportunities for mutually beneficial bargaining. We may have moral objections to codifying this sort of “might makes right” policy, favoring a more democratic proposal or something else entirely, but as a matter of empirical fact a more ‘cosmopolitan’ proposal will be adopted only if it is supported by those with the appropriate forms of influence, a situation which is unchanged by precisely codifying the existing power structure. Finally, the values of the simulations in this process may diverge from the values of the original human models, for one reason or another. 
For example, the simulated humans may predictably disagree with the original models about ethical questions by virtue of (probably) having no physical instantiation. That is, the output of this process is defined in terms of what a particular human would do, in a situation which that human knows will never come to pass. If I ask “What would I do, if I were to wake up in a featureless room and were told that the future of humanity depended on my actions?” the answer might begin with “become distressed that I am clearly inhabiting a hypothetical situation, and adjust my ethical views to take into account the fact that people in hypothetical situations apparently have relevant first-person experience.” Setting aside the question of whether such adjustments are justified, they at least raise the possibility that our values may diverge from those of the simulations in this process. These changes might be minimized, by understanding their nature in advance and treating them on a case-by-case basis (if we can become convinced that our understanding is exhaustive). For example, we could try and use humans who robustly employ updateless decision theories which never undergo such predictable changes, or we could attempt to engineer a situation in which all of the humans being emulated do have physical instantiations, and naive self-interest for those emulations aligns roughly with the desired behavior (for example, by allowing the early emulations to “write themselves into” our world). We can imagine many ways in which this process can fail to work as intended–the original brain emulations may fail to accurately model human behavior, the original subject may deviate from the intended plans, or simulated humans can make an error when interacting with their virtual environment which causes the process to get hijacked by some unintended dynamic. We can argue that the proposal is likely to succeed, and can bolster the argument in various ways (by reducing the number of assumptions necessary for success, building in fault-tolerance, justifying each assumption more rigorously, and so on). However, we are unlikely to eliminate the possibility of error. Therefore we need to argue that if the process fails with some small probability, the resulting values will only be slightly disturbed. This is the reason for requiring U to lie in the interval [0, 1]–we will see that this restriction bounds the damage which may be done by an unlikely failure. If the process fails with some small probability ε, then we can represent the resulting utility function as U = (1 - ε) U1 + ε U2, where U1 is the intended utility function and U2 is a utility function produced by some arbitrary error process. Now consider two possible states of affairs A and B such that U1(A) > U1(B) + ε / (1 - ε) ≈ U1(B) + ε. Then since 0 ≤ U2 ≤ 1, we have: U(A) = (1 - ε) U1(A) + ε U2(A) ≥ (1 - ε) U1(A) > (1 - ε) U1(B) + ε ≥ (1 - ε) U1(B) + ε U2(B) = U(B). Thus if A is substantially better than B according to U1, then A is better than B according to U. This shows that a small probability of error, whether coming from the stochasticity of our process or an agent’s uncertainty about the process’ output, has only a small effect on the resulting values. Moreover, the process contains humans who have access to a simulation of our world. This implies, in particular, that they have access to a simulation of whatever U-maximizing agents exist in the world, and they have knowledge of those agents’ beliefs about U. 
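The bound derived above is easy to sanity-check numerically. The sketch below uses an assumed ε and randomly sampled utility values (all illustrative) and verifies that whenever U1 prefers A to B by more than ε / (1 - ε), the perturbed U agrees, no matter what the error term U2 does.

```python
import random

# Check the bound: with U = (1 - eps) * U1 + eps * U2 and 0 <= U2 <= 1, whenever
# U1(A) > U1(B) + eps / (1 - eps), we must have U(A) > U(B), regardless of U2.
eps = 0.01                      # assumed failure probability (illustrative)
margin = eps / (1 - eps)

random.seed(0)
for _ in range(100_000):
    U1_B = random.uniform(0.0, 1.0 - margin - 1e-6)
    U1_A = random.uniform(U1_B + margin + 1e-6, 1.0)   # A beats B by more than the margin under U1
    U2_A, U2_B = random.random(), random.random()      # arbitrary "error" utilities in [0, 1]
    U_A = (1 - eps) * U1_A + eps * U2_A
    U_B = (1 - eps) * U1_B + eps * U2_B
    assert U_A > U_B
print("bound held in all sampled cases")
```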
Access to such a simulation allows the simulated humans to choose U with perfect knowledge of the effects of error in those agents’ judgments. In some cases this will allow them to completely negate the effect of error terms. For example, if the randomness in our process causes a perfectly cooperative community of simulated humans to “control” U with probability 2⁄3, and causes an arbitrary adversary to control it with probability 1⁄3, then the simulated humans can spend half of their mass outputting a utility function which exactly counters the effect of the adversary. In general, the situation is not quite so simple: the fraction of mass controlled by any particular coalition will vary as the system’s uncertainty about U varies, and so it will be impossible to counteract the effect of an error term in a way which is time-independent. Instead, we will argue later that an appropriate choice of a bounded and noisy U can be used to achieve a very wide variety of effective behaviors of U-maximizers, overcoming the limitations both of bounded utility maximization and of noisy specification of utility functions. Many possible problems with this scheme were described or implicitly addressed above. But that discussion was not exhaustive, and there are some classes of errors that fall through the cracks. One interesting class of failures concerns changes in the values of the hypothetical human H. This human is in a very strange situation, and it seems quite possible that the physical universe we know contains extremely few instances of that situation (especially as the process unfolds and becomes more exotic). So H’s first-person experience of this situation may lead to significant changes in H’s views. For example, our intuition that our own universe is valuable seems to be derived substantially from our judgment that our own first-person experiences are valuable. If hypothetically we found ourselves in a very alien universe, it seems quite plausible that we would judge the experiences within that universe to be morally valuable as well (depending perhaps on our initial philosophical inclinations). Another example concerns our self-interest: much of individual humans’ values seem to depend on their own anticipations about what will happen to them, especially when faced with the prospect of very negative outcomes. If hypothetically we woke up in a completely non-physical situation, it is not exactly clear what we would anticipate, and this may distort our behavior. Would we anticipate the thought experiment proceeding as planned? Would we focus our attention on those locations in the universe where a simulation of the thought experiment might be occurring? This possibility is particularly troubling in light of the incentives our scheme creates — anyone who can manipulate H’s behavior can have a significant effect on the future of our world, and so many may be motivated to create simulations of H. A realistic U-maximizer will not be able to carry out the process described in the definition of U–in fact, this process probably requires immensely more computing resources than are available in the universe. (It may even involve the reaction of a simulated human to watching a simulation of the universe!) To what extent can we make robust guarantees about the behavior of such an agent? 
We have already touched on this difficulty when discussing the maxim “A state of affairs is valuable to the extent I would judge it valuable after a century of reflection.” We cannot generally predict our own judgments in a hundred years’ time, but we can have well-founded beliefs about those judgments and act on the basis of those beliefs. We can also have beliefs about the value of further deliberation, and can strike a balance between such deliberation and acting on our current best guess. A U-maximizer faces a similar set of problems: it cannot understand the exact form of U, but it can still have well-founded beliefs about U, and about what sorts of actions are good according to U. For example, if we suppose that the U-maximizer can carry out any reasoning that we can carry out, then the U-maximizer knows to avoid anything which we suspect would be bad according to U (for example, torturing humans). Even if the U-maximizer cannot carry out this reasoning, as long as it can recognize that humans have powerful predictive models for other humans, it can simply appropriate those models (either by carrying out reasoning inspired by human models, or by simply asking). Moreover, the community of humans being simulated in our process has access to a simulation of whatever U-maximizer is operating under this uncertainty, and has a detailed understanding of that uncertainty. This allows the community to shape their actions in a way with predictable (to the U-maximizer) consequences. It is easily conceivable that our values cannot be captured by a bounded utility function. Easiest to imagine is the possibility that some states of the world are much better than others, in a way that requires unbounded utility functions. But it is also conceivable that the framework of utility maximization is fundamentally not an appropriate one for guiding such an agent’s action, or that the notion of utility maximization hides subtleties which we do not yet appreciate. We will argue that it is possible to transform bounded utility maximization into an arbitrary alternative system of decision-making, by designing a utility function which rewards worlds in which the U-maximizer replaced itself with an alternative decision-maker. It is straightforward to design a utility function which is maximized in worlds where any particular U-maximizer converted itself into a non-U-maximizer–even if no simple characterization can be found for the desired act, we can simply instantiate many communities of humans to look over a world history and decide whether or not they judge the U-maximizer to have acted appropriately. The more complicated question is whether a realistic U-maximizer can be made to convert itself into a non-U-maximizer, given that it is logically uncertain about the nature of U. It is at least conceivable that it couldn’t: if the desirability of some other behavior is only revealed by philosophical considerations which are too complex to ever be discovered by physically limited agents, then we should not expect any physically limited U-maximizer to respond to those considerations. Of course, in this case we could also not expect normal human deliberation to correctly capture our values. The relevant question is whether a U-maximizer could switch to a different normative framework, if an ordinary investment of effort by human society revealed that a different normative framework was more appropriate. 
If a U-maximizer does not spend any time investigating this possibility, then it may not be expected to act on it. But to the extent that we assign a significant probability to the simulated humans deciding that a different normative framework is more appropriate, and to the extent that the U-maximizer is able to either emulate or accept our reasoning, it will also assign a significant probability to this possibility (unless it is able to rule it out by more sophisticated reasoning). If we (and the U-maximizer) expect the simulations to output a U which rewards a switch to a different normative framework, and this possibility is considered seriously, then U-maximization entails exploring this possibility. If these explorations suggest that the simulated humans probably do recommend some particular alternative framework, and will output a U which assigns high value to worlds in which this framework is adopted and low value to worlds in which it isn’t, then a U-maximizer will change frameworks. Such a “change of frameworks” may involve sweeping action in the world. For example, the U-maximizer may have created many other agents which are pursuing activities instrumentally useful to maximizing U. These agents may then need to be destroyed or altered; anticipating this possibility, the U-maximizer is likely to take actions to ensure that its current “best guess” about U does not get locked in. This argument suggests that a U-maximizer could adopt an arbitrary alternative framework, if it were feasible to conclude that humans would endorse that framework upon reflection. Our proposal appears to be something of a cop-out, in that it declines to directly take a stance on any ethical issues. Indeed, not only do we fail to specify a utility function ourselves, but we expect the simulations to which we have delegated the problem to delegate it, in turn, at least a few more times. Clearly at some point this process must bottom out with actual value judgments, and we may be concerned that this sort of “passing the buck” is just obscuring deeper problems which will arise when the process does bottom out. As observed above, whatever such concerns we might have can also be discovered by the simulations we create. If there is some fundamental difficulty which always arises when trying to assign values, then we certainly have not exacerbated this problem by delegation. Nevertheless, there are at least two coherent objections one might raise: Both of these objections can be met with a single response. In the current world, we face a broad range of difficult and often urgent problems. By passing the buck the first time, we delegate resolution of ethical challenges to a civilization which does not have to deal with some of these difficulties–in particular, it faces no urgent existential threats. This allows us to divert as much energy as possible to dealing with practical problems today, while still capturing most of the benefits of nearly arbitrarily extensive ethical deliberation. This process is defined in terms of the behavior of unthinkably many hypothetical brain emulations. It is conceivable that the moral status of these emulations may be significant. We must make a distinction between two possible sources of moral value: it could be the case that a U-maximizer carries out simulations on physical hardware in order to better understand U, and these simulations have moral value, or it could be the case that the hypothetical emulations themselves have moral value. 
In the first case, we can remark that the moral value of such simulations is itself incorporated into the definition of U. Therefore a U-maximizer will be sensitive to the possible suffering of simulations it runs while trying to learn about U–as long as it believes that we might be concerned about the simulations’ welfare, upon reflection, it can rely as much as possible on approaches which do not involve running simulations, which deprive simulations of the first-person experience of discomfort, or which estimate outcomes by running simulations in more pleasant circumstances. If the U-maximizer is able to foresee that we will consider certain sacrifices in simulation welfare worthwhile, then it will make those sacrifices. In general, in the same way that we can argue that estimates of U reflect our values over states of affairs, we can argue that estimates of U reflect our values over processes for learning about U. In the second case, a U-maximizer in our world may have little ability to influence the welfare of hypothetical simulations invoked in the definition of U. However, the possible disvalue of these simulations’ experiences is probably seriously diminished. In general the moral value of such hypothetical simulations’ experiences is somewhat dubious. If we simply write down the definition of U, these simulations seem to have no more reality than story-book characters whose activities we describe. The best arguments for their moral relevance come from the great causal significance of their decisions: if the actions of a powerful U-maximizer depend on its beliefs about what a particular simulation would do in a particular situation, including for example that simulation’s awareness of discomfort or fear, or confusion at the absurdity of the hypothetical situation in which they find themselves, then it may be the case that those emotional responses are granted moral significance. However, although we may define astronomical numbers of hypothetical simulations, the detailed emotional responses of very few of these simulations will play an important role in the definition of U. Moreover, for the most part the existences of the hypothetical simulations we define are extremely well-controlled by those simulations themselves, and may be expected to be counted as unusually happy by the lights of the simulations themselves. The early simulations (who have less such control) are created from an individual who has provided consent and is selected to find such situations particularly non-distressing. Finally, we observe that U can exert control over the experiences of even hypothetical simulations. If the early simulations would experience morally relevant suffering because of their causal significance, but the later simulations they generate robustly disvalue this suffering, the later simulations can simulate each other and ensure that they all take the same actions, eliminating the causal significance of the earlier simulations. Originally published at ordinaryideas.wordpress.com on April 21, 2012.
Robbie Tilton
3
15
https://medium.com/@robbietilton/emotional-computing-with-ai-3513884055fa?source=tag_archive---------1----------------
Emotional Computing – Robbie Tilton – Medium
Investigating the human to computer relationship through reverse engineering the Turing test. Humans are getting closer to creating a computer with the ability to feel and think. Although the processes of the human brain are at large unknown, computer scientists have been working to simulate the human capacity to feel and understand emotions. This paper explores what it means to live in an age where computers can have emotional depth and what this means for the future of human to computer interactions. In an experiment between a human and a human disguised as a computer, the Turing test is reverse engineered in order to understand the role computers will play as they become more adept at the processes of the human mind. Implications for this study are discussed and the direction for future research suggested. The computer is a gateway technology that has opened up new ways of creation, communication, and expression. Computers in first world countries are a standard household item (approximately 70% of Americans owned one as of 2009 (US Census Bureau)) and are utilized as a tool to achieve a diverse range of goals. As this product continues to become more globalized, transistors are becoming smaller, processors are becoming faster, hard drives are holding information in new networked patterns, and humans are adapting to the methods of interaction expected of machines. At the same time, with more powerful computers and quicker means of communication, many researchers are exploring how a computer can serve as a tool to simulate the brain’s cognition. If a computer is able to achieve the same intellectual and emotional properties as the human brain, we could potentially understand how we ourselves think and feel. Coined by MIT, the term Affective Computing relates to computation of emotion or the affective phenomena and is a study that breaks down complex processes of the brain, relating them to machine-like activities. Marvin Minsky, Rosalind Picard, Clifford Nass, and Scott Brave — along with many others — have contributed to this field and what it would mean to have a computer that could fully understand its users. In their research it is very clear that humans have the capacity to associate human emotions and personality traits with a machine (Nass and Brave, 2005), but can a human ever truly treat a machine as a person? In this paper we will uncover what it means for humans to interact with machines of greater intelligence and attempt to predict the future of human to computer interactions. The human to computer relationship is continuously evolving and is dependent on the software interface users interact with. With regard to current wide-scale interfaces — OSX, Windows, Linux, iOS, and Android — the tools and abilities that a computer provides remain the central focus of computational advancements for commercial purposes. This relationship to software is driven by utilitarian needs and humans do not expect emotional comprehension or intellectually equivalent thoughts in their household devices. As face tracking, eye tracking, speech recognition, and kinetic recognition are advancing in their experimental laboratories, it is anticipated that these technologies will eventually make their way to the mainstream market to provide a new relationship to what a computer can understand about its users and how a user can interact with a computer. 
This paper is not about whether a computer will have the ability to feel and love its user, but asks the question: to what capacity will humans be able to reciprocate feelings to a machine? How does the Intelligence Quotient (IQ) differ from the Emotional Quotient (EQ)? An IQ is a representational relationship of intelligence that measures cognitive abilities like learning, understanding, and dealing with new situations. An EQ is a method of measuring emotional intelligence and the ability to both use emotions and cognitive skills (Cherry). Advances in computer IQ have been astonishing and have proved that machines are capable of answering difficult questions accurately, are able to hold a conversation with human-like understanding, and allow for emotional connections between a human and a machine. The Turing test in particular has shown the machine’s ability to think and even fool a person into believing that it is a human (the Turing test is explained in detail in section 4). Machines like Deep Blue, Watson, Eliza, Svetlana, CleverBot, and many more have all expanded the perceptions of what a computer is and can be. If an increased computational IQ can allow a human to computer relationship to feel more like a human to human interaction, what would the advancement of computational EQ bring us? Peter Robinson, a professor at the University of Cambridge, states that if a computer understands its users’ feelings, it can then respond with an interaction that is more intuitive for its users (Robinson). In essence, EQ advocates feel that it can facilitate a more natural interaction process where collaboration can occur with a computer. In Alan Turing’s Computing Machinery and Intelligence (Turing, 1950), a variant on the classic British parlor “imitation game” is proposed. The original game revolves around three players: a man (A), a woman (B), and an interrogator (C). The interrogator stays in a room apart from A and B and can only communicate with the participants through text-based communication (a typewriter or instant messenger style interface). When the game begins, one contestant (A or B) is asked to pretend to be the opposite gender and to try and convince the interrogator (C) of this. At the same time the opposing participant is given full knowledge that the other contestant is trying to fool the interrogator. Given his computational background, Turing took this imitation game one step further by replacing one of the participants (A or B) with a machine — thus making the interrogator try to determine whether he/she was speaking to a human or a machine. In 1950, Turing proposed that by 2000 the average interrogator would not have more than a 70 percent chance of making the right identification after five minutes of questioning. The Turing test was first passed in 1966, with Eliza by Joseph Weizenbaum, a chat robot programmed to act like a Rogerian psychotherapist (Weizenbaum, 1966). In 1972, Kenneth Colby created a similar bot called PARRY that incorporated more personality than Eliza and was programmed to act like a paranoid schizophrenic (Bowden, 2006). Since these initial victories for the test, the 21st century has continued to produce machines with more human-like qualities and traits: machines that have made people fall in love with them, convinced them they were human, and displayed human-like reasoning. 
Brian Christian, the author of The Most Human Human, argues that the problem with designing artificial intelligence with greater ability is that even though these machines are capable of learning and speaking, they have no “self”. They are mere accumulations of identities and thoughts that are foreign to the machine and have no central identity of their own. He also argues that people are beginning to idealize the machine and admire machines’ capabilities more than their fellow humans — in essence — he argues humans are evolving to become more like machines with less of a notion of self (Christian 2011). Turing states, “we like to believe that Man is in some subtle way superior to the rest of creation” and “it is likely to be quite strong in intellectual people, since they value the power of thinking more highly than others, and are more inclined to base their belief in the superiority of Man on this power.” If this is true, will humans idealize the future of the machine for its intelligence, or will the machine remain an inferior being, an object of our creation? Reversing the Turing test allows us to understand how humans will treat machines when machines provide an equivalent emotional and intellectual capacity. This also hits directly on Geoffrey Jefferson’s quote, “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain — that is, not only write it but know that it had written it.” Participants were given a chat-room simulation between two participants: (A) a human interrogator and (B) a human disguised as a computer. In this simulation A and B were both placed in different rooms to avoid influence and communicated through a text-based interface. (A) was informed that (B) was an advanced computer chat-bot with the capacity to feel, understand, learn, and speak like a human. (B) was instructed to be himself or herself. Text-based communication was chosen to follow Turing’s argument that a computer’s voice should not help an interrogator determine whether it is a human or a computer. Pairings of participants were chosen to participate in the interaction one at a time to avoid influence from other participants. Each experiment was five minutes in length to replicate Turing’s time constraints. Twenty-eight graduate students were recruited from the NYU Interactive Telecommunications Program to participate in the study — 50% male and 50% female. The experiment was evenly distributed across men and women. After being recruited in person, participants were directed to a website that gave instructions and ran the experiment. Upon entering the website, (A) participants were told that we were in the process of evaluating an advanced cloud-based computing system that had the capacity to feel emotion, understand, learn, and converse like a human. (B) participants were instructed that they would be communicating with another person through text and to be themselves. They were also told that participant (A) thinks they are a computer, but that they shouldn’t act like a computer or pretend to be one in any way. This allowed (A) to explicitly understand that they were talking to a computer, while (B) knew (A)’s perspective and explicitly was not going to play the role of a computer. Participants were then directed to communicate with the bot or human freely without restrictions. After five minutes of conversation the participants were asked to stop and then filled out a questionnaire. 
Participants were asked to rate the IQ and EQ of the person they were conversing with.
(A) participants perceived the following of (B):
IQ: 0% Not Good / 0% Barely Acceptable / 21.4% Okay / 50% Great / 28.6% Excellent (average rating: 81.4%)
EQ: 0% Not Good / 7.1% Barely Acceptable / 50% Okay / 14.3% Great / 28.6% Excellent (average rating: 72.8%)
Ability to hold a conversation: 0% Not Good / 0% Barely Acceptable / 28.6% Okay / 35.7% Great / 35.7% Excellent (average rating: 81.4%)
(B) participants perceived the following of (A):
IQ: 0% Not Good / 21.4% Barely Acceptable / 35.7% Okay / 28.6% Great / 14.3% Excellent (average rating: 67%)
EQ: 7.1% Not Good / 14.3% Barely Acceptable / 28.6% Okay / 35.7% Great / 14.3% Excellent (average rating: 67%)
Ability to hold a conversation: 7.1% Not Good / 28.6% Barely Acceptable / 35.7% Okay / 0% Great / 28.6% Excellent (average rating: 62.8%)
Overall, (A) participants gave the perceived chatbot higher ratings than (B) participants gave (A). In particular, the highest rating was in regard to the chat-bot’s IQ. This data suggests that people viewed the chat-bot as more intellectually competent. It also implies that people lower their displayed IQ, EQ, and conversational ability when communicating with computers. (A) participants were allowed to decide their username within the chat system to best reflect how they wanted to portray themselves to the machine. (B) participants were designated the gender-neutral name “Bot” in an attempt to gauge gender perceptions of the machine. The male to female ratio was divided evenly across all participants: 50% male and 50% female. (A) participants thought (B) was male 50% of the time, female 7.1% of the time, and gender neutral 42.9% of the time. On the other hand, (B) participants thought (A) was male 28.6% of the time, female 57.1% of the time, and gender neutral 14.3% of the time. The usernames (A) chose are as follows: Hihi, Inessah Somade3 Willzing Jihyun, G, Ann, Divagrrl93, Thisdoug, Jono, Minion10, P, 123, itslynnburke. From these results, it is clear that people associate the male gender and gender neutrality with machines. It also demonstrates that people modify their identities when speaking with machines. (B) participants were asked if they would like to pursue a friendship with the person they chatted with. 50% of participants responded affirmatively that they would indeed like to pursue a friendship, while 50% said maybe or no. One response stated, “I would like to continue the conversation, but I don’t think I would be enticed to pursue a friendship.” Another responded, “Maybe? I like people who are intellectually curious, but I worry that the person might be a bit of a smart-ass.” Overall, the participant disguised as a machine may or may not pursue a friendship after five minutes of text-based conversation. (B) participants were also asked if they felt (A) cared about their feelings. 21.4% stated that (A) indeed did care about their feelings, 21.4% stated that they weren’t sure if (A) cared about their feelings, and 57.2% stated that (A) did not care about their feelings. These results indicate a user’s lack of attention to (B)’s emotional state. (A) participants were asked what they felt could be improved about the (B) participants. 
(A) participants were asked what they felt could be improved about the (B) participants. The following improvements were noted: “Should be funny” “Give it a better sense of humor” “It can be better if he knows about my friends or preference” “The response was inconsistent and too slow” “It should share more about itself. Your algorithm is prime prude, just like that LETDOWN Siri. Well, I guess I liked it better, but it should be more engaged and human consistency, not after the first cold prompt.” “It pushed me on too many questions” “I felt that it gave up on answering and the response time was a bit slow. Outsource the chatbot to fluent English speakers elsewhere and pretend they are bots — if the responses are this slow to this many inquiries, then it should be about the same experience.” “I was very impressed with its parsing ability so far. Not as much with its reasoning. I think some parameters for the conversation would help, like ‘Ask a question’” “Maybe make the response faster” “I was confused at first, because I asked a question, waited a bit, then asked another question, waited and then got a response from the bot...”
These responses indicate that even when the “computer” is in fact a human, its user may not necessarily be fully satisfied with its performance. They imply that each user would like the machine to accommodate his or her needs, so as to cause less personal and cognitive friction. With several comments mentioning response time, they also indicate that people expect machines to respond at a consistent pace: humans clearly vary in speed when listening, thinking, and responding, but machines are expected to act in a rhythmic fashion. The responses further suggest an expectation that a machine will answer every question asked and will not ask its users more questions than seems necessary.
(A) participants were asked if they felt (B)’s artificial intelligence could improve their relationship with computers if integrated into their daily products. 57.1% of participants responded affirmatively that they felt this could improve their relationship: “Well- I think I prefer talking to a person better. But yes for ipod, smart phones, etc. would be very handy for everyday use products” “Yes. Especially iphone is always with me. So it can track my daily behaviors. That makes the algorithm smarter” “Possibly, I should have queries it for information that would have been more relevant to me” “Absolutely!” “Yes” The 42.9% who responded negatively had doubts that it would be necessary or desirable: “Not sure, it might creep me out if it were.” “I like Siri as much as the next gal, but honestly we’re approaching the uncanny valley now.” “Its not clear to me why this type of relationship needs to improve, i think human relationships still need a lot of work.” “Nope, I still prefer flesh sacks.” “No”
The findings of the paper are relevant to the future of affective computation: whether a supercomputer with a human-like IQ and EQ can improve human-to-computer interaction. The uncertainty of computational equivalency that Turing brought forth is indeed an interesting starting point for understanding what we want out of the future of computers. The responses from the experiment affirm gender perceptions of machines and show how we display ourselves to machines. It seems that we limit our intelligence, limit our emotions, and obscure our identities when communicating with a machine. This leads us to question whether we would want to give our true self to a computer if it doesn’t have a self of its own.
It could also indicate that people censor themselves for machines because they lack the similarity that bonds humans to humans, or that there is a stigma associated with placing information in a digital device. The inverse relationship is also shown in the data: people perceive a bot’s IQ, EQ, and conversational ability to be high. Even though the chat-bot was in fact a human, this implies that humans perceive bots as unrestricted and as competent at certain procedures. The results also imply that humans aren’t really sure what they want out of artificial intelligence in the future, and that we are not certain an affective computer would even enjoy a user’s company and/or conversation. The results further suggest that we currently think of computers as very personal devices that should be passive (not active) yet reactive when interacted with; that we expect a consistent reliability from machines; and that we expect to take more information from a machine than it takes from us.
A major limitation of this experiment is the sample size and sample diversity. A sample of twenty-eight students is too small to produce a stable result set. The experiment was also conducted only with students from the NYU Interactive Telecommunications Program, who all have extensive experience with computers and technology. To get a more accurate assessment of emotions, a more diverse sample needs to be taken. Five minutes is a short amount of time in which to create an emotional connection or friendship; this limit was enforced to stay true to the Turing test, but more time would allow a deeper understanding of the relationship. Besides the visual interface of the chat window, it would be important to show the emotions of participant (B) through a virtual avatar; not having this visual feedback could have limited the emotional resonance with (A) participants. Time is also a limitation in another sense: people aren’t used to speaking with inquisitive machines yet, and even through a familiar interface (a chat room) many participants had not held conversations with machines before. Perhaps if chat-bots become more active conversational participants in commercial applications, users will feel less censored about giving themselves to the conversation.
In addition to the refinements noted in the limitations described above, there are several other experiments for possible future studies, for example investigating a long-term human-to-bot relationship. This would provide a better understanding of the emotions a human can share with a machine and of how a machine can reciprocate these emotions. It would also allow computer scientists to better understand what really creates a significant relationship when physical limitations are present. Future studies should attempt to push these results further by understanding how a larger sample reacts to a computer algorithm with higher intellectual and emotional understanding. They should also attempt to map the boundaries of emotional computing and what is ideal for the user and for the machine without compromising either party’s capacities. This paper demonstrates the diverse range of emotions that people can feel for affective computation and indicates that we are not yet at a time when computational equivalency is fully desired or accepted. Positive reactions indicate that there is optimism for more adept artificial intelligence and that there is interest in the field for commercial use.
It also provides insight that humans limit themselves when communicating with machines and that, inversely, machines don’t limit themselves when communicating with humans.
Books & Articles
Boden, M., 2006. Mind as Machine: A History of Cognitive Science, Oxford University Press.
Christian, B., 2011. The Most Human Human.
Minsky, M., 2006. The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind, Simon & Schuster Paperbacks.
Nass, C., Brave, S., 2005. Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship, MIT Press.
Brave, S., Nass, C., Hutchinson, K., 2005. Computers that care: Investigating the effects of orientation of emotion exhibited by an embodied computer agent, International Journal of Human-Computer Studies, 161–178, Elsevier.
Picard, R., 1997. Affective Computing, MIT Press.
Searle, J., 1980. Minds, Brains, and Programs, Cambridge University Press, 417–457.
Turing, A., 1950. Computing Machinery and Intelligence, Mind, 59, 433–460.
Wilson, R., Keil, F., 2001. The MIT Encyclopedia of the Cognitive Sciences, MIT Press.
Weizenbaum, J., 1966. ELIZA — A Computer Program for the Study of Natural Language Communication Between Man and Machine, Communications of the ACM, 36–45.
Websites
Cherry, K., What is Emotional Intelligence?, http://psychology.about.com/od/personalitydevelopment/a/emotionalintell.htm
Epstein, R., 2006. Clever Bots, Radio Lab, http://www.radiolab.org/2011/may/31/clever-bots/
IBM, 1997. Deep Blue, IBM, http://www.research.ibm.com/deepblue/
IBM, 2011. Watson, IBM, http://www-03.ibm.com/innovation/us/watson/index.html
Leavitt, D., 2011. I Took the Turing Test, New York Times, http://www.nytimes.com/2011/03/20/books/review/book-review-the-most-human-human-by-brian-christian.html
Personal Robotics Group, 2008. Nexi, MIT, http://robotic.media.mit.edu/
Robinson, P., The Emotional Computer, Cambridge Ideas, http://www.cam.ac.uk/research/news/the-emotional-computer/
US Census Bureau, 2009. Households with a Computer and Internet Use: 1984 to 2009, http://www.census.gov/hhes/computer/
Eliza, 1960s, MIT, http://www.manifestation.com/neurotoys/eliza.php3
Wildcat2030
5
5
https://becominghuman.ai/becoming-a-cyborg-should-be-taken-gently-of-modern-bio-paleo-machines-cyborgology-b6c65436e416?source=tag_archive---------3----------------
Becoming a Cyborg should be taken gently: Of Modern Bio-Paleo-Machines — Cyborgology
We are on the edge of a Paleolithic machine intelligence world. A world oscillating between that which is already historical and that which is barely recognizable. Some of us, teetering on this bio-electronic borderline, have this ghostly sensation that a new horizon is on the verge of being revealed, still misty yet glowing with some inner light, eerie but compelling. The metaphor I used, bridging two seemingly contrasting and at first sight paradoxical notions, such a futuristic concept as machine intelligence and the Paleolithic age, is apt I think. For though advances in computation, with fractional AI appearing almost everywhere, are becoming nearly casual, the truth of the matter is that machines are still tribal and dispersed. It is a dawn all right, but a dawn is still only a hint of the day that is about to shine, a dawn of hyperconnected machines, interwoven with biological organisms, cybernetically info-related and semi-independent. The modern Paleo-machines do not recognize borders; they do not concern themselves with values and morality and do not philosophize about the meaning of it all, not yet that is. As in our own Paleo past, the needs of the machines do not yet contain passions for individuation, desire for emotional recognition, or indeed feelings of dismay or despair, uncontrollable urges, or dreams of far worlds. This too will change, eventually. But not yet. The paleo machinic world is in its experimentation stage, probing its boundaries, surveying the landscape of the infoverse, mapping the hyperconnected situation, charting a trajectory for its own evolution, all this unconsciously. We, the biological part of the machine, are providing the tools for its uplift: we embed cameras everywhere so it can see, we implant sensors all over the planet so it may feel, but above all we nudge and we push towards a greater connectivity, all this unaware. Together we form a weird cohabitation, a biomechanical, electro-organic, planetary OS that is changing its environment, no longer human, not mechanical, but a combined interactive intelligence that journeys on, oblivious to its past, blind to its future, irreverent to the moment of its conception, already lost to its parenthood agreement. And yet, it evolves. Unconscious on the machine part, unaware on the biological part, the almost sentient operating system of the global planetary infosphere is emerging, wild-eyed, complex in its arrangement of co-existence; it reaches to comprehend its unexpected growth. The quid pro quo: we give the machines the platform to evolve; the machines in turn give us advantages of fitness and manipulation. We give the machines a space to turn our dreams into reality; the machines in turn serve our needs and acquire sapience in the process. In this hypercomplex state of affairs there is no judgment and no inherent morality; there is motion, inevitable, inexorable, inescapable, and mesmerizing. The embodiment is cybernetic, though there be no pilot. Cyborgian and enhanced, we play the game, not of thrones but of the commons. Connected and networked, the machines follow in our footsteps, catalyzing our universality, providing for us in turn a meaning we cannot yet understand or realize. The hybridization process is in full swing, reaching to cohere tribes of machines with tribes of humans, each providing for the other a non-designed direction for which neither has a plan or projected outcome, both mingling and weaving a reality for which there is no ontos, expecting no telos.
All this leads us to remember that only retrospectively do we recognize the move from the paleo tribes to the Neolithic state; we did not know that it was happening then, and had no control over the motion. By the same token, we scarcely see the motion now and have no control over its directionality. There is, however, a small difference (some will say it is insignificant; I do not think it so), for we are, some of us, to some extent at least, aware of the motion, and we can embed it with a meaning of our choice. We can, if we muster our cognitive reason, our amazing skills of abstraction and simulation, whisper sweet utopias into the probability process of emergence. We can, if we so desire, impassion the operating system, beautify the process of evolution and eliminate (or mitigate) the dangers of inchoate blind walking. We can, if we manage to control our own paleo-urges to destroy ourselves, allow the combined interactive intelligence of man and machine to shine forth into a brighter future of expanded subjectivity. We can sing to the machines, cuddle them, caress their circuits, accepting their electronic flaws so they can accept our bio-flaws; we can merge aesthetically, not with conquest but with understanding. We can become wise; that is the difference this time around. Being wise in this context implies a new form of discourse, an intersubjective cross-pollination of a wide array of disciplines. The very trans-disciplinary nature of the process of cyborgization informs the discourse of subjectivity. The discourse on subjectivity, not unlike the move from Paleolithic to Neolithic societal structure, demands of us a re-assessment of the relations between man and machine. For this re-assessment to take place coherently, the nascent re-organization of the hyperconnected machinic infosphere needs to be understood as a ground for the expansion of subjectivity. In a sense the motion into the new hyperconnected infosphere is not unlike the Neolithic move to the domestication of plants and animals. This time around, however, the domestication can be seen as the adoption of technologies for the furtherance of subjectivity into the world. Understanding this process is difficult and far from obvious; it is a perspective, however, that might allow us a wider context for appreciating the current upheavals happening all around us.
*** A writer, futurist and Polytopian, Tyger.A.C (a.k.a. @Wildcat2030) is the founder and editor of the Polytopia Project at Space Collective; he also writes at Reality Augmented and Urbnfutr, as well as contributing to H+ Magazine. His passion and love for science fiction led him to initiate the Sci-fi Ultrashorts project. ***
Photo credit for baby with iPad photo: “Illumination” by Amanda Tipton. Originally published at thesocietypages.org on November 22, 2012.
Greg Fish
1
4
https://worldofweirdthings.com/why-you-just-cant-black-box-an-a-i-d7c41e7d9123?source=tag_archive---------5----------------
why you just can’t black box an a.i. – [ weird things ]
Singularitarians generally believe two things about artificial intelligence. First and foremost, they say, it's just a matter of time before we have an AI system that will quickly become superhumanly intelligent. Secondly, and a lot more ominously, they believe that this system could sweep away humanity, not because it will be evil by nature but because it won't care about humans or what happens to them, and that one of the biggest priorities for researchers in the field should be figuring out how to build a friendly artificial intelligence, training it like one would train a pet, with a mix of operant conditioning and software. While the first point is one I've covered before, pointing out again and again that superhuman is a very relative term and that computers are in many ways already superhuman without being intelligent, the second point is one that I haven't yet given a proper examination. And neither have vocal Singularitarians. Why? Because if you read any of the papers on their version of friendly AI, you'll soon discover how quickly they begin to describe the system they're trying to tame as a black box with mostly known inputs and measurable outputs, hardly a confident and lucid description of how an artificial intelligence functions and, ultimately, what rules will govern it. No problem there, say the Singularitarians: the system will be so advanced by the time this happens that we'll be very unlikely to know exactly how it functions anyway. It will modify its own source code, optimize how well it performs, and generally be all but inscrutable to computer scientists. Sounds great for comic books, but when we're talking about real artificially intelligent systems, this approach sounds more like surrendering, letting robots, artificial neural networks, and Bayesian classifiers come up with whatever intelligence they want while all the researchers and programmers are sent out for coffee in the meantime. Artificial intelligence will not grow from a vacuum; it will come together from systems used to tackle discrete tasks, governed by several, if not one, common frameworks that exchange information between these systems. I say this because the only forms of intelligence we can readily identify are found in living things which use a brain to perform cognitive tasks, and since brains seem to be wired this way and we're trying to emulate the basic functions of the brain, it wouldn't be much of a stretch to assume that we'd want to combine systems good at related tasks and build on the accomplishments of existing systems. And to combine them, we'll have to know how to build them. Conceiving of an AI as a black box is a good approach if we want to test how a particular system should react when working with the AI, focusing on the system we're trying to test by mocking the AI's responses down the chain of events. Think of it as dependency injection with an AI interfacing system. But by abstracting the AI away, what we've also done is made it impossible to test the inner workings of the AI system. No wonder, then, that the Singularitarian fellows have to bring in operant conditioning or social training to basically housebreak the synthetic mind into doing what they need it to do. They have no other choice. In their framework we cannot simply debug the system or reset its configuration files to limit its actions. But why have they resigned themselves to such an odd notion, and why do they assume that computer scientists are creating something they won't be able to control?
Even more bizarrely, why do they think that an intelligence that can't be controlled by its creators could be controlled by a module they'll attach to the black box to regulate how nice or malevolent it would be towards humans? Wouldn't it just find a way around that module too if it's superhumanly smart? Wouldn't it make a lot more sense for its creators to build it to act in cooperation with humans, by watching what humans say or do and treating each reaction or command as a trigger for carrying out a useful action it was trained to perform? And that brings us back full circle. To train machines to do something, we have to lay out a neural network and some higher-level logic to coordinate what the networks' outputs mean. We'll need to confirm that the training was successful before we employ it for any specific task. Therefore, we'll know how it learned, what it learned, and how it makes its decisions, because all machines work on propositional logic and hence would make the same choice or set of choices at any given time. If it didn't, we wouldn't use it. So of what use is a black box AI here, when we can just lay out the logical diagram, figure out how it's making decisions, and alter its cognitive process if need be? Again, we could isolate the components and mock their behavior to test how individual sub-systems function on their own, eliminating the dependencies for each set of tests. Beyond that, this black box is either a hindrance to a researcher or a vehicle for someone who doesn't know how to build a synthetic mind but really, really wants to talk about what he imagines it will be like and how to harness its raw cognitive power. And that's ok, really. But let's not pretend that we know that an artificial intelligence beyond its creators' understanding will suddenly emerge from the digital aether, when the odds of that are similar to my toaster coming to life and barking at me when it thinks I want to feed it some bread.
Greg Fish
2
3
https://worldofweirdthings.com/why-do-we-want-to-build-a-fully-fledged-a-g-i-1658afc3f758?source=tag_archive---------6----------------
why do we want to build a fully fledged a.g.i.? – [ weird things ]
Undoubtedly the most ambitious idea in the world of artificial intelligence is creating an entity comparable to a human in cognitive abilities, the so-called AGI. We could debate how it may come about, whether it will want to be your friend or not, whether it will settle the metaphysical question of what makes humans who they are or open new doors in the discussion, but for a second let's think like software architects and ask the question we should always tackle first before designing anything. Why would we want to build it? What will we gain? A sapient friend or partner? We don't know that. Will we figure out what makes humans tick? Maybe, maybe not, since what works in the propositional logic of artificial neural networks doesn't necessarily apply to an organic human brain. Will we settle the question of how an intellect emerges? Not really, since we would only be providing one example, and a fairly controversial one at that. And what exactly will the G in AGI entail? Will we need to embody it for it to work, and if not, how would we develop the intellectual capacity of an entity extant only in abstract space? Will we have anything in common with it, and could we understand what it wants? And there's more to it than that, even though I just asked some fairly heavy questions. Were we to build an AGI not by accident but by design, we would effectively be making the choice to experiment on a sapient entity, and that's something that may have to be cleared by an ethics committee; otherwise we're implicitly saying that an artificial cognitive entity has no right to self-determination. And that may be fine if it doesn't really care about such rights, but what if it does? What if the drive for freedom evolves from a cognitive routine meant for self-defense and self-perpetuation? If we steer an AI model away from sapience by design, are we in effect snuffing out an opportunity or protecting ourselves? We can always suspend the model, debug it, and see what's going on in its mind, but again, the ethical considerations will play a significant part, and, very importantly, while we will get to know what such an AGI thinks and how, we may not know how it will first emerge. The whole AGI concept is a very ambiguous effort at defining intelligence and hence doesn't give us enough to objectively identify an intelligent artificial entity when we make one, because we can always find an argument for and against how to interpret the results of an experiment meant to design one. We barely even know where to start. Now, I could see major advantages to fusing with machines and becoming cyborgs in the near future, as we'd swap irreparably damaged parts and pieces for 3D-printed titanium, tungsten carbide, and carbon nanotubes to overcome crippling injury or treat an otherwise terminal disease. I could also see a huge upside to having direct interfaces to the machines around us to speed up our work and make life more convenient. But when it comes to such an abstract and all-consuming technological experiment as AGI, the benefits seem very, very nebulous at best, and the investment necessary seems extremely uncertain to pay off, since we can't even define what will make our AGI a true AGI rather than another example of a large expert system.
Whereas with wetware and expert systems we can measure our return on investment in lives saved or significant gains in efficiency, how do we justify creating another intelligent entity after many decades of work, especially if it turns out that we actually can't make one, or it turns out to be completely different from what we hoped it would be as it nears completion? But maybe I'm wrong. Maybe there's a benefit to an AGI that I'm overlooking, and if that is the case, enlighten me in the comments, because this is a serious question. Why pursue an AGI?
James Faghmous
187
6
https://medium.com/@nomadic_mind/new-to-machine-learning-avoid-these-three-mistakes-73258b3848a4?source=tag_archive---------0----------------
New to Machine Learning? Avoid these three mistakes
Machine learning (ML) is one of the hottest fields in data science. As soon as ML entered the mainstream through Amazon, Netflix, and Facebook, people have been giddy about what they can learn from their data. However, modern machine learning (i.e. not the theoretical statistical learning that emerged in the 70s) is very much an evolving field, and despite its many successes we are still learning what exactly ML can do for data practitioners. I gave a talk on this topic earlier this fall at Northwestern University and I wanted to share these cautionary tales with a wider audience.
Machine learning is a field of computer science where algorithms improve their performance at a certain task as more data are observed. To do so, algorithms select a hypothesis that best explains the data at hand, with the hope that the hypothesis will generalize to future (unseen) data. Take the left panel of the figure in the header: the crosses denote the observed data projected into a two-dimensional space — in this case house prices and their corresponding size in square meters. The blue line is the algorithm's best hypothesis to explain the observed data. It states: "there is a linear relationship between the price and size of a house. As the house's size increases, so does its price, in linear increments." Now, using this hypothesis, I can predict the price of an unseen data point based on its size. As the dimensions of the data increase, the hypotheses that explain the data become more complex. However, given that we are using a finite sample of observations to learn our hypothesis, finding an adequate hypothesis that generalizes to unseen data is nontrivial. There are three major pitfalls one can fall into that will prevent you from having a generalizable model, and hence the conclusions of your hypothesis will be in doubt.
Occam's razor is a principle attributed to William of Occam, a 14th-century philosopher. Occam's razor advocates for choosing the simplest hypothesis that explains your data, yet no simpler. While this notion is simple and elegant, it is often misunderstood to mean that we must select the simplest hypothesis possible regardless of performance. In their 2008 paper in Nature, Johan Nyberg and colleagues used a 4-level artificial neural network to predict seasonal hurricane counts using two or three environmental variables. The authors reported stellar accuracy in predicting seasonal North Atlantic hurricane counts; however, their model violates Occam's razor and most certainly doesn't generalize to unseen data. The razor was violated when the hypothesis or model selected to describe the relationship between environmental data and seasonal hurricane counts was generated using a four-layer neural network. A four-layer neural network can model virtually any function, no matter how complex, and could fit a small dataset very well but fail to generalize to unseen data. The rightmost panel in the top figure shows such an incident. The hypothesis selected by the algorithm (the blue curve) to explain the data is so complex that it fits through every single data point. That is: for any given house size in the training data, I can give you with pinpoint accuracy the price it would sell for. It doesn't take much to observe that even a human couldn't be that accurate. We could give you a very close estimate of the price, but to predict the selling price of a house, within a few dollars, every single time is impossible. The pitfall of selecting too complex a hypothesis is known as overfitting.
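The overfitting trade-off described above is easy to reproduce. Below is a minimal sketch (not from the article; the data and polynomial degrees are illustrative) that fits increasingly complex curves to a handful of noisy house prices and compares the error on the training points with the error on held-out points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "house" data: price grows roughly linearly with size, plus noise.
# Size is in hundreds of square meters to keep high-degree fits well behaved.
size = rng.uniform(0.5, 2.5, 30)
price = 300_000 * size + rng.normal(0, 40_000, 30)

# Hold out a third of the points to measure generalization.
size_train, price_train = size[:20], price[:20]
size_test, price_test = size[20:], price[20:]

def errors(degree):
    """Fit a polynomial of the given degree on the training points and
    return (training RMSE, held-out RMSE)."""
    coeffs = np.polyfit(size_train, price_train, degree)
    def rmse(x, y):
        return np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
    return rmse(size_train, price_train), rmse(size_test, price_test)

for degree in (1, 3, 15):
    train_err, test_err = errors(degree)
    print(f"degree {degree:2d}: train RMSE {train_err:9.0f}  test RMSE {test_err:9.0f}")
# The degree-15 curve hugs the training points (low training error) but its
# held-out error is generally far worse: memorizing rather than learning.
# NumPy may warn that the high-degree fit is poorly conditioned; that is the point.
```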
Think of overfitting as memorizing as opposed to learning. If you are a child and you are memorizing how to add numbers, you may memorize the sums of any pair of integers between 0 and 10. However, when asked to calculate 11 + 12 you will be unable to, because you have never seen 11 or 12 and therefore couldn't memorize their sum. That's what happens to an overfitted model: it gets too lazy to learn the general principle that explains the data and instead memorizes the data.
Data leakage occurs when the data you are using to learn a hypothesis happen to contain the information you are trying to predict. The most basic form of data leakage would be to use the same data that we want to predict as input to our model (e.g. use the price of a house to predict the price of the same house). However, most often data leakage occurs subtly and inadvertently. For example, one may wish to learn from anomalies as opposed to raw data, that is, deviations from a long-term mean. However, many fail to remove the test data before computing the anomalies, and hence the anomalies carry some information about the data you want to predict, since those data influenced the mean and standard deviation before being removed. There are several ways to avoid data leakage, as outlined by Claudia Perlich in her great paper on the subject. However, there is no silver bullet — sometimes you may inherit a corrupt dataset without even realizing it. One way to spot data leakage is if you are doing very poorly on unseen independent data. For example, say you got a dataset from someone that spanned 2000–2010, but you started collecting your own data from 2011 onward. If your model's performance is poor on the newly collected data, it may be a sign of data leakage. You must resist the urge to retrain the model with both the potentially corrupt and the new data. Instead, either try to identify the causes of poor performance on the new data or, better yet, independently reconstruct the entire dataset. As a rule of thumb, your best defense is to always be mindful of the possibility of data leakage in any dataset.
Sampling bias is the case where you shortchange your model by training it on a biased or non-random dataset, which results in a poorly generalizable hypothesis. In the case of housing prices, sampling bias occurs if, for some reason, all the house prices/sizes you collected were of huge mansions. However, when it was time to test your model and the first price you needed to predict was that of a 2-bedroom apartment, you couldn't predict it. Sampling bias happens very frequently, mainly because, as humans, we are notorious for being biased (nonrandom) samplers. One of the most common examples of this bias happens in startups and investing. If you attend any business school course, they will use all these "case studies" of how to build a successful company. Such case studies actually depict the anomalies and not the norm, as most companies fail — for every Apple that became a success there were 1,000 other startups that died trying. So to build an automated data-driven investment strategy you would need samples from both successful and unsuccessful companies. The figure above (Figure 13) is a concrete example of sampling bias. Say you want to predict whether a tornado is going to originate at a certain location based on two environmental conditions: wind shear and convective available potential energy (CAPE).
We don't have to worry about what these variables actually mean, but Figure 13 shows the wind shear and CAPE associated with 242 tornado cases. We can fit a model to these data, but it will certainly not generalize, because we failed to include shear and CAPE values for cases when tornados did not occur. In order for our model to separate between positive (tornado) and negative (no tornado) events, we must train it using both populations. There you have it. Being mindful of these limitations does not guarantee that your ML algorithm will solve all your problems, but it certainly reduces the risk of being disappointed when your model doesn't generalize to unseen data. Now go on young Jedi: train your model, you must!
Datafiniti
3
5
https://blog.datafiniti.co/classifying-websites-with-neural-networks-39123a464055?source=tag_archive---------1----------------
Classifying Websites with Neural Networks – Knowledge from Data: The Datafiniti Blog
At Datafiniti, we have a strong need for converting unstructured web content into structured data. For example, we'd like to find a page like this and do the following: Both of these are hard things for a computer to do in an automated manner. While it's easy for you or me to realize that the above web page is selling some jeans, a computer would have a hard time distinguishing the above page from either of the following web pages. Both of these pages share many similarities with the actual product page, but also have many key differences. The real challenge, though, is that if we look at the entire set of possible web pages, those similarities and differences become somewhat blurred, which means hard and fast rules for classification will fail often. In fact, we can't even rely on just looking at the underlying HTML, since there are huge variations in how product pages are laid out in HTML. While we could try to develop a complicated set of rules to account for all the conditions that perfectly identify a product page, doing so would be extremely time consuming and, frankly, incredibly boring work. Instead, we can try using a classical technique out of the artificial intelligence handbook: neural networks.
Here's a quick primer on neural networks. Let's say we want to know whether any particular mushroom is poisonous or not. We're not entirely sure what determines this, but we do have a record of mushrooms with their diameters and heights, along with which of these mushrooms were known to be poisonous to eat. In order to see if we could use diameter and height to determine poisonousness, we could set up the following equation: A * (diameter) + B * (height) = 0 or 1 for not-poisonous / poisonous. We would then try various combinations of A and B for all possible diameters and heights until we found a combination that correctly determined poisonousness for as many mushrooms as possible. Neural networks provide a structure for using the output from one set of input data to adjust A and B to the most likely best values for the next set of input data. By constantly adjusting A and B this way, we can quickly get to the best possible values for them. In order to introduce more complex relationships in our data, we can introduce "hidden" layers in this model, which would end up looking something like this: For a more detailed explanation of neural networks, you can check out the following links:
In our product page classifier algorithm, we set up a neural network with 1 input layer with 27 nodes, 1 hidden layer with 25 nodes, and 1 output layer with 3 output nodes. Our input layer modeled several features, including: Our output layer had the following: Our algorithm for the neural network took the following steps: The ultimate output is two sets of learned parameters (T1 and T2) that we can use in a matrix equation to predict the page type for any given web page. This works like so: So how did we do? In order to determine how successful we were in our predictions, we need to decide how to measure success. In general, we want to measure how many true positive (TP) results we get as compared to false positives (FP) and false negatives (FN). Conventional measurements for these are: Our implementation had the following results: These scores are just over our training set, of course. The actual scores on real-life data may be a bit lower, but not by much. This is pretty good! We should have an algorithm on our hands that can accurately classify product pages about 90% of the time.
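The post doesn't share its training code or list the 27 input features, but the architecture it describes (27 inputs, one hidden layer of 25 units, 3 page-type outputs) is easy to sketch. Below is a minimal, hypothetical version using scikit-learn, with random placeholder data standing in for the real feature vectors and page labels:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_recall_fscore_support

N_FEATURES = 27                                # the post specifies 27 input nodes
PAGE_TYPES = ["product", "category", "other"]  # hypothetical labels for the 3 outputs

# Placeholder data standing in for real (feature vector, page type) pairs;
# the actual feature extraction used by Datafiniti is not described in the post.
rng = np.random.default_rng(0)
X = rng.random((500, N_FEATURES))
y = rng.integers(0, len(PAGE_TYPES), 500)

# One hidden layer of 25 units, mirroring the 27-25-3 architecture described.
clf = MLPClassifier(hidden_layer_sizes=(25,), max_iter=1000, random_state=0)
clf.fit(X[:400], y[:400])                      # train on 400 pages, hold out 100

precision, recall, f1, _ = precision_recall_fscore_support(
    y[400:], clf.predict(X[400:]), average="macro", zero_division=0)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```

With random placeholder data the scores are meaningless; the point is the shape of the pipeline: page features in, a 25-unit hidden layer, three page-type outputs, and true positives weighed against false positives and false negatives on held-out pages.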
Of course, identifying product pages isn't enough. We also want to pull out the actual structured data! In particular, we're interested in the product name, price, and any unique identifiers (e.g., UPC, EAN, & ISBN). This information would help us fill out our product search. We don't actually use neural networks for doing this. Neural networks are better suited to classification problems, and extracting data from a web page is a different type of problem. Instead, we use a variety of heuristics specific to each attribute we're trying to extract. For example, for the product name, we look at the <h1> and <h2> tags and use a few metrics to determine the best choice. We've been able to achieve around 80% accuracy here. We may go into the actual metrics and methodology for developing them in a separate post! We feel pretty good about our ability to classify and extract product data. The extraction part could be better, but it's steadily being improved. In the meantime, we're also working on classifying other types of pages, such as business data, company team pages, event data, and more. As we roll out these classifiers and data extractors, we're including each one in our crawl of the entire Internet. This means that we can scan the entire Internet and pull out any available data that exists out there. Exciting stuff! You can connect with us and learn more about our business, people, product, and property APIs and datasets.
Theo
3
4
https://becominghuman.ai/is-there-a-future-for-innovation-18b4d5ab168f?source=tag_archive---------4----------------
Is there a future for innovation ? – Becoming Human: Artificial Intelligence Magazine
Have you noticed how tech-savvy children have become, but how they are no longer streetwise? I read a friend's thoughts on his own site last week and there was a slight pang of regret in where technology and innovation seem to be leading us all. And so I started to worry about where the concept of innovation is going for future generations. There's an increasing reliance on technology for the sake of convenience; children are becoming self-reliant too quickly, but gadgets are replacing people as the mentor. The human bonding of parenthood is a prime example of where it's taking a toll. I've seen parents hand over iDevices to pacify a child numerous times now; the lullaby and bedtime reading session has been replaced with Cut The Rope and automated storybook apps. I know a child who has developed speech difficulty because he's been brought up on cable TV and a DS Lite, pronouncing words as he has heard them from a tiny speaker and not by watching how his parents pronounce them. And I started to worry about how the concept of innovation is being redefined for future generations. I used my imagination constantly as a child, and it's still as active now as it was then, but I didn't use technology to spoon-feed me. The next generation expect innovation to happen at their fingertips with little to no real stimuli. Steve Jobs said "stay hungry, stay foolish" and he was right. Innovation comes from a keenness; it's a starvation and hunger that drives people forward to spark and create; it comes from grabbing what little there is from the ether and turning it into something spectacular. It's the Big Bang of human thought creation. And I started to worry about what the concept of innovation means for future generations. Technology is taking away the power to think for ourselves and from our children. Everything must be there, in real time, for instant consumption. It's junk food for the mind and we're getting fat on it. And that breeds lazy innovation. We've become satiated before we reach the point of real creativity; nobody wants to bother taking the time to put it all together themselves any more; it has to be ready for us. And we're happy to throw it away if it doesn't work the first time: use it or lose it; there's less sweat and toil involved if we don't persevere with failure. Remember seeing the human race depicted in Wall-E? That's where innovation is heading. And because of this we risk so many things disappearing for the sake of convenience. We're all guilty of it; I'm guilty of it. I was asked once what would become absurd in ten years. Thinking about it, I realized we're on the cusp of putting books on the endangered species list. Real books, books bound in hardback and paperback, not digital copies from a Kindle store. And that scared me, because the next generation of kids may grow up never seeing one, or experience sitting with their father as he reads an old battered copy of The Hobbit, because he'll be sitting there handing over an iPad with The Hobbit read-along app teed up, and it'll be an actor's voice, not his father's voice, pretending to be a bunch of trolls about to eat a company of dwarfs. Innovation is a magical, crazy concept. It stems from a combination of crazy imagination, human interaction and creativity, not convenient manufacture. Technology can aid collaboration in ways we've never experienced before, but it can't run crazy for us. And for the sake of future generations, don't let it. Here's to the crazy ones indeed.
x.ai
1
2
https://medium.com/@xdotai/i-scheduled-1-019-meetings-in-2012-and-that-doesnt-count-reschedules-x-ai-278d7e824eb3?source=tag_archive---------5----------------
I scheduled 1,019 meetings in 2012 — and that doesn’t count reschedules — x.ai
The number of meetings that I scheduled in 2012 might seem astronomical. Put in context, it's less so. I was a startup founder at the time, and that year my company, Visual Revenue, took Series A funding, doubled revenue, and started discussing a possible exit. I like the number though! As a startup romantic, one could turn it into a nifty Malcolm Gladwell-type rule of thumb called the "1,000 meetings rule." Gladwell's claim that greatness requires an enormous time sacrifice rings true to me — whether that means investing 10,000 hours in a subject matter to become an expert or conducting 1,000 meetings per year is another question. More interesting, though, is the impact of this 1,019 figure, and a related one: of those more than one thousand meetings I scheduled, 672 were rescheduled. That was painful. But these numbers were among the early pieces of data that inspired me to start x.ai. * A meeting is defined as an event in my calendar, which is marginally flawed in both directions, given that some events would be "Travel to JFK", which is obviously a task and not a meeting, while others would be "Interview Sales Director Candidates", which is really 4 meetings in 1. Originally published at x.ai on October 14, 2013.
Arjan Haring 🔮🔨
1
5
https://medium.com/@happybandits/website-morphing-and-more-revolutions-in-marketing-8a5cabc60576?source=tag_archive---------6----------------
Website morphing and more revolutions in marketing – Arjan Haring 🔮🔨 – Medium
John R. Hauser is the Kirin Professor of Marketing at M.I.T.'s Sloan School of Management, where he teaches new product development, marketing management, and statistical and research methodology. He has served MIT as Head of the MIT Marketing Group, Head of the Management Science Area, Research Director of the Center for Innovation in Product Development, and co-Director of the International Center for Research on the Management of Technology. He is the co-author of two textbooks, Design and Marketing of New Products and Essentials of New Product Management, and a former editor of Marketing Science (now on the advisory board).
I think it wouldn't be smart to start this interview with something as dull and complex as a definition. Or am I the only one who likes to read lightweight and short articles? Let's just get it over with.
"Website morphing matches the look and feel of a website to each customer so that, over a series of customers, revenue or profit are maximized."
That didn't hurt as much as I expected. I actually love the idea. It sounds completely logical. But are we talking about a completely new idea?
There is tremendous variety in the way customers process and use information: some prefer simple recommendations while others like to dig into the details. Some customers think verbally or holistically, others prefer pictures and graphs. What is new is that we now have good algorithms to identify how customers think from the choices they make as they explore websites (their clickstream). But once we identify the way they think, we still need an automatic way to learn which website look and feel will lead to the most sales. This is a very complex problem which, fortunately, has a relatively simple solution based on fundamental research by John Gittins. Our contribution was to combine the identification algorithms with the learning algorithms and develop an automated system that was feasible and practical. Once we developed the technology to morph websites, we were only limited by our imaginations. In our first application we matched the look and feel to customers' cognitive styles. In our second, we matched to cognitive and cultural styles. We then used the algorithms to morph banner advertisements to achieve almost a 100% lift in click-through rates. Our latest project used both cognitive styles and the customer's search history to match automotive banner advertising to enhance clicks, consideration, and purchase likelihood.
I also love the fact that you combine technology with behavioral science. On the psychology side of things you are, or have been, busy with cognitive styles, cognitive switching and cognitive simplicity. Can you tell us a little bit more about these theories and why you chose to use them?
Customers are smart. They know when to use simple decision rules (cognitive simplicity) and when to use more complicated rules. Our research has been two-fold. (1) Website morphing and banner morphing figure out how customers think and provide information in the format that helps them think the way they prefer to think. (2) We have also focused on identifying consideration heuristics. Typically, customers seriously consider only a small fraction of available products. To do so they use simple rules that balance thinking (and search) costs with the completeness of information. By understanding these simple rules, managers can develop better products and better marketing strategies. We can now identify these decision rules quickly with machine-learning methods.
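The morphing system Hauser describes above rests on index policies from the multi-armed-bandit literature (the Gittins line of research he mentions). The sketch below is not his system; it uses Thompson sampling, a simpler bandit strategy, as a stand-in to show the core loop: serve a website morph, observe whether the visitor converts, and update the belief about each morph's conversion rate. The morph names and rates are made up for illustration.

```python
import random

MORPHS = ["visual-holistic", "verbal-analytic", "minimal"]  # hypothetical looks and feels

# Beta(1, 1) prior over each morph's conversion rate: [successes + 1, failures + 1]
beliefs = {m: [1, 1] for m in MORPHS}

def choose_morph():
    """Thompson sampling: draw a plausible conversion rate per morph, pick the best."""
    return max(MORPHS, key=lambda m: random.betavariate(*beliefs[m]))

def record_outcome(morph, converted):
    beliefs[morph][0 if converted else 1] += 1

# Simulated visitors whose true (unknown) conversion rates differ by morph.
TRUE_RATES = {"visual-holistic": 0.12, "verbal-analytic": 0.08, "minimal": 0.05}
for _ in range(5000):
    m = choose_morph()
    record_outcome(m, random.random() < TRUE_RATES[m])

print({m: sum(b) - 2 for m, b in beliefs.items()})  # traffic allocated to each morph
```

Over time most traffic flows to the best-performing morph while the others still get occasional exploration. A real morphing system would additionally infer each visitor's cognitive style from the clickstream and keep separate beliefs per style segment, and Hauser's papers use Gittins-style index computations rather than sampling, but the explore-exploit structure is the same.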
But a caveat — customers do not always use cognitively simple rules. The "moment of truth" in a final purchase decision is best understood with more complex decision rules and methods such as choice-based conjoint analysis. Most recently we've combined the two streams of research. Curiously, some of the algorithms used by the computer to morph websites are reasonably descriptive of how consumers take the future into account in purchases they make today. Prior research postulated a form of hyperrationality. Our research suggests that consumers are pretty smart about balancing cognitive costs and foresight.
What are your main interests on the technology side of website morphing? Which algorithms take your fancy and why?
Website morphing uses an "index" solution to learn the best morph for a customer. Our latest efforts also identify when to morph a website by embedding another "dynamic program" within the index solution. In our research to understand how consumers deal with the future, we've demonstrated that indices other than Gittins' index might be more descriptive of consumer foresight.
If I think about it, as a company you can either win the algorithm competition or the psychology competition. Or lose. Do you agree?
Actually, the companies that will thrive are those that understand the customers' cognitive processes, have the algorithms to match products and marketing to customers' cognitive processes, and have the organization that accepts such innovation. You need all three.
Is this what marketing will be about in 5 years?
There are many revolutions in marketing. It is an exciting time. It's hard to list all of the changes, but here are a few. (1) Big data. We know so much more about customers than we ever did before, but this knowledge is often hidden within the volume of data. One challenge is to develop methods that scale well to big data. (2) Machine learning. There are some problems that humans solve better than computers and some problems that computers solve better than humans. Morphing, identifying simple decision rules, and studying consumer foresight are all possible with the advent of good machine-learning methods. But we have only scratched the surface. (3) Causality. Marketing has quite successfully used small-sample laboratory experiments and assumption-laden quantitative models. However, the advent of big data and web-based data collection has made it possible to do experiments and quasi-experiments on a large scale to better establish causality and to better develop theories that are externally valid. Causality also means replication. There is a strong movement in the journals to require that key findings be replicated. (4) The TPM movement (theory + practice in marketing). Conferences, special issues, and organizations are now devoted to matching managerial needs to research with impact. In fact, a recent survey by the INFORMS Society for Marketing Science suggests that approximately 80% of the researchers in marketing believe that research should be more focused on applications. (5) A maturing perspective on behavioral science. Researchers are increasingly less focused on "cute" findings that apply only in special circumstances. They are beginning to focus on insights that have a big impact (effect size) and apply to decisions that customers make routinely. Companies that combine algorithms, an understanding of customer decision-making, and the ability to use data will be the companies that succeed.
Originally published at www.sciencerockstars.com on October 21, 2013.
Arjan Haring 🔮🔨
1
5
https://medium.com/i-love-experiments/using-artificial-intelligence-to-balance-out-customer-value-a251b0ccae6f?source=tag_archive---------8----------------
Using Artificial Intelligence to Balance Out Customer Value
December 13 it was that time again: the second edition of #projectwaalhalla, Social Sciences for Startups, this time with Peter van der Putten speaking as a data scientist. He is a guest researcher at the Data Mining Group (algorithms research cluster) of the Leiden Institute of Advanced Computer Science, and also director of decisioning solutions worldwide at Pegasystems. There is, according to Peter, a lot of potential for new startups in this area. Are you going to be the next success story?
I am actually very curious what you, as a leading data scientist, think of this whole big data thingy.
I am fascinated by learning from data, but have mixed feelings about big data. The concept is being hyped a lot at this moment, while the algorithms to learn from data have been studied in computer science since the 40s. Many of the "modern" big data technologies like Hadoop are in fact still limited frameworks for old-fashioned, offline batch-processed data, instead of real-time processed data. The focus really shouldn't be on the data, but on the analysis — how we generate knowledge and learn from data through data mining — and, more importantly, on how we operationalize this knowledge, how we can use it. Because: "Knowledge is not power, action is."
And what is the role of psychology in big data? And of philosophy?
Psychology began studying intelligence fifty or sixty years before computer science did. People, animals, plants and all intelligent systems are basically information-processing machinery. Psychology seeks to understand these systems and tries to explain behavior — if you understand a bit of that system, you can use this knowledge, for example to teach computers, stupid mathematical pieces of scrap, smarter functions such as learning and responding to customer behavior. Which is to say, thinking the other way around, that people don't have to think like computers. See for example the psychologist Daniel Kahneman, who won the 2002 Sveriges Riksbank Prize in Economic Sciences, the unofficial Nobel Prize in economics, for his insight that people aren't rational agents that properly weigh all the choices before deciding something. And philosophy? These guys have dealt with big data for more than 3,000 years now. Just think of the nature vs. nurture debate: do we acquire intelligence and other properties through (data) experience or are they innate? Or the whole philosophy-of-mind discussion, with roots in the ancient Greeks: what do we really know? Is there only experience, or is there a reality?
You have a background in artificial intelligence (AI) and even studied with the famous and wildly attractive Bas Haring (not related... well, cousin to be honest. If you insist). What could AI mean for business, and how is it different from big data?
As long as AI is not used for old-fashioned data manipulation or poor reporting, but really as intelligent data science, big data is one of the tools within "learning" artificial intelligence. That is, systems that are not smart because of knowledge that is pre-inserted, but which have the capacity to learn and to combine what is learned with background knowledge to deduce decisions. This is what I like to call the field of "decisioning". Really intelligent systems put that knowledge into action and are part of an ecosystem, an environment with other actors, systems, people, and the scary outside world. Sounds abstract?
Until the late 90s artificial intelligence was only done in the lab; now people interact with AI, unconsciously, on a daily basis, for example when they use Google, check their Facebook page or look at banners on the web. Take the company where I work next to my academic job: when I came in 2002 it was a startup of only 15 people, with new software and a launching customer [editor's note: we know that feeling ;)]. Ten years and two acquisitions later, we have reached more than 1 billion consumers with intelligent, data-driven, scientifically proven, real-time recommendations via digital as well as traditional channels like ATMs, shops and call centers. No push product offerings anymore, but only "next best action" recommendations that optimize customer value by balancing customer experience and predicted interests and behavior.
What opportunities do you see for startups in artificial intelligence in this area?
Well, I see tremendous opportunities, not only for 100% pure AI startups, but for all startups. If you look at the startups in Silicon Valley in high-tech and biotech, artificial intelligence is a major part of the business. Every startup should consider whether data is a key asset or a barrier to entry, and how AI or data mining can be used to convert these data into money. I do have to note that customers and citizens, rightly so, are getting more critical after all the NSA issues. Those who can use this technology in a way that benefits not only companies, but especially customers, will be the most successful.
In conclusion, I am curious how much you are looking forward to December 13, and what should happen during #projectwaalhalla to make your wildest dreams come true.
Very much looking forward to it! In terms of wildest dreams: I heard a reunion concert of the Urban Dance Squad is not going to happen, which I understand, but I look forward to exchanging views with startups, freelancers and multinationals on how to create value with the help of raw data diamonds and a magical mix of data mining, machine learning, decisioning and evidence-based, real-time marketing. I will bring some nice metaphorical pictures and will leave the double integrals at home. [Editor's note: A UDS reunion? Sounds like a plan to us]
Originally published at www.sciencerockstars.com on December 6, 2013.
Shivon Zilis
1.2K
10
https://medium.com/@shivon/the-current-state-of-machine-intelligence-f76c20db2fe1?source=tag_archive---------0----------------
The Current State of Machine Intelligence – Shivon Zilis – Medium
(The 2016 Machine Intelligence landscape and post can be found here) I spent the last three months learning about every artificial intelligence, machine learning, or data related startup I could find — my current list has 2,529 of them to be exact. Yes, I should find better things to do with my evenings and weekends but until then... Why do this? A few years ago, investors and startups were chasing “big data” (I helped put together a landscape on that industry). Now we’re seeing a similar explosion of companies calling themselves artificial intelligence, machine learning, or somesuch — collectively I call these “machine intelligence” (I’ll get into the definitions in a second). Our fund, Bloomberg Beta, which is focused on the future of work, has been investing in these approaches. I created this landscape to start to put startups into context. I’m a thesis-oriented investor and it’s much easier to identify crowded areas and see white space once the landscape has some sort of taxonomy. What is “machine intelligence,” anyway? I mean “machine intelligence” as a unifying term for what others call machine learning and artificial intelligence. (Some others have used the term before, without quite describing it or understanding how laden this field has been with debates over descriptions.) I would have preferred to avoid a different label but when I tried either “artificial intelligence” or “machine learning” both proved to too narrow: when I called it “artificial intelligence” too many people were distracted by whether certain companies were “true AI,” and when I called it “machine learning,” many thought I wasn’t doing justice to the more “AI-esque” like the various flavors of deep learning. People have immediately grasped “machine intelligence” so here we are. ☺ Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have been of a lesser focus). Machine intelligence technologies cut across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks). All of these technologies are reflected on this landscape. What this landscape doesn’t include, however important, is “big data” technologies. Some have used this term interchangeably with machine learning and artificial intelligence, but I want to focus on the intelligence methods rather than data, storage, and computation pieces of the puzzle for this landscape (though of course data technologies enable machine intelligence). Which companies are on the landscape? I considered thousands of companies, so while the chart is crowded it’s still a small subset of the overall ecosystem. “Admissions rates” to the chart were fairly in line with those of Yale or Harvard, and perhaps equally arbitrary. ☺ I tried to pick companies that used machine intelligence methods as a defining part of their technology. Many of these companies clearly belong in multiple areas but for the sake of simplicity I tried to keep companies in their primary area and categorized them by the language they use to describe themselves (instead of quibbling over whether a company used “NLP” accurately in its self-description). If you want to get a sense for innovations at the heart of machine intelligence, focus on the core technologies layer. 
Some of these companies have APIs that power other applications, some sell their platforms directly into enterprise, some are at the stage of cryptic demos, and some are so stealthy that all we have is a few sentences to describe them. The most exciting part for me was seeing how much is happening in the application space. These companies separated nicely into those that reinvent the enterprise, industries, and ourselves. If I were looking to build a company right now, I’d use this landscape to help figure out what core and supporting technologies I could package into a novel industry application. Everyone likes solving the sexy problems but there are an incredible amount of ‘unsexy’ industry use cases that have massive market opportunities and powerful enabling technologies that are begging to be used for creative applications (e.g., Watson Developer Cloud, AlchemyAPI). Reflections on the landscape: We’ve seen a few great articles recently outlining why machine intelligence is experiencing a resurgence, documenting the enabling factors of this resurgence. (Kevin Kelly, for example chalks it up to cheap parallel computing, large datasets, and better algorithms.) I focused on understanding the ecosystem on a company-by-company level and drawing implications from that. Yes, it’s true, machine intelligence is transforming the enterprise, industries and humans alike. On a high level it’s easy to understand why machine intelligence is important, but it wasn’t until I laid out what many of these companies are actually doing that I started to grok how much it is already transforming everything around us. As Kevin Kelly more provocatively put it, “the business plans of the next 10,000 startups are easy to forecast: Take X and add AI”. In many cases you don’t even need the X — machine intelligence will certainly transform existing industries, but will also likely create entirely new ones. Machine intelligence is enabling applications we already expect like automated assistants (Siri), adorable robots (Jibo), and identifying people in images (like the highly effective but unfortunately named DeepFace). However, it’s also doing the unexpected: protecting children from sex trafficking, reducing the chemical content in the lettuce we eat, helping us buy shoes online that fit our feet precisely, and destroying 80's classic video games. Many companies will be acquired. I was surprised to find that over 10% of the eligible (non-public) companies on the slide have been acquired. It was in stark contrast to big data landscape we created, which had very few acquisitions at the time. No jaw will drop when I reveal that Google is the number one acquirer, though there were more than 15 different acquirers just for the companies on this chart. My guess is that by the end of 2015 almost another 10% will be acquired. For thoughts on which specific ones will get snapped up in the next year you’ll have to twist my arm... Big companies have a disproportionate advantage, especially those that build consumer products. The giants in search (Google, Baidu), social networks (Facebook, LinkedIn, Pinterest), content (Netflix, Yahoo!), mobile (Apple) and e-commerce (Amazon) are in an incredible position. They have massive datasets and constant consumer interactions that enable tight feedback loops for their algorithms (and these factors combine to create powerful network effects) — and they have the most to gain from the low hanging fruit that machine intelligence bears. 
Best-in-class personalization and recommendation algorithms have enabled these companies’ success (it’s both impressive and disconcerting that Facebook recommends you add the person you had a crush on in college and Netflix tees up that perfect guilty pleasure sitcom). Now they are all competing in a new battlefield: the move to mobile. Winning mobile will require lots of machine intelligence: state of the art natural language interfaces (like Apple’s Siri), visual search (like Amazon’s “FireFly”), and dynamic question answering technology that tells you the answer instead of providing a menu of links (all of the search companies are wrestling with this).Large enterprise companies (IBM and Microsoft) have also made incredible strides in the field, though they don’t have the same human-facing requirements so are focusing their attention more on knowledge representation tasks on large industry datasets, like IBM Watson’s application to assist doctors with diagnoses. The talent’s in the New (AI)vy League. In the last 20 years, most of the best minds in machine intelligence (especially the ‘hardcore AI’ types) worked in academia. They developed new machine intelligence methods, but there were few real world applications that could drive business value. Now that real world applications of more complex machine intelligence methods like deep belief nets and hierarchical neural networks are starting to solve real world problems, we’re seeing academic talent move to corporate settings. Facebook recruited NYU professors Yann LeCun and Rob Fergus to their AI Lab, Google hired University of Toronto’s Geoffrey Hinton, Baidu wooed Andrew Ng. It’s important to note that they all still give back significantly to the academic community (one of LeCun’s lab mandates is to work on core research to give back to the community, Hinton spends half of his time teaching, Ng has made machine intelligence more accessible through Coursera) but it is clear that a lot of the intellectual horsepower is moving away from academia. For aspiring minds in the space, these corporate labs not only offer lucrative salaries and access to the “godfathers” of the industry, but, the most important ingredient: data. These labs offer talent access to datasets they could never get otherwise (the ImageNet dataset is fantastic, but can’t compare to what Facebook, Google, and Baidu have in house). As a result, we’ll likely see corporations become the home of many of the most important innovations in machine intelligence and recruit many of the graduate students and postdocs that would have otherwise stayed in academia. There will be a peace dividend. Big companies have an inherent advantage and it’s likely that the ones who will win the machine intelligence race will be even more powerful than they are today. However, the good news for the rest of the world is that the core technology they develop will rapidly spill into other areas, both via departing talent and published research. Similar to the big data revolution, which was sparked by the release of Google’s BigTable and BigQuery papers, we will see corporations release equally groundbreaking new technologies into the community. Those innovations will be adapted to new industries and use cases that the Googles of the world don’t have the DNA or desire to tackle. Opportunities for entrepreneurs: “My company does deep learning for X” Few words will make you more popular in 2015. That is, if you can credibly say them. 
Deep learning is a particularly popular method in the machine intelligence field that has been getting a lot of attention. Google, Facebook, and Baidu have achieved excellent results with the method for vision and language based tasks and startups like Enlitic have shown promising results as well. Yes, it will be an overused buzzword with excitement ahead of results and business models, but unlike the hundreds of companies that say they do “big data”, it’s much easier to cut to the chase in terms of verifying credibility here if you’re paying attention. The most exciting part about the deep learning method is that when applied with the appropriate levels of care and feeding, it can replace some of the intuition that comes from domain expertise with automatically-learned features. The hope is that, in many cases, it will allow us to fundamentally rethink what a best-in-class solution is. As an investor who is curious about the quirkier applications of data and machine intelligence, I can’t wait to see what creative problems deep learning practitioners try to solve. I completely agree with Jeff Hawkins when he says a lot of the killer applications of these types of technologies will sneak up on us. I fully intend to keep an open mind. “Acquihire as a business model” People say that data scientists are unicorns in short supply. The talent crunch in machine intelligence will make it look like we had a glut of data scientists. In the data field, many people had industry experience over the past decade. Most hardcore machine intelligence work has only been in academia. We won’t be able to grow this talent overnight. This shortage of talent is a boon for founders who actually understand machine intelligence. A lot of companies in the space will get seed funding because there are early signs that the acquihire price for a machine intelligence expert is north of 5x that of a normal technical acquihire (take, for example Deep Mind, where price per technical head was somewhere between $5–10M, if we choose to consider it in the acquihire category). I’ve had multiple friends ask me, only semi-jokingly, “Shivon, should I just round up all of my smartest friends in the AI world and call it a company?” To be honest, I’m not sure what to tell them. (At Bloomberg Beta, we’d rather back companies building for the long term, but that doesn’t mean this won’t be a lucrative strategy for many enterprising founders.) A good demo is disproportionately valuable in machine intelligence I remember watching Watson play Jeopardy. When it struggled at the beginning I felt really sad for it. When it started trouncing its competitors I remember cheering it on as if it were the Toronto Maple Leafs in the Stanley Cup finals (disclaimers: (1) I was an IBMer at the time so was biased towards my team (2) the Maple Leafs have not made the finals during my lifetime — yet — so that was purely a hypothetical). Why do these awe-inspiring demos matter? The last wave of technology companies to IPO didn’t have demos that most of us would watch, so why should machine intelligence companies? The last wave of companies were very computer-like: database companies, enterprise applications, and the like. Sure, I’d like to see a 10x more performant database, but most people wouldn’t care. 
Machine intelligence wins and loses on demos because 1) the technology is very human, enough to inspire shock and awe, 2) business models tend to take a while to form, so they need more funding for longer period of time to get them there, 3) they are fantastic acquisition bait. Watson beat the world’s best humans at trivia, even if it thought Toronto was a US city. DeepMind blew people away by beating video games. Vicarious took on CAPTCHA. There are a few companies still in stealth that promise to impress beyond that, and I can’t wait to see if they get there. Demo or not, I’d love to talk to anyone using machine intelligence to change the world. There’s no industry too unsexy, no problem too geeky. I’d love to be there to help so don’t be shy. I hope this landscape chart sparks a conversation. The goal to is make this a living document and I want to know if there are companies or categories missing. I welcome feedback and would like to put together a dynamic visualization where I can add more companies and dimensions to the data (methods used, data types, end users, investment to date, location, etc.) so that folks can interact with it to better explore the space. Questions and comments: Please email me. Thank you to Andrew Paprocki, Aria Haghighi, Beau Cronin, Ben Lorica, Doug Fulop, David Andrzejewski, Eric Berlow, Eric Jonas, Gary Kazantsev, Gideon Mann, Greg Smithies, Heidi Skinner, Jack Clark, Jon Lehr, Kurt Keutzer, Lauren Barless, Pete Skomoroch, Pete Warden, Roger Magoulas, Sean Gourley, Stephen Purpura, Wes McKinney, Zach Bogue, the Quid team, and the Bloomberg Beta team for your ever-helpful perspectives! Disclaimer: Bloomberg Beta is an investor in Adatao, Alation, Aviso, BrightFunnel, Context Relevant, Mavrx, Newsle, Orbital Insights, Pop Up Archive, and two others on the chart that are still undisclosed. We’re also investors in a few other machine intelligence companies that aren’t focusing on areas that were a fit for this landscape, so we left them off. For the full resolution version of the landscape please click here. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Partner at Bloomberg Beta. All about machine intelligence for good. Equal parts nerd and athlete. Straight up Canadian stereotype and proud of it.
Roland Trimmel
20
6
https://medium.com/@rolandt25/will-all-musicians-become-robots-6221171c5d18?source=tag_archive---------1----------------
Will All Musicians Become Robots? – Roland Trimmel – Medium
Finally we see the rise of the machines, and with it a certain fear that artificial intelligence (AI) will render humans useless. This question was posed at Boston's A3E Conference last month by a team member at Landr. Their company had received death threats from people in the mastering industry after releasing a DIY drag-and-drop instant online mastering service powered by AI algorithms. It illustrates the resistance that the world of AI has incited amongst us. Some fear that robots will take over à la Terminator 2. Some fear that the virtual and artificial will replace the visceral. Some cite religious views, and others? Frankly, others just seem ignorant. That sets the tone for our own journey into artificial intelligence, and the lessons we have learned from it. We had spent more than three years developing algorithms that enable software to read and interpret a composition (a song) the way an expert does. Coming from a music and technology background, our team was hugely excited to have accomplished this. Make no mistake, it's really difficult to make a computer understand music — for us, this was an important first step towards a new generation of intelligent music instruments that assist the user in the songwriting process: faster completion of complex tasks, no interruption of the creative flow, and more creative output. When you spend so many years working on a technology and product, you run the risk of losing sight of the market. And this being our first product, we had absolutely no idea what to expect. To find out, we had to bring the product to the attention of the target group and eagerly await their reaction. That meant a lot of legwork for us, starting discussions on multiple forums and collecting users' feedback. It takes time to cut through the noise, but it creates some great threads. What was interesting for us to monitor was how the discussions about our product unfolded on those forums, and how opinions were split between two camps: one that embraced what we do, and another characterized by anger, fear, or a complete misunderstanding of what our software does. At times we felt like we were caught in the middle of the fight between machines and humans. We hadn't expected this; our aim was to make a cool product that shows what the technology is capable of doing. Eventually, we spent lots of time clearing up misunderstandings, explaining our product better, and so on, to win forum members' hearts for what we do. Occasionally we also had to calm down heated discussions between members insulting each other, driven by a fear that our product eliminates the craft in music composition. Today: enter a different reality. We have made a lot of progress with our software, and much of it is down to communicating openly with our community, addressing any questions they have early, and involving them deeply in product development. Has the tone of discussions about our technology changed? Yes, certainly it has. But please don't think it's an easy journey. It's still hard to convince music producers to rely on the help of a piece of software that, in some regard, replicates processes of the human brain. The effort that goes into being a pioneer and fighting this battle over perception drives one close to insanity. It's an endless stream of work, and it requires the kind of endurance you need for a marathon or triathlon. Here are five things we learned from our journey that we'd like to share with you, so you can judge for yourself before dismissing AI in music. 
Let me start with a quick discussion of the first and second digital waves in music. The first digital wave brought about digital music technology like synths and DAWs. And with that, everything changed. Sound synthesis and sampling made entirely new forms of expressiveness possible. Sequencers, in combination with large databases of looping clips, laid the foundation for electronic dance music, which led to a multifaceted artistic and cultural revolution. The second digital wave has been rolling along for a few years now, and it is washing up intelligent algorithms for processing audio and MIDI. As an example, AIs can already help control the final mastering process of music tracks, as assistant tools or even fully automated. In the not too distant future — and we're talking only years from now — we will be used to incredible music-making automatons controlling the most complex harmonic figures and flawlessly imitating the greatest artists. The output quality of such algorithms is unbelievable. Computer intelligence can aimlessly merge the styles of various artists and apply them to yet another piece, all without breaking a sweat. We regard the main application of AIs in music composition and production as helper tools, not artists in their own right. And this is not cheating. We have been using digital production tools for decades; it was just a matter of time before more complicated and intelligent code emerged. But rest assured, computers will not generate music all by themselves. The art and craft of composing will prevail. There will always be human beings behind the actual output controlled by an AI. It will help, though, to create leaner, less complex user interfaces in the tools we use for creating music, interfaces that are simpler to operate. Now, on to the lessons we promised you. Definitely not. The magic and the final decision over creative output will always remain with the (human) artist. A computer is not a human with feelings and emotions. What brings us to our knees in awe will leave machines clinically indifferent. Simple as that. And technical approximations, as deceptive as they may get, are simply not the real thing. It already is. There is no stopping it. But then that is the course of a natural evolutionary process, which can only push forward. A huge one. This is a game changer! Read our statement on main applications above. It is our egos we cling to, having trotted down the same paths for decades. Many believe their laboriously acquired expertise is threatened by robot technology and a ruthless new generation. The truth is that if we embrace AIs as our helping friends, and maybe even learn to think a little more technically, who can fathom how much more colorful the world of music will become in the hands of talented musicians of all generations. Yes, because it enables a completely new generation of products, and startups like us push for innovation. The agreeable side effect: it will make people happy — musicians, consumers and businessmen alike, full circle. Most importantly, though, it is not only AI that is changing the music industry. Social changes are equally responsible, if they don't account for an even larger part of it. Here's an excellent article by Fast Company on this topic, and more coverage of A3E in this article by TechRepublic. It's an interesting time for all of us in music and beyond, and there's so much yet to come. Don't be afraid — humans also prevailed in Terminator: "There are things machines will never do. 
They cannot possess faith, they cannot commune with God, they cannot appreciate beauty, they cannot create art. If they ever learn these things, they won’t have to destroy us. They’ll be us.” -Sarah Connor. Image credit: Daft Punk (top), Re-Compose (middle) From a quick cheer to a standing ovation, clap to show how much you enjoyed this story.
Espen Waldal
57
6
https://medium.com/bakken-b%C3%A6ck/how-artificial-intelligence-can-improve-online-news-7a24889a6940?source=tag_archive---------2----------------
How Artificial Intelligence can improve online news
That being said, the user experience of online news sites today is very much like it was ten or fifteen years ago (see the slideshow showing the evolution of NYT.com). You enter a homepage where a carefully selected combination of articles on sports, celebrity reality shows, dinner recipes and even actual news scream for your attention. There's a huge focus on page views, and hardly any attention given to personal relevance for the reader. Smart use of technology could vastly improve the online news experience just by adding a bit more structure. That is why we created Orbit. Rich structured data is the foundation for taking the online news experience to the next level. Orbit is a collection of artificial intelligence technology APIs that use machine learning-based content analysis to automatically transform unstructured text into rich structured data. By analyzing and organizing content in real time and automatically tagging and structuring large pieces of text into clusters of topics, Orbit creates a platform on which you can build multiple data-rich applications. The now five-month-old leaked innovation report from the New York Times pointed to several challenges for keeping and expanding a digital audience. To face some of the most critical issues you need to create a better experience for the reader by: 1) serving up better recommendations of related content, 2) providing new ways to discover news and add context, and 3) introducing personalization and filtering. Relevance is essential to creating loyal readers, and even more so at a time when more and more visits to news sites go directly to a specific article, mainly via search and social media, avoiding the front page altogether. Readers arriving through side doors like Twitter or Facebook are less engaged than readers arriving directly, which means it's important to keep these visitors on the site and convert them into loyal readers. Yet so little is being done to improve the relevance of recommendations and to create a connection to the huge amount of valuable content that already exists. Orbit understands not only the topics a piece contains but also related topics. It thereby understands the context of the article and can bring up related content that the reader wouldn't otherwise have seen, extending the reader's time spent on the site and increasing page views. Understanding context means that the cluster of topics related to an article on China signing a historic gas deal with Russia includes topics such as Russia, Ukraine, Putin, Gazprom and energy — thus creating recommendations within that cluster and connections between pieces of content. Rich structured data also opens up new ways to navigate and discover news. The classic navigation through carefully edited front pages has stayed pretty much the same since the dawn of online news publishing. Structured data enables the reader to follow certain topics or stories, improves search and enables timeline navigation of a news story, helping the reader better understand the context of the story and how it has developed. At the same time, a journalist writing a story on the uproar in Ukraine has no way of knowing how the story will unfold in the weeks to come. Manual tagging of news stories leads to inconsistent and incomplete structures, because which topics are important and related is a subjective judgment. 
Machine learning-based content analysis can identify people, organizations and places and relate them to each other in real-time, thereby identifying related stories as they unfold and cluster them together. As the NYT Innovation report brought up, the true value of structured data emerges only when the content is structured equally throughout. News and content apps like Circa, Omni and Prismatic, and news sites like Vox, have incorporated some of these elements and are experimenting with how to develop original ways to discover news. There are many arguments against personalization, and they are often related to the dystopian fear of a «fragmented» public sphere or the horrors of the echo chamber. That doesn’t mean personalization can’t be a good thing; it merely means being aware of what a particular type of user wants at a particular time. We are not talking about a fully customizable news feed based on your subjective interests, meaning I will not only see articles related to Manchester United, Finance, TV-shows and Kim Kardashian, and be uninformed on all other topics. We are merely suggesting a smart filtering system and adjustments of what subjects you would like to see more and less of on your feed. After all, we do have different interests. For example you may be entirely disinterested in Tour de France during its three week media frenzy in July each year; unfollow topic, or turn the «volume» down. Today, getting the news isn’t the hard part. Filtering out the excessive info and navigating the overwhelming stream of news in a smart way is where you need great tools. A foundation of rich structured data will not only benefit the reader, but make life easier for journalists and editors as well. To provide context to a story about Syria you could add several components of extra information that would enrich the article: A box of background information on the conflict, facts about Bashar Al-Assad and the different Syrian rebel groups, and so forth. With rich structured data in place, you can automatically add relevant fact boxes and other interactive elements to a piece of content, based on third party content databases such as Wikipedia. Topics can automatically generate their own page with all the related articles, facts, visualizations and insights relevant for that specific topic cluster. Moreover, you can use the data to create new and compelling presentations of your content, including visualizations and timelines that give the reader a better experience and new insights. News content generally has a short lifespan, but this doesn’t mean that old content can’t be valuable in a new context. A consistent structuring of archived content will give new life to old content, making it easier to reuse and resurrect articles that are still relevant and create connections between old and new articles. What are the trending topics, people or organizations this week? What regions got the most media attention? How many of the sources were anonymous, how many were women versus men? Knowing more about your audience’s preferences will make it easier to create good content at the right time. Better organized content creates a strong foundation for good insights into how content is consumed and why. With a better ecosystem for your content, including higher relevance and more contextual awareness, you can present better context-based ads to your advertisers and give better insights into who is watching and acting on them. 
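To make the idea above concrete, here is a minimal sketch of entity-based tagging and related-content scoring. It is not Orbit's actual pipeline; the spaCy model, the chosen entity labels and the Jaccard-overlap heuristic are all assumptions made for illustration.

# Sketch only: extract people, organizations and places with spaCy, then
# recommend archived articles that share the most entities with a new piece.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def extract_entities(text):
    """Return the people, organizations and places mentioned in a text."""
    doc = nlp(text)
    return {ent.text.lower() for ent in doc.ents
            if ent.label_ in {"PERSON", "ORG", "GPE", "LOC"}}

def related_articles(article, archive, top_n=3):
    """Rank archived articles by the share of entities they have in common."""
    target = extract_entities(article)
    scored = []
    for title, body in archive.items():
        other = extract_entities(body)
        overlap = len(target & other) / max(len(target | other), 1)  # Jaccard overlap
        scored.append((overlap, title))
    return sorted(scored, reverse=True)[:top_n]

archive = {
    "Gazprom halts deliveries": "Gazprom said it would cut gas flows to Ukraine unless payments resume...",
    "Oslo election results": "Voters in Oslo went to the polls on Monday...",
}
print(related_articles("China signs a historic gas deal with Russia; Gazprom and Putin hail the agreement.", archive))

A production system would add topic clustering, disambiguation and recency on top of this, but the core step — turning free text into structured entities that connect articles — is the same.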
By using the right technology in smart ways, journalists and editors can focus on what they are best at: creating quality news content. orbit.ai From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Product Manager at Bakken & Bæck. The Bakken & Bæck blog
Joe Johnston
38
4
https://medium.com/universal-mind/how-i-tracked-my-house-movements-using-ibeacons-3e1e9da3f1a9?source=tag_archive---------3----------------
How I tracked my house movements using iBeacons. – Universal Mind – Medium
Recently I've started experimenting more and more with iBeacons. Being part of the R&D Group at Universal Mind, I've had the opportunity to do a lot of testing and exploring of different products. In doing so, I wanted to see how someone could utilize iBeacons without building their own app just yet (I'll tackle that in a future post). The first step was to find iBeacons we could use for our testing. Our first choice was ordering iBeacons from Estimote, and after waiting for them to arrive (they never did), we ordered other beacons from various companies. The first set to arrive was from Roximity, which came to us as a set of 3 dev iBeacons. Next, I wanted to see if I could track movements in my own house as a simple test, without creating a custom app. I looked for apps that could detect the iBeacons and execute an action. There are a few apps capable of doing this, but all of them were somewhat limiting. The only app I found that allowed me to control what happens when triggering an iBeacon was an app called Launch Here (formerly Placed). Although this app wasn't a perfect fit, it did allow me to call some actions after triggering an iBeacon. Launch Here allows you to use custom URL Schemes. These URL Schemes allow you to open apps and even populate an action. One of the more complicated tasks of setting up any iBeacon manually is that you need to gather some information about the iBeacon itself. The 3 key pieces of info each iBeacon contains are a UUID, a Major ID, and a Minor ID. To get this info you can install an app like Locate for iBeacon, which detects iBeacons and shows this information. Once you have this info you can set up your iBeacons using Launch Here. It's a bit cumbersome to set each one up, but you only have to do it once. (As a side note, the Launch Here app is a bit touchy when setting up the iBeacons, so be warned. You may have to re-enter the URL Scheme info if you fat-finger it.) Like I said before, the Launch Here app is controlled by the user: it triggers a lock screen notification when you turn on your phone and are less than 3 meters away from any iBeacon. This is a bit interesting, but it's the approach Launch Here took so they could give the user a bit of control when triggering actions. Ideally this would all happen behind the scenes for the user in a custom app. The custom URL Schemes are pretty powerful, but you still need to trigger them manually. Here's my setup. I have the Tumblr app installed on my phone, which has the ability to use a post URL Scheme. The URL Scheme looks like this: tumblr://x-callback-url/text?title=kitchen Once that URL Scheme is triggered from Launch Here, it opens Tumblr and pre-populates a text post with the word "kitchen", or with the name of whichever room I set. I manually tap post and it's added. This allows me to capture each iBeacon location and store the data. The next step was to create a more data-friendly format. I love using a service called IFTTT. It's a very powerful platform that allows you to automatically trigger other services. I created an IFTTT recipe that automatically adds a row to a Google Spreadsheet with the timestamp and the text entered into a text post on my Tumblr account. Now I have a time-stamped dataset tracking my movement in my house — at least in the three rooms I have set up. With that data you can imagine how you can start to break it apart. Here's just an example of my current breakdown by room. As you can see, it's possible to track your movement, albeit in a somewhat cumbersome way. 
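Once the IFTTT recipe is filling the spreadsheet, turning it into the per-room breakdown mentioned above is a small scripting exercise. The sketch below is an illustration only, not part of the original setup; the CSV filename and the "timestamp"/"room" column names are assumptions about how an exported sheet might look.

# Rough sketch: read the exported sheet (one row per iBeacon trigger) and
# attribute the time between consecutive triggers to the earlier room.
import csv
from collections import defaultdict
from datetime import datetime

def minutes_per_room(csv_path):
    rows = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):  # assumed headers: timestamp, room
            rows.append((datetime.strptime(row["timestamp"], "%Y-%m-%d %H:%M:%S"),
                         row["room"].strip().lower()))
    rows.sort()

    totals = defaultdict(float)
    for (start, room), (end, _) in zip(rows, rows[1:]):
        gap = (end - start).total_seconds() / 60.0
        if gap < 120:  # skip long gaps (asleep, out of the house); threshold is arbitrary
            totals[room] += gap
    return dict(totals)

print(minutes_per_room("ibeacon_log.csv"))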
Taking this data and bubbling it up to the user could be very compelling in certain situations. I’m just using my personal home location here but you can see how this could be very powerful in other settings. I am the Director of User Experience / Research & Development at Universal Mind — A Digital Solutions Agency. You can follow me on twitter at @merhl. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Experience & Service Design Director @SparksGrove the experience design division of @NorthHighland (Alum of @Hugeinc @UniversalMind @Startgarden) A collection of articles created by Universal Mind thinkers.
Nadav Gur
10
9
https://medium.com/the-vanguard/why-natural-search-is-awesome-and-how-we-got-here-fe69b9cdd0db?source=tag_archive---------5----------------
Why Natural Search is Awesome and How We Got Here – The Vanguard – Medium
The Evolution Of Desti’s Search Interface This is a story about how one ambitious start-up tackled this subject that has riddled people like Google, Apple, Facebook and others, and came up with some pretty clever conclusions (if I may say so myself). In 2012–2013 we were building Desti — a holistic travel search app (i.e. it would search for everything — from hotels through attractions to restaurants), using post-Siri natural-language-understanding tech, and with powerful semantic search capabilities on the back end that allowed Desti to reason meaningfully about search results and make highly informed suggestions. Desti’s search was built on a premise that sounds very simple, but it’s actually very hard to pull off. We believe that people should be able to ask specifically for what they’re interested in and get results that match. This sounds reasonable, right? If I’m looking for a beach resort on the Kona Coast in Hawaii, it’s pretty obvious what I want. And if I also want it to be kid friendly and pet friendly, I should just be able to ask for it. Our goal was to get users inputting relevant, specific queries because that’s what people need. That’s where Desti shines — saving you time and effort by delivering exactly what you want. Now let’s assume that Desti knows which hotels on the Kona Coast are actually beach resorts, are kid friendly and pet friendly. How can we make expressing this query easy and intuitive for the user? Episode I: Desti is Siri’s Sister or Conversational User Interface:When we started, we were very naïve about this. We said — first, let’s just put a search box in there, allowing the user to type or say whatever they want, and let’s make sure we understand this. Then, let’s leave that box there so they can react to what they see and provide more detail (“refine”) or search for something else in that context (e.g. a restaurant near the resort — we called this “pivot”). And let’s run a conversation around it, kind of like Siri. What could be more natural? To do this we used SRI International’s VPA platform, which is almost literally a post-Siri natural-language-interaction platform with which you can have a conversation in context. This is more or less what it looked like in our beta version: Search box: A conversational UI: We launched this, monitored use and quickly realized is that early users split into two groups: Discarding the 2nd group (we’re busy people), we learned that people don’t know how to interact naturally with computers, or they have no idea what to ask or expect, so they revert to the most primitive queries. Problem is, our goal was to answer interesting, specific queries, because we believe that if we give you a great answer that caters to what you want, your likelihood of buying is that much higher. Furthermore, absolutely no one got the conversational aspect — the fact you can continue refining and pivoting through conversation. We decided to take away the focus from conversation for the time being. Episode 2: Vegas Slot Machines or Make It Dead Simple We realized we have to focus on the first query, and give people some cues about what’s possible. And came up with this interface: These contextual spinners turned interaction from a totally open-ended query to something closer to multiple-choice questions. In essence these were interchangeable templates, where you could get ideas for “what to search for” as well as easily input your query. 
What you picked would show up as a textual query in the search bar, which we hoped people would realize they can edit or add to. Hoped... The results — on the one hand, progress. We saw longer and more interesting queries and more interaction. However when talking to users, we realized that they were assuming that the spinner was a kind of menu system, which means (a) they can only pick what’s in the menu (b) they have to pick one thing from each menu. So while this was better than what most sites have for search, it was still a far cry from what we wanted to deliver. Here’s what we learned from this: Episode 3: Fill In The Blanks — Smartly At this stage, it was clear that we needed better auto-suggest and smarter auto-complete. This is similar from a UI perspective to Google Instant, but Desti is about semantic search, not keyword matching. In most cases, Google will auto-suggest a phrase that matches what you’ve been typing AND has been typed in by many other people. Desti should suggest something that semantically matches what you entered and makes sense given what we know of the destination and about your trip. Because Desti is new and there haven’t been a million users searching for the same things before you, Desti should reason about what you may ask, not suggest something someone else asked. We realized we have to build a lot of semantically-reasonable and statistically-relevant auto-suggesting. We still wanted to keep to the template logic because we believed it helps users think about what they are looking for and form the query in their minds. So we came up with a UI that blends form-filling and natural language entry, and focused on building smart auto-suggest and auto-complete. This UI was built of a number of rigid fields (e.g. location, type) that adapt to the subject matter (so if the type is “hotel” you’re prompted for dates), and a free text field that allows you to ask for whatever else you want. We iterated a lot over the auto-complete and auto-suggest features. The first thing is to realize they are different. With auto-complete, you have a user who already thought of something to type in, and you have to guess what that is. With auto-suggest, you really want to inspire the user into adding something useful to their query, which means it needs to be relevant to whatever you know about the query and user so far, but not overwhelming for the user. All this requires knowing a lot about specific destinations (what do people search for in Hawaii vs. New York?) and specific types (what’s relevant for hotels vs. museums?). Also, on the visual side, what the user is putting in is often quantitative and easier to “set” than “type” — e.g. a date, a price etc. So we came up with our first crack at blending text with visual widgets. The results were a big improvement in the quality and relevance of queries over the previous UI, but a feeling that this was still too stiff and rigid. When people are asked for a “type of place” — e.g. a museum, a park, a hotel — they often can’t really answer, and it’s easier for them to think about a feature of the place instead — e.g. that they can go hiking, or biking, see art or eat breakfast. For linguistic reasons it’s easier for people to say that they want a “romantic hotel” than a “hotel that’s romantic”. So while this UI was very expressive, often it felt unnatural and limiting. Furthermore many users just ended up filling the basic fields and not adding any depth in the open-text field (despite various visual cues). 
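As an aside, the "rigid fields plus free text" structure described above boils down to slot filling. The toy sketch below illustrates that structure only; it is not Desti's parser (which was built on SRI's VPA platform), and the tiny lexicons and substring matching are stand-ins for real natural-language understanding.

# Toy illustration of parsing a travel query into location / type / attributes.
LOCATIONS = {"kona coast", "hawaii", "new york", "times square"}
TYPES = {"hotel", "resort", "restaurant", "museum", "park"}
ATTRIBUTES = {"kid friendly", "pet friendly", "romantic", "beach", "seafood"}

def parse_query(query):
    q = query.lower()
    return {
        "location": next((loc for loc in LOCATIONS if loc in q), None),
        "type": next((t for t in TYPES if t in q), None),
        "attributes": sorted(a for a in ATTRIBUTES if a in q),
        # A real system also has to disambiguate, e.g. decide whether
        # "restaurant" is the thing being searched for or a feature of the hotel.
    }

print(parse_query("kid friendly beach resort on the Kona Coast that is pet friendly"))
# {'location': 'kona coast', 'type': 'resort', 'attributes': ['beach', 'kid friendly', 'pet friendly']}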
And editing a query for refining or pivoting was hard. At the same time — the auto-suggest / auto-complete elements we’ve built at this stage werealmost enough to allow us to just throw out the limiting “templates” and move to one search field — but this time, a damn clever one. Episode 4: Search Goes Natural To the naked eye, this looks like we’ve gone full circle — one text box, parsed queries shown as tags. What could be simpler? Well, not exactly, because we still need queries to be meaningful. One thing that the templates gave us was built-in disambiguation. We need a query that has at least a location + a type (or something from which we can derive a type), and without a template telling us that the “hotel” is the type, and the “restaurant” is something you want your hotel to have (vs. maybe the opposite), the system needs to better understand the grammatical structure or the sentence, and cue you into inputting things the right way when it’s suggesting and auto-completing. Typing a query: The query is understood — you can add / edit: With this new user interface, changing queries (“refining and pivoting”) is very natural — add tags, or take away tags. Widgets were contextually integrated using the auto-suggest drop-down menu, so they are naturally suggested at the right time (e.g. after you said you were looking for a hotel, we help you choose when, how many rooms etc.). It’s also very easy to suggest things to search for based on the context. For instance if we know your kids are traveling with you, we’d drop in “family friendly” and you could dismiss it with one click. So Where is This Going So far, Natural Search looks and behaves better than anything else we’ve seen in this space. From now on, most of the focus is on making the guesses even smarter, with more statistic reasoning about what people ask for in different contexts, and more contextual info driving those guesses. We believe this UI is where vertical search is heading. Consider how nice it would be to input “gifts for 4 year old boys under $30” into target.com’s search bar, or “romantic restaurant with great seafood near Times Square with a table at 8 PM tonight” into OpenTable — and get relevant answers. But then again, answering specific queries is not that easy either, but that’s the other side of Desti... To be continued. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. I think. Then I talk. Sometime it’s the other way around. Founded & ran companies in AI, mobile, travel, etc., ex-EiR at SRI Int’l, ex-aerospace Nadav Gur’s Tech Musings
Pandorabots
14
3
https://medium.com/pandorabots-blog/using-oob-tags-in-aiml-part-i-21214b4d2fcd?source=tag_archive---------6----------------
Using OOB Tags in AIML: Part I – pandorabots-blog – Medium
Suppose you are building an Intelligent Virtual Agent or Virtual Personal Assistant (VPA) that uses a Pandorabot as the natural language processing engine. You might want this VPA to be able to perform tasks such as sending a text message, adding an event to a calendar, or even just initiating a phone call. OOB tags allow you to do just that! OOB stands for "out of band," which is an engineering term used to refer to activity performed on a separate, hidden channel. For a Pandorabot VPA, this translates to activities which fall outside of the scope of an ordinary conversation, such as placing a phone call, checking dynamic information like the weather, or searching Wikipedia for the answer to some question. The task is executed, but does not necessarily always produce an effect on the conversation between the Pandorabot and the user. OOB tags are used in AIML templates and are written in the following format: <oob>command</oob>. The command that is to be executed is specified by a set of tags which occur within the <oob> tags. These inner OOB tags can be whatever you like, and the phone-related actions they initiate are defined in your application's code. To place a call you might see something like this: <oob><dial>some phone number</dial></oob>. The <dial> tag within the <oob> tag sends a message to the phone to dial the number specified. When your client indicates they want to dial a number, your application will receive a template containing the command specified inside the OOB tag. Within your application, this inner command will be interpreted and the appropriate actions will be executed. It is useful to think of the activities initiated by OOB tags as falling into one of two categories, based on whether they return information to the user via the chat interface or not. The first category, those that do not return information, typically involve activities that interrupt the conversation. If you ask your VPA to look up restaurants on a map, it will open up your map application and perform a search. Similarly, if you ask your bot to make a phone call, it will open the dialer application and make a call. In both of these examples, the activity performed interrupts the conversation and displays some other screen. The second category, those that do return information to the user via the chat interface, are generally actions that are executed in the background of the conversation. If you ask your Pandorabot to look up the "Population of the United States" on Wikipedia, it will perform the search, and then return the results of the search to the user via the chat window. Similarly, if you ask your Pandorabot to send a text message to a friend, it will send the text, and then return a message to the user via the chat window indicating the success of the action, i.e. "Your text message was delivered!" In this second set of examples, it is useful to distinguish between those activities whose results will be returned directly to the user, like the Wikipedia example, and those activities whose successful completion will simply be indicated to the user through the chat interface, as with the texting example. Here is an example of a category that uses the phone dialer on Android. Here is an example interaction this category would lead to: Human: Dial 1234567. Robot: Calling 1234567. Here is a slightly more complicated example involving the <oob> tag, which launches a browser and performs a Google search: Human: Look up Pandorabots. Robot: Searching...Searching... Please stand by. 
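To give a feel for the application side (which Part II will cover properly), here is a rough sketch of how a client might split a returned template into the visible reply and the OOB command. This is illustrative only — it is not the CallMom implementation, and the dispatch function is a placeholder.

# Sketch: pull the <oob> command out of a bot response and hand it to the device.
import xml.etree.ElementTree as ET

def handle_response(template):
    """Return the user-visible text after executing any OOB command."""
    root = ET.fromstring("<response>" + template + "</response>")  # OOB sits inside free text
    oob = root.find("oob")
    if oob is not None:
        for command in oob:          # e.g. <dial>, <search>, <sms>
            dispatch(command.tag, command.text)
        root.remove(oob)
    return "".join(root.itertext()).strip()

def dispatch(action, payload):
    # In a real VPA this would open the dialer, launch a browser search, etc.
    print("[device] performing '%s' with payload '%s'" % (action, payload))

print(handle_response("Calling 1234567. <oob><dial>1234567</dial></oob>"))
# [device] performing 'dial' with payload '1234567'
# Calling 1234567.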
Note: not shown in the search example above is the category RANDOM SEARCH PHRASE, which delivers a random selection from a short list of possible replies, each indicating to the user that the bot correctly interpreted their search request. For a complete list of OOB tags as implemented in the CallMom Virtual Personal Assistant App for Android, as well as usage examples, click here. Be sure to look out for the upcoming post "Using OOB Tags in AIML: Part II", which will go over a basic example of how to interpret the OOB tags received from the Pandorabots server within the framework of your own VPA application. Originally published at blog.pandorabots.com on October 9, 2014. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. The largest, most established chatbot development and hosting platform. www.pandorabots.com The leading platform for building and deploying chatbots.
Denny Vrandečić
4
4
https://medium.com/@vrandezo/ai-is-coming-and-it-will-be-boring-94768de264c6?source=tag_archive---------7----------------
AI is coming, and it will be boring – Denny Vrandečić – Medium
I was asked about my opinion on this topic, and I thought I would have some profound thoughts on this. But I ended up rambling, and this post doesn’t really make any single strong point. tl;dr: Don’t worry about AIs killing all humans. It’s not likely to happen. In an interview with the BBC, Stephen Hawking stated that “the development of full artificial intelligence could spell the end of the human race”. Whereas this is hard to deny, it is rather trivial: any sufficiently powerful tool could potentially spell the end of the human race given a person who knows how to use that tool in order to achieve such a goal. There are far more dangerous developments — for example, global climate change, the arsenal of nuclear weapons, or an economic system that continues to sharpen inequality and social tension? AI will be a very powerful tool. Like every powerful tool, it will be highly disruptive. Jobs and whole industries will be destroyed, and a few others will be created. Just as electricity, the car, penicillin, or the internet, AI will profoundly change your everyday life, the global economy, and everything in between. If you want to discuss consequences of AI, here are a few that are more realistic than human extermination: what will happen if AI makes many jobs obsolete? How do we ensure that AIs make choices compliant with our ethical understanding? How to define the idea of privacy in a world where your car is observing you? What does it mean to be human if your toaster is more intelligent than you? The development of AI will be gradual, and so will the changes in our lifes. And as AI keeps developing, things once considered magical will become boring. A watch you could talk to was powered by magic in Disney’s 1991 classic “The Beauty and the Beast”, and 23 years later you can buy one for less than a hundred dollars. A self-driving car was the protagonist of the 80s TV show “Knight Rider”, and thirty years later they are driving on the streets of California. A system that checks if a bird is in a picture was considered a five-year research task in September 2014, and less than two months later Google announces a system that can provide captions for pictures — including birds. And these things will become boring in a few years, if not months. We will have to remind ourselves how awesome it is to have a computer in our pocket that is more powerful than the one that got Apollo to the moon and back. That we can make a video of our children playing and send it instantaneously to our parents on another continent. That we can search for any text in almost any book ever written. Technology is like that. What’s exciting today, will become boring tomorrow. So will AI. In the next few years, you will have access to systems that will gradually become capable to answer more and more of your questions. That will offer advice and guidance towards helping you navigate your life towards the goal you tell it. That will be able to sift through text and data and start to draw novel conclusions. They will become increasingly intelligent. And there are two major scenarios that people are afraid of at this point: The Skynet scenario is just mythos. There is no indication that raw intelligence is sufficient to create intrinsic intention or will. The paperclip scenario is more realistic. And once we get closer to systems with such power, we will need to put the right safeguards in place. The good news is that we will have plenty of AIs at our disposal to help us with that. 
The bad news is that discussing such scenarios now is premature: we simply don't know what these systems will look like. That's like starting a committee a hundred years ago to discuss the danger coming from novel weaponry: no one in 1914 could have predicted nuclear weapons and their risks. It is unlikely that the results of such a committee would have provided much relevant ethical guidance for the Manhattan Project three decades later. Why should that be any different today? In summary: there are plenty of consequences of the development of AI that warrant intensive discussion (economic consequences, ethical decisions made by AIs, etc.), but it is unlikely that they will bring about the end of humanity. Background image: robots trashing living room by vincekamp, licensed under CC BY ND 3.0. Personal permanent URL: http://simia.net/wiki/AI_is_coming,_and_it_will_be_boring From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Wikidata founder, Google ontologist, Semantic Web researcher, and author.
Thaddeus Howze
15
6
https://medium.com/@ebonstorm/of-comets-and-gods-in-the-making-4f55ecccb9fe?source=tag_archive---------8----------------
Of Comets and Gods in the Making – Thaddeus Howze – Medium
Asferit had not grown up; she didn’t know where she came from; could not conceive of childhood. No memories of parents, no recollection of family. On the vast empty world that served as her lab, she built the probes and put a little bit of herself in each one. Her machine-form, ancient, slow and sputtering came to life, wheezing through the long corridors of the silent lab, its darkness masking the distant empty spaces which Asferi imagined were once filled with life. She looked through her thoughts and realized she had lost any hope of memory, that part of her was already circling a distant star aborning with life. She looked in on those places when she woke to see the results of her work; on so many worlds life spawned. With the next launch, she would lose the memory of those places, there was so little of her remaining. Enough for three, no four probes. Then she would cease to remember why she was, what she was. She would forget how to exist. But not yet. She completed the next probe, winding the engine and orienting it along the galactic plane; her sensor array aligning the probe with a wandering comet; she planned to deposit herself within the life-giving molecules within its frozen mass. She knew little about her past, but knew that she must not be able to be found. This was the only memory that remained; hide from the Darkness. As she loaded the last probe, she considered the first probe she ever sent, millennia ago; there were monuments within the halls of the lab in her hubris then, she considered them a successful reincarnation of her people. Each representation was filled with the temporal signature of that once great race; a temporal residue of failure. It spoke of a great race, masters of time and space; they flourished in the dark between the stars. Then the Darkness came. She was overconfident. She slept assured of their success her mission completed. In the time between sleeping and waking, her cycle of regeneration before attempting to seed again, the great race was gone. Found. They did not heed the warnings she sent in those early days. She gave far more of herself then. She came to them in visions, taught them secrets to harness the hidden nature of matter; revealed to them the nature of energies, both planetary and interstellar. They would worship her, revere her and believe her to be a god. In the end, it was not enough. They were consumed, their greatness undone. She sent less and less of herself from then on. Godhood failed them. Perhaps obscurity would serve them better. She sent less each time, only tiny packages of micromachines capable of changing matter, capable of modifying genomes, empowering the creatures spawned of her with abilities even greater than the First Race. Psychometric representations of them were all that remained, echoes in the timestream of history. In their hubris, they ruptured time and space and like the world her lab hung above, cracked the crust of their world and were lost in a temporal vortex of their own making. They had such potential. Squandered. Then she began sending only the memory of what she was, embedded within complex epigentic echoes. No longer would she shape the universe for them, they would have to work for their survival, perhaps they would be stronger for it. She appeared to her descendents only in dreams; visions of what they were, memories of who she was, memories she no longer possessed. Her memory was great once and she seeded thousands of worlds with it. 
But like the ephemeral nature of memory, so few knew what they saw. Many went mad. Most dreamed of demiurges, mad deva whose powers ravaged worlds. These memories destroyed half of them before they could achieve spaceflight and reach for the stars themselves. Religions they spawned consumed them. Now, she sent only cells and precellular matter. The very least of herself, the essence of who she was, the final matter of her being; hidden in comets, cloaked in meteor swarms, hidden on the boots of other starfarers. Time had taught her patience, though she had lost her memories, she was confident of this final strategy. To hide herself on millions of worlds, her final probe-ships would leave a legacy on millions of worlds. She found the last star she would use and loaded the final probe-ship with the hardiest constructions she had ever made. She deconstructed the worldship; her lab, her home for millenia of millenia, breaking down every part of it, reforging it for a final effort. The planet below was also consumed, her last effort would require everything. It was a long dead world lost to antiquity when the universe was young. Of the Darkness, she could not remember, but she knew this: as long as there was light, her people would survive. The final instructions to her probeship would have her descending into her planet’s unstable star. It’s final fluctuations revealed what she knew was the inevitable outcome; and she planned to use it to her advantage. Her final self would not be aware of the result. The final cells of her body were distributed within millions of pieces of her world and her lab. Each calibrated to arrive at a star somewhere in her galaxy. Each single cell would find a world ready for life. She could no longer coerce planets into life. She could no longer force matter or energy to take the shape she deemed. She was now only able to influence the tiniest aspects. Asferit would only be able to nudge a planet toward Life. The Darkness would always be ready to claim her people but now they would be scattered; to worlds within the galaxy and without. She seeded the galactic wind and waited for a supernova to blow them where it would. Her starseeds hardened against the impending blastwave, they would, with the tiniest bit of her final design, travel faster than light toward their final destinations. As the star which lit her world, gave her people life, watched them die and patiently waited until they could be reborn, exploded, Asferit now waited in turn. In those last seconds as the waves of radiation and coronal debris swept over the remnants of her cannibalized world, she subsumed herself within the starseeds and the near-immortal being Asferit, last of her kind, was no more. And yet now she was pure purpose, no ambitions, no plan, no dreams of godhood, no longer a radiant harbingers of dooms lighting the skies of primitive worlds. She would be the essence of Life itself; the Darkness be damned. Of Comets and Gods in the Making © Thaddeus Howze 2013, All Rights Reserved Thaddeus Howze is a popular and recently awarded Top Writer, 2016 recipient on the Q&A site Quora.com. He is also a moderator and contributor to theScience Fiction and Fantasy Stack Exchange with over fourteen hundred articles in a four year period. Thaddeus Howze is a California-based technologist and author who has worked with computer technology since the 1980’s doing graphic design, computer science, programming, network administration, teaching computer science and IT leadership. 
His non-fiction work has appeared in numerous magazines: Huffington Post, Gizmodo, Black Enterprise, the Good Men Project, Examiner.com, The Enemy, Panel & Frame, Science X, Loud Journal, ComicsBeat.com, and Astronaut.com. He maintains a diverse collection of non-fiction at his blog, A Matter of Scale. His speculative fiction has appeared online at Medium, Scifiideas.com, and the Au Courant Press Journal. He has appeared in twelve different anthologies in the United States, the United Kingdom and Australia. A list of his published work appears on his website, Hub City Blues. Author | Editor | Futurist | Activist | Tech Humanist | http://bit.ly/thowzebio | http://bit.ly/thpatreon
Tommy Thompson
17
14
https://medium.com/@t2thompson/ailovespacman-9ffdd21b01ff?source=tag_archive---------9----------------
Why AI Research Loves Pac-Man – Tommy Thompson – Medium
AI and Games is a crowdfunded YouTube series on the research and applications of AI within video games. The following article is a more involved transcription of the topics discussed in the video linked to above. If you enjoy this work, please consider supporting my future content over on Patreon. Artificial Intelligence research has shown a small infatuation with the Pac-Man video game series over the past 15 years. But why specifically Pac-Man? What elements of this game have proven interesting to researchers in this time? Let’s discuss why Pac-Man is so important in the world of game-AI research. For the sake of completeness — and in appreciation that there is arguably a generation or two not familiar with the game — Puck-Man was an arcade game launched in 1980 by Namco in Japan and renamed Pac-Man upon being licensed by Midway for an American release. The name change was driven less by a need for brand awareness than by the fact that the name can easily be de-faced to say... something else. The original game focuses on the titular character, who must consume as many pills as possible without being caught by one of four antagonists represented by ghosts. The four ghosts, Inky, Blinky, Pinky and Clyde, all attempt to hunt down the player using slightly different tactics from one another. Each ghost has its own behaviour: a bespoke algorithm that dictates how it attacks the player. Players also have the option to consume one of several power-pills that appear in each map. Power-pills allow the player to eat not just pills but also the enemy ghosts for a short period of time. While mechanically simple compared to modern video games, Pac-Man provides an interesting test-bed for AI algorithms learning to play games. The game world is relatively simple in nature, but complex enough that strategies can be employed for optimal navigation. Furthermore, the varied behaviours of the ghosts reinforce the need for strategy, since their unique albeit predictable behaviours necessitate different tactics. If problem solving can be achieved at this level, then there is opportunity for it to scale up to more complex games. While Pac-Man research began in earnest in the early 2000s, work by John Koza (Koza, 1992) discussed how Pac-Man provides an interesting domain for genetic programming: a form of evolutionary algorithm that learns to generate basic programs. The idea behind Koza’s work and later that of (Rosca, 1996) was to highlight how Pac-Man provides an interesting problem for task-prioritisation. This is quite relevant given that we are often trying to balance the need to consume pills, all the while avoiding ghosts or — when the opportunity presents itself — eating them. About 10 years later, people became more interested in Pac-Man as a control problem. This research often set out to explore the applications of artificial neural networks for the purposes of creating a generalised action policy: software that would know at any given tick in the game what would be the correct action to take. This policy would be built from playing the game a number of times and training the system to learn what is effective and what is not. Typically these neural networks are trained using an evolutionary algorithm that finds optimal network configurations by breeding collections of possible solutions and using a ‘survival of the fittest’ approach to cull weak candidates. (Kalyanpur and Simon, 2001) explored how evolutionary learning algorithms could be used to improve strategies for the ghosts. 
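To make this "breed and cull" loop concrete, here is a minimal, hypothetical sketch in Python. It is not code from any of the papers cited in this article: the fitness function is a dummy placeholder (in the actual research it would mean playing a full game of Pac-Man with a policy parameterised by the candidate weights), and the population size, mutation rate and crossover scheme are illustrative choices only.

```python
import random

def evaluate_fitness(weights):
    # Placeholder objective. In the research described above, this would
    # play a game of Pac-Man using a policy parameterised by `weights`
    # and return the score achieved.
    return -sum((w - 0.5) ** 2 for w in weights)

def evolve(population_size=50, genome_length=10, generations=100, mutation_rate=0.1):
    # Begin with a population of random candidate solutions ("genomes").
    population = [[random.uniform(-1.0, 1.0) for _ in range(genome_length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Rank candidates by fitness and keep the strongest half (cull the weak).
        population.sort(key=evaluate_fitness, reverse=True)
        survivors = population[:population_size // 2]
        # Refill the population by crossover and mutation of the survivors.
        children = []
        while len(survivors) + len(children) < population_size:
            parent_a, parent_b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_length)
            child = parent_a[:cut] + parent_b[cut:]  # single-point crossover
            child = [w + random.gauss(0.0, 0.2) if random.random() < mutation_rate else w
                     for w in child]                 # per-gene mutation
            children.append(child)
        population = survivors + children
    return max(population, key=evaluate_fitness)

best = evolve()
print("Best candidate fitness:", evaluate_fitness(best))
```

Swapping the placeholder objective for an actual game simulation, and interpreting the genome as the weights of a small neural network, gives the kind of neuroevolution setup the papers mentioned here explore.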
In time it was evident that the use of crossover and mutation — which are key elements of most evolutionary-based approaches — was effective in improving the overall behaviour. However, it’s important to note that they themselves acknowledge their work uses a problem domain similar to Pac-Man and not the actual game. (Gallagher and Ryan, 2003) uses a slightly more accurate representation of the original game. While a screenshot of the full game is shown in the paper, the actual implementation only used one ghost rather than the original four. In this research the team used an incremental learning algorithm that tailored a series of rules for the player that dictate how Pac-Man is controlled using a Finite State Machine (FSM). This proved highly effective in the simplified version they were playing. The use of artificial neural networks — a data structure that mimics the firing of synapses in the brain — was increasingly popular at the time (and once again in most recent research). Two notable publications are (Lucas, 2005), which attempted to create a ‘move evaluation function’ for Pac-Man based on data scraped from the screen and processed as features (e.g. distance to closest ghost), and (Gallagher and Ledwich, 2007), which attempted to learn from raw, unprocessed information. It’s notable here that the work by Lucas was in fact done on Ms. Pac-Man rather than Pac-Man. While perhaps not that important to the casual observer, this is an important distinction for AI researchers. Research in the original Pac-Man game caught the interest of the larger computational and artificial intelligence community. You could argue it was due to the interesting problem that the game presents or that a game as notable as Pac-Man was now considered of interest within the AI research community. While it is now something that appears commonplace, games — more specifically video games — did not receive the same attention within AI research circles as they do today. As high-quality research in AI applications in video games grew, it wasn’t long before those with a taste for Pac-Man research moved on to looking at Ms. Pac-Man given the challenges it presents — challenges we are still conducting research on in 2017. Ms. Pac-Man is odd in that it was originally an unofficial sequel: Midway, who had released the original Pac-Man in the United States, had become frustrated at Namco’s continued failure to release a sequel. While Namco did in time release a sequel dubbed Super Pac-Man, which in many ways is a departure from the original, Midway decided to take matters into their own hands. Ms. Pac-Man was — for lack of a better term — a mod, originally conceived by the General Computing Company (GCC) based in Massachusetts. GCC had got themselves into a spot of legal trouble with Midway after previously creating a mod kit for the popular arcade game Missile Command. As a result, GCC were essentially banned from making further mod kits without the original game’s publisher providing consent. Despite the recent lawsuit hanging over them, they decided to show Midway their Pac-Man mod, dubbed Crazy Otto. Midway liked it so much that they bought it from GCC, patched it up to look like a true Pac-Man successor and released it in arcades without Namco’s consent (though this has been disputed). Note: For our younger audience, mod kits in the 1980s were not simply software we could use to access and modify parts of an original game. 
These were actual hardware: printed circuit boards (PCBs) that could either be added next to the existing game in the arcade unit, or replace it entirely. While nowhere near as common nowadays due to the rise of home console gaming, there are many enthusiasts who still use and trade PCBs fitted for arcade gaming. Ms. Pac-Man looks very similar to the original, albeit with the somewhat stereotypical bow on Ms. Pac-Man’s hair/head(?) and a couple of minor graphical changes. However, the sequel also received some small changes to gameplay that have a significant impact. One of the most significant changes is that the game now has four different maps. In addition, the placement of fruit is more dynamic, and the fruit now moves around the maze. Lastly, a small change is made to the ghost behaviour such that, periodically, the ghosts will commit a random move. Otherwise, they will continue to exhibit their prescribed behaviour from the original game. Each of these changes has a significant impact on both how humans and AI subsequently approach the problem. Changes made to the maps do not have a significant impact upon AI approaches. For many of the approaches discussed earlier, it is simply another configuration of the topography used to model the maze. And if the agent is using more egocentric models for input (i.e. relative to Pac-Man), then the map layout is not really a concern, given the input is contextual. This is only an issue should the agent’s design require some form of pre-processing or expert rules that are based explicitly upon the configuration of the map. For a human player, this is also not a huge problem. The only real issue is that a human would have become accustomed to playing on a given map, devising strategies that utilise parts of the map to good effect. However, all they need is practice on the new maps. In time, new strategies can be formulated. The small change to ghost behaviour, which results in random moves occurring periodically, is highly significant. This is because the deterministic model of the original game is completely broken. Previously, each ghost had a prescribed behaviour: you could — with some computational effort — determine the state (and indeed the location) of a ghost at frame n of the game, where n is a certain number of steps ahead of the current state. Any implementation that is reliant upon this knowledge, whether it is using it as part of a heuristic, or an expert knowledge base that gives explicit instructions based on the assumption of their behaviour, is now sub-optimal. If the ghosts can make random decisions without any real warning, then we no longer have the same level of confidence in any of our ghost-prediction strategies. Similarly, this has an impact on human players. The deterministic behaviour of the ghosts in the original Pac-Man, while complex, can eventually be recognised by a human player. Leading human players exploited this, factoring the ghosts’ behaviour at some level into their decision-making process. However, in Ms. Pac-Man, the change to a non-deterministic domain has a similar effect on humans as it does on AI: we can no longer say with complete confidence what the ghosts will do given they can make random moves. Evidence that a particular type of problem or methodology has gained some traction in a research community can be found in competitions. If a competition exists that is open to the larger research community, it is, in essence, a validation that this problem merits consideration. In the case of Ms. 
Pac-Man, there have been two competitions. The first was organised by Simon Lucas — at the time a professor at the University of Essex in the UK — and held at the Conference on Evolutionary Computation (CEC) in 2007. It was subsequently held at a number of conferences — notably the IEEE Conference on Computational Intelligence and Games (CIG) — until 2011. http://dces.essex.ac.uk/staff/sml/pacman/PacManContest.html This competition used a screen capture approach previously mentioned in (Lucas, 2005) that was reliant on an existing version of the game. While the organisers would use Microsoft’s own version from the ‘Revenge of Arcade’ title, you could also use the likes of webpacman for testing, given it was believed to run the same ROM code. As shown in the screenshot, the code actually takes information directly from the running game. One benefit of this approach is that it prevents the AI developer from accessing the code to potentially ‘cheat’: you can’t access source code and make calls to the likes of the ghosts to determine their current move. Instead, the developer is required to work with the exact same information that a human player would. A video of the winner from the IEEE CIG 2009 competition, ICE Pambush 3, can be seen below: In 2011, Simon Lucas in conjunction with Philipp Rohlfshagen and David Robles created the Ms Pac-Man vs Ghosts competition. In this iteration, the ‘screen scraping’ approach had been replaced with a Java implementation of the original game. This provided an API to develop your own bot for competitions. This iteration ran at four conferences between 2011 and 2012. One of the major changes to this competition is that you can now also write AI controllers for the ghosts. Competitors’ submissions were then pitted against one another. The ranking of submissions for both Ms. Pac-Man and the ghosts from the 2012 league is shown below. During the earlier competition, there was a continued interest in the use of learning algorithms. This continued the use of evolutionary algorithms — which we had seen in earlier research — to evolve code that is most effective at this problem. Approaches ranged from evolving ‘fuzzy systems’ that use rules driven by fuzzy logic (yes, that is a real thing), shown in (Handa, 2008), to the use of influence maps in (Wirth, 2008), and a different take that uses ant colony optimisation to create competitive players (Emilio et al., 2010). This research also stirred interest from researchers in reinforcement learning: a different kind of learning algorithm that learns from the positive and negative impacts of actions. Note: It has been argued that reinforcement learning algorithms are similar to how the human brain operates, in that feedback is sent to the brain upon committing actions. Over time we then associate certain responses with ‘good’ or ‘bad’ outcomes. Placing your hand over a naked flame is quickly associated with a bad outcome, given that it hurts! Simon Lucas and Peter Burrow took to the competition framework as a means to assess whether reinforcement learning, specifically an approach called Temporal Difference Learning, would yield stronger returns than evolving neural networks (Burrow and Lucas, 2009). The results appeared to favour the use of evolved neural nets over the reinforcement learning approach. Despite that, one of the major contributions Ms. 
Pac-Man has generated is research into Monte Carlo methods: an approach where repeated sampling of states and actions allows us to ascertain not only the reward that we will typically attain having made an action, but also the ‘value’ of the state. More specifically, there has been significant exploration of whether Monte-Carlo Tree Search (MCTS), an algorithm that assesses the potential outcomes at a given state by simulating how the game might play out, could prove successful. MCTS has already proven to be effective in games such as Go (Chaslot et al., 2008) and Klondike Solitaire (Bjarnason et al., 2009). Naturally — given this is merely an article on the subject and not a literature review — we cannot cover this in immense detail. However, there have been a significant number of papers focussed on this approach. For those interested, I would advise you to read (Browne et al., 2012), which gives an extensive overview of the method and its applications. One of the reasons that this algorithm proves so useful is that it attempts to address the issue of whether your actions will prove harmful in the future. Much of the research discussed in this article is very good at dealing with immediate or ‘reflex’ responses. However, few would determine whether actions would hurt you in the long term. This is hard to determine for AI without putting some processing power behind it and even harder when working in a dynamic video game that requires quick responses. MCTS has proven useful since it can simulate whether an action taken on the current frame will be useful 5/10/100/1000 frames in the future and has led to significant improvements in AI behaviour (a rough sketch of this simulate-and-evaluate idea appears after the references below). While Ms. Pac-Man helped push MCTS research, many researchers have now moved on to the Physical Travelling Salesman Problem (PTSP), which provides its own unique challenges due to the nature of the game environment. Ms. Pac-Man is still to date an interesting research area given the challenge that it presents. We are still seeing research conducted within the community as we attempt to overcome the challenge that one small change to the game code presented. In addition, we have moved on from simply focussing on representing the player and started to focus on the ghosts as well, leading to the aforementioned Ms Pac-Man vs Ghosts competition. While the gaming community at large has more or less forgotten about the series, it has had a significant impact on the AI research community. While the interest in Pac-Man and Ms. Pac-Man is beginning to dissipate, it has encouraged research that has provided a significant contribution to artificial and computational intelligence in general. http://www.pacman-vs-ghosts.net/ — The homepage of the competition where you can download the software kit and try it out yourself. http://pacman.shaunew.com/ — An unofficial remake that is inspired by the aforementioned Pac-Man dossier by Jamey Pittman. (Bjarnason, R., Fern, A. and Tadepalli, P., 2009) Lower Bounding Klondike Solitaire with Monte-Carlo Planning. Proceedings of the International Conference on Automated Planning and Scheduling, 2009. (Browne, C., Powley, E., Whitehouse, D., Lucas, S.M., Cowling, P., Rohlfshagen, P., Tavener, S., Perez, D., Samothrakis, S. and Colton, S., 2012) A Survey of Monte Carlo Tree Search Methods, IEEE Transactions on Computational Intelligence and AI in Games (2012), pages: 1–43. (Burrow, P. 
and Lucas, S.M., 2009) Evolution versus Temporal Difference Learning for Learning to Play Ms Pac-Man, Proceedings of the 2009 IEEE Symposium on Computational Intelligence and Games. (Emilio, M., Moises, M., Gustavo, R. and Yago, S., 2010) Pac-mAnt: Optimization Based on Ant Colonies Applied to Developing an Agent for Ms. Pac-Man. Proceedings of the 2010 IEEE Symposium on Computational Intelligence and Games. (Gallagher, M. and Ledwich, M., 2007) Evolving Pac-Man Players: What Can We Learn From Raw Input? Proceedings of the 2007 IEEE Symposium on Computational Intelligence and Games. (Gallagher, M. and Ryan, A., 2003) Learning to Play Pac-Man: An Evolutionary, Rule-based Approach. Proceedings of the 2003 Congress on Evolutionary Computation (CEC). (Chaslot, G.M.B., Winands, M.H. and van den Herik, H.J., 2008) Parallel Monte-Carlo Tree Search. Computers and Games (pp. 60–71), Springer Berlin Heidelberg. (Handa, H., 2008) Evolutionary Fuzzy Systems for Generating Better Ms. PacMan Players. Proceedings of the IEEE World Congress on Computational Intelligence. (Kalyanpur, A. and Simon, M., 2001) Pacman using genetic algorithms and neural networks. (Koza, J., 1992) Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press. (Lucas, S.M., 2005) Evolving a Neural Network Location Evaluator to Play Ms. Pac-Man, Proceedings of the 2005 IEEE Symposium on Computational Intelligence and Games. (Pittman, J., 2011) The Pac-Man Dossier. Retrieved from: http://home.comcast.net/~jpittman2/pacman/pacmandossier.html (Rosca, J., 1996) Generality Versus Size in Genetic Programming. Proceedings of the Genetic Programming Conference 1996 (GP’96). (Wirth, N., 2008) An influence map model for playing Ms. Pac-Man. Proceedings of the 2008 Computational Intelligence and Games Symposium. Originally published at aiandgames.com on February 10, 2014 — updated to include more contemporary Pac-Man research references. AI and games researcher. Senior lecturer. Writer/producer of YouTube series @AIandGames. Indie developer with @TableFlipGames.
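To illustrate the simulate-and-evaluate idea referenced in the MCTS discussion above, here is a rough, hypothetical Python sketch of flat Monte-Carlo move evaluation. It is not the competition API or any published controller: the `GameState` interface (legal_moves, apply, score, is_terminal) is assumed purely for illustration, and full MCTS additionally grows a search tree and balances exploration against exploitation with a selection policy such as UCT rather than sampling every move uniformly.

```python
import random

def monte_carlo_move(state, simulations_per_move=200, horizon=50):
    """Pick the move whose random playouts score best on average.

    `state` is a hypothetical game-state object assumed to provide:
      legal_moves() -> list of available moves
      apply(move)   -> the successor state (leaving this state unchanged)
      score()       -> the accumulated game score so far
      is_terminal() -> True once the game is over
    """
    best_move, best_value = None, float("-inf")
    for move in state.legal_moves():
        total = 0.0
        for _ in range(simulations_per_move):
            playout = state.apply(move)
            # Random playout for a fixed horizon: this is how "will this move
            # hurt me 100 frames from now?" gets estimated cheaply.
            for _ in range(horizon):
                if playout.is_terminal():
                    break
                playout = playout.apply(random.choice(playout.legal_moves()))
            total += playout.score()
        average = total / simulations_per_move
        if average > best_value:
            best_move, best_value = move, average
    return best_move
```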
Milo Spencer-Harper
7.8K
6
https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1?source=tag_archive---------0----------------
How to build a simple neural network in 9 lines of Python code
As part of my quest to learn about AI, I set myself the goal of building a simple neural network in Python. To ensure I truly understand it, I had to build it from scratch without using a neural network library. Thanks to an excellent blog post by Andrew Trask I achieved my goal. Here it is in just 9 lines of code: In this blog post, I’ll explain how I did it, so you can build your own. I’ll also provide a longer, but more beautiful version of the source code. But first, what is a neural network? The human brain consists of 100 billion cells called neurons, connected together by synapses. If sufficient synaptic inputs to a neuron fire, that neuron will also fire. We call this process “thinking”. We can model this process by creating a neural network on a computer. It’s not necessary to model the biological complexity of the human brain at a molecular level, just its higher level rules. We use a mathematical technique called matrices, which are grids of numbers. To make it really simple, we will just model a single neuron, with three inputs and one output. We’re going to train the neuron to solve the problem below. The first four examples are called a training set. Can you work out the pattern? Should the ‘?’ be 0 or 1? You might have noticed, that the output is always equal to the value of the leftmost input column. Therefore the answer is the ‘?’ should be 1. Training process But how do we teach our neuron to answer the question correctly? We will give each input a weight, which can be a positive or negative number. An input with a large positive weight or a large negative weight, will have a strong effect on the neuron’s output. Before we start, we set each weight to a random number. Then we begin the training process: Eventually the weights of the neuron will reach an optimum for the training set. If we allow the neuron to think about a new situation, that follows the same pattern, it should make a good prediction. This process is called back propagation. Formula for calculating the neuron’s output You might be wondering, what is the special formula for calculating the neuron’s output? First we take the weighted sum of the neuron’s inputs, which is: Next we normalise this, so the result is between 0 and 1. For this, we use a mathematically convenient function, called the Sigmoid function: If plotted on a graph, the Sigmoid function draws an S shaped curve. So by substituting the first equation into the second, the final formula for the output of the neuron is: You might have noticed that we’re not using a minimum firing threshold, to keep things simple. Formula for adjusting the weights During the training cycle (Diagram 3), we adjust the weights. But how much do we adjust the weights by? We can use the “Error Weighted Derivative” formula: Why this formula? First we want to make the adjustment proportional to the size of the error. Secondly, we multiply by the input, which is either a 0 or a 1. If the input is 0, the weight isn’t adjusted. Finally, we multiply by the gradient of the Sigmoid curve (Diagram 4). To understand this last one, consider that: The gradient of the Sigmoid curve, can be found by taking the derivative: So by substituting the second equation into the first equation, the final formula for adjusting the weights is: There are alternative formulae, which would allow the neuron to learn more quickly, but this one has the advantage of being fairly simple. 
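The code listings embedded in the original post do not survive in this text-only version, so the following is a minimal sketch consistent with the formulas just described: the weighted sum of the inputs, the Sigmoid function, and the error-weighted-derivative adjustment, applied to a toy training set in which the output equals the leftmost input. The example rows, variable names and fixed random seed are illustrative choices of this sketch rather than copied from the article; the author’s canonical listing is at the GitHub link given in the next section.

```python
from numpy import exp, array, random, dot

# A training set matching the described pattern: four examples, three inputs each,
# with the output equal to the value of the leftmost input.
training_set_inputs = array([[0, 0, 1],
                             [1, 1, 1],
                             [1, 0, 1],
                             [0, 1, 1]])
training_set_outputs = array([[0, 1, 1, 0]]).T  # .T turns the row into a column

random.seed(1)                                    # make the run repeatable
synaptic_weights = 2 * random.random((3, 1)) - 1  # random weights in [-1, 1]

for iteration in range(10000):
    # Pass the weighted sum of the inputs through the Sigmoid function.
    output = 1 / (1 + exp(-dot(training_set_inputs, synaptic_weights)))
    # Error-weighted derivative: error * input * gradient of the Sigmoid curve.
    error = training_set_outputs - output
    adjustment = dot(training_set_inputs.T, error * output * (1 - output))
    synaptic_weights += adjustment

# Consider a new situation [1, 0, 0]: the prediction should be very close to 1.
print(1 / (1 + exp(-dot(array([1, 0, 0]), synaptic_weights))))
```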
Constructing the Python code Although we won’t use a neural network library, we will import four methods from a Python mathematics library called numpy. These are: For example, we can use the array() method to represent the training set shown earlier: The ‘.T’ function transposes the matrix from horizontal to vertical. So the computer is storing the numbers like this. Ok. I think we’re ready for the more beautiful version of the source code. Once I’ve given it to you, I’ll conclude with some final thoughts. I have added comments to my source code to explain everything, line by line. Note that in each iteration we process the entire training set simultaneously. Therefore our variables are matrices, which are grids of numbers. Here is a complete working example written in Python: Also available here: https://github.com/miloharper/simple-neural-network Final thoughts Try running the neural network using this Terminal command: python main.py You should get a result that looks like: We did it! We built a simple neural network using Python! First the neural network assigned itself random weights, then trained itself using the training set. Then it considered a new situation [1, 0, 0] and predicted 0.99993704. The correct answer was 1. So very close! Traditional computer programs normally can’t learn. What’s amazing about neural networks is that they can learn, adapt and respond to new situations. Just like the human mind. Of course that was just 1 neuron performing a very simple task. But what if we hooked millions of these neurons together? Could we one day create something conscious? I’ve been inspired by the huge response this article has received. I’m considering creating an online course. Click here to tell me what topic to cover. I’d love to hear your feedback. Studied Economics at Oxford University. Founder of www.moju.io. Interested in politics and AI.
Arik Sosman
1.5K
7
https://blog.arik.io/facebook-m-the-anti-turing-test-74c5af19987c?source=tag_archive---------1----------------
Facebook M — The Anti-Turing Test – Arik’s Blog
Facebook has recently launched a limited beta of its ground-breaking AI called M. M’s capabilities far exceed those of any competing AI. Where some AIs would be hard-pressed to tell you the weather conditions for more than one location (god forbid you go on a trip), M will tell you the weather forecast for every point on your route at the time you’re expected to get there, and also provide you with convenient gas station suggestions, account for traffic in its estimations, and provide you with options for food and entertainment at your destination. As many people have pointed out, there have been press releases stating that M is human-aided. However, the point of this article is not to figure out whether or not there are humans behind it, but to indisputably prove it. When communicating with M, it insists it’s an AI, and that it lives right inside Messenger. However, its non-instantaneous nature and the sheer unlimited complexity of tasks it can handle suggest otherwise. The opinion is split as to whether or not it’s a real AI, and there seems to be no way of proving its nature one way or the other. The biggest issue with trying to prove whether or not M is an AI is that, contrary to other AIs that pretend to be human, M insists it’s an AI. Thus, what we would be testing for is humans pretending to be an AI, which is much harder to test than the other way round, because it’s much easier for humans to pretend to be an AI than for an AI to pretend to be a human. In this situation, a Turing test is futile, because M’s objective is precisely to not pass a Turing test. So what we want to prove is not the limitations of the AI, but the limitlessness of the (alleged) humans behind it. What we need therefore is a different test. An “Anti-Turing” test, if you will. As it happens, I did find a way of proving M’s nature. But good storytelling mandates that first, I describe my laborious path to the result, and the inconclusive experiments I had to conduct before I finally got a definitive answer. When I first got M, our conversation started like this: “I use artificial intelligence, but people help train me,” was M’s response to my question regarding its nature. That can mean many things, because using AI is not the same as being a completely autonomous AI. So I kept bugging it about its nature. Some people opined that what M refers to as AI is that there are people typing out all the responses, but the tool that helps them do that is based on machine learning. However, directly asking about that didn’t yield any new insights. M’s assertiveness regarding its nature is set in stone. Nonetheless, there were some minor tells that arguably betrayed the underlying human nature of this chatbot. To test its limit, I have asked it to perform a set of complicated tasks for me that no other AI out there could pull off. I told it where I work, and then slightly modified my request. And indeed, it responded! The most noteworthy aspect of this reply is that “Google Maps” wasn’t capitalized, suggesting that maybe, just maybe, a human typed it out in a hurry. And indeed, even with some other requests, its responses have proven not to be as impeccable as the ones we’re used to from Siri. 
For instance, when I asked it to find some nice wallpapers for me taken from the Berkeley stadium depicting the Bay Area at night, preferably with the Bay Bridge, the Transamerica Pyramid, and the Sather Tower being in the picture, M did manage to find some very nice wallpapers for me, but it said that it couldn’t find any with the Campanile. As consolation, though, it said it would let me know if it found any that fit my criteria more precisely: Now, the first issue with the above response is that the wallpapers it sent me did have the Transamerica Pyramid, and M knew they did. What they didn’t have was the Sather Tower, so why is it saying it’s going to let me know about pictures with the Transamerica Pyramid? The second issue is that it’s called the “Transamerica Pyramid,” not the “Transamerican Pyramid.” And lastly, note the two “with”s and the “I’l”. It has made two typos! And indeed, that was not the only time it did: While a lot of humans struggle with the distinction between “its” and “it’s,” for an AI, that should not have been an issue. Even so, it might have been trained wrong, so as such, these lapses are not sufficiently conclusive. Even the delayed responses I mentioned earlier could have been deliberate, including the fact that there’s a typing indicator shown when M is preparing a response, rather than sending the whole string instantaneously as a regular AI would. The results and indications so far didn’t satisfy me, so I was still looking for a way to prove that there are real humans behind M. Just how could I make them come out, make them show themselves? As it happens, the answer came to me at a time when I wasn’t actively looking for it. The movie in Cupertino ended rather late, and I asked M whether there was any place I could get dinner at afterwards that would still be open at that time. There were only two places open, but I wasn’t sure whether their kitchen would still be open, too. Thus, I asked M whether it could call them and figure that out. And indeed, it said it could! So I asked M whether it could call my friends (nope). Whether it could call me (nope). Apparently, it could only call businesses for me, but not individuals. So what do I do? I make up a business and ask M to call it. So M asked me for the phone number, and I simply gave it mine. About five minutes later, I receive a call with no caller ID. When I pick up, I hear some rumbling noises in the background, say “hello,” and then the other end hangs up. Immediately afterward, the following exchange happens with M: Unfortunately, I didn’t have a landline phone number, so I was a bit disappointed that not even this experiment could prove M’s nature. A few days later, I had to get some work done during the weekend, and while at the office, I realized that the company did have one. The experiment had to be repeated! About three minutes later, we get a phone call in the conference room. When I pick up, a distinctively human, female voice says, “Hello?” As it happens, I had accidentally set the phone to mute before that, so she didn’t hear me saying the company name. Still, the voice was most definitely human. And because the reader shouldn’t be taking me at face value, I made a recording of that whole encounter: Immediately afterward, M sends a reply. What’s more, it appears to me that they forgot to block the caller ID for that particular call, because I got to see the phone number they were calling from. So there, very clearly, M was calling from +1 (650) 796–2402. 
As can be seen in the photo, the automatic reverse-lookup matched that number to Facebook. Thus, here we are. We have definitive proof that M is powered by humans. The next question is: Is it only humans, or is there at least some AI-driven component behind it? As to this problem, I’ll leave it as a homework assignment for the reader to figure out. In the meantime, I shall enjoy having my own free personal (human) assistant. Software Engineer @BitGo. My experimental blog.
Tony Aubé
4.5K
8
https://medium.com/swlh/no-ui-is-the-new-ui-ab3f7ecec6b3?source=tag_archive---------2----------------
No UI is the New UI – The Startup – Medium
On the rise of UI-less apps and why you should care about them as a designer. October 23, 2015 • 8 minutes read A couple of months ago, I shared with my friends how I think apps like Magic and Operator are going to be the next big thing. If you don’t know about these apps, what makes them special is that they don’t use a traditional UI as a means of interaction. Instead, the entire app revolves around a single messaging screen. These are called ‘Invisible’ and ‘Conversational’ apps, and since my initial post, a slew of similar apps came to market. Even as of writing this, Facebook is releasing M, a personal assistant that’s integrated with Messenger to help you do just about anything. While these apps operate in a slew of different markets, from checking your bank account, scheduling a meeting, making a reservation at the best restaurant to being your travel assistant, they all have one thing in common: they place messaging at center stage. Matti Makkonen is a software engineer who passed away a couple of months ago. My guess is that you didn’t hear of his death, and you most likely don’t know who he was. However, Makkonen is probably one of the most important individuals in the domain of communications. And I mean — on the level of Alexander Bell — important. He is the inventor of SMS. If you didn’t realize how pervasive SMS has become today, think again. SMS is the most used application in the world. Three years ago, it had an estimated 4 billion active users. That was over four times the number of Facebook users at the time. Messaging, and particularly SMS, has been slowly taking over the world. It is now fundamental to human communication, and it is why messaging apps such as WhatsApp and WeChat are now worth billions. While messaging has become central to our everyday life, it’s currently only used in the narrow context of personal communications. What if we could extend messaging beyond this? What if messaging could transform the way we interact with computers the same way it transformed the way we interact with each other? In the recent movie Ex Machina, a billionaire creates Ava, a female-looking robot endowed with artificial intelligence. To test his invention, he brings in a young engineer to see if he could fall in love with her. The whole premise of the movie is centered around the Turing test, a test invented by Alan Turing (also featured in the recent movie The Imitation Game) in order to determine if an artificial intelligence is equivalent to that of a human. A robot passing the Turing test would have huge implications for humanity, as it would mean that artificial intelligence has reached human level. While we are far from creating robots that can look and act like humans such as Ava, we’ve gotten pretty good at simulating human intelligence in narrow contexts. And one of those contexts where AI performs best is, you’ve guessed it, messaging. This is thanks to deep learning, a process where the computer is taught to understand and solve a problem by itself, rather than having engineers code the solution. Deep learning is a complete game changer. It allowed AI to reach new heights previously thought to be decades away. Nowadays, computers can hear, see, read and understand humans better than ever before. This is opening a world of opportunities for AI-powered apps, toward which entrepreneurs are rushing. In this gold rush, messaging is the low-hanging fruit. This is because, out of all the possible forms of input, digital text is the most direct one. 
Text is constant, it doesn’t carry all the ambiguous information that other forms of communication do, such as voice or gestures. Furthermore, messaging makes for a better user experience than traditional apps because it feels natural and familiar. When messaging becomes the UI, you don’t need to deal with a constant stream of new interfaces all filled with different menus, buttons and labels. This explains the current rise in popularity of invisible and conversational apps, but the reason you should care about them goes beyond that. The rise in popularity of these apps recently brought me to a startling observation : advances in technology, especially in AI, are increasingly making traditional UI irrelevant. As much as I dislike it, I now believe that technology progress will eventually make UI a tool of the past, something no longer essential for Human-Computer interaction. And that is a good thing. One could argue that conversational and invisible apps aren’t devoid of UI. After all, they still require a screen and a chat interface. While it is true that these apps do require UI design to some extent, I believe these are just the tip of the iceberg. Beyond them, new technologies have the potential to disrupt the screen entirely. To my point, have a look at the following videos: The first video showcases project Soli, a small Radar chip created by Google to allow fine gesture recognition. The second one presents Emotiv, a product that can read your brainwaves and understand their meaning through — bear with me — electroencephalography (or EEG for short). While both technologies seem completely magical, they are not. They are currently functional and have something very special in common: they don’t require a UI for computer input. As a designer, this is an unsettling trend to internalize. In a world where computer can see, listen, talk, understand and reply to you, what is the purpose of a user interface? Why bother designing an app to manage your bank account when you could just talk to it directly? Beyond human-interface interaction, we are entering the world of Brain-Computer Interaction. In this world, digital-telepathy coupled with AI and other means of input could allow us to communicate directly with computer, without the need for a screen. In his talk at CHI 2014, Scott Jenson introduced the concept of a technological tiller. According to him, a technological tiller is when we stick an old design onto a new technology wrongly thinking it will work out. The term is derived from a boat tiller, which was, for a long time, the main navigation tool known to man. Hence, when the first cars were invented, rather than having steering wheels as a mean of navigation, they had boat tillers. The resulting cars were horribly hard to control and prone to crash. It was only after the steering wheel was invented and added to the design that cars could become widely used. As a designer, this is a valuable lesson: a change in context or technology most often requires a different design approach. In this example, the new technology of the motor engine needed the new design of the steering wheel to make the resulting product, the car, reach its full potential. When a technological tiller is ignored, it usually leads to product failures. When it is acknowledged and solved, it usually leads to a revolution and tremendous success. 
And if one company best understood this principle, it is Apple, with the invention of the iPhone and the iPad: A technological tiller was Nokia sticking a physical keyboard on top of a phone. Good design was to create a touch screen and digital keyboard. A technological tiller was Microsoft sticking Windows XP on top of a tablet. Good design was to develop a new, finger-friendly OS. And I believe a technological tiller is sticking an iPad screen over every new Internet-of-Things thing. What if good design is about avoiding the screen altogether? Learning about technological tillers teaches us that sticking too much to old perspectives and ideas is a surefire way to fail. The new startups developing invisible and conversational apps understand this. They understand that the UI is not the product itself, but only a scaffolding allowing us to access the product. And if avoiding that scaffolding can lead to a better experience, then it definitely should be avoided. So do I believe that AI is taking over, that UIs are obsolete and that all visual designers will be out of jobs soon? Not really. As far as I know, UI will still be needed for computer output. For the foreseeable future, people will still use screens to read, watch videos, visualize data, and so on. Furthermore, as Nir mentioned in his great article on the subject, conversational apps are currently good at only a specific set of tasks. It is safe to think that this will also be the case for new technologies such as Emotiv and project Soli. As game-changing as these are, they will most likely not be good at everything, and UI will probably outperform them at specific tasks. What I do believe, however, is that these new technologies are going to fundamentally change how we approach design. This is necessary to understand for those planning to have a career in tech. In a future where computers can see, talk, listen and reply to you, what good are going to be your awesome pixel-perfect Sketch skills? Let this be a fair warning against complacency. As UI designers, we have a tendency to presume a UI is the solution to every new design problem. If anything, the AI revolution will force us to reset our presumptions about what it means to design for interaction. It will push us to leave our comfort zone and look at the bigger picture, bringing our focus to the design of the experience rather than the actual screen. And that is an exciting future for designers. 💚 Please hit recommend if you enjoyed or learned from this text. To keep things concise, this text uses the term UI as short for Graphical User Interface. More precisely, it refers to the web and app visual patterns that have become so pervasive in recent years. This text was originally published on TechCrunch on 11/11/2015. Published in #SWLH (Startups, Wanderlust, and Life Hacking) Personal thoughts on the future of design & technology. Lead Design @ Osmo.
Matt O'Leary
373
12
https://howwegettonext.com/i-let-ibm-s-robot-chef-tell-me-what-to-cook-for-a-week-d881fc884748?source=tag_archive---------3----------------
I Let IBM’s Robot Chef Tell Me What to Cook for a Week
Originally published at www.howwegettonext.com. If you’ve been following IBM’s Watson project and like food, you may have noticed growing excitement among chefs, gourmands and molecular gastronomists about one aspect of its development. The main Watson project is an artificial intelligence that engineers have built to answer questions in native language — that is, questions phrased the way people normally talk, not in the stilted way a search engine like Google understands them. And so far, it’s worked: Watson has been helping nurses and doctors diagnose illnesses, and it’s also managed a major “Jeopardy!” win. Now, Chef Watson — developed alongside Bon Appetit magazine and several of the world’s finest flavor-profilers — has been launched in beta, enabling you to mash recipes according to ingredients of your own choosing and receive taste-matching advice which, reportedly, can’t fail. While some of the world’s foremost tech luminaries and conspiracy theorists are a bit skeptical about the wiseness of A.I., if it’s going to be used at all, allowing it to tell you what to make out of a fridge full of unloved leftovers seems like an inoffensive enough place to start. I decided to put it to the test. While employed as a food writer for well over a decade, I’ve also spent a good part of the last nine years working on and off in kitchens. Figuring out how to use “spare” ingredients has become quite commonplace in my professional life. I’ve also developed a healthy disregard for recipes as anything other than sources of inspiration (or annoyance) but for the purposes of this experiment am willing to follow along and try any ingredient at least once. So, with this in mind, I’m going to let Watson tell me what to eat for a week. I’ve spent a good amount of time playing around with the app, which can be found here, and I’m going to follow its instructions to the letter where possible. I have an audience of willing testers for the food and intend to do my best in recreating its recipes on the plate. Still, I’m going to try to test it a bit. I want to see whether or not it can save me time in the kitchen; also, whether it has any amazing suggestions for dazzling taste matches; if it can help me use things up in the fridge; and whether or not it’s going to try to get me to buy a load of stuff I don’t really need. A lot of work has gone into the creation of this app — and a lot of expertise. But is it useable? Can human beings understand its recipes? Will we want to eat them? Let’s find out. A disclaimer before we start: Chef Watson isn’t great at telling you when stuff is actually ready and cooked. You need to use your common sense. Take all of its advice as advice and inspiration only. It’s the flavors that really count. Monday: The Tailgating Corn Salmon Sandwich My first impression is that the app is intuitive and pretty simple to use. Once you’ve added an ingredient, it suggests a number of flavor matches, types of dishes and “moods” (including some off-the-wall ones like “Mother’s Day”). Choose a few of these options and the actual recipes begin to bunch up on the right of the screen. I selected salmon and corn, then opted for the wildly suggestive “Tailgating corn salmon sandwich.” The recipe page itself has links to the original Bon Appetit dish that inspired your A.I. mélange, accompanied by a couple of pictures. 
There’s a battery of disclaimers stating that Chef Watson really only wants to suggest ideas, rather than tell you what to eat — presumably to stop people who want to try cooking with fiberglass, for example, from launching “no win, no fee” cases. My own salmon tailgating recipe seemed pretty straightforward. There are a couple of nice touches on the page, with regard to usability: You can swap out any ingredients that you might not have in stock for others, which Watson will suggest (it seems fond of adding celery root to dishes). For this first attempt I decided to follow Watson’s advice almost to a T. I didn’t have any garlic chile sauce but managed to make a presumably functional analog out of some garlic and chili sauce. The only other change I made involved adding some broad beans, because I like broad beans. During prep, I employed a nearly unconscious bit of initiative, namely when I cooked the salmon. It’s entirely likely that Watson was, as seemed to be the case, suggesting that I use raw salmon, but it’s Monday night and I’m not in the mood for anything too mind-bending. Team Watson: If I ruined your tailgater with my pig-headed insistence on cooked fish, I’m sorry. Although I’m not too sorry because, you know, it was actually a really good dish. I was at first unsure — the basil seemed like a bit of an afterthought; I wasn’t sure the lime zest was necessary; and cold salmon salad on a burger bun isn’t really an easy sell. But damn it, I’d make that sandwich again. It was missing some substance overall. It made enough for two small buns, so I teamed it up with a nice bit of Korean-spiced, pickled cucumber on the side, which worked well. My fellow diner deemed it “fine, if a little uninteresting” — and yes, maybe it could have done with a bit more sharpness and depth, and maybe a little more “a computer told me how to make this” flavor wackiness, but overall: Well done. Hint! Definitely add broad beans. They totally worked. Now, to mull over what “tailgating” might mean... Tuesday: Spanish Blood Sausage Porridge It was day two of the Chef Watson “guest slot” in the kitchen, and things were about to get interesting. Buoyed by yesterday’s Tailgating Salmon Sandwich success, I decided to give Watson something to sink its digital teeth into and supply only one ingredient: blood sausage. I also specified “main” as a style, really so that he/she/it knew that I wasn’t expecting dessert. If I’m being very honest, I’ve read more appetizing recipes than blood sausage porridge. Even the inclusion of the word “Spanish” doesn’t do anything to fancy it up. And, a bit concerningly, this is a recipe that Watson has extrapolated from one for Rye Porridge with Morels, replacing the rye with rice, the mushroom with sausage and the original’s chicken livers with a single potato and one tomato. Still, maybe it would be brilliant. But unlike yesterday, I ran into some problems. I wasn’t sure how many tomatoes and potatoes Watson expected me to have here — the ingredients list says one of each; the method suggests many — or also why I had to soak the tomato in boiling water first, although it makes sense in the original mushroom-centric method. Additionally, Wastson offered the whimsical instruction to just “cook” the tomatoes and potatoes, presumably for as long as I feel like. There’s a lot of butter involved in this recipe and rather too much liquid recommended: eight cups of stock for one-and-a-half of rice. I actually got a bit fed up after four and stopped adding them. 
Forty to 50 minutes cooking time was a bit too long, too — again, that’s been directly extracted from the rye recipe. But these were mere trifles. The dish tasted great. It’s a lovely blend of flavors and textures, thanks to the blood sausage and the potato. The butter works brilliantly and the tomato on top is a nice touch. And it proves Watson’s functionality. You can suggest one ingredient that you find in the fridge, use your initiative a bit and you’ll be left with something lovely. And buttery. Lovely and buttery. Well done, Watson! Wednesday: Diner Cod Pizza When I read this recipe, I wondered whether this was going to be it for me and Watson. “Diner,” “cod” and “pizza” are three words that don’t really belong together, and the ingredients list seemed more like a supermarket sweep than a recipe. Now that I’ve actually made the meal, I don’t know what to think about anything. You might remember a classic 1978 George A. Romero-directed horror film called“Dawn of the Dead.” Its 2004 remake, following the paradigm shift to running zombies in “28 Days Later,” suffered critically. My impression of this remake was always that if it’d just been called something different — “Zombies Go Shopping,” for instance — every single person who saw it would have loved it. As it was, viewers thought it seemed unauthentic, and it gathered what was essentially some unfair criticism. (See also the recent “RoboCop” remake or, as I call it,“CyberSwede vs. Detroit.”) This meal is my culinary “Dawn of the Dead.” If only Watson had called it something other than pizza, it would have been utterly perfect. It emphatically isn’t a pizza. It has as much in common with pizza as cake does. But there’s something about radishes, cod, ginger, olives, tomatoes and green onions on a pizza crust that just work remarkably well. To be clear, I fully expected to throw this meal away. I had the website for curry delivery already open on my phone. That’s all before I ate two of the pizzas. They taste like nothing on earth. The addition of Comté cheese and chives is the sort of genius/absurdity that makes people into millionaires. I was, however, nervous to give one to my pregnant fiancée; the ingredients are so weird that I was just sure she’d suffer some really strange psychic reaction or that the baby would grow up to be extremely contrary. Be careful with this recipe preparation: As I’ve found with Watson, it doesn’t tell you how to assure that your fish is cooked; nor does it tell you how long to pre-bake the crust base. These kinds of things are really important. You need to make sure this dish is cooked properly. It takes longer than you might expect. I’m writing this from Sweden, the home of the ridiculous “pizza,” and yet I have a feeling that if I were to show this recipe to a chef who ordinarily thinks nothing of piling a kilo of kebab meat and Béarnaise sauce on bread and serving it in a cardboard box with a side salad of fermented cabbage, he or she would balk and tell me that I’ve gone too far. Which would be his or her loss. I think I’m going to have to take this to “Dragon’s Den” instead. Watson, I don’t know how I’m going to cope with normal recipes after our little holiday together. You’re changing the way I think about food. Thursday: Fall Celery Sour Cream Parsley Lemon Taco Following yesterday’s culinary epiphany, I was keen to keep a cool head and a critical eye on Chef Watson, so I decided to road-test one theory from an article I found on the Internet. 
It mentioned that some of the most frequently discarded items in American fridges are celery, sour cream, fresh herbs and lemons. Let’s not dwell too much on the “luxury problems” aspect of this (I can’t imagine that people everywhere in the world are lamenting the amount of sour cream and flat-leaf parsley they toss) and focus instead on what Watson can do with this admittedly tricky-sounding shopping list. What it did was this: Immediately add shrimp, tortillas and salsa verde. The salsa verde it recommended, from an un-Watsoned recipe courtesy of Bon Appetit, was fantastic. It’s nothing like the salsa verde I know and love, with its capers and dill pickles and anchovies: This iteration required a bit of a simmer, was super-spicy and delicious. (I had to cheat and use normal tomatoes instead of tomatillos, but I don’t think it made a huge difference.) The marinade for the shrimp was unusual in that like a lot of what Watson recommends it used a ton of butter. A hefty wallop of our old friend kosher salt, too. Now, I’ve worked as a chef on and off for several years so am unfazed by the appearance of salt and butter in recipes. They’re how you make things taste nice. However, there’s no getting away from the fact that I bought a stick of butter at the start of the week and it’s already gone. The assembled tacos were good — they were uncontroversial. My dining companion deemed the salsa “a bit too spicy,” but I liked the kick it gave the dish and the sour cream calmed it down a bit. It struck me as a bit of a shame to fire up the barbecue for only about two minutes’ worth of cooking time, but it’s May and the sun is shining so what the heck. Was this recipe as absurd as yesterday’s? Absolutely not. Was it as memorable? Sadly, I don’t think so. Would I make it again? I’m sorry, Watson, but probably not. These tacos were good but ultimately not worth the prep hassle. Friday: Mexican Mushroom Lasagna Before I start, I don’t want you to get the impression that my love affair (which reached the height of its passion on Wednesday) with Watson is over. It absolutely isn’t. I have been consistently impressed with the software’s intelligence, its ease of use and the audacity of some of its suggestions. For flavor-matching, it’s incredible. It really works. It probably won’t save you any money; it won’t make you thin; and it won’t teach you how to actually cook — all of that stuff you have to work out for yourself. But, at this stage, it’s a distinctly impressive and worthwhile project. Do give it a go. But... be prepared to have to coax something workable out of it every once in a while. Today, it took me a long time to find a meat-free recipe which didn’t, when it came down to it, contain some sort of meat. I selected “meat” as an option for what I didn’t want to include, and it took me to a recipe for sausage lasagne. With one-and-a-half pounds of sausage in it. I removed the sausage, and it replaced it with turkey mince. Maybe someone just needs to tell Watson that neither sausages nor turkeys grow on trees. After much tinkering and submitting and resubmitting, the recipe I ended up with is for lasagne topped with a sort of creamy mashed potato sauce. It’s very easy and it’s a profoundly smart use of ingredients. The lasagne is not the world’s most aesthetically appealing dish, and it’s not as astonishingly flavored as some of this week’s other revelations, but I don’t think I’ll be making my cheese sauce in any other way from this point onwards. Top marks. 
And, in essence, this kind of sums up Watson for me. You need to tinker with it a bit before you can find something usable. You may need to make a “do I want to put mashed potato on this lasagne?” leap of faith, and you’re going to have to actually go with it if you want the app’s full benefit. You’ll consume a lot of dairy products, and you might find yourself daydreaming about nice, simple, unadorned salads if you decide to go all-in with its suggestions. But an A.I. that can tell us how to make a pizza out of cod, ginger and radishes that you know is going to taste amazing? One that will gladly suggest a workable recipe for blood sausage porridge and walk you through it without too much hassle? That gives you a “how crazy” option for each ingredient? That is only designed to make the lives of food enthusiasts more interesting? Why on earth not? Watson and I are going to be good friends from this point forward, even if we don’t speak every day. And I can’t wait to introduce it to others. Now, though, I’m going to only consume smoothies for a week. Seriously, if I even look at butter in the next few days, I’m probably going to puke. This fall, Medium and How We Get To Next are exploring the future of food and what it means for us all. To get the latest and join the conversation, you can follow Future of Food. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Inspiring stories about the people and places building our future. Created by Steven Johnson, edited by Ian Steadman, Duncan Geere, Anjali Ramachandran, and Elizabeth Minkel. Supported by the Gates Foundation.
Tanay Jaipuria
1.1K
5
https://medium.com/@tanayj/self-driving-cars-and-the-trolley-problem-5363b86cb82d?source=tag_archive---------4----------------
Self-driving cars and the Trolley problem – Tanay Jaipuria – Medium
Google recently announced that their self-driving car has driven more than a million miles. According to Morgan Stanley, self-driving cars will be commonplace in society by ~2025. This got me thinking about the ethics and philosophy behind these cars, which is what this piece is about. In 1942, Isaac Asimov introduced three laws of robotics in his short story “Runaround”. They were as follows: He later added a fourth law, the zeroth law: 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm. Though fictional, they provide a good philosophical grounding for how AI can coexist with society. If self-driving cars were to follow them, we’d be in a pretty good spot, right? (Let’s leave aside the argument that self-driving cars lead to the loss of taxi drivers’ and truck drivers’ jobs and so should not exist per the 0th/1st law.) However, there’s one problem which the laws of robotics don’t quite address. It’s a famous thought experiment in philosophy called the Trolley Problem and goes as follows: It’s not hard to see how a similar situation would come up in a world with self-driving cars, with the car having to make a similar decision. Say for example a human-driven car runs a red light and a self-driving car has two options: What should the car do? From a utilitarian perspective, the answer is obvious: to turn right (or “pull the lever”), leading to the death of only one person as opposed to five. Incidentally, in a survey of professional philosophers on the Trolley Problem, 68.2% agreed, saying that one should pull the lever. So maybe this “problem” isn’t a problem at all, and the answer is simply to do the utilitarian thing that delivers the “greatest happiness to the greatest number”. But can you imagine a world in which your life could be sacrificed at any moment, for no wrongdoing, to save the lives of two others? Now consider this version of the trolley problem involving a fat man: Most people who go the utilitarian route in the initial problem say they wouldn’t push the fat man. But from a utilitarian perspective there is no difference between this and the initial problem — so why do they change their mind? And is the right answer to “stay the course” then? Kant’s categorical imperative goes some way to explaining it: In simple words, it says that we shouldn’t merely use people as means to an end. And so, killing someone for the sole purpose of saving others is not okay, and would be a no-no by Kant’s categorical imperative. Another issue with utilitarianism is that it is a bit naive, at least as we have defined it. The world is complex, and so the answer is rarely as simple as performing the action that saves the most people. What if, going back to the example of the car, instead of a family of five, the car that ran the red light held five bank robbers speeding away after robbing a bank? And sitting in the other car was a prominent scientist who had just made a breakthrough in curing cancer? Would you still want the car to perform the action that simply saves the most people? So maybe we fix that by making the definition of utilitarianism more intricate, in that we assign a value to each individual’s life. In that case the right answer could still be to kill the five robbers, if, say, our estimate of the utility of the scientist’s life was greater than that of the five robbers combined. But can you imagine a world in which, say, Google or Apple places a value on each of our lives, which could be used at any moment to turn a car into us to save others? Would you be okay with that? 
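Just to show how blunt that kind of arithmetic is, here is a toy sketch of a “value-weighted utilitarian” chooser. It is my own illustration, not anything a carmaker actually runs, and every name and number in it is invented.

    # Toy sketch of a "value-weighted utilitarian" rule; all names and numbers are invented.
    def expected_harm(option):
        # Total assigned "value" of the lives lost under this option.
        return sum(person["value"] for person in option["casualties"])

    def choose(options):
        # Pick whichever option destroys the least total "value".
        return min(options, key=expected_harm)

    options = [
        {"name": "stay the course", "casualties": [{"who": "robber", "value": 1.0}] * 5},
        {"name": "swerve", "casualties": [{"who": "scientist", "value": 10.0}]},
    ]
    print(choose(options)["name"])  # whoever set value=10.0 has already decided the outcome

The unsettling part is not the code; it is deciding who gets to fill in those value numbers. 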
And so there you have it: though the answer seems simple, it is anything but, which is what makes the problem so interesting and so hard. It is a question that will come up time and time again as self-driving cars become a reality. Google, Apple, Uber and others will probably have to come up with an answer. To pull, or not to pull? Lastly, I want to leave you with another question that will need to be answered, that of ownership. Say a self-driving car with one passenger in it, the “owner”, skids in the rain and is about to crash into a car in front, pushing that car off a cliff. It can either take a sharp turn and fall off the cliff itself, or continue going straight, sending the other car off the cliff. Both cars have one passenger. What should the car do? Should it favor the person who bought it — its owner? Thanks for reading! Feel free to share this post and leave a note/write a response to share your thoughts. I’m tanayj on twitter if you want to discuss further! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Product @Facebook. Previously @McKinsey. I like tech, econ, strategy and @Manutd. Views and banter my own.
Milo Spencer-Harper
2.2K
3
https://medium.com/technology-invention-and-more/how-to-build-a-multi-layered-neural-network-in-python-53ec3d1d326a?source=tag_archive---------5----------------
How to build a multi-layered neural network in Python
In my last blog post, thanks to an excellent blog post by Andrew Trask, I learned how to build a neural network for the first time. It was super simple. 9 lines of Python code modelling the behaviour of a single neuron. But what if we are faced with a more difficult problem? Can you guess what the ‘?’ should be? The trick is to notice that the third column is irrelevant, but the first two columns exhibit the behaviour of a XOR gate. If either the first column or the second column is 1, then the output is 1. However, if both columns are 0 or both columns are 1, then the output is 0. So the correct answer is 0. However, this would be too much for our single neuron to handle. This is considered a “nonlinear pattern” because there is no direct one-to-one relationship between the inputs and the output. Instead, we must create an additional hidden layer, consisting of four neurons (Layer 1). This layer enables the neural network to think about combinations of inputs. You can see from the diagram that the output of Layer 1 feeds into Layer 2. It is now possible for the neural network to discover correlations between the output of Layer 1 and the output in the training set. As the neural network learns, it will amplify those correlations by adjusting the weights in both layers. In fact, image recognition is very similar. There is no direct relationship between pixels and apples. But there is a direct relationship between combinations of pixels and apples. The process of adding more layers to a neural network, so it can think about combinations, is called “deep learning”. Ok, are we ready for the Python code? First I’ll give you the code and then I’ll explain further. Also available here: https://github.com/miloharper/multi-layer-neural-network This code is an adaptation from my previous neural network. So for a more comprehensive explanation, it’s worth looking back at my earlier blog post. What’s different this time, is that there are multiple layers. When the neural network calculates the error in layer 2, it propagates the error backwards to layer 1, adjusting the weights as it goes. This is called “back propagation”. Ok, let’s try running it using the Terminal command: python main.py You should get a result that looks like this: First the neural network assigned herself random weights to her synaptic connections, then she trained herself using the training set. Then she considered a new situation [1, 1, 0] that she hadn’t seen before and predicted 0.0078876. The correct answer is 0. So she was pretty close! You might have noticed that as my neural network has become smarter I’ve inadvertently personified her by using “she” instead of “it”. That’s pretty cool. But the computer is doing lots of matrix multiplication behind the scenes, which is hard to visualise. In my next blog post, I’ll visually represent our neural network with an animated diagram of her neurons and synaptic connections, so we can see her thinking. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Studied Economics at Oxford University. Founder of www.moju.io. Interested in politics and AI. Technology trends and New Invention? Follow this collection to update the latest trend! [UPDATE] As a collection editor, I don’t have any permission to add your articles in the wild. Please submit your article and I will approve. Also, follow this collection, please.
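The code listing the post refers to did not survive the copy above (it lives at the GitHub link), so here is a minimal stand-in sketch of the same idea: a two-layer network with a sigmoid activation, trained with basic back-propagation on the XOR-of-the-first-two-columns problem the post describes. The exact training rows, the iteration count and the four-neuron hidden layer follow the description, but this is an illustrative reconstruction, not the author's exact code.

    import numpy as np

    # Training set in the spirit of the post: the output is the XOR of the first two columns.
    X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]])
    y = np.array([[0, 1, 1, 1, 1, 0, 0]]).T

    def sigmoid(x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(x):
        # x is assumed to already be a sigmoid output.
        return x * (1 - x)

    np.random.seed(1)
    # Layer 1: 3 inputs -> 4 hidden neurons. Layer 2: 4 hidden -> 1 output.
    weights1 = 2 * np.random.random((3, 4)) - 1
    weights2 = 2 * np.random.random((4, 1)) - 1

    for _ in range(60000):
        # Forward pass through both layers.
        layer1 = sigmoid(np.dot(X, weights1))
        layer2 = sigmoid(np.dot(layer1, weights2))
        # Back-propagate the error from layer 2 into layer 1 and adjust both sets of weights.
        layer2_delta = (y - layer2) * sigmoid_derivative(layer2)
        layer1_delta = layer2_delta.dot(weights2.T) * sigmoid_derivative(layer1)
        weights2 += layer1.T.dot(layer2_delta)
        weights1 += X.T.dot(layer1_delta)

    # New situation [1, 1, 0]: the expected answer is 0.
    print(sigmoid(np.dot(sigmoid(np.dot(np.array([1, 1, 0]), weights1)), weights2)))

Run it and the prediction for [1, 1, 0] should come out close to 0, echoing the 0.0078876 the author reports.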
Ben Brown
1.1K
7
https://blog.howdy.ai/what-will-the-automated-workplace-look-like-495f9d1e87da?source=tag_archive---------6----------------
Start automating your business tasks with Slack – Howdy
If you haven’t read about it in the Times or heard about it on NPR yet,you are soon going to be replaced by a robot at your job. All the jobs we thought were safe because they required experience and nuance can now be done by computers. Martin Ford, author of the book the Times and NPR are reporting on, calls it “the threat of a jobless future.” A future where computers write our newspaper articles, create our legal contracts, and compose our symphonies. Automating this type of complicated, quasi-creative task is really impressive. It requires super computers and uses the forefront of artificial intelligence to achieve this shocking result. It requires tons of data and lots of programming using advanced systems not available to ordinary people. But not everything requires deep learning. Some of the things we do in our every day lives, especially at our jobs, can be automated. Though it used to be the domain of the geek, scripting and automation is invading all aspects of the workplace. Workers and organizations who can master scripting and automation will gain an edge on those who can’t. We all have to face the reality that a well built script might be faster and more reliable than we can be at some parts of our jobs. Those of us who can create and wield this type of tool will be able to do better work faster. Luckily, inside messaging tools like Slack, creating customized, interactive automation tools for business tasks is possible with a little open source code, some cloud tools that are mostly free, and a bit of self reflection. “Bots” are apps that live alongside users in a chatroom. Users can issue commands to bots by sending messages to them, or by using special keywords in the chatroom. Traditionally, bots have been used for things like server maintenance and running software tests, but now, using the connected devices all around us, nearly anything can be automated and controlled by a bot. A common task in many technology teams is the stand-up meeting. Everyone stands up, and one at a time, tells the team what they’ve been working on, what they’ve got coming up next, and any problems they are facing. Each person takes a few minutes to speak. In many teams, this is already taking place in a chat room. If there are 10 people on a team, and each person speaks for just 90 seconds, they’ll spend 15 minutes just bringing people up to speed. Nothing has been discussed, no problems have yet been solved. What happens if this process is automated using a “bot” in an environment like Slack? A stand-up is triggered — automatically, or by a project manager. Using a flexible script, the bot simultaneously reaches out to every member of the team via a private message on Slack. The bot has an interactive conversation with each team member in parallel and collects everyone’s responses. Everyone still spends 90 seconds talking about their work, but now it is the same 90 seconds. The bot, now finished collecting the checkin responses, shares its report with all the stakeholders. Just 2 minutes into the meeting, everyone involved has a single document to look at that contains the up to date status of the project. The team gains 13 minutes during which they can discuss this information, clear blockers, and get back to work. Now, this is admittedly an aggressive application of this approach that won’t work for everyone — some teams may need the sequential listing of updates, some teams may need to actually stand up and use their voices. 
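Concretely, the whole flow above boils down to surprisingly little code. Here is a hedged sketch in Python of just the orchestration: the two helpers send_dm() and wait_for_reply() are hypothetical placeholders for whatever Slack client library you use, and the team names, questions and report format are invented for the example.

    # Hedged sketch of a parallel stand-up bot; only the orchestration logic is the point.
    from concurrent.futures import ThreadPoolExecutor

    TEAM = ["alice", "bob", "carol"]          # hypothetical user names
    QUESTIONS = ["What did you finish?", "What's next?", "Anything blocking you?"]
    TIMEOUT_SECONDS = 5 * 60                  # give each person five minutes to answer

    def send_dm(recipient, text):
        # Placeholder: in a real bot this would call your Slack client.
        print("DM to %s: %s" % (recipient, text))

    def wait_for_reply(user, timeout):
        # Placeholder: in a real bot this would block on an incoming-message event.
        return "done (canned demo answer)"

    def check_in(user):
        answers = []
        for question in QUESTIONS:
            send_dm(user, question)
            reply = wait_for_reply(user, timeout=TIMEOUT_SECONDS)
            if reply is None:
                # An ELSE for every IF: record the silence instead of hanging forever.
                return (user, "no response -- please check in in person")
            answers.append(reply)
        return (user, " / ".join(answers))

    def run_standup():
        # Talk to everyone in parallel, so 90 seconds each becomes 90 seconds total.
        with ThreadPoolExecutor(max_workers=len(TEAM)) as pool:
            report = list(pool.map(check_in, TEAM))
        summary = "\n".join("%s: %s" % (user, status) for user, status in report)
        send_dm("#project-channel", "Stand-up report:\n" + summary)

    run_standup()

The interesting part is not the Slack plumbing but the shape of the script: ask everyone at once, handle the people who go quiet, and hand back one report. 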
The point I’m trying to make is that automating things like this exposes ways for the work to be improved, for time to be saved, and for the process to evolve. What other processes could be automated like this? What if there was a meeting runner bot that automatically sent out an agenda to all attendees before the meeting, then collected, collated and delivered updates to team members? It could make meetings shorter and more productive by reducing the time needed to bring everyone up to speed. What if there was an HR bot that could collect performance reviews and feedback? What if there was a task management bot that could not only manage the creation of tasks and lists, but also create and deliver up to date progress reports to the whole team? There is a lot to be gained with simple process automation like this! So how can you and your organization benefit from this type of automation tool? First, you’ll need to commit to adopting a tool like Slack where your team can communicate and use this type of bot. Then, you’ll have to customize Slack to take advantage of built in and custom integrations, which takes some programming — though not much, as there are a ton of open source tools ready to use. An organization like my company XOXCO can help you do this. Before you can automate something, you have to know the process and be able to write it down in detail. You’ll have to think about all the special cases that occur. Not only will this allow you to build an automation script, it will help you to hone and document the processes by which your business is conducted! When we do things, we do them one at a time. Robots can do lots of things at once — so once you’ve got your process documented, think about how the steps might be able to run in parallel. For example, could the bot talk to multiple people at once instead of doing it sequentially? Since your script can only do what you tell it, you’ll need to plan for the contingencies that might occur while it runs. What if someone doesn’t respond in time? What if information is unavailable? What if a step in the process fails? Think through these cases and prepare your script to handle them. For example, we built in a 5 minute timeout for our project manager bot — if a user doesn’t respond in 5 minutes, they get a reminder to checkin in person, and their lack of a response is indicated in the report. This may sound complicated, but when it boils down, we’re just talking about including an ELSE for every IF — a good practice for any software or process to incorporate. Your bots, once deployed, can become valuable members of your team. Their success is dependent on your team’s desire to use them, and that they provide a better, faster, more reliable way to achieve organizational goals. Bots should have a user-friendly personality and represent and support company culture. Bots should talk like real people, but not pretend to be real people. Our rule of thumb: try to be as smart as a puppy, which will engender an attitude of forgiveness when the bot does something not quite right. This type of software automation has been common in certain groups for years. There may already be a software automation expert in your midst. She’s probably part of the server administration team, or the quality assurance group. Right now she works on code deployment, or writes software tests. 
Go find her, and go put her in a room with a project manager and a content strategist, and see if they can identify and automate the team’s top three time sucking activities in a way that is not only useful but fun to use. When we start to design software for messaging, the entire application must be boiled down to words, without colors to choose, navigation to click and sidebars to fill with widgets. This can help us not only build better, more useful software, but put simply, requires us to run our businesses in a more organized, documented and well-understood way. Don’t wait for the Artificial Intelligence explosion to arrive. Start putting these tools to work today. Update: You can now use a fully realized version of the bot discussed in this post — we’ve launched it under the name Howdy! Add Howdy to your team to run meetings, capture information, and automate common tasks for your team. Read more about our launch here. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. I’m a designer and technologist in Austin, Texas. I co-founded XOXCO in 2008. The official blog of Howdy.ai and Botkit
Frank Diana
428
11
https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf?source=tag_archive---------7----------------
Digital Transformation of Business and Society – Frank Diana – Medium
At a recent KPMG Robotic Innovations event, Futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the Video of his presentation. As Gerd describes, he is a Futurist focused on foresight and observations — not predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly under-estimating the sheer velocity of change. With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this. He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is, to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation: Gerd then summarized the session as follows: The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (lateral) the better we will be at designing our future. My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader’s collective experience that can guide them through the type of change ahead — it requires us all to think differently When looking at AI, consider trying IA first (intelligent assistance / augmentation). My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology as a supplement, not a replacement Efficiency and cost reduction based on automation, AI/IA and Robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated. My take: Future thinking is critical for us to be effective here. We have to have a sense as to where all of this is heading, if we are to effectively create new sources of value We won’t just need better algorithms — we also need stronger humarithms i.e. values, ethics, standards, principles and social contracts. My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes — and kudos to him for being that voice “The best way to predict the future is to create it” (Alan Kay). My Take: our context when we think about the future puts it years away, and that is just not the case anymore. What we think will take ten years is likely to happen in two. 
We can’t create the future if we don’t focus on it through an exponential lens Originally published at frankdiana.wordpress.com on September 10, 2015. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. TCS Executive focused on the rapid evolution of society and business. Fascinated by the view of the world in the next decade and beyond https://frankdiana.net/
Rand Hindi
693
12
https://medium.com/snips-ai/how-artificial-intelligence-will-make-technology-disappear-503cd88e1e6a?source=tag_archive---------8----------------
How Artificial Intelligence Will Make Technology Disappear
This is a redacted transcript of a TEDx talk I gave last April at Ecole Polytechnique in France. The video can be seen on Youtube here. Enjoy ;-) Last March, I was in Costa Rica with my girlfriend, spending our days between beautiful beaches and jungles full of exotic animals. There was barely any connectivity and we were immersed in nature in a way that we could never be in a big city. It felt great. But in the evening, when we got back to the hotel and connected to the WiFi, our phones would immediately start pushing an entire day’s worth of notifications, constantly interrupting our special time together. It interrupted us while watching the sunset, while sipping a cocktail, while having dinner, while having an intimate moment. It took emotional time away from us. And it’s not just that our phones vibrated, it’s also that we kept checking them to see if we had received anything, as if we had some sort of compulsive addiction to it. Those rare messages that are highly rewarding, like being notified that Ashton Kutcher just tweeted this article, made consciously “unplugging” impossible. Just like Pavlov’s dog before us, we had become conditioned. In this case though, it has gotten so out of control that today, 9 out of 10 people experience “phantom vibrations”, which is when you think your phone vibrated in your pocket, whereas in fact it didn’t. How did this happen? Back in 1990, we didn’t have any connected devices. This was the “unplugged” era. There were no push notifications, no interruptions, nada. Things were analog, things were human. Around 1995, the Internet started taking off, and our computers became connected. With it came email, and the infamous “you’ve got mail!” notification. We started getting interrupted by people, companies and spammers sending us electronic messages at random moments. 10 years later, we entered the mobile era. This time, it is not 1, but 3 devices that are connected: a computer, a phone, and a tablet. The trouble is that since these devices don’t know which one you are currently using, the default strategy has been to push all notifications on all devices. Like when someone calls you on your phone, and it also rings on your computer, and actually keeps ringing after you’ve answered it on one of your devices! And it’s not just notifications; accessing a service and finding content is equally frustrating on mobile devices, with those millions of apps and tiny keyboards. If we take notifications and the need for explicit interactions as a proxy for technological friction, then each connected device adds more of it. Unfortunately, this is about to get much worse, since the number of connected devices is increasing exponentially! This year, in 2015, we are officially entering what is called the “Internet of Things” era. That’s when your watch, fridge, car and lamps are connected. It is expected that there will be more than 100 billion connected devices by 2025, or 14 for every person on this planet. Just imagine what it will feel like to interact manually and receive notifications simultaneously on 14 devices.. That’s definitely not the future we were promised! There is hope though. There is hope that Artificial Intelligence will fix this. Not the one Elon Musk refers to that will enslave us all, but rather a human-centric domain of A.I. called “Context-Awareness”, which is about giving devices the ability to adapt to our current situation. It’s about figuring out which device to push notifications on. 
It’s about figuring out you are late for a meeting and notifying people for you. It’s about figuring out you are on a date and deactivating your non-urgent notifications. It’s about giving you back the freedom to experience the real world again. When you look at the trend in the capabilities of A.I., what you see it that it takes a bit longer to start, but when it does, it grows much faster. We already have A.I.s that can learn to play video games and beat world champions, so it’s just a matter of time before they reach human level intelligence. There is an inflexion point, and we just crossed it. Taking the connected devices curve, and subtracting the one for A.I., we see that the overall friction keeps increasing over the next few years until the point where A.I. becomes so capable that this friction flips around and quickly disappears. In this era, called “Ubiquitous Computing”, adding new connected devices does not add friction, it actually adds value! For example, our phones and computers will be smart enough to know where to route the notifications. Our cars will drive themselves, already knowing the destination. Our beds will be monitoring our sleep, and anticipating when we will be waking up so that we have freshly brewed coffee ready in the kitchen. It will also connect with the accelerometers in our phones and the electricity sockets to determine how many people are in the bed, and adjust accordingly. Our alarm clocks won’t need to be set; they will be connected to our calendars and beds to determine when we fell asleep and when we need to wake up. All of this can also be aggregated, offering public transport operators access to predicted passenger flows so that there are always enough trains running. Traffic lights will adjust based on self-driving cars’ planned route. Power plants will produce just enough electricity, saving costs and the environment. Smart cities, smart homes, smart grids.. They are all just consequences of having ubiquitous computing! By the time this happens, technology will have become so deeply integrated in our lives and ourselves that we simply won’t notice it anymore. Artificial Intelligence will have made technology disappear from our consciousness, and the world will feel unplugged again. I know this sounds crazy, but there are historical examples of other technologies that followed a similar pattern. For example, back in the 1800s, electricity was very tangible. It was expensive, hard to produce, would cut all the time, and was dangerous. You would get electrocuted and your house could catch fire. Back then, people actually believed that oil lamps were safer! But as electricity matured, it became cheaper, more reliable, and safer. Eventually, it was everywhere, in our walls, lamps, car, phone, and body. It became ubiquitous, and we stopped noticing it. Today, the exact same thing is happening with connected devices. Building this ubiquitous computing future relies on giving devices the ability to sense and react to the current context, which is called “context-awareness”. A good way to think about it is through the combination of 4 layers: the device layer, which is about making devices talk to each other; the individual layer, which encompasses everything related to a particular person, such as his location history, calendar, emails or health records; the social layer, which models the relationship between individuals, and finally the environmental layer, which is everything else, such as the weather, the buildings, the streets, trees and cars. 
For example, to model the social layer, we can look at the emails that were sent and received by someone, which gives us an indication of social connection strength between a group of people. The graph shown above is extracted from my professional email account using the MIT Immersion tool, over a period of 6 months. The huge green bubble is one of my co-founder (which sends way too many emails!), as is the red bubble. The other fairly large ones are other people in my team that I work closely with. But what’s interesting is that we can also see who in my network works together, as they will tend to be included together in emails threads and thus form clusters in this graph. If you add some contextual information such as the activity I was engaged in, or the type of language being used in the email, you can determine the nature of the relationship I have with each person (personal, professional, intimate, ..) as well as its degree. And if you now take the difference in these patterns over time, you can detect major events, such as changing jobs, closing an investment round, launching a new product or hiring key people! Of course, all this can be done on social graphs as well as professional ones. Now that we have a better representation of someone’s social connections, we can use it to perform better natural language processing (NLP) of calendar events by disambiguating events like “Chat with Michael”, which would then assign a higher probability to my co-founder. But a calendar won’t help us figure out habits such as going to the gym after work, or hanging out in a specific neighborhood on Friday evenings. For that, we need another source of data: geolocation. By monitoring our location over time and detecting the places we have been to, we can understand our habits, and thus, predict what we will be doing next. In fact, knowing the exact place we are at is essential to predict our intentions, since most of the things we do with our devices are based on what we are doing in the real world. Unfortunately, location is very noisy, and we never know exactly where someone is. For example below, I was having lunch in San Francisco, and this is what my phone recorded while I was not moving. Clearly it is impossible to know where I actually am! To circumvent this problem, we can score each place according to the current context. For example, we are more likely to be at a restaurant during lunch time than at a nightclub. If we then combine this with a user-specific model based on their location history, we can achieve very high levels of accuracy. For example, if I have been to a Starbucks in the past, it will increase the probability that I am there now, as well as the probability of any other coffee shop. And because we now know that I am in a restaurant, my devices can surface the apps and information that are relevant to this particular place, such as reviews or mobile payments apps accepted there. If I was at the gym, it would be my sports apps. If I was home, it would be my leisure and home automation apps. If we combine this timeline of places with the phone’s accelerometer patterns, we can then determine the transportation mode that was taken between those places. With this, our connected watches could now tell us to stand up when it detects we are still, stop at a rest area when it detects we are driving, or tell us where the closest bike stand is when cycling! 
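To make the place-scoring step a bit more concrete, here is a toy sketch of the idea as I understand it, not Snips' actual algorithm: combine a context prior (restaurants are likely at lunchtime) with a personal-history count for each candidate venue, and rank the noisy candidates by the product. The venue names, prior probabilities and visit counts below are all made up.

    # Hedged illustration of context-aware place ranking; all numbers are invented.
    CONTEXT_PRIOR = {            # assumed P(category | it is lunchtime)
        "restaurant": 0.5,
        "coffee_shop": 0.3,
        "nightclub": 0.01,
        "office": 0.19,
    }

    VISIT_HISTORY = {            # how often this user visited each place before
        "Starbucks Market St": 12,
        "Soup Place": 3,
        "Club Neon": 1,
    }

    CANDIDATES = [               # one noisy GPS fix -> several plausible venues nearby
        ("Starbucks Market St", "coffee_shop"),
        ("Soup Place", "restaurant"),
        ("Club Neon", "nightclub"),
    ]

    def score(place, category):
        prior = CONTEXT_PRIOR.get(category, 0.05)
        history = 1 + VISIT_HISTORY.get(place, 0)   # +1 so unseen places keep a chance
        return prior * history

    ranked = sorted(CANDIDATES, key=lambda pc: score(*pc), reverse=True)
    for place, category in ranked:
        print(place, round(score(place, category), 2))

The real system has far richer features, but the principle is the same: the context narrows the candidates, and your own history breaks the ties. 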
These individual transit patterns can then be aggregated over several thousand users to recreate very precise population flow in the city’s infrastructure, as we have done below for Paris. Not only does it give us an indication of how many people transit in each station, it also give us the route they have been taking, where they changed train or if they walked between stations. Combining this with data from the city — concerts, office and residential buildings, population demographics, ... — enables you to see how each factor impacts public transport, and even predict how many people will be boarding trains throughout the day. It can then be used to notify commuters that they should take a different train if they want to sit on their way home, and dynamically adjust the train schedules, maximizing the efficiency of the network both in terms of energy saved and comfort. And it’s not just public transport. The same model and data can be used to predict queues in post offices, by taking into account hyperlocal factors such as when the welfare checks are being paid, the bank holidays, the proximity of other post offices and the staff strikes. This is shown below, where the blue curve is the real load, and the orange one is the predicted load. This model can be used to notify people of the best time to drop and pickup their parcels, which results in better yield management and customer service. It can also be used to plan the construction of new post offices, by sizing them accordingly. And since a post office is just a retail store, everything that works here can work for all retailers: grocery stores, supermarkets, shoe shops, etc.. It could then be plugged into our devices, enabling them to optimize our shopping schedule and make sure we never queue again! This contextual modeling approach is in fact so powerful that it can even predict the risk of car accidents just by looking at features such as the street topologies, the proximity of bars that just closed, the road surface or the weather. Since these features are generalizable throughout the city, we can make predictions even in places where there was never a car accident! For example here, we can see that our model correctly detects Trafalgar square as being dangerous, even though nowhere did we explicitly say so. It discovered it automatically from the data itself. It was even able to identify the impact of cultural events, such as St Patrick’s day or New Year’s Eve! How cool would it be if our self-driving cars could take this into account? If we combine all these different layers — personal, social, environmental — we can recreate a highly contextualized timeline of what we have been doing throughout the day, which in turn enables us to predict what our intentions are. Making our devices able to figure out our current context and predict our intentions is the key to building truly intelligent products. With that in mind, our team has been prototyping a new kind of smartphone interface, one that leverages this contextual intelligence to anticipate which services and apps are needed at any given time, linking directly to the relevant content inside them. It’s not yet perfect, but it’s a first step towards our long term vision — and it certainly saves a lot of time, swipes and taps! One thing in particular that we are really proud of is that we were able to build privacy by design (full post coming soon!). It is a tremendous engineering challenge, but we are now running all our algorithms directly on the device. 
Whether it’s the machine learning classifiers, the signal processing, the natural language processing or the email mining, they are all confined to our smartphones, and never uploaded to our servers. Basically, it means we can now harness the full power of A.I. without compromising our privacy, something that has never been achieved before. It’s important to understand that this is not just about building some cool tech or the next viral app. Nor is it about making our future look like a science-fiction movie. It’s actually about making technology disappear into the background, so that we can regain the freedom to spend quality time with the people we care about. If you enjoyed this article, it would really help if you hit recommend below, and shared it on twitter (we are @randhindi & @snips) :-) From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Entrepreneur & AI researcher working on Making Technology Disappear. CEO @ snips.ai. #AI, #privacy and #blockchain. Follow http://Instagram.com/randhindi This publication features the articles written by the Snips team, fellows, and friends. Snips started as an AI lab in 2013, and now builds Private-by-Design, decentralized, open source voice assistants.
samim
323
8
https://medium.com/@samim/obama-rnn-machine-generated-political-speeches-c8abd18a2ea0?source=tag_archive---------9----------------
Obama-RNN — Machine generated political speeches. – samim – Medium
Political speeches are among the most powerful tools leaders use to influence entire populations. Throughout history, political speeches have been used to start wars, end empires, fuel movements & inspire the masses. Political speeches apply many of the tricks found in the field of social engineering: congruent communication, intentional body language, neuro-linguistic programming, human buffer overflows and more. Read more about The Art of Human Hacking here. In recent years, Barack Obama has emerged as one of the most memorable and effective political speakers on the world stage. Messages like Hope and Yes We Can have clearly left a mark on our collective consciousness. Since 2007, Obama’s highly skilled speech writers have written over 4.3 megabytes, or 730,895 words, of text, not counting interviews and debates. All of Obama’s speeches are conveniently readable here. With powerful artificial intelligence / machine learning libraries becoming readily available as open source, it seems obvious to apply them to speech writing. A particularly interesting class of algorithms is Recurrent Neural Networks (RNNs). Recently Andrej Karpathy, a CS PhD student at Stanford, released char-rnn, a multi-layer recurrent neural network for character-level language models. The library takes an arbitrary text file as input and learns to predict the next character in the sequence. As the results are pretty amazing, many interesting experiments have sprung up, ranging from composing music, rapping and writing cooking recipes to re-writing the Bible. Step 1 is to feed the model data, the more the better. For this I wrote a web crawler in Python that gathers all publicly available Obama speeches, parses out the text and removes any interviews/debates. Step 2 is to train the model on the collected text. Training an RNN takes a bit of fiddling, as I painfully found out while training a model on 500MB of classical-music MIDI files (mozart-rnn is wild!). Luckily the standard settings that Andrej suggested were a good starting point for the Obama-RNN. Step 3 is to test the model, which automatically generates an unlimited amount of new speeches in the vein of Obama’s previous ones. The model can be seeded with a text from which it will start the sequence (e.g. “war on terror”) and a temperature, which makes the output more conservative or more diverse, at the cost of more mistakes. Here is a selection of some of my favorite speeches the Obama-RNN has generated so far. Keep in mind this is just a quick hack project. With more time & effort the results can be improved. One of the most hilarious patterns to emerge is that the Obama-RNN really loves to politely say: Good afternoon. Good day. God bless you. Good bless the United States of America. Thank you. I did a test combining Obama’s speeches with other famous speeches from the 20th century (including everything from Mother Teresa and Malcolm X to Mussolini and Hitler). This gives us a rather insane amalgam of human thought, seen through the “eyes” of a machine. A story for another day. On this note: God bless you. Good bless the United States of America. Thank you. You can run your own Obama-RNN by following these instructions: Get in touch here: https://twitter.com/samim | http://samim.io/ From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Designer & Code Magician. Working at the intersection of HCI, Machine Learning & Creativity. Building tools for Enlightenment. Narrative Engineering.
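For step 1 of the Obama-RNN pipeline above, a crawler along these lines is enough. The sketch below uses requests and BeautifulSoup with a placeholder URL and a guessed page structure, since the exact layout of the speech archive the author scraped is not given; it only shows the shape of "fetch pages, keep the text, skip interviews and debates".

    # Hedged sketch of the speech-gathering step; the URL and selectors are placeholders.
    import requests
    from bs4 import BeautifulSoup

    INDEX_URL = "http://example.com/obama-speeches/"   # hypothetical archive index
    SKIP_WORDS = ("interview", "debate")               # filter out non-speech items

    def get_speech_links(index_url):
        soup = BeautifulSoup(requests.get(index_url).text, "html.parser")
        for a in soup.find_all("a", href=True):
            title = a.get_text(strip=True).lower()
            if title and not any(word in title for word in SKIP_WORDS):
                yield a["href"]

    def get_speech_text(url):
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        # Assumption: the speech text lives in the <p> tags of the article body.
        return "\n".join(p.get_text() for p in soup.find_all("p"))

    if __name__ == "__main__":
        with open("obama_input.txt", "w") as out:
            for link in get_speech_links(INDEX_URL):
                out.write(get_speech_text(link) + "\n\n")

The resulting plain-text file is what char-rnn then trains on in step 2.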
Adam Geitgey
10.4K
15
https://medium.com/@ageitgey/machine-learning-is-fun-part-2-a26a10b68df3?source=tag_archive---------4----------------
Machine Learning is Fun! Part 2 – Adam Geitgey – Medium
Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in Italiano, Español, Français, Türkçe, Русский, 한국어 Português, فارسی, Tiếng Việt or 普通话. In Part 1, we said that Machine Learning is using generic algorithms to tell you something interesting about your data without writing any code specific to the problem you are solving. (If you haven’t already read part 1, read it now!). This time, we are going to see one of these generic algorithms do something really cool — create video game levels that look like they were made by humans. We’ll build a neural network, feed it existing Super Mario levels and watch new ones pop out! Just like Part 1, this guide is for anyone who is curious about machine learning but has no idea where to start. The goal is be accessible to anyone — which means that there’s a lot of generalizations and we skip lots of details. But who cares? If this gets anyone more interested in ML, then mission accomplished. Back in Part 1, we created a simple algorithm that estimated the value of a house based on its attributes. Given data about a house like this: We ended up with this simple estimation function: In other words, we estimated the value of the house by multiplying each of its attributes by a weight. Then we just added those numbers up to get the house’s value. Instead of using code, let’s represent that same function as a simple diagram: However this algorithm only works for simple problems where the result has a linear relationship with the input. What if the truth behind house prices isn’t so simple? For example, maybe the neighborhood matters a lot for big houses and small houses but doesn’t matter at all for medium-sized houses. How could we capture that kind of complicated detail in our model? To be more clever, we could run this algorithm multiple times with different of weights that each capture different edge cases: Now we have four different price estimates. Let’s combine those four price estimates into one final estimate. We’ll run them through the same algorithm again (but using another set of weights)! Our new Super Answer combines the estimates from our four different attempts to solve the problem. Because of this, it can model more cases than we could capture in one simple model. Let’s combine our four attempts to guess into one big diagram: This is a neural network! Each node knows how to take in a set of inputs, apply weights to them, and calculate an output value. By chaining together lots of these nodes, we can model complex functions. There’s a lot that I’m skipping over to keep this brief (including feature scaling and the activation function), but the most important part is that these basic ideas click: It’s just like LEGO! We can’t model much with one single LEGO block, but we can model anything if we have enough basic LEGO blocks to stick together: The neural network we’ve seen always returns the same answer when you give it the same inputs. It has no memory. In programming terms, it’s a stateless algorithm. In many cases (like estimating the price of house), that’s exactly what you want. But the one thing this kind of model can’t do is respond to patterns in data over time. Imagine I handed you a keyboard and asked you to write a story. But before you start, my job is to guess the very first letter that you will type. What letter should I guess? I can use my knowledge of English to increase my odds of guessing the right letter. 
For example, you will probably type a letter that is common at the beginning of words. If I looked at stories you wrote in the past, I could narrow it down further based on the words you usually use at the beginning of your stories. Once I had all that data, I could use it to build a neural network to model how likely it is that you would start with any given letter. Our model might look like this: But let’s make the problem harder. Let’s say I need to guess the next letter you are going to type at any point in your story. This is a much more interesting problem. Let’s use the first few words of Ernest Hemingway’s The Sun Also Rises as an example: What letter is going to come next? You probably guessed ’n’ — the word is probably going to be boxing. We know this based on the letters we’ve already seen in the sentence and our knowledge of common words in English. Also, the word ‘middleweight’ gives us an extra clue that we are talking about boxing. In other words, it’s easy to guess the next letter if we take into account the sequence of letters that came right before it and combine that with our knowledge of the rules of English. To solve this problem with a neural network, we need to add state to our model. Each time we ask our neural network for an answer, we also save a set of our intermediate calculations and re-use them the next time as part of our input. That way, our model will adjust its predictions based on the input that it has seen recently. Keeping track of state in our model makes it possible to not just predict the most likely first letter in the story, but to predict the most likely next letter given all previous letters. This is the basic idea of a Recurrent Neural Network. We are updating the network each time we use it. This allows it to update its predictions based on what it saw most recently. It can even model patterns over time as long as we give it enough of a memory. Predicting the next letter in a story might seem pretty useless. What’s the point? One cool use might be auto-predict for a mobile phone keyboard: But what if we took this idea to the extreme? What if we asked the model to predict the next most likely character over and over — forever? We’d be asking it to write a complete story for us! We saw how we could guess the next letter in Hemingway’s sentence. Let’s try generating a whole story in the style of Hemingway. To do this, we are going to use the Recurrent Neural Network implementation that Andrej Karpathy wrote. Andrej is a Deep-Learning researcher at Stanford and he wrote an excellent introduction to generating text with RNNs, You can view all the code for the model on github. We’ll create our model from the complete text of The Sun Also Rises — 362,239 characters using 84 unique letters (including punctuation, uppercase/lowercase, etc). This data set is actually really small compared to typical real-world applications. To generate a really good model of Hemingway’s style, it would be much better to have at several times as much sample text. But this is good enough to play around with as an example. As we just start to train the RNN, it’s not very good at predicting letters. Here’s what it generates after a 100 loops of training: You can see that it has figured out that sometimes words have spaces between them, but that’s about it. After about 1000 iterations, things are looking more promising: The model has started to identify the patterns in basic sentence structure. It’s adding periods at the ends of sentences and even quoting dialog. 
A few words are recognizable, but there’s also still a lot of nonsense. But after several thousand more training iterations, it looks pretty good: At this point, the algorithm has captured the basic pattern of Hemingway’s short, direct dialog. A few sentences even sort of make sense. Compare that with some real text from the book: Even by only looking for patterns one character at a time, our algorithm has reproduced plausible-looking prose with proper formatting. That is kind of amazing! We don’t have to generate text completely from scratch, either. We can seed the algorithm by supplying the first few letters and just let it find the next few letters. For fun, let’s make a fake book cover for our imaginary book by generating a new author name and a new title using the seed text of “Er”, “He”, and “The S”: Not bad! But the really mind-blowing part is that this algorithm can figure out patterns in any sequence of data. It can easily generate real-looking recipes or fake Obama speeches. But why limit ourselves human language? We can apply this same idea to any kind of sequential data that has a pattern. In 2015, Nintendo released Super Mario MakerTM for the Wii U gaming system. This game lets you draw out your own Super Mario Brothers levels on the gamepad and then upload them to the internet so you friends can play through them. You can include all the classic power-ups and enemies from the original Mario games in your levels. It’s like a virtual LEGO set for people who grew up playing Super Mario Brothers. Can we use the same model that generated fake Hemingway text to generate fake Super Mario Brothers levels? First, we need a data set for training our model. Let’s take all the outdoor levels from the original Super Mario Brothers game released in 1985: This game has 32 levels and about 70% of them have the same outdoor style. So we’ll stick to those. To get the designs for each level, I took an original copy of the game and wrote a program to pull the level designs out of the game’s memory. Super Mario Bros. is a 30-year-old game and there are lots of resources online that help you figure out how the levels were stored in the game’s memory. Extracting level data from an old video game is a fun programming exercise that you should try sometime. Here’s the first level from the game (which you probably remember if you ever played it): If we look closely, we can see the level is made of a simple grid of objects: We could just as easily represent this grid as a sequence of characters with one character representing each object: We’ve replaced each object in the level with a letter: ...and so on, using a different letter for each different kind of object in the level. I ended up with text files that looked like this: Looking at the text file, you can see that Mario levels don’t really have much of a pattern if you read them line-by-line: The patterns in a level really emerge when you think of the level as a series of columns: So in order for the algorithm to find the patterns in our data, we need to feed the data in column-by-column. Figuring out the most effective representation of your input data (called feature selection) is one of the keys of using machine learning algorithms well. To train the model, I needed to rotate my text files by 90 degrees. This made sure the characters were fed into the model in an order where a pattern would more easily show up: Just like we saw when creating the model of Hemingway’s prose, a model improves as we train it. 
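Before looking at the training runs, here is roughly what that rotation step might look like in code. This is my own illustration of the pre-processing idea, not the author's extraction script, and the tiny level below is made up.

    # Hedged sketch of the 90-degree rotation: feed the level to the model column by column.
    LEVEL = [
        "--------------------",
        "--------------------",
        "---------PP---------",
        "---------PP---------",
        "====================",
    ]

    def rotate(rows):
        width = max(len(r) for r in rows)
        rows = [r.ljust(width, "-") for r in rows]   # pad rows to equal length
        # zip(*rows) walks the grid one column at a time; whether a column should read
        # top-down or bottom-up is a convention you just have to keep consistent.
        return ["".join(col) for col in zip(*rows)]

    for column in rotate(LEVEL):
        print(column)

Each printed line is now one vertical slice of the level, which is the order in which patterns like the 2x2 pipe actually repeat.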
After a little training, our model is generating junk: It sort of has an idea that ‘-’s and ‘=’s should show up a lot, but that’s about it. It hasn’t figured out the pattern yet. After several thousand iterations, it’s starting to look like something: The model has almost figured out that each line should be the same length. It has even started to figure out some of the logic of Mario: The pipes in mario are always two blocks wide and at least two blocks high, so the “P”s in the data should appear in 2x2 clusters. That’s pretty cool! With a lot more training, the model gets to the point where it generates perfectly valid data: Let’s sample an entire level’s worth of data from our model and rotate it back horizontal: This data looks great! There are several awesome things to notice: Finally, let’s take this level and recreate it in Super Mario Maker: Play it yourself! If you have Super Mario Maker, you can play this level by bookmarking it online or by looking it up using level code 4AC9–0000–0157-F3C3. The recurrent neural network algorithm we used to train our model is the same kind of algorithm used by real-world companies to solve hard problems like speech detection and language translation. What makes our model a ‘toy’ instead of cutting-edge is that our model is generated from very little data. There just aren’t enough levels in the original Super Mario Brothers game to provide enough data for a really good model. If we could get access to the hundreds of thousands of user-created Super Mario Maker levels that Nintendo has, we could make an amazing model. But we can’t — because Nintendo won’t let us have them. Big companies don’t give away their data for free. As machine learning becomes more important in more industries, the difference between a good program and a bad program will be how much data you have to train your models. That’s why companies like Google and Facebook need your data so badly! For example, Google recently open sourced TensorFlow, its software toolkit for building large-scale machine learning applications. It was a pretty big deal that Google gave away such important, capable technology for free. This is the same stuff that powers Google Translate. But without Google’s massive trove of data in every language, you can’t create a competitor to Google Translate. Data is what gives Google its edge. Think about that the next time you open up your Google Maps Location History or Facebook Location History and notice that it stores every place you’ve ever been. In machine learning, there’s never a single way to solve a problem. You have limitless options when deciding how to pre-process your data and which algorithms to use. Often combining multiple approaches will give you better results than any single approach. Readers have sent me links to other interesting approaches to generating Super Mario levels: If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun Part 3! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it.
Arthur Juliani
9K
6
https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0?source=tag_archive---------5----------------
Simple Reinforcement Learning with Tensorflow Part 0: Q-Learning with Tables and Neural Networks
For this tutorial in my Reinforcement Learning series, we are going to be exploring a family of RL algorithms called Q-Learning algorithms. These are a little different than the policy-based algorithms that will be looked at in the the following tutorials (Parts 1–3). Instead of starting with a complex and unwieldy deep neural network, we will begin by implementing a simple lookup-table version of the algorithm, and then show how to implement a neural-network equivalent using Tensorflow. Given that we are going back to basics, it may be best to think of this as Part-0 of the series. It will hopefully give an intuition into what is really happening in Q-Learning that we can then build on going forward when we eventually combine the policy gradient and Q-learning approaches to build state-of-the-art RL agents (If you are more interested in Policy Networks, or already have a grasp on Q-Learning, feel free to start the tutorial series here instead). Unlike policy gradient methods, which attempt to learn functions which directly map an observation to an action, Q-Learning attempts to learn the value of being in a given state, and taking a specific action there. While both approaches ultimately allow us to take intelligent actions given a situation, the means of getting to that action differ significantly. You may have heard about DeepQ-Networks which can play Atari Games. These are really just larger and more complex implementations of the Q-Learning algorithm we are going to discuss here. For this tutorial we are going to be attempting to solve the FrozenLake environment from the OpenAI gym. For those unfamiliar, the OpenAI gym provides an easy way for people to experiment with their learning agents in an array of provided toy games. The FrozenLake environment consists of a 4x4 grid of blocks, each one either being the start block, the goal block, a safe frozen block, or a dangerous hole. The objective is to have an agent learn to navigate from the start to the goal without moving onto a hole. At any given time the agent can choose to move either up, down, left, or right. The catch is that there is a wind which occasionally blows the agent onto a space they didn’t choose. As such, perfect performance every time is impossible, but learning to avoid the holes and reach the goal are certainly still doable. The reward at every step is 0, except for entering the goal, which provides a reward of 1. Thus, we will need an algorithm that learns long-term expected rewards. This is exactly what Q-Learning is designed to provide. In it’s simplest implementation, Q-Learning is a table of values for every state (row) and action (column) possible in the environment. Within each cell of the table, we learn a value for how good it is to take a given action within a given state. In the case of the FrozenLake environment, we have 16 possible states (one for each block), and 4 possible actions (the four directions of movement), giving us a 16x4 table of Q-values. We start by initializing the table to be uniform (all zeros), and then as we observe the rewards we obtain for various actions, we update the table accordingly. We make updates to our Q-table using something called the Bellman equation, which states that the expected long-term reward for a given action is equal to the immediate reward from the current action combined with the expected reward from the best future action taken at the following state. In this way, we reuse our own Q-table when estimating how to update our table for future actions! 
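In code, that table-update loop is only a few lines. The sketch below assumes the OpenAI gym FrozenLake environment with the older four-value env.step API, and the hyperparameters are reasonable guesses rather than the tuned values from the post's notebook:

```python
import gym
import numpy as np

env = gym.make("FrozenLake-v0")              # 16 states, 4 actions

# The 16x4 Q-table, initialized to all zeros.
Q = np.zeros([env.observation_space.n, env.action_space.n])
lr, gamma = 0.8, 0.95                        # learning rate and discount factor (illustrative)

for episode in range(2000):
    s = env.reset()
    done = False
    while not done:
        # Pick the best-looking action, plus decaying random noise for exploration.
        a = np.argmax(Q[s, :] + np.random.randn(env.action_space.n) / (episode + 1))
        s1, reward, done, _ = env.step(a)
        # Bellman update: nudge Q(s, a) toward reward + gamma * max_a' Q(s', a').
        Q[s, a] += lr * (reward + gamma * np.max(Q[s1, :]) - Q[s, a])
        s = s1
```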
In equation form, the rule looks like this: Q(s, a) = r + γ * max(Q(s', a')). This says that the Q-value for a given state (s) and action (a) should represent the current reward (r) plus the maximum discounted (γ) future reward expected according to our own table for the next state (s') we would end up in. The discount variable allows us to decide how important the possible future rewards are compared to the present reward. By updating in this way, the table slowly begins to obtain accurate measures of the expected future reward for a given action in a given state. Below is a Python walkthrough of the Q-Table algorithm implemented in the FrozenLake environment: (Thanks to Praneet D for finding the optimal hyperparameters for this approach) Now, you may be thinking: tables are great, but they don't really scale, do they? While it is easy to have a 16x4 table for a simple grid world, the number of possible states in any modern game or real-world environment is nearly infinitely larger. For most interesting problems, tables simply don't work. We instead need some way to take a description of our state, and produce Q-values for actions without a table: that is where neural networks come in. By acting as a function approximator, we can take any number of possible states that can be represented as a vector and learn to map them to Q-values. In the case of the FrozenLake example, we will be using a one-layer network which takes the state encoded in a one-hot vector (1x16), and produces a vector of 4 Q-values, one for each action. Such a simple network acts kind of like a glorified table, with the network weights serving as the old cells. The key difference is that we can easily expand the Tensorflow network with added layers, activation functions, and different input types, whereas all that is impossible with a regular table. The method of updating is a little different as well. Instead of directly updating our table, with a network we will be using backpropagation and a loss function. Our loss function will be sum-of-squares loss, where the difference between the current predicted Q-values and the "target" value is computed and the gradients passed through the network. In this case, our Q-target for the chosen action is equivalent to the Q-value computed in equation 1 above. Below is the Tensorflow walkthrough of implementing our simple Q-Network: While the network learns to solve the FrozenLake problem, it turns out it doesn't do so quite as efficiently as the Q-Table. While neural networks allow for greater flexibility, they do so at the cost of stability when it comes to Q-Learning. There are a number of possible extensions to our simple Q-Network which allow for greater performance and more robust learning. Two tricks in particular are referred to as Experience Replay and Freezing Target Networks. Those improvements and other tweaks were the key to getting Atari-playing Deep Q-Networks, and we will be exploring those additions in the future. For more info on the theory behind Q-Learning, see this great post by Tambet Matiisen. I hope this tutorial has been helpful for those curious about how to implement simple Q-Learning algorithms! If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated! If you'd like to follow my work on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on Twitter @awjliani. 
More from my Simple Reinforcement Learning with Tensorflow series: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @Unity3D & Cognitive Neuroscience PhD student. Exploring frontier technology through the lens of artificial intelligence, data science, and the shape of things to come
Adam Geitgey
6.8K
11
https://medium.com/@ageitgey/machine-learning-is-fun-part-6-how-to-do-speech-recognition-with-deep-learning-28293c162f7a?source=tag_archive---------6----------------
Machine Learning is Fun Part 6: How to do Speech Recognition with Deep Learning
Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 普通话, 한국어, Tiếng Việt or Русский. Speech recognition is invading our lives. It's built into our phones, our game consoles and our smart watches. It's even automating our homes. For just $50, you can get an Amazon Echo Dot — a magic box that allows you to order pizza, get a weather report or even buy trash bags — just by speaking out loud: The Echo Dot has been so popular this holiday season that Amazon can't seem to keep them in stock! But speech recognition has been around for decades, so why is it just now hitting the mainstream? The reason is that deep learning finally made speech recognition accurate enough to be useful outside of carefully controlled environments. Andrew Ng has long predicted that as speech recognition goes from 95% accurate to 99% accurate, it will become a primary way that we interact with computers. The idea is that this 4% accuracy gap is the difference between annoyingly unreliable and incredibly useful. Thanks to Deep Learning, we're finally cresting that peak. Let's learn how to do speech recognition with deep learning! If you know how neural machine translation works, you might guess that we could simply feed sound recordings into a neural network and train it to produce text: That's the holy grail of speech recognition with deep learning, but we aren't quite there yet (at least at the time that I wrote this — I bet that we will be in a couple of years). The big problem is that speech varies in speed. One person might say "hello!" very quickly and another person might say "heeeelllllllllllllooooo!" very slowly, producing a much longer sound file with much more data. Both sound files should be recognized as exactly the same text — "hello!" Automatically aligning audio files of various lengths to a fixed-length piece of text turns out to be pretty hard. To work around this, we have to use some special tricks and extra processing in addition to a deep neural network. Let's see how it works! The first step in speech recognition is obvious — we need to feed sound waves into a computer. In Part 3, we learned how to take an image and treat it as an array of numbers so that we can feed it directly into a neural network for image recognition: But sound is transmitted as waves. How do we turn sound waves into numbers? Let's use this sound clip of me saying "Hello": Sound waves are one-dimensional. At every moment in time, they have a single value based on the height of the wave. Let's zoom in on one tiny part of the sound wave and take a look: To turn this sound wave into numbers, we just record the height of the wave at equally-spaced points: This is called sampling. We are taking a reading thousands of times a second and recording a number representing the height of the sound wave at that point in time. That's basically all an uncompressed .wav audio file is. "CD Quality" audio is sampled at 44.1khz (44,100 readings per second). But for speech recognition, a sampling rate of 16khz (16,000 samples per second) is enough to cover the frequency range of human speech. Let's sample our "Hello" sound wave 16,000 times per second. Here are the first 100 samples: You might be thinking that sampling is only creating a rough approximation of the original sound wave because it's only taking occasional readings. There are gaps in between our readings so we must be losing data, right? 
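As a small aside, reading those samples in Python takes only a few lines. This sketch assumes a 16-bit, mono, 16 kHz .wav file, and the file name is a placeholder:

```python
import wave
import numpy as np

with wave.open("hello.wav", "rb") as wav:          # placeholder file name
    rate = wav.getframerate()                      # e.g. 16000 readings per second
    frames = wav.readframes(wav.getnframes())

samples = np.frombuffer(frames, dtype=np.int16)    # one amplitude per 1/16000th of a second
print(rate, len(samples))
print(samples[:100])                               # "the first 100 samples"
```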
But thanks to the Nyquist theorem, we know that we can use math to perfectly reconstruct the original sound wave from the spaced-out samples — as long as we sample at least twice as fast as the highest frequency we want to record. I mention this only because nearly everyone gets this wrong and assumes that using higher sampling rates always leads to better audio quality. It doesn’t. </end rant> We now have an array of numbers with each number representing the sound wave’s amplitude at 1/16,000th of a second intervals. We could feed these numbers right into a neural network. But trying to recognize speech patterns by processing these samples directly is difficult. Instead, we can make the problem easier by doing some pre-processing on the audio data. Let’s start by grouping our sampled audio into 20-millisecond-long chunks. Here’s our first 20 milliseconds of audio (i.e., our first 320 samples): Plotting those numbers as a simple line graph gives us a rough approximation of the original sound wave for that 20 millisecond period of time: This recording is only 1/50th of a second long. But even this short recording is a complex mish-mash of different frequencies of sound. There’s some low sounds, some mid-range sounds, and even some high-pitched sounds sprinkled in. But taken all together, these different frequencies mix together to make up the complex sound of human speech. To make this data easier for a neural network to process, we are going to break apart this complex sound wave into it’s component parts. We’ll break out the low-pitched parts, the next-lowest-pitched-parts, and so on. Then by adding up how much energy is in each of those frequency bands (from low to high), we create a fingerprint of sorts for this audio snippet. Imagine you had a recording of someone playing a C Major chord on a piano. That sound is the combination of three musical notes— C, E and G — all mixed together into one complex sound. We want to break apart that complex sound into the individual notes to discover that they were C, E and G. This is the exact same idea. We do this using a mathematic operation called a Fourier transform. It breaks apart the complex sound wave into the simple sound waves that make it up. Once we have those individual sound waves, we add up how much energy is contained in each one. The end result is a score of how important each frequency range is, from low pitch (i.e. bass notes) to high pitch. Each number below represents how much energy was in each 50hz band of our 20 millisecond audio clip: But this is a lot easier to see when you draw this as a chart: If we repeat this process on every 20 millisecond chunk of audio, we end up with a spectrogram (each column from left-to-right is one 20ms chunk): A spectrogram is cool because you can actually see musical notes and other pitch patterns in audio data. A neural network can find patterns in this kind of data more easily than raw sound waves. So this is the data representation we’ll actually feed into our neural network. Now that we have our audio in a format that’s easy to process, we will feed it into a deep neural network. The input to the neural network will be 20 millisecond audio chunks. For each little audio slice, it will try to figure out the letter that corresponds the sound currently being spoken. We’ll use a recurrent neural network — that is, a neural network that has a memory that influences future predictions. That’s because each letter it predicts should affect the likelihood of the next letter it will predict too. 
For example, if we have said “HEL” so far, it’s very likely we will say “LO” next to finish out the word “Hello”. It’s much less likely that we will say something unpronounceable next like “XYZ”. So having that memory of previous predictions helps the neural network make more accurate predictions going forward. After we run our entire audio clip through the neural network (one chunk at a time), we’ll end up with a mapping of each audio chunk to the letters most likely spoken during that chunk. Here’s what that mapping looks like for me saying “Hello”: Our neural net is predicting that one likely thing I said was “HHHEE_LL_LLLOOO”. But it also thinks that it was possible that I said “HHHUU_LL_LLLOOO” or even “AAAUU_LL_LLLOOO”. We have some steps we follow to clean up this output. First, we’ll replace any repeated characters a single character: Then we’ll remove any blanks: That leaves us with three possible transcriptions — “Hello”, “Hullo” and “Aullo”. If you say them out loud, all of these sound similar to “Hello”. Because it’s predicting one character at a time, the neural network will come up with these very sounded-out transcriptions. For example if you say “He would not go”, it might give one possible transcription as “He wud net go”. The trick is to combine these pronunciation-based predictions with likelihood scores based on large database of written text (books, news articles, etc). You throw out transcriptions that seem the least likely to be real and keep the transcription that seems the most realistic. Of our possible transcriptions “Hello”, “Hullo” and “Aullo”, obviously “Hello” will appear more frequently in a database of text (not to mention in our original audio-based training data) and thus is probably correct. So we’ll pick “Hello” as our final transcription instead of the others. Done! You might be thinking “But what if someone says ‘Hullo’? It’s a valid word. Maybe ‘Hello’ is the wrong transcription!” Of course it is possible that someone actually said “Hullo” instead of “Hello”. But a speech recognition system like this (trained on American English) will basically never produce “Hullo” as the transcription. It’s just such an unlikely thing for a user to say compared to “Hello” that it will always think you are saying “Hello” no matter how much you emphasize the ‘U’ sound. Try it out! If your phone is set to American English, try to get your phone’s digital assistant to recognize the world “Hullo.” You can’t! It refuses! It will always understand it as “Hello.” Not recognizing “Hullo” is a reasonable behavior, but sometimes you’ll find annoying cases where your phone just refuses to understand something valid you are saying. That’s why these speech recognition models are always being retrained with more data to fix these edge cases. One of the coolest things about machine learning is how simple it sometimes seems. You get a bunch of data, feed it into a machine learning algorithm, and then magically you have a world-class AI system running on your gaming laptop’s video card... Right? That sort of true in some cases, but not for speech. Recognizing speech is a hard problem. You have to overcome almost limitless challenges: bad quality microphones, background noise, reverb and echo, accent variations, and on and on. All of these issues need to be present in your training data to make sure the neural network can deal with them. 
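To make the clean-up steps described above concrete (first collapse runs of repeated characters, then drop the blanks), here is a tiny sketch; the underscore standing in for the blank symbol is an assumption of this example:

```python
def collapse_prediction(frames, blank="_"):
    """Turn a frame-by-frame prediction like 'HHHEE_LL_LLLOOO' into text:
    1. collapse runs of repeated characters, 2. remove the blank symbol."""
    collapsed = []
    prev = None
    for ch in frames:
        if ch != prev:
            collapsed.append(ch)
        prev = ch
    return "".join(ch for ch in collapsed if ch != blank)

print(collapse_prediction("HHHEE_LL_LLLOOO"))   # -> HELLO
print(collapse_prediction("HHHUU_LL_LLLOOO"))   # -> HULLO
```

The language-model scoring described above is what then prefers "Hello" over "Hullo", which is one more reason the amount and variety of training data matters so much.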
Here’s another example: Did you know that when you speak in a loud room you unconsciously raise the pitch of your voice to be able to talk over the noise? Humans have no problem understanding you either way, but neural networks need to be trained to handle this special case. So you need training data with people yelling over noise! To build a voice recognition system that performs on the level of Siri, Google Now!, or Alexa, you will need a lot of training data — far more data than you can likely get without hiring hundreds of people to record it for you. And since users have low tolerance for poor quality voice recognition systems, you can’t skimp on this. No one wants a voice recognition system that works 80% of the time. For a company like Google or Amazon, hundreds of thousands of hours of spoken audio recorded in real-life situations is gold. That’s the single biggest thing that separates their world-class speech recognition system from your hobby system. The whole point of putting Google Now! and Siri on every cell phone for free or selling $50 Alexa units that have no subscription fee is to get you to use them as much as possible. Every single thing you say into one of these systems is recorded forever and used as training data for future versions of speech recognition algorithms. That’s the whole game! Don’t believe me? If you have an Android phone with Google Now!, click here to listen to actual recordings of yourself saying every dumb thing you’ve ever said into it: So if you are looking for a start-up idea, I wouldn’t recommend trying to build your own speech recognition system to compete with Google. Instead, figure out a way to get people to give you recordings of themselves talking for hours. The data can be your product instead. If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun! Part 7! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it.
Adam Geitgey
5.8K
16
https://medium.com/@ageitgey/machine-learning-is-fun-part-5-language-translation-with-deep-learning-and-the-magic-of-sequences-2ace0acca0aa?source=tag_archive---------7----------------
Machine Learning is Fun Part 5: Language Translation with Deep Learning and the Magic of Sequences
Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 普通话, Русский, 한국어, Tiếng Việt or Italiano. We all know and love Google Translate, the website that can instantly translate between 100 different human languages as if by magic. It is even available on our phones and smartwatches: The technology behind Google Translate is called Machine Translation. It has changed the world by allowing people to communicate when it wouldn't otherwise be possible. But we all know that high school students have been using Google Translate to... umm... assist with their Spanish homework for 15 years. Isn't this old news? It turns out that over the past two years, deep learning has totally rewritten our approach to machine translation. Deep learning researchers who know almost nothing about language translation are throwing together relatively simple machine learning solutions that are beating the best expert-built language translation systems in the world. The technology behind this breakthrough is called sequence-to-sequence learning. It's a very powerful technique that can be used to solve many kinds of problems. After we see how it is used for translation, we'll also learn how the exact same algorithm can be used to write AI chat bots and describe pictures. Let's go! So how do we program a computer to translate human language? The simplest approach is to replace every word in a sentence with the translated word in the target language. Here's a simple example of translating from Spanish to English word-by-word: This is easy to implement because all you need is a dictionary to look up each word's translation. But the results are bad because it ignores grammar and context. So the next thing you might do is start adding language-specific rules to improve the results. For example, you might translate common two-word phrases as a single group. And you might swap the order of nouns and adjectives since they usually appear in reverse order in Spanish from how they appear in English: That worked! If we just keep adding more rules until we can handle every part of grammar, our program should be able to translate any sentence, right? This is how the earliest machine translation systems worked. Linguists came up with complicated rules and programmed them in one-by-one. Some of the smartest linguists in the world labored for years during the Cold War to create translation systems as a way to interpret Russian communications more easily. Unfortunately this only worked for simple, plainly-structured documents like weather reports. It didn't work reliably for real-world documents. The problem is that human language doesn't follow a fixed set of rules. Human languages are full of special cases, regional variations, and just flat out rule-breaking. The way we speak English is more influenced by who invaded whom hundreds of years ago than it is by someone sitting down and defining grammar rules. After the failure of rule-based systems, new translation approaches were developed using models based on probability and statistics instead of grammar rules. Building a statistics-based translation system requires lots of training data where the exact same text is translated into at least two languages. This double-translated text is called parallel corpora. 
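Going back to the naive approach for a moment: word-by-word translation really is just a dictionary lookup, which is exactly why it falls apart. A tiny sketch (the dictionary entries are illustrative, not a real lexicon, and the sentence echoes the example used in this article):

```python
# A toy word-by-word "translator", exactly the naive approach described above.
es_to_en = {
    "quiero": "I want", "ir": "to go", "a": "to", "la": "the",
    "playa": "beach", "más": "most", "bonita": "pretty",
}

sentence = "Quiero ir a la playa más bonita"
print(" ".join(es_to_en.get(word.lower(), word) for word in sentence.split()))
# -> "I want to go to the beach most pretty"  (grammar and word order are lost)
```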
In the same way that the Rosetta Stone was used by scientists in the 1800s to figure out Egyptian hieroglyphs from Greek, computers can use parallel corpora to guess how to convert text from one language to another. Luckily, there’s lots of double-translated text already sitting around in strange places. For example, the European Parliament translates their proceedings into 21 languages. So researchers often use that data to help build translation systems. The fundamental difference with statistical translation systems is that they don’t try to generate one exact translation. Instead, they generate thousands of possible translations and then they rank those translations by likely each is to be correct. They estimate how “correct” something is by how similar it is to the training data. Here’s how it works: First, we break up our sentence into simple chunks that can each be easily translated: Next, we will translate each of these chunks by finding all the ways humans have translated those same chunks of words in our training data. It’s important to note that we are not just looking up these chunks in a simple translation dictionary. Instead, we are seeing how actual people translated these same chunks of words in real-world sentences. This helps us capture all of the different ways they can be used in different contexts: Some of these possible translations are used more frequently than others. Based on how frequently each translation appears in our training data, we can give it a score. For example, it’s much more common for someone to say “Quiero” to mean “I want” than to mean “I try.” So we can use how frequently “Quiero” was translated to “I want” in our training data to give that translation more weight than a less frequent translation. Next, we will use every possible combination of these chunks to generate a bunch of possible sentences. Just from the chunk translations we listed in Step 2, we can already generate nearly 2,500 different variations of our sentence by combining the chunks in different ways. Here are some examples: But in a real-world system, there will be even more possible chunk combinations because we’ll also try different orderings of words and different ways of chunking the sentence: Now need to scan through all of these generated sentences to find the one that is that sounds the “most human.” To do this, we compare each generated sentence to millions of real sentences from books and news stories written in English. The more English text we can get our hands on, the better. Take this possible translation: It’s likely that no one has ever written a sentence like this in English, so it would not be very similar to any sentences in our data set. We’ll give this possible translation a low probability score. But look at this possible translation: This sentence will be similar to something in our training set, so it will get a high probability score. After trying all possible sentences, we’ll pick the sentence that has the most likely chunk translations while also being the most similar overall to real English sentences. Our final translation would be “I want to go to the prettiest beach.” Not bad! Statistical machine translation systems perform much better than rule-based systems if you give them enough training data. Franz Josef Och improved on these ideas and used them to build Google Translate in the early 2000s. Machine Translation was finally available to the world. 
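Here is a toy sketch of that generate-and-score loop. The phrase table and scores are made up, and a real system would also multiply in a language-model score for how natural the English sounds and would try different word orderings:

```python
import itertools

# Toy phrase table: each Spanish chunk maps to candidate English phrases,
# scored by how often that translation showed up in (imaginary) training data.
phrase_table = {
    "Quiero":     [("I want", 0.8), ("I try", 0.1), ("I love", 0.05)],
    "ir a":       [("to go to", 0.9), ("to leave to", 0.05)],
    "la playa":   [("the beach", 0.95)],
    "más bonita": [("the prettiest", 0.6), ("most pretty", 0.2)],
}

best = (0.0, "")
for combo in itertools.product(*phrase_table.values()):
    sentence = " ".join(phrase for phrase, _ in combo)
    score = 1.0
    for _, p in combo:
        score *= p                 # likelihood of this particular combination of chunks
    best = max(best, (score, sentence))

print(best[1])   # -> "I want to go to the beach the prettiest" in this toy version;
                 #    reordering plus language-model scoring is what fixes the word order
```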
In the early days, it was surprising to everyone that the “dumb” approach to translating based on probability worked better than rule-based systems designed by linguists. This led to a (somewhat mean) saying among researchers in the 80s: Statistical machine translation systems work well, but they are complicated to build and maintain. Every new pair of languages you want to translate requires experts to tweak and tune a new multi-step translation pipeline. Because it is so much work to build these different pipelines, trade-offs have to be made. If you are asking Google to translate Georgian to Telegu, it has to internally translate it into English as an intermediate step because there’s not enough Georgain-to-Telegu translations happening to justify investing heavily in that language pair. And it might do that translation using a less advanced translation pipeline than if you had asked it for the more common choice of French-to-English. Wouldn’t it be cool if we could have the computer do all that annoying development work for us? The holy grail of machine translation is a black box system that learns how to translate by itself— just by looking at training data. With Statistical Machine Translation, humans are still needed to build and tweak the multi-step statistical models. In 2014, KyungHyun Cho’s team made a breakthrough. They found a way to apply deep learning to build this black box system. Their deep learning model takes in a parallel corpora and and uses it to learn how to translate between those two languages without any human intervention. Two big ideas make this possible — recurrent neural networks and encodings. By combining these two ideas in a clever way, we can build a self-learning translation system. We’ve already talked about recurrent neural networks in Part 2, but let’s quickly review. A regular (non-recurrent) neural network is a generic machine learning algorithm that takes in a list of numbers and calculates a result (based on previous training). Neural networks can be used as a black box to solve lots of problems. For example, we can use a neural network to calculate the approximate value of a house based on attributes of that house: But like most machine learning algorithms, neural networks are stateless. You pass in a list of numbers and the neural network calculates a result. If you pass in those same numbers again, it will always calculate the same result. It has no memory of past calculations. In other words, 2 + 2 always equals 4. A recurrent neural network (or RNN for short) is a slightly tweaked version of a neural network where the previous state of the neural network is one of the inputs to the next calculation. This means that previous calculations change the results of future calculations! Why in the world would we want to do this? Shouldn’t 2 + 2 always equal 4 no matter what we last calculated? This trick allows neural networks to learn patterns in a sequence of data. For example, you can use it to predict the next most likely word in a sentence based on the first few words: RNNs are useful any time you want to learn patterns in data. Because human language is just one big, complicated pattern, RNNs are increasingly used in many areas of natural language processing. If you want to learn more about RNNs, you can read Part 2 where we used one to generate a fake Ernest Hemingway book and then used another one to generate fake Super Mario Brothers levels. The other idea we need to review is Encodings. 
We talked about encodings in Part 4 as part of face recognition. To explain encodings, let’s take a slight detour into how we can tell two different people apart with a computer. When you are trying to tell two faces apart with a computer, you collect different measurements from each face and use those measurements to compare faces. For example, we might measure the size of each ear or the spacing between the eyes and compare those measurements from two pictures to see if they are the same person. You’re probably already familiar with this idea from watching any primetime detective show like CSI: The idea of turning a face into a list of measurements is an example of an encoding. We are taking raw data (a picture of a face) and turning it into a list of measurements that represent it (the encoding). But like we saw in Part 4, we don’t have to come up with a specific list of facial features to measure ourselves. Instead, we can use a neural network to generate measurements from a face. The computer can do a better job than us in figuring out which measurements are best able to differentiate two similar people: This is our encoding. It lets us represent something very complicated (a picture of a face) with something simple (128 numbers). Now comparing two different faces is much easier because we only have to compare these 128 numbers for each face instead of comparing full images. Guess what? We can do the same thing with sentences! We can come up with an encoding that represents every possible different sentence as a series of unique numbers: To generate this encoding, we’ll feed the sentence into the RNN, one word at time. The final result after the last word is processed will be the values that represent the entire sentence: Great, so now we have a way to represent an entire sentence as a set of unique numbers! We don’t know what each number in the encoding means, but it doesn’t really matter. As long as each sentence is uniquely identified by it’s own set of numbers, we don’t need to know exactly how those numbers were generated. Ok, so we know how to use an RNN to encode a sentence into a set of unique numbers. How does that help us? Here’s where things get really cool! What if we took two RNNs and hooked them up end-to-end? The first RNN could generate the encoding that represents a sentence. Then the second RNN could take that encoding and just do the same logic in reverse to decode the original sentence again: Of course being able to encode and then decode the original sentence again isn’t very useful. But what if (and here’s the big idea!) we could train the second RNN to decode the sentence into Spanish instead of English? We could use our parallel corpora training data to train it to do that: And just like that, we have a generic way of converting a sequence of English words into an equivalent sequence of Spanish words! This is a powerful idea: Note that we glossed over some things that are required to make this work with real-world data. For example, there’s additional work you have to do to deal with different lengths of input and output sentences (see bucketing and padding). There’s also issues with translating rare words correctly. If you want to build your own language translation system, there’s a working demo included with TensorFlow that will translate between English and French. However, this is not for the faint of heart or for those with limited budgets. This technology is still new and very resource intensive. 
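The wiring itself is short to express. Below is a bare-bones encoder-decoder sketch in Keras, with no attention mechanism and no inference loop, and with placeholder vocabulary sizes and dimensions; it only shows the two-RNN hookup described above, not a production translator:

```python
from tensorflow.keras.layers import Input, LSTM, Dense, Embedding
from tensorflow.keras.models import Model

vocab_en, vocab_es, dim = 10000, 10000, 256    # illustrative sizes

# Encoder: read the English sentence one word at a time and keep only the
# final internal state. That state is the "encoding" of the whole sentence.
enc_in = Input(shape=(None,))
enc_emb = Embedding(vocab_en, dim)(enc_in)
_, state_h, state_c = LSTM(dim, return_state=True)(enc_emb)

# Decoder: start from the encoder's state and learn to produce the Spanish
# sentence one word at a time (trained with teacher forcing).
dec_in = Input(shape=(None,))
dec_emb = Embedding(vocab_es, dim)(dec_in)
dec_out, _, _ = LSTM(dim, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
dec_probs = Dense(vocab_es, activation="softmax")(dec_out)

model = Model([enc_in, dec_in], dec_probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit([english_ids, spanish_ids_in], spanish_ids_out, ...)  # needs a parallel corpus
```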
Even if you have a fast computer with a high-end video card, it might take about a month of continuous processing time to train your own language translation system. Also, Sequence-to-sequence language translation techniques are improving so rapidly that it’s hard to keep up. Many recent improvements (like adding an attention mechanism or tracking context) are significantly improving results but these developments are so new that there aren’t even wikipedia pages for them yet. If you want to do anything serious with sequence-to-sequence learning, you’ll need to keep with new developments as they occur. So what else can we do with sequence-to-sequence models? About a year ago, researchers at Google showed that you can use sequence-to-sequence models to build AI bots. The idea is so simple that it’s amazing it works at all. First, they captured chat logs between Google employees and Google’s Tech Support team. Then they trained a sequence-to-sequence model where the employee’s question was the input sentence and the Tech Support team’s response was the “translation” of that sentence. When a user interacted with the bot, they would “translate” each of the user’s messages with this system to get the bot’s response. The end result was a semi-intelligent bot that could (sometimes) answer real tech support questions. Here’s part of a sample conversation between a user and the bot from their paper: They also tried building a chat bot based on millions of movie subtitles. The idea was to use conversations between movie characters as a way to train a bot to talk like a human. The input sentence is a line of dialog said by one character and the “translation” is what the next character said in response: This produced really interesting results. Not only did the bot converse like a human, but it displayed a small bit of intelligence: This is only the beginning of the possibilities. We aren’t limited to converting one sentence into another sentence. It’s also possible to make an image-to-sequence model that can turn an image into text! A different team at Google did this by replacing the first RNN with a Convolutional Neural Network (like we learned about in Part 3). This allows the input to be a picture instead of a sentence. The rest works basically the same way: And just like that, we can turn pictures into words (as long as we have lots and lots of training data)! Andrej Karpathy expanded on these ideas to build a system capable of describing images in great detail by processing multiple regions of an image separately: This makes it possible to build image search engines that are capable of finding images that match oddly specific search queries: There’s even researchers working on the reverse problem, generating an entire picture based on just a text description! Just from these examples, you can start to imagine the possibilities. So far, there have been sequence-to-sequence applications in everything from speech recognition to computer vision. I bet there will be a lot more over the next year. If you want to learn more in depth about sequence-to-sequence models and translation, here’s some recommended resources: If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. 
I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun! Part 6! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it.
Chris Dixon
5.3K
12
https://medium.com/@cdixon/eleven-reasons-to-be-excited-about-the-future-of-technology-ef5f9b939cb2?source=tag_archive---------8----------------
Eleven Reasons To Be Excited About The Future of Technology
In the year 1820, a person could expect to live less than 35 years, 94% of the global population lived in extreme poverty, and less that 20% of the population was literate. Today, human life expectancy is over 70 years, less that 10% of the global population lives in extreme poverty, and over 80% of people are literate. These improvements are due mainly to advances in technology, beginning in the industrial age and continuing today in the information age. There are many exciting new technologies that will continue to transform the world and improve human welfare. Here are eleven of them. Self-driving cars exist today that are safer than human-driven cars in most driving conditions. Over the next 3–5 years they‘ll get even safer, and will begin to go mainstream. The World Health Organization estimates that 1.25 million people die from car-related injuries per year. Half of the deaths are pedestrians, bicyclists, and motorcyclists hit by cars. Cars are the leading cause of death for people ages 15–29 years old. Just as cars reshaped the world in the 20th century, so will self-driving cars in the 21st century. In most cities, between 20–30% of usable space is taken up by parking spaces, and most cars are parked about 95% of the time. Self-driving cars will be in almost continuous use (most likely hailed from a smartphone app), thereby dramatically reducing the need for parking. Cars will communicate with one another to avoid accidents and traffic jams, and riders will be able to spend commuting time on other activities like work, education, and socializing. Attempts to fight climate change by reducing the demand for energy haven’t worked. Fortunately, scientists, engineers, and entrepreneurs have been working hard on the supply side to make clean energy convenient and cost-effective. Due to steady technological and manufacturing advances, the price of solar cells has dropped 99.5% since 1977. Solar will soon be more cost efficient than fossil fuels. The cost of wind energy has also dropped to an all-time low, and in the last decade represented about a third of newly installed US energy capacity. Forward thinking organizations are taking advantage of this. For example, in India there is an initiative to convert airports to self-sustaining clean energy. Tesla is making high-performance, affordable electric cars, and installing electric charging stations worldwide. There are hopeful signs that clean energy could soon be reaching a tipping point. For example, in Japan, there are now more electric charging stations than gas stations. And Germany produces so much renewable energy, it sometimes produces even more than it can use. Computer processors only recently became fast enough to power comfortable and convincing virtual and augmented reality experiences. Companies like Facebook, Google, Apple, and Microsoft are investing billions of dollars to make VR and AR more immersive, comfortable, and affordable. People sometimes think VR and AR will be used only for gaming, but over time they will be used for all sorts of activities. For example, we’ll use them to manipulate 3-D objects: To meet with friends and colleagues from around the world: And even for medical applications, like treating phobias or helping rehabilitate paralysis victims: VR and AR have been dreamed about by science fiction fans for decades. In the next few years, they’ll finally become a mainstream reality. GPS started out as a military technology but is now used to hail taxis, get mapping directions, and hunt Pokémon. 
Likewise, drones started out as a military technology, but are increasingly being used for a wide range of consumer and commercial applications. For example, drones are being used to inspect critical infrastructure like bridges and power lines, to survey areas struck by natural disasters, and many other creative uses like fighting animal poaching. Amazon and Google are building drones to deliver household items. The startup Zipline uses drones to deliver medical supplies to remote villages that can’t be accessed by roads. There is also a new wave of startups working on flying cars (including two funded by the cofounder of Google, Larry Page). Flying cars use the same advanced technology used in drones but are large enough to carry people. Due to advances in materials, batteries, and software, flying cars will be significantly more affordable and convenient than today’s planes and helicopters. Artificial intelligence has made rapid advances in the last decade, due to new algorithms and massive increases in data collection and computing power. AI can be applied to almost any field. For example, in photography an AI technique called artistic style transfer transforms photographs into the style of a given painter: Google built an AI system that controls its datacenter power systems, saving hundreds of millions of dollars in energy costs. The broad promise of AI is to liberate people from repetitive mental tasks the same way the industrial revolution liberated people from repetitive physical tasks. Some people worry that AI will destroy jobs. History has shown that while new technology does indeed eliminate jobs, it also creates new and better jobs to replace them. For example, with advent of the personal computer, the number of typographer jobs dropped, but the increase in graphic designer jobs more than made up for it. It is much easier to imagine jobs that will go away than new jobs that will be created. Today millions of people work as app developers, ride-sharing drivers, drone operators, and social media marketers— jobs that didn’t exist and would have been difficult to even imagine ten years ago. By 2020, 80% of adults on earth will have an internet-connected smartphone. An iPhone 6 has about 2 billion transistors, roughly 625 times more transistors than a 1995 Intel Pentium computer. Today’s smartphones are what used to be considered supercomputers. Internet-connected smartphones give ordinary people abilities that, just a short time ago, were only available to an elite few: Protocols are the plumbing of the internet. Most of the protocols we use today were developed decades ago by academia and government. Since then, protocol development mostly stopped as energy shifted to developing proprietary systems like social networks and messaging apps. Cryptocurrency and blockchain technologies are changing this by providing a new business model for internet protocols. This year alone, hundreds of millions of dollars were raised for a broad range of innovative blockchain-based protocols. Protocols based on blockchains also have capabilities that previous protocols didn’t. For example, Ethereum is a new blockchain-based protocol that can be used to create smart contracts and trusted databases that are immune to corruption and censorship. While college tuition skyrockets, anyone with a smartphone can study almost any topic online, accessing educational content that is mostly free and increasingly high-quality. Encyclopedia Britannica used to cost $1,400. 
Now anyone with a smartphone can instantly access Wikipedia. You used to have to go to school or buy programming books to learn computer programming. Now you can learn from a community of over 40 million programmers at Stack Overflow. YouTube has millions of hours of free tutorials and lectures, many of which are produced by top professors and universities. The quality of online education is getting better all the time. For the last 15 years, MIT has been recording lectures and compiling materials that cover over 2000 courses. As perhaps the greatest research university in the world, MIT has always been ahead of the trends. Over the next decade, expect many other schools to follow MIT’s lead. Earth is running out of farmable land and fresh water. This is partly because our food production systems are incredibly inefficient. It takes an astounding 1799 gallons of water to produce 1 pound of beef. Fortunately, a variety of new technologies are being developed to improve our food system. For example, entrepreneurs are developing new food products that are tasty and nutritious substitutes for traditional foods but far more environmentally friendly. The startup Impossible Foods invented meat products that look and taste like the real thing but are actually made of plants. Their burger uses 95% less land, 74% less water, and produces 87% less greenhouse gas emissions than traditional burgers. Other startups are creating plant-based replacements for milk, eggs, and other common foods. Soylent is a healthy, inexpensive meal replacement that uses advanced engineered ingredients that are much friendlier to the environment than traditional ingredients. Some of these products are developed using genetic modification, a powerful scientific technique that has been widely mischaracterized as dangerous. According to a study by the Pew Organization, 88% of scientists think genetically modified foods are safe. Another exciting development in food production is automated indoor farming. Due to advances in solar energy, sensors, lighting, robotics, and artificial intelligence, indoor farms have become viable alternatives to traditional outdoor farms. Compared to traditional farms, automated indoor farms use roughly 10 times less water and land. Crops are harvested many more times per year, there is no dependency on weather, and no need to use pesticides. Until recently, computers have only been at the periphery of medicine, used primarily for research and record keeping. Today, the combination of computer science and medicine is leading to a variety of breakthroughs. For example, just fifteen years ago, it cost $3B to sequence a human genome. Today, the cost is about a thousand dollars and continues to drop. Genetic sequencing will soon be a routine part of medicine. Genetic sequencing generates massive amounts of data that can be analyzed using powerful data analysis software. One application is analyzing blood samples for early detection of cancer. Further genetic analysis can help determine the best course of treatment. Another application of computers to medicine is in prosthetic limbs. Here a young girl is using prosthetic hands she controls using her upper-arm muscles: Soon we’ll have the technology to control prothetic limbs with just our thoughts using brain-to-machine interfaces. Computers are also becoming increasingly effective at diagnosing diseases. 
An artificial intelligence system recently diagnosed a rare disease that human doctors failed to diagnose by finding hidden patterns in 20 million cancer records. Since the beginning of the space age in the 1950s, the vast majority of space funding has come from governments. But that funding has been in decline: for example, NASA’s budget dropped from about 4.5% of the federal budget in the 1960s to about 0.5% of the federal budget today. The good news is that private space companies have started filling the void. These companies provide a wide range of products and services, including rocket launches, scientific research, communications and imaging satellites, and emerging speculative business models like asteroid mining. The most famous private space company is Elon Musk’s SpaceX, which successfully sent rockets into space that can return home to be reused. Perhaps the most intriguing private space company is Planetary Resources, which is trying to pioneer a new industry: mining minerals from asteroids. If successful, asteroid mining could lead to a new gold rush in outer space. Like previous gold rushes, this could lead to speculative excess, but also dramatically increased funding for new technologies and infrastructure. These are just a few of the amazing technologies we’ll see developed in the coming decades. 2016 is just the beginning of a new age of wonders. As futurist Kevin Kelly says: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. www.cdixon.org/about
Tal Perry
2.6K
17
https://medium.com/@TalPerry/deep-learning-the-stock-market-df853d139e02?source=tag_archive---------9----------------
Deep Learning the Stock Market – Tal Perry – Medium
Update 25.1.17 — Took me a while but here is an ipython notebook with a rough implementation In the past few months I’ve been fascinated with “Deep Learning”, especially its applications to language and text. I’ve spent the bulk of my career in financial technologies, mostly in algorithmic trading and alternative data services. You can see where this is going. I wrote this to get my ideas straight in my head. While I’ve become a “Deep Learning” enthusiast, I don’t have too many opportunities to brain dump an idea in most of its messy glory. I think that a decent indication of a clear thought is the ability to articulate it to people not from the field. I hope that I’ve succeeded in doing that and that my articulation is also a pleasurable read. Why NLP is relevant to Stock prediction In many NLP problems we end up taking a sequence and encoding it into a single fixed size representation, then decoding that representation into another sequence. For example, we might tag entities in the text, translate from English to French or convert audio frequencies to text. There is a torrent of work coming out in these areas and a lot of the results are achieving state of the art performance. In my mind the biggest difference between the NLP and financial analysis is that language has some guarantee of structure, it’s just that the rules of the structure are vague. Markets, on the other hand, don’t come with a promise of a learnable structure, that such a structure exists is the assumption that this project would prove or disprove (rather it might prove or disprove if I can find that structure). Assuming the structure is there, the idea of summarizing the current state of the market in the same way we encode the semantics of a paragraph seems plausible to me. If that doesn’t make sense yet, keep reading. It will. You shall know a word by the company it keeps (Firth, J. R. 1957:11) There is tons of literature on word embeddings. Richard Socher’s lecture is a great place to start. In short, we can make a geometry of all the words in our language, and that geometry captures the meaning of words and relationships between them. You may have seen the example of “King-man +woman=Queen” or something of the sort. Embeddings are cool because they let us represent information in a condensed way. The old way of representing words was holding a vector (a big list of numbers) that was as long as the number of words we know, and setting a 1 in a particular place if that was the current word we are looking at. That is not an efficient approach, nor does it capture any meaning. With embeddings, we can represent all of the words in a fixed number of dimensions (300 seems to be plenty, 50 works great) and then leverage their higher dimensional geometry to understand them. The picture below shows an example. An embedding was trained on more or less the entire internet. After a few days of intensive calculations, each word was embedded in some high dimensional space. This “space” has a geometry, concepts like distance, and so we can ask which words are close together. The authors/inventors of that method made an example. Here are the words that are closest to Frog. But we can embed more than just words. We can do, say , stock market embeddings. Market2Vec The first word embedding algorithm I heard about was word2vec. I want to get the same effect for the market, though I’ll be using a different algorithm. 
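For the word side of that analogy, the king/queen arithmetic takes only a couple of lines once you have pretrained vectors on disk. This sketch uses gensim, and the vectors file name is a placeholder (any word2vec-format vectors will do):

```python
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("pretrained_vectors.bin", binary=True)

# "King - man + woman": the closest words to the result of that arithmetic.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# Words whose embeddings sit closest to "frog" in the space.
print(vectors.most_similar("frog", topn=5))
```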
My input data is a csv, the first column is the date, and there are 4*1000 columns corresponding to the High Low Open Closing price of 1000 stocks. That is my input vector is 4000 dimensional, which is too big. So the first thing I’m going to do is stuff it into a lower dimensional space, say 300 because I liked the movie. Taking something in 4000 dimensions and stuffing it into a 300-dimensional space my sound hard but its actually easy. We just need to multiply matrices. A matrix is a big excel spreadsheet that has numbers in every cell and no formatting problems. Imagine an excel table with 4000 columns and 300 rows, and when we basically bang it against the vector a new vector comes out that is only of size 300. I wish that’s how they would have explained it in college. The fanciness starts here as we’re going to set the numbers in our matrix at random, and part of the “deep learning” is to update those numbers so that our excel spreadsheet changes. Eventually this matrix spreadsheet (I’ll stick with matrix from now on) will have numbers in it that bang our original 4000 dimensional vector into a concise 300 dimensional summary of itself. We’re going to get a little fancier here and apply what they call an activation function. We’re going to take a function, and apply it to each number in the vector individually so that they all end up between 0 and 1 (or 0 and infinity, it depends). Why ? It makes our vector more special, and makes our learning process able to understand more complicated things. How? So what? What I’m expecting to find is that that new embedding of the market prices (the vector) into a smaller space captures all the essential information for the task at hand, without wasting time on the other stuff. So I’d expect they’d capture correlations between other stocks, perhaps notice when a certain sector is declining or when the market is very hot. I don’t know what traits it will find, but I assume they’ll be useful. Now What Lets put aside our market vectors for a moment and talk about language models. Andrej Karpathy wrote the epic post “The Unreasonable effectiveness of Recurrent Neural Networks”. If I’d summarize in the most liberal fashion the post boils down to And then as a punchline, he generated a bunch of text that looks like Shakespeare. And then he did it again with the Linux source code. And then again with a textbook on Algebraic geometry. So I’ll get back to the mechanics of that magic box in a second, but let me remind you that we want to predict the future market based on the past just like he predicted the next word based on the previous one. Where Karpathy used characters, we’re going to use our market vectors and feed them into the magic black box. We haven’t decided what we want it to predict yet, but that is okay, we won’t be feeding its output back into it either. Going deeper I want to point out that this is where we start to get into the deep part of deep learning. So far we just have a single layer of learning, that excel spreadsheet that condenses the market. Now we’re going to add a few more layers and stack them, to make a “deep” something. That’s the deep in deep learning. So Karpathy shows us some sample output from the Linux source code, this is stuff his black box wrote. Notice that it knows how to open and close parentheses, and respects indentation conventions; The contents of the function are properly indented and the multi-line printk statement has an inner indentation. That means that this magic box understands long range dependencies. 
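Backing up to the Market2Vec step for a moment before following the Karpathy thread: the "bang the 4000-dimensional vector against a matrix, then squash it with an activation" operation is just this. A numpy sketch with randomly initialized, untrained weights (the sizes are the ones mentioned above):

```python
import numpy as np

n_inputs, n_embed = 4000, 300          # 4 prices x 1000 stocks in, 300 numbers out

# The "excel spreadsheet" of weights, started off at random.
# Training (the backward pass described later) is what makes these numbers useful.
W = np.random.randn(n_inputs, n_embed) * 0.01
b = np.zeros(n_embed)

def market2vec(prices):
    """Squash one 4000-dimensional market snapshot into a 300-dimensional vector."""
    x = prices @ W + b
    return np.maximum(x, 0.0)          # ReLU activation: every value ends up >= 0

snapshot = np.random.rand(n_inputs)    # stand-in for one row of the CSV (minus the date)
print(market2vec(snapshot).shape)      # (300,)
```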
When it’s indenting within the print statement, it knows it’s in a print statement and also remembers that it’s in a function (or at least another indented scope). That’s nuts. It’s easy to gloss over, but an algorithm that has the ability to capture and remember long-term dependencies is super useful because... We want to find long-term dependencies in the market. Inside the magical black box What’s inside this magical black box? It is a type of Recurrent Neural Network (RNN) called an LSTM. An RNN is a deep learning algorithm that operates on sequences (like sequences of characters). At every step, it takes a representation of the next character (like the embeddings we talked about before) and operates on the representation with a matrix, like we saw before. The thing is, the RNN has some form of internal memory, so it remembers what it saw previously. It uses that memory to decide how exactly it should operate on the next input. Using that memory, the RNN can “remember” that it is inside of an indented scope, and that is how we get properly nested output text. A fancy version of an RNN is called a Long Short Term Memory (LSTM). An LSTM has cleverly designed memory that allows it to choose what to remember and what to forget. So an LSTM can see a “{“ and say to itself “Oh yeah, that’s important, I should remember that”, and when it does, it essentially remembers an indication that it is in a nested scope. Once it sees the corresponding “}” it can decide to forget the original opening brace and thus forget that it is in a nested scope. We can have the LSTM learn more abstract concepts by stacking a few of them on top of each other; that would make us “deep” again. Now the outputs of the previous LSTM become the inputs of the next LSTM, and each one goes on to learn higher abstractions of the data coming in. In the example above (and this is just illustrative speculation), the first layer of LSTMs might learn that characters separated by a space are “words”. The next layer might learn word types like (static void action_new_function). The next layer might learn the concept of a function and its arguments, and so on. It’s hard to tell exactly what each layer is doing, though Karpathy’s blog has a really nice example of how he visualized exactly that. Connecting Market2Vec and LSTMs The studious reader will notice that Karpathy used characters as his inputs, not embeddings (technically, a one-hot encoding of characters). But Lars Eidnes actually used word embeddings when he wrote Auto-Generating Clickbait With Recurrent Neural Networks. The figure above shows the network he used. Ignore the SoftMax part (we’ll get to it later). For the moment, check out how he puts a sequence of word vectors in at the bottom. (Remember, a “word vector” is a representation of a word in the form of a bunch of numbers, like we saw in the beginning of this post.) Lars inputs a sequence of word vectors, and the network operates on each one of them in turn. We’re going to do the same thing with one difference: instead of word vectors we’ll input “MarketVectors”, those market vectors we described before. To recap, the MarketVectors should contain a summary of what’s happening in the market at a given point in time. By putting a sequence of them through LSTMs I hope to capture the long-term dynamics that have been happening in the market. By stacking together a few layers of LSTMs I hope to capture higher-level abstractions of the market’s behavior. 
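To make the stack described so far concrete, here is a minimal Keras sketch of a Market2Vec projection followed by stacked LSTMs. Only the 4000-to-300 projection comes from this post; the sequence length, the LSTM sizes, and the sigmoid activation are placeholder assumptions of mine, not the author’s choices.

```python
from keras.models import Sequential
from keras.layers import Dense, LSTM, TimeDistributed

SEQ_LEN = 100      # how many past market snapshots the network sees at once (assumed)
N_FEATURES = 4000  # 4 prices x 1000 stocks, as described in the post

model = Sequential()
# "Market2Vec": the randomly-initialized matrix that squashes each 4000-dim
# snapshot into a 300-dim market vector, applied at every time step.
model.add(TimeDistributed(Dense(300, activation="sigmoid"),
                          input_shape=(SEQ_LEN, N_FEATURES)))
# Two stacked LSTMs: the "deep" part. Each layer feeds its outputs to the next,
# which can then learn higher-level abstractions of the market's behavior.
model.add(LSTM(128, return_sequences=True))
model.add(LSTM(128))
```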
What Comes Out Thus far we haven’t talked at all about how the algorithm actually learns anything; we’ve just talked about all the clever transformations we’ll do on the data. We’ll defer that conversation to a few paragraphs down, but please keep this part in mind, as it is the setup for the punch line that makes everything else worthwhile. In Karpathy’s example, the output of the LSTMs is a vector that represents the next character in some abstract representation. In Eidnes’ example, the output of the LSTMs is a vector that represents what the next word will be in some abstract space. The next step in both cases is to change that abstract representation into a probability vector, that is, a list that says how likely each character or word, respectively, is to appear next. That’s the job of the SoftMax function. Once we have a list of likelihoods, we select the character or word that is the most likely to appear next. In our case of “predicting the market”, we need to ask ourselves what exactly we want to predict about the market. Some of the options that I thought about were: 1 and 2 are regression problems, where we have to predict an actual number instead of the likelihood of a specific event (like the letter n appearing or the market going up). Those are fine but not what I want to do. 3 and 4 are fairly similar; they both ask to predict an event (in technical jargon, a class label). An event could be the letter n appearing next, or it could be “moved up 5% while not going down more than 3% in the last 10 minutes”. The trade-off between 3 and 4 is that 3 is much more common and thus easier to learn about, while 4 is more valuable, as not only is it an indicator of profit but it also has some constraint on risk. 5 is the one we’ll continue with for this article, because it’s similar to 3 and 4 but has mechanics that are easier to follow. The VIX is sometimes called the Fear Index and it represents how volatile the stocks in the S&P 500 are. It is derived from the implied volatility of options on the index. Sidenote — Why predict the VIX What makes the VIX an interesting target is that Back to our LSTM outputs and the SoftMax How do we use the formulations we saw before to predict changes in the VIX a few minutes in the future? For each point in our dataset, we’ll look at what happened to the VIX 5 minutes later. If it went up by more than 1% without going down more than 0.5% during that time, we’ll output a 1, otherwise a 0. Then we’ll get a sequence of 0s and 1s, one for each observation. We want to take the vector that our LSTMs output and squish it so that it gives us the probability of the next item in our sequence being a 1. The squishing happens in the SoftMax part of the diagram above. (Technically, since we only have one class now, we use a sigmoid.) So before we get into how this thing learns, let’s recap what we’ve done so far. How does this thing learn? Now the fun part. Everything we did until now was called the forward pass; we do all of those steps while we train the algorithm and also when we use it in production. Here we’ll talk about the backward pass, the part we do only while in training, the part that makes our algorithm learn. So during training, not only did we prepare years’ worth of historical data, we also prepared a sequence of prediction targets, that list of 0s and 1s that shows whether the VIX moved the way we want it to after each observation in our data. To learn, we’ll feed the market data to our network and compare its output to what we calculated. 
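As one concrete reading of that labeling rule, here is a small sketch of how the 0/1 targets could be built from minute-level VIX values. The function name and the choice to ignore the order in which the rise and the dip happen inside the window are my assumptions; the post doesn’t pin those details down.

```python
import numpy as np

def make_vix_labels(vix, horizon=5, up=0.01, down=0.005):
    """Label each observation 1 if the VIX rises more than `up` over the next
    `horizon` observations without also dropping more than `down`, else 0."""
    labels = np.zeros(len(vix) - horizon, dtype=int)
    for t in range(len(labels)):
        window = vix[t + 1 : t + 1 + horizon]
        rose_enough = window.max() >= vix[t] * (1 + up)
        fell_too_far = window.min() <= vix[t] * (1 - down)
        labels[t] = int(rose_enough and not fell_too_far)
    return labels

# Example with made-up minute-level VIX values.
vix = np.array([15.0, 15.1, 15.3, 15.2, 15.4, 15.6, 15.1, 15.0, 14.9, 15.2])
print(make_vix_labels(vix))
```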
Comparing in our case will be simple subtraction; that is, we’ll say that our model’s error is error = √((actual − predicted)²), or in English, the square root of the square of the difference between what actually happened and what we predicted. Here’s the beauty. That’s a differentiable function, that is, we can tell how much the error would have changed if our prediction had changed a little. Our prediction is the outcome of a differentiable function, the SoftMax. The inputs to the SoftMax, the LSTMs, are all mathematical functions that are differentiable. Now all of these functions are full of parameters, those big Excel spreadsheets I talked about ages ago. So at this stage what we do is take the derivative of the error with respect to every one of the millions of parameters in all of those Excel spreadsheets we have in our model. When we do that, we can see how the error will change when we change each parameter, so we’ll change each parameter in a way that will reduce the error. This procedure propagates all the way to the beginning of the model. It tweaks the way we embed the inputs into MarketVectors so that our MarketVectors represent the most significant information for our task. It tweaks when and what each LSTM chooses to remember so that their outputs are the most relevant to our task. It tweaks the abstractions our LSTMs learn so that they learn the most important abstractions for our task. Which, in my opinion, is amazing, because we have all of this complexity and abstraction that we never had to specify anywhere. It’s all inferred MathaMagically from the specification of what we consider to be an error. What’s next Now that I’ve laid this out in writing and it still makes sense to me, I want to actually build it. So, if you’ve come this far, please point out my errors and share your inputs. Other thoughts Here are some mostly more advanced thoughts about this project: what other things I might try, and why it makes sense to me that this may actually work. Liquidity and efficient use of capital Generally, the more liquid a particular market is, the more efficient it is. I think this is due to a chicken-and-egg cycle: as a market becomes more liquid, it is able to absorb more capital moving in and out without that capital hurting itself. As a market becomes more liquid and more capital can be used in it, you’ll find more sophisticated players moving in. This is because it is expensive to be sophisticated, so you need to make returns on a large chunk of capital in order to justify your operational costs. A quick corollary is that in less liquid markets the competition isn’t quite as sophisticated, and so the opportunities a system like this can find may not have been traded away. The point being, were I to try and trade this, I would try and trade it on less liquid segments of the market, that is, maybe the TASE 100 instead of the S&P 500. This stuff is new The knowledge of these algorithms, the frameworks to execute them, and the computing power to train them are all new, at least in the sense that they are available to the average Joe such as myself. I’d assume that top players figured this stuff out years ago and have had the capacity to execute for just as long, but, as I mention in the paragraph above, they are likely executing in liquid markets that can support their size. The next tier of market participants, I assume, has a slower velocity of technological assimilation, and in that sense there is, or soon will be, a race to execute on this in as-yet untapped markets. 
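Here is a toy numerical sketch of that backward-pass idea on a single parameter, using the error described above. The learning rate, starting values, and the use of a numerical rather than analytic derivative are purely for illustration.

```python
import numpy as np

def predict(w, x):
    # A miniature "model": a single weight followed by a sigmoid squish.
    return 1.0 / (1.0 + np.exp(-w * x))

def error(w, x, target):
    # The error described above: square root of the squared difference.
    return np.sqrt((target - predict(w, x)) ** 2)

w, x, target, lr, eps = 0.1, 2.0, 1.0, 0.5, 1e-6
for step in range(200):
    # How much does the error change if w changes a little?
    grad = (error(w + eps, x, target) - error(w - eps, x, target)) / (2 * eps)
    w -= lr * grad  # nudge w in the direction that reduces the error

print(round(predict(w, x), 3))  # close to the target of 1.0
```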
Multiple Time Frames While I mentioned a single stream of inputs above, I imagine that a more efficient way to train would be to train market vectors (at least) on multiple time frames and feed them in at the inference stage. That is, my lowest time frame would be sampled every 30 seconds, and I’d expect the network to learn dependencies that stretch hours at most. I don’t know if they are relevant or not, but I think there are patterns on multiple time frames, and if the cost of computation can be brought low enough then it is worthwhile to incorporate them into the model. I’m still wrestling with how best to represent these on the computational graph, and perhaps it is not mandatory to start with. MarketVectors When using word vectors in NLP, we usually start with a pretrained model and continue adjusting the embeddings during training of our model. In my case, there are no pretrained market vectors available, nor is there a clear algorithm for training them. My original consideration was to use an auto-encoder, like in this paper, but end-to-end training is cooler. A more serious consideration is the success of sequence-to-sequence models in translation and speech recognition, where a sequence is eventually encoded as a single vector and then decoded into a different representation (like from speech to text, or from English to French). In that view, the entire architecture I described is essentially the encoder, and I haven’t really laid out a decoder. But I want to achieve something specific with the first layer, the one that takes as input the 4000-dimensional vector and outputs a 300-dimensional one. I want it to find correlations or relations between various stocks and compose features about them. The alternative is to run each input through an LSTM, perhaps concatenate all of the output vectors, and consider that the output of the encoder stage. I think this will be inefficient, as the interactions and correlations between instruments and their features will be lost, and there will be 10x more computation required. On the other hand, such an architecture could naively be parallelized across multiple GPUs and hosts, which is an advantage. CNNs Recently there has been a spate of papers on character-level machine translation. This paper caught my eye, as they manage to capture long-range dependencies with a convolutional layer rather than an RNN. I haven’t given it more than a brief read, but I think that a modification where I’d treat each stock as a channel and convolve over channels first (like in RGB images) would be another way to capture the market dynamics, in the same way that they essentially encode semantic meaning from characters; a rough sketch of that idea follows below. Founder of https://LightTag.io, a platform to annotate text for NLP. Google developer expert in ML. I do deep learning on text for a living and for fun.
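To close out this post, here is a rough Keras sketch of the CNN idea from the paragraph above: treat each stock feature as a channel, the way an image network treats R, G and B, and convolve along the time axis. Filter counts, kernel sizes, and the pooling choice are placeholder assumptions rather than anything taken from the paper mentioned.

```python
from keras.models import Sequential
from keras.layers import Conv1D, GlobalMaxPooling1D, Dense

SEQ_LEN = 100      # past observations per sample (assumed)
N_CHANNELS = 4000  # one channel per stock feature, analogous to RGB channels

model = Sequential()
# Each filter mixes all stock channels at every time step (like an image filter
# mixes R, G and B), then slides along the time axis to pick up temporal patterns.
model.add(Conv1D(64, kernel_size=5, activation="relu",
                 input_shape=(SEQ_LEN, N_CHANNELS)))
model.add(Conv1D(64, kernel_size=5, activation="relu"))
model.add(GlobalMaxPooling1D())
model.add(Dense(1, activation="sigmoid"))  # the same 0/1 VIX target as before
```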
Gil Fewster
3.3K
5
https://medium.freecodecamp.org/the-mind-blowing-ai-announcement-from-google-that-you-probably-missed-2ffd31334805?source=tag_archive---------0----------------
The mind-blowing AI announcement from Google that you probably missed.
Disclaimer: I’m not an expert in neural networks or machine learning. Since originally writing this article, many people with far more expertise in these fields than myself have indicated that, while impressive, what Google have achieved is evolutionary, not revolutionary. In the very least, it’s fair to say that I’m guilty of anthropomorphising in parts of the text. I’ve left the article’s content unchanged, because I think it’s interesting to compare the gut reaction I had with the subsequent comments of experts in the field. I strongly encourage readers to browse the comments after reading the article for some perspectives more sober and informed than my own. In the closing weeks of 2016, Google published an article that quietly sailed under most people’s radars. Which is a shame, because it may just be the most astonishing article about machine learning that I read last year. Don’t feel bad if you missed it. Not only was the article competing with the pre-Christmas rush that most of us were navigating — it was also tucked away on Google’s Research Blog, beneath the geektastic headline Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System. This doesn’t exactly scream must read, does it? Especially when you’ve got projects to wind up, gifts to buy, and family feuds to be resolved — all while the advent calendar relentlessly counts down the days until Christmas like some kind of chocolate-filled Yuletide doomsday clock. Luckily, I’m here to bring you up to speed. Here’s the deal. Up until September of last year, Google Translate used phrase-based translation. It basically did the same thing you and I do when we look up key words and phrases in our Lonely Planet language guides. It’s effective enough, and blisteringly fast compared to awkwardly thumbing your way through a bunch of pages looking for the French equivalent of “please bring me all of your cheese and don’t stop until I fall over.” But it lacks nuance. Phrase-based translation is a blunt instrument. It does the job well enough to get by. But mapping roughly equivalent words and phrases without an understanding of linguistic structures can only produce crude results. This approach is also limited by the extent of an available vocabulary. Phrase-based translation has no capacity to make educated guesses at words it doesn’t recognize, and can’t learn from new input. All that changed in September, when Google gave their translation tool a new engine: the Google Neural Machine Translation system (GNMT). This new engine comes fully loaded with all the hot 2016 buzzwords, like neural network and machine learning. The short version is that Google Translate got smart. It developed the ability to learn from the people who used it. It learned how to make educated guesses about the content, tone, and meaning of phrases based on the context of other words and phrases around them. And — here’s the bit that should make your brain explode — it got creative. Google Translate invented its own language to help it translate more effectively. What’s more, nobody told it to. It didn’t develop a language (or interlingua, as Google call it) because it was coded to. It developed a new language because the software determined over time that this was the most efficient way to solve the problem of translation. Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. 
Without being told to do so. In a matter of weeks. (I’ve added a correction/retraction of this paragraph in the notes.) To understand what’s going on, we need to understand what zero-shot translation capability is. Here’s Google’s Mike Schuster, Nikhil Thorat, and Melvin Johnson from the original blog post: Here you can see an advantage of Google’s new neural machine translation system over the old phrase-based approach. The GNMT is able to learn how to translate between two languages without being explicitly taught. This wouldn’t be possible in a phrase-based model, where translation is dependent upon an explicit dictionary to map words and phrases between each pair of languages being translated. And this leads the Google engineers onto that truly astonishing discovery of creation: So there you have it. In the last weeks of 2016, as journos around the world started penning their “was this the worst year in living memory” thinkpieces, Google engineers were quietly documenting a genuinely astonishing breakthrough in software engineering and linguistics. I just thought maybe you’d want to know. OK, to really understand what’s going on we probably need multiple computer science and linguistics degrees. I’m just barely scraping the surface here. If you’ve got time to get a few degrees (or if you’ve already got them), please drop me a line and explain it all to me. Slowly. Update 1: in my excitement, it’s fair to say that I’ve exaggerated the idea of this as an ‘intelligent’ system — at least so far as we would think about human intelligence and decision making. Make sure you read Chris McDonald’s comment after the article for a more sober perspective. Update 2: Nafrondel’s excellent, detailed reply is also a must-read for an expert explanation of how neural networks function. A tinkerer Our community publishes stories worth reading on development, design, and data science.
David Venturi
10.6K
20
https://medium.freecodecamp.org/every-single-machine-learning-course-on-the-internet-ranked-by-your-reviews-3c4a7b8026c0?source=tag_archive---------1----------------
Every single Machine Learning course on the internet, ranked by your reviews
A year and a half ago, I dropped out of one of the best computer science programs in Canada. I started creating my own data science master’s program using online resources. I realized that I could learn everything I needed through edX, Coursera, and Udacity instead. And I could learn it faster, more efficiently, and for a fraction of the cost. I’m almost finished now. I’ve taken many data science-related courses and audited portions of many more. I know the options out there, and what skills are needed for learners preparing for a data analyst or data scientist role. So I started creating a review-driven guide that recommends the best courses for each subject within data science. For the first guide in the series, I recommended a few coding classes for the beginner data scientist. Then it was statistics and probability classes. Then introductions to data science. Also, data visualization. For this guide, I spent a dozen hours trying to identify every online machine learning course offered as of May 2017, extracting key bits of information from their syllabi and reviews, and compiling their ratings. My end goal was to identify the three best courses available and present them to you, below. For this task, I turned to none other than the open source Class Central community, and its database of thousands of course ratings and reviews. Since 2011, Class Central founder Dhawal Shah has kept a closer eye on online courses than arguably anyone else in the world. Dhawal personally helped me assemble this list of resources. Each course must fit three criteria: We believe we covered every notable course that fits the above criteria. Since there are seemingly hundreds of courses on Udemy, we chose to consider the most-reviewed and highest-rated ones only. There’s always a chance that we missed something, though. So please let us know in the comments section if we left a good course out. We compiled average ratings and number of reviews from Class Central and other review sites to calculate a weighted average rating for each course. We read text reviews and used this feedback to supplement the numerical ratings. We made subjective syllabus judgment calls based on three factors: A popular definition originates from Arthur Samuel in 1959: machine learning is a subfield of computer science that gives “computers the ability to learn without being explicitly programmed.” In practice, this means developing computer programs that can make predictions based on data. Just as humans can learn from experience, so can computers, where data = experience. A machine learning workflow is the process required for carrying out a machine learning project. Though individual projects can differ, most workflows share several common tasks: problem evaluation, data exploration, data preprocessing, model training/testing/deployment, etc. Below you’ll find helpful visualization of these core steps: The ideal course introduces the entire process and provides interactive examples, assignments, and/or quizzes where students can perform each task themselves. First off, let’s define deep learning. Here is a succinct description: As would be expected, portions of some of the machine learning courses contain deep learning content. I chose not to include deep learning-only courses, however. 
If you are interested in deep learning specifically, we’ve got you covered with the following article: My top three recommendations from that list would be: Several courses listed below ask students to have prior programming, calculus, linear algebra, and statistics experience. These prerequisites are understandable given that machine learning is an advanced discipline. Missing a few subjects? Good news! Some of this experience can be acquired through our recommendations in the first two articles (programming, statistics) of this Data Science Career Guide. Several top-ranked courses below also provide gentle calculus and linear algebra refreshers and highlight the aspects most relevant to machine learning for those less familiar. Stanford University’s Machine Learning on Coursera is the clear current winner in terms of ratings, reviews, and syllabus fit. Taught by the famous Andrew Ng, Google Brain founder and former chief scientist at Baidu, this was the class that sparked the founding of Coursera. It has a 4.7-star weighted average rating over 422 reviews. Released in 2011, it covers all aspects of the machine learning workflow. Though it has a smaller scope than the original Stanford class upon which it is based, it still manages to cover a large number of techniques and algorithms. The estimated timeline is eleven weeks, with two weeks dedicated to neural networks and deep learning. Free and paid options are available. Ng is a dynamic yet gentle instructor with a palpable experience. He inspires confidence, especially when sharing practical implementation tips and warnings about common pitfalls. A linear algebra refresher is provided and Ng highlights the aspects of calculus most relevant to machine learning. Evaluation is automatic and is done via multiple choice quizzes that follow each lesson and programming assignments. The assignments (there are eight of them) can be completed in MATLAB or Octave, which is an open-source version of MATLAB. Ng explains his language choice: Though Python and R are likely more compelling choices in 2017 with the increased popularity of those languages, reviewers note that that shouldn’t stop you from taking the course. A few prominent reviewers noted the following: Columbia University’s Machine Learning is a relatively new offering that is part of their Artificial Intelligence MicroMasters on edX. Though it is newer and doesn’t have a large number of reviews, the ones that it does have are exceptionally strong. Professor John Paisley is noted as brilliant, clear, and clever. It has a 4.8-star weighted average rating over 10 reviews. The course also covers all aspects of the machine learning workflow and more algorithms than the above Stanford offering. Columbia’s is a more advanced introduction, with reviewers noting that students should be comfortable with the recommended prerequisites (calculus, linear algebra, statistics, probability, and coding). Quizzes (11), programming assignments (4), and a final exam are the modes of evaluation. Students can use either Python, Octave, or MATLAB to complete the assignments. The course’s total estimated timeline is eight to ten hours per week over twelve weeks. It is free with a verified certificate available for purchase. Below are a few of the aforementioned sparkling reviews: Machine Learning A-ZTM on Udemy is an impressively detailed offering that provides instruction in both Python and R, which is rare and can’t be said for any of the other top courses. 
It has a 4.5-star weighted average rating over 8,119 reviews, which makes it the most reviewed course of the ones considered. It covers the entire machine learning workflow and an almost ridiculous (in a good way) number of algorithms through 40.5 hours of on-demand video. The course takes a more applied approach and is lighter math-wise than the above two courses. Each section starts with an “intuition” video from Eremenko that summarizes the underlying theory of the concept being taught. de Ponteves then walks through implementation with separate videos for both Python and R. As a “bonus,” the course includes Python and R code templates for students to download and use on their own projects. There are quizzes and homework challenges, though these aren’t the strong points of the course. Eremenko and the SuperDataScience team are revered for their ability to “make the complex simple.” Also, the prerequisites listed are “just some high school mathematics,” so this course might be a better option for those daunted by the Stanford and Columbia offerings. A few prominent reviewers noted the following: Our #1 pick had a weighted average rating of 4.7 out of 5 stars over 422 reviews. Let’s look at the other alternatives, sorted by descending rating. A reminder that deep learning-only courses are not included in this guide — you can find those here. The Analytics Edge (Massachusetts Institute of Technology/edX): More focused on analytics in general, though it does cover several machine learning topics. Uses R. Strong narrative that leverages familiar real-world examples. Challenging. Ten to fifteen hours per week over twelve weeks. Free with a verified certificate available for purchase. It has a 4.9-star weighted average rating over 214 reviews. Python for Data Science and Machine Learning Bootcamp (Jose Portilla/Udemy): Has large chunks of machine learning content, but covers the whole data science process. More of a very detailed intro to Python. Amazing course, though not ideal for the scope of this guide. 21.5 hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.6-star weighted average rating over 3316 reviews. Data Science and Machine Learning Bootcamp with R (Jose Portilla/Udemy): The comments for Portilla’s above course apply here as well, except for R. 17.5 hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.6-star weighted average rating over 1317 reviews. Machine Learning Series (Lazy Programmer Inc./Udemy): Taught by a data scientist/big data engineer/full stack software engineer with an impressive resume, Lazy Programmer currently has a series of 16 machine learning-focused courses on Udemy. In total, the courses have 5000+ ratings and almost all of them have 4.6 stars. A useful course ordering is provided in each individual course’s description. Uses Python. Cost varies depending on Udemy discounts, which are frequent. Machine Learning (Georgia Tech/Udacity): A compilation of what was three separate courses: Supervised, Unsupervised and Reinforcement Learning. Part of Udacity’s Machine Learning Engineer Nanodegree and Georgia Tech’s Online Master’s Degree (OMS). Bite-sized videos, as is Udacity’s style. Friendly professors. Estimated timeline of four months. Free. It has a 4.56-star weighted average rating over 9 reviews. Implementing Predictive Analytics with Spark in Azure HDInsight (Microsoft/edX): Introduces the core concepts of machine learning and a variety of algorithms. 
Leverages several big data-friendly tools, including Apache Spark, Scala, and Hadoop. Uses both Python and R. Four hours per week over six weeks. Free with a verified certificate available for purchase. It has a 4.5-star weighted average rating over 6 reviews. Data Science and Machine Learning with Python — Hands On! (Frank Kane/Udemy): Uses Python. Kane has nine years of experience at Amazon and IMDb. Nine hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.5-star weighted average rating over 4139 reviews. Scala and Spark for Big Data and Machine Learning (Jose Portilla/Udemy): “Big data” focus, specifically on implementation in Scala and Spark. Ten hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.5-star weighted average rating over 607 reviews. Machine Learning Engineer Nanodegree (Udacity): Udacity’s flagship Machine Learning program, which features a best-in-class project review system and career support. The program is a compilation of several individual Udacity courses, which are free. Co-created by Kaggle. Estimated timeline of six months. Currently costs $199 USD per month with a 50% tuition refund available for those who graduate within 12 months. It has a 4.5-star weighted average rating over 2 reviews. Learning From Data (Introductory Machine Learning) (California Institute of Technology/edX): Enrollment is currently closed on edX, but is also available via CalTech’s independent platform (see below). It has a 4.49-star weighted average rating over 42 reviews. Learning From Data (Introductory Machine Learning) (Yaser Abu-Mostafa/California Institute of Technology): “A real Caltech course, not a watered-down version.” Reviews note it is excellent for understanding machine learning theory. The professor, Yaser Abu-Mostafa, is popular among students and also wrote the textbook upon which this course is based. Videos are taped lectures (with lectures slides picture-in-picture) uploaded to YouTube. Homework assignments are .pdf files. The course experience for online students isn’t as polished as the top three recommendations. It has a 4.43-star weighted average rating over 7 reviews. Mining Massive Datasets (Stanford University): Machine learning with a focus on “big data.” Introduces modern distributed file systems and MapReduce. Ten hours per week over seven weeks. Free. It has a 4.4-star weighted average rating over 30 reviews. AWS Machine Learning: A Complete Guide With Python (Chandra Lingam/Udemy): A unique focus on cloud-based machine learning and specifically Amazon Web Services. Uses Python. Nine hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.4-star weighted average rating over 62 reviews. Introduction to Machine Learning & Face Detection in Python (Holczer Balazs/Udemy): Uses Python. Eight hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.4-star weighted average rating over 162 reviews. StatLearning: Statistical Learning (Stanford University): Based on the excellent textbook, “An Introduction to Statistical Learning, with Applications in R” and taught by the professors who wrote it. Reviewers note that the MOOC isn’t as good as the book, citing “thin” exercises and mediocre videos. Five hours per week over nine weeks. Free. It has a 4.35-star weighted average rating over 84 reviews. 
Machine Learning Specialization (University of Washington/Coursera): Great courses, but last two classes (including the capstone project) were canceled. Reviewers note that this series is more digestable (read: easier for those without strong technical backgrounds) than other top machine learning courses (e.g. Stanford’s or Caltech’s). Be aware that the series is incomplete with recommender systems, deep learning, and a summary missing. Free and paid options available. It has a 4.31-star weighted average rating over 80 reviews. From 0 to 1: Machine Learning, NLP & Python-Cut to the Chase (Loony Corn/Udemy): “A down-to-earth, shy but confident take on machine learning techniques.” Taught by four-person team with decades of industry experience together. Uses Python. Cost varies depending on Udemy discounts, which are frequent. It has a 4.2-star weighted average rating over 494 reviews. Principles of Machine Learning (Microsoft/edX): Uses R, Python, and Microsoft Azure Machine Learning. Part of the Microsoft Professional Program Certificate in Data Science. Three to four hours per week over six weeks. Free with a verified certificate available for purchase. It has a 4.09-star weighted average rating over 11 reviews. Big Data: Statistical Inference and Machine Learning (Queensland University of Technology/FutureLearn): A nice, brief exploratory machine learning course with a focus on big data. Covers a few tools like R, H2O Flow, and WEKA. Only three weeks in duration at a recommended two hours per week, but one reviewer noted that six hours per week would be more appropriate. Free and paid options available. It has a 4-star weighted average rating over 4 reviews. Genomic Data Science and Clustering (Bioinformatics V) (University of California, San Diego/Coursera): For those interested in the intersection of computer science and biology and how it represents an important frontier in modern science. Focuses on clustering and dimensionality reduction. Part of UCSD’s Bioinformatics Specialization. Free and paid options available. It has a 4-star weighted average rating over 3 reviews. Intro to Machine Learning (Udacity): Prioritizes topic breadth and practical tools (in Python) over depth and theory. The instructors, Sebastian Thrun and Katie Malone, make this class so fun. Consists of bite-sized videos and quizzes followed by a mini-project for each lesson. Currently part of Udacity’s Data Analyst Nanodegree. Estimated timeline of ten weeks. Free. It has a 3.95-star weighted average rating over 19 reviews. Machine Learning for Data Analysis (Wesleyan University/Coursera): A brief intro machine learning and a few select algorithms. Covers decision trees, random forests, lasso regression, and k-means clustering. Part of Wesleyan’s Data Analysis and Interpretation Specialization. Estimated timeline of four weeks. Free and paid options available. It has a 3.6-star weighted average rating over 5 reviews. Programming with Python for Data Science (Microsoft/edX): Produced by Microsoft in partnership with Coding Dojo. Uses Python. Eight hours per week over six weeks. Free and paid options available. It has a 3.46-star weighted average rating over 37 reviews. Machine Learning for Trading (Georgia Tech/Udacity): Focuses on applying probabilistic machine learning approaches to trading decisions. Uses Python. Part of Udacity’s Machine Learning Engineer Nanodegree and Georgia Tech’s Online Master’s Degree (OMS). Estimated timeline of four months. Free. 
It has a 3.29-star weighted average rating over 14 reviews. Practical Machine Learning (Johns Hopkins University/Coursera): A brief, practical introduction to a number of machine learning algorithms. Several one/two-star reviews expressing a variety of concerns. Part of JHU’s Data Science Specialization. Four to nine hours per week over four weeks. Free and paid options available. It has a 3.11-star weighted average rating over 37 reviews. Machine Learning for Data Science and Analytics (Columbia University/edX): Introduces a wide range of machine learning topics. Some passionate negative reviews with concerns including content choices, a lack of programming assignments, and uninspiring presentation. Seven to ten hours per week over five weeks. Free with a verified certificate available for purchase. It has a 2.74-star weighted average rating over 36 reviews. Recommender Systems Specialization (University of Minnesota/Coursera): Strong focus one specific type of machine learning — recommender systems. A four course specialization plus a capstone project, which is a case study. Taught using LensKit (an open-source toolkit for recommender systems). Free and paid options available. It has a 2-star weighted average rating over 2 reviews. Machine Learning With Big Data (University of California, San Diego/Coursera): Terrible reviews that highlight poor instruction and evaluation. Some noted it took them mere hours to complete the whole course. Part of UCSD’s Big Data Specialization. Free and paid options available. It has a 1.86-star weighted average rating over 14 reviews. Practical Predictive Analytics: Models and Methods (University of Washington/Coursera): A brief intro to core machine learning concepts. One reviewer noted that there was a lack of quizzes and that the assignments were not challenging. Part of UW’s Data Science at Scale Specialization. Six to eight hours per week over four weeks. Free and paid options available. It has a 1.75-star weighted average rating over 4 reviews. The following courses had one or no reviews as of May 2017. Machine Learning for Musicians and Artists (Goldsmiths, University of London/Kadenze): Unique. Students learn algorithms, software tools, and machine learning best practices to make sense of human gesture, musical audio, and other real-time data. Seven sessions in length. Audit (free) and premium ($10 USD per month) options available. It has one 5-star review. Applied Machine Learning in Python (University of Michigan/Coursera): Taught using Python and the scikit learn toolkit. Part of the Applied Data Science with Python Specialization. Scheduled to start May 29th. Free and paid options available. Applied Machine Learning (Microsoft/edX): Taught using various tools, including Python, R, and Microsoft Azure Machine Learning (note: Microsoft produces the course). Includes hands-on labs to reinforce the lecture content. Three to four hours per week over six weeks. Free with a verified certificate available for purchase. Machine Learning with Python (Big Data University): Taught using Python. Targeted towards beginners. Estimated completion time of four hours. Big Data University is affiliated with IBM. Free. Machine Learning with Apache SystemML (Big Data University): Taught using Apache SystemML, which is a declarative style language designed for large-scale machine learning. Estimated completion time of eight hours. Big Data University is affiliated with IBM. Free. 
Machine Learning for Data Science (University of California, San Diego/edX): Doesn’t launch until January 2018. Programming examples and assignments are in Python, using Jupyter notebooks. Eight hours per week over ten weeks. Free with a verified certificate available for purchase. Introduction to Analytics Modeling (Georgia Tech/edX): The course advertises R as its primary programming tool. Five to ten hours per week over ten weeks. Free with a verified certificate available for purchase. Predictive Analytics: Gaining Insights from Big Data (Queensland University of Technology/FutureLearn): Brief overview of a few algorithms. Uses Hewlett Packard Enterprise’s Vertica Analytics platform as an applied tool. Start date to be announced. Two hours per week over four weeks. Free with a Certificate of Achievement available for purchase. Introducción al Machine Learning (Universitas Telefónica/Miríada X): Taught in Spanish. An introduction to machine learning that covers supervised and unsupervised learning. A total of twenty estimated hours over four weeks. Machine Learning Path Step (Dataquest): Taught in Python using Dataquest’s interactive in-browser platform. Multiple guided projects and a “plus” project where you build your own machine learning system using your own data. Subscription required. The following six courses are offered by DataCamp. DataCamp’s hybrid teaching style leverages video and text-based instruction with lots of examples through an in-browser code editor. A subscription is required for full access to each course. Introduction to Machine Learning (DataCamp): Covers classification, regression, and clustering algorithms. Uses R. Fifteen videos and 81 exercises with an estimated timeline of six hours. Supervised Learning with scikit-learn (DataCamp): Uses Python and scikit-learn. Covers classification and regression algorithms. Seventeen videos and 54 exercises with an estimated timeline of four hours. Unsupervised Learning in R (DataCamp): Provides a basic introduction to clustering and dimensionality reduction in R. Sixteen videos and 49 exercises with an estimated timeline of four hours. Machine Learning Toolbox (DataCamp): Teaches the “big ideas” in machine learning. Uses R. 24 videos and 88 exercises with an estimated timeline of four hours. Machine Learning with the Experts: School Budgets (DataCamp): A case study from a machine learning competition on DrivenData. Involves building a model to automatically classify items in a school’s budget. DataCamp’s “Supervised Learning with scikit-learn” is a prerequisite. Fifteen videos and 51 exercises with an estimated timeline of four hours. Unsupervised Learning in Python (DataCamp): Covers a variety of unsupervised learning algorithms using Python, scikit-learn, and scipy. The course ends with students building a recommender system to recommend popular musical artists. Thirteen videos and 52 exercises with an estimated timeline of four hours. Machine Learning (Tom Mitchell/Carnegie Mellon University): Carnegie Mellon’s graduate introductory machine learning course. A prerequisite to their second graduate level course, “Statistical Machine Learning.” Taped university lectures with practice problems, homework assignments, and a midterm (all with solutions) posted online. A 2011 version of the course also exists. CMU is one of the best graduate schools for studying machine learning and has a whole department dedicated to ML. Free. 
Statistical Machine Learning (Larry Wasserman/Carnegie Mellon University): Likely the most advanced course in this guide. A follow-up to Carnegie Mellon’s Machine Learning course. Taped university lectures with practice problems, homework assignments, and a midterm (all with solutions) posted online. Free. Undergraduate Machine Learning (Nando de Freitas/University of British Columbia): An undergraduate machine learning course. Lectures are filmed and put on YouTube with the slides posted on the course website. The course assignments are posted as well (no solutions, though). de Freitas is now a full-time professor at the University of Oxford and receives praise for his teaching abilities in various forums. Graduate version available (see below). Machine Learning (Nando de Freitas/University of British Columbia): A graduate machine learning course. The comments in de Freitas’ undergraduate course (above) apply here as well. This is the fifth of a six-piece series that covers the best online courses for launching yourself into the data science field. We covered programming in the first article, statistics and probability in the second article, intros to data science in the third article, and data visualization in the fourth. The final piece will be a summary of those articles, plus the best online courses for other key topics such as data wrangling, databases, and even software engineering. If you’re looking for a complete list of Data Science online courses, you can find them on Class Central’s Data Science and Big Data subject page. If you enjoyed reading this, check out some of Class Central’s other pieces: If you have suggestions for courses I missed, let me know in the responses! If you found this helpful, click the 💚 so more people will see it here on Medium. This is a condensed version of my original article published on Class Central, where I’ve included detailed course syllabi. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Curriculum Lead, Projects @ DataCamp. I created my own data science master’s program. Our community publishes stories worth reading on development, design, and data science.
Vishal Maini
32K
10
https://medium.com/machine-learning-for-humans/why-machine-learning-matters-6164faf1df12?source=tag_archive---------2----------------
A Beginner’s Guide to AI/ML 🤖👶 – Machine Learning for Humans – Medium
Part 1: Why Machine Learning Matters. The big picture of artificial intelligence and machine learning — past, present, and future. Part 2.1: Supervised Learning. Learning with an answer key. Introducing linear regression, loss functions, overfitting, and gradient descent. Part 2.2: Supervised Learning II. Two methods of classification: logistic regression and SVMs. Part 2.3: Supervised Learning III. Non-parametric learners: k-nearest neighbors, decision trees, random forests. Introducing cross-validation, hyperparameter tuning, and ensemble models. Part 3: Unsupervised Learning. Clustering: k-means, hierarchical. Dimensionality reduction: principal components analysis (PCA), singular value decomposition (SVD). Part 4: Neural Networks & Deep Learning. Why, where, and how deep learning works. Drawing inspiration from the brain. Convolutional neural networks (CNNs), recurrent neural networks (RNNs). Real-world applications. Part 5: Reinforcement Learning. Exploration and exploitation. Markov decision processes. Q-learning, policy learning, and deep reinforcement learning. The value learning problem. Appendix: The Best Machine Learning Resources. A curated list of resources for creating your machine learning curriculum. This guide is intended to be accessible to anyone. Basic concepts in probability, statistics, programming, linear algebra, and calculus will be discussed, but it isn’t necessary to have prior knowledge of them to gain value from this series. Artificial intelligence will shape our future more powerfully than any other innovation this century. Anyone who does not understand it will soon find themselves feeling left behind, waking up in a world full of technology that feels more and more like magic. The rate of acceleration is already astounding. After a couple of AI winters and periods of false hope over the past four decades, rapid advances in data storage and computer processing power have dramatically changed the game in recent years. In 2015, Google trained a conversational agent (AI) that could not only convincingly interact with humans as a tech support helpdesk, but also discuss morality, express opinions, and answer general facts-based questions. The same year, DeepMind developed an agent that surpassed human-level performance at 49 Atari games, receiving only the pixels and game score as inputs. Soon after, in 2016, DeepMind obsoleted their own achievement by releasing a new state-of-the-art gameplay method called A3C. Meanwhile, AlphaGo defeated one of the best human players at Go — an extraordinary achievement in a game dominated by humans for two decades after machines first conquered chess. Many masters could not fathom how it would be possible for a machine to grasp the full nuance and complexity of this ancient Chinese war strategy game, with its 10170 possible board positions (there are only 1080atoms in the universe). In March 2017, OpenAI created agents that invented their own language to cooperate and more effectively achieve their goal. Soon after, Facebook reportedly successfully training agents to negotiate and even lie. Just a few days ago (as of this writing), on August 11, 2017, OpenAI reached yet another incredible milestone by defeating the world’s top professionals in 1v1 matches of the online multiplayer game Dota 2. Much of our day-to-day technology is powered by artificial intelligence. Point your camera at the menu during your next trip to Taiwan and the restaurant’s selections will magically appear in English via the Google Translate app. 
Today AI is used to design evidence-based treatment plans for cancer patients, instantly analyze results from medical tests to escalate to the appropriate specialist immediately, and conduct scientific research for drug discovery. In everyday life, it’s increasingly commonplace to discover machines in roles traditionally occupied by humans. Really, don’t be surprised if a little housekeeping delivery bot shows up instead of a human next time you call the hotel desk to send up some toothpaste. In this series, we’ll explore the core machine learning concepts behind these technologies. By the end, you should be able to describe how they work at a conceptual level and be equipped with the tools to start building similar applications yourself. Artificial intelligence is the study of agents that perceive the world around them, form plans, and make decisions to achieve their goals. Its foundations include mathematics, logic, philosophy, probability, linguistics, neuroscience, and decision theory. Many fields fall under the umbrella of AI, such as computer vision, robotics, machine learning, and natural language processing. Machine learning is a subfield of artificial intelligence. Its goal is to enable computers to learn on their own. A machine’s learning algorithm enables it to identify patterns in observed data, build models that explain the world, and predict things without having explicit pre-programmed rules and models. The technologies discussed above are examples of artificial narrow intelligence (ANI), which can effectively perform a narrowly defined task. Meanwhile, we’re continuing to make foundational advances towards human-level artificial general intelligence (AGI), also known as strong AI. The definition of an AGI is an artificial intelligence that can successfully perform any intellectual task that a human being can, including learning, planning and decision-making under uncertainty, communicating in natural language, making jokes, manipulating people, trading stocks, or... reprogramming itself. And this last one is a big deal. Once we create an AI that can improve itself, it will unlock a cycle of recursive self-improvement that could lead to an intelligence explosion over some unknown time period, ranging from many decades to a single day. You may have heard this point referred to as the singularity. The term is borrowed from the gravitational singularity that occurs at the center of a black hole, an infinitely dense one-dimensional point where the laws of physics as we understand them start to break down. A recent report by the Future of Humanity Institute surveyed a panel of AI researchers on timelines for AGI, and found that “researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years” (Grace et al, 2017). We’ve personally spoken with a number of sane and reasonable AI practitioners who predict much longer timelines (the upper limit being “never”), and others whose timelines are alarmingly short — as little as a few years. The advent of greater-than-human-level artificial superintelligence (ASI) could be one of the best or worst things to happen to our species. It carries with it the immense challenge of specifying what AIs will want in a way that is friendly to humans. While it’s impossible to say what the future holds, one thing is certain: 2017 is a good time to start understanding how machines think. 
To go beyond the abstractions of a philosopher in an armchair and intelligently shape our roadmaps and policies with respect to AI, we must engage with the details of how machines see the world — what they “want”, their potential biases and failure modes, their temperamental quirks — just as we study psychology and neuroscience to understand how humans learn, decide, act, and feel. Machine learning is at the core of our journey towards artificial general intelligence, and in the meantime, it will change every industry and have a massive impact on our day-to-day lives. That’s why we believe it’s worth understanding machine learning, at least at a conceptual level — and we designed this series to be the best place to start. You don’t necessarily need to read the series cover-to-cover to get value out of it. Here are three suggestions on how to approach it, depending on your interests and how much time you have: Vishal most recently led growth at Upstart, a lending platform that utilizes machine learning to price credit, automate the borrowing process, and acquire users. He spends his time thinking about startups, applied cognitive science, moral philosophy, and the ethics of artificial intelligence. Samer is a Master’s student in Computer Science and Engineering at UCSD and co-founder of Conigo Labs. Prior to grad school, he founded TableScribe, a business intelligence tool for SMBs, and spent two years advising Fortune 100 companies at McKinsey. Samer previously studied Computer Science and Ethics, Politics, and Economics at Yale. Most of this series was written during a 10-day trip to the United Kingdom in a frantic blur of trains, planes, cafes, pubs and wherever else we could find a dry place to sit. Our aim was to solidify our own understanding of artificial intelligence, machine learning, and how the methods therein fit together — and hopefully create something worth sharing in the process. And now, without further ado, let’s dive into machine learning with Part 2.1: Supervised Learning! More from Machine Learning for Humans 🤖👶 A special thanks to Jonathan Eng, Edoardo Conti, Grant Schneider, Sunny Kumar, Stephanie He, Tarun Wadhwa, and Sachin Maini (series editor) for their significant contributions and feedback. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Research comms @DeepMindAI. Previously @Upstart, @Yale, @TrueVenturesTEC. Demystifying artificial intelligence & machine learning. Discussions on safe and intentional application of AI for positive social impact.
Tim Anglade
7K
23
https://medium.com/@timanglade/how-hbos-silicon-valley-built-not-hotdog-with-mobile-tensorflow-keras-react-native-ef03260747f3?source=tag_archive---------3----------------
How HBO’s Silicon Valley built “Not Hotdog” with mobile TensorFlow, Keras & React Native
The HBO show Silicon Valley released a real AI app that identifies hotdogs — and not hotdogs — like the one shown on season 4’s 4th episode (the app is now available on Android as well as iOS!) To achieve this, we designed a bespoke neural architecture that runs directly on your phone, and trained it with Tensorflow, Keras & Nvidia GPUs. While the use-case is farcical, the app is an approachable example of both deep learning, and edge computing. All AI work is powered 100% by the user’s device, and images are processed without ever leaving their phone. This provides users with a snappier experience (no round trip to the cloud), offline availability, and better privacy. This also allows us to run the app at a cost of $0, even under the load of a million users, providing significant savings compared to traditional cloud-based AI approaches. The app was developed in-house by the show, by a single developer, running on a single laptop & attached GPU, using hand-curated data. In that respect, it may provide a sense of what can be achieved today, with a limited amount of time & resources, by non-technical companies, individual developers, and hobbyists alike. In that spirit, this article attempts to give a detailed overview of steps involved to help others build their own apps. If you haven’t seen the show or tried the app (you should!), the app lets you snap a picture and then tells you whether it thinks that image is of a hotdog or not. It’s a straightforward use-case, that pays homage to recent AI research and applications, in particular ImageNet. While we’ve probably dedicated more engineering resources to recognizing hotdogs than anyone else, the app still fails in horrible and/or subtle ways. Conversely, it’s also sometimes able to recognize hotdogs in complex situations... According to Engadget, “It’s incredible. I’ve had more success identifying food with the app in 20 minutes than I have had tagging and identifying songs with Shazam in the past two years.” Have you ever found yourself reading Hacker News, thinking “they raised a 10M series A for that? I could build it in one weekend!” This app probably feels a lot like that, and the initial prototype was indeed built in a single weekend using Google Cloud Platform’s Vision API, and React Native. But the final app we ended up releasing on the app store required months of additional (part-time) work, to deliver meaningful improvements that would be difficult for an outsider to appreciate. We spent weeks optimizing overall accuracy, training time, inference time, iterating on our setup & tooling so we could have a faster development iterations, and spent a whole weekend optimizing the user experience around iOS & Android permissions (don’t even get me started on that one). All too often technical blog posts or academic papers skip over this part, preferring to present the final chosen solution. In the interest of helping others learn from our mistake & choices, we will present an abridged view of the approaches that didn’t work for us, before we describe the final architecture we ended up shipping in the next section. We chose React Native to build the prototype as it would give us an easy sandbox to experiment with, and would help us quickly support many devices. The experience ended up being a good one and we kept React Native for the remainder of the project: it didn’t always make things easy, and the design for the app was purposefully limited, but in the end React Native got the job done. 
The other main component we used for the prototype — Google Cloud’s Vision API — was quickly abandoned. There were 3 main factors. For these reasons, we started experimenting with what’s trendily called “edge computing”, which for our purposes meant that after training our neural network on our laptop, we would export it and embed it directly into our mobile app, so that the neural network execution phase (or inference) would run directly inside the user’s phone. Through a chance encounter with Pete Warden of the TensorFlow team, we had become aware of TensorFlow’s ability to run directly embedded on an iOS device, and started exploring that path. After React Native, TensorFlow became the second fixed part of our stack. It only took a day of work to integrate TensorFlow’s Objective-C++ camera example into our React Native shell. It took slightly longer to use their transfer learning script, which helps you retrain the Inception architecture to deal with a more specific image problem. Inception is the name of a family of neural architectures built by Google to deal with image recognition problems. Inception is available “pre-trained”, which means the training phase has been completed and the weights are set. Most often, image recognition networks have been trained on ImageNet, a dataset containing over 20,000 different types of objects (hotdogs are one of them). However, much like Google Cloud’s Vision API, ImageNet training rewards breadth as much as depth here, and out-of-the-box accuracy on a single one of the 20,000+ categories can be lacking. As such, retraining (also called “transfer learning”) aims to take a fully-trained neural net and retrain it to perform better on the specific problem you’d like to handle. This usually involves some degree of “forgetting”, either by excising entire layers from the stack, or by slowly erasing the network’s ability to distinguish a type of object (e.g. chairs) in favor of better accuracy at recognizing the one you care about (i.e. hotdogs). While the network (Inception in this case) may have been trained on the 14M images contained in ImageNet, we were able to retrain it on just a few thousand hotdog images to get drastically enhanced hotdog recognition. The big advantage of transfer learning is that you get better results much faster, and with less data, than if you trained from scratch. A full training might take months on multiple GPUs and require millions of images, while retraining can conceivably be done in hours on a laptop with a couple thousand images. One of the biggest challenges we encountered was understanding exactly what should count as a hotdog and what should not. Defining what a “hotdog” is ends up being surprisingly difficult (do cut-up sausages count, and if so, which kinds?) and subject to cultural interpretation. Similarly, the “open world” nature of our problem meant we had to deal with an almost infinite number of inputs. While certain computer-vision problems have relatively limited inputs (say, x-rays of bolts with or without a mechanical defect), we had to prepare the app to be fed selfies, nature shots and any number of foods. Suffice it to say, this approach was promising and did lead to some improved results; however, it had to be abandoned for a couple of reasons. First, the nature of our problem meant a strong imbalance in the training data: there are many more examples of things that are not hotdogs than there are of hotdogs.
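Before returning to that imbalance problem, here is a rough sketch of what this kind of retraining looks like in Keras terms. It is an illustration only (at this stage the app actually relied on TensorFlow's stock retrain script rather than code like this), and the head sizes are arbitrary:

```python
# Minimal transfer-learning sketch: reuse an ImageNet-trained Inception as a
# feature extractor and learn only a small hotdog-specific head on top.
# Illustrative code, not the app's actual training pipeline.
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

# Inception pre-trained on ImageNet, without its 1000-class classification head.
base = InceptionV3(weights="imagenet", include_top=False)

# Freeze the pre-trained layers so only the new head is trained at first.
for layer in base.layers:
    layer.trainable = False

# A small head for the binary hotdog / not-hotdog decision.
x = GlobalAveragePooling2D()(base.output)
x = Dense(128, activation="relu")(x)
predictions = Dense(1, activation="sigmoid")(x)

model = Model(inputs=base.input, outputs=predictions)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(...) would then run on the few thousand curated hotdog images.
```

The idea is the same as with the retrain script: keep the expensive ImageNet-learned features, and only learn the cheap hotdog-specific part.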
In practice this means that if you train your algorithm on 3 hotdog images and 97 non-hotdog images, and it recognizes 0% of the former but 100% of the latter, it will still score 97% accuracy by default! This was not straightforward to solve out of the box using TensorFlow’s retrain tool, and basically necessitated setting up a deep learning model from scratch, importing weights, and training in a more controlled manner. At this point we decided to bite the bullet and get something started with Keras, a deep learning library that provides nicer, easier-to-use abstractions on top of TensorFlow, including pretty awesome training tools, and a class_weights option, which is ideal for the sort of dataset imbalance we were dealing with. We used that opportunity to try other popular neural architectures like VGG, but one problem remained. None of them could comfortably fit on an iPhone. They consumed too much memory, which led to app crashes, and would sometimes take up to 10 seconds to compute, which was not ideal from a UX standpoint. Many things were attempted to mitigate that, but in the end these architectures were just too big to run efficiently on mobile. To give you a sense of the timeline, this was roughly the mid-way point of the project. By that time, the UI was 90%+ done and very little of it was going to change. But in hindsight, the neural net was at best 20% done. We had a good sense of the challenges & a good dataset, but 0 lines of the final neural architecture had been written, none of our neural code could reliably run on mobile, and even our accuracy was going to improve drastically in the weeks to come. The problem directly ahead of us was simple: if Inception and VGG were too big, was there a simpler, pre-trained neural network we could retrain? At the suggestion of the always excellent Jeremy P. Howard (where has that guy been all our life?), we explored Xception, Enet and SqueezeNet. We quickly settled on SqueezeNet due to its explicit positioning as a solution for embedded deep learning, and the availability of a pre-trained Keras model on GitHub (yay open-source). So how big of a difference does this make? An architecture like VGG uses about 138 million parameters (essentially the number of numbers necessary to model the neurons and values between them). Inception is already a massive improvement, requiring only 23 million parameters. SqueezeNet, in comparison, requires only 1.25 million. This has two advantages: There are tradeoffs of course: During this phase, we started experimenting with tuning the neural network architecture. In particular, we started using Batch Normalization and trying different activation functions. After adding Batch Normalization and ELU to SqueezeNet, we were able to train neural networks that achieved 90%+ accuracy when training from scratch; however, they were relatively brittle, meaning the same network would overfit in some cases, or underfit in others, when confronted with real-life testing. Even adding more examples to the dataset and playing with data augmentation failed to deliver a network that met expectations. So while this phase was promising, and for the first time gave us a functioning app that could work entirely on an iPhone, in less than a second, we eventually moved to our 4th & final architecture. Our final architecture was spurred in large part by the publication on April 17 of Google’s MobileNets paper, promising a new neural architecture with Inception-like accuracy on simple problems like ours, with only 4M or so parameters.
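As a concrete aside before we get to that final architecture, here is what the class-imbalance fix mentioned above can look like in Keras. Everything in this snippet is made up for illustration (random features, a toy model); it is not the app's real data or code, but it shows the class_weight mechanism at work:

```python
# Toy illustration of the imbalance problem and Keras' class_weight option.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# 97 "not hotdog" (label 0) and 3 "hotdog" (label 1) made-up feature vectors.
x = np.random.rand(100, 64)
y = np.array([0] * 97 + [1] * 3)

model = Sequential([Dense(16, activation="relu", input_shape=(64,)),
                    Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Without class weights, predicting "not hotdog" for everything already scores
# ~97% accuracy. Weighting the rare class by 97/3 makes the 3 hotdog examples
# count as much in the loss as the 97 not-hotdogs combined.
model.fit(x, y, epochs=5, class_weight={0: 1.0, 1: 97.0 / 3.0}, verbose=0)
```

With that weighting, the three hotdog examples together carry as much weight in the loss as the 97 non-hotdogs combined, so the network can no longer "win" by always predicting not-hotdog.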
This meant it sat in an interesting sweet spot between a SqueezeNet that had maybe been overly simplistic for our purposes, and the possibly overwrought elephant-trying-to-squeeze-in-a-tutu of using Inception or VGG on mobile. The paper introduced some capacity to tune the size & complexity of the network specifically to trade memory/CPU consumption against accuracy, which was very much top of mind for us at the time. With less than a month to go before the app had to launch, we endeavored to reproduce the paper’s results. This was entirely anticlimactic, as within a day of the paper being published a Keras implementation was already offered publicly on GitHub by Refik Can Malli, a student at Istanbul Technical University, whose work we had already benefitted from when we took inspiration from his excellent Keras SqueezeNet implementation. The depth & openness of the deep learning community, and the presence of talented minds like R.C., are what make deep learning viable for applications today — but they also make working in this field more thrilling than any tech trend we’ve been involved with. Our final architecture ended up making significant departures from the MobileNets architecture or from convention, in particular. So how does this stack work exactly? Deep Learning often gets a bad rap for being a “black box”, and while it’s true many components of it can be mysterious, the networks we use often leak information about how some of their magic works. We can look at the layers of this stack and how they activate on specific input images, giving us a sense of each layer’s ability to recognize sausage, buns, or other particularly salient hotdog features. Data quality was of the utmost importance. A neural network can only be as good as the data that trained it, and improving training set quality was probably one of the top 3 things we spent time on during this project. The key things we did to improve this were: The final composition of our dataset was 150k images, of which only 3k were hotdogs: there are only so many hotdogs you can look at, but there are many not-hotdogs to look at. The 49:1 imbalance was dealt with by setting a Keras class weight of 49:1 in favor of hotdogs. Of the remaining 147k images, most were of food, with just 3k photos of non-food items, to help the network generalize a bit more and not get tricked into seeing a hotdog if presented with an image of a human in a red outfit. Our data augmentation rules were as follows: These numbers were derived intuitively, based on experiments and our understanding of the real-life usage of our app, as opposed to careful experimentation. The final key to our data pipeline was using Patrick Rodriguez’s multiprocess image data generator for Keras. While Keras does have a built-in multi-threaded and multiprocess implementation, we found Patrick’s library to be consistently faster in our experiments, for reasons we did not have time to investigate. This library cut our training time to a third of what it used to be. The network was trained using a 2015 MacBook Pro and an attached external GPU (eGPU), specifically an Nvidia GTX 980 Ti (we’d probably buy a 1080 Ti if we were starting today). We were able to train the network on batches of 128 images at a time. The network was trained for a total of 240 epochs, meaning we ran all 150k images through the network 240 times. This took about 80 hours.
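To make the pipeline described above a bit more tangible, here is a hedged Keras sketch of augmentation plus the 49:1 class weighting. The folder layout and every augmentation value are placeholders chosen for illustration; the post does not spell out the actual rules, and this is not the app's real code:

```python
# Illustrative Keras data pipeline: on-the-fly augmentation plus class weights.
from keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(
    rescale=1.0 / 255,       # normalize pixel values to [0, 1]
    rotation_range=20,       # hypothetical: small rotations
    width_shift_range=0.1,   # hypothetical: slight horizontal shifts
    height_shift_range=0.1,  # hypothetical: slight vertical shifts
    zoom_range=0.2,          # hypothetical: mild zooming
    horizontal_flip=True,    # hypothetical: mirror images left/right
).flow_from_directory(
    "dataset/train",         # hypothetical layout: one subfolder per class
    target_size=(224, 224),
    batch_size=128,
    class_mode="binary",
)

# model.fit_generator(train_gen, epochs=240,
#                     class_weight={0: 1.0, 1: 49.0})  # 0 = not hotdog, 1 = hotdog
```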
We trained the network in 3 phases. While learning rates were identified by running the linear experiment recommended by the CLR paper, they seem to intuitively make sense, in that the max for each phase is within a factor of 2 of the previous minimum, which is aligned with the industry-standard recommendation of halving your learning rate if your accuracy plateaus during training. In the interest of time we performed some training runs on a Paperspace P5000 instance running Ubuntu. In those cases, we were able to double the batch size, and found that the optimal learning rates for each phase were roughly double as well. Even having designed a relatively compact neural architecture, and having trained it to handle situations it may find in a mobile context, we had a lot of work left to make it run properly. Trying to run a top-of-the-line neural net architecture out of the box can quickly burn hundreds of megabytes of RAM, which few mobile devices can spare today. Beyond network optimizations, it turns out the way you handle images or even load TensorFlow itself can have a huge impact on how quickly your network runs, how little RAM it uses, and how crash-free the experience will be for your users. This was maybe the most mysterious part of this project. Relatively little information can be found about it, possibly due to the dearth of production deep learning applications running on mobile devices as of today. However, we must commend the TensorFlow team, and particularly Pete Warden, Andrew Harp and Chad Whipkey, for the existing documentation and their kindness in answering our inquiries. As an alternative to using TensorFlow on iOS, we looked at using Apple’s built-in deep learning libraries (BNNS, MPSCNN and, later on, CoreML). We would have designed the network in Keras, trained it with TensorFlow, exported all the weight values, re-implemented the network with BNNS or MPSCNN (or imported it via CoreML), and loaded the parameters into that new implementation. However, the biggest obstacle was that these new Apple libraries are only available on iOS 10+, and we wanted to support older versions of iOS. As iOS 10+ adoption and these frameworks continue to improve, there may not be a case for using TensorFlow on device in the near future. If you think injecting JavaScript into your app on the fly is cool, try injecting neural nets into your app! The last production trick we used was to leverage CodePush and Apple’s relatively permissive terms of service to live-inject new versions of our neural networks after submission to the app store. While this was mostly done to help us quickly deliver accuracy improvements to our users after release, you could conceivably use this approach to drastically expand or alter the feature set of your app without going through an app store review again. There are a lot of things that didn’t work or that we didn’t have time to do, and these are the ideas we’d investigate in the future: Finally, we’d be remiss not to mention the obvious and important influence of User Experience, Developer Experience and built-in biases in developing an AI app. Each probably deserves its own post (or its own book), but here are the very concrete impacts of these 3 things in our experience. UX (User Experience) is arguably more critical at every stage of the development of an AI app than for a traditional application.
There are no Deep Learning algorithms that will give you perfect results right now, but there are many situations where the right mix of Deep Learning + UX will lead to results that are indistinguishable from perfect. Proper UX expectations are irreplaceable when it comes to setting developers on the right path to design their neural networks, setting the proper expectations for users when they use the app, and gracefully handling the inevitable AI failures. Building AI apps without a UX-first mindset is like training a neural net without Stochastic Gradient Descent: you will end up stuck in the local minima of the Uncanny Valley on your way to building the perfect AI use-case. DX (Developer Experience) is extremely important as well, because deep learning training time is the new horsing around while waiting for your program to compile. We suggest you heavily favor DX first (hence Keras), as it’s always possible to optimize runtime for later runs (manual GPU parallelization, multi-process data augmentation, TensorFlow pipeline, even re-implementing for caffe2 / pyTorch). Even projects with relatively obtuse APIs & documentation like TensorFlow greatly improve DX by providing a highly-tested, highly-used, well-maintained environment for training & running neural networks. For the same reason, it’s hard to beat both the cost and the flexibility of having your own local GPU for development. Being able to look at / edit images locally, and to edit code with your preferred tool without delays, greatly improves the development quality & speed of building AI projects. Most AI apps will hit more critical cultural biases than ours, but as an example, even our straightforward use-case caught us flat-footed with built-in biases in our initial dataset, which made the app unable to recognize French-style hotdogs, Asian hotdogs, and more oddities we did not have immediate personal experience with. It’s critical to remember that AIs do not make “better” decisions than humans — they are infected by the same human biases we fall prey to, via the training sets humans provide. Thanks to: Mike Judge, Alec Berg, Clay Tarver, Todd Silverstein, Jonathan Dotan, Lisa Schomas, Amy Solomon, Dorothy Street & Rich Toyon, and all the writers of the show — the app would simply not exist without them. Meaghan, Dana, David, Jay, and everyone at HBO. Scale Venture Partners & GitLab. Rachel Thomas and Jeremy Howard & Fast AI for all that they have taught me, and for kindly reviewing a draft of this post. Check out their free online Deep Learning course, it’s awesome! JP Simard for his help on iOS. And finally, the TensorFlow team & r/MachineLearning for their help & inspiration. ... And thanks to everyone who used & shared the app! It made staring at pictures of hotdogs for months on end totally worth it 😅
Sophia Ciocca
53K
9
https://medium.com/s/story/spotifys-discover-weekly-how-machine-learning-finds-your-new-music-19a41ab76efe?source=tag_archive---------4----------------
How Does Spotify Know You So Well? – Member Feature Stories – Medium
A software engineer explains the science behind personalized music recommendations. This Monday — just like every Monday before it — over 100 million Spotify users found a fresh new playlist waiting for them called Discover Weekly. It’s a custom mixtape of 30 songs they’ve never listened to before but will probably love, and it’s pretty much magic. I’m a huge fan of Spotify, and particularly Discover Weekly. Why? It makes me feel seen. It knows my musical tastes better than any person in my entire life ever has, and I’m consistently delighted by how satisfyingly just right it is every week, with tracks I probably would never have found myself or known I would like. For those of you who live under a soundproof rock, let me introduce you to my virtual best friend: As it turns out, I’m not alone in my obsession with Discover Weekly. The user base goes crazy for it, which has driven Spotify to rethink its focus, and invest more resources into algorithm-based playlists. Ever since Discover Weekly debuted in 2015, I’ve been dying to know how it works. (What’s more, I’m a Spotify fangirl, so I sometimes like to pretend that I work there and research their products.) After three weeks of mad Googling, I feel like I’ve finally gotten a glimpse behind the curtain. So how does Spotify do such an amazing job of choosing those 30 songs for each person each week? Let’s zoom out for a second to look at how other music services have tackled music recommendations, and how Spotify’s doing it better. Back in the 2000s, Songza kicked off the online music curation scene using manual curation to create playlists for users. This meant that a team of “music experts” or other human curators would put together playlists that they just thought sounded good, and then users would listen to those playlists. (Later, Beats Music would employ this same strategy.) Manual curation worked alright, but it was based on that specific curator’s choices, and therefore couldn’t take into account each listener’s individual music taste. Like Songza, Pandora was also one of the original players in digital music curation. It employed a slightly more advanced approach, instead manually tagging attributes of songs. This meant a group of people listened to music, chose a bunch of descriptive words for each track, and tagged the tracks accordingly. Then, Pandora’s code could simply filter for certain tags to make playlists of similar-sounding music. Around that same time, a music intelligence agency from the MIT Media Lab called The Echo Nest was born, which took a radical, cutting-edge approach to personalized music. The Echo Nest used algorithms to analyze the audio and textual content of music, allowing it to perform music identification, personalized recommendation, playlist creation, and analysis. Finally, taking another approach is Last.fm, which still exists today and uses a process called collaborative filtering to identify music its users might like, but more on that in a moment. So if that’s how other music curation services have handled recommendations, how does Spotify’s magic engine run? How does it seem to nail individual users’ tastes so much more accurately than any of the other services? Spotify doesn’t actually use a single revolutionary recommendation model. Instead, they mix together some of the best strategies used by other services to create their own uniquely powerful discovery engine.
To create Discover Weekly, there are three main types of recommendation models that Spotify employs: Let’s dive into how each of these recommendation models works! First, some background: When people hear the words “collaborative filtering,” they generally think of Netflix, as it was one of the first companies to use this method to power a recommendation model, taking users’ star-based movie ratings to inform its understanding of which movies to recommend to other similar users. After Netflix was successful, the use of collaborative filtering spread quickly, and is now often the starting point for anyone trying to make a recommendation model. Unlike Netflix, Spotify doesn’t have a star-based system with which users rate their music. Instead, Spotify’s data is implicit feedback — specifically, the stream counts of the tracks and additional streaming data, such as whether a user saved the track to their own playlist, or visited the artist’s page after listening to a song. But what is collaborative filtering, truly, and how does it work? Here’s a high-level rundown. Picture two listeners side by side, each with their own track preferences: the one on the left likes tracks P, Q, R, and S, while the one on the right likes tracks Q, R, S, and T. Collaborative filtering then uses that data to say: “Hmmm... You both like three of the same tracks — Q, R, and S — so you are probably similar users. Therefore, you’re each likely to enjoy other tracks that the other person has listened to, that you haven’t heard yet.” Therefore, it suggests that the one on the right check out track P — the only track not mentioned, but that his “similar” counterpart enjoyed — and the one on the left check out track T, for the same reasoning. Simple, right? But how does Spotify actually use that concept in practice to calculate millions of users’ suggested tracks based on millions of other users’ preferences? With matrix math, done with Python libraries! In actuality, this matrix is gigantic. Each row represents one of Spotify’s 140 million users — if you use Spotify, you yourself are a row in this matrix — and each column represents one of the 30 million songs in Spotify’s database. Then, the Python library runs a long, complicated matrix factorization formula. When it finishes, we end up with two types of vectors, call them X and Y. X is a user vector, representing one single user’s taste, and Y is a song vector, representing one single song’s profile. Now we have 140 million user vectors and 30 million song vectors. The actual content of these vectors is just a bunch of numbers that are essentially meaningless on their own, but are hugely useful when compared. To find out which users’ musical tastes are most similar to mine, collaborative filtering compares my vector with all of the other users’ vectors, ultimately spitting out which users are the closest matches. The same goes for the Y vector, songs: you can compare a single song’s vector with all the others, and find out which songs are most similar to the one in question. Collaborative filtering does a pretty good job, but Spotify knew they could do even better by adding another engine. Enter NLP. The second type of recommendation model that Spotify employs is Natural Language Processing (NLP) models. The source data for these models, as the name suggests, are regular ol’ words: track metadata, news articles, blogs, and other text around the internet.
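Before moving on to the NLP models, here is a toy sketch of the matrix-factorization idea described above. It is purely illustrative (a handful of made-up users, songs, and play counts, and a crude training loop), whereas Spotify's real system factorizes a matrix with hundreds of millions of rows:

```python
# Toy matrix factorization for implicit feedback (play counts).
import numpy as np

rng = np.random.default_rng(0)

# Rows = users, columns = songs; entries are made-up play counts.
plays = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

n_users, n_songs = plays.shape
k = 2  # number of latent dimensions

user_vecs = rng.normal(scale=0.1, size=(n_users, k))   # the "X" vectors
song_vecs = rng.normal(scale=0.1, size=(n_songs, k))   # the "Y" vectors

# Crude gradient descent on squared error over the observed (non-zero) entries.
observed = plays > 0
for _ in range(2000):
    error = (plays - user_vecs @ song_vecs.T) * observed
    user_vecs += 0.01 * error @ song_vecs
    song_vecs += 0.01 * error.T @ user_vecs

# Users (or songs) whose vectors point in similar directions have similar profiles.
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(user_vecs[0], user_vecs[1]))  # users 0 and 1 have similar play patterns
```

The point is only this: once the factorization is done, every user and every song is summarized by a small vector, and similarity between vectors stands in for similarity of taste.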
Natural Language Processing, which is the ability of a computer to understand human language as it is written or spoken, is a vast field unto itself, often harnessed through sentiment analysis APIs. The exact mechanisms behind NLP are beyond the scope of this article, but here’s what happens on a very high level: Spotify crawls the web constantly looking for blog posts and other written text about music to figure out what people are saying about specific artists and songs — which adjectives and what particular language is frequently used in reference to those artists and songs, and which other artists and songs are also being discussed alongside them. While I don’t know the specifics of how Spotify chooses to then process this scraped data, I can offer some insight based on how the Echo Nest used to work with them. They would bucket Spotify’s data up into what they call “cultural vectors” or “top terms.” Each artist and song had thousands of top terms that changed on the daily. Each term had an associated weight, which correlated to its relative importance — roughly, the probability that someone will describe the music or artist with that term. Then, much like in collaborative filtering, the NLP model uses these terms and weights to create a vector representation of the song that can be used to determine if two pieces of music are similar. Cool, right? That brings us to the third type: raw audio models. You might be wondering why Spotify would need yet another model. First of all, adding a third model further improves the accuracy of the music recommendation service. But this model also serves a secondary purpose: unlike the first two types, raw audio models take new songs into account. Take, for example, a song your singer-songwriter friend has put up on Spotify. Maybe it only has 50 listens, so there are few other listeners to collaboratively filter it against. It also isn’t mentioned anywhere on the internet yet, so NLP models won’t pick it up. Luckily, raw audio models don’t discriminate between new tracks and popular tracks, so with their help, your friend’s song could end up in a Discover Weekly playlist alongside popular songs! But how can we analyze raw audio data, which seems so abstract? With convolutional neural networks! Convolutional neural networks are the same technology used in facial recognition software. In Spotify’s case, they’ve been modified for use on audio data instead of pixels. Here’s an example of such an architecture: one particular network has four convolutional layers followed by three dense layers. The inputs are time-frequency representations of audio frames, which are then concatenated, or linked together, to form the spectrogram. The audio frames go through these convolutional layers, and after passing through the last one, there is a “global temporal pooling” layer, which pools across the entire time axis, effectively computing statistics of the learned features across the time of the song. After processing, the neural network spits out an understanding of the song, including characteristics like estimated time signature, key, mode, tempo, and loudness — think, for example, of such a reading computed for a 30-second snippet of “Around the World” by Daft Punk. Ultimately, this reading of the song’s key characteristics allows Spotify to understand fundamental similarities between songs and therefore which users might enjoy them, based on their own listening history.
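For the curious, here is a rough Keras sketch in the spirit of the convolutional architecture just described: a few convolutional layers over a spectrogram, global temporal pooling, then dense layers. The input shape, layer sizes, and output dimension are illustrative guesses, not Spotify's actual model:

```python
# Illustrative convolutional network over a spectrogram-like input.
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, GlobalAveragePooling1D, Dense

n_frames, n_mel_bins = 599, 128  # hypothetical: ~30s of audio, 128 frequency bands

model = Sequential([
    # Convolve along the time axis; each channel is one frequency bin.
    Conv1D(64, 4, activation="relu", input_shape=(n_frames, n_mel_bins)),
    MaxPooling1D(2),
    Conv1D(128, 4, activation="relu"),
    MaxPooling1D(2),
    Conv1D(128, 4, activation="relu"),
    Conv1D(256, 4, activation="relu"),
    # "Global temporal pooling": summarize learned features across the whole clip.
    GlobalAveragePooling1D(),
    Dense(512, activation="relu"),
    Dense(512, activation="relu"),
    Dense(40, activation="sigmoid"),  # hypothetical latent "song profile"
])
model.summary()
```

The output vector plays the same role as the vectors from the other two models: songs whose vectors are close are treated as similar.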
That covers the basics of the three major types of recommendation models feeding Spotify’s Recommendations Pipeline, and ultimately powering the Discover Weekly playlist! Of course, these recommendation models are all connected to Spotify’s larger ecosystem, which includes giant amounts of data storage and uses lots of Hadoop clusters to scale recommendations and make these engines work on enormous matrices, endless online music articles, and huge numbers of audio files. I hope this was informative and piqued your curiosity like it did mine. For now, I’ll be working my way through my own Discover Weekly, finding my new favorite music while appreciating all the machine learning that’s going on behind the scenes. 🎶 Thanks also to ladycollective for reading this article and suggesting edits.
François Chollet
35K
18
https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec?source=tag_archive---------5----------------
The impossibility of intelligence explosion – François Chollet – Medium
In 1965, I. J. Good described for the first time the notion of “intelligence explosion” as it relates to artificial intelligence (AI). Decades later, the concept of an “intelligence explosion” — leading to the sudden rise of “superintelligence” and the accidental end of the human race — has taken hold in the AI community. Famous business leaders are casting it as a major risk, greater than nuclear war or climate change. Average graduate students in machine learning are endorsing it. In a 2015 email survey targeting AI researchers, 29% of respondents answered that intelligence explosion was “likely” or “highly likely”. A further 21% considered it a serious possibility. The basic premise is that, in the near future, a first “seed AI” will be created, with general problem-solving abilities slightly surpassing those of humans. This seed AI would start designing better AIs, initiating a recursive self-improvement loop that would immediately leave human intelligence in the dust, overtaking it by orders of magnitude in a short time. Proponents of this theory also regard intelligence as a kind of superpower, conferring on its holders almost supernatural capabilities to shape their environment — as seen in the science-fiction movie Transcendence (2014), for instance. Superintelligence would thus imply near-omnipotence, and would pose an existential threat to humanity. This science-fiction narrative contributes to the dangerously misleading public debate that is ongoing about the risks of AI and the need for AI regulation. In this post, I argue that intelligence explosion is impossible — that the notion of intelligence explosion comes from a profound misunderstanding of both the nature of intelligence and the behavior of recursively self-augmenting systems. I attempt to base my points on concrete observations about intelligent systems and recursive systems. The reasoning behind intelligence explosion, like many of the early theories about AI that arose in the 1960s and 1970s, is sophistic: it considers “intelligence” in a completely abstract way, disconnected from its context, and ignores available evidence about both intelligent systems and recursively self-improving systems. It doesn’t have to be that way. We are, after all, on a planet that is literally packed with intelligent systems (including us) and self-improving systems, so we can simply observe them and learn from them to answer the questions at hand, instead of coming up with evidence-free circular reasoning. To talk about intelligence and its possible self-improving properties, we should first introduce necessary background and context. What are we talking about when we talk about intelligence? Precisely defining intelligence is in itself a challenge. The intelligence explosion narrative equates intelligence with the general problem-solving ability displayed by individual intelligent agents — by current human brains, or future electronic brains. This is not quite the full picture, so let’s use this definition as a starting point, and expand on it. The first issue I see with the intelligence explosion theory is a failure to recognize that intelligence is necessarily part of a broader system — a vision of intelligence as a “brain in a jar” that can be made arbitrarily intelligent independently of its situation. A brain is just a piece of biological tissue; there is nothing intrinsically intelligent about it. Beyond your brain, your body and senses — your sensorimotor affordances — are a fundamental part of your mind.
Your environment is a fundamental part of your mind. Human culture is a fundamental part of your mind. These are, after all, where all of your thoughts come from. You cannot dissociate intelligence from the context in which it expresses itself. In particular, there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems. If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem. In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human. What would happen if we were to put a freshly-created human brain in the body of an octopus, and let it live at the bottom of the ocean? Would it even learn to use its eight-legged body? Would it survive past a few days? We cannot perform this experiment, but we do know that cognitive development in humans and animals is driven by hardcoded, innate dynamics. Human babies are born with an advanced set of reflex behaviors and innate learning templates that drive their early sensorimotor development, and that are fundamentally intertwined with the structure of the human sensorimotor space. The brain has hardcoded conceptions of having a body with hands that can grab, a mouth that can suck, eyes mounted on a moving head that can be used to visually follow objects (the vestibulo-ocular reflex), and these preconceptions are required for human intelligence to start taking control of the human body. It has even been convincingly argued, for instance by Chomsky, that very high-level human cognitive features, such as our ability to develop language, are innate. Similarly, one can imagine that the octopus has its own set of hardcoded cognitive primitives required in order to learn how to use an octopus body and survive in its octopus environment. The brain of a human is hyper specialized in the human condition — an innate specialization extending possibly as far as social behaviors, language, and common sense — and the brain of an octopus would likewise be hyper specialized in octopus behaviors. A human baby brain properly grafted into an octopus body would most likely fail to adequately take control of its unique sensorimotor space, and would quickly die off. Not so smart now, Mr. Superior Brain. What would happen if we were to put a human — brain and body — into an environment that does not feature human culture as we know it? Would Mowgli the man-cub, raised by a pack of wolves, grow up to outsmart his canine siblings? To be smart like us? And if we swapped baby Mowgli with baby Einstein, would he eventually educate himself into developing grand theories of the universe? Empirical evidence is relatively scarce, but from what we know, children that grow up outside of the nurturing environment of human culture don’t develop any human intelligence. Feral children raised in the wild from their earliest years become effectively animals, and can no longer acquire human behaviors or language when returning to civilization.
Saturday Mthiyane, raised by monkeys in South Africa and found at five, kept behaving like a monkey into adulthood — jumping and walking on all fours, incapable of language, and refusing to eat cooked food. Feral children who have human contact for at least some of their most formative years tend to have slightly better luck with reeducation, although they rarely graduate to fully-functioning humans. If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt. Intelligence expansion can only come from a co-evolution of the mind, its sensorimotor modalities, and its environment. If the gears of your brain were the defining factor of your problem-solving ability, then those rare humans with IQs far outside the normal range of human intelligence would live lives far outside the scope of normal lives, would solve problems previously thought unsolvable, and would take over the world — just as some people fear smarter-than-human AI will do. In practice, geniuses with exceptional cognitive abilities usually live overwhelmingly banal lives, and very few of them accomplish anything of note. In Terman’s landmark “Genetic Studies of Genius”, he notes that most of his exceptionally gifted subjects would pursue occupations “as humble as those of policeman, seaman, typist and filing clerk”. There are currently about seven million people with IQs higher than 150 — better cognitive ability than 99.9% of humanity — and mostly, these are not the people you read about in the news. Of the people who have actually attempted to take over the world, hardly any seem to have had exceptional intelligence; anecdotally, Hitler was a high-school dropout who failed to get into the Vienna Academy of Art — twice. People who do end up making breakthroughs on hard problems do so through a combination of circumstances, character, education, and intelligence, and they make their breakthroughs through incremental improvement over the work of their predecessors. Success — expressed intelligence — is sufficient ability meeting a great problem at the right time. Most of these remarkable problem-solvers are not even that clever — their skills seem to be specialized in a given field and they typically do not display greater-than-average abilities outside of their own domain. Some people achieve more because they were better team players, or had more grit and work ethic, or greater imagination. Some just happened to have lived in the right context, to have the right conversation at the right time. Intelligence is fundamentally situational. Intelligence is not a superpower; exceptional intelligence does not, on its own, confer proportionally exceptional power over your circumstances. However, it is a well-documented fact that raw cognitive ability — as measured by IQ, which may be debatable — correlates with social attainment for slices of the spectrum that are close to the mean. This was first evidenced in Terman’s study, and later confirmed by others — for instance, an extensive 2006 metastudy by Strenze found a visible, if somewhat weak, correlation between IQ and socioeconomic success.
So, a person with an IQ of 130 is statistically far more likely to succeed in navigating the problem of life than a person with an IQ of 70 — although this is never guaranteed at the individual level — but here’s the thing: this correlation breaks down after a certain point. There is no evidence that a person with an IQ of 170 is in any way more likely to achieve a greater impact in their field than a person with an IQ of 130. In fact, many of the most impactful scientists tend to have had IQs in the 120s or 130s — Feynman reported 126, James Watson, co-discoverer of DNA, 124 — which is exactly the same range as legions of mediocre scientists. At the same time, of the roughly 50,000 humans alive today who have astounding IQs of 170 or higher, how many will solve any problem a tenth as significant as Professor Watson’s? Why would the real-world utility of raw cognitive ability stall past a certain threshold? This points to a very intuitive fact: that high attainment requires sufficient cognitive ability, but that the current bottleneck to problem-solving, to expressed intelligence, is not latent cognitive ability itself. The bottleneck is our circumstances. Our environment, which determines how our intelligence manifests itself, puts a hard limit on what we can do with our brains — on how intelligent we can grow up to be, on how effectively we can leverage the intelligence that we develop, on what problems we can solve. All evidence points to the fact that our current environment, much like past environments over the previous 200,000 years of human history and prehistory, does not allow high-intelligence individuals to fully develop and utilize their cognitive potential. A high-potential human 10,000 years ago would have been raised in a low-complexity environment, likely speaking a single language with fewer than 5,000 words, would never have been taught to read or write, would have been exposed to a limited amount of knowledge and to few cognitive challenges. The situation is a bit better for most contemporary humans, but there is no indication that our environmental opportunities currently outpace our cognitive potential. A smart human raised in the jungle is but a hairless ape. Similarly, an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human. If it could, then exceptionally high-IQ humans would already be displaying proportionally exceptional levels of personal attainment; they would achieve exceptional levels of control over their environment, and solve major outstanding problems — which they don’t in practice. It’s not just that our bodies, senses, and environment determine how much intelligence our brains can develop — crucially, our biological brains are just a small part of our whole intelligence. Cognitive prosthetics surround us, plugging into our brain and extending its problem-solving capabilities. Your smartphone. Your laptop. Google search. The cognitive tools you were gifted in school. Books. Other people. Mathematical notation. Programming. The most fundamental of all cognitive prosthetics is of course language itself — essentially an operating system for cognition, without which we couldn’t think very far. These things are not merely knowledge to be fed to the brain and used by it; they are literally external cognitive processes, non-biological ways to run threads of thought and problem-solving algorithms — across time, space, and importantly, across individuality.
These cognitive prosthetics, not our brains, are where most of our cognitive abilities reside. We are our tools. An individual human is pretty much useless on its own — again, humans are just bipedal apes. It’s a collective accumulation of knowledge and external systems over thousands of years — what we call “civilization” — that has elevated us above our animal nature. When a scientist makes a breakthrough, the thought processes they are running in their brain are just a small part of the equation — the researcher offloads large parts of the problem-solving process to computers, to other researchers, to paper notes, to mathematical notation, etc. And they are only able to succeed because they are standing on the shoulders of giants — their own work is but one last subroutine in a problem-solving process that spans decades and thousands of individuals. Their own individual cognitive work may not be much more significant to the whole process than the work of a single transistor on a chip. An overwhelming amount of evidence points to this simple fact: a single human brain, on its own, is not capable of designing a greater intelligence than itself. This is a purely empirical statement: out of billions of human brains that have come and gone, none has done so. Clearly, the intelligence of a single human, over a single lifetime, cannot design intelligence, or else, over billions of trials, it would have already occurred. However, these billions of brains, accumulating knowledge and developing external intelligent processes over thousands of years, implement a system — civilization — which may eventually lead to artificial brains with greater intelligence than that of a single human. It is civilization as a whole that will create superhuman AI, not you, nor me, nor any individual. A process involving countless humans, over timescales we can barely comprehend. A process involving far more externalized intelligence — books, computers, mathematics, science, the internet — than biological intelligence. On an individual level, we are but vectors of civilization, building upon previous work and passing on our findings. We are the momentary transistors on which the problem-solving algorithm of civilization runs. Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can. Answering “yes” would fly in the face of everything we know — again, remember that no human, nor any intelligent entity that we know of, has ever designed anything smarter than itself. What we do is, gradually, collectively, build external problem-solving systems that are greater than ourselves. However, future AIs, much like humans and the other intelligent systems we’ve produced so far, will contribute to our civilization, and our civilization, in turn, will use them to keep expanding the capabilities of the AIs it produces. AI, in this sense, is no different than computers, or books, or language itself: it’s a technology that empowers our civilization. The advent of superhuman AI will thus be no more of a singularity than the advent of computers, or books, or language. Civilization will develop AI, and just march on. Civilization will eventually transcend what we are now, much like it has transcended what we were 10,000 years ago. It’s a gradual process, not a sudden shift.
The basic premise of intelligence explosion — that a “seed AI” will arise, with greater-than-human problem solving ability, leading to a sudden, recursive, runaway intelligence improvement loop — is false. Our problem-solving abilities (in particular, our ability to design AI) are already constantly improving, because these abilities do not reside primarily in our biological brains, but in our external, collective tools. The recursive loop has been in action for a long time, and the rise of “better brains” will not qualitatively affect it — no more than any previous intelligence-enhancing technology. Our brains themselves were never a significant bottleneck in the AI-design process. In this case, you may ask, isn’t civilization itself the runaway self-improving brain? Is our civilizational intelligence exploding? No. Crucially, the civilization-level intelligence-improving loop has only resulted in measurably linear progress in our problem-solving abilities over time. Not an explosion. But why? Wouldn’t recursively improving X mathematically result in X growing exponentially? No — in short, because no complex real-world system can be modeled as `X(t + 1) = X(t) * a, a > 1`. No system exists in a vacuum, and especially not intelligence, nor human civilization. We don’t have to speculate about whether an “explosion” would happen the moment an intelligent system starts optimizing its own intelligence. As it happens, most systems are recursively self-improving. We’re surrounded by them. So we know exactly how such systems behave — in a variety of contexts and over a variety of timescales. You are, yourself, a recursively self-improving system: educating yourself makes you smarter, in turn allowing you to educate yourself more efficiently. Likewise, human civilization is recursively self-improving, over a much longer timescale. Mechatronics is recursively self-improving — better manufacturing robots can manufacture better manufacturing robots. Military empires are recursively self-expanding — the larger your empire, the greater your military means to expand it further. Personal investing is recursively self-improving — the more money you have, the more money you can make. Examples abound. Consider, for instance, software. Writing software obviously empowers software-writing: first, we programmed compilers that could perform “automated programming”; then we used compilers to develop new languages implementing more powerful programming paradigms. We used these languages to develop advanced developer tools — debuggers, IDEs, linters, bug predictors. In the future, software will even write itself. And what is the end result of this recursively self-improving process? Can you do 2x more with the software on your computer than you could last year? Will you be able to do 2x more next year? Arguably, the usefulness of software has been improving at a measurably linear pace, while we have invested exponential efforts into producing it. The number of software developers has been booming exponentially for decades, and the number of transistors on which we are running our software has been exploding as well, following Moore’s law. Yet, our computers are only incrementally more useful to us than they were in 2012, or 2002, or 1992. But why? Primarily, because the usefulness of software is fundamentally limited by the context of its application — much like intelligence is both defined and limited by the context in which it expresses itself.
Software is just one cog in a bigger process — our economies, our lives — just like your brain is just one cog in a bigger process — human culture. This context puts a hard limit on the maximum potential usefulness of software, much like our environment puts a hard limit on how intelligent any individual can be — even if gifted with a superhuman brain. Beyond contextual hard limits, even if one part of a system has the ability to recursively self-improve, other parts of the system will inevitably start acting as bottlenecks. Antagonistic processes will arise in response to recursive self-improvement and squash it — in software, this would be resource consumption, feature creep, UX issues. When it comes to personal investing, your own rate of spending is one such antagonistic process — the more money you have, the more money you spend. When it comes to intelligence, inter-system communication arises as a brake on any improvement of underlying modules — a brain with smarter parts will have more trouble coordinating them; a society with smarter individuals will need to invest far more in networking and communication, etc. It is perhaps not a coincidence that very high-IQ people are more likely to suffer from certain mental illnesses. It is also perhaps not random happenstance that military empires of the past have ended up collapsing after surpassing a certain size. Exponential progress, meet exponential friction. One specific example that is worth paying attention to is that of scientific progress, because it is conceptually very close to intelligence itself — science, as a problem-solving system, is very close to being a runaway superhuman AI. Science is, of course, a recursively self-improving system, because scientific progress results in the development of tools that empower science — whether lab hardware (e.g. quantum physics led to lasers, which enabled a wealth of new quantum physics experiments), conceptual tools (e.g. a new theorem, a new theory), cognitive tools (e.g. mathematical notation), software tools, communications protocols that enable scientists to better collaborate (e.g. the Internet)... Yet, modern scientific progress is measurably linear. I wrote about this phenomenon at length in a 2012 essay titled “The Singularity is not coming”. We didn’t make greater progress in physics over the 1950–2000 period than we did over 1900–1950 — we did, arguably, about as well. Mathematics is not advancing significantly faster today than it did in 1920. Medical science has been making linear progress on essentially all of its metrics, for decades. And this is despite us investing exponential efforts into science — the headcount of researchers doubles roughly once every 15 to 20 years, and these researchers are using exponentially faster computers to improve their productivity. How come? What bottlenecks and adversarial counter-reactions are slowing down recursive self-improvement in science? So many, I can’t even count them. Here are a few. Importantly, every single one of them would also apply to recursively self-improving AIs. In practice, system bottlenecks, diminishing returns, and adversarial reactions end up squashing recursive self-improvement in all of the recursive processes that surround us. Self-improvement does indeed lead to progress, but that progress tends to be linear, or at best, sigmoidal.
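To see why compounding alone does not guarantee an explosion, here is a toy numerical sketch of the argument (an illustration of the rhetorical point, not a model of any real system):

```python
# Unchecked compounding explodes; a friction term whose cost grows with the
# size of the system flattens the increments toward a constant, i.e. roughly
# linear growth. Purely illustrative numbers.
def unchecked(x, a=1.1):
    return x * a                      # X(t + 1) = X(t) * a, a > 1

def with_friction(x, a=0.5, b=0.5):
    return x + (a * x) / (1 + b * x)  # increments approach a/b as x grows

x1 = x2 = 1.0
for t in range(1, 101):
    x1, x2 = unchecked(x1), with_friction(x2)
    if t % 25 == 0:
        print(t, round(x1, 1), round(x2, 1))
# x1 reaches roughly 13,800 by t=100; x2 just creeps up by about one unit per step.
```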
Your first “seed dollar” invested will not typically lead to a “wealth explosion”; instead, a balance between investment returns and growing spending will usually lead to a roughly linear growth of your savings over time. And that’s for a system that is orders of magnitude simpler than a self-improving mind. Likewise, the first superhuman AI will just be another step on a visibly linear ladder of progress that we started climbing long ago. The expansion of intelligence can only come from a co-evolution of brains (biological or digital), sensorimotor affordances, environment, and culture — not from merely tuning the gears of some brain in a jar, in isolation. Such a co-evolution has already been happening for eons, and will continue as intelligence moves to an increasingly digital substrate. No “intelligence explosion” will occur, as this process advances at a roughly linear pace. @fchollet, November 2017 Marketing footnote: my book Deep Learning with Python has just been released. If you have Python skills, and you want to understand what deep learning can and cannot do, and how to use it to solve difficult real-world problems, this book was written for you.
Max Pechyonkin
23K
8
https://medium.com/ai%C2%B3-theory-practice-business/understanding-hintons-capsule-networks-part-i-intuition-b4b559d1159b?source=tag_archive---------8----------------
Understanding Hinton’s Capsule Networks. Part I: Intuition.
Part I: Intuition (you are reading it now). Part II: How Capsules Work. Part III: Dynamic Routing Between Capsules. Part IV: CapsNet Architecture. Quick announcement about our new publication AI3. We are getting the best writers together to talk about the Theory, Practice, and Business of AI and machine learning. Follow it to stay up to date on the latest trends. Last week, Geoffrey Hinton and his team published two papers that introduced a completely new type of neural network based on so-called capsules. In addition to that, the team published an algorithm, called dynamic routing between capsules, that makes it possible to train such a network. For everyone in the deep learning community, this is huge news, and for several reasons. First of all, Hinton is one of the founders of deep learning and an inventor of numerous models and algorithms that are widely used today. Secondly, these papers introduce something completely new, and this is very exciting because it will most likely stimulate an additional wave of research and very cool applications. In this post, I will explain why this new architecture is so important, as well as the intuition behind it. In the following posts I will dive into technical details. However, before talking about capsules, we need to have a look at CNNs, which are the workhorse of today’s deep learning. CNNs (convolutional neural networks) are awesome. They are one of the reasons deep learning is so popular today. They can do amazing things that people used to think computers would not be capable of doing for a long, long time. Nonetheless, they have their limits and they have fundamental drawbacks. Let us consider a very simple and non-technical example. Imagine a face. What are the components? We have the face oval, two eyes, a nose and a mouth. For a CNN, the mere presence of these objects can be a very strong indicator that there is a face in the image. Orientational and relative spatial relationships between these components are not very important to a CNN. How do CNNs work? The main component of a CNN is the convolutional layer. Its job is to detect important features in the image pixels. Layers closer to the input learn to detect simple features such as edges and color gradients, whereas higher layers combine simple features into more complex ones. Finally, dense layers at the top of the network combine very high-level features and produce classification predictions. An important thing to understand is that higher-level features combine lower-level features as a weighted sum: activations of a preceding layer are multiplied by the following layer’s neuron weights and added, before being passed to an activation nonlinearity. Nowhere in this setup is there any pose (translational and rotational) relationship between the simpler features that make up a higher-level feature. The CNN approach to this issue is to use max pooling or successive convolutional layers that reduce the spatial size of the data flowing through the network and therefore increase the “field of view” of higher layers’ neurons, thus allowing them to detect higher-order features in a larger region of the input image. Max pooling is a crutch that made convolutional networks work surprisingly well, achieving superhuman performance in many areas. But do not be fooled by its performance: while CNNs work better than any model before them, max pooling is nonetheless losing valuable information.
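Here is a tiny numpy illustration of exactly what max pooling discards: two feature maps with visibly different spatial layouts (activations at the corners versus clustered in the center) pool to the exact same output, because the precise position of each activation inside its pooling window is thrown away. This is a toy sketch, not code from the capsule papers:

```python
import numpy as np

def max_pool_2x2(x):
    # Non-overlapping 2x2 max pooling over a 4x4 feature map.
    return x.reshape(2, 2, 2, 2).max(axis=(1, 3))

corners = np.array([[9, 0, 0, 7],
                    [0, 0, 0, 0],
                    [0, 0, 0, 0],
                    [6, 0, 0, 8]])

center = np.array([[0, 0, 0, 0],
                   [0, 9, 7, 0],
                   [0, 6, 8, 0],
                   [0, 0, 0, 0]])

print(max_pool_2x2(corners))  # [[9 7] [6 8]]
print(max_pool_2x2(center))   # [[9 7] [6 8]] — identical output, position is lost
```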
Hinton himself has stated that the fact that max pooling works so well is a big mistake and a disaster. Of course, you can do away with max pooling and still get good results with traditional CNNs, but they still do not solve the key problem: in the example above, the mere presence of two eyes, a mouth and a nose in a picture does not mean there is a face; we also need to know how these objects are oriented relative to each other. Computer graphics deals with constructing a visual image from some internal hierarchical representation of geometric data. Note that the structure of this representation needs to take into account the relative positions of objects. That internal representation is stored in the computer’s memory as arrays of geometrical objects and matrices that represent the relative positions and orientations of these objects. Then, special software takes that representation and converts it into an image on the screen. This is called rendering. Inspired by this idea, Hinton argues that brains in fact do the opposite of rendering. He calls it inverse graphics: from the visual information received by the eyes, the brain deconstructs a hierarchical representation of the world around us and tries to match it with already learned patterns and relationships stored in memory. This is how recognition happens. And the key idea is that the representation of objects in the brain does not depend on view angle. So at this point the question is: how do we model these hierarchical relationships inside a neural network? The answer comes from computer graphics. In 3D graphics, relationships between 3D objects can be represented by a so-called pose, which is in essence translation plus rotation. Hinton argues that in order to do classification and object recognition correctly, it is important to preserve hierarchical pose relationships between object parts. This is the key intuition that will allow you to understand why capsule theory is so important: it incorporates relative relationships between objects, represented numerically as a 4x4 pose matrix. When these relationships are built into the internal representation of the data, it becomes very easy for a model to understand that the thing it sees is just another view of something it has seen before. Think of the Statue of Liberty: you can easily recognize it even in pictures taken from angles you have never seen before, because the internal representation of the Statue of Liberty in your brain does not depend on the view angle. For a CNN, this task is really hard because it does not have a built-in understanding of 3D space, but for a CapsNet it is much easier because these relationships are explicitly modeled. The paper that uses this approach was able to cut the error rate by 45% compared to the previous state of the art, which is a huge improvement. Another benefit of the capsule approach is that it is capable of reaching state-of-the-art performance using only a fraction of the data that a CNN would need (Hinton mentions this in his famous talk about what is wrong with CNNs). In this sense, capsule theory is much closer to what the human brain does in practice. In order to learn to tell digits apart, the human brain needs to see only a couple dozen examples, hundreds at most.
CNNs, on the other hand, need tens of thousands of examples to achieve very good performance, which seems like a brute-force approach that is clearly inferior to what we do with our brains. The idea is really simple; surely someone must have come up with it before! And the truth is, Hinton has been thinking about this for decades. The reason there were no publications is simply that there was no technical way to make it work. One reason is that computers were just not powerful enough in the pre-GPU era before around 2012. Another is that there was no algorithm that made it possible to implement and successfully train a capsule network (in the same way that the idea of artificial neurons had been around since the 1940s, but it was not until the mid-1980s that the backpropagation algorithm showed up and made it possible to train deep networks successfully). Likewise, the idea of capsules itself is not that new and Hinton has mentioned it before, but until now there was no algorithm to make it work. This algorithm is called “dynamic routing between capsules”. It allows capsules to communicate with each other and create representations similar to scene graphs in computer graphics. Capsules introduce a new building block that can be used in deep learning to better model hierarchical relationships inside a neural network’s internal knowledge representation. The intuition behind them is very simple and elegant. Hinton and his team proposed a way to train such a network made up of capsules and successfully trained it on a simple data set, achieving state-of-the-art performance. This is very encouraging. Nonetheless, there are challenges. Current implementations are much slower than other modern deep learning models. Time will tell whether capsule networks can be trained quickly and efficiently. In addition, we need to see whether they work well on more difficult data sets and in different domains. In any case, the capsule network is a very interesting and already working model which will certainly be developed further over time and contribute to the expansion of deep learning’s application domain. This concludes part one of the series on capsule networks. In Part II, the more technical part, I will walk you through the CapsNet’s internal workings step by step. You can follow me on Twitter. Let’s also connect on LinkedIn.
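As a small numerical postscript to the intuition above, here is a numpy sketch (with made-up numbers of my own, not anything from the papers) of why pose matrices handle viewpoint so gracefully: each detected part votes for the pose of the whole through a fixed 4x4 transform, and it is the agreement between those votes, under any viewpoint, that signals the presence of the object.

```python
import numpy as np

def pose(rotation_deg, tx, ty):
    """4x4 homogeneous transform: rotation about z plus a translation."""
    r = np.radians(rotation_deg)
    m = np.eye(4)
    m[:2, :2] = [[np.cos(r), -np.sin(r)], [np.sin(r), np.cos(r)]]
    m[0, 3], m[1, 3] = tx, ty
    return m

# Fixed (learned) relationships between the whole and its parts.
face_to_eye   = pose(0, -1.0,  1.5)
face_to_mouth = pose(0,  0.0, -1.5)

for view in (0, 30, 75):                     # three different viewpoints
    face = pose(view, 4.0, 2.0)              # true pose of the face
    eye_in_image   = face @ face_to_eye      # what the "eye capsule" sees
    mouth_in_image = face @ face_to_mouth    # what the "mouth capsule" sees

    # Each part votes for the pose of the whole by undoing its relationship.
    vote_from_eye   = eye_in_image   @ np.linalg.inv(face_to_eye)
    vote_from_mouth = mouth_in_image @ np.linalg.inv(face_to_mouth)

    # The votes agree for every viewpoint; that agreement, not the mere
    # presence of the parts, is the evidence that a face is there.
    assert np.allclose(vote_from_eye, vote_from_mouth)
    print(view, np.round(vote_from_eye[:2, 3], 2))
```

Detecting this kind of agreement between votes is, roughly speaking, what dynamic routing is for, which is where Part II picks up.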
Slav Ivanov
3.9K
17
https://blog.slavv.com/the-1700-great-deep-learning-box-assembly-setup-and-benchmarks-148c5ebe6415?source=tag_archive---------9----------------
The $1700 great Deep Learning box: Assembly, setup and benchmarks
Updated April 2018: Uses CUDA 9, cuDNN 7 and Tensorflow 1.5. After years of using a thin client in the form of increasingly thinner MacBooks, I had gotten used to it. So when I got into Deep Learning (DL), I went straight for the then brand-new Amazon P2 cloud servers. No upfront cost, the ability to train many models simultaneously and the general coolness of having a machine learning model out there slowly teaching itself. However, as time passed, the AWS bills steadily grew larger, even as I switched to 10x cheaper Spot instances. Also, I didn’t find myself training more than one model at a time. Instead, I’d go to lunch/workout/etc. while the model was training, and come back later with a clear head to check on it. But eventually the model complexity grew and took longer to train. I’d often forget what I did differently on the model that had just completed its 2-day training. Nudged by the great experiences of the other folks on the Fast.AI Forum, I decided to settle down and get a dedicated DL box at home. The most important reason was saving time while prototyping models — if they trained faster, the feedback time would be shorter. Thus it would be easier for my brain to connect the dots between the assumptions I had for the model and its results. Then I wanted to save money — I was using Amazon Web Services (AWS), which offered P2 instances with Nvidia K80 GPUs. Lately, the AWS bills were around $60–70/month with a tendency to get larger. Also, it is expensive to store large datasets, like ImageNet. And lastly, I haven’t had a desktop for over 10 years and wanted to see what has changed in the meantime (spoiler alert: mostly nothing). What follows are my choices, inner monologue, and gotchas: from choosing the components to benchmarking. A sensible budget for me would be about 2 years’ worth of my current compute spending. At $70/month for AWS, this put it at around $1700 for the whole thing. You can check out the full list of components used. The PC Part Picker site is also really helpful in detecting if some of the components don’t play well together. The GPU is the most crucial component in the box. It will train these deep networks fast, shortening the feedback cycle. Disclosure: The following are affiliate links, to help me pay for, well, more GPUs. The choice is between a few of Nvidia’s cards: GTX 1070, GTX 1070 Ti, GTX 1080, GTX 1080 Ti and finally the Titan X. The prices might fluctuate, especially because some GPUs are great for cryptocurrency mining (wink, 1070, wink). On the performance side: the GTX 1080 Ti and Titan X are similar. Roughly speaking, the GTX 1080 is about 25% faster than the GTX 1070, and the GTX 1080 Ti is about 30% faster than the GTX 1080. The new GTX 1070 Ti is very close in performance to the GTX 1080. Tim Dettmers has a great article on picking a GPU for Deep Learning, which he regularly updates as new cards come on the market. There are a number of things to consider when picking a GPU. Considering all of this, I picked the GTX 1080 Ti, mainly for the training speed boost. I plan to add a second 1080 Ti soonish. Even though the GPU is the MVP in deep learning, the CPU still matters. For example, data preparation is usually done on the CPU. The number of cores and threads per core is important if we want to parallelize all that data prep. To stay on budget, I picked a mid-range CPU, the Intel i5 7500. It’s relatively cheap but good enough to not slow things down.
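Since the whole premise of the build is the budget arithmetic above, here is a quick throwaway Python sketch (using only the numbers quoted in this post) showing how the roughly $1700 price tag lines up with the monthly AWS bill:

```python
# Back-of-the-envelope payback time for the box: ~$1700 up front versus
# an AWS bill that was running around $60-70/month (the $100 line assumes
# usage keeps growing, which is purely illustrative).
BOX_COST = 1700  # approximate total build cost in USD

for monthly_aws_bill in (60, 70, 100):
    months = BOX_COST / monthly_aws_bill
    print(f"${monthly_aws_bill}/month -> pays for itself in "
          f"{months:.1f} months ({months / 12:.1f} years)")
```

At $70/month that works out to roughly 24 months, which is exactly the 2-year budget logic above.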
Edit: As a few people have pointed out: “probably the biggest gotcha that is unique to DL/multi-GPU is to pay attention to the PCIe lanes supported by the CPU/motherboard” (by Andrej Karpathy). We want each GPU to have 16 PCIe lanes so it eats data as fast as possible (16 GB/s for PCIe 3.0). This means that for two cards we need 32 PCIe lanes. However, the CPU I picked has only 16 lanes, so 2 GPUs would run in 2x8 mode (instead of 2x16). This might be a bottleneck, leading to less than ideal utilization of the graphics cards. Thus a CPU with 40 lanes is recommended. Edit 2: However, Tim Dettmers points out that having 8 lanes per card should only decrease performance by “0–10%” for two GPUs. So currently, my recommendation is: go with 16 PCIe lanes per video card unless it gets too expensive for you. Otherwise, 8 lanes should do as well. A good choice for a dual-GPU machine would be an Intel Xeon processor like the E5–1620 v4 (40 PCIe lanes). Or, if you want to splurge, go for a higher-end processor like the desktop i7–6850K. Memory (RAM): It’s nice to have a lot of memory if we are to be working with rather big datasets. I got 2 sticks of 16 GB, for a total of 32 GB of RAM, and plan to buy another 32 GB later. Following Jeremy Howard’s advice, I got a fast SSD to keep my OS and current data on, and then a slow spinning HDD for those huge datasets (like ImageNet). SSD: I remember when I got my first Macbook Air years ago, how blown away I was by the SSD speed. To my delight, a new generation of SSD called NVMe has made its way to market in the meantime. A 480 GB MyDigitalSSD NVMe drive was a great deal. This baby copies files at gigabytes per second. HDD: 2 TB Seagate. While SSDs have been getting fast, HDDs have been getting cheap. To somebody who has used Macbooks with a 128 GB disk for the last 7 years, having this much space feels almost obscene. The one thing that I kept in mind when picking a motherboard was the ability to support two GTX 1080 Tis, both in the number of PCI Express lanes (the minimum is 2x8) and in the physical size of two cards. Also, make sure it’s compatible with the chosen CPU. An Asus TUF Z270 did it for me. The MSI X99A SLI PLUS should work great if you got an Intel Xeon CPU. Rule of thumb: the power supply should provide enough juice for the CPU and the GPUs, plus 100 watts extra. The Intel i5 7500 processor uses 65W, and the GPUs (1080 Ti) need 250W each, so I got a Deepcool 750W Gold PSU (currently unavailable, EVGA 750 GQ is similar). The “Gold” here refers to the power efficiency, i.e. how much of the power consumed is wasted as heat. The case should be the same form factor as the motherboard. Also, having enough LEDs to embarrass a Burner is a bonus. A friend recommended the Thermaltake N23 case, which I promptly got. No LEDs sadly. Here is how much I spent on all the components (your costs may vary): $700 GTX 1080 Ti + $190 CPU + $230 RAM + $230 SSD + $66 HDD + $130 Motherboard + $75 PSU + $50 Case = $1671 total. Adding tax and fees, this nicely matches my preset budget of $1700. If you don’t have much experience with hardware and fear you might break something, a professional assembly might be the best option. However, this was a great learning opportunity that I couldn’t pass up (even though I’ve had my share of hardware-related horror stories). The first and most important step is to read the installation manuals that came with each component.
Especially important for me, as I’ve only done this once or twice before, and I have just the right amount of inexperience to mess things up. Installing the CPU is done before installing the motherboard in the case. Next to the processor socket there is a lever that needs to be pulled up. The processor is then placed on the base (double-check the orientation). Finally, the lever comes down to fix the CPU in place. But I had quite a bit of difficulty doing this: once the CPU was in position, the lever wouldn’t go down. I actually had a more hardware-capable friend walk me through the process over video. It turns out the amount of force required to get the lever locked down was more than I was comfortable with. Next is fixing the fan on top of the CPU: the fan legs must be fully secured to the motherboard. Consider where the fan cable will go before installing. The processor I had came with thermal paste. If yours doesn’t, make sure to put some paste between the CPU and the cooling unit. Also, replace the paste if you take off the fan. I put the Power Supply Unit (PSU) in before the motherboard to get the power cables snugly placed in the case’s back side. The motherboard itself was pretty straightforward: carefully place it and screw it in. A magnetic screwdriver was really helpful. Then connect the power cables and the case buttons and LEDs. The SSD was easy: just slide it into the M.2 slot and screw it in. Piece of cake. The memory proved quite hard to install, requiring too much effort to properly lock in. A few times I almost gave up, thinking I must be doing it wrong. Eventually one of the sticks clicked in and the other one promptly followed. At this point, I turned the computer on to make sure it worked. To my relief, it started right away! Finally, the GPU slid in effortlessly. 14 pins of power later and it was running. NB: Do not plug your monitor into the external card right away. Most probably it needs drivers to function (see below). Finally, it’s complete! Now that we have the hardware in place, only the soft part remains. Out with the screwdriver, in with the keyboard. Note on dual booting: If you plan to install Windows (because, you know, for benchmarks, totally not for gaming), it would be wise to do Windows first and Linux second. I didn’t and had to reinstall Ubuntu because Windows messed up the boot partition. Livewire has a detailed article on dual booting. Most DL frameworks are designed to work on Linux first, and eventually support other operating systems. So I went for Ubuntu, my default Linux distribution. An old 2GB USB drive was lying around and worked great for the installation. UNetbootin (OSX) or Rufus (Windows) can prepare the Linux thumb drive. The default options worked fine during the Ubuntu install. At the time of writing, Ubuntu 17.04 had just been released, so I opted for the previous version (16.04), whose quirks are much better documented online. Ubuntu Server or Desktop: The Server and Desktop editions of Ubuntu are almost identical, with the notable exception of the visual interface (called X) not being installed with Server. I installed the Desktop edition and disabled X from autostarting, so that the computer boots in terminal mode. If needed, one could launch the visual desktop later by typing startx. Let’s get our install up to date. From Jeremy Howard’s excellent install-gpu script: To deep learn on our machine, we need a stack of technologies to use our GPU: Download CUDA from Nvidia, or just run the code below: Updated to specify version 9 of CUDA. Thanks to @zhanwenchen for the tip.
If you need to add later versions of CUDA, click here. After CUDA has been installed, the following code will add the CUDA installation to the PATH variable: Now we can verify that CUDA has been installed successfully by running nvidia-smi. This should have installed the display driver as well. For me, nvidia-smi showed ERR as the device name, so I installed the latest Nvidia drivers (as of May 2018) to fix it: Removing CUDA/Nvidia drivers: If at any point the drivers or CUDA seem broken (as they did for me — multiple times), it might be better to start over by running: Since version 1.5, Tensorflow supports cuDNN 7, so we install that. To download cuDNN, one needs to register for a (free) developer account. After downloading, install with the following: Anaconda is a great package manager for Python. I’ve moved to Python 3.6, so I will be using the Anaconda 3 version: Tensorflow is the popular DL framework by Google. Installation: Validate the Tensorflow install: To make sure we have our stack running smoothly, I like to run the Tensorflow MNIST example: We should see the loss decreasing during training: Keras is a great high-level neural networks framework, an absolute pleasure to work with. Installation couldn’t be easier, either: PyTorch is a newcomer in the world of DL frameworks, but its API is modeled on the successful Torch, which was written in Lua. PyTorch feels new and exciting, mostly great, although some things are yet to be implemented. We install it by running: Jupyter is a web-based IDE for Python, which is ideal for data science tasks. It’s installed with Anaconda, so we just configure and test it: Now if we open http://localhost:8888 we should see a Jupyter screen. Run Jupyter on boot: Rather than running the notebook every time the computer is restarted, we can set it to autostart on boot. We will use crontab to do this, which we can edit by running crontab -e. Then add the following after the last line in the crontab file: I use my old trusty Macbook Air for development, so I’d like to be able to log into the DL box both from my home network and when on the run. SSH key: It’s way more secure to use an SSH key to log in instead of a password. Digital Ocean has a great guide on how to set this up. SSH tunnel: If you want to access your Jupyter notebook from another computer, the recommended way is to use SSH tunneling (instead of opening the notebook to the world and protecting it with a password). Let’s see how we can do this: 2. Then to connect over the SSH tunnel, run the following script on the client: To test this, open a browser and try http://localhost:8888 from the remote machine. Your Jupyter notebook should appear. Setting up out-of-network access: Finally, to access the DL box from the outside world, we need 3 things: Setting up out-of-network access depends on the router/network setup, so I’m not going into details. Now that we have everything running smoothly, let’s put it to the test. We’ll be comparing the newly built box to an AWS P2.xlarge instance, which is what I’ve used so far for DL. The tests are computer vision related, meaning convolutional networks with a fully connected model thrown in. We time training models on: the AWS P2 instance GPU (K80), the AWS P2 virtual CPU, the GTX 1080 Ti and the Intel i5 7500 CPU. Andres Hernandez points out that my comparison does not use Tensorflow builds that are optimized for these CPUs, which would have helped them perform better. Check his insightful comment for more details. MNIST, the “Hello World” of computer vision, is a database of 70,000 handwritten digits.
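The MNIST timing below is based on the stock Keras MLP example; for reference, a condensed sketch of that kind of model (an approximation, not necessarily the exact benchmark script) looks like this:

```python
# Condensed MNIST MLP in Keras: only fully connected layers, no
# convolutions, trained for 20 epochs as in the benchmark described below.
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

model = Sequential([
    Dense(512, activation='relu', input_shape=(784,)),
    Dropout(0.2),
    Dense(512, activation='relu'),
    Dropout(0.2),
    Dense(10, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=128, epochs=20,
          validation_data=(x_test, y_test))
```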
We run the Keras example on MNIST, which uses a Multilayer Perceptron (MLP). The MLP means we are using only fully connected layers, not convolutions. The model is trained for 20 epochs on this dataset, which achieves over 98% accuracy out of the box. We see that the GTX 1080 Ti is 2.4 times faster than the K80 on AWS P2 in training the model. This is rather surprising, as these two cards should have about the same performance. I believe this is because of the virtualization or underclocking of the K80 on AWS. The CPUs perform 9 times slower than the GPUs. As we will see later, this is a really good result for the processors. It is due to the small model, which fails to fully utilize the parallel processing power of the GPUs. Interestingly, the desktop Intel i5–7500 achieves a 2.3x speedup over the virtual CPU on Amazon. A VGG net will be finetuned for the Kaggle Dogs vs Cats competition. In this competition, we need to tell apart pictures of dogs and cats. Running the model on CPUs for the same number of batches wasn’t feasible. Therefore we finetune for 390 batches (1 epoch) on the GPUs and 10 batches on the CPUs. The code used is on github. The 1080 Ti is 5.5 times faster than the AWS GPU (K80). The difference in CPU performance is about the same as in the previous experiment (the i5 is 2.6x faster). However, it is absolutely impractical to use CPUs for this task, as the CPUs were taking ~200x more time on this large model, which includes 16 convolutional layers and a couple of semi-wide (4096-unit) fully connected layers on top. A GAN (generative adversarial network) is a way to train a model to generate images. A GAN achieves this by pitting two networks against each other: a Generator, which learns to create better and better images, and a Discriminator, which tries to tell which images are real and which are dreamt up by the Generator. The Wasserstein GAN is an improvement over the original GAN. We will use a PyTorch implementation that is very similar to the one by the WGAN author. The models are trained for 50 steps, and the loss is all over the place, which is often the case with GANs. CPUs aren’t considered. The GTX 1080 Ti finishes 5.5x faster than the AWS P2 K80, which is in line with the previous results. The final benchmark is on the original Style Transfer paper (Gatys et al.), implemented on Tensorflow (code available). Style Transfer is a technique that combines the style of one image (a painting, for example) with the content of another image. Check out my previous post for more details on how Style Transfer works. The GTX 1080 Ti outperforms the AWS K80 by a factor of 4.3. This time the CPUs are 30-50 times slower than the graphics cards. The slowdown is less than on the VGG finetuning task but more than on the MNIST perceptron experiment. The model uses mostly the earlier layers of the VGG network, and I suspect this was too shallow to fully utilize the GPUs. The DL box is in the next room, and a large model is training on it. Was it a wise investment? Time will tell, but it is beautiful to watch the glowing LEDs in the dark and to hear its quiet hum as models are trying to squeeze out that extra accuracy percentage point.
Geoff Nesnow
14.9K
19
https://medium.com/@DonotInnovate/73-mind-blowing-implications-of-a-driverless-future-58d23d1f338d?source=tag_archive---------1----------------
73 Mind-Blowing Implications of a Driverless Future
I originally wrote and published a version of this article in September 2016. Since then, quite a bit has happened, further cementing my view that these changes are coming and that the implications will be even more substantial. I decided it was time to update this article with some additional ideas and a few changes. As I write this, Uber has just announced an order of 24,000 self-driving Volvos. Tesla just released an electric, long-haul tractor trailer with extraordinary technical specs (range, performance) and self-driving capabilities (UPS just preordered 125!). And Tesla just announced what will probably be the quickest production car ever made — perhaps the fastest. It will go zero to sixty in about the time it takes you to read zero to sixty. And, of course, it will be able to drive itself. The future is quickly becoming now. Google just ordered thousands of Chryslers for its self-driving fleet (which are already on the roads in AZ). In September of 2016, Uber had just rolled out its first self-driving taxis in Pittsburgh, Tesla and Mercedes were rolling out limited self-driving capabilities and cities around the world were negotiating with companies who want to bring self-driving cars and trucks to their cities. Since then, all of the major car companies have announced significant steps towards mostly or entirely electric vehicles, more investments have been made in autonomous vehicles, driverless trucks now seem to be leading rather than following in terms of the first large-scale implementations and there’ve been a few more incidents (i.e. accidents). I believe that the timeframe for significant adoption of this technology has shrunk in the past year as the technology has improved ever faster and as the trucking industry has increased its level of interest and investment. I believe that my daughter, who is now just over a year old, will never have to learn to drive or own a car. The impact of driverless vehicles will be profound and will touch almost every part of our lives. Below are my updated thoughts about what a driverless future will be like. Some of these updates are from feedback to my original article (thanks to those who contributed!!!), some are based on technology advances in the past year and others are just my own speculations. What could happen when cars and trucks drive themselves? 1. People won’t own their own cars. Transport will be delivered as a service from companies who own fleets of self-driving vehicles. There are so many technical, economic and safety advantages to transportation-as-a-service that this change may come much faster than most people expect. Owning a vehicle as an individual will become a novelty for collectors and maybe competitive racers. 2. Software/technology companies will own more of the world’s economy as companies like Uber, Google and Amazon turn transportation into a pay-as-you-go service. Software will indeed eat this world. Over time, they’ll own so much data about people, patterns, routes and obstacles that new entrants will face huge barriers to entering the market. 3. Without government intervention (or some sort of organized movement), there will be a tremendous transfer of wealth to a very small number of people who own the software, battery/power manufacturing, vehicle servicing and charging/power generation/maintenance infrastructure. There will be massive consolidation of companies serving these markets as scale and efficiency become even more valuable.
Cars (perhaps they’ll be renamed with some sort-of-clever acronym) will become like the routers that run the Internet — most consumers won’t know or care who made them or who owns them. 4. Vehicle designs will change radically — vehicles won’t need to withstand crashes in the same way, all vehicles will be electric (self-driving + software + service providers = all electric). They may look different, come in very different shapes and sizes, maybe attach to each other in some situations. There will likely be many significant innovations in materials used for vehicle construction — for example, tires and brakes will be re-optimized with very different assumptions, especially around variability of loads and much more controlled environments. The bodies will likely be primarily made of composites (like carbon fiber and fiberglass) and 3D printed. Electric vehicles with no driver controls will require 1/10th or fewer the number of parts (perhaps even 1/100th) and thus will be quicker to produce and require much less labor. There may even be designs with almost no moving parts (other than wheels and motors, obviously). 5. Vehicles will mostly swap batteries rather than serve as the host of battery charging. Batteries will be charged in distributed and highly optimized centers — likely owned by the same company as the vehicles or another national vendor. There may be some entrepreneurial opportunity and a marketplace for battery charging and swapping, but this industry will likely be consolidated quickly. The batteries will be exchanged without human intervention — likely in a carwash-like drive thru 6. Vehicles (being electric) will be able to provide portable power for a variety of purposes (which will also be sold as a service) — construction job sites (why use generators), disaster/power failures, events, etc. They may even temporarily or permanently replace power distribution networks (i.e. power lines) for remote locations — imagine a distributed power generation network with autonomous vehicles providing “last mile” services to some locations 7. Driver’s licenses will slowly go away as will the Department of Motor Vehicles in most states. Other forms of ID may emerge as people no longer carry driver’s licenses. This will probably correspond with the inevitable digitization of all personal identification — via prints, retina scans or other biometric scanning 8. There won’t be any parking lots or parking spaces on roads or in buildings. Garages will be repurposed — maybe as mini loading docks for people and deliveries. Aesthetics of homes and commercial buildings will change as parking lots and spaces go away. There will be a multi-year boom in landscaping and basement and garage conversions as these spaces become available 9. Traffic policing will become redundant. Police transport will also likely change quite a bit. Unmanned police vehicles may become more common and police officers may use commercial transportation to move around routinely. This may dramatically change the nature of policing, with newfound resources from the lack of traffic policing and dramatically less time spent moving around 10. There will be no more local mechanics, car dealers, consumer car washes, auto parts stores or gas stations. Towns that have been built around major thoroughfares will change or fade 11. The auto insurance industry as we know it will go away (as will the significant investing power of the major players of this industry). 
Most car companies will go out of business, as will most of their enormous supplier networks. There will be many fewer net vehicles on the road (maybe 1/10th, perhaps even less) that are also more durable, made of fewer parts and much more commoditized 12. Traffic lights and signs will become obsolete. Vehicles may not even have headlights as infrared and radar take the place of the human light spectrum. The relationship between pedestrians (and bicycles) and cars and trucks will likely change dramatically. Some will come in the form of cultural and behavioral changes as people travel in groups more regularly and walking or cycling becomes practical in places where it isn’t today 13. Multi-modal transportation will become a more integrated and normal part of our ways of moving around. In other words, we’ll often take one type of vehicle to another, especially when traveling longer distances. With coordination and integration, the elimination of parking and more deterministic patterns, it will become ever-more efficient to combine modes of transport 14. The power grid will change. Power stations via alternative power sources will become more competitive and local. Consumers and small businesses with solar panels, small scale tidal or wave power generators, windmills and other local power generation will be able to sell KiloWattHours to the companies who own the vehicles. This will change “net metering” rules and possibly upset the overall power delivery model. It might even be the beginning of truly distributed power creation and transport. There will likely be a significant boom in innovation in power production and delivery models. Over time, ownership of these services will probably be consolidated across a very small number of companies 15. Traditional petroleum products (and other fossil fuels) will become much less valuable as electric cars replace fuel powered vehicles and as alternative energy sources become more viable with portability of power (transmission and conversion eat tons of power). There are many geopolitical implications to this possible shift. As implications of climate change become ever-clearer and present, these trends will likely accelerate. Petroleum will continue to be valuable for making plastics and other derived materials, but will not be burned for energy at any scale. Many companies, oil-rich countries and investors have already begun accommodating for these changes 16. Entertainment funding will change as the auto industry’s ad spending goes away. Think about how many ads you see or hear about cars, car financing, car insurance, car accessories and car dealers. There are likely to be many other structural and cultural changes that come from the dramatic changes to the transportation industry. We’ll stop saying “shift into high gear” and other driving-related colloquialisms as the references will be lost on future generations 17. The recent corporate tax rate reductions in the “..Act to Provide for Reconciliation Pursuant to Titles II and V of the Concurrent Resolution on the Budget for Fiscal Year 2018” will accelerate investments in automation including self-driving vehicles and other forms of transportation automation. Flush with new cash and incentives to invest capital soon, many businesses will invest in technology and solutions that reduce their labor costs. 18. 
The car financing industry will go away, as will the newly huge derivative market for packaged sub-prime auto loans, which will likely itself cause a version of the 2008–2009 financial crisis as it blows up. 19. Increases in unemployment and in defaults on student loans, vehicle loans and other debt could quickly spiral into a full depression. The world that emerges on the other side will likely have even more dramatic income and wealth stratification as entry-level jobs related to transportation and the entire supply chain of the existing transportation system go away. The convergence of this with hyper-automation in production and service delivery (AI, robotics, low-cost computing, business consolidation, etc.) may permanently change how societies are organized and how people spend their time. 20. There will be many new innovations in luggage and bags as people no longer keep stuff in cars and loading and unloading packages from vehicles becomes much more automated. The traditional trunk size and shape will change. Trailers or other similar detachable devices will become much more commonplace to add storage space to vehicles. Many additional on-demand services will become available as transportation for goods and services becomes more ubiquitous and cheaper. Imagine being able to design, 3D print and put on an outfit as you travel to a party or the office (if you’re still going to an office)... 21. Consumers will have more money as transportation (a major cost, especially for lower income people and families) gets much cheaper and ubiquitous — though this may be offset by dramatic reductions in employment as technology changes many times faster than people’s ability to adapt to new types of work. 22. Demand for taxi and truck drivers will go down, eventually to zero. Someone born today might not understand what a truck driver is or even understand why someone would do that job — much like people born in the last 30 years don’t understand how someone could be employed as a switchboard operator. 23. The politics will get ugly as lobbyists for the auto and oil industries unsuccessfully try to stop the driverless car. They’ll get even uglier as the federal government deals with assuming huge pension obligations and other legacy costs associated with the auto industry. My guess is that these pension obligations won’t ultimately be honored and certain communities will be devastated. The same may be true of pollution clean-up efforts around the factories and chemical plants that were once major components of the vehicle supply chain. 24. The new players in vehicle design and manufacturing will be a mix of companies like Uber, Google and Amazon and companies you don’t yet know. There will probably be 2 or 3 major players who control >80% of the customer-facing transportation market. There may be API-like access to these networks for smaller players — much like app marketplaces for iPhone and Android. However, the majority of the revenue will flow to a few large players, as it does today to Apple and Google for smartphones. 25. Supply chains will be disrupted as shipping changes. Algorithms will allow trucks to be fuller. Excess (latent) capacity will be priced cheaper. New middlemen and warehousing models will emerge. As shipping gets cheaper, faster and generally easier, retail storefronts will continue to lose footing in the marketplace. 26. The role of malls and other shopping areas will continue to shift — to be replaced by places people go for services, not products.
There will be virtually no face to face purchases of physical goods. 27. Amazon and/or a few other large players will put Fedex, UPS and USPS out of business as their transportation network becomes orders of magnitude more cost efficient than existing models — largely from a lack of legacy costs like pensions, higher union labor costs and regulations (especially USPS) that won’t keep up with the pace of technology change. 3D printing will also contribute to this as many day-to-day products are printed at home rather than purchased. 28. The same vehicles will often transport people and goods as algorithms optimize all routes. And, off-peak utilization will allow for other very inexpensive delivery options. In other words, packages will be increasingly delivered at night. Add autonomous drone aircraft to this mix and there’ll be very little reason to believe that traditional carriers (Fedex, USPS, UPS, etc) will survive at all. 29. Roads will be much emptier and smaller (over time) as self-driving cars need much less space between them (a major cause of traffic today), people will share vehicles more than today (carpooling), traffic flow will be better regulated and algorithmic timing (i.e. leave at 10 versus 9:30) will optimize infrastructure utilization. Roads will also likely be smoother and turns optimally banked for passenger comfort. High speed underground and above ground tunnels (maybe integrating hyperloop technology or this novel magnetic track solution) will become the high speed network for long haul travel. 30. Short hop domestic air travel may be largely displaced by multi-modal travel in autonomous vehicles. This may be countered by the advent of lower cost, more automated air travel. This too may become part of integrated, multi-modal transportation. 31. Roads will wear out much more slowly with fewer vehicle miles, lighter vehicles (with less safety requirements). New road materials will be developed that drain better, last longer and are more environmentally friendly. These materials might even be power generating (solar or reclamation from vehicle kinetic energy). At the extreme, they may even be replaced by radically different designs — tunnels, magnetic tracks, other hyper-optimized materials 32. Premium vehicle services will have more compartmentalized privacy, more comfort, good business features (quiet, wifi, bluetooth for each passenger, etc), massage services and beds for sleeping. They may also allow for meaningful in-transit real and virtual meetings. This will also likely include aromatherapy, many versions of in-vehicle entertainment systems and even virtual passengers to keep you company. 33. Exhilaration and emotion will almost entirely leave transportation. People won’t brag about how nice, fast, comfortable their cars are. Speed will be measured by times between end points, not acceleration, handling or top speed. 34. Cities will become much more dense as fewer roads and vehicles will be needed and transport will be cheaper and more available. The “walkable city” will continue to be more desirable as walking and biking become easier and more commonplace. When costs and timeframes of transit change, so will the dynamics of who lives and works where. 35. People will know when they leave, when they’ll get where they’re going. There will be few excuses for being late. We will be able to leave later and cram more into a day. We’ll also be able to better track kids, spouses, employees and so forth. 
We’ll be able to know exactly when someone will arrive and when someone needs to leave to be somewhere at a particular time. 36. There will be no more DUI/OUI offenses. Restaurants and bars will sell more alcohol. People will consume more as they no longer need to consider how to get home and will be able to consume inside vehicles 37. We’ll have less privacy as interior cameras and usage logs will track when and where we go and have gone. Exterior cameras will also probably record surroundings, including people. This may have a positive impact on crime, but will open up many complex privacy issues and likely many lawsuits. Some people may find clever ways to game the system — with physical and digital disguises and spoofing. 38. Many lawyers will lose sources of revenue — traffic offenses, crash litigation will reduce dramatically. Litigation will more likely be “big company versus big company” or “individuals against big companies”, not individuals against each other. These will settle more quickly with less variability. Lobbyists will probably succeed in changing the rules of litigation to favor the bigger companies, further reducing the legal revenue related to transportation. Forced arbitration and other similar clauses will become an explicit component of our contractual relationship with transportation providers. 39. Some countries will nationalize parts of their self-driving transportation networks which will result in lower costs, fewer disruptions and less innovation. 40. Cities, towns and police forces will lose revenue from traffic tickets, tolls (likely replaced, if not eliminated) and fuel tax revenues drop precipitously. These will probably be replaced by new taxes (probably on vehicle miles). These may become a major political hot-button issue differentiating parties as there will probably be a range of regressive versus progressive tax models. Most likely, this will be a highly regressive tax in the US, as fuel taxes are today. 41. Some employers and/or government programs will begin partially or entirely subsidizing transportation for employees and/or people who need the help. The tax treatment of this perk will also be very political. 42. Ambulance and other emergency vehicles will likely be used less and change in nature. More people will take regular autonomous vehicles instead of ambulances. Ambulances will transport people faster. Same may be true of military vehicles. 43. There will be significant innovations in first response capabilities as dependencies on people become reduced over time and as distributed staging of capacity becomes more common. 44. Airports will allow vehicles right into the terminals, maybe even onto the tarmac, as increased controls and security become possible. Terminal design may change dramatically as transportation to and from becomes normalized and integrated. The entire nature of air travel may change as integrated, multi-modal transport gets more sophisticated. Hyper-loops, high speed rail, automated aircraft and other forms of rapid travel will gain as traditional hub and spoke air travel on relatively large planes lose ground. 45. Innovative app-like marketplaces will open up for in-transit purchases, ranging from concierge services to food to exercise to merchandise to education to entertainment purchases. VR will likely play a large role in this. With integrated systems, VR (via headsets or screens or holograms) will become standard fare for trips more than a few minutes in duration. 46. 
Transportation will become more tightly integrated and packaged into many services — dinner includes the ride, hotel includes local transport, etc. This may even extend to apartments, short-term rentals (like AirBnB) and other service providers. 47. Local transport of nearly everything will become ubiquitous and cheap — food, everything in your local stores. Drones will likely be integrated into vehicle designs to deal with “last few feet” on pickup and delivery. This will accelerate the demise of traditional retail stores and their local economic impact. 48. Biking and walking will become easier, safer and more common as roads get safer and less congested, new pathways (reclaimed from roads/parking lots/roadside parking) come online and with cheap, reliable transport available as a backup. 49. More people will participate in vehicle racing (cars, off road, motorcycles) to replace their emotional connection to driving. Virtual racing experiences may also grow in popularity as fewer people have the real experience of driving. 50. Many, many fewer people will be injured or killed on roads, though we’ll expect zero and be disproportionately upset when accidents do happen. Hacking and non-malicious technical issues will replace traffic as the main cause of delays. Over time, resilience will increase in the systems. 51. Hacking of vehicles will be a serious issue. New software and communications companies and technologies will emerge to address these issues. We’ll see the first vehicle hacking and its consequences. Highly distributed computing, perhaps using some form of blockchain, will likely become part of the solution as a counterbalance to systemic catastrophes — such as many vehicles being affected simultaneously. There will probably be a debate about whether and how law enforcement can control, observe and restrict transportation. 52. Many roads and bridges will be privatized as a small number of companies control most transport and make deals with municipalities. Over time, government may entirely stop funding roads, bridges and tunnels. There will be a significant legislative push to privatize more and more of the transportation network. Much like Internet traffic, there will likely become tiers of prioritization and some notion of in-network versus out-of-network travel and tolls for interconnection. Regulators will have a tough time keeping up with these changes. Most of this will be transparent to end users, but will probably create enormous barriers to entry for transportation start-ups and ultimately reduce options for consumers. 53. Innovators will come along with many awesome uses for driveways and garages that no longer contain cars. 54. There will be a new network of clean, safe, pay-to-use restrooms and other services (food, drinks, etc) that become part of the value-add of competing service providers 55. Mobility for seniors and people with disabilities will be greatly improved (over time) 56. Parents will have more options to move around their kids on their own. Premium secure end-to-end children’s transport services will likely emerge. This may change many family relationships and increase the accessibility of services to parents and children. It may also further stratify the experiences of families with higher income and those with lower income. 57. Person to person movement of goods will become cheaper and open up new markets — think about borrowing a tool or buying something on Craigslist. Latent capacity will make transporting goods very inexpensive. 
This may also open up new opportunities for P2P services at a smaller scale — like preparing food or cleaning clothes. 58. People will be able to eat/drink in transit (like on a train or plane), consume more information (reading, podcasts, video, etc). This will open up time for other activities and perhaps increased productivity. 59. Some people may have their own “pods” to get into which will then be picked up by an autonomous vehicle, moved between vehicles automatically for logistic efficiencies. These may come in varieties of luxury and quality — the Louis Vuitton pod may replace the Louis Vuitton trunk as the mark of luxury travel 60. There will be no more getaway vehicles or police vehicle chases. 61. Vehicles will likely be filled to the brim with advertising of all sorts (much of which you could probably act on in-route), though there will probably be ways to pay more to have an ad free experience. This will include highly personalized en route advertising that is particularly relevant to who you are, where you’re going. 62. These innovations will make it to the developing world where congestion today is often remarkably bad and hugely costly. Pollution levels will come down dramatically. Even more people will move to the cities. Productivity levels will go up. Fortunes will be made as these changes happen. Some countries and cities will be transformed for the better. Some others will likely experience hyper-privatization, consolidation and monopoly-like controls. This may play out much like the roll-out of cell services in these countries — fast, consolidated and inexpensive. 63. Payment options will be greatly expanded, with packaged deals like cell phones, pre-paid models, pay-as-you-go models being offered. Digital currency transacted automatically via phones/devices will probably quickly replace traditional cash or credit card payments. 64. There will likely be some very clever innovations for movement of pets, equipment, luggage and other non-people items. Autonomous vehicles in the medium future (10–20 years) may have radically different designs that support carrying significantly more payload. 65. Some creative marketers will offer to partially or fully subsidize rides where customers deliver value — by taking surveys, by participating in virtual focus groups, by promoting their brand via social media, etc. 66. Sensors of all sorts will be embedded in vehicles that will have secondary uses — like improving weather forecasting, crime detection and prevention, finding fugitives, infrastructure conditions (such as potholes). This data will be monetized, likely by the companies who own the transportation services. 67. Companies like Google and Facebook will add to their databases everything about customer movements and locations. Unlike GPS chips that only tell them where someone is at the moment (and where they’ve been), autonomous vehicle systems will know where you’re going in real-time (and with whom). 68. Autonomous vehicles will create some new jobs and opportunities for entrepreneurs. However, these will be off-set many times by extraordinary job losses by nearly everyone in the transportation value chain today. In the autonomous future, a large number of jobs will go away. 
This includes drivers (which in many states today is the most common job), mechanics, gas station employees, most of the people who make cars and car parts or support those who do (due to huge consolidation of makers and supply chains and manufacturing automation), the marketing supply chain for vehicles, many people who work on and build roads/bridges, employees of vehicle insurance and financing companies (and their partners/suppliers), toll booth operators (most of whom have already been displaced), many employees of restaurants that support travelers, truck stops, retail workers and all the people whose businesses support these different types of companies and workers. 69. There will be some hardcore hold-outs who really like driving. But, over time, they’ll become a less statistically relevant voting group as younger people, who’ve never driven, outnumber them. At first, this may be a 50-state regulated system — where driving yourself may actually become illegal in some states within the next 10 years while other states may continue to allow it for a long time. Some states will try, unsuccessfully, to block autonomous vehicles. 70. There will be lots of discussions about new types of economic systems — from universal basic income to new variations of socialism to a more regulated capitalist system — that will result from the enormous impacts of autonomous vehicles. 71. On the path to a truly driverless future, there will be a number of key tipping points. At the moment, freight delivery may push autonomous vehicle use sooner than people transport. Large trucking companies may have the financial means and legislative influence to make rapid, dramatic changes. They are also better positioned to support hybrid approaches where only parts of their fleet or parts of the routes are automated. 72. Autonomous vehicles will radically change the power centers of the world. They will be the beginning of the end of burning hydrocarbons. The powerful interests who control these industries today will fight viciously to stop this. There may even be wars to slow down this process as oil prices start to plummet and demand dries up. 73. Autonomous vehicles will continue to play a larger role in all aspects of war — from surveillance to troop/robot movement to logistics support to actual engagement. Drones will be complemented by additional on-the-ground, in-space, in-the-water and under-the-water autonomous vehicles. Note: My original article was inspired by a presentation by Ryan Chin, CEO of Optimus Ride, at an MIT event about autonomous vehicles. He really got me thinking about how profound these advances could be for our lives. I’m sure some of my thoughts above came from him.
Blaise Aguera y Arcas
8.7K
15
https://medium.com/@blaisea/do-algorithms-reveal-sexual-orientation-or-just-expose-our-stereotypes-d998fafdf477?source=tag_archive---------2----------------
Do algorithms reveal sexual orientation or just expose our stereotypes?
by Blaise Agüera y Arcas, Alexander Todorov and Margaret Mitchell A study claiming that artificial intelligence can infer sexual orientation from facial images caused a media uproar in the Fall of 2017. The Economist featured this work on the cover of their September 9th magazine; on the other hand two major LGBTQ organizations, The Human Rights Campaign and GLAAD, immediately labeled it “junk science”. Michal Kosinski, who co-authored the study with fellow researcher Yilun Wang, initially expressed surprise, calling the critiques “knee-jerk” reactions. However, he then proceeded to make even bolder claims: that such AI algorithms will soon be able to measure the intelligence, political orientation, and criminal inclinations of people from their facial images alone. Kosinski’s controversial claims are nothing new. Last year, two computer scientists from China posted a non-peer-reviewed paper online in which they argued that their AI algorithm correctly categorizes “criminals” with nearly 90% accuracy from a government ID photo alone. Technology startups had also begun to crop up, claiming that they can profile people’s character from their facial images. These developments had prompted the three of us to collaborate earlier in the year on a Medium essay, Physiognomy’s New Clothes, to confront claims that AI face recognition reveals deep character traits. We described how the junk science of physiognomy has roots going back into antiquity, with practitioners in every era resurrecting beliefs based on prejudice using the new methodology of the age. In the 19th century this included anthropology and psychology; in the 20th, genetics and statistical analysis; and in the 21st, artificial intelligence. In late 2016, the paper motivating our physiognomy essay seemed well outside the mainstream in tech and academia, but as in other areas of discourse, what recently felt like a fringe position must now be addressed head on. Kosinski is a faculty member of Stanford’s Graduate School of Business, and this new study has been accepted for publication in the respected Journal of Personality and Social Psychology. Much of the ensuing scrutiny has focused on ethics, implicitly assuming that the science is valid. We will focus on the science. The authors trained and tested their “sexual orientation detector” using 35,326 images from public profiles on a US dating website. Composite images of the lesbian, gay, and straight men and women in the sample reveal a great deal about the information available to the algorithm: Clearly there are differences between these four composite faces. Wang and Kosinski assert that the key differences are in physiognomy, meaning that a sexual orientation tends to go along with a characteristic facial structure. However, we can immediately see that some of these differences are more superficial. For example, the “average” straight woman appears to wear eyeshadow, while the “average” lesbian does not. Glasses are clearly visible on the gay man, and to a lesser extent on the lesbian, while they seem absent in the heterosexual composites. Might it be the case that the algorithm’s ability to detect orientation has little to do with facial structure, but is due rather to patterns in grooming, presentation and lifestyle? 
We conducted a survey of 8,000 Americans using Amazon’s Mechanical Turk crowdsourcing platform to see if we could independently confirm these patterns, asking 77 yes/no questions such as “Do you wear eyeshadow?”, “Do you wear glasses?”, and “Do you have a beard?”, as well as questions about gender and sexual orientation. The results show that lesbians indeed use eyeshadow much less than straight women do, gay men and women do both wear glasses more, and young opposite-sex-attracted men are considerably more likely to have prominent facial hair than their gay or same-sex-attracted peers. Breaking down the answers by the age of the respondent can provide a richer and clearer view of the data than any single statistic. In the following figures, we show the proportion of women who answer “yes” to “Do you ever use makeup?” (top) and “Do you wear eyeshadow?” (bottom), averaged over 6-year age intervals: The blue curves represent strictly opposite-sex attracted women (a nearly identical set to those who answered “yes” to “Are you heterosexual or straight?”); the cyan curve represents women who answer “yes” to either or both of “Are you sexually attracted to women?” and “Are you romantically attracted to women?”; and the red curve represents women who answer “yes” to “Are you homosexual, gay or lesbian?”. [1] The shaded regions around each curve show 68% confidence intervals. [2] The patterns revealed here are intuitive; it won’t be breaking news to most that straight women tend to wear more makeup and eyeshadow than same-sex attracted and (even more so) lesbian-identifying women. On the other hand these curves also show us how often these stereotypes are violated. That same-sex attracted men of most ages wear glasses significantly more than exclusively opposite-sex attracted men do might be a bit less obvious, but this trend is equally clear: [3] A proponent of physiognomy might be tempted to guess that this is somehow related to differences in visual acuity between these populations of men. However, asking the question “Do you like how you look in glasses?” reveals that this is likely more of a stylistic choice: Same-sex attracted women also report wearing glasses more, as well as liking how they look in glasses more, across a range of ages: One can also see how opposite-sex attracted women under the age of 40 wear contact lenses significantly more than same-sex attracted women, despite reporting that they have a vision defect at roughly the same rate, further illustrating how the difference is driven by an aesthetic preference: [4] Similar analysis shows that young same-sex attracted men are much less likely to have hairy faces than opposite-sex attracted men (“serious facial hair” in our plots is defined as answering “yes” to having a goatee, beard, or moustache, but “no” to stubble). Overall, opposite-sex attracted men in our sample are 35% more likely to have serious facial hair than same-sex attracted men, and for men under the age of 31 (who are overrepresented on dating websites), this rises to 75%. Wang and Kosinski speculate in their paper that the faintness of the beard and moustache in their gay male composite might be connected with prenatal underexposure to androgens (male hormones), resulting in a feminizing effect, hence sparser facial hair. 
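The age-binned curves described above can be reproduced from raw survey answers with a few lines of pandas. The sketch below is only illustrative and is not the authors' analysis code: the tiny DataFrame and its column names (age, group, answer) are invented stand-ins, and the 68% band is approximated as plus or minus one standard error of a binomial proportion, which is the usual reading of a 68% interval.

```python
import numpy as np
import pandas as pd

# Invented stand-in for the survey: one row per respondent, with an age, an
# orientation group, and a 0/1 answer to a question such as "Do you wear eyeshadow?"
df = pd.DataFrame({
    "age":    [23, 31, 27, 45, 52, 38, 29, 61, 26, 44],
    "group":  ["straight", "lesbian", "straight", "same_sex_attracted", "straight",
               "lesbian", "same_sex_attracted", "straight", "lesbian", "straight"],
    "answer": [1, 0, 1, 0, 1, 0, 1, 1, 0, 1],
})

df["age_bin"] = (df["age"] // 6) * 6  # average over 6-year age intervals

def proportion_with_68ci(answers):
    """Share of 'yes' answers, with a ~68% band (plus/minus one standard error)."""
    n, p = len(answers), answers.mean()
    se = np.sqrt(p * (1 - p) / n)
    return pd.Series({"p": p, "lo": p - se, "hi": p + se, "n": n})

curves = (df.groupby(["group", "age_bin"])["answer"]
            .apply(proportion_with_68ci)
            .unstack())
print(curves)  # one (proportion, band, count) row per group and age bin
```

With real data, the smaller groups have fewer respondents per bin and therefore wider bands, which is why the shaded regions are wider for the minority populations.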
The fact that we see a cohort of same-sex attracted men in their 40s who have just as much facial hair as opposite-sex attracted men suggests a different story, in which fashion trends and cultural norms play the dominant role in choices about facial hair among men, not differing exposure to hormones early in development. The authors of the paper additionally note that the heterosexual male composite appears to have darker skin than the other three composites. Our survey confirms that opposite-sex attracted men consistently self-report having a tan face (“Yes” to “Is your face tan?”) slightly more often than same-sex attracted men: Once again Wang and Kosinski reach for a hormonal explanation, writing: “While the brightness of the facial image might be driven by many factors, previous research found that testosterone stimulates melanocyte structure and function leading to a darker skin”. However, a simpler answer is suggested by the responses to the question “Do you work outdoors?”: Overall, opposite-sex attracted men are 29% more likely to work outdoors, and among men under 31, this rises to 39%. Previous research has found that increased exposure to sunlight leads to darker skin! [5] None of these results prove that there is no physiological basis for sexual orientation; in fact ample evidence shows us that orientation runs much deeper than a choice or a “lifestyle”. In a critique aimed in part at fraudulent “conversion therapy” programs, United States Surgeon General David Satcher wrote in a 2001 report, “Sexual orientation is usually determined by adolescence, if not earlier [...], and there is no valid scientific evidence that sexual orientation can be changed”. It follows that if we dig deeply enough into human physiology and neuroscience we will eventually find reliable correlates and maybe even the origins of sexual orientation. In our survey we also find some evidence of outwardly visible correlates of orientation that are not cultural: perhaps most strikingly, very tall women are overrepresented among lesbian-identifying respondents. [6] However, while this is interesting, it’s very far from a good predictor of women’s sexual orientation. Makeup and eyeshadow do much better. The way Wang and Kosinski measure the efficacy of their “AI gaydar” is equivalent to choosing a straight and a gay or lesbian face image, both from data “held out” during the training process, and asking how often the algorithm correctly guesses which is which. 50% performance would be no better than random chance. For women, guessing that the taller of the two is the lesbian achieves only 51% accuracy — barely above random chance. This is because, despite the statistically meaningful overrepresentation of tall women among the lesbian population, the great majority of lesbians are not unusually tall. By contrast, the performance measures in the paper, 81% for gay men and 71% for lesbian women, seem impressive. [7] Consider, however, that we can achieve comparable results with trivial models based only on a handful of yes/no survey questions about presentation. For example, for pairs of women, one of whom is lesbian, the following not-exactly-superhuman algorithm is on average 63% accurate: if neither or both women wear eyeshadow, flip a coin; otherwise guess that the one who wears eyeshadow is straight, and the other lesbian. 
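That coin-flip-plus-eyeshadow rule is simple enough to write down directly. The sketch below simulates it on random (straight, lesbian) pairs; the 60% and 34% eyeshadow rates are made-up numbers chosen only so that the gap lands near the 63% figure quoted above, since accuracy under this rule works out to 0.5 plus half the difference between the two rates.

```python
import random

def pairwise_accuracy(p_straight_yes, p_lesbian_yes, n_pairs=100_000, seed=0):
    """Accuracy of the rule on random (straight woman, lesbian) pairs:
    if exactly one of the two wears eyeshadow, guess that she is the straight one;
    if neither or both do, flip a coin."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_pairs):
        straight_wears = rng.random() < p_straight_yes
        lesbian_wears = rng.random() < p_lesbian_yes
        if straight_wears and not lesbian_wears:
            correct += 1                       # rule picks the right one
        elif lesbian_wears and not straight_wears:
            pass                               # rule picks the wrong one
        else:
            correct += rng.random() < 0.5      # tie, so flip a coin
    return correct / n_pairs

# Illustrative rates only (not the survey's actual numbers):
print(pairwise_accuracy(0.60, 0.34))   # simulated, roughly 0.63
print(0.5 + (0.60 - 0.34) / 2)         # closed form: 0.63
```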
Adding six more yes/no questions about presentation (“Do you ever use makeup?”, “Do you have long hair?”, “Do you have short hair?”, “Do you ever use colored lipstick?”, “Do you like how you look in glasses?”, and “Do you work outdoors?”) as additional signals raises the performance to 70%. [8] Given how many more details about presentation are available in a face image, 71% performance no longer seems so impressive. Several studies, including a recent one in the Journal of Sex Research, have shown that human judges’ “gaydar” is no more reliable than a coin flip when the judgement is based on pictures taken under well-controlled conditions (head pose, lighting, glasses, makeup, etc.). It’s better than chance if these variables are not controlled for, because a person’s presentation — especially if that person is out — involves social signaling. We signal our orientation and many other kinds of status, presumably in order to attract the kind of attention we want and to fit in with people like us. [9] Wang and Kosinski argue against this interpretation on the grounds that their algorithm works on Facebook selfies of openly gay men as well as dating website selfies. The issue, however, is not whether the images come from a dating website or Facebook, but whether they are self-posted or taken under standardized conditions. Most people present themselves in ways that have been calibrated over many years of media consumption, observing others, looking in the mirror, and gauging social reactions. In one of the earliest “gaydar” studies using social media, participants could categorize gay men with about 58% accuracy; but when the researchers used Facebook images of gay and heterosexual men posted by their friends (still far from a perfect control), the accuracy dropped to 52%. If subtle biases in image quality, expression, and grooming can be picked up on by humans, these biases can also be detected by an AI algorithm. While Wang and Kosinski acknowledge grooming and style, they believe that the chief differences between their composite images relate to face shape, arguing that gay men’s faces are more “feminine” (narrower jaws, longer noses, larger foreheads) while lesbian faces are more “masculine” (larger jaws, shorter noses, smaller foreheads). As with less facial hair on gay men and darker skin on straight men, they suggest that the mechanism is gender-atypical hormonal exposure during development. This echoes a widely discredited 19th century model of homosexuality, “sexual inversion”. More likely, heterosexual men tend to take selfies from slightly below, which will have the apparent effect of enlarging the chin, shortening the nose, shrinking the forehead, and attenuating the smile (see our selfies below). This view emphasizes dominance — or, perhaps more benignly, an expectation that the viewer will be shorter. On the other hand, as a wedding photographer notes in her blog, “when you shoot from above, your eyes look bigger, which is generally attractive — especially for women.” This may be a heteronormative assessment. When a face is photographed from below, the nostrils are prominent, while higher shooting angles de-emphasize and eventually conceal them altogether. Looking again at the composite images, we can see that the heterosexual male face has more pronounced dark spots corresponding to the nostrils than the gay male, while the opposite is true for the female faces. 
This is consistent with a pattern of heterosexual men on average shooting from below, heterosexual women from above as the wedding photographer suggests, and gay men and lesbian women from directly in front. A similar pattern is evident in the eyebrows: shooting from above makes them look more V-shaped, but their apparent shape becomes flatter, and eventually caret-shaped (^) as the camera is lowered. Shooting from below also makes the outer corners of the eyes appear lower. In short, the changes in the average positions of facial landmarks are consistent with what we would expect to see from differing selfie angles. The ambiguity between shooting angle and the real physical sizes of facial features is hard to fully disentangle from a two-dimensional image, both for a human viewer and for an algorithm. Although the authors are using face recognition technology designed to try to cancel out all effects of head pose, lighting, grooming, and other variables not intrinsic to the face, we can confirm that this doesn’t work perfectly; that’s why multiple distinct images of a person help when grouping photos by subject in Google Photos, and why a person may initially appear in more than one group. Tom White, a researcher at Victoria University in New Zealand, has experimented with the same facial recognition engine Kosinski and Wang use (VGG Face), and has found that its output varies systematically based on variables like smiling and head pose. When he trains a classifier based on VGG Face’s output to distinguish a happy expression from a neutral one, it gets the answer right 92% of the time — which is significant, given that the heterosexual female composite has a much more pronounced smile. Changes in head pose might be even more reliably detectable; for 576 test images, a classifier is able to pick out the ones facing to the right with 100% accuracy. In summary, we have shown how the obvious differences between lesbian or gay and straight faces in selfies relate to grooming, presentation, and lifestyle — that is, differences in culture, not in facial structure. These differences include: We’ve demonstrated that just a handful of yes/no questions about these variables can do nearly as good a job at guessing orientation as supposedly sophisticated facial recognition AI. Further, the current generation of facial recognition remains sensitive to head pose and facial expression. Therefore — at least at this point — it’s hard to credit the notion that this AI is in some way superhuman at “outing” us based on subtle but unalterable details of our facial structure. This doesn’t negate the privacy concerns the authors and various commentators have raised, but it emphasizes that such concerns relate less to AI per se than to mass surveillance, which is troubling regardless of the technologies used (even when, as in the days of the Stasi in East Germany, these were nothing but paper files and audiotapes). Like computers or the internal combustion engine, AI is a general-purpose technology that can be used to automate a great many tasks, including ones that should not be undertaken in the first place. We are hopeful about the confluence of new, powerful AI technologies with social science, but not because we believe in reviving the 19th century research program of inferring people’s inner character from their outer appearance. Rather, we believe AI is an essential tool for understanding patterns in human culture and behavior. It can expose stereotypes inherent in everyday language. 
It can reveal uncomfortable truths, as in Google’s work with the Geena Davis Institute, where our face gender classifier established that men are seen and heard nearly twice as often as women in Hollywood movies (yet female-led films outperform others at the box office!). Making social progress and holding ourselves to account is more difficult without such hard evidence, even when it only confirms our suspicions. Two of us (Margaret Mitchell and Blaise Agüera y Arcas) are research scientists specializing in machine learning and AI at Google; Agüera y Arcas leads a team that includes deep learning applied to face recognition, and powers face grouping in Google Photos. Alex Todorov is a professor in the Psychology Department at Princeton, where he directs the social perception lab. He is the author of Face Value: The Irresistible Influence of First Impressions. [1] This wording is based on several large national surveys, which we were able to use to sanity-check our numbers. About 6% of respondents identified as “homosexual, gay or lesbian” and 85% as “heterosexual”. About 4% (of all genders) were exclusively same-sex attracted. Of the men, 10% were either sexually or romantically same-sex attracted, and of the women, 20%. Just under 1% of respondents were trans, and about 2% identified with both or neither of the pronouns “she” and “he”. These numbers are broadly consistent with other surveys, especially when considered as a function of age. The Mechanical Turk population skews somewhat younger than the overall population of the US, and consistent with other studies, our data show that younger people are far more likely to identify non-heteronormatively. [2] These are wider for same-sex attracted and lesbian women because they are minority populations, resulting in a larger sampling error. The same holds for older people in our sample. [3] For the remainder of the plots we stick to opposite-sex attracted and same-sex attracted, as the counts are higher and the error bars therefore smaller; these categories are also somewhat less culturally freighted, since they rely on questions about attraction rather than identity. As with eyeshadow and makeup, the effects are similar and often even larger when comparing heterosexual-identifying with lesbian- or gay-identifying people. [4] Although we didn’t test this explicitly, slightly different rates of laser correction surgery seem a likely cause of the small but growing disparity between opposite-sex attracted and same-sex attracted women who answer “yes” to the vision defect questions as they age. [5] This finding may prompt the further question, “Why do more opposite-sex attracted men work outdoors?” This is not addressed by any of our survey questions, but hopefully the other evidence presented here will discourage an essentialist assumption such as “straight men are just more outdoorsy” without the evidence of a controlled study that can support the leap from correlation to cause. Such explanations are a form of logical fallacy sometimes called a just-so story: “an unverifiable narrative explanation for a cultural practice”. [6] Of the 253 lesbian-identified women in the sample, 5, or 2%, were over six feet, and 25, or 10%, were over 5’9”. Out of 3,333 heterosexual women (women who answered “yes” to “Are you heterosexual or straight?”), only 16, or 0.5%, were over six feet, and 152, or 5%, were over 5’9”. [7] They note that these figures rise to 91% for men and 83% for women if 5 images are considered. 
[8] These results are based on the simplest possible machine learning technique, a linear classifier. The classifier is trained on a randomly chosen 70% of the data, with the remaining 30% of the data held out for testing. Over 500 repetitions of this procedure, the accuracy is 69.53% ± 2.98%. With the same number of repetitions and holdout, basing the decision on height alone gives an accuracy of 51.08% ± 3.27%, and basing it on eyeshadow alone yields 62.96% ± 2.39%. [9] A longstanding body of work, e.g. Goffman’s The Presentation of Self in Everyday Life (1959) and Jones and Pittman’s Toward a General Theory of Strategic Self-Presentation (1982), delves more deeply into why we present ourselves the way we do, both for instrumental reasons (status, power, attraction) and because our presentation informs and is informed by how we conceive of our social selves. Blaise Agüera y Arcas leads Google’s AI group in Seattle. He founded Seadragon, and was one of the creators of Photosynth at Microsoft.
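The protocol in footnote [8] (a linear classifier, a random 70/30 train/test split, 500 repetitions, then the mean and spread of the held-out performance) can be sketched in a few lines. The snippet below is only an illustration of that procedure, not the authors' code: the data are random stand-ins for the seven yes/no presentation answers, logistic regression is used as one common choice of linear classifier, and plain held-out accuracy stands in for the exact performance measure reported.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data: 7 binary presentation answers per respondent (makeup, long hair,
# short hair, lipstick, eyeshadow, likes-glasses, works-outdoors) and a binary label.
# Real survey responses would replace this block.
rng = np.random.default_rng(0)
n = 4000
y = rng.integers(0, 2, size=n)
X = (rng.random((n, 7)) < (0.35 + 0.25 * y[:, None])).astype(int)  # weakly informative features

accuracies = []
for rep in range(500):  # 500 repetitions of the 70/30 split, as described in footnote [8]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=rep)
    clf = LogisticRegression().fit(X_tr, y_tr)  # one simple linear classifier
    accuracies.append(clf.score(X_te, y_te))    # held-out accuracy for this repetition

print(f"accuracy: {np.mean(accuracies):.2%} ± {np.std(accuracies):.2%}")
```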
François Chollet
16.8K
17
https://medium.com/@francois.chollet/what-worries-me-about-ai-ed9df072b704?source=tag_archive---------6----------------
What worries me about AI – François Chollet – Medium
Disclaimer: These are my own personal views. I do not speak for my employer. If you quote this article, please have the honesty to present these ideas as what they are: personal, speculative opinions, to be judged on their own merits. If you were around in the 1980s and 1990s, you may remember the now-extinct phenomenon of “computerphobia”. I have personally witnessed it a few times as late as the early 2000s — as personal computers were introduced into our lives, in our workplaces and homes, quite a few people would react with anxiety, fear, or even aggressivity. While some of us were fascinated by computers and awestruck by the potential they could glimpse in them, most people didn’t understand them. They felt alien, abstruse, and in many ways, threatening. People feared getting replaced by technology. Most of us react to technological shifts with unease at best, panic at worst. Maybe that is true of any change at all. But remarkably, most of what we worry about ends up never happening. Fast-forward a few years, and the computer-haters have learned to live with them and to use them for their own benefit. Computers did not replace us and trigger mass unemployment — and nowadays we couldn’t imagine life without our laptops, tablets, and smartphones. Threatening change has become comfortable status quo. But at the same time as our fears failed to materialize, computers and the internet have enabled threats that almost no one was warning us about in the 1980s and 1990s. Ubiquitous mass surveillance. Hackers going after our infrastructure or our personal data. Psychological alienation on social media. The loss of our patience and our ability to focus. The political or religious radicalization of easily-influenced minds online. Hostile foreign powers hijacking social networks to disrupt Western democracies. If most of our fears turn out to be irrational, inversely, most of the truly worrying developments that have happened in the past as a result of technological change stem from things that most people didn’t worry about until it was already there. A hundred years ago, we couldn’t really forecast that the transportation and manufacturing technologies we were developing would enable a new form of industrial warfare that would wipe out tens of millions in two World Wars. We didn’t recognize early on that the invention of the radio would enable a new form of mass propaganda that would facilitate the rise of fascism in Italy and Germany. The progress of theoretical physics in the 1920s and 1930s wasn’t accompanied by anxious press articles about how these developments would soon enable thermonuclear weapons that would place the world forever under the threat of imminent annihilation. And today, even as alarms have been sounding for decades about the most dire problem of our times, climate, a large fraction (44%) of the American public still chooses to ignore it. As a civilization, we seem to be really bad at correctly identifying future threats and rightfully worrying about them, just as we seem to be extremely prone to panic due to irrational fears. Today, like many times in the past, we are faced with a new wave of radical change: cognitive automation, which could be broadly summed up under the keyword “AI”. And like many time in the past, we are worried that this new set of technologies will harm us — that AI will lead to mass unemployment, or that AI will gain an agency of its own, become superhuman, and choose to destroy us. 
But what if we’re worrying about the wrong thing, like we have almost every single time before? What if the real danger of AI were far removed from the “superintelligence” and “singularity” narratives that many are panicking about today? In this post, I’d like to raise awareness about what really worries me when it comes to AI: the highly effective, highly scalable manipulation of human behavior that AI enables, and its malicious use by corporations and governments. Of course, this is not the only tangible risk that arises from the development of cognitive technologies — there are many others, in particular issues related to the harmful biases of machine learning models. Other people are raising awareness of these problems far better than I could. I chose to write about mass population manipulation specifically because I see this risk as pressing and direly under-appreciated. This risk is already a reality today, and a number of long-term technological trends are going to considerably amplify it over the next few decades. As our lives become increasingly digitized, social media companies get ever greater visibility into our lives and minds. At the same time, they gain increasing access to behavioral control vectors — in particular via algorithmic newsfeeds, which control our information consumption. This casts human behavior as an optimization problem, as an AI problem: it becomes possible for social media companies to iteratively tune their control vectors in order to achieve specific behaviors, just like a game AI would iteratively refine its play strategy in order to beat a level, driven by score feedback. The only bottleneck to this process is the intelligence of the algorithm in the loop — and as it happens, the largest social network company is currently investing billions in fundamental AI research. Let me explain in detail. In the past 20 years, our private and public lives have moved online. We spend an ever greater fraction of each day staring at screens. Our world is moving to a state where most of what we do consists of digital information consumption, modification, or creation. A side effect of this long-term trend is that corporations and governments are now collecting staggering amounts of data about us, in particular through social network services. Who we communicate with. What we say. What content we’ve been consuming — images, movies, music, news. What mood we are in at specific times. Ultimately, almost everything we perceive and everything we do will end up recorded on some remote server. This data, in theory, allows the entities that collect it to build extremely accurate psychological profiles of both individuals and groups. Your opinions and behavior can be cross-correlated with those of thousands of similar people, achieving an uncanny understanding of what makes you tick — probably more predictive than what you yourself could achieve through mere introspection (for instance, Facebook “likes” enable algorithms to assess your personality better than your own friends could). This data makes it possible to predict a few days in advance when you will start a new relationship (and with whom), and when you will end your current one. Or who is at risk of suicide. Or which side you will ultimately vote for in an election, even while you’re still feeling undecided. And it’s not just individual-level profiling power — large groups can be even more predictable, as aggregating data points erases randomness and individual outliers. Passive data collection is not where it ends.
Increasingly, social network services are in control of what information we consume. What we see in our newsfeeds has become algorithmically “curated”. Opaque social media algorithms get to decide, to an ever-increasing extent, which political articles we read, which movie trailers we see, who we keep in touch with, whose feedback we receive on the opinions we express. Integrated over many years of exposure, the algorithmic curation of the information we consume gives the algorithms in charge considerable power over our lives — over who we are, who we become. If Facebook gets to decide, over the span of many years, which news you will see (real or fake), whose political status updates you’ll see, and who will see yours, then Facebook is in effect in control of your worldview and your political beliefs. Facebook’s business lies in influencing people. That is the service it sells to its customers — advertisers, including political advertisers. As such, Facebook has built a fine-tuned algorithmic engine that does just that. This engine isn’t merely capable of influencing your view of a brand or your next smart-speaker purchase. It can influence your mood, tuning the content it feeds you in order to make you angry or happy, at will. It may even be able to swing elections. In short, social network companies can simultaneously measure everything about us, and control the information we consume. And that’s an accelerating trend. When you have access to both perception and action, you’re looking at an AI problem. You can start establishing an optimization loop for human behavior, in which you observe the current state of your targets and keep tuning what information you feed them, until you start observing the opinions and behaviors you wanted to see. A large subset of the field of AI — in particular “reinforcement learning” — is about developing algorithms to solve such optimization problems as efficiently as possible, to close the loop and achieve full control of the target at hand — in this case, us. By moving our lives to the digital realm, we become vulnerable to that which rules it — AI algorithms. This is made all the easier by the fact that the human mind is highly vulnerable to simple patterns of social manipulation. Consider, for instance, the following vectors of attack: From an information security perspective, you would call these vulnerabilities: known exploits that can be used to take over a system. In the case of the human mind, these vulnerabilities never get patched, they are just the way we work. They’re in our DNA. The human mind is a static, vulnerable system that will come increasingly under attack from ever-smarter AI algorithms that will simultaneously have a complete view of everything we do and believe, and complete control of the information we consume. Remarkably, mass population manipulation — in particular political control — arising from placing AI algorithms in charge of our information diet does not necessarily require very advanced AI. You don’t need self-aware, superintelligent AI for this to be a dire threat — current technology may well suffice. Social network companies have been working on it for a few years, with significant results.
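The "optimization loop for human behavior" described above can be made concrete with a toy example. The sketch below is not how any real newsfeed works; it is a bare-bones epsilon-greedy bandit in which the only goal is clicks, the three content categories and their click rates are invented, and the "user" is a random stub. Even something this crude, given enough feedback, converges on serving whatever earns the most engagement, which is the point of the passage above.

```python
import random

# Toy epsilon-greedy loop: mostly show whichever content category has earned the
# most clicks so far, occasionally explore another one. Categories and their
# "true" click rates are invented; a real system would observe real users.
true_click_rate = {"outrage": 0.30, "cute_animals": 0.22, "local_news": 0.10}

def user_clicks(item):                    # stand-in for live user feedback
    return random.random() < true_click_rate[item]

shown = {item: 0 for item in true_click_rate}
clicked = {item: 0 for item in true_click_rate}

for _ in range(10_000):
    if random.random() < 0.1:             # explore 10% of the time
        item = random.choice(list(true_click_rate))
    else:                                 # otherwise exploit the best estimate so far
        item = max(shown, key=lambda i: clicked[i] / shown[i] if shown[i] else 0.0)
    shown[item] += 1
    clicked[item] += user_clicks(item)

for item in shown:                        # the loop ends up feeding mostly "outrage"
    print(item, shown[item], round(clicked[item] / shown[item], 3))
```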
And while they may only be trying to maximize “engagement” and to influence your purchase decisions, rather than to manipulate your view of the world, the tools they’ve developed are already being hijacked by hostile state actors for political purposes — as seen in the 2016 Brexit referendum or the 2016 US presidential election. This is already our reality. But if mass population manipulation is already possible today — in theory — why hasn’t the world been upended yet? In short, I think it’s because we’re really bad at AI. But that may be about to change. Until 2015, all ad targeting algorithms across the industry were running on mere logistic regression. In fact, that’s still true to a large extent today — only the biggest players have switched to more advanced models. Logistic regression, an algorithm that predates the computing era, is one of the most basic techniques you could use for personalization. It is the reason why so many of the ads you see online are desperately irrelevant. Likewise, the social media bots used by hostile state actors to sway public opinion have little to no AI in them. They’re all extremely primitive. For now. Machine learning and AI have been making fast progress in recent years, and that progress is only beginning to get deployed in targeting algorithms and social media bots. Deep learning has only started to make its way into newsfeeds and ad networks in 2016. Who knows what will be next. It is quite striking that Facebook has been investing enormous amounts in AI research and development, with the explicit goal of becoming a leader in the field. When your product is a social newsfeed, what use are you going to make of natural language processing and reinforcement learning? We’re looking at a company that builds fine-grained psychological profiles of almost two billion humans, that serves as a primary news source for many of them, that runs large-scale behavior manipulation experiments, and that aims at developing the best AI technology the world has ever seen. Personally, it scares me. And consider that Facebook may not even be the most worrying threat here. Ponder, for instance, China’s use of information control to enable unprecedented forms of totalitarianism, such as its “social credit system”. Many people like to pretend that large corporations are the all-powerful rulers of the modern world, but what power they hold is dwarfed by that of governments. If given algorithmic control over our minds, governments may well turn into far worst actors than corporations. Now, what can we do about it? How can we defend ourselves? As technologists, what can we do to avert the risk of mass manipulation via our social newsfeeds? Importantly, the existence of this threat doesn’t mean that all algorithmic curation is bad, or that all targeted advertising is bad. Far from it. Both of these can serve a valuable purpose. With the rise of the Internet and AI, placing algorithms in charge of our information diet isn’t just an inevitable trend — it’s a desirable one. As our lives become increasingly digital and connected, and as our world becomes increasingly information-intensive, we will need AI to serve as our interface to the world. In the long-run, education and self-development will be some of the most impactful applications of AI — and this will happen through dynamics that almost entirely mirror that of a nefarious AI-enabled newsfeed trying to manipulate you. 
Algorithmic information management has tremendous potential to help us, to empower individuals to realize more of their potential, and to help society better manage itself. The issue is not AI itself. The issue is control. Instead of letting newsfeed algorithms manipulate the user to achieve opaque goals, such as swaying their political opinions, or maximally wasting their time, we should put the user in charge of the goals that the algorithms optimize for. We are talking, after all, about your news, your worldview, your friends, your life — the impact that technology has on you should naturally be placed under your own control. Information management algorithms should not be a mysterious force inflicted on us to serve ends that run opposite to our own interests; instead, they should be a tool in our hand. A tool that we can use for our own purposes, say, for education and personal growth instead of entertainment. Here’s an idea — any algorithmic newsfeed with significant adoption should: We should build AI to serve humans, not to manipulate them for profit or political gain. What if newsfeed algorithms didn’t operate like casino operators or propagandists? What if instead, they were closer to a mentor or a good librarian, someone who used their keen understanding of your psychology — and that of millions of other similar people — to recommend to you that next book that will most resonate with your objectives and make you grow? A sort of navigation tool for your life — an AI capable of guiding you through the optimal path in experience space to get where you want to go. Can you imagine looking at your own life through the lens of a system that has seen millions of lives unfold? Or writing a book together with a system that has read every book? Or conducting research in collaboration with a system that sees the full scope of current human knowledge? In products where you are fully in control of the AI that interacts with you, a more sophisticated algorithm, instead of being a threat, would be a net positive, letting you achieve your own goals more efficiently. In summary, our future is one where AI will be our interface to the world — a world made of digital information. This can equally lead to empowering individuals to gain greater control over their lives, or to a total loss of agency. Unfortunately, social media is currently headed down the wrong road. But it’s still early enough that we can reverse course. As an industry, we need to develop product categories and markets where the incentives are aligned with placing the user in charge of the algorithms that affect them, instead of using AI to exploit the user’s mind for profit or political gain. We need to strive towards products that are the anti-Facebook. In the far future, such products will likely take the form of AI assistants. Digital mentors programmed to help you, that put you in control of the objectives they pursue in their interactions with you. And in the present, search engines could be seen as an early, more primitive example of an AI-driven information interface that serves users instead of seeking to hijack their mental space. Search is a tool that you deliberately use to reach specific goals, rather than a passive always-on feed that elects what to show you. You tell it what it should do for you. And instead of seeking to maximally waste your time, a search engine attempts to minimize the time it takes to go from question to answer, from problem to solution.
You may be thinking, since a search engine is still an AI layer between us and the information we consume, could it bias its results to attempt to manipulate us? Yes, that risk is latent in every information-management algorithm. But in stark contrast with social networks, market incentives in this case are actually aligned with users’ needs, pushing search engines to be as relevant and objective as possible. If they fail to be maximally useful, there’s essentially no friction for users to move to a competing product. And importantly, a search engine would have a considerably smaller psychological attack surface than a social newsfeed. The threat we’ve profiled in this post requires most of the following to be present in a product: Most AI-driven information-management products don’t meet these requirements. Social networks, on the other hand, are a frightening combination of risk factors. As technologists, we should gravitate towards products that do not feature these characteristics, and push back against products that combine them all, if only because of their potential for dangerous misuse. Build search engines and digital assistants, not social newsfeeds. Make your recommendation engines transparent, configurable, and constructive, rather than slot-machine-like systems that maximize “engagement” and wasted hours of human time. Invest your UI, UX, and AI expertise into building great configuration panels for your algorithm, to enable your users to use your product on their own terms. And importantly, we should educate users about these issues, so that they reject manipulative products, generating enough market pressure to align the incentives of the technology industry with those of consumers. Conclusion: the fork in the road ahead. One path leads to a place that really scares me. The other leads to a more humane future. There’s still time to take the better one. If you work on these technologies, keep this in mind. You may not have evil intentions. You may simply not care. You may simply value your RSUs more than our shared future. But whether or not you care, because you have a hand in shaping the infrastructure of the digital world, your choices affect us all. And you may eventually be held responsible for them.
Simon Greenman
10.2K
16
https://towardsdatascience.com/who-is-going-to-make-money-in-ai-part-i-77a2f30b8cef?source=tag_archive---------7----------------
Who Is Going To Make Money In AI? Part I – Towards Data Science
We are in the midst of a gold rush in AI. But who will reap the economic benefits? The mass of startups who are all gold panning? The corporates who have massive gold mining operations? The technology giants who are supplying the picks and shovels? And which nations have the richest seams of gold? We are currently experiencing another gold rush in AI. Billions are being invested in AI startups across every imaginable industry and business function. Google, Amazon, Microsoft and IBM are in a heavyweight fight investing over $20 billion in AI in 2016. Corporates are scrambling to ensure they realise the productivity benefits of AI ahead of their competitors while looking over their shoulders at the startups. China is putting its considerable weight behind AI and the European Union is talking about a $22 billion AI investment as it fears losing ground to China and the US. AI is everywhere. From the 3.5 billion daily searches on Google to the new Apple iPhone X that uses facial recognition to Amazon Alexa that cutely answers our questions. Media headlines tout the stories of how AI is helping doctors diagnose diseases, banks better assess customer loan risks, farmers predict crop yields, marketers target and retain customers, and manufacturers improve quality control. And there are think tanks dedicated to studying the physical, cyber and political risks of AI. AI and machine learning will become ubiquitous and woven into the fabric of society. But as with any gold rush the question is who will find gold? Will it just be the brave, the few and the large? Or can the snappy upstarts grab their nuggets? Will those providing the picks and shovel make most of the money? And who will hit pay dirt? As I started thinking about who was going to make money in AI I ended up with seven questions. Who will make money across the (1) chip makers, (2) platform and infrastructure providers, (3) enabling models and algorithm providers, (4) enterprise solution providers, (5) industry vertical solution providers, (6) corporate users of AI and (7) nations? While there are many ways to skin the cat of the AI landscape, hopefully below provides a useful explanatory framework — a value chain of sorts. The companies noted are representative of larger players in each category but in no way is this list intended to be comprehensive or predictive. Even though the price of computational power has fallen exponentially, demand is rising even faster. AI and machine learning with its massive datasets and its trillions of vector and matrix calculations has a ferocious and insatiable appetite. Bring on the chips. NVIDIA’s stock is up 1500% in the past two years benefiting from the fact that their graphical processing unit (GPU) chips that were historically used to render beautiful high speed flowing games graphics were perfect for machine learning. Google recently launched its second generation of Tensor Processing Units (TPUs). And Microsoft is building its own Brainwave AI machine learning chips. At the same time startups such as Graphcore, who has raised over $110M, is looking to enter the market. Incumbents chip providers such as IBM, Intel, Qualcomm and AMD are not standing still. Even Facebook is rumoured to be building a team to design its own AI chips. And the Chinese are emerging as serious chip players with Cambricon Technology announcing the first cloud AI chip this past week. What is clear is that the cost of designing and manufacturing chips then sustaining a position as a global chip leader is very high. 
It requires extremely deep pockets and a world-class team of silicon and software engineers. This means that there will be very few new winners. Just like in the gold rush days, those that provide the cheapest and most widely used picks and shovels will make a lot of money. The AI race is now also taking place in the cloud. Amazon realised early that startups would much rather rent computers and software than buy them. And so it launched Amazon Web Services (AWS) in 2006. Today AI is demanding so much compute power that companies are increasingly turning to the cloud to rent hardware through Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) offerings. The fight is on among the tech giants. Microsoft is offering its hybrid public and private Azure cloud service that allegedly has over one million computers. And in the past few weeks it announced that its Brainwave hardware solutions dramatically accelerate machine learning, with its own Bing search engine performance improving by a factor of ten. Google is rushing to play catchup with its own Google Cloud offering. And we are seeing the Chinese giant Alibaba starting to take global share. Amazon, Microsoft, Google and IBM are going to continue to duke this one out. And watch out for the massively scaled cloud players from China. The big picks-and-shovels guys will win again. Today Google is the world’s largest AI company, attracting the best AI minds, spending small-country-sized GDP budgets on R&D, and sitting on the best datasets gleaned from the billions of users of their services. AI is powering Google’s search, autonomous vehicles, speech recognition, intelligent reasoning, massive search and even its own work on drug discovery and disease diagnosis. And the incredible machine learning software and algorithms that are powering all of Google’s AI activity — TensorFlow — are now being given away for free. Yes, for free! TensorFlow is now an open source software project available to the world. And why are they doing this? As Jeff Dean, head of Google Brain, recently said, there are 20 million organisations in the world that could benefit from machine learning today. If millions of companies use this best-in-class free AI software then they are likely to need lots of computing power. And who is better placed to offer that? Well, Google Cloud is of course optimised for TensorFlow and related AI services. And once you become reliant on their software and their cloud you become a very sticky customer for many years to come. No wonder it is a brutal race for global AI algorithm dominance, with Amazon, Microsoft and IBM also offering their own cheap or free AI software services. We are also seeing a fight not only over machine learning algorithms but over cognitive algorithms that offer services for conversational agents and bots, speech, natural language processing (NLP) and semantics, vision, and enhanced core algorithms. One startup in this increasingly contested space is Clarifai, which provides advanced image recognition systems for businesses to detect near-duplicates and run visual searches. It has raised nearly $40M over the past three years. The market for vision-related algorithms and services is estimated to be a cumulative $8 billion in revenue between 2016 and 2025. The giants are not standing still. IBM, for example, is offering its Watson cognitive products and services.
They have twenty or so APIs for chatbots, vision, speech, language, knowledge management and empathy that can simply be plugged into corporate software to create AI-enabled applications. Cognitive APIs are everywhere. KDnuggets lists here over 50 of the top cognitive services from the giants and startups. These services are being put into the cloud as AI as a Service (AIaaS) to make them more accessible. Just recently Microsoft’s CEO Satya Nadella claimed that a million developers are using their AI APIs, services and tools for building AI-powered apps, and nearly 300,000 developers are using their tools for chatbots. I wouldn’t want to be a startup competing with these Goliaths. This space is likely to favour the heavyweights again. They can hire the best research and engineering talent, spend the most money, and have access to the largest datasets. To flourish, startups are going to have to be really well funded, supported by leading researchers with a whole battery of IP patents and published papers, deep domain expertise, and have access to quality datasets. And they should have excellent navigational skills to sail ahead of the giants or sail different races. There will be many startup casualties, but those that can scale will find themselves as global enterprises or be quickly acquired by the heavyweights. And even if a startup has not found a path to commercialisation, it could become an acquihire (a company bought for its talent) if it is working on enabling AI algorithms with a strong research-oriented team. We saw this in 2014 when DeepMind, a two-year-old London-based company that developed unique reinforcement learning algorithms, was acquired by Google for $400M. Enterprise software has been dominated by giants such as Salesforce, IBM, Oracle and SAP. They all recognise that AI is a tool that needs to be integrated into their enterprise offerings. But many startups are rushing to become the next generation of enterprise services, filling in gaps where the incumbents don’t currently tread or even attempting to disrupt them. We analysed over two hundred use cases in the enterprise space ranging from customer management to marketing to cybersecurity to intelligence to HR to the hot area of Cognitive Robotic Process Automation (RPA). The enterprise field is much more open than previous spaces, with a veritable medley of startups providing point solutions for these use cases. Today there are over 200 AI-powered companies just in the recruitment space, many of them AI startups. Cybersecurity leader Darktrace and RPA leader UiPath have war chests in the hundreds of millions of dollars. The incumbents also want to make sure their ecosystems stay at the forefront and are investing in startups that enhance their offering. Salesforce has invested in Digital Genius, a customer management solution, and similarly in Unbabel, which offers enterprise translation services. Incumbents also often have more pressing problems. SAP, for example, is rushing to play catchup in offering a cloud solution, let alone catchup in AI. We are also seeing tools providers trying to simplify the tasks required to create, deploy and manage AI services in the enterprise. Machine learning training, for example, is a messy business where 80% of time can be spent on data wrangling. And an inordinate amount of time is spent on testing and tuning of what is called hyperparameters.
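For readers who have not seen it in practice, "tuning hyperparameters" usually means something like the short scikit-learn sketch below: trying a small grid of model settings and scoring each by cross-validation. The dataset and the grid here are arbitrary choices for illustration; real projects search far larger spaces, which is part of why the tooling market described next exists.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)   # any labelled dataset will do

param_grid = {
    "n_estimators": [50, 100, 200],   # hyperparameters are set by the practitioner,
    "max_depth": [3, 5, None],        # not learned from the data, hence the tuning
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, n_jobs=-1)
search.fit(X, y)                      # fits and cross-validates every combination
print(search.best_params_, round(search.best_score_, 3))
```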
Petuum, a tools provider based in Pittsburgh in the US, has raised over $100M to help accelerate and optimise the deployment of machine learning models. Many of these enterprise startup providers can have a healthy future if they quickly demonstrate that they are solving and scaling solutions to meet real world enterprise needs. But as always happens in software gold rushes there will be a handful of winners in each category. And for those AI enterprise category winners they are likely to be snapped up, along with the best in-class tool providers, by the giants if they look too threatening. AI is driving a race for the best vertical industry solutions. There are a wealth of new AI powered startups providing solutions to corporate use cases in the healthcare, financial services, agriculture, automative, legal and industrial sectors. And many startups are taking the ambitious path to disrupt the incumbent corporate players by offering a service directly to the same customers. It is clear that many startups are providing valuable point solutions and can succeed if they have access to (1) large and proprietary data training sets, (2) domain knowledge that gives them deep insights into the opportunities within a sector, (3) a deep pool of talent around applied AI and (4) deep pockets of capital to fund rapid growth. Those startups that are doing well generally speak the corporate commercial language of customers, business efficiency and ROI in the form of well developed go-to-market plans. For example, ZestFinance has raised nearly $300M to help improve credit decision making that will provide fair and transparent credit to everyone. They claim they have the world’s best data scientists. But they would, wouldn’t they? For those startups that are looking to disrupt existing corporate players they need really deep pockets. For example, Affirm, that offers loans to consumers at the point of sale, has raised over $700M. These companies quickly need to create a defensible moat to ensure they remain competitive. This can come from data network effects where more data begets better AI based services and products that gets more revenue and customers that gets more data. And so the flywheel effect continues. And while corporates might look to new vendors in their industry for AI solutions that could enhance their top and bottom line, they are not going to sit back and let upstarts muscle in on their customers. And they are not going to sit still and let their corporate competitors gain the first advantage through AI. There is currently a massive race for corporate innovation. Large companies have their own venture groups investing in startups, running accelerators and building their own startups to ensure that they are leaders in AI driven innovation. Large corporates are in a strong position against the startups and smaller companies due to their data assets. Data is the fuel for AI and machine learning. Who is better placed to take advantage of AI than the insurance company that has reams of historic data on underwriting claims? The financial services company that knows everything about consumer financial product buying behaviour? Or the search company that sees more user searches for information than any other? Corporates large and small are well positioned to extract value from AI. In fact Gartner research predicts AI-derived business value is projected to reach up to $3.9 trillion by 2022. There are hundreds if not thousands of valuable use cases that AI can addresses across organisations. 
Corporates can improve their customer experience, save costs, lower prices, drive revenues and sell better products and services powered by AI. AI will help the big get bigger often at the expense of smaller companies. But they will need to demonstrate strong visionary leadership, an ability to execute, and a tolerance for not always getting technology enabled projects right on the first try. Countries are also also in a battle for AI supremacy. China has not been shy about its call to arms around AI. It is investing massively in growing technical talent and developing startups. Its more lax regulatory environment, especially in data privacy, helps China lead in AI sectors such as security and facial recognition. Just recently there was an example of Chinese police picking out one most wanted face in a crowd of 50,000 at a music concert. And SenseTime Group Ltd, that analyses faces and images on a massive scale, reported it raised $600M becoming the most valuable global AI startup. The Chinese point out that their mobile market is 3x the size of the US and there are 50x more mobile payments taking place — this is a massive data advantage. The European focus on data privacy regulation could put them at a disadvantage in certain areas of AI even if the Union is talking about a $22B investment in AI. The UK, Germany, France and Japan have all made recent announcements about their nation state AI strategies. For example, President Macron said the French government will spend $1.85 billion over the next five years to support the AI ecosystem including the creation of large public datasets. Companies such as Google’s DeepMind and Samsung have committed to open new Paris labs and Fujitsu is expanding its Paris research centre. The British just announced a $1.4 billion push into AI including funding of 1000 AI PhDs. But while nations are investing in AI talent and the ecosystem, the question is who will really capture the value. Will France and the UK simply be subsidising PhDs who will be hired by Google? And while payroll and income taxes will be healthy on those six figure machine learning salaries, the bulk of the economic value created could be with this American company, its shareholders, and the smiling American Treasury. AI will increase productivity and wealth in companies and countries. But how will that wealth be distributed when the headlines suggest that 30 to 40% of our jobs will be taken by the machines? Economists can point to lessons from hundreds of years of increasing technology automation. Will there be net job creation or net job loss? The public debate often cites Geoffrey Hinton, the godfather of machine learning, who suggested radiologists will lose their jobs by the dozen as machines diagnose diseases from medical images. But then we can look to the Chinese who are using AI to assist radiologists in managing the overwhelming demand to review 1.4 billion CT scans annually for lung cancer. The result is not job losses but an expanded market with more efficient and accurate diagnosis. However there is likely to be a period of upheaval when much of the value will go to those few companies and countries that control AI technology and data. And lower skilled countries whose wealth depends on jobs that are targets of AI automation will likely suffer. AI will favour the large and the technologically skilled. In examining the landscape of AI it has became clear that we are now entering a truly golden era for AI. 
And a few key themes are appearing as to where the economic value will migrate: In short, it looks like the AI gold rush will favour the companies and countries with control and scale over the best AI tools and technology, the data, the best technical workers, the most customers and the strongest access to capital. Those with scale will capture the lion’s share of the economic value from AI. In some ways ‘plus ça change, plus c’est la même chose.’ But there will also be large golden nuggets that will be found by a few choice brave startups. Yet, as in any gold rush, many startups will never hit pay dirt. And many individuals and societies will likely feel like they have not seen the benefits of the gold rush. This is the first part in a series of articles I intend to write on the topic of the economics of AI. I welcome your feedback. Written by Simon Greenman. I am a lover of technology and how it can be applied in the business world. I run my own advisory firm Best Practice AI, helping executives of enterprises and startups accelerate the adoption of ROI-based AI applications. Please get in touch to discuss this. If you enjoyed this piece, I’d love it if you hit the clap button 👏 so others might stumble upon it. And please post your comments, or you can email me directly or find me on LinkedIn or Twitter or follow me at Simon Greenman. AI guy. MapQuest guy. Grow, innovate and transform companies with tech. Start-up investor, mentor and geek.
Aman Agarwal
7K
24
https://medium.freecodecamp.org/explained-simply-how-an-ai-program-mastered-the-ancient-game-of-go-62b8940a9080?source=tag_archive---------8----------------
Explained Simply: How an AI program mastered the ancient game of Go
This is about AlphaGo, Google DeepMind’s Go playing AI that shook the technology world in 2016 by defeating one of the best players in the world, Lee Sedol. Go is an ancient board game which has so many possible moves at each step that future positions are hard to predict — and therefore it requires strong intuition and abstract thinking to play. Because of this reason, it was believed that only humans could be good at playing Go. Most researchers thought that it would still take decades to build an AI which could think like that. In fact, I’m releasing this essay today because this week (March 8–15) marks the two-year anniversary of the AlphaGo vs Sedol match! But AlphaGo didn’t stop there. 8 months later, it played 60 professional games on a Go website under disguise as a player named “Master”, and won every single game, against dozens of world champions, of course without resting between games. Naturally this was a HUGE achievement in the field of AI and sparked worldwide discussions about whether we should be excited or worried about artificial intelligence. Today we are going to take the original research paper published by DeepMind in the Nature journal, and break it down paragraph-by-paragraph using simple English. After this essay, you’ll know very clearly what AlphaGo is, and how it works. I also hope that after reading this you will not believe all the news headlines made by journalists to scare you about AI, and instead feel excited about it. Worrying about the growing achievements of AI is like worrying about the growing abilities of Microsoft Powerpoint. Yes, it will get better with time with new features being added to it, but it can’t just uncontrollably grow into some kind of Hollywood monster. You DON’T need to know how to play Go to understand this paper. In fact, I myself have only read the first 3–4 lines in Wikipedia’s opening paragraph about it. Instead, surprisingly, I use some examples from basic Chess to explain the algorithms. You just have to know what a 2-player board game is, in which each player takes turns and there is one winner at the end. Beyond that you don’t need to know any physics or advanced math or anything. This will make it more approachable for people who only just now started learning about machine learning or neural networks. And especially for those who don’t use English as their first language (which can make it very difficult to read such papers). If you have NO prior knowledge of AI and neural networks, you can read the “Deep Learning” section of one of my previous essays here. After reading that, you’ll be able to get through this essay. If you want to get a shallow understanding of Reinforcement Learning too (optional reading), you can find it here. Here’s the original paper if you want to try reading it: As for me: Hi I’m Aman, an AI and autonomous robots engineer. I hope that my work will save you a lot of time and effort if you were to study this on your own. Do you speak Japanese? Ryohji Ikebe has kindly written a brief memo about this essay in Japanese, in a series of Tweets. As you know, the goal of this research was to train an AI program to play Go at the level of world-class professional human players. To understand this challenge, let me first talk about something similar done for Chess. In the early 1990s, IBM came out with the Deep Blue computer which defeated the great champion Gary Kasparov in Chess. (He’s also a very cool guy, make sure to read more about him later!) How did Deep Blue play? 
Well, it used a very brute force method. At each step of the game, it took a look at all the possible legal moves that could be played, and went ahead to explore each and every move to see what would happen. And it would keep exploring move after move for a while, forming a kind of HUGE decision tree of thousands of moves. And then it would come back along that tree, observing which moves seemed most likely to bring a good result. But, what do we mean by “good result”? Well, Deep Blue had many carefully designed chess strategies built into it by expert chess players to help it make better decisions — for example, how to decide whether to protect the king or get advantage somewhere else? They made a specific “evaluation algorithm” for this purpose, to compare how advantageous or disadvantageous different board positions are (IBM hard-coded expert chess strategies into this evaluation function). And finally it chooses a carefully calculated move. On the next turn, it basically goes through the whole thing again. As you can see, this means Deep Blue thought about millions of theoretical positions before playing each move. This was not so impressive in terms of the AI software of Deep Blue, but rather in the hardware — IBM claimed it to be one of the most powerful computers available in the market at that time. It could look at 200 million board positions per second. Now we come to Go. Just believe me that this game is much more open-ended, and if you tried the Deep Blue strategy on Go, you wouldn’t be able to play well. There would be SO MANY positions to look at at each step that it would simply be impractical for a computer to go through that hell. For example, at the opening move in Chess there are 20 possible moves. In Go the first player has 361 possible moves, and this scope of choices stays wide throughout the game. This is what they mean by “enormous search space.” Moreover, in Go, it’s not so easy to judge how advantageous or disadvantageous a particular board position is at any specific point in the game — you kinda have to play the whole game for a while before you can determine who is winning. But let’s say you magically had a way to do both of these. And that’s where deep learning comes in! So in this research, DeepMind used neural networks to do both of these tasks (if you haven’t read about them yet, here’s the link again). They trained a “policy neural network” to decide which are the most sensible moves in a particular board position (so it’s like following an intuitive strategy to pick moves from any position). And they trained a “value neural network” to estimate how advantageous a particular board arrangement is for the player (or in other words, how likely you are to win the game from this position). They trained these neural networks first with human game examples (your good old ordinary supervised learning). After this the AI was able to mimic human playing to a certain degree, so it acted like a weak human player. And then to train the networks even further, they made the AI play against itself millions of times (this is the “reinforcement learning” part). With this, the AI got better because it had more practice. With these two networks alone, DeepMind’s AI was able to play well against state-of-the-art Go playing programs that other researchers had built before. These other programs had used an already popular pre-existing game playing algorithm, called the “Monte Carlo Tree Search” (MCTS). More about this later. 
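To make the Deep Blue-style approach concrete, here is a minimal sketch of a depth-limited, brute-force game-tree search with a hand-coded evaluation function. It is only an illustration of the idea described above, not IBM’s or DeepMind’s actual code, and the `game` object (with its hypothetical `legal_moves`, `play` and `evaluate` methods) is a placeholder you would supply for a real game.

```python
# A minimal sketch of Deep Blue-style brute-force search: enumerate every legal
# move, recurse a few plies deep, and score leaf positions with a hand-coded
# evaluation function. The game interface (legal_moves, play, evaluate) is a
# hypothetical placeholder, not IBM's or DeepMind's actual code.

def minimax(state, depth, maximizing, game):
    """Return the best achievable evaluation from `state`, searching `depth` plies."""
    moves = game.legal_moves(state)
    if depth == 0 or not moves:
        return game.evaluate(state)  # hand-coded heuristic, e.g. material count
    if maximizing:
        return max(minimax(game.play(state, m), depth - 1, False, game) for m in moves)
    return min(minimax(game.play(state, m), depth - 1, True, game) for m in moves)

def best_move(state, depth, game):
    """Pick the move whose subtree looks best according to the evaluation function."""
    return max(game.legal_moves(state),
               key=lambda m: minimax(game.play(state, m), depth - 1, False, game))
```

Even this toy version shows why the approach breaks down for Go: the number of positions examined grows exponentially with the branching factor, which is roughly 20 in Chess but hundreds in Go.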
But guess what, we still haven’t talked about the real deal. DeepMind’s AI isn’t just about the policy and value networks. It doesn’t use these two networks as a replacement of the Monte Carlo Tree Search. Instead, it uses the neural networks to make the MCTS algorithm work better... and it got so much better that it reached superhuman levels. THIS improved variation of MCTS is “AlphaGo”, the AI that beat Lee Sedol and went down in AI history as one of the greatest breakthroughs ever. So essentially, AlphaGo is simply an improved implementation of a very ordinary computer science algorithm. Do you understand now why AI in its current form is absolutely nothing to be scared of? Wow, we’ve spent a lot of time on the Abstract alone. Alright — to understand the paper from this point on, first we’ll talk about a gaming strategy called the Monte Carlo Tree Search algorithm. For now, I’ll just explain this algorithm at enough depth to make sense of this essay. But if you want to learn about it in depth, some smart people have also made excellent videos and blog posts on this: 1. A short video series from Udacity2. Jeff Bradberry’s explanation of MCTS3. An MCTS tutorial by Fullstack Academy The following section is long, but easy to understand (I’ll try my best) and VERY important, so stay with me! The rest of the essay will go much quicker. Let’s talk about the first paragraph of the essay above. Remember what I said about Deep Blue making a huge tree of millions of board positions and moves at each step of the game? You had to do simulations and look at and compare each and every possible move. As I said before, that was a simple approach and very straightforward approach — if the average software engineer had to design a game playing AI, and had all the strongest computers of the world, he or she would probably design a similar solution. But let’s think about how do humans themselves play chess? Let’s say you’re at a particular board position in the middle of the game. By game rules, you can do a dozen different things — move this pawn here, move the queen two squares here or three squares there, and so on. But do you really make a list of all the possible moves you can make with all your pieces, and then select one move from this long list? No — you “intuitively” narrow down to a few key moves (let’s say you come up with 3 sensible moves) that you think make sense, and then you wonder what will happen in the game if you chose one of these 3 moves. You might spend 15–20 seconds considering each of these 3 moves and their future — and note that during these 15 seconds you don’t have to carefully plan out the future of each move; you can just “roll out” a few mental moves guided by your intuition without TOO much careful thought (well, a good player would think farther and more deeply than an average player). This is because you have limited time, and you can’t accurately predict what your opponent will do at each step in that lovely future you’re cooking up in your brain. So you’ll just have to let your gut feeling guide you. I’ll refer to this part of the thinking process as “rollout”, so take note of it!So after “rolling out” your few sensible moves, you finally say screw it and just play the move you find best. Then the opponent makes a move. It might be a move you had already well anticipated, which means you are now pretty confident about what you need to do next. You don’t have to spend too much time on the rollouts again. 
OR, it could be that your opponent hits you with a pretty cool move that you had not expected, so you have to be even more careful with your next move.This is how the game carries on, and as it gets closer and closer to the finishing point, it would get easier for you to predict the outcome of your moves — so your rollouts don’t take as much time. The purpose of this long story is to describe what the MCTS algorithm does on a superficial level — it mimics the above thinking process by building a “search tree” of moves and positions every time. Again, for more details you should check out the links I mentioned earlier. The innovation here is that instead of going through all the possible moves at each position (which Deep Blue did), it instead intelligently selects a small set of sensible moves and explores those instead. To explore them, it “rolls out” the future of each of these moves and compares them based on their imagined outcomes.(Seriously — this is all I think you need to understand this essay) Now — coming back to the screenshot from the paper. Go is a “perfect information game” (please read the definition in the link, don’t worry it’s not scary). And theoretically, for such games, no matter which particular position you are at in the game (even if you have just played 1–2 moves), it is possible that you can correctly guess who will win or lose (assuming that both players play “perfectly” from that point on). I have no idea who came up with this theory, but it is a fundamental assumption in this research project and it works. So that means, given a state of the game s, there is a function v*(s) which can predict the outcome, let’s say probability of you winning this game, from 0 to 1. They call it the “optimal value function”. Because some board positions are more likely to result in you winning than other board positions, they can be considered more “valuable” than the others. Let me say it again: Value = Probability between 0 and 1 of you winning the game. But wait — say there was a girl named Foma sitting next to you while you play Chess, and she keeps telling you at each step if you’re winning or losing. “You’re winning... You’re losing... Nope, still losing...” I think it wouldn’t help you much in choosing which move you need to make. She would also be quite annoying. What would instead help you is if you drew the whole tree of all the possible moves you can make, and the states that those moves would lead to — and then Foma would tell you for the entire tree which states are winning states and which states are losing states. Then you can choose moves which will keep leading you to winning states. All of a sudden Foma is your partner in crime, not an annoying friend. Here, Foma behaves as your optimal value function v*(s). Earlier, it was believed that it’s not possible to have an accurate value function like Foma for the game of Go, because the games had so much uncertainty. BUT — even if you had the wonderful Foma, this wonderland strategy of drawing out all the possible positions for Foma to evaluate will not work very well in the real world. In a game like Chess or Go, as we said before, if you try to imagine even 7–8 moves into the future, there can be so many possible positions that you don’t have enough time to check all of them with Foma. So Foma is not enough. You need to narrow down the list of moves to a few sensible moves that you can roll out into the future. How will your program do that? Enter Lusha. 
Lusha is a skilled Chess player and enthusiast who has spent decades watching grand masters play Chess against each other. She can look at your board position, look quickly at all the available moves you can make, and tell you how likely it would be that a Chess expert would make any of those moves if they were sitting at your table. So if you have 50 possible moves at a point, Lusha will tell you the probability that each move would be picked by an expert. Of course, a few sensible moves will have a much higher probability and other pointless moves will have very little probability. She is your policy function, p(a\s). For a given state s, she can give you probabilities for all the possible moves that an expert would make. Wow — you can take Lusha’s help to guide you in how to select a few sensible moves, and Foma will tell you the likelihood of winning from each of those moves. You can choose the move that both Foma and Lusha approve. Or, if you want to be extra careful, you can roll out the moves selected by Lusha, have Foma evaluate them, pick a few of them to roll out further into the future, and keep letting Foma and Lusha help you predict VERY far into the game’s future — much quicker and more efficient than to go through all the moves at each step into the future. THIS is what they mean by “reducing the search space”. Use a value function (Foma) to predict outcomes, and use a policy function (Lusha) to give you grand-master probabilities to help narrow down the moves you roll out. These are called “Monte Carlo rollouts”. Then while you backtrack from future to present, you can take average values of all the different moves you rolled out, and pick the most suitable action. So far, this has only worked on a weak amateur level in Go, because the policy functions and value functions that they used to guide these rollouts weren’t that great. Phew. The first line is self explanatory. In MCTS, you can start with an unskilled Foma and unskilled Lusha. The more you play, the better they get at predicting solid outcomes and moves. “Narrowing the search to a beam of high probability actions” is just a sophisticated way of saying, “Lusha helps you narrow down the moves you need to roll out by assigning them probabilities that an expert would play them”. Prior work has used this technique to achieve strong amateur level AI players, even with simple (or “shallow” as they call it) policy functions. Yeah, convolutional neural networks are great for image processing. And since a neural network takes a particular input and gives an output, it is essentially a function, right? So you can use a neural network to become a complex function. So you can just pass in an image of the board position and let the neural network figure out by itself what’s going on. This means it’s possible to create neural networks which will behave like VERY accurate policy and value functions. The rest is pretty self explanatory. Here we discuss how Foma and Lusha were trained. To train the policy network (predicting for a given position which moves experts would pick), you simply use examples of human games and use them as data for good old supervised learning. And you want to train another slightly different version of this policy network to use for rollouts; this one will be smaller and faster. Let’s just say that since Lusha is so experienced, she takes some time to process each position. 
She’s good to start the narrowing-down process with, but if you try to make her repeat the process , she’ll still take a little too much time. So you train a *faster policy network* for the rollout process (I’ll call it... Lusha’s younger brother Jerry? I know I know, enough with these names). After that, once you’ve trained both of the slow and fast policy networks enough using human player data, you can try letting Lusha play against herself on a Go board for a few days, and get more practice. This is the reinforcement learning part — making a better version of the policy network. Then, you train Foma for value prediction: determining the probability of you winning. You let the AI practice through playing itself again and again in a simulated environment, observe the end result each time, and learn from its mistakes to get better and better. I won’t go into details of how these networks are trained. You can read more technical details in the later section of the paper (‘Methods’) which I haven’t covered here. In fact, the real purpose of this particular paper is not to show how they used reinforcement learning on these neural networks. One of DeepMind’s previous papers, in which they taught AI to play ATARI games, has already discussed some reinforcement learning techniques in depth (And I’ve already written an explanation of that paper here). For this paper, as I lightly mentioned in the Abstract and also underlined in the screenshot above, the biggest innovation was the fact that they used RL with neural networks for improving an already popular game-playing algorithm, MCTS. RL is a cool tool in a toolbox that they used to fine-tune the policy and value function neural networks after the regular supervised training. This research paper is about proving how versatile and excellent this tool it is, not about teaching you how to use it. In television lingo, the Atari paper was a RL infomercial and this AlphaGo paper is a commercial. A quick note before you move on. Would you like to help me write more such essays explaining cool research papers? If you’re serious, I’d be glad to work with you. Please leave a comment and I’ll get in touch with you. So, the first step is in training our policy NN (Lusha), to predict which moves are likely to be played by an expert. This NN’s goal is to allow the AI to play similar to an expert human. This is a convolutional neural network (as I mentioned before, it’s a special kind of NN that is very useful in image processing) that takes in a simplified image of a board arrangement. “Rectifier nonlinearities” are layers that can be added to the network’s architecture. They give it the ability to learn more complex things. If you’ve ever trained NNs before, you might have used the “ReLU” layer. That’s what these are. The training data here was in the form of random pairs of board positions, and the labels were the actions chosen by humans when they were in those positions. Just regular supervised learning. Here they use “stochastic gradient ASCENT”. Well, this is an algorithm for backpropagation. Here, you’re trying to maximise a reward function. And the reward function is just the probability of the action predicted by a human expert; you want to increase this probability. But hey — you don’t really need to think too much about this. Normally you train the network so that it minimises a loss function, which is essentially the error/difference between predicted outcome and actual label. That is called gradient DESCENT. 
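To ground the supervised step, here is a minimal sketch of a small convolutional policy network trained to predict the expert’s move from a board position, written as the usual gradient descent on a cross-entropy loss (which is equivalent to ascent on the log-probability of the expert move). This is an illustration only: it is not the 13-layer architecture from the paper, the 3-plane board encoding is a stand-in for the paper’s richer feature planes, and the data pipeline is assumed to exist.

```python
# A minimal sketch of the supervised "Lusha" step: a small convolutional policy
# network trained to predict the expert's move from a board position. This is an
# illustration, not the 13-layer architecture from the paper; the board tensors
# and expert-move labels are assumed to come from your own data pipeline.
import torch
import torch.nn as nn

BOARD = 19  # Go board size

policy_net = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1),  # simple 3-plane board encoding (a stand-in)
    nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * BOARD * BOARD, BOARD * BOARD),  # one logit per board point
)

optimizer = torch.optim.SGD(policy_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()  # minimising this = maximising the log-probability of the expert move

def sl_training_step(boards, expert_moves):
    """boards: (batch, 3, 19, 19) float tensor; expert_moves: (batch,) int64 move indices."""
    logits = policy_net(boards)
    loss = loss_fn(logits, expert_moves)  # gradient DESCENT on the negative log-likelihood
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```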
In the actual implementation of this research paper, they have indeed used the regular gradient descent. You can easily find a loss function that behaves opposite to the reward function such that minimising this loss will maximise the reward. The policy network has 13 layers, and is called “SL policy” network (SL = supervised learning). The data came from a... I’ll just say it’s a popular website on which millions of people play Go. How good did this SL policy network perform? It was more accurate than what other researchers had done earlier. The rest of the paragraph is quite self-explanatory. As for the “rollout policy”, you do remember from a few paragraphs ago, how Lusha the SL policy network is slow so it can’t integrate well with the MCTS algorithm? And we trained another faster version of Lusha called Jerry who was her younger brother? Well, this refers to Jerry right here. As you can see, Jerry is just half as accurate as Lusha BUT it’s thousands of times faster! It will really help get through rolled out simulations of the future faster, when we apply the MCTS. For this next section, you don’t *have* to know about Reinforcement Learning already, but then you’ll have to assume that whatever I say works. If you really want to dig into details and make sure of everything, you might want to read a little about RL first. Once you have the SL network, trained in a supervised manner using human player moves with the human moves data, as I said before you have to let her practice by itself and get better. That’s what we’re doing here. So you just take the SL policy network, save it in a file, and make another copy of it. Then you use reinforcement learning to fine-tune it. Here, you make the network play against itself and learn from the outcomes. But there’s a problem in this training style. If you only forever practice against ONE opponent, and that opponent is also only practicing with you exclusively, there’s not much of new learning you can do. You’ll just be training to practice how to beat THAT ONE player. This is, you guessed it, overfitting: your techniques play well against one opponent, but don’t generalize well to other opponents. So how do you fix this? Well, every time you fine-tune a neural network, it becomes a slightly different kind of player. So you can save this version of the neural network in a list of “players”, who all behave slightly differently right? Great — now while training the neural network, you can randomly make it play against many different older and newer versions of the opponent, chosen from that list. They are versions of the same player, but they all play slightly differently. And the more you train, the MORE players you get to train even more with! Bingo! In this training, the only thing guiding the training process is the ultimate goal, i.e winning or losing. You don’t need to specially train the network to do things like capture more area on the board etc. You just give it all the possible legal moves it can choose from, and say, “you have to win”. And this is why RL is so versatile; it can be used to train policy or value networks for any game, not just Go. Here, they tested how accurate this RL policy network was, just by itself without any MCTS algorithm. As you would remember, this network can directly take a board position and decide how an expert would play it — so you can use it to single-handedly play games.Well, the result was that the RL fine-tuned network won against the SL network that was only trained on human moves. 
It also won against other strong Go playing programs. Must note here that even before training this RL policy network, the SL policy network was already better than the state of the art — and now, it has further improved! And we haven’t even come to the other parts of the process like the value network. Did you know that baby penguins can sneeze louder than a dog can bark? Actually that’s not true, but I thought you’d like a little joke here to distract from the scary-looking equations above. Coming to the essay again: we’re done training Lusha here. Now back to Foma — remember the “optimal value function”: v*(s) -> that only tells you how likely you are to win in your current board position if both players play perfectly from that point on?So obviously, to train an NN to become our value function, we would need a perfect player... which we don’t have. So we just use our strongest player, which happens to be our RL policy network. It takes the current state board state s, and outputs the probability that you will win the game. You play a game and get to know the outcome (win or loss). Each of the game states act as a data sample, and the outcome of that game acts as the label. So by playing a 50-move game, you have 50 data samples for value prediction. Lol, no. This approach is naive. You can’t use all 50 moves from the game and add them to the dataset. The training data set had to be chosen carefully to avoid overfitting. Each move in the game is very similar to the next one, because you only move once and that gives you a new position, right? If you take the states at all 50 of those moves and add them to the training data with the same label, you basically have lots of “kinda duplicate” data, and that causes overfitting. To prevent this, you choose only very distinct-looking game states. So for example, instead of all 50 moves of a game, you only choose 5 of them and add them to the training set. DeepMind took 30 million positions from 30 million different games, to reduce any chances of there being duplicate data. And it worked! Now, something conceptual here: there are two ways to evaluate the value of a board position. One option is a magical optimal value function (like the one you trained above). The other option is to simply roll out into the future using your current policy (Lusha) and look at the final outcome in this roll out. Obviously, the real game would rarely go by your plans. But DeepMind compared how both of these options do. You can also do a mixture of both these options. We will learn about this “mixing parameter” a little bit later, so make a mental note of this concept! Well, your single neural network trying to approximate the optimal value function is EVEN BETTER than doing thousands of mental simulations using a rollout policy! Foma really kicked ass here. When they replaced the fast rollout policy with the twice-as-accurate (but slow) RL policy Lusha, and did thousands of simulations with that, it did better than Foma. But only slightly better, and too slowly. So Foma is the winner of this competition, she has proved that she can’t be replaced. Now that we have trained the policy and value functions, we can combine them with MCTS and give birth to our former world champion, destroyer of grand masters, the breakthrough of a generation, weighing two hundred and sixty eight pounds, one and only Alphaaaaa GO! 
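Before moving on, here is a minimal sketch of the de-correlation trick described above: build the value network’s training set by taking only one position per self-play game and labelling it with that game’s final outcome. The `self_play_game` function is a hypothetical stand-in for the RL policy playing itself; the real pipeline is far more involved.

```python
# A minimal sketch of building a de-correlated value-network training set:
# take ONE position per self-play game (rather than all of them) and label it
# with that game's final outcome. self_play_game() is a hypothetical stand-in
# for the RL policy playing against itself.
import random

def build_value_dataset(num_games, self_play_game):
    """Return (position, outcome) pairs, one sampled position per game."""
    dataset = []
    for _ in range(num_games):
        positions, outcome = self_play_game()  # outcome: +1 for a win, -1 for a loss
        s = random.choice(positions)           # a single, distinct-looking state from this game
        dataset.append((s, outcome))
        # Doing this over ~30 million games gives ~30 million near-independent samples.
    return dataset
```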
In this section, ideally you should have a slightly deeper understanding of the inner workings of the MCTS algorithm, but what you have learned so far should be enough to give you a good feel for what’s going on here. The only thing you should note is how we’re using the policy probabilities and value estimates. We combine them during rollouts to narrow down the number of moves we want to roll out at each step. Q(s,a) represents the value function, and u(s,a) is an exploration bonus based on a stored probability for that move. I’ll explain. Remember that the policy network uses supervised learning to predict expert moves? And it doesn’t just give you the most likely move, but rather gives you probabilities for each possible move that tell you how likely each is to be an expert move. This probability can be stored for each of those actions. Here they call it “prior probability”, and they obviously use it while selecting which actions to explore. So basically, to decide whether or not to explore a particular move, you consider two things: First, by playing this move, how likely are you to win? Yes, we already have our “value network” to answer this first question. And the second question is, how likely is it that an expert would choose this move? (If a move is super unlikely to be chosen by an expert, why even waste time considering it? This we get from the policy network.) Then let’s talk about the “mixing parameter” (see, we came back to it!). As discussed earlier, to evaluate positions, you have two options: one, simply use the value network you have been using to evaluate states all along. And two, you can try to quickly play a rollout game with your current strategy (assuming the other player will play similarly), and see if you win or lose. We saw how the value function was better than doing rollouts in general. Here they combine both. You try giving each prediction 50–50 importance, or 40–60, or 0–100, and so on. If you attach a weight of X% to the first, you’ll have to attach (100-X)% to the second. That’s what this mixing parameter means. You’ll see these trial-and-error results later in the paper. After each rollout, you update your search tree with whatever information you gained during the simulation, so that your next simulation is more intelligent. And at the end of all simulations, you just pick the best move. Interesting insight here! Remember how the RL fine-tuned policy NN was better than just the SL human-trained policy NN? But when you put them within the MCTS algorithm of AlphaGo, using the human-trained NN proved to be a better choice than the fine-tuned NN. But in the case of the value function (which, you will remember, uses a strong player to approximate a perfect player), training Foma using the RL policy works better than training her with the SL policy. “Doing all this evaluation takes a lot of computing power. We really had to bring out the big guns to be able to run these damn programs.” Self-explanatory. “LOL, our program literally blew the pants off of every other program that came before us.” This goes back to that “mixing parameter” again. While evaluating positions, giving equal importance to both the value function and the rollouts performed better than just using one of them. The rest is self-explanatory, and reveals an interesting insight! Self-explanatory. Self-explanatory. But read that red underlined sentence again. I hope you can see clearly now that this line right here is pretty much the summary of what this whole research project was all about. Concluding paragraph. 
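A minimal sketch of the two mechanisms discussed in this section may help: selecting moves inside the tree by maximising Q(s,a) plus a prior-based exploration bonus u(s,a), and evaluating leaf positions by mixing the value network with a fast rollout. The node and child fields, the `value_net` and `fast_rollout` callables, and the exact form of the bonus follow the general PUCT pattern rather than the paper’s precise equations; treat this as an illustration, not AlphaGo’s implementation.

```python
# A minimal sketch, with made-up container classes:
# (1) pick the child that maximises Q(s,a) + u(s,a), where u is proportional to the
#     policy network's prior probability and shrinks as the move gets visited more;
# (2) evaluate a leaf by mixing the value network with a fast rollout.
import math

def select_child(node, c_puct=5.0):
    """node.children: dict move -> child with fields Q (value), N (visit count), P (prior)."""
    total_visits = sum(ch.N for ch in node.children.values())
    def score(ch):
        u = c_puct * ch.P * math.sqrt(total_visits) / (1 + ch.N)  # exploration bonus from the prior
        return ch.Q + u
    return max(node.children.values(), key=score)

def evaluate_leaf(state, value_net, fast_rollout, lam=0.5):
    """Mix the value network's estimate with a quick rollout result; lam is the mixing parameter."""
    return (1 - lam) * value_net(state) + lam * fast_rollout(state)
```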
“Let us brag a little more here because we deserve it!” :) Oh and if you’re a scientist or tech company, and need some help in explaining your science to non-technical people for marketing, PR or training etc, I can help you. Drop me a message on Twitter: @mngrwl
Lance Ulanoff
15.1K
5
https://medium.com/@LanceUlanoff/did-google-duplex-just-pass-the-turing-test-ffcfe6868b02?source=tag_archive---------9----------------
Did Google Duplex just pass the Turing Test? – Lance Ulanoff – Medium
I think it was the first “Um.” That was the moment when I realized I was hearing something extraordinary: A computer carrying out a completely natural and very human-sounding conversation with a real person. And it wasn’t just a random talk. This conversation had a purpose, a destination: to make an appointment at a hair salon. The entity making the call and appointment was Google Assistant running Duplex, Google’s still experimental AI voice system and the venue was Google I/O, Google’s yearly developer conference, which this year focused heavily on the latest developments in AI, Machine- and Deep-Learning. Google CEO Sundar Pichai explained that what we were hearing was a real phone call made to a hair salon that didn’t know it was part of an experiment or that they were talking to a computer. He launched Duplex by asking Google Assistant to book a haircut appointment for Tuesday morning. The AI did the rest. Duplex made the call and, when someone at the salon picked up, the voice AI started the conversation with: “Hi, I’m calling to book a woman’s hair cut appointment for a client, um, I’m looking for something on May third?” When the attendant asked Duplex to give her one second, Duplex responded with: “Mmm-hmm.” The conversation continued as the salon representative presented various dates and times and the AI asked about other options. Eventually, the AI and the salon worker agreed on an appointment date and time. What I heard was so convincing I had trouble discerning who was the salon worker and who (what) was the Duplex AI. It was stunning and somewhat disconcerting. I liken it to the feeling you’d get if a store mannequin suddenly smiled at you. It was easily the most remarkable human-computer conversation I’d ever heard and the closest thing I’ve seen a voice AI passing the Turing Test, which is the AI threshold suggested by Computer Scientist Alan Turing in the 1950s. Turing posited that by 2000 computers would be able to fool humans into thinking they were conversing with other humans at least 30% of the time. He was right. In 2014, a chatbot named Eugene Goostman successfully impersonated a wise-ass 14-year old programmer during lengthy text-based chats with unsuspecting humans. Turing, however hadn’t necessarily considered voice-based systems and, for obvious reasons, talking computers are somewhat less adept at fooling humans. Spend a few minutes conversing with your voice assistant of choice and you’ll soon discover their limitations. Their speech can be stilted, pronunciations off and response times can be slow (especially if they’re trying to access a cloud-based server) and forget about conversations. Most can handle two consecutive queries at most and they virtually all require a trigger phrase like “Alexa” or “Hey Siri.” (Google is working on removing unnecessary “Okay Googles” in short back and forth convos with the digital assistant). Google Assistant running Duplex didn’t exhibit any of those short comings. It sounded like a young female assistant carefully scheduling her boss’s haircut. In addition to the natural cadence, Google added speech disfluencies (the verbal ticks, “ums,” “uhs,” and “mm-hmms”) and latency or pauses that naturally occur when people are speaking. The result is a perfectly human voice produced entirely by a computer. The second call demonstration, where a male-voiced Duplex tried to make restaurant reservations, was even more remarkable. 
The human call participant didn’t entirely understand Duplex’s verbal requests and then told Duplex that, for the number of people it wanted to bring to the restaurant, they didn’t need a reservation. Duplex handled all this without missing a beat. “The amazing thing is that the assistant can actually understand the nuances of conversation,” said Pichai during the keynote. That ability comes by way of neural network technology and intensive machine learning. For as accomplished as Duplex is in making hair appointments and restaurant reservations, it might stumble in deeper or more abstract conversations. In a blog post on Duplex development, Google engineers explained that they constrained Duplex’s training to “closed domains,” or well-defined topics (like dinner reservations and hair appointments). This gave them the ability to perform intense exploration of the topics and focus training. Duplex was guided during training within the domain by “experienced operators” who kept track of mistakes and worked with engineers to improve responses. In short, this means that while Duplex has your hair and dining-out options covered, it could stumble in movie reservations and negotiations with your cable provider. Even so, Duplex fooled two humans. I heard no hesitation or confusion. In the hair salon call, there was no indication that the salon worker thought something was amiss. She wanted to help this young woman make an appointment. What will she think when she learns she was duped by Duplex? Obviously, Duplex’s conversations were also short, each lasting less than a minute, putting them well short of the Turing Test benchmark. I would’ve enjoyed hearing the conversations devolve as they extended a few minutes or more. I’m sure Duplex will soon tackle more domains and longer conversations, and it will someday pass the Turing Test. It’s only a matter of time before Duplex is handling other mundane or difficult calls for us, like calling our parents with our own voices (see WaveNet technology). Eventually, we’ll have our Duplex voices call each other, handling pleasantries and making plans, which Google Assistant can then drop in our Google Calendar. But that’s the future. For now, Duplex’s performance stands as a powerful proof of concept for our long-imagined future of conversational AIs capable of helping, entertaining and engaging with us. It’s the first major step on the path to the AI depicted in the movie Her, in which Joaquin Phoenix starred as a man who falls in love with his chatty voice assistant, played by the disembodied voice of Scarlett Johansson. So, no, Duplex didn’t pass the Turing Test, but I do wonder what Alan Turing would think of it.
Gant Laborde
1.3K
7
https://medium.freecodecamp.org/machine-learning-how-to-go-from-zero-to-hero-40e26f8aa6da?source=---------0----------------
Machine Learning: how to go from Zero to Hero – freeCodeCamp
If your understanding of A.I. and Machine Learning is a big question mark, then this is the blog post for you. Here, I gradually increase your AwesomenessicityTM by gluing inspirational videos together with friendly text. Sit down and relax. These videos take time, and if they don’t inspire you to continue to the next section, fair enough. However, if you find yourself at the bottom of this article, you’ve earned your well-rounded knowledge and passion for this new world. Where you go from there is up to you. A.I. was always cool, from moving a paddle in Pong to lighting you up with combos in Street Fighter. A.I. has always revolved around a programmer’s functional guess at how something should behave. Fun, but programmers aren’t always gifted in programming A.I. as we often see. Just Google “epic game fails” to see glitches in A.I., physics, and sometimes even experienced human players. Regardless, A.I. has a new talent. You can teach a computer to play video games, understand language, and even how to identify people or things. This tip-of-the-iceberg new skill comes from an old concept that only recently got the processing power to exist outside of theory. I’m talking about Machine Learning. You don’t need to come up with advanced algorithms anymore. You just have to teach a computer to come up with its own advanced algorithm. So how does something like that even work? An algorithm isn’t really written as much as it is sort of... bred. I’m not using breeding as an analogy. Watch this short video, which gives excellent commentary and animations to the high-level concept of creating the A.I. Wow! Right? That’s a crazy process! Now how is it that we can’t even understand the algorithm when it’s done? One great visual was when the A.I. was written to beat Mario games. As a human, we all understand how to play a side-scroller, but identifying the predictive strategy of the resulting A.I. is insane. Impressed? There’s something amazing about this idea, right? The only problem is we don’t know Machine Learning, and we don’t know how to hook it up to video games. Fortunately for you, Elon Musk already provided a non-profit company to do the latter. Yes, in a dozen lines of code you can hook up any A.I. you want to countless games/tasks! I have two good answers on why you should care. Firstly, Machine Learning (ML) is making computers do things that we’ve never made computers do before. If you want to do something new, not just new to you, but to the world, you can do it with ML. Secondly, if you don’t influence the world, the world will influence you. Right now significant companies are investing in ML, and we’re already seeing it change the world. Thought-leaders are warning that we can’t let this new age of algorithms exist outside of the public eye. Imagine if a few corporate monoliths controlled the Internet. If we don’t take up arms, the science won’t be ours. I think Christian Heilmann said it best in his talk on ML. The concept is useful and cool. We understand it at a high level, but what the heck is actually happening? How does this work? If you want to jump straight in, I suggest you skip this section and move on to the next “How Do I Get Started” section. If you’re motivated to be a DOer in ML, you won’t need these videos. If you’re still trying to grasp how this could even be a thing, the following video is perfect for walking you through the logic, using the classic ML problem of handwriting. Pretty cool huh? 
That video shows that each layer gets simpler rather than more complicated. Like the function is chewing data into smaller pieces that end in an abstract concept. You can get your hands dirty in interacting with this process on this site (by Adam Harley). It’s cool watching data go through a trained model, but you can even watch your neural network get trained. One of the classic real-world examples of Machine Learning in action is the iris data set from 1936. In a presentation I attended by JavaFXpert’s overview on Machine Learning, I learned how you can use his tool to visualize the adjustment and back propagation of weights to neurons on a neural network. You get to watch it train the neural model! Even if you’re not a Java buff, the presentation Jim gives on all things Machine Learning is a pretty cool 1.5+ hour introduction into ML concepts, which includes more info on many of the examples above. These concepts are exciting! Are you ready to be the Einstein of this new era? Breakthroughs are happening every day, so get started now. There are tons of resources available. I’ll be recommending two approaches. In this approach, you’ll understand Machine Learning down to the algorithms and the math. I know this way sounds tough, but how cool would it be to really get into the details and code this stuff from scratch! If you want to be a force in ML, and hold your own in deep conversations, then this is the route for you. I recommend that you try out Brilliant.org’s app (always great for any science lover) and take the Artificial Neural Network course. This course has no time limits and helps you learn ML while killing time in line on your phone. This one costs money after Level 1. Combine the above with simultaneous enrollment in Andrew Ng’s Stanford course on “Machine Learning in 11 weeks”. This is the course that Jim Weaver recommended in his video above. I’ve also had this course independently suggested to me by Jen Looper. Everyone provides a caveat that this course is tough. For some of you that’s a show stopper, but for others, that’s why you’re going to put yourself through it and collect a certificate saying you did. This course is 100% free. You only have to pay for a certificate if you want one. With those two courses, you’ll have a LOT of work to do. Everyone should be impressed if you make it through because that’s not simple. But more so, if you do make it through, you’ll have a deep understanding of the implementation of Machine Learning that will catapult you into successfully applying it in new and world-changing ways. If you’re not interested in writing the algorithms, but you want to use them to create the next breathtaking website/app, you should jump into TensorFlow and the crash course. TensorFlow is the de facto open-source software library for machine learning. It can be used in countless ways and even with JavaScript. Here’s a crash course. Plenty more information on available courses and rankings can be found here. If taking a course is not your style, you’re still in luck. You don’t have to learn the nitty-gritty of ML in order to use it today. You can efficiently utilize ML as a service in many ways with tech giants who have trained models ready. I would still caution you that there’s no guarantee that your data is safe or even yours, but the offerings of services for ML are quite attractive! Using an ML service might be the best solution for you if you’re excited and able to upload your data to Amazon/Microsoft/Google. 
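If you want a taste of how little code a first model takes, here is a tiny, illustrative sketch that trains a small network on the classic 1936 iris data set with TensorFlow’s Keras API. It is not taken from any of the courses above, and it assumes you have tensorflow and scikit-learn installed.

```python
# An illustrative first model: classify iris flowers with a tiny Keras network.
# Assumes the tensorflow and scikit-learn packages are installed.
import tensorflow as tf
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),  # 4 flower measurements in
    tf.keras.layers.Dense(3, activation="softmax"),                  # 3 iris species out
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=50, verbose=0)
print("test accuracy:", model.evaluate(X_test, y_test, verbose=0)[1])
```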
I like to think of these services as a gateway drug to advanced ML. Either way, it’s good to get started now. I have to say thank you to all the aforementioned people and videos. They were my inspiration to get started, and though I’m still a newb in the ML world, I’m happy to light the path for others as we embrace this awe-inspiring age we find ourselves in. It’s imperative to reach out and connect with people if you take up learning this craft. Without friendly faces, answers, and sounding boards, anything can be hard. Just being able to ask and get a response is a game changer. Add me, and add the people mentioned above. Friendly people with friendly advice helps! See? I hope this article has inspired you and those around you to learn ML!
Emmanuel Ameisen
935
11
https://blog.insightdatascience.com/reinforcement-learning-from-scratch-819b65f074d8?source=---------1----------------
Reinforcement Learning from scratch – Insight Data
Want to learn about applied Artificial Intelligence from leading practitioners in Silicon Valley, New York, or Toronto? Learn more about the Insight Artificial Intelligence Fellows Program. Are you a company working in AI and would like to get involved in the Insight AI Fellows Program? Feel free to get in touch. Recently, I gave a talk at the O’Reilly AI conference in Beijing about some of the interesting lessons we’ve learned in the world of NLP. While there, I was lucky enough to attend a tutorial on Deep Reinforcement Learning (Deep RL) from scratch by Unity Technologies. I thought that the session, led by Arthur Juliani, was extremely informative and wanted to share some big takeaways below. In our conversations with companies, we’ve seen a rise of interesting Deep RL applications, tools and results. In parallel, the inner workings and applications of Deep RL, such as AlphaGo pictured above, can often seem esoteric and hard to understand. In this post, I will give an overview of core aspects of the field that can be understood by anyone. Many of the visuals are from the slides of the talk, and some are new. The explanations and opinions are mine. If anything is unclear, reach out to me here! Deep RL is a field that has seen vast amounts of research interest, including learning to play Atari games, beating pro players at Dota 2, and defeating Go champions. Contrary to many classical Deep Learning problems that often focus on perception (does this image contain a stop sign?), Deep RL adds the dimension of actions that influence the environment (what is the goal, and how do I get there?). In dialog systems for example, classical Deep Learning aims to learn the right response for a given query. On the other hand, Deep Reinforcement Learning focuses on the right sequences of sentences that will lead to a positive outcome, for example a happy customer. This makes Deep RL particularly attractive for tasks that require planning and adaptation, such as manufacturing or self-driving. However, industry applications have trailed behind the rapidly advancing results coming out of the research community. A major reason is that Deep RL often requires an agent to experiment millions of times before learning anything useful. The best way to do this rapidly is by using a simulation environment. This tutorial will be using Unity to create environments to train agents in. For this workshop led by Arthur Juliani and Leon Chen, their goal was to get every participants to successfully train multiple Deep RL algorithms in 4 hours. A tall order! Below, is a comprehensive overview of many of the main algorithms that power Deep RL today. For a more complete set of tutorials, Arthur Juliani wrote an 8-part series starting here. Deep RL can be used to best the top human players at Go, but to understand how that’s done, you first need to understand a few simple concepts, starting with much easier problems. 1/It all starts with slot machines Let’s imagine you are faced with 4 chests that you can pick from at each turn. Each of them have a different average payout, and your goal is to maximize the total payout you receive after a fixed number of turns. This is a classic problem called Multi-armed bandits and is where we will start. The crux of the problem is to balance exploration, which helps us learn about which states are good, and exploitation, where we now use what we know to pick the best slot machine. Here, we will utilize a value function that maps our actions to an estimated reward, called the Q function. 
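To make the bandit setup concrete before we go further, here is a simplified, tabular sketch of the four-chest problem: keep one Q estimate per chest, act epsilon-greedily, and nudge each estimate toward the payouts you actually observe. The payout numbers are made up, and the workshop version uses a small neural network rather than a table, but the idea is the same.

```python
# A simplified, tabular sketch of the 4-chest bandit: keep a Q estimate per chest,
# act epsilon-greedily, and move each estimate toward the observed payouts.
# The true payouts below are invented for illustration.
import random

true_payouts = [1.0, 0.5, 2.0, 0.1]   # hidden average payout of each chest (made up)
Q = [0.0, 0.0, 0.0, 0.0]              # our running estimate of each chest's value
epsilon, lr = 0.1, 0.1

for step in range(1000):
    if random.random() < epsilon:
        a = random.randrange(4)       # explore: pick a random chest
    else:
        a = Q.index(max(Q))           # exploit: pick the best chest so far
    reward = random.gauss(true_payouts[a], 1.0)
    Q[a] += lr * (reward - Q[a])      # nudge the estimate toward what we actually got

print("learned Q values:", [round(q, 2) for q in Q])
```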
First, we’ll initialize all Q values to equal values. Then, we’ll update the Q value of each action (picking each chest) based on how good the payout was after choosing this action. This allows us to learn a good value function. We will approximate our Q function using a neural network (starting with a very shallow one) that learns a probability distribution (by using a softmax) over the 4 potential chests. While the value function tells us how good we estimate each action to be, the policy is the function that determines which actions we end up taking. Intuitively, we might want to use a policy that picks the action with the highest Q value. This performs poorly in practice, as our Q estimates will be very wrong at the start, before we gather enough experience through trial and error. This is why we need to add a mechanism to our policy to encourage exploration. One way to do that is to use epsilon greedy, which consists of taking a random action with probability epsilon. We start with epsilon being close to 1, always choosing random actions, and lower epsilon as we go along and learn more about which chests are good. Eventually, we learn which chests are best. In practice, we might want to take a more subtle approach than either taking the action we think is best or taking a random action. A popular method is Boltzmann Exploration, which adjusts probabilities based on our current estimate of how good each chest is, adding in a randomness factor. 2/Adding different states The previous example was a world in which we were always in the same state, waiting to pick from the same 4 chests in front of us. Most real-world problems consist of many different states. That is what we will add to our environment next. Now, the background behind the chests alternates between 3 colors at each turn, changing the average values of the chests. This means we need to learn a Q function that depends not only on the action (the chest we pick), but also on the state (what the color of the background is). This version of the problem is called Contextual Multi-armed Bandits. Surprisingly, we can use the same approach as before. The only thing we need to add is an extra dense layer to our neural network that will take as input a vector representing the current state of the world. 3/Learning about the consequences of our actions There is another key factor that makes our current problem simpler than most. In most environments, such as in the maze depicted above, the actions that we take have an impact on the state of the world. If we move up on this grid, we might receive a reward or we might receive nothing, but the next turn we will be in a different state. This is where we finally introduce a need for planning. First, we will define our Q function as the immediate reward in our current state, plus the discounted reward we are expecting by taking all of our future actions. This solution works if our Q estimate of states is accurate, so how can we learn a good estimate? We will use a method called Temporal Difference (TD) learning to learn a good Q function. The idea is to only look at a limited number of steps into the future. TD(1), for example, only uses the next 2 states to evaluate the reward. Surprisingly, we can use TD(0), which looks at the current state and our estimate of the reward at the next turn, and get great results. The structure of the network is the same, but we need to go through one forward step before receiving the error. 
We then use this error to back propagate gradients, like in traditional Deep Learning, and update our value estimates. 3+/Introducing Monte Carlo Another method to estimate the eventual success of our actions is Monte Carlo Estimates. This consists of playing out the entire episode with our current policy until we reach an end (success by reaching a green block or failure by reaching a red block in the image above) and use that result to update our value estimates for each traversed state. This allows us to propagate values efficiently in one batch at the end of an episode, instead of every time we make a move. The cost is that we are introducing noise to our estimates, since we attribute very distant rewards to them. 4/The world is rarely discrete The previous methods were using neural networks to approximate our value estimates by mapping from a discrete number of states and actions to a value. In the maze for example, there were 49 states (squares) and 4 actions (move in each adjacent direction). In this environment, we are trying to learn how to balance a ball on a 2 dimensional paddle, by deciding at each time step whether we want to tilt the paddle left or right. Here, the state space becomes continuous (the angle of the paddle, and the position of the ball). The good news is, we can still use Neural Networks to approximate this function! A note about off-policy vs on-policy learning: The methods we used previously, are off-policy methods, meaning we can generate data with any strategy(using epsilon greedy for example) and learn from it. On-policy methods can only learn from actions that were taken following our policy (remember, a policy is the method we use to determine which actions to take). This constrains our learning process, as we have to have an exploration strategy that is built in to the policy itself, but allows us to tie results directly to our reasoning, and enables us to learn more efficiently. The approach we will use here is called Policy Gradients, and is an on-policy method. Previously, we were first learning a value function Q for each action in each state and then building a policy on top. In Vanilla Policy Gradient, we still use Monte Carlo Estimates, but we learn our policy directly through a loss function that increases the probability of choosing rewarding actions. Since we are learning on policy, we cannot use methods such as epsilon greedy (which includes random choices), to get our agent to explore the environment. The way that we encourage exploration is by using a method called entropy regularization, which pushes our probability estimates to be wider, and thus will encourage us to make riskier choices to explore the space. 4+/Leveraging deep learning for representations In practice, many state of the art RL methods require learning both a policy and value estimates. The way we do this with deep learning is by having both be two separate outputs of the same backbone neural network, which will make it easier for our neural network to learn good representations. One method to do this is Advantage Actor Critic (A2C). We learn our policy directly with policy gradients (defined above), and learn a value function using something called Advantage. Instead of updating our value function based on rewards, we update it based on our advantage, which measures how much better or worse an action was than our previous value function estimated it to be. This helps make learning more stable compared to simple Q Learning and Vanilla Policy Gradients. 
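As a rough illustration of the A2C idea just described, here is a minimal sketch of the combined loss, with a policy term weighted by the advantage, a value term, and an entropy bonus that keeps the policy exploring. The shapes and coefficients are illustrative assumptions, not the workshop’s actual Unity/ML-Agents implementation.

```python
# A minimal sketch of an A2C-style loss: a policy head and a value head (assumed to
# share a backbone network elsewhere) trained together. Shapes and coefficients are
# illustrative, not the workshop's actual implementation.
import torch
import torch.nn.functional as F

def a2c_loss(policy_logits, values, actions, returns, entropy_coef=0.01, value_coef=0.5):
    """policy_logits: (batch, n_actions); values, returns: (batch,); actions: (batch,) int64."""
    advantage = returns - values.detach()              # how much better the outcome was than predicted
    log_probs = F.log_softmax(policy_logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(chosen * advantage).mean()         # raise the probability of better-than-expected actions
    value_loss = F.mse_loss(values, returns)           # train the critic toward the observed returns
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()
    return policy_loss + value_coef * value_loss - entropy_coef * entropy  # entropy bonus encourages exploration
```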
5/Learning directly from the screen There is an additional advantage to using Deep Learning for these methods, which is that Deep Neural Networks excel at perceptive tasks. When a human plays a game, the information received is not a list of states, but an image (usually of a screen, or a board, or the surrounding environment). Image-based Learning combines a Convolutional Neural Network (CNN) with RL. In this environment, we pass in a raw image instead of features, and add a 2 layer CNN to our architecture without changing anything else! We can even inspect activations to see what the network picks up on to determine value, and policy. In the example below, we can see that the network uses the current score and distant obstacles to estimate the value of the current state, while focusing on nearby obstacles for determining actions. Neat! As a side note, while toying around with the provided implementation, I’ve found that visual learning is very sensitive to hyperparameters. Changing the discount rate slightly for example, completely prevented the neural network from learning even on a toy application. This is a widely known problem, but it is interesting to see it first hand. 6/Nuanced actions So far, we’ve played with environments with continuous and discrete state spaces. However, every environment we studied had a discrete action space: we could move in one of four directions, or tilt the paddle to the left or right. Ideally, for applications such as self-driving cars, we would like to learn continuous actions, such as turning the steering wheel between 0 and 360 degrees. In this environment called 3D ball world, we can choose to tilt the paddle to any value on each of its axes. This gives us more control as to how we perform actions, but makes the action space much larger. We can approach this by approximating our potential choices with Gaussian distributions. We learn a probability distribution over potential actions by learning the mean and variance of a Gaussian distribution, and our policy we sample from that distribution. Simple, in theory :). 7/Next steps for the brave There are a few concepts that separate the algorithms described above from state of the art approaches. It’s interesting to see that conceptually, the best robotics and game-playing algorithms are not that far away from the ones we just explored: That’s it for this overview, I hope this has been informative and fun! If you are looking to dive deeper into the theory of RL, give Arthur’s posts a read, or diving deeper by following David Silver’s UCL course. If you are looking to learn more about the projects we do at Insight, or how we work with companies, please check us out below, or reach out to me here. Want to learn about applied Artificial Intelligence from leading practitioners in Silicon Valley, New York, or Toronto? Learn more about the Insight Artificial Intelligence Fellows Program. Are you a company working in AI and would like to get involved in the Insight AI Fellows Program? Feel free to get in touch. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. AI Lead at Insight AI @EmmanuelAmeisen Insight Fellows Program - Your bridge to a career in data
Irhum Shafkat
2K
15
https://towardsdatascience.com/intuitively-understanding-convolutions-for-deep-learning-1f6f42faee1?source=---------2----------------
Intuitively Understanding Convolutions for Deep Learning
The advent of powerful and versatile deep learning frameworks in recent years has made it possible to implement convolution layers into a deep learning model an extremely simple task, often achievable in a single line of code. However, understanding convolutions, especially for the first time can often feel a bit unnerving, with terms like kernels, filters, channels and so on all stacked onto each other. Yet, convolutions as a concept are fascinatingly powerful and highly extensible, and in this post, we’ll break down the mechanics of the convolution operation, step-by-step, relate it to the standard fully connected network, and explore just how they build up a strong visual hierarchy, making them powerful feature extractors for images. The 2D convolution is a fairly simple operation at heart: you start with a kernel, which is simply a small matrix of weights. This kernel “slides” over the 2D input data, performing an elementwise multiplication with the part of the input it is currently on, and then summing up the results into a single output pixel. The kernel repeats this process for every location it slides over, converting a 2D matrix of features into yet another 2D matrix of features. The output features are essentially, the weighted sums (with the weights being the values of the kernel itself) of the input features located roughly in the same location of the output pixel on the input layer. Whether or not an input feature falls within this “roughly same location”, gets determined directly by whether it’s in the area of the kernel that produced the output or not. This means the size of the kernel directly determines how many (or few) input features get combined in the production of a new output feature. This is all in pretty stark contrast to a fully connected layer. In the above example, we have 5×5=25 input features, and 3×3=9 output features. If this were a standard fully connected layer, you’d have a weight matrix of 25×9 = 225 parameters, with every output feature being the weighted sum of every single input feature. Convolutions allow us to do this transformation with only 9 parameters, with each output feature, instead of “looking at” every input feature, only getting to “look” at input features coming from roughly the same location. Do take note of this, as it’ll be critical to our later discussion. Before we move on, it’s definitely worth looking into two techniques that are commonplace in convolution layers: Padding and Strides. Padding does something pretty clever to solve this: pad the edges with extra, “fake” pixels (usually of value 0, hence the oft-used term “zero padding”). This way, the kernel when sliding can allow the original edge pixels to be at its center, while extending into the fake pixels beyond the edge, producing an output the same size as the input. The idea of the stride is to skip some of the slide locations of the kernel. A stride of 1 means to pick slides a pixel apart, so basically every single slide, acting as a standard convolution. A stride of 2 means picking slides 2 pixels apart, skipping every other slide in the process, downsizing by roughly a factor of 2, a stride of 3 means skipping every 2 slides, downsizing roughly by factor 3, and so on. More modern networks, such as the ResNet architectures entirely forgo pooling layers in their internal layers, in favor of strided convolutions when needing to reduce their output sizes. Of course, the diagrams above only deals with the case where the image has a single input channel. 
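Before moving on to multiple channels, here is a minimal sketch of the single-channel sliding-window operation, zero padding and stride described above. It is written in plain NumPy purely to make the mechanics explicit; real frameworks use far faster implementations, and the example sizes are just the ones from the discussion:

```python
# A minimal NumPy sketch of a single-channel 2D convolution with optional
# zero padding and stride, to make the sliding-window mechanics explicit.
import numpy as np

def conv2d_single_channel(image, kernel, stride=1, padding=0):
    if padding > 0:
        image = np.pad(image, padding, mode="constant")  # "fake" zero pixels around the edges
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)           # elementwise multiply, then sum
    return out

# 5x5 input, 3x3 kernel -> 3x3 output, exactly as in the example above.
x = np.arange(25, dtype=float).reshape(5, 5)
k = np.ones((3, 3)) / 9.0                                # a simple averaging kernel
print(conv2d_single_channel(x, k).shape)                 # (3, 3)
```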
In practicality, most input images have 3 channels, and that number only increases the deeper you go into a network. It’s pretty easy to think of channels, in general, as being a “view” of the image as a whole, emphasising some aspects, de-emphasising others. So this is where a key distinction between terms comes in handy: whereas in the 1 channel case, where the term filter and kernel are interchangeable, in the general case, they’re actually pretty different. Each filter actually happens to be a collection of kernels, with there being one kernel for every single input channel to the layer, and each kernel being unique. Each filter in a convolution layer produces one and only one output channel, and they do it like so: Each of the kernels of the filter “slides” over their respective input channels, producing a processed version of each. Some kernels may have stronger weights than others, to give more emphasis to certain input channels than others (eg. a filter may have a red kernel channel with stronger weights than others, and hence, respond more to differences in the red channel features than the others). Each of the per-channel processed versions are then summed together to form one channel. The kernels of a filter each produce one version of each channel, and the filter as a whole produces one overall output channel. Finally, then there’s the bias term. The way the bias term works here is that each output filter has one bias term. The bias gets added to the output channel so far to produce the final output channel. And with the single filter case down, the case for any number of filters is identical: Each filter processes the input with its own, different set of kernels and a scalar bias with the process described above, producing a single output channel. They are then concatenated together to produce the overall output, with the number of output channels being the number of filters. A nonlinearity is then usually applied before passing this as input to another convolution layer, which then repeats this process. Even with the mechanics of the convolution layer down, it can still be hard to relate it back to a standard feed-forward network, and it still doesn’t explain why convolutions scale to, and work so much better for image data. Suppose we have a 4×4 input, and we want to transform it into a 2×2 grid. If we were using a feedforward network, we’d reshape the 4×4 input into a vector of length 16, and pass it through a densely connected layer with 16 inputs and 4 outputs. One could visualize the weight matrix W for a layer: And although the convolution kernel operation may seem a bit strange at first, it is still a linear transformation with an equivalent transformation matrix. If we were to use a kernel K of size 3 on the reshaped 4×4 input to get a 2×2 output, the equivalent transformation matrix would be: (Note: while the above matrix is an equivalent transformation matrix, the actual operation is usually implemented as a very different matrix multiplication[2]) The convolution then, as a whole, is still a linear transformation, but at the same time it’s also a dramatically different kind of transformation. For a matrix with 64 elements, there’s just 9 parameters which themselves are reused several times. Each output node only gets to see a select number of inputs (the ones inside the kernel). There is no interaction with any of the other inputs, as the weights to them are set to 0. It’s useful to see the convolution operation as a hard prior on the weight matrix. 
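To make the "hard prior on the weight matrix" concrete, the sketch below constructs the equivalent transformation matrix for the 4×4 input / 3×3 kernel / 2×2 output case discussed above. As the note in the text says, real libraries implement convolution as a very different matrix multiplication; this only illustrates the sparsity and weight sharing:

```python
# Sketch of the "equivalent transformation matrix": the same 3x3 kernel applied
# to a flattened 4x4 input can be written as a 4x16 matrix in which most entries
# are zero and the 9 kernel weights are reused in every row.
import numpy as np

def conv_as_matrix(kernel, in_size=4):
    k = kernel.shape[0]
    out_size = in_size - k + 1                           # 4x4 input, 3x3 kernel -> 2x2 output
    W = np.zeros((out_size * out_size, in_size * in_size))
    for i in range(out_size):
        for j in range(out_size):
            row = i * out_size + j
            for di in range(k):
                for dj in range(k):
                    col = (i + di) * in_size + (j + dj)  # position in the flattened input
                    W[row, col] = kernel[di, dj]         # shared weight; everything else stays 0
    return W

kernel = np.arange(1, 10, dtype=float).reshape(3, 3)
W = conv_as_matrix(kernel)
print(W.shape)         # (4, 16): 64 entries, but only 9 distinct learnable values
print((W == 0).sum())  # most of the matrix is fixed to zero
```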
In this context, by prior, I mean predefined network parameters. For example, when you use a pretrained model for image classification, you use the pretrained network parameters as your prior, as a feature extractor to your final densely connected layer. In that sense, there’s a direct intuition between why both are so efficient (compared to their alternatives). Transfer learning is efficient by orders of magnitude compared to random initialization, because you only really need to optimize the parameters of the final fully connected layer, which means you can have fantastic performance with only a few dozen images per class. Here, you don’t need to optimize all 64 parameters, because we set most of them to zero (and they’ll stay that way), and the rest we convert to shared parameters, resulting in only 9 actual parameters to optimize. This efficiency matters, because when you move from the 784 inputs of MNIST to real world 224×224×3 images, thats over 150,000 inputs. A dense layer attempting to halve the input to 75,000 inputs would still require over 10 billion parameters. For comparison, the entirety of ResNet-50 has some 25 million parameters. So fixing some parameters to 0, and tying parameters increases efficiency, but unlike the transfer learning case, where we know the prior is good because it works on a large general set of images, how do we know this is any good? The answer lies in the feature combinations the prior leads the parameters to learn. Early on in this article, we discussed that: So with backpropagation coming in all the way from the classification nodes of the network, the kernels have the interesting task of learning weights to produce features only from a set of local inputs. Additionally, because the kernel itself is applied across the entire image, the features the kernel learns must be general enough to come from any part of the image. If this were any other kind of data, eg. categorical data of app installs, this would’ve been a disaster, for just because your number of app installs and app type columns are next to each other doesn’t mean they have any “local, shared features” common with app install dates and time used. Sure, the four may have an underlying higher level feature (eg. which apps people want most) that can be found, but that gives us no reason to believe the parameters for the first two are exactly the same as the parameters for the latter two. The four could’ve been in any (consistent) order and still be valid! Pixels however, always appear in a consistent order, and nearby pixels influence a pixel e.g. if all nearby pixels are red, it’s pretty likely the pixel is also red. If there are deviations, that’s an interesting anomaly that could be converted into a feature, and all this can be detected from comparing a pixel with its neighbors, with other pixels in its locality. And this idea is really what a lot of earlier computer vision feature extraction methods were based around. For instance, for edge detection, one can use a Sobel edge detection filter, a kernel with fixed parameters, operating just like the standard one-channel convolution: For a non-edge containing grid (eg. the background sky), most of the pixels are the same value, so the overall output of the kernel at that point is 0. For a grid with an vertical edge, there is a difference between the pixels to the left and right of the edge, and the kernel computes that difference to be non-zero, activating and revealing the edges. 
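The vertical-edge Sobel filter just described can be tried in a few lines; the sketch below uses SciPy's correlate2d, which performs the same sliding multiply-and-sum as the layers above (without flipping the kernel), on a made-up toy image:

```python
# Sobel vertical-edge detection as a fixed-weight, single-channel convolution.
import numpy as np
from scipy.signal import correlate2d

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # fixed, hand-designed weights

# Toy image: dark on the left, bright on the right -> one vertical edge.
img = np.zeros((5, 5))
img[:, 3:] = 1.0

# correlate2d slides the kernel and sums elementwise products, like the layers above.
edges = correlate2d(img, sobel_x, mode="valid")
print(edges)   # ~0 over flat regions, non-zero where left/right neighbours differ
```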
The kernel only works only a 3×3 grids at a time, detecting anomalies on a local scale, yet when applied across the entire image, is enough to detect a certain feature on a global scale, anywhere in the image! So the key difference we make with deep learning is ask this question: Can useful kernels be learnt? For early layers operating on raw pixels, we could reasonably expect feature detectors of fairly low level features, like edges, lines, etc. There’s an entire branch of deep learning research focused on making neural network models interpretable. One of the most powerful tools to come out of that is Feature Visualization using optimization[3]. The idea at core is simple: optimize a image (usually initialized with random noise) to activate a filter as strongly as possible. This does make intuitive sense: if the optimized image is completely filled with edges, that’s strong evidence that’s what the filter itself is looking for and is activated by. Using this, we can peek into the learnt filters, and the results are stunning: One important thing to notice here is that convolved images are still images. The output of a small grid of pixels from the top left of an image will still be on the top left. So you can run another convolution layer on top of another (such as the two on the left) to extract deeper features, which we visualize. Yet, however deep our feature detectors get, without any further changes they’ll still be operating on very small patches of the image. No matter how deep your detectors are, you can’t detect faces from a 3×3 grid. And this is where the idea of the receptive field comes in. A essential design choice of any CNN architecture is that the input sizes grow smaller and smaller from the start to the end of the network, while the number of channels grow deeper. This, as mentioned earlier, is often done through strides or pooling layers. Locality determines what inputs from the previous layer the outputs get to see. The receptive field determines what area of the original input to the entire network the output gets to see. The idea of a strided convolution is that we only process slides a fixed distance apart, and skip the ones in the middle. From a different point of view, we only keep outputs a fixed distance apart, and remove the rest[1]. We then apply a nonlinearity to the output, and per usual, then stack another new convolution layer on top. And this is where things get interesting. Even if were we to apply a kernel of the same size (3×3), having the same local area, to the output of the strided convolution, the kernel would have a larger effective receptive field: This is because the output of the strided layer still does represent the same image. It is not so much cropping as it is resizing, only thing is that each single pixel in the output is a “representative” of a larger area (of whose other pixels were discarded) from the same rough location from the original input. So when the next layer’s kernel operates on the output, it’s operating on pixels collected from a larger area. (Note: if you’re familiar with dilated convolutions, note that the above is not a dilated convolution. 
Both are methods of increasing the receptive field, but dilated convolutions are a single layer, while this takes place on a regular convolution following a strided convolution, with a nonlinearity inbetween) This expansion of the receptive field allows the convolution layers to combine the low level features (lines, edges), into higher level features (curves, textures), as we see in the mixed3a layer. Followed by a pooling/strided layer, the network continues to create detectors for even higher level features (parts, patterns), as we see for mixed4a. The repeated reduction in image size across the network results in, by the 5th block on convolutions, input sizes of just 7×7, compared to inputs of 224×224. At this point, each single pixel represents a grid of 32×32 pixels, which is huge. Compared to earlier layers, where an activation meant detecting an edge, here, an activation on the tiny 7×7 grid is one for a very high level feature, such as for birds. The network as a whole progresses from a small number of filters (64 in case of GoogLeNet), detecting low level features, to a very large number of filters(1024 in the final convolution), each looking for an extremely specific high level feature. Followed by a final pooling layer, which collapses each 7×7 grid into a single pixel, each channel is a feature detector with a receptive field equivalent to the entire image. Compared to what a standard feedforward network would have done, the output here is really nothing short of awe-inspiring. A standard feedforward network would have produced abstract feature vectors, from combinations of every single pixel in the image, requiring intractable amounts of data to train. The CNN, with the priors imposed on it, starts by learning very low level feature detectors, and as across the layers as its receptive field is expanded, learns to combine those low-level features into progressively higher level features; not an abstract combination of every single pixel, but rather, a strong visual hierarchy of concepts. By detecting low level features, and using them to detect higher level features as it progresses up its visual hierarchy, it is eventually able to detect entire visual concepts such as faces, birds, trees, etc, and that’s what makes them such powerful, yet efficient with image data. With the visual hierarchy CNNs build, it is pretty reasonable to assume that their vision systems are similar to humans. And they’re really great with real world images, but they also fail in ways that strongly suggest their vision systems aren’t entirely human-like. The most major problem: Adversarial Examples[4], examples which have been specifically modified to fool the model. Adversarial examples would be a non-issue if the only tampered ones that caused the models to fail were ones that even humans would notice. The problem is, the models are susceptible to attacks by samples which have only been tampered with ever so slightly, and would clearly not fool any human. This opens the door for models to silently fail, which can be pretty dangerous for a wide range of applications from self-driving cars to healthcare. Robustness against adversarial attacks is currently a highly active area of research, the subject of many papers and even competitions, and solutions will certainly improve CNN architectures to become safer and more reliable. 
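As a side note on the receptive-field arithmetic above (a 224×224 input reduced to 7×7, with each output pixel standing in for a 32×32 patch), the standard recurrence can be computed in a few lines. The layer stack below is an illustrative stand-in, not the exact GoogLeNet configuration:

```python
# Back-of-the-envelope receptive field arithmetic for a stack of conv/pool layers,
# using the standard recurrence; the layer list here is illustrative only.
def receptive_field(layers):
    """layers: list of (kernel_size, stride) tuples, applied in order."""
    rf, jump = 1, 1                  # start: each "pixel" sees itself, spacing of 1
    for k, s in layers:
        rf = rf + (k - 1) * jump     # the kernel reaches (k - 1) * jump further into the input
        jump = jump * s              # each stride multiplies the spacing between outputs
    return rf, jump

# Five stride-2 stages take a 224x224 input down to 7x7 (224 / 2**5 = 7),
# and each output pixel ends up standing in for a 32x32 patch of the input.
layers = [(3, 2)] * 5
print(receptive_field(layers))       # receptive field grows; jump becomes 2**5 = 32
```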
CNNs were the models that allowed computer vision to scale from simple applications to powering sophisticated products and services, ranging from face detection in your photo gallery to making better medical diagnoses. They might be the key method in computer vision going forward, or some other new breakthrough might just be around the corner. Regardless, one thing is for sure: they’re nothing short of amazing, at the heart of many present-day innovative applications, and are most certainly worth deeply understanding. Hope you enjoyed this article! If you’d like to stay connected, you’ll find me on Twitter here. If you have a question, comments are welcome! I find them to be useful to my own learning process as well. 
Abhishek Parbhakar
937
6
https://towardsdatascience.com/must-know-information-theory-concepts-in-deep-learning-ai-e54a5da9769d?source=---------3----------------
Must know Information Theory concepts in Deep Learning (AI)
Information theory is an important field that has made significant contributions to deep learning and AI, and yet it is unknown to many. Information theory can be seen as a sophisticated amalgamation of the basic building blocks of deep learning: calculus, probability and statistics. Some examples of concepts in AI that come from Information theory or related fields: In the early 20th century, scientists and engineers were struggling with the question: “How do we quantify information? Is there an analytical way or a mathematical measure that can tell us about the information content?”. For example, consider the two sentences below: It is not difficult to tell that the second sentence gives us more information, since it also tells us that Bruno is “big” and “brown” in addition to being a “dog”. How can we quantify the difference between the two sentences? Can we have a mathematical measure that tells us how much more information the second sentence has compared to the first? Scientists were struggling with these questions. Semantics, domain and form of data only added to the complexity of the problem. Then, mathematician and engineer Claude Shannon came up with the idea of “Entropy” that changed our world forever and marked the beginning of the “Digital Information Age”. Shannon proposed that the “semantic aspects of data are irrelevant”, and that the nature and meaning of data don’t matter when it comes to information content. Instead, he quantified information in terms of probability distributions and “uncertainty”. Shannon also introduced the term “bit”, which he humbly credited to his colleague John Tukey. This revolutionary idea not only laid the foundation of Information Theory but also opened new avenues for progress in fields like artificial intelligence. Below we discuss four popular, widely used and must-know information-theoretic concepts in deep learning and data science: Also called Information Entropy or Shannon Entropy. Entropy gives a measure of the uncertainty in an experiment. Let’s consider two experiments: If we compare the two experiments, in exp 2 it is easier to predict the outcome than in exp 1. So, we can say that exp 1 is inherently more uncertain/unpredictable than exp 2. This uncertainty in the experiment is measured using entropy. Therefore, the more inherent uncertainty there is in the experiment, the higher its entropy. In other words, the less predictable the experiment, the higher the entropy. The probability distribution of the experiment is used to calculate the entropy. A deterministic experiment, which is completely predictable, say tossing a coin with P(H)=1, has entropy zero. An experiment which is completely random, say rolling a fair die, is least predictable, has maximum uncertainty, and has the highest entropy among such experiments. Another way to look at entropy is as the average information gained when we observe the outcomes of a random experiment. The information gained from an outcome of an experiment is defined as a function of the probability of occurrence of that outcome. The rarer the outcome, the more information is gained from observing it. For example, in a deterministic experiment we always know the outcome, so no new information is gained from observing it, and hence the entropy is zero. For a discrete random variable X, with possible outcomes (states) x_1, ..., x_n, the entropy, in units of bits, is defined as: H(X) = −Σ_i p(x_i) log2 p(x_i), where p(x_i) is the probability of the i-th outcome of X. Cross entropy is used to compare two probability distributions. It tells us how similar two distributions are. 
Cross entropy between two probability distributions p and q, defined over the same set of outcomes, is given by: H(p, q) = −Σ_x p(x) log2 q(x). Mutual information is a measure of the mutual dependency between two probability distributions or random variables. It tells us how much information about one variable is carried by the other variable. Mutual information captures dependency between random variables and is more general than the vanilla correlation coefficient, which captures only linear relationships. The mutual information of two discrete random variables X and Y is defined as: I(X; Y) = Σ_x Σ_y p(x, y) log2 [ p(x, y) / (p(x) p(y)) ], where p(x,y) is the joint probability distribution of X and Y, and p(x) and p(y) are the marginal probability distributions of X and Y respectively. Also called Relative Entropy. KL divergence is another measure of the similarity between two probability distributions. It measures how much one distribution diverges from the other. Suppose we have some data and the true distribution underlying it is ‘P’. But we don’t know this ‘P’, so we choose a new distribution ‘Q’ to approximate this data. Since ‘Q’ is just an approximation, it won’t be able to approximate the data as well as ‘P’, and some information loss will occur. This information loss is given by KL divergence. The KL divergence between ‘P’ and ‘Q’ tells us how much information we lose when we try to approximate data given by ‘P’ with ‘Q’. The KL divergence of a probability distribution Q from another probability distribution P is defined as: D_KL(P || Q) = Σ_x p(x) log2 [ p(x) / q(x) ]. KL divergence is commonly used in Variational Autoencoders, an unsupervised machine learning technique. (A short numerical sketch of these quantities in code follows at the end of this article.) Information Theory was originally formulated by mathematician and electrical engineer Claude Shannon in his seminal paper “A Mathematical Theory of Communication” in 1948. Note: the terms experiment, random variable, AI, machine learning, deep learning and data science have been used loosely above, but they have technically distinct meanings. In case you liked the article, do follow me, Abhishek Parbhakar, for more articles related to AI, philosophy and economics.
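Here is the small numerical sketch referred to above, computing entropy, cross entropy and KL divergence in bits for made-up example distributions (the coin distributions are illustrative, not taken from the article):

```python
# Numerical sketch of the four formulas above (in bits, i.e. log base 2),
# for simple made-up distributions with no zero-probability outcomes.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log2(p))

def cross_entropy(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return -np.sum(p * np.log2(q))

def kl_divergence(p, q):
    return cross_entropy(p, q) - entropy(p)   # D_KL(P || Q) = H(P, Q) - H(P)

fair_coin   = [0.5, 0.5]
biased_coin = [0.9, 0.1]

print(entropy(fair_coin))                     # 1.0 bit: maximum uncertainty for two outcomes
print(entropy(biased_coin))                   # ~0.47 bits: more predictable, lower entropy
print(kl_divergence(fair_coin, biased_coin))  # > 0: information lost approximating P with Q
```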
Aman Dalmia
2.3K
17
https://blog.usejournal.com/what-i-learned-from-interviewing-at-multiple-ai-companies-and-start-ups-a9620415e4cc?source=---------4----------------
What I learned from interviewing at multiple AI companies and start-ups
Over the past 8 months, I’ve been interviewing at various companies like Google’s DeepMind, Wadhwani Institute of AI, Microsoft, Ola, Fractal Analytics, and a few others primarily for the roles — Data Scientist, Software Engineer & Research Engineer. In the process, not only did I get an opportunity to interact with many great minds, but also had a peek at myself along with a sense of what people really look for when interviewing someone. I believe that if I’d had this knowledge before, I could have avoided many mistakes and have prepared in a much better manner, which is what the motivation behind this post is, to be able to help someone bag their dream place of work. This post arose from a discussion with one of my juniors on the lack of really fulfilling job opportunities offered through campus placements for people working in AI. Also, when I was preparing, I noticed people using a lot of resources but as per my experience over the past months, I realised that one can do away with a few minimal ones for most roles in AI, all of which I’m going to mention at the end of the post. I begin with How to get noticed a.k.a. the interview. Then I provide a List of companies and start-ups to apply, which is followed by How to ace that interview. Based on whatever experience I’ve had, I add a section on What we should strive to work for. I conclude with Minimal Resources you need for preparation. NOTE: For people who are sitting for campus placements, there are two things I’d like to add. Firstly, most of what I’m going to say (except for the last one maybe) is not going to be relevant to you for placements. But, and this is my second point, as I mentioned before, opportunities on campus are mostly in software engineering roles having no intersection with AI. So, this post is specifically meant for people who want to work on solving interesting problems using AI. Also, I want to add that I haven’t cleared all of these interviews but I guess that’s the essence of failure — it’s the greatest teacher! The things that I mention here may not all be useful but these are things that I did and there’s no way for me to know what might have ended up making my case stronger. To be honest, this step is the most important one. What makes off-campus placements so tough and exhausting is getting the recruiter to actually go through your profile among the plethora of applications that they get. Having a contact inside the organisation place a referral for you would make it quite easy, but, in general, this part can be sub-divided into three keys steps: a) Do the regulatory preparation and do that well: So, with regulatory preparation, I mean —a LinkedIn profile, a Github profile, a portfolio website and a well-polished CV. Firstly, your CV should be really neat and concise. Follow this guide by Udacity for cleaning up your CV — Resume Revamp. It has everything that I intend to say and I’ve been using it as a reference guide myself. As for the CV template, some of the in-built formats on Overleaf are quite nice. I personally use deedy-resume. Here’s a preview: As it can be seen, a lot of content can be fit into one page. However, if you really do need more than that, then the format linked above would not work directly. Instead, you can find a modified multi-page format of the same here. The next most important thing to mention is your Github profile. A lot of people underestimate the potential of this, just because unlike LinkedIn, it doesn’t have a “Who Viewed Your Profile” option. 
People DO go through your Github because that’s the only way they have to validate what you have mentioned in your CV, given that there’s a lot of noise today with people associating all kinds of buzzwords with their profile. Especially for data science, open-source has a big role to play too with majority of the tools, implementations of various algorithms, lists of learning resources, all being open-sourced. I discuss the benefits of getting involved in Open-Source and how one can start from scratch in an earlier post here. The bare minimum for now should be: • Create a Github account if you don’t already have one.• Create a repository for each of the projects that you have done.• Add documentation with clear instructions on how to run the code• Add documentation for each file mentioning the role of each function, the meaning of each parameter, proper formatting (e.g. PEP8 for Python) along with a script to automate the previous step (Optional). Moving on, the third step is what most people lack, which is having a portfolio website demonstrating their experience and personal projects. Making a portfolio indicates that you are really serious about getting into the field and adds a lot of points to the authenticity factor. Also, you generally have space constraints on your CV and tend to miss out on a lot of details. You can use your portfolio to really delve deep into the details if you want to and it’s highly recommended to include some sort of visualisation or demonstration of the project/idea. It’s really easy to create one too as there are a lot of free platforms with drag-and-drop features making the process really painless. I personally use Weebly which is a widely used tool. It’s better to have a reference to begin with. There are a lot of awesome ones out there but I referred to Deshraj Yadav’s personal website to begin with making mine: Finally, a lot of recruiters and start-ups have nowadays started using LinkedIn as their go-to platform for hiring. A lot of good jobs get posted there. Apart from recruiters, the people working at influential positions are quite active there as well. So, if you can grab their attention, you have a good chance of getting in too. Apart from that, maintaining a clean profile is necessary for people to have the will to connect with you. An important part of LinkedIn is their search tool and for you to show up, you must have the relevant keywords interspersed over your profile. It took me a lot of iterations and re-evaluations to finally have a decent one. Also, you should definitely ask people with or under whom you’ve worked with to endorse you for your skills and add a recommendation talking about their experience of working with you. All of this increases your chance of actually getting noticed. I’ll again point towards Udacity’s guide for LinkedIn and Github profiles. All this might seem like a lot, but remember that you don’t need to do it in a single day or even a week or a month. It’s a process, it never ends. Setting up everything at first would definitely take some effort but once it’s there and you keep updating it regularly as events around you keep happening, you’ll not only find it to be quite easy, but also you’ll be able to talk about yourself anywhere anytime without having to explicitly prepare for it because you become so aware about yourself. b) Stay authentic: I’ve seen a lot of people do this mistake of presenting themselves as per different job profiles. 
According to me, it’s always better to first decide what actually interests you, what would you be happy doing and then search for relevant opportunities; not the other way round. The fact that the demand for AI talent surpasses the supply for the same gives you this opportunity. Spending time on your regulatory preparation mentioned above would give you an all-around perspective on yourself and help make this decision easier. Also, you won’t need to prepare answers to various kinds of questions that you get asked during an interview. Most of them would come out naturally as you’d be talking about something you really care about. c) Networking: Once you’re done with a), figured out b), Networking is what will actually help you get there. If you don’t talk to people, you miss out on hearing about many opportunities that you might have a good shot at. It’s important to keep connecting with new people each day, if not physically, then on LinkedIn, so that upon compounding it after many days, you have a large and strong network. Networking is NOT messaging people to place a referral for you. When I was starting off, I did this mistake way too often until I stumbled upon this excellent article by Mark Meloon, where he talks about the importance of building a real connection with people by offering our help first. Another important step in networking is to get your content out. For example, if you’re good at something, blog about it and share that blog on Facebook and LinkedIn. Not only does this help others, it helps you as well. Once you have a good enough network, your visibility increases multi-fold. You never know how one person from your network liking or commenting on your posts, may help you reach out to a much broader audience including people who might be looking for someone of your expertise. I’m presenting this list in alphabetical order to avoid the misinterpretation of any specific preference. However, I do place a “*” on the ones that I’d personally recommend. This recommendation is based on either of the following: mission statement, people, personal interaction or scope of learning. More than 1 “*” is purely based on the 2nd and 3rd factors. Your interview begins the moment you have entered the room and a lot of things can happen between that moment and the time when you’re asked to introduce yourself — your body language and the fact that you’re smiling while greeting them plays a big role, especially when you’re interviewing for a start-up as culture-fit is something that they extremely care about. You need to understand that as much as the interviewer is a stranger to you, you’re a stranger to him/her too. So, they’re probably just as nervous as you are. It’s important to view the interview as more of a conversation between yourself and the interviewer. Both of you are looking for a mutual fit — you are looking for an awesome place to work at and the interviewer is looking for an awesome person (like you) to work with. So, make sure that you’re feeling good about yourself and that you take the charge of making the initial moments of your conversation pleasant for them. And the easiest way I know how to make that happen is to smile. There are mostly two types of interviews — one, where the interviewer has come with come prepared set of questions and is going to just ask you just that irrespective of your profile and the second, where the interview is based on your CV. I’ll start with the second one. 
This kind of interview generally begins with a “Can you tell me a bit about yourself?”. At this point, 2 things are a big NO — talking about your GPA in college and talking about your projects in detail. An ideal statement should be about a minute or two long, should give a good idea on what have you been doing till now, and it’s not restricted to academics. You can talk about your hobbies like reading books, playing sports, meditation, etc — basically, anything that contributes to defining you. The interviewer will then take something that you talk about here as a cue for his next question, and then the technical part of the interview begins. The motive of this kind of interview is to really check whether whatever you have written on your CV is true or not: There would be a lot of questions on what could be done differently or if “X” was used instead of “Y”, what would have happened. At this point, it’s important to know the kind of trade-offs that is usually made during implementation, for e.g. if the interviewer says that using a more complex model would have given better results, then you might say that you actually had less data to work with and that would have lead to overfitting. In one of the interviews, I was given a case-study to work on and it involved designing algorithms for a real-world use case. I’ve noticed that once I’ve been given the green flag to talk about a project, the interviewers really like it when I talk about it in the following flow: Problem > 1 or 2 previous approaches > Our approach > Result > Intuition The other kind of interview is really just to test your basic knowledge. Don’t expect those questions to be too hard. But they would definitely scratch every bit of the basics that you should be having, mainly based around Linear Algebra, Probability, Statistics, Optimisation, Machine Learning and/or Deep Learning. The resources mentioned in the Minimal Resources you need for preparation section should suffice, but make sure that you don’t miss out one bit among them. The catch here is the amount of time you take to answer those questions. Since these cover the basics, they expect that you should be answering them almost instantly. So, do your preparation accordingly. Throughout the process, it’s important to be confident and honest about what you know and what you don’t know. If there’s a question that you’re certain you have no idea about, say it upfront rather than making “Aah”, “Um” sounds. If some concept is really important but you are struggling with answering it, the interviewer would generally (depending on how you did in the initial parts) be happy to give you a hint or guide you towards the right solution. It’s a big plus if you manage to pick their hints and arrive at the correct solution. Try to not get nervous and the best way to avoid that is by, again, smiling. Now we come to the conclusion of the interview where the interviewer would ask you if you have any questions for them. It’s really easy to think that your interview is done and just say that you have nothing to ask. I know many people who got rejected just because of failing at this last question. As I mentioned before, it’s not only you who is being interviewed. You are also looking for a mutual fit with the company itself. So, it’s quite obvious that if you really want to join a place, you must have many questions regarding the work culture there or what kind of role are they seeing you in. It can be as simple as being curious about the person interviewing you. 
There’s always something to learn from everything around you and you should make sure that you leave the interviewer with the impression that you’re truly interested in being a part of their team. A final question that I’ve started asking all my interviewers, is for a feedback on what they might want me to improve on. This has helped me tremendously and I still remember every feedback that I’ve gotten which I’ve incorporated into my daily life. That’s it. Based on my experience, if you’re just honest about yourself, are competent, truly care about the company you’re interviewing for and have the right mindset, you should have ticked all the right boxes and should be getting a congratulatory mail soon 😄 We live in an era full of opportunities and that applies to anything that you love. You just need to strive to become the best at it and you will find a way to monetise it. As Gary Vaynerchuk (just follow him already) says: This is a great time to be working in AI and if you’re truly passionate about it, you have so much that you can do with AI. You can empower so many people that have always been under-represented. We keep nagging about the problems surrounding us, but there’s been never such a time where common people like us can actually do something about those problems, rather than just complaining. Jeffrey Hammerbacher (Founder, Cloudera) had famously said: We can do so much with AI than we can ever imagine. There are many extremely challenging problems out there which require incredibly smart people like you to put your head down on and solve. You can make many lives better. Time to let go of what is “cool”, or what would “look good”. THINK and CHOOSE wisely. Any Data Science interview comprises of questions mostly of a subset of the following four categories: Computer Science, Math, Statistics and Machine Learning. If you’re not familiar with the math behind Deep Learning, then you should consider going over my last post for resources to understand them. However, if you are comfortable, I’ve found that the chapters 2, 3 and 4 of the Deep Learning Book are enough to prepare/revise for theoretical questions during such interviews. I’ve been preparing summaries for a few chapters which you can refer to where I’ve tried to even explain a few concepts that I found challenging to understand at first, in case you are not willing to go through the entire chapters. And if you’ve already done a course on probability, you should be comfortable answering a few numerical as well. For stats, covering these topics should be enough. Now, the range of questions here can vary depending on the type of position you are applying for. If it’s a more traditional Machine Learning based interview where they want to check your basic knowledge in ML, you can complete any one of the following courses:- Machine Learning by Andrew Ng — CS 229- Machine Learning course by Caltech Professor Yaser Abu-Mostafa Important topics are: Supervised Learning (Classification, Regression, SVM, Decision Tree, Random Forests, Logistic Regression, Multi-layer Perceptron, Parameter Estimation, Bayes’ Decision Rule), Unsupervised Learning (K-means Clustering, Gaussian Mixture Models), Dimensionality Reduction (PCA). Now, if you’re applying for a more advanced position, there’s a high chance that you might be questioned on Deep Learning. In that case, you should be very comfortable with Convolutional Neural Networks (CNNs) and/or (depending upon what you’ve worked on) Recurrent Neural Networks (RNNs) and their variants. 
And by being comfortable, you must know what is the fundamental idea behind Deep Learning, how CNNs/RNNs actually worked, what kind of architectures have been proposed and what has been the motivation behind those architectural changes. Now, there’s no shortcut for this. Either you understand them or you put enough time to understand them. For CNNs, the recommended resource is Stanford’s CS 231N and CS 224N for RNNs. I found this Neural Network class by Hugo Larochelle to be really enlightening too. Refer this for a quick refresher too. Udacity coming to the aid here too. By now, you should have figured out that Udacity is a really important place for an ML practitioner. There are not a lot of places working on Reinforcement Learning (RL) in India and I too am not experienced in RL as of now. So, that’s one thing to add to this post sometime in the future. Getting placed off-campus is a long journey of self-realisation. I realise that this has been another long post and I’m again extremely grateful to you for valuing my thoughts. I hope that this post finds a way of being useful to you and that it helped you in some way to prepare for your next Data Science interview better. If it did, I request you to really think about what I talk about in What we should strive to work for. I’m very thankful to my friends from IIT Guwahati for their helpful feedback, especially Ameya Godbole, Kothapalli Vignesh and Prabal Jain. A majority of what I mention here, like “viewing an interview as a conversation” and “seeking feedback from our interviewers”, arose from multiple discussions with Prabal who has been advising me constantly on how I can improve my interviewing skills. This story is published in Noteworthy, where thousands come every day to learn about the people & ideas shaping the products we love. Follow our publication to see more product & design stories featured by the Journal team. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. AI Fanatic • Math Lover • Dreamer The official Journal blog
Gaurav Kaila
2.1K
10
https://medium.com/nanonets/how-we-flew-a-drone-to-monitor-construction-projects-in-africa-using-deep-learning-b792f5c9c471?source=---------5----------------
How to easily automate Drone-based monitoring using Deep Learning
This article is a comprehensive overview of using deep-learning-based object detection methods for aerial imagery via drones. Did you know that drones and their associated services are set to be a $50 billion industry by 2023? Currently, drones are being used in domains such as agriculture, construction, public safety and security, to name a few, and are rapidly being adopted by others. With deep-learning-based computer vision now powering these drones, industry experts are predicting unprecedented use in previously unimaginable or infeasible applications. We explore some of these applications, along with challenges in the automation of drone-based monitoring through deep learning. Finally, a case study is presented for automating remote inspection of construction projects in Africa using the Nanonets machine learning framework. People have always been fascinated by the view of the world from above: building watchtowers and high fort walls, and climbing the highest mountain peaks. To capture a glimpse and share it with the world, people went to great lengths to defy gravity, enlisting the help of ladders, tall buildings, kites, balloons, planes, and rockets. Today, access to drones that can fly as high as 2 km is possible even for the general public. These drones have high-resolution cameras attached to them that are capable of acquiring quality images which can be used for various kinds of analysis. With easier access to drones, we’re seeing a lot of interest and activity from photographers and hobbyists, who are using them for creative projects such as capturing inequality in South Africa or breathtaking views of New York which might make Woody Allen proud. We explore some here: Energy: Inspection of solar farms. Routine inspection and maintenance is a herculean task for solar farms. The traditional manual inspection method can only support an inspection frequency of once every three months. Because of the hostile environment, solar panels may develop defects; broken solar panel units reduce the power output efficiency. Agriculture: Early plant disease detection. Researchers at Imperial College London are mounting multi-spectral cameras on drones that will use special filters to capture reflected light from selected regions of the electromagnetic spectrum. Stressed plants typically display a ‘spectral signature’ that distinguishes them from healthy plants. Public Safety: Shark detection. Analysis of an overhead view of a large mass of land or water can yield a vast amount of information in terms of security and public safety. One such example is spotting sharks in the water off the coast of Australia. The Australia-based Westpac Group has developed a deep-learning-based object detection system to detect sharks in the water. There are various other applications of aerial images, such as Civil Engineering (routine bridge inspections, power line surveillance and traffic surveying), Oil and Gas (on- and offshore inspection of oil and gas platforms, drilling rigs), Public Safety (motor vehicle accidents, nuclear accidents, structural fires, ship collisions, plane and train crashes) and Security (traffic surveillance, border surveillance, coastal surveillance, controlling hostile demonstrations and rioting). To comprehensively capture terrain and landscapes, the process of acquiring aerial images can be summarised in two steps. After image stitching, the generated map can be used for various kinds of analysis for the applications mentioned above. 
High-resolution aerial imagery is increasingly available at the global scale and contains an abundance of information about features of interest that could be correlated with maintenance, land development, disease control, defect localisation, surveillance, etc. Unfortunately, such data are highly unstructured and thus challenging to extract meaningful insights from at scale, even with intensive manual analysis. For eg, classification of urban land use is typically based on surveys performed by trained professionals. As such, this task is labor-intensive, infrequent, slow, and costly. As a result, such data are mostly available in developed countries and big cities that have the resources and the vision necessary to collect and curate it. Another motivation for automating the analysis of aerial imagery stems from the urgency of predicting changes in the region of interest. For eg, crowd counting and crowd behaviour is frequently done during large public gatherings such as concerts, football matches, protests, etc. Traditionally, a human is behind the analysis of images being streamed from a CCTV camera directly to the command centre. As you may imagine, there are several problems with this approach such as human latency or error in detecting an event and lack of sufficient views via standard-static CCTV cameras. Below are some of the commonly occurring challenges when using aerial imagery. There are several challenges to overcome when automating the analysis of drone imagery. Following lists a few of them with a prospective solution: Pragmatic Master, a South-African robotics-as-a-service collaborated with Nanonets for automation of remotely monitoring progress of a housing construction project in Africa. We aim to detect the following infrastructure to capture the construction progress of a house in it’s various stages : a foundation (start), wallplate (in-progress), roof (partially complete), apron (finishing touches) and geyser (ready-to-move in) Pragmatic Master chose Nanonets as it’s deep learning provider because of it’s easy-to-use web platform and plug&play APIs. The end-to-end process of using the Nanonets API is as simple as four steps. 2. Labelling of images: Labelling images is probably the hardest and the most time-consuming step in any supervised machine learning pipeline, but at Nanonets we have this covered for you. We have in-house experts that have multiple years of working with aerial images. They will annotate your images with high precision and accuracy to aid better model training. For the Pragmatic Master use-case, we were labelling the following objects and their total count in all the images. 3. Model training: At Nanonets we employ the principle of Transfer Learning while training on your images. This involves re-training a pre-trained model that has already been pre-trained with a large number of aerial images. This helps the model identify micro patterns such as edges, lines and contours easily on your images and focus on the more specific macro patterns such as houses, trees, humans, cars, etc. Transfer learning also gives a boost in term of training time as the model does not need to be trained for a large number of iterations to give a good performance. Our proprietary deep learning software smartly selects the best model along with optimising the hyper-parameters for your use-case. This involves searching through multiple models and through a hyperspace of parameters using advanced search algorithms. 
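Nanonets' training pipeline is proprietary, so the sketch below is not its API. It only illustrates the generic transfer-learning recipe described above, using torchvision's pre-trained Faster R-CNN as a stand-in detector; the class list matches this case study, while the model choice, dataset and training loop are assumptions:

```python
# NOT the Nanonets pipeline: a generic transfer-learning sketch in PyTorch/torchvision,
# re-using a detector pre-trained on a large image corpus and swapping in a new
# prediction head for the five construction-stage classes described above.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

CLASSES = ["foundation", "wallplate", "roof", "apron", "geyser"]

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# Replace only the classification head: the pre-trained backbone keeps generic
# low-level features (edges, lines, contours); the new head learns the site-specific objects.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=len(CLASSES) + 1)  # +1 for background

# Fine-tuning loop (dataset and optimizer setup omitted): in training mode,
# torchvision detection models return a dict of losses given images and ground-truth boxes.
# for images, targets in data_loader:
#     losses = model(images, targets)
#     loss = sum(losses.values())
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```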
The hardest objects to detect are the smallest ones, due to their low resolution. Our model training strategy is optimised to detect very small objects such as Geysers and Aprons, which cover an area of only a few pixels. The mean average precision per class that we get is as follows: Roof: 95.1%, Geyser: 88%, Wallplate: 92%, Apron: 81%. Note: Adding more images can lead to an increase in the mean average precision. Our API also supports detecting multiple objects in the same image, such as Roofs and Aprons in one image. 4. Test & Integrate: Once the model is trained, you can either integrate Nanonets’ API directly into your system, or use the Docker image we provide with the trained model and inference code. Docker images can easily scale and provide a fault-tolerant inference system. Customer trust is our top priority. We are committed to providing you ownership and control over your content at all times. We provide two plans for using our service. For both plans, we use highly sophisticated data privacy and security protocols in collaboration with Amazon Web Services, our cloud partner. Your dataset is anonymised and goes through minimal human intervention during the pre-processing and training process. All our human labellers have signed a non-disclosure agreement (NDA) to protect your data from falling into the wrong hands. As we believe in the philosophy of “Your data is yours!”, you can request us to delete your data from our servers at any stage. NanoNets is a web service that makes it easy to use Deep Learning. You can build a model with your own data to achieve high accuracy and use our APIs to integrate it into your application. Pragmatic Master is a South African robotics-as-a-service company that provides camera-mounted drones to acquire images of construction, farming and mining sites. These images are analysed to track progress, identify challenges, eliminate inefficiencies and provide an overall aerial view of the site. 
James Loy
8.5K
6
https://towardsdatascience.com/how-to-build-your-own-neural-network-from-scratch-in-python-68998a08e4f6?source=---------6----------------
How to build your own Neural Network from scratch in Python
Motivation: As part of my personal journey to gain a better understanding of Deep Learning, I’ve decided to build a Neural Network from scratch without a deep learning library like TensorFlow. I believe that understanding the inner workings of a Neural Network is important to any aspiring Data Scientist. This article contains what I’ve learned, and hopefully it’ll be useful for you as well! Most introductory texts to Neural Networks brings up brain analogies when describing them. Without delving into brain analogies, I find it easier to simply describe Neural Networks as a mathematical function that maps a given input to a desired output. Neural Networks consist of the following components The diagram below shows the architecture of a 2-layer Neural Network (note that the input layer is typically excluded when counting the number of layers in a Neural Network) Creating a Neural Network class in Python is easy. Training the Neural Network The output ŷ of a simple 2-layer Neural Network is: You might notice that in the equation above, the weights W and the biases b are the only variables that affects the output ŷ. Naturally, the right values for the weights and biases determines the strength of the predictions. The process of fine-tuning the weights and biases from the input data is known as training the Neural Network. Each iteration of the training process consists of the following steps: The sequential graph below illustrates the process. As we’ve seen in the sequential graph above, feedforward is just simple calculus and for a basic 2-layer neural network, the output of the Neural Network is: Let’s add a feedforward function in our python code to do exactly that. Note that for simplicity, we have assumed the biases to be 0. However, we still need a way to evaluate the “goodness” of our predictions (i.e. how far off are our predictions)? The Loss Function allows us to do exactly that. There are many available loss functions, and the nature of our problem should dictate our choice of loss function. In this tutorial, we’ll use a simple sum-of-sqaures error as our loss function. That is, the sum-of-squares error is simply the sum of the difference between each predicted value and the actual value. The difference is squared so that we measure the absolute value of the difference. Our goal in training is to find the best set of weights and biases that minimizes the loss function. Now that we’ve measured the error of our prediction (loss), we need to find a way to propagate the error back, and to update our weights and biases. In order to know the appropriate amount to adjust the weights and biases by, we need to know the derivative of the loss function with respect to the weights and biases. Recall from calculus that the derivative of a function is simply the slope of the function. If we have the derivative, we can simply update the weights and biases by increasing/reducing with it(refer to the diagram above). This is known as gradient descent. However, we can’t directly calculate the derivative of the loss function with respect to the weights and biases because the equation of the loss function does not contain the weights and biases. Therefore, we need the chain rule to help us calculate it. Phew! That was ugly but it allows us to get what we needed — the derivative (slope) of the loss function with respect to the weights, so that we can adjust the weights accordingly. Now that we have that, let’s add the backpropagation function into our python code. 
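The article's code snippets were embedded as images/gists and are not reproduced in the text above, so the following is a reconstruction consistent with the description: a 2-layer network with sigmoid activations (assumed), biases fixed at zero as stated, a sum-of-squares loss, and weight updates derived via the chain rule. The toy dataset and hidden-layer width are illustrative choices, not necessarily the author's exact values:

```python
# Reconstruction sketch of the 2-layer network described above (sigmoid activations
# assumed, zero biases as stated, sum-of-squares loss, chain-rule weight updates).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    # Expects x to already be a sigmoid output: d(sigmoid)/dz = s * (1 - s).
    return x * (1.0 - x)

class NeuralNetwork:
    def __init__(self, x, y):
        self.input = x
        self.weights1 = np.random.rand(self.input.shape[1], 4)  # hidden layer of 4 units (assumed width)
        self.weights2 = np.random.rand(4, 1)
        self.y = y
        self.output = np.zeros(y.shape)

    def feedforward(self):
        self.layer1 = sigmoid(np.dot(self.input, self.weights1))
        self.output = sigmoid(np.dot(self.layer1, self.weights2))

    def backprop(self):
        # Chain rule for loss = sum (y - y_hat)^2; 2*(y - output) carries the negative
        # sign of the gradient, so adding d_weights moves the loss downhill
        # (an implicit learning rate of 1 is assumed).
        error = 2 * (self.y - self.output) * sigmoid_derivative(self.output)
        d_weights2 = np.dot(self.layer1.T, error)
        d_weights1 = np.dot(self.input.T,
                            np.dot(error, self.weights2.T) * sigmoid_derivative(self.layer1))
        self.weights1 += d_weights1
        self.weights2 += d_weights2

# Tiny illustrative dataset (3 input features, 1 output), trained for 1500 iterations
# as in the article.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
nn = NeuralNetwork(X, y)
for _ in range(1500):
    nn.feedforward()
    nn.backprop()
print(nn.output)   # predictions should end up close to [0, 1, 1, 0]
```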
For a deeper understanding of the application of calculus and the chain rule in backpropagation, I strongly recommend this tutorial by 3Blue1Brown. Now that we have our complete python code for doing feedforward and backpropagation, let’s apply our Neural Network on an example and see how well it does. Our Neural Network should learn the ideal set of weights to represent this function. Note that it isn’t exactly trivial for us to work out the weights just by inspection alone. Let’s train the Neural Network for 1500 iterations and see what happens. Looking at the loss per iteration graph below, we can clearly see the loss monotonically decreasing towards a minimum. This is consistent with the gradient descent algorithm that we’ve discussed earlier. Let’s look at the final prediction (output) from the Neural Network after 1500 iterations. We did it! Our feedforward and backpropagation algorithm trained the Neural Network successfully and the predictions converged on the true values. Note that there’s a slight difference between the predictions and the actual values. This is desirable, as it prevents overfitting and allows the Neural Network to generalize better to unseen data. Fortunately for us, our journey isn’t over. There’s still much to learn about Neural Networks and Deep Learning. For example: I’ll be writing more on these topics soon, so do follow me on Medium and keep an eye out for them! I’ve certainly learnt a lot writing my own Neural Network from scratch. Although Deep Learning libraries such as TensorFlow and Keras make it easy to build deep nets without fully understanding the inner workings of a Neural Network, I find that it’s beneficial for aspiring data scientists to gain a deeper understanding of Neural Networks. This exercise has been a great investment of my time, and I hope that it’ll be useful for you as well! Graduate Student in Machine Learning @ Georgia Tech | LinkedIn: https://www.linkedin.com/in/jamesloy1/ Sharing concepts, ideas, and codes.
Chintan Trivedi
1.2K
8
https://towardsdatascience.com/using-deep-q-learning-in-fifa-18-to-perfect-the-art-of-free-kicks-f2e4e979ee66?source=---------7----------------
Using Deep Q-Learning in FIFA 18 to perfect the art of free-kicks
A code tutorial in Tensorflow that uses Reinforcement Learning to take free kicks. In my previous article, I presented an AI bot trained to play the game of FIFA using a Supervised Learning technique. With this approach, the bot quickly learnt the basics of the game like passing and shooting. However, the training data required to improve it further quickly became cumbersome to gather and provided little-to-no improvements, making this approach very time consuming. For this reason, I decided to switch to Reinforcement Learning, as suggested by almost everyone who commented on that article! In this article, I’ll provide a short description of what Reinforcement Learning is and how I applied it to this game. A big challenge in implementing this is that we do not have access to the game’s code, so we can only make use of what we see on the game screen. Due to this reason, I was unable to train the AI on the full game, but could find a work-around to implement it for skill games in practice mode. For this tutorial, I will be trying to teach the bot to take 30-yard free kicks, but you can modify it to play other skill games as well. Let’s start with understanding the Reinforcement Learning technique and how we can formulate our free kick problem to fit this technique. Contrary to Supervised Learning, we do not need to manually label the training data in Reinforcement Learning. Instead, we interact with our environment and observe the outcome of our interaction. We repeat this process multiple times, gaining examples of positive and negative experiences, which act as our training data. Thus, we learn by experimentation and not imitation. Let’s say our environment is in a particular state s, and upon taking an action a, it changes to state s’. For this particular action, the immediate reward you observe in the environment is r. Any set of actions that follow this action will have their own immediate rewards, until you stop interacting due to a positive or a negative experience. These are called future rewards. Thus, for the current state s, we will try to estimate which of all possible actions will fetch us the maximum immediate + future reward, denoted by Q(s,a) and called the Q-function. This gives us Q(s,a) = r + γ * max_a’ Q(s’,a’), which denotes the expected final reward of taking action a in state s. Here, γ is a discount factor to account for uncertainty in predicting the future, thus we want to trust the present a bit more than the future. Deep Q-learning is a special type of Reinforcement Learning technique where the Q-function is learnt by a deep neural network. Given the environment’s state as an image input to this network, it tries to predict the expected final reward for all possible actions like a regression problem. The action with the maximum predicted Q-value is chosen as our action to be taken in the environment. Hence the name Deep Q-Learning. Note: If we had a performance meter in kick-off mode of FIFA like there is in the practice mode, we might have been able to formulate this problem for playing the entire game and not restrict ourselves to just taking free-kicks. That, or we need access to the game’s internal code which we don’t have. Anyway, let’s make the most of what we do have. While the bot has not mastered all different kinds of free kicks, it has learnt some situations very well. It almost always hits the target in the absence of a wall of players but struggles in its presence.
Also, when it hasn’t encountered a situation frequently in training, like not facing the goal, it behaves erratically. However, with every training epoch, this behavior was noticed to decrease on average. As shown in the figure above, the average goal scoring rate grows from 30% to 50% after training for 1000 epochs. This means the current bot scores about half of the free kicks it attempts (for reference, a human would average around 75–80%). Do consider that FIFA tends to behave non-deterministically, which makes learning very difficult. More results in video format can be found on my YouTube channel, with the video embedded below. Please subscribe to my channel if you wish to keep track of all my projects. We shall implement this in python using tools like Tensorflow (Keras) for Deep Learning and pytesseract for OCR. The git link is provided below with the requirements setup instructions in the repository description. I would recommend the gists of code below only for the purpose of understanding this tutorial, since some lines have been removed for brevity. Please use the full code from git while running it. Let’s go over the 4 main parts of the code. We do not have any readymade API available that gives us access to the game’s code. So, let’s make our own API instead! We’ll use the game’s screenshots to observe the state, simulated key-presses to take actions in the game environment, and Optical Character Recognition to read our reward in the game. We have three main methods in our FIFA class — observe(), act(), and _get_reward() — plus an additional method is_over() to check if the free kick has been taken or not. Throughout the training process, we want to store all our experiences and observed rewards. We will use this as the training data for our Q-Learning model. So, for every action we take, we store the experience <s, a, r, s’> along with a game_over flag. The target label that our model will try to learn is the final reward for each action, which is a real number for our regression problem. Now that we can interact with the game and store our interactions in memory, let’s start training our Q-Learning model. For this, we will strike a balance between exploration (taking a random action in the game) and exploitation (taking the action predicted by our model). This way we can perform trial-and-error to obtain different experiences in the game. The parameter epsilon is used for this purpose, which is an exponentially decreasing factor that balances exploration and exploitation. In the beginning, when we know nothing, we want to do more exploration, but as the number of epochs increases and we learn more, we want to do more exploitation and less exploration. Hence the decaying value of the epsilon parameter. For this tutorial I have only trained the model for 1000 epochs due to time and performance constraints, but in the future I would like to push it to at least 5000 epochs. At the heart of the Q-Learning process is a 2-layered Dense/Fully Connected Network with ReLU activation. It takes the 128-dimensional feature map as the input state and outputs 4 Q-values, one for each possible action. The action with the maximum predicted Q-value is the desired action to be taken as per the network’s policy for the given state. This is the starting point of execution of this code, but you’ll have to make sure the game FIFA 18 is running in windowed mode on a second display and you load up the free kick practice mode under skill games: shooting menu.
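As a rough sketch of the Q-network and epsilon-greedy policy described above: the 128-dimensional input, four actions, and γ value below follow the description, but the helper names and hyperparameters are illustrative, not the repository’s actual API.

```
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

NUM_ACTIONS = 4     # four possible key-press actions (assumed)
STATE_DIM = 128     # 128-dimensional feature map described above
GAMMA = 0.99        # discount factor for future rewards (assumed value)

# 2-layered Dense network with ReLU that maps a state to one Q-value per action
model = Sequential([
    Dense(128, activation="relu", input_shape=(STATE_DIM,)),
    Dense(NUM_ACTIONS, activation="linear"),   # regression output: one Q-value per action
])
model.compile(optimizer="adam", loss="mse")

def choose_action(state, epsilon):
    """Epsilon-greedy: explore with probability epsilon, otherwise exploit the model."""
    if np.random.rand() < epsilon:
        return np.random.randint(NUM_ACTIONS)
    q_values = model.predict(state[np.newaxis], verbose=0)[0]
    return int(np.argmax(q_values))

def q_learning_targets(states, actions, rewards, next_states, game_over):
    """Build regression targets r + gamma * max_a' Q(s', a') from stored experiences."""
    targets = model.predict(states, verbose=0)
    next_q = model.predict(next_states, verbose=0)
    for i in range(len(states)):
        targets[i, actions[i]] = rewards[i]
        if not game_over[i]:
            targets[i, actions[i]] += GAMMA * np.max(next_q[i])
    return targets
```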
Make sure the game controls are in sync with the keys you have hard-coded in the FIFA.py script. Overall, I think the results are quite satisfactory even though the bot fails to reach a human level of performance. Switching from the Supervised Learning technique to Reinforcement Learning helps ease the pain of collecting training data. Given enough time to explore, it performs very well in problems like learning how to play simple games. However, the Reinforcement Learning setting seems to fail when it encounters unfamiliar situations, which makes me believe that formulating it as a regression problem cannot extrapolate information as well as formulating it as a classification problem in a supervised setting. Perhaps a combination of the two could address the weaknesses of both these approaches. Maybe that’s where we’ll see the best results in building AI for games. Something for me to try in the future! I would like to acknowledge this tutorial on Deep Q-Learning and this git repository on gaming with Python for providing the majority of the code. With the exception of the FIFA “custom-API”, most of the code’s backbone has come from these sources. Thanks to these guys! Thank you for reading! If you liked this tutorial, please follow me on Medium, GitHub, or subscribe to my YouTube channel. Data Scientist, AI Enthusiast, Blogger, YouTuber, Chelsea FC Fanatic. Also, looking to build my virtual clone before I die. Sharing concepts, ideas, and codes.
Abhishek Parbhakar
1.7K
3
https://towardsdatascience.com/why-data-scientists-love-gaussian-6e7a7b726859?source=---------8----------------
Why Data Scientists love Gaussian? – Towards Data Science
For Deep Learning & Machine Learning engineers, out of all the probabilistic models in the world, the Gaussian distribution model simply stands out. Even if you have never worked on an AI project, there is a significant chance that you have come across the Gaussian model. The Gaussian distribution model, often identified with its iconic bell-shaped curve and also referred to as the Normal distribution, is so popular mainly for three reasons. An incredible number of processes in nature and the social sciences naturally follow the Gaussian distribution. Even when they don’t, the Gaussian gives the best model approximation for these processes. Some examples include: The central limit theorem states that when we add a large number of independent random variables, irrespective of the original distribution of these variables, their normalized sum tends towards a Gaussian distribution. For example, the distribution of the total distance covered in a random walk tends towards a Gaussian probability distribution. One implication of the theorem is that the large number of scientific and statistical methods that have been developed specifically for Gaussian models can also be applied to a wide range of problems that involve other types of distributions. The theorem can also be seen as an explanation of why many natural phenomena follow a Gaussian distribution. Unlike many other distributions that change their nature under transformation, a Gaussian tends to remain a Gaussian. For every Gaussian model approximation, there may exist a complex multi-parameter distribution that gives a better approximation. But the Gaussian is still preferred because it makes the math a lot simpler! The Gaussian distribution is named after the great mathematician and physicist Carl Friedrich Gauss. Finding equilibria among AI, philosophy, and economics. Sharing concepts, ideas, and codes.
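As a quick illustration of the central limit theorem mentioned above, here is a small, self-contained simulation; the choice of the uniform distribution and the sample sizes are arbitrary.

```
import numpy as np

# Sum 1,000 independent uniform random variables (a decidedly non-Gaussian distribution),
# repeat the experiment 100,000 times, and look at the distribution of the normalized sums.
rng = np.random.default_rng(0)
samples = rng.uniform(0, 1, size=(100_000, 1_000)).sum(axis=1)
normalized = (samples - samples.mean()) / samples.std()

# A standard Gaussian puts roughly 68% of its mass within one standard deviation
# of the mean and roughly 95% within two; the normalized sums should match closely.
print("fraction within 1 std:", np.mean(np.abs(normalized) < 1))   # ~0.68
print("fraction within 2 std:", np.mean(np.abs(normalized) < 2))   # ~0.95
```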
Leon Zhou
184
6
https://towardsdatascience.com/the-best-words-cf6fc2333c31?source=---------9----------------
The Best Words – Towards Data Science
Uttered in the heat of a campaign rally in South Carolina on December 30, 2015, this statement was just another of a growing collection of “Trumpisms” by our now-President, Donald J. Trump. These statements made Donald more beloved by his supporters as their relatable President, while also making him a source of ridicule for seemingly everyone else. Regardless of one’s personal views of the man, it cannot be denied that Donald has a way of speaking that is, well, so uniquely him — his smatterings of superlatives and apparent disregard for the constraints of traditional sentence structure are just a few of the things that make his speech instantly recognizable from that of his predecessors or peers. It was this unique style that interested me, and I set out to try and capture it using machine learning — to generate text that looked and sounded like something Donald Trump might say. To learn President Trump’s style, I first had to gather sufficient examples of it. I focused my efforts on two primary sources. The obvious first place to look for words by Donald Trump was his Twitter feed. The current president is unique in his use of the platform as a direct and unfiltered connection to the American people. Furthermore, as a figure of interest, his words have naturally been collected and organized for posterity, saving me the hassle of using the ever-changing and restrictive Twitter API. All in all, there were a little under 31,000 Tweets available for my use. In addition to his online persona, however, I also wanted to gain a glimpse into his more formal role as President. For this, I turned to the White House Briefing Statements Archive. With the help of some Python tools, I was able to quickly amass a table of about 420 transcripts of speeches and other remarks by the President. These transcripts covered a variety of events, such as meetings with foreign dignitaries, round tables with Congressional members, and awards presentations. Unlike with the Tweets, where every word was written or dictated by Trump himself, these transcripts involved other politicians and inquisitive reporters. Separating Donald’s words from those of others seemed to be a daunting task. Enter regular expressions — a boring name for a powerful and decidedly not-boring tool. Regular expressions allow you to specify a pattern to search for; this pattern can contain any number of very specific constraints, wildcards, or other restrictions to return exactly what you want, and no more. With some trial and error, I was able to generate a complex regular expression to return only the words the President spoke, discarding any other words or annotations. Typically, one of the first steps in working with text is to normalize it. The extent and complexity of this normalization varies according to one’s needs, ranging from simply removing punctuation or capital letters, to reducing all variants of a word to a base root. An example of this workflow can be seen here. For me, however, the specific idiosyncrasies and patterns that would be lost in normalization were exactly what I needed to preserve. So, in hopes of making my generated text just that much more believable and authentic, I elected to bypass most of the standard normalization workflow. Before diving into a deep learning model, I was curious to explore another frequently used text generation method, the Markov chain.
Markov chains have been the go-to for joke text generation for a long time — a quick search will reveal ones for Star Trek, past presidents, the Simpsons, and many others. The quick and dirty of the Markov chain is that it only cares about the current word in determining what should come next. This algorithm looks at every single time a specific word appears, and every word that comes immediately after it. The next word is selected randomly with a probability proportional to its frequency. Let me illustrate with a quick example: Donald Trump says the word “taxes.” If, in real life, 70% of the time after he says “taxes” he follows up with the word “bigly,” the Markov chain will choose the next word to be “bigly” 70% of the time. But sometimes, he doesn’t say “bigly.” Sometimes he ends the sentence, or moves on to a different word. The chain will most likely choose “bigly,” but there’s a chance it’ll go for any of the other available options, thus introducing some variety in our generated text. And repeat ad nauseam, or until the end of the sentence. This is great for quick and dirty applications, but it’s easy to see where it can go wrong. As the Markov chain only ever cares about the current word, it can easily be sidetracked. A sentence that started off talking about the domestic economy could just as easily end talking about The Apprentice. With my limited text data set, most of my Markov chain outputs were nonsensical. But, occasionally there were some flashes of brilliance and hilarity: For passably-real text, however, I needed something more sophisticated. Recurrent Neural Networks (RNNs) have established themselves as the architecture of choice for many text or sequence-based applications. The detailed inner workings of RNNs are outside the scope of this post, but a strong (relatively) beginner-friendly introduction may be found here. The distinguishing feature of these neural units is that they have an internal “memory” of sorts. Word choice and grammar depend heavily on surrounding context, so this “memory” is extremely useful in creating a coherent thought by keeping track of tense, subjects and objects, and so on. The downside of these types of networks is that they are extraordinarily computationally expensive — on my piddly laptop, running the entirety of my text through the model once would take over an hour, and considering I’d need to do so about 200 times, this was no good. This is where cloud computing comes in. A number of established tech companies offer cloud services, the largest being Amazon, Google, and Microsoft. On a heavy-GPU computing instance, that one-hour-plus-per-cycle time became ninety seconds, an over 40x reduction in time! Can you tell if this following statement is real or not? This was text generated off of Trump’s endorsement of the Republican gubernatorial candidate, but it might pass as something that Trump tweeted in the run-up to the 2016 general election. The more complex neural networks I implemented, with hidden fully-connected layers before and after the recurrent layer, were capable of generating internally-consistent text given any seed of 40 characters or less. Less complex networks stumbled a little on consistency, but still captured the tonal feel of President Trump’s speech: While not quite producing text at a level capable of fooling you or me consistently, this attempt opened my eyes to the power of RNNs. 
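My best guess at what such a recurrent network might look like in Keras — a character-level model with fully-connected layers before and after the recurrent layer, as described above. The vocabulary size, layer sizes, and training settings here are assumptions for illustration, not the author’s exact configuration.

```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM

SEQ_LEN = 40       # a seed of 40 characters, as mentioned above
NUM_CHARS = 100    # assumed size of the character vocabulary

# Character-level generator: hidden dense layers wrapped around a recurrent layer
model = Sequential([
    Dense(128, activation="relu", input_shape=(SEQ_LEN, NUM_CHARS)),
    LSTM(256),
    Dense(128, activation="relu"),
    Dense(NUM_CHARS, activation="softmax"),   # probability of each possible next character
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Training expects one-hot encoded sliding windows of text:
#   X shape: (num_windows, SEQ_LEN, NUM_CHARS), y shape: (num_windows, NUM_CHARS)
# model.fit(X, y, batch_size=128, epochs=20)
```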
In short order, these networks learned spelling, some aspects of grammar, and in some instances, how to use hashtags and hyperlinks — imagine what a better-designed network, with more text to learn from and more time to learn, might produce. If you’re interested in looking at the code behind these models, you can find the repository here. And, don’t hesitate to reach out with any questions or feedback you may have! I am a data scientist with a background in chemical engineering and biotech. I am also homeless and live in my car, but that's another thing entirely. Hire me! Sharing concepts, ideas, and codes.
Dr. GP Pulipaka
2
6
https://medium.com/@gp_pulipaka/3-ways-to-apply-latent-semantic-analysis-on-large-corpus-text-on-macos-terminal-jupyterlab-colab-7b4dc3e1622?source=---------3----------------
3 Ways to Apply Latent Semantic Analysis on Large-Corpus Text on macOS Terminal, JupyterLab, and...
Latent semantic analysis works on large-scale datasets to generate representations that surface insights through natural language processing. There are different approaches to performing latent semantic analysis at multiple levels, such as the document level, phrase level, and sentence level. Broadly, semantic analysis can be summarized as lexical semantics plus the study of combining individual words into paragraphs or sentences. Lexical semantics classifies and decomposes lexical items. Applying lexical semantic structures in different contexts helps identify the differences and similarities between words. A generic term in a paragraph or a sentence is a hypernym, and hyponymy describes the relationship between a hypernym and its more specific instances, the hyponyms. Homonyms share the same spelling or form but have different, unrelated meanings. “Book” is an example of a homonym: it can refer to something someone reads or to the act of making a reservation, with the same spelling, form, and syntax but different definitions. Polysemy is another phenomenon, in which a single word is associated with multiple related senses and distinct meanings. The word polysemy comes from Greek and means “many signs.” Python provides the NLTK library to perform tokenization of the words by chopping larger chunks of text into phrases or meaningful strings. Processing words through tokenization produces tokens. Word lemmatization converts words from the current inflected form into the base form. Latent semantic analysis: Applying latent semantic analysis on large datasets of text and documents represents the contextual meaning through mathematical and statistical computation methods on a large corpus of text. In many evaluations, latent semantic analysis has matched or exceeded human scores on subject-matter tests conducted with humans. The accuracy of latent semantic analysis is high, as it reads through machine-readable documents and texts at web scale. Latent semantic analysis is a technique that applies singular value decomposition and principal component analysis (PCA). The document collection can be represented as a Z x Y matrix A, where the rows of the matrix represent the documents in the collection. The matrix A can have hundreds of thousands of rows and columns for a typical large-corpus text collection. Singular value decomposition belongs to a set of operations dubbed matrix decompositions. Natural language processing in Python with the NLTK library applies a low-rank approximation to the term-document matrix. The low-rank approximation then aids in indexing and retrieving documents, a technique known as latent semantic indexing, by clustering the words in the documents. Brief overview of linear algebra: The Z x Y matrix A contains real-valued, non-negative entries for the term-document matrix. The rank of the matrix is the number of linearly independent columns or rows in the matrix, and rank(A) ≤ min{Z, Y}. A square c x c matrix is a diagonal matrix when its off-diagonal entries are zero; if all c diagonal entries are one, it is the identity matrix of dimension c, denoted Ic. For a square Z x Z matrix A, an eigenvector is a vector k that is not all zeros such that Ak = λk for some eigenvalue λ. Eigen decomposition factors a square matrix into a product of matrices built from its eigenvectors. This allows us to reduce the dimensionality of the words from many dimensions to two dimensions to view on a plot.
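Here is a minimal sketch of one way to run this pipeline in Python, assuming NLTK for tokenization and lemmatization and scikit-learn for the term-document matrix and the truncated SVD; the article does not show its exact code, so the library choices beyond NLTK, the toy corpus, and the two-component projection are assumptions.

```
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Resources mentioned in the article
nltk.download("punkt")
nltk.download("wordnet")

documents = [
    "I booked a table at the restaurant",                          # toy corpus for illustration
    "She is reading a book about linear algebra",
    "Singular value decomposition factorizes the term-document matrix",
]

lemmatizer = WordNetLemmatizer()

def lemmatize(text):
    # tokenize the text into words, then reduce each word to its base form
    return " ".join(lemmatizer.lemmatize(tok.lower()) for tok in word_tokenize(text))

# Build the term-document matrix A (documents x terms, TF-IDF weighted)
vectorizer = TfidfVectorizer()
A = vectorizer.fit_transform(lemmatize(d) for d in documents)

# Low-rank approximation via truncated SVD: project documents into 2 latent dimensions
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_topic = lsa.fit_transform(A)
print(doc_topic)   # each row is a document in the 2-D latent semantic space
```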
The dimensionality reduction techniques of principal component analysis and singular value decomposition hold critical relevance in natural language processing. The Zipfian nature of word frequencies in a document makes it difficult to determine the similarity of words in a static stage. Hence, eigen decomposition arises as a by-product of singular value decomposition, as the input document matrix is highly asymmetrical. Latent semantic analysis is a particular technique in semantic space to parse through a document and identify words with polysemy, using the NLTK library. Resources such as punkt and wordnet have to be downloaded from NLTK. Deep Learning at scale with Google Colab notebooks: Training machine learning or deep learning models on CPUs can take hours and can be pretty expensive in terms of time and computing resources. Google built the Colab Notebooks environment for research and development purposes. It runs entirely in the cloud without requiring any additional hardware or software setup for each machine. It is essentially the equivalent of a Jupyter notebook, and it lets data scientists share Colab notebooks by storing them on Google Drive, just like any other Google Sheets or documents, in a collaborative environment. There are no additional costs associated with enabling GPU acceleration at runtime. There are some challenges in uploading data into Colab, unlike a Jupyter notebook, which can access data directly from the local directory of the machine. In Colab, there are multiple options: files can be uploaded from the local file system, or a drive can be mounted to load the data through the Drive FUSE wrapper. Once this step is complete, it shows the following log without errors. The next step is generating the authentication tokens to authenticate the Google credentials for the drive and Colab. If it shows successful retrieval of the access token, then Colab is all set. At this stage the drive is not mounted yet, so it will show false when accessing the contents of the text file. Once the drive is mounted, Colab has access to the datasets from Google Drive. Once the files are accessible, the Python code can be executed just as it would be in a Jupyter environment, and the Colab notebook displays the results similar to what we see in a Jupyter notebook. PyCharm IDE: The program can be compiled and run in the PyCharm IDE environment, or it can be executed from the macOS Terminal. Results from the macOS Terminal and a standalone Jupyter Notebook: the Jupyter Notebook gives a similar output when running the latent semantic analysis on the local machine. References: Gorrell, G. (2006). Generalized Hebbian Algorithm for Incremental Singular Value Decomposition in Natural Language Processing. Retrieved from https://www.aclweb.org/anthology/E06-1013 Hardeniya, N. (2016). Natural Language Processing: Python and NLTK. Birmingham, England: Packt Publishing. Landauer, T. K., Foltz, P. W., Laham, D., & University of Colorado at Boulder (1998). An Introduction to Latent Semantic Analysis. Retrieved from http://lsa.colorado.edu/papers/dp1.LSAintro.pdf Stackoverflow (2018). Mounting Google Drive on Google Colab. Retrieved from https://stackoverflow.com/questions/50168315/mounting-google-drive-on-google-colab Stanford University (2009). Matrix decompositions and latent semantic indexing.
Retrieved from https://nlp.stanford.edu/IR-book/html/htmledition/matrix-decompositions-and-latent-semantic-indexing-1.html Ganapathi Pulipaka | Founder and CEO @deepsingularity | Bestselling Author | Big data | IoT | Startups | SAP | MachineLearning | DeepLearning | DataScience
Erick Muzart Fonseca dos Santos
16
2
https://medium.com/deeplearningbrasilia/o-grupo-de-estudo-em-deep-learning-de-bras%C3%ADlia-est%C3%A1-planejando-o-pr%C3%B3ximo-ciclo-de-encontros-do-4861851ec0ff?source=---------5----------------
The Deep Learning Study Group of Brasília is planning the next cycle of meetings of the...
The Deep Learning Study Group of Brasília is planning the group’s next cycle of meetings, which should begin in mid-June 2018. There is still time to express your preferences for participating in the group! To do so, please fill out the following questionnaire so that we can aggregate the preferences of our community and select the options that best suit everyone: https://goo.gl/forms/H4K77sD1DxW6diIt1 We would appreciate it if you could share the group with your network of contacts interested in machine learning and Deep Learning topics, so that we can start the next cycle with as many interested people as possible from day one! Below are some of the group’s initial results. Regarding the initial results of the questionnaire, here is a summary of the first 50 responses. Among the topics of greatest interest: 1st Deep Learning: 87.5% 2nd Machine Learning: 78.6% 3rd Applications of Deep Learning in projects: 69.6% 4th Natural Language Processing: 51.8% Course preference: 1st Machine Learning, from fast.ai: 67.9% 2nd Deep Learning, part 2, from fast.ai: 46.4% 3rd Deep Learning, part 1, from fast.ai: 44.6% Best regards, Organization of the Deep Learning Study Group, Brasília Publications by members of the Deep Learning Study Group of Brasília
Chris Kalahiki
30
15
https://towardsdatascience.com/beethoven-picasso-and-artificial-intelligence-caf644fc72f9?source=---------6----------------
Beethoven, Picasso, and Artificial Intelligence – Towards Data Science
When people think of the greatest artists who’ve ever lived, they probably think of names like Beethoven or Picasso. No one would ever think of a computer as a great artist. But what if, one day, that were indeed the case? Could computers learn to create incredible works like the Mona Lisa? Perhaps one day a robot will be capable of composing the next great symphony. Some experts believe this to be the case. In fact, some of the greatest minds in artificial intelligence are diligently working to develop programs that can create drawings and music independently from humans. The use of artificial intelligence in the field of art has even been picked up by tech giants the likes of Google. The projects that are included in this paper could have drastic implications in our everyday lives. They may also change the way we view art. They also showcase the incredible advancement that has been made in the field of artificial intelligence. Image recognition is not as far as the research goes. Nor is the ability to generate music in the styling of the great artists of our past. Although these topics will be touched upon, we will focus on several more advanced achievements such as text descriptions being turned into images and generating art and music that is totally original. Each of these projects brings something new and innovative to the table and shows us exactly how the art space is a great place to further explore applications of artificial intelligence. We will be discussing problems that have been faced in these projects and how they have been overcome. The future of AI looks bright. Let’s look at what the future may hold. In doing this, we may be able to better understand the impact that artificial intelligence can have in an area that is driven by human creativity. Machines must be educated. They learn from instruction. How do we lead machines away from emulating what already exists, and have them create new techniques? “No creative artist will create art today that tries to emulate the Baroque or Impressionist style, or any other traditional style, unless trying to do so ironically” [4]. This problem isn’t limited to paintings either. Music can be very structured in some respects, but is also a form of art that requires vast creativity. So how do we go about solving such a problem? The first concept we will discuss is something called a GAN (Generative Adversarial Network). GANs, although quite complex, are becoming an outdated model. If artificial intelligence in the art space is to advance, researchers and developers will have to work to find better methods to allow machines to generate art and music. Two such methods are presented in the form of Sketch-RNN and CAN (Creative Adversarial Networks). Each of these methods has its advantages over GANs. First, let’s explore what exactly a GAN is. Below is a small excerpt explaining how a GAN works: Generative Adversarial Network (GAN) has two sub networks, a generator and a discriminator. The discriminator has access to a set of images (training images). The discriminator tries to discriminate between “real” images (from the training set) and “fake” images generated by the generator. The generator tries to generate images similar to the training set without seeing the images [4]. The more images the generator creates, the closer they get to the images from the training set. The idea is that after a certain number of images are generated, the GAN will create images that are very similar to what we consider art.
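To make the excerpt concrete, here is a heavily simplified sketch of the two-network setup it describes, written in Keras with flattened 28x28 images and arbitrary layer sizes — an illustration of the general idea only, not the setup used in any of the papers cited here.

```
import numpy as np
from tensorflow.keras import layers, models

# Generator: a 100-dimensional noise vector in, a flattened 28x28 "image" out
generator = models.Sequential([
    layers.Dense(128, activation="relu", input_shape=(100,)),
    layers.Dense(784, activation="sigmoid"),
])

# Discriminator: an image in, the probability it came from the training set out
discriminator = models.Sequential([
    layers.Dense(128, activation="relu", input_shape=(784,)),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model: freeze the discriminator and train the generator to fool it
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

def train_step(real_images, batch_size=64):
    noise = np.random.normal(size=(batch_size, 100))
    fake_images = generator.predict(noise, verbose=0)
    # The discriminator learns to separate real training images from generated ones
    discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
    discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
    # The generator learns to produce images the discriminator labels as real
    gan.train_on_batch(noise, np.ones((batch_size, 1)))
```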
This is a very impressive accomplishment to say the least. But what if we take it a step further? Many issues associated with the GAN are simply limitations on what it can do. The GAN is powerful, but can’t do quite as much as we would like. For example, the generator in the model described above will continue to create images closer and closer to the images given to the discriminator, to the point that it isn’t producing original art. Could a GAN be trained to draw alongside a user? It’s not likely. The model wouldn’t be able to turn a text-based description of an image into an actual picture either. As impressive as the GAN may be, we would all agree that it can be improved. Each of the shortcomings mentioned has actually been addressed and, to an extent, solved. Let’s look at how this is done. Sketch-RNN is a recurrent neural network model developed by Google. The goal of Sketch-RNN is to help machines learn to create art in a manner similar to the way a human may learn. It has been used in a Google AI Experiment to sketch alongside a user. While doing so, it can provide the users with suggestions and even complete the user’s sketch when they decide to take a break. Sketch-RNN is exposed to a massive number of sketches provided through a dataset of vector drawings obtained through another Google application that we will discuss later. Each of these sketches is tagged to let the program know what object is in the sketch. The data set represents each sketch as a set of pen strokes. This allows Sketch-RNN to then learn what aspects sketches of a certain object have in common. If a user begins to draw a cat, Sketch-RNN could then show the user other common features that could be on the cat. This model could have many new creative applications. “The decoder-only model trained on various classes can assist the creative process of an artist by suggesting many possible ways of finishing a sketch” [3]. The Sketch-RNN team even believes that, given a more complex dataset, the applications could be used in an educational sense to teach users how to draw. These applications of Sketch-RNN couldn’t be nearly as easily achieved with a GAN alone. Another method used to improve upon GAN is the Creative Adversarial Network. In their paper regarding adversarial networks generating art, several researchers discuss a new way of generating art through CANs. The idea is that the CAN has two adversarial networks. One, the generator, has no access to any art. It has no basis to go off of when generating images. The other network, the discriminator, is trained to classify the images generated as being art or not. When an image is generated, the discriminator gives the generator two pieces of information. The first is whether it believes the generated image comes from the same distribution as the pieces of art it was trained on, and the second is how well the discriminator can fit the generated image into one of the categories of art it was taught. This technique is fantastic in that it helps the generator create images that are both emulative of past works of art, in the sense that it learns what was good about those images, and creative, in the sense that it is taught to produce new and different artistic concepts. This is a big difference from a GAN creating art that emulates the training images. Eventually, the CAN will learn how to produce only new and innovative artwork. One final successor to the vanilla GAN is StackGAN.
StackGAN is a text to photo-realistic image synthesizer that uses stacked generative adversarial networks. Given a text description, the StackGAN is able to create images that are very much related to the given text. This wouldn’t be doable with a normal GAN model as it would be much too difficult to generate photo-realistic images from a text description even with a state-of-the-art training database. This is where StackGAN comes in. It breaks the problem down into 2 parts. “Low-resolution images are generated by our Stage-I GAN. On the top of our Stage-I GAN, we stack Stage-II GAN to generate realistic high-resolution images conditioned on Stage-I results and text descriptions” [7]. It is through the conditioning on Stage-I results and text descriptions that Stage-II GAN can find details that Stage-I GAN may have missed and create higher resolution images. By breaking the problem down into smaller subproblems, the StackGAN can tackle problems that aren’t possible with a regular GAN. On the next page is an image showing the difference between a regular GAN and each step of the StackGAN. It is through advancements like these, made in recent years, that we can continue to push the boundaries of what AI can do. We have just seen three ways to improve upon a concept that was already quite complex and innovative. Each of these advancements has a practical, everyday use. As we continue to improve on artificial intelligence techniques, we will be able to do more and more, not just in art and music, but in a wide variety of tasks that improve our lives. Images aren’t the only type of art that artificial intelligence can impact though. Its effect on music is being explored as we speak. We will now explore some specific cases and their impact on both music and artificial intelligence. In doing this, we should be able to see how art can do as much for AI as AI does for it. Both fields benefit heavily from the types of projects that we are exploring here. Could a machine ever be able to create a piece of music the likes of those by Johann Sebastian Bach? In a project known as DeepBach, several researchers looked to create pieces similar to Bach’s chorales. The beauty of DeepBach is that it “is able to generate coherent musical phrases and provides, for instance, varied reharmonizations of melodies without plagiarism” [6]. What this means is that DeepBach can create music with correct structure and be original. It is just in the style of Bach. It isn’t just a mashup of his works. DeepBach is creating new content. The developers of DeepBach went on to test whether their product could actually fool listeners. As part of the experiment, over 1,250 people were asked to vote on whether pieces presented to them were in fact composed by Bach. The subjects had varying degrees of musical expertise. The results showed that as the complexity of DeepBach’s model increased, the subjects had more and more trouble distinguishing the chorales of Bach from those of DeepBach. This experiment shows us that through the use of artificial intelligence and machine learning, it is quite possible to recreate original works in the likeness of the greats. But is that the limit to what artificial intelligence can do in the field of art and music? DeepBach has achieved something that would have been unheard of in the not-so-distant past, but it certainly isn’t the fullest extent of what AI can do to benefit the field of music. What if we want to create new and innovative music?
Maybe AI can change the way music is created altogether. There must be projects that do more to push the envelope. As a matter of fact, that is exactly what the team behind Magenta looks to do. Magenta is a project being conducted by the Google Brain team and led by Douglas Eck. Eck has been working for Google since 2010, but that isn’t where his interest in music began. Eck helped found Brain Music and Sound, an international laboratory for brain, music, and sound research. He was also involved at the McGill Centre for Interdisciplinary Research in Music Media and Technology, and was an Associate Professor in Computer Science at the University of Montreal. Magenta’s goal is to be “a research project to advance the state of the art in machine intelligence for music and art generation” [2]. It is an open source project that uses TensorFlow. Magenta aims to learn how to generate art and music in a way that is indeed generative. It must go past just emulating existing music. This is distinctly different from projects along the lines of DeepBach, which set out to emulate existing music in a way that doesn’t plagiarize existing pieces of music. Eck and company realize that art is about capturing elements of surprise and drawing attention to certain aspects. “This leads to perhaps the biggest challenge: combining generation, attention and surprise to tell a compelling story. So much of machine-generated music and art is good in small chunks, but lacks any sort of long-term narrative arc” [2]. Such a perspective gives computer-generated music more substance, and helps it to become less of a gimmick. One of the projects the Magenta team has developed is called NSynth. The idea behind NSynth is to be able to create new sounds that have never been heard before, but beyond that, to reimagine how music synthesis can be done. Unlike ordinary synthesizers that focus on “a specific arrangement of oscillators or an algorithm for sample playback, such as FM Synthesis or Granular Synthesis” [5], NSynth generates sounds on an individual level. To do this, it uses deep neural networks. Google has even launched an experiment that allows users to really see what NSynth can do by allowing them to fuse together the sounds of existing instruments to create new hybrid sounds that have never been heard before. As an example, users can take two instruments such as a banjo and a tuba, and take parts of each of their sounds to create a totally new instrument. The experiment also allowed users to decide what percentage of each instrument would be used. Projects like Magenta go above and beyond in showing us the full extent of what artificial intelligence can do in the way of generating music. They explore new applications of artificial intelligence that can generate new ideas independent of humans. It is the closest we have come to machine creativity. Although machines aren’t yet able to truly think and express creativity, they may soon be able to generate new and unique art and music for us to enjoy. Don’t worry though. Eck doesn’t intend to replace artists with AI. Instead he looks to provide artists with tools to create music in an entirely new way. As we look ahead to a few more of the ways that AI has been used to accomplish new and innovative ideas in the art space, we look at projects like Quick, Draw! and Deep Dream. These projects showcase amazing progress in the space while pointing out some issues that researchers in AI will have to work out in the years to come. Quick, Draw!
is an application from the Google Creative Lab, trained to recognize quick drawings much like one would see in a game of Pictionary. The program can recognize simple objects such as cats and apples based on common aspects of the many pictures it was given before. Although the program will not get every picture right each time it is used, it continues to learn from the similarities in the picture drawn and the hundreds of pictures before it. The science behind Quick, Draw! “uses some of the same technology that helps Google Translate recognize your handwriting. To understand handwritings or drawings, you don’t just look at what the person drew. You look at how they actually drew it” [1]. It is presented in the form of a game, with the user drawing a picture of an object chosen by the application. The program then has 20 seconds to recognize the image. In each session, the user is given a total of 6 objects. The images are then stored in the database used to train the application. This happens to be the same database we saw earlier in the Sketch-RNN application. This image recognition is a very practical use of artificial intelligence in the realm of art and music. It can do a lot to benefit us in our everyday lives. But this only begins to scratch the surface of what artificial intelligence can do in this field. Although this is very impressive, we might point out that the application doesn’t truly understand what is being drawn. It is just picking up on patterns. In fact, this distinction is part of the gap between simple AI techniques and true artificial general intelligence. Machines that truly understand what the objects in images are don’t appear to be coming in the near future. Another interesting project in the art space is Google’s Deep Dream project, which uses AI to create new and unique images. Unfortunately, the Deep Dream Generator Team wouldn’t go into too much detail about the technology itself (mostly fearing it would be too long for an email) [8]. They did, however, explain that convolutional neural networks train on the famous ImageNet dataset. Those neural networks are then used to create art-like images. Essentially, Deep Dream takes the styling of one image and uses it to modify another image. The results can be anything from a silly fusion to an artistic masterpiece. This occurs when the program identifies the unique stylings of an image provided by the user and imposes those stylings onto another image that the user provides. What can easily be observed through the use of Deep Dream is that computers aren’t yet capable of truly understanding what they are doing with respect to art. They can be fed complex algorithms to generate images, but don’t fundamentally understand what it is they are generating. For example, a computer may see a knife cutting through an onion and assume the knife and onion are one object. The lack of an ability to truly understand the contents of an image is one dilemma that researchers have yet to solve. Perhaps as we continue to make advances in artificial intelligence we will be able to have machines that do truly understand what objects are in an image and even the emotions evoked by their music. The only way for this to be achieved is by reaching true artificial general intelligence (AGI). In the meantime, the Deep Dream team believes that generative models will be able to create some really interesting pieces of art and digital content. For this section, we will consider where artificial intelligence could be heading in the art space.
We will take a look at how AI has impacted the space and in what ways it can continue to do so. We will also look at ways art and music could continue to impact AI in the years to come. Although I don’t feel that we have completely mastered the ability to emulate the great artists of our past, it is just a matter of time before that problem is solved. The real task to be solved is that of creating new innovations in art and music. We need to work towards creation without emulation. It is quite clear that we are headed in that direction through projects like CAN and Magenta. Artificial general intelligence (AGI) is not the only way to complete this task. As a matter of fact, even those who dispute the possibility of AGI would have a hard time disputing the creation of unique works of art by a machine. One path that may be taken to further improve art and music through AI is to create more advanced datasets to use in training complex networks like Sketch-RNN and Deep Dream. AI needs to be trained to be able to perform as expected. That training has a huge impact on the results we get. Shouldn’t we want to train our machines in the most beneficial way possible? Even developing software like Sketch-RNN to use the ImageNet dataset used in Deep Dream could be huge in educating artists on techniques for drawing complex, realistic images. Complex datasets could very well be our answer to more efficient training. Until our machines can think and learn like we do, we will need to be very careful what data is used to train them. One of the ways that art and music can help to impact AI is by providing another method of Turing-testing machines. For those who dream of creating AGI, what better way to test the machine’s ability than to have it create something that tests the full extent of human-like creativity? Art is the truest representation of human creativity. That is, in fact, its essence. Although art is probably not the ultimate end game for artificial intelligence, it could be one of the best ways to test the limits of what a machine can do. The day that computers can create original musical compositions and create images based on descriptions given by a user could very well be the day that we stop being able to distinguish man from machine. There are many benefits to using artificial intelligence in the music space. Some of them have already been seen in the projects we have discussed so far. We have seen how artificial intelligence could be used for image recognition as well as its ability to turn our words into fantastic images. We have also seen how AI can be used to synthesize new sounds that have never been heard. We know that artificial intelligence can be used to create art alongside us as well as independently from us. It can be taught to mimic music from the past and can create novel ideas. All of these accomplishments are a part of what will drive AI research into the future. Who knows? Perhaps one day we will achieve artificial general intelligence and machines will be able to understand what is really in the images they are given. Maybe our computers will be able to understand how their art makes us feel. There is a clear path showing us where to go from here. I firmly believe that it is up to us to continue this research and test the limits of what artificial intelligence can do, both in the field of art and in our everyday lives.
Computer Science student at Louisiana Tech University with an interest in anything AI. Sharing concepts, ideas, and codes.
Adam Geitgey
14.2K
15
https://medium.com/@ageitgey/machine-learning-is-fun-part-3-deep-learning-and-convolutional-neural-networks-f40359318721?source=tag_archive---------0----------------
Machine Learning is Fun! Part 3: Deep Learning and Convolutional Neural Networks
Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 普通话, Русский, 한국어, Português, Tiếng Việt or Italiano. Are you tired of reading endless news stories about deep learning and not really knowing what that means? Let’s change that! This time, we are going to learn how to write programs that recognize objects in images using deep learning. In other words, we’re going to explain the black magic that allows Google Photos to search your photos based on what is in the picture: Just like Part 1 and Part 2, this guide is for anyone who is curious about machine learning but has no idea where to start. The goal is to be accessible to anyone — which means that there are a lot of generalizations and we skip lots of details. But who cares? If this gets anyone more interested in ML, then mission accomplished! (If you haven’t already read part 1 and part 2, read them now!) You might have seen this famous xkcd comic before. The goof is based on the idea that any 3-year-old child can recognize a photo of a bird, but figuring out how to make a computer recognize objects has puzzled the very best computer scientists for over 50 years. In the last few years, we’ve finally found a good approach to object recognition using deep convolutional neural networks. That sounds like a bunch of made-up words from a William Gibson Sci-Fi novel, but the ideas are totally understandable if you break them down one by one. So let’s do it — let’s write a program that can recognize birds! Before we learn how to recognize pictures of birds, let’s learn how to recognize something much simpler — the handwritten number “8”. In Part 2, we learned about how neural networks can solve complex problems by chaining together lots of simple neurons. We created a small neural network to estimate the price of a house based on how many bedrooms it had, how big it was, and which neighborhood it was in: We also know that the idea of machine learning is that the same generic algorithms can be reused with different data to solve different problems. So let’s modify this same neural network to recognize handwritten text. But to make the job really simple, we’ll only try to recognize one letter — the numeral “8”. Machine learning only works when you have data — preferably a lot of data. So we need lots and lots of handwritten “8”s to get started. Luckily, researchers created the MNIST data set of handwritten numbers for this very purpose. MNIST provides 60,000 images of handwritten digits, each as an 18x18 image. Here are some “8”s from the data set: The neural network we made in Part 2 only took in three numbers as the input (“3” bedrooms, “2000” sq. feet, etc.). But now we want to process images with our neural network. How in the world do we feed images into a neural network instead of just numbers? The answer is incredibly simple. A neural network takes numbers as input. To a computer, an image is really just a grid of numbers that represent how dark each pixel is: To feed an image into our neural network, we simply treat the 18x18 pixel image as an array of 324 numbers: To handle 324 inputs, we’ll just enlarge our neural network to have 324 input nodes: Notice that our neural network also has two outputs now (instead of just one). The first output will predict the likelihood that the image is an “8” and the second output will predict the likelihood it isn’t an “8”.
By having a separate output for each type of object we want to recognize, we can use a neural network to classify objects into groups. Our neural network is a lot bigger than last time (324 inputs instead of 3!). But any modern computer can handle a neural network with a few hundred nodes without blinking. This would even work fine on your cell phone. All that’s left is to train the neural network with images of “8”s and not-“8"s so it learns to tell them apart. When we feed in an “8”, we’ll tell it the probability the image is an “8” is 100% and the probability it’s not an “8” is 0%. Vice versa for the counter-example images. Here’s some of our training data: We can train this kind of neural network in a few minutes on a modern laptop. When it’s done, we’ll have a neural network that can recognize pictures of “8”s with a pretty high accuracy. Welcome to the world of (late 1980s-era) image recognition! It’s really neat that simply feeding pixels into a neural network actually worked to build image recognition! Machine learning is magic! ...right? Well, of course it’s not that simple. First, the good news is that our “8” recognizer really does work well on simple images where the letter is right in the middle of the image: But now the really bad news: Our “8” recognizer totally fails to work when the letter isn’t perfectly centered in the image. Just the slightest position change ruins everything: This is because our network only learned the pattern of a perfectly-centered “8”. It has absolutely no idea what an off-center “8” is. It knows exactly one pattern and one pattern only. That’s not very useful in the real world. Real world problems are never that clean and simple. So we need to figure out how to make our neural network work in cases where the “8” isn’t perfectly centered. We already created a really good program for finding an “8” centered in an image. What if we just scan all around the image for possible “8”s in smaller sections, one section at a time, until we find one? This approach is called a sliding window. It’s the brute force solution. It works well in some limited cases, but it’s really inefficient. You have to check the same image over and over looking for objects of different sizes. We can do better than this! When we trained our network, we only showed it “8”s that were perfectly centered. What if we train it with more data, including “8”s in all different positions and sizes all around the image? We don’t even need to collect new training data. We can just write a script to generate new images with the “8”s in all kinds of different positions in the image: Using this technique, we can easily create an endless supply of training data. More data makes the problem harder for our neural network to solve, but we can compensate for that by making our network bigger and thus able to learn more complicated patterns. To make the network bigger, we just stack up layer upon layer of nodes: We call this a “deep neural network” because it has more layers than a traditional neural network. This idea has been around since the late 1960s. But until recently, training this large of a neural network was just too slow to be useful. But once we figured out how to use 3d graphics cards (which were designed to do matrix multiplication really fast) instead of normal computer processors, working with large neural networks suddenly became practical. In fact, the exact same NVIDIA GeForce GTX 1080 video card that you use to play Overwatch can be used to train neural networks incredibly quickly.
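As a rough sketch of the kind of network being described — 324 flattened pixel inputs, a couple of stacked hidden layers, and two outputs — here is what it might look like in Keras; the layer sizes and training settings are arbitrary illustrations, not the article’s own code.

```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# An 18x18 image flattened into 324 numbers in, two probabilities out:
# "this is an 8" and "this is not an 8"
model = Sequential([
    Dense(128, activation="relu", input_shape=(324,)),
    Dense(128, activation="relu"),          # stacking layers makes the network "deep"
    Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# X: array of flattened images, shape (num_images, 324)
# y: one-hot labels, shape (num_images, 2) — [1, 0] for "8", [0, 1] for not-"8"
# model.fit(X, y, epochs=5, batch_size=32)
```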
But even though we can make our neural network really big and train it quickly with a 3D graphics card, that still isn’t going to get us all the way to a solution. We need to be smarter about how we process images into our neural network. Think about it. It doesn’t make sense to train a network to recognize an “8” at the top of a picture separately from training it to recognize an “8” at the bottom of a picture as if those were two totally different objects. There should be some way to make the neural network smart enough to know that an “8” anywhere in the picture is the same thing without all that extra training. Luckily... there is! As a human, you intuitively know that pictures have a hierarchy or conceptual structure. Consider this picture: As a human, you instantly recognize the hierarchy in this picture: Most importantly, we recognize the idea of a child no matter what surface the child is on. We don’t have to re-learn the idea of child for every possible surface it could appear on. But right now, our neural network can’t do this. It thinks that an “8” in a different part of the image is an entirely different thing. It doesn’t understand that moving an object around in the picture doesn’t make it something different. This means it has to re-learn the identity of each object in every possible position. That sucks. We need to give our neural network an understanding of translation invariance — an “8” is an “8” no matter where in the picture it shows up. We’ll do this using a process called Convolution. The idea of convolution is inspired partly by computer science and partly by biology (i.e. mad scientists literally poking cat brains with weird probes to figure out how cats process images). Instead of feeding entire images into our neural network as one grid of numbers, we’re going to do something a lot smarter that takes advantage of the idea that an object is the same no matter where it appears in a picture. Here’s how it’s going to work, step by step — Similar to our sliding window search above, let’s pass a sliding window over the entire original image and save each result as a separate, tiny picture tile: By doing this, we turned our original image into 77 equally-sized tiny image tiles. Earlier, we fed a single image into a neural network to see if it was an “8”. We’ll do the exact same thing here, but we’ll do it for each individual image tile: However, there’s one big twist: We’ll keep the same neural network weights for every single tile in the same original image. In other words, we are treating every image tile equally. If something interesting appears in any given tile, we’ll mark that tile as interesting. We don’t want to lose track of the arrangement of the original tiles. So we save the result from processing each tile into a grid in the same arrangement as the original image. It looks like this: In other words, we started with a large image and ended with a slightly smaller array that records which sections of our original image were the most interesting. The result of Step 3 was an array that maps out which parts of the original image are the most interesting. But that array is still pretty big: To reduce the size of the array, we downsample it using an algorithm called max pooling. It sounds fancy, but it isn’t at all! We’ll just look at each 2x2 square of the array and keep the biggest number: The idea here is that if we found something interesting in any of the four input tiles that make up each 2x2 grid square, we’ll just keep the most interesting bit. 
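A tiny numpy sketch of that 2x2 max pooling step (the numbers are made up, and it assumes an array with even dimensions):

    import numpy as np

    def max_pool_2x2(arr):
        # Downsample by keeping the largest value in each 2x2 square.
        h, w = arr.shape
        return arr.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

    scores = np.array([[0.1, 0.9, 0.2, 0.0],
                       [0.4, 0.3, 0.8, 0.1],
                       [0.0, 0.2, 0.1, 0.5],
                       [0.7, 0.1, 0.3, 0.2]])
    print(max_pool_2x2(scores))
    # [[0.9 0.8]
    #  [0.7 0.5]]
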
This reduces the size of our array while keeping the most important bits. So far, we’ve reduced a giant image down into a fairly small array. Guess what? That array is just a bunch of numbers, so we can use that small array as input into another neural network. This final neural network will decide if the image is or isn’t a match. To differentiate it from the convolution step, we call it a “fully connected” network. So from start to finish, our whole five-step pipeline looks like this: Our image processing pipeline is a series of steps: convolution, max-pooling, and finally a fully-connected network. When solving problems in the real world, these steps can be combined and stacked as many times as you want! You can have two, three or even ten convolution layers. You can throw in max pooling wherever you want to reduce the size of your data. The basic idea is to start with a large image and continually boil it down, step-by-step, until you finally have a single result. The more convolution steps you have, the more complicated features your network will be able to learn to recognize. For example, the first convolution step might learn to recognize sharp edges, the second convolution step might recognize beaks using its knowledge of sharp edges, the third step might recognize entire birds using its knowledge of beaks, etc. Here’s what a more realistic deep convolutional network (like you would find in a research paper) looks like: In this case, they start with a 224 x 224 pixel image, apply convolution and max pooling twice, apply convolution 3 more times, apply max pooling and then have two fully-connected layers. The end result is that the image is classified into one of 1000 categories! So how do you know which steps you need to combine to make your image classifier work? Honestly, you have to answer this by doing a lot of experimentation and testing. You might have to train 100 networks before you find the optimal structure and parameters for the problem you are solving. Machine learning involves a lot of trial and error! Now finally we know enough to write a program that can decide if a picture is a bird or not. As always, we need some data to get started. The free CIFAR10 data set contains 6,000 pictures of birds and 52,000 pictures of things that are not birds. But to get even more data we’ll also add in the Caltech-UCSD Birds-200–2011 data set that has another 12,000 bird pics. Here are a few of the birds from our combined data set: And here are some of the 52,000 non-bird images: This data set will work fine for our purposes, but 72,000 low-res images is still pretty small for real-world applications. If you want Google-level performance, you need millions of large images. In machine learning, having more data is almost always more important than having better algorithms. Now you know why Google is so happy to offer you unlimited photo storage. They want your sweet, sweet data! To build our classifier, we’ll use TFLearn. TFLearn is a wrapper around Google’s TensorFlow deep learning library that exposes a simplified API. It makes building convolutional neural networks as easy as writing a few lines of code to define the layers of our network. Here’s the code to define and train the network (a sketch follows below): If you are training with a good video card with enough RAM (like an Nvidia GeForce GTX 980 Ti or better), this will be done in less than an hour. If you are training with a normal CPU, it might take a lot longer. As it trains, the accuracy will increase. After the first pass, I got 75.4% accuracy. 
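A hedged sketch of a TFLearn network along the lines described above (layer sizes, epoch count and the random stand-in data are illustrative assumptions, not the exact code behind the reported accuracy numbers):

    import numpy as np
    import tflearn
    from tflearn.layers.conv import conv_2d, max_pool_2d
    from tflearn.layers.core import input_data, dropout, fully_connected
    from tflearn.layers.estimator import regression

    # Random stand-in data so the sketch runs; in practice X and Y hold the
    # 32x32 bird / not-bird images and their one-hot labels.
    X = np.random.rand(64, 32, 32, 3)
    Y = np.eye(2)[np.random.randint(0, 2, 64)]

    network = input_data(shape=[None, 32, 32, 3])
    network = conv_2d(network, 32, 3, activation='relu')
    network = max_pool_2d(network, 2)
    network = conv_2d(network, 64, 3, activation='relu')
    network = conv_2d(network, 64, 3, activation='relu')
    network = max_pool_2d(network, 2)
    network = fully_connected(network, 512, activation='relu')
    network = dropout(network, 0.5)
    network = fully_connected(network, 2, activation='softmax')
    network = regression(network, optimizer='adam', loss='categorical_crossentropy')

    model = tflearn.DNN(network)
    model.fit(X, Y, n_epoch=1, show_metric=True, batch_size=32)
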
After just 10 passes, it was already up to 91.7%. After 50 or so passes, it capped out around 95.5% accuracy and additional training didn’t help, so I stopped it there. Congrats! Our program can now recognize birds in images! Now that we have a trained neural network, we can use it! Here’s a simple script that takes in a single image file and predicts if it is a bird or not. But to really see how effective our network is, we need to test it with lots of images. The data set I created held back 15,000 images for validation. When I ran those 15,000 images through the network, it predicted the correct answer 95% of the time. That seems pretty good, right? Well... it depends! Our network claims to be 95% accurate. But the devil is in the details. That could mean all sorts of different things. For example, what if 5% of our training images were birds and the other 95% were not birds? A program that guessed “not a bird” every single time would be 95% accurate! But it would also be 100% useless. We need to look more closely at the numbers than just the overall accuracy. To judge how good a classification system really is, we need to look closely at how it failed, not just the percentage of the time that it failed. Instead of thinking about our predictions as “right” and “wrong”, let’s break them down into four separate categories — Using our validation set of 15,000 images, here’s how many times our predictions fell into each category: Why do we break our results down like this? Because not all mistakes are created equal. Imagine if we were writing a program to detect cancer from an MRI image. If we were detecting cancer, we’d rather have false positives than false negatives. False negatives would be the worse possible case — that’s when the program told someone they definitely didn’t have cancer but they actually did. Instead of just looking at overall accuracy, we calculate Precision and Recall metrics. Precision and Recall metrics give us a clearer picture of how well we did: This tells us that 97% of the time we guessed “Bird”, we were right! But it also tells us that we only found 90% of the actual birds in the data set. In other words, we might not find every bird but we are pretty sure about it when we do find one! Now that you know the basics of deep convolutional networks, you can try out some of the examples that come with tflearn to get your hands dirty with different neural network architectures. It even comes with built-in data sets so you don’t even have to find your own images. You also know enough now to start branching and learning about other areas of machine learning. Why not learn how to use algorithms to train computers how to play Atari games next? If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun Part 4, Part 5 and Part 6! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it.
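A quick illustration of the precision and recall arithmetic described in this article (the counts below are made-up placeholders chosen only to land near the 97% and 90% figures, not the actual validation numbers):

    def precision_recall(true_pos, false_pos, false_neg):
        # Precision: of everything we called a bird, how much really was one.
        # Recall: of all the real birds, how many we actually found.
        precision = true_pos / (true_pos + false_pos)
        recall = true_pos / (true_pos + false_neg)
        return precision, recall

    p, r = precision_recall(true_pos=800, false_pos=25, false_neg=90)
    print(round(p, 2), round(r, 2))   # roughly 0.97 and 0.9
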
Adam Geitgey
15.2K
13
https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78?source=tag_archive---------1----------------
Machine Learning is Fun! Part 4: Modern Face Recognition with Deep Learning
Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 普通话, Русский, 한국어, Português, Tiếng Việt or Italiano. Have you noticed that Facebook has developed an uncanny ability to recognize your friends in your photographs? In the old days, Facebook used to make you to tag your friends in photos by clicking on them and typing in their name. Now as soon as you upload a photo, Facebook tags everyone for you like magic: This technology is called face recognition. Facebook’s algorithms are able to recognize your friends’ faces after they have been tagged only a few times. It’s pretty amazing technology — Facebook can recognize faces with 98% accuracy which is pretty much as good as humans can do! Let’s learn how modern face recognition works! But just recognizing your friends would be too easy. We can push this tech to the limit to solve a more challenging problem — telling Will Ferrell (famous actor) apart from Chad Smith (famous rock musician)! So far in Part 1, 2 and 3, we’ve used machine learning to solve isolated problems that have only one step — estimating the price of a house, generating new data based on existing data and telling if an image contains a certain object. All of those problems can be solved by choosing one machine learning algorithm, feeding in data, and getting the result. But face recognition is really a series of several related problems: As a human, your brain is wired to do all of this automatically and instantly. In fact, humans are too good at recognizing faces and end up seeing faces in everyday objects: Computers are not capable of this kind of high-level generalization (at least not yet...), so we have to teach them how to do each step in this process separately. We need to build a pipeline where we solve each step of face recognition separately and pass the result of the current step to the next step. In other words, we will chain together several machine learning algorithms: Let’s tackle this problem one step at a time. For each step, we’ll learn about a different machine learning algorithm. I’m not going to explain every single algorithm completely to keep this from turning into a book, but you’ll learn the main ideas behind each one and you’ll learn how you can build your own facial recognition system in Python using OpenFace and dlib. The first step in our pipeline is face detection. Obviously we need to locate the faces in a photograph before we can try to tell them apart! If you’ve used any camera in the last 10 years, you’ve probably seen face detection in action: Face detection is a great feature for cameras. When the camera can automatically pick out faces, it can make sure that all the faces are in focus before it takes the picture. But we’ll use it for a different purpose — finding the areas of the image we want to pass on to the next step in our pipeline. Face detection went mainstream in the early 2000's when Paul Viola and Michael Jones invented a way to detect faces that was fast enough to run on cheap cameras. However, much more reliable solutions exist now. We’re going to use a method invented in 2005 called Histogram of Oriented Gradients — or just HOG for short. To find faces in an image, we’ll start by making our image black and white because we don’t need color data to find faces: Then we’ll look at every single pixel in our image one at a time. 
For every single pixel, we want to look at the pixels directly surrounding it: Our goal is to figure out how dark the current pixel is compared to the pixels directly surrounding it. Then we want to draw an arrow showing in which direction the image is getting darker: If you repeat that process for every single pixel in the image, you end up with every pixel being replaced by an arrow. These arrows are called gradients and they show the flow from light to dark across the entire image: This might seem like a random thing to do, but there’s a really good reason for replacing the pixels with gradients. If we analyze pixels directly, really dark images and really light images of the same person will have totally different pixel values. But by only considering the direction in which brightness changes, both really dark images and really bright images will end up with the same exact representation. That makes the problem a lot easier to solve! But saving the gradient for every single pixel gives us way too much detail. We end up missing the forest for the trees. It would be better if we could just see the basic flow of lightness/darkness at a higher level so we could see the basic pattern of the image. To do this, we’ll break up the image into small squares of 16x16 pixels each. In each square, we’ll count up how many gradients point in each major direction (how many point up, point up-right, point right, etc...). Then we’ll replace that square in the image with the arrow directions that were the strongest. The end result is we turn the original image into a very simple representation that captures the basic structure of a face in a simple way: To find faces in this HOG image, all we have to do is find the part of our image that looks the most similar to a known HOG pattern that was extracted from a bunch of other training faces: Using this technique, we can now easily find faces in any image: If you want to try this step out yourself using Python and dlib, here’s code showing how to generate and view HOG representations of images. Whew, we isolated the faces in our image. But now we have to deal with the problem that faces turned in different directions look totally different to a computer: To account for this, we will try to warp each picture so that the eyes and lips are always in the same place in the image. This will make it a lot easier for us to compare faces in the next steps. To do this, we are going to use an algorithm called face landmark estimation. There are lots of ways to do this, but we are going to use the approach invented in 2014 by Vahid Kazemi and Josephine Sullivan. The basic idea is we will come up with 68 specific points (called landmarks) that exist on every face — the top of the chin, the outside edge of each eye, the inner edge of each eyebrow, etc. Then we will train a machine learning algorithm to be able to find these 68 specific points on any face: Here’s the result of locating the 68 face landmarks on our test image: Now that we know where the eyes and mouth are, we’ll simply rotate, scale and shear the image so that the eyes and mouth are centered as best as possible. We won’t do any fancy 3D warps because that would introduce distortions into the image. We are only going to use basic image transformations like rotation and scale that preserve parallel lines (called affine transformations): Now no matter how the face is turned, we are able to center the eyes and mouth in roughly the same position in the image. This will make our next step a lot more accurate. 
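For anyone who wants to experiment, a minimal dlib sketch of the detection and landmark steps might look like this (the image path and the standard 68-point model file are assumptions; the model file has to be downloaded separately, and a reasonably recent dlib is assumed):

    import dlib

    detector = dlib.get_frontal_face_detector()     # HOG-based face detector
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    img = dlib.load_rgb_image("test_face.jpg")
    for rect in detector(img, 1):                   # 1 = upsample the image once
        landmarks = predictor(img, rect)
        points = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(68)]
        print(rect, points[:5])                     # face box and a few of the 68 landmarks
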
If you want to try this step out yourself using Python and dlib, here’s the code for finding face landmarks and here’s the code for transforming the image using those landmarks. Now we get to the meat of the problem — actually telling faces apart. This is where things get really interesting! The simplest approach to face recognition is to directly compare the unknown face we found in Step 2 with all the pictures we have of people that have already been tagged. When we find a previously tagged face that looks very similar to our unknown face, it must be the same person. Seems like a pretty good idea, right? There’s actually a huge problem with that approach. A site like Facebook with billions of users and a trillion photos can’t possibly loop through every previously tagged face to compare it to every newly uploaded picture. That would take way too long. They need to be able to recognize faces in milliseconds, not hours. What we need is a way to extract a few basic measurements from each face. Then we could measure our unknown face the same way and find the known face with the closest measurements. For example, we might measure the size of each ear, the spacing between the eyes, the length of the nose, etc. If you’ve ever watched a bad crime show like CSI, you know what I am talking about: OK, so which measurements should we collect from each face to build our known face database? Ear size? Nose length? Eye color? Something else? It turns out that the measurements that seem obvious to us humans (like eye color) don’t really make sense to a computer looking at individual pixels in an image. Researchers have discovered that the most accurate approach is to let the computer figure out the measurements to collect itself. Deep learning does a better job than humans at figuring out which parts of a face are important to measure. The solution is to train a Deep Convolutional Neural Network (just like we did in Part 3). But instead of training the network to recognize objects in pictures like we did last time, we are going to train it to generate 128 measurements for each face. The training process works by looking at 3 face images at a time: Then the algorithm looks at the measurements it is currently generating for each of those three images. It then tweaks the neural network slightly so that it makes sure the measurements it generates for #1 and #2 are slightly closer while making sure the measurements for #2 and #3 are slightly further apart: After repeating this step millions of times for millions of images of thousands of different people, the neural network learns to reliably generate 128 measurements for each person. Any ten different pictures of the same person should give roughly the same measurements. Machine learning people call the 128 measurements of each face an embedding. The idea of reducing complicated raw data like a picture into a list of computer-generated numbers comes up a lot in machine learning (especially in language translation). The exact approach for faces we are using was invented in 2015 by researchers at Google but many similar approaches exist. This process of training a convolutional neural network to output face embeddings requires a lot of data and computer power. Even with an expensive NVIDIA Tesla video card, it takes about 24 hours of continuous training to get good accuracy. But once the network has been trained, it can generate measurements for any face, even ones it has never seen before! So this step only needs to be done once. 
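A minimal numpy sketch of that training signal, in the spirit of the triplet setup described above (the margin value and the random vectors are assumptions for illustration):

    import numpy as np

    def triplet_loss(anchor, positive, negative, margin=0.2):
        # anchor and positive are embeddings of the same person, negative is someone else.
        # The loss is zero once the "same person" pair is closer than the
        # "different person" pair by at least the margin.
        pos_dist = np.sum((anchor - positive) ** 2)
        neg_dist = np.sum((anchor - negative) ** 2)
        return max(pos_dist - neg_dist + margin, 0.0)

    rng = np.random.RandomState(0)
    a, p, n = rng.rand(3, 128)          # three fake 128-number embeddings
    print(triplet_loss(a, p, n))
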
Lucky for us, the fine folks at OpenFace already did this and they published several trained networks which we can directly use. Thanks, Brandon Amos and team! So all we need to do ourselves is run our face images through their pre-trained network to get the 128 measurements for each face. Here are the measurements for our test image: So what parts of the face are these 128 numbers measuring exactly? It turns out that we have no idea. It doesn’t really matter to us. All we care about is that the network generates nearly the same numbers when looking at two different pictures of the same person. If you want to try this step yourself, OpenFace provides a Lua script that will generate embeddings for all images in a folder and write them to a CSV file. You run it like this. This last step is actually the easiest step in the whole process. All we have to do is find the person in our database of known people who has the closest measurements to our test image. You can do that by using any basic machine learning classification algorithm. No fancy deep learning tricks are needed. We’ll use a simple linear SVM classifier, but lots of classification algorithms could work. All we need to do is train a classifier that can take in the measurements from a new test image and tell which known person is the closest match. Running this classifier takes milliseconds. The result of the classifier is the name of the person! So let’s try out our system. First, I trained a classifier with the embeddings of about 20 pictures each of Will Ferrell, Chad Smith and Jimmy Fallon: Then I ran the classifier on every frame of the famous YouTube video of Will Ferrell and Chad Smith pretending to be each other on the Jimmy Fallon show: It works! And look how well it works for faces in different poses — even sideways faces! Let’s review the steps we followed: Now that you know how this all works, here are instructions, from start to finish, for how to run this entire face recognition pipeline on your own computer: UPDATE 4/9/2017: You can still follow the steps below to use OpenFace. However, I’ve released a new Python-based face recognition library called face_recognition that is much easier to install and use. So I’d recommend trying out face_recognition first instead of continuing below! I even put together a pre-configured virtual machine with face_recognition, OpenCV, TensorFlow and lots of other deep learning tools pre-installed. You can download and run it on your computer very easily. Give the virtual machine a shot if you don’t want to install all these libraries yourself! Original OpenFace instructions: If you liked this article, please consider signing up for my Machine Learning is Fun! newsletter: You can also follow me on Twitter at @ageitgey, email me directly or find me on LinkedIn. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun Part 5! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it.
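A hedged sketch of that final classification step with scikit-learn (random stand-in embeddings and made-up names instead of the real OpenFace output):

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.RandomState(0)
    known_embeddings = rng.rand(60, 128)                  # stand-ins for real 128-d face embeddings
    known_names = np.repeat(["will", "chad", "jimmy"], 20)

    clf = LinearSVC()
    clf.fit(known_embeddings, known_names)

    new_embedding = rng.rand(1, 128)                      # embedding of the face we want to identify
    print(clf.predict(new_embedding))                     # -> name of the closest-matching known person
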
Arthur Juliani
9K
6
https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0?source=tag_archive---------2----------------
Simple Reinforcement Learning with Tensorflow Part 0: Q-Learning with Tables and Neural Networks
For this tutorial in my Reinforcement Learning series, we are going to be exploring a family of RL algorithms called Q-Learning algorithms. These are a little different from the policy-based algorithms that will be looked at in the following tutorials (Parts 1–3). Instead of starting with a complex and unwieldy deep neural network, we will begin by implementing a simple lookup-table version of the algorithm, and then show how to implement a neural-network equivalent using Tensorflow. Given that we are going back to basics, it may be best to think of this as Part-0 of the series. It will hopefully give an intuition into what is really happening in Q-Learning that we can then build on going forward when we eventually combine the policy gradient and Q-learning approaches to build state-of-the-art RL agents (If you are more interested in Policy Networks, or already have a grasp on Q-Learning, feel free to start the tutorial series here instead). Unlike policy gradient methods, which attempt to learn functions which directly map an observation to an action, Q-Learning attempts to learn the value of being in a given state, and taking a specific action there. While both approaches ultimately allow us to take intelligent actions given a situation, the means of getting to that action differ significantly. You may have heard about Deep Q-Networks, which can play Atari games. These are really just larger and more complex implementations of the Q-Learning algorithm we are going to discuss here. For this tutorial we are going to be attempting to solve the FrozenLake environment from the OpenAI gym. For those unfamiliar, the OpenAI gym provides an easy way for people to experiment with their learning agents in an array of provided toy games. The FrozenLake environment consists of a 4x4 grid of blocks, each one either being the start block, the goal block, a safe frozen block, or a dangerous hole. The objective is to have an agent learn to navigate from the start to the goal without moving onto a hole. At any given time the agent can choose to move either up, down, left, or right. The catch is that there is a wind which occasionally blows the agent onto a space they didn’t choose. As such, perfect performance every time is impossible, but learning to avoid the holes and reach the goal is certainly still doable. The reward at every step is 0, except for entering the goal, which provides a reward of 1. Thus, we will need an algorithm that learns long-term expected rewards. This is exactly what Q-Learning is designed to provide. In its simplest implementation, Q-Learning is a table of values for every state (row) and action (column) possible in the environment. Within each cell of the table, we learn a value for how good it is to take a given action within a given state. In the case of the FrozenLake environment, we have 16 possible states (one for each block), and 4 possible actions (the four directions of movement), giving us a 16x4 table of Q-values. We start by initializing the table to be uniform (all zeros), and then as we observe the rewards we obtain for various actions, we update the table accordingly. We make updates to our Q-table using something called the Bellman equation, which states that the expected long-term reward for a given action is equal to the immediate reward from the current action combined with the expected reward from the best future action taken at the following state. In this way, we reuse our own Q-table when estimating how to update our table for future actions! 
In equation form, the rule looks like this: This says that the Q-value for a given state (s) and action (a) should represent the current reward (r) plus the maximum discounted (γ) future reward expected according to our own table for the next state (s’) we would end up in. The discount variable allows us to decide how important the possible future rewards are compared to the present reward. By updating in this way, the table slowly begins to obtain accurate measures of the expected future reward for a given action in a given state. Below is a Python walkthrough of the Q-Table algorithm implemented in the FrozenLake environment: (Thanks to Praneet D for finding the optimal hyperparameters for this approach) Now, you may be thinking: tables are great, but they don’t really scale, do they? While it is easy to have a 16x4 table for a simple grid world, the number of possible states in any modern game or real-world environment is nearly infinitely larger. For most interesting problems, tables simply don’t work. We instead need some way to take a description of our state, and produce Q-values for actions without a table: that is where neural networks come in. By acting as a function approximator, we can take any number of possible states that can be represented as a vector and learn to map them to Q-values. In the case of the FrozenLake example, we will be using a one-layer network which takes the state encoded in a one-hot vector (1x16), and produces a vector of 4 Q-values, one for each action. Such a simple network acts kind of like a glorified table, with the network weights serving as the old cells. The key difference is that we can easily expand the Tensorflow network with added layers, activation functions, and different input types, whereas all that is impossible with a regular table. The method of updating is a little different as well. Instead of directly updating our table, with a network we will be using backpropagation and a loss function. Our loss function will be sum-of-squares loss, where the difference between the current predicted Q-values, and the “target” value is computed and the gradients passed through the network. In this case, our Q-target for the chosen action is the equivalent to the Q-value computed in equation 1 above. Below is the Tensorflow walkthrough of implementing our simple Q-Network: While the network learns to solve the FrozenLake problem, it turns out it doesn’t do so quite as efficiently as the Q-Table. While neural networks allow for greater flexibility, they do so at the cost of stability when it comes to Q-Learning. There are a number of possible extensions to our simple Q-Network which allow for greater performance and more robust learning. Two tricks in particular are referred to as Experience Replay and Freezing Target Networks. Those improvements and other tweaks were the key to getting Atari-playing Deep Q-Networks, and we will be exploring those additions in the future. For more info on the theory behind Q-Learning, see this great post by Tambet Matiisen. I hope this tutorial has been helpful for those curious about how to implement simple Q-Learning algorithms! If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated! If you’d like to follow my work on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on Twitter @awjliani. 
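A hedged sketch of a Q-table loop in the spirit described above (the hyperparameters are illustrative assumptions, and it assumes the classic gym API where env.step() returns a four-tuple):

    import gym
    import numpy as np

    env = gym.make("FrozenLake-v0")
    Q = np.zeros([env.observation_space.n, env.action_space.n])   # the 16x4 table of Q-values
    lr, gamma, num_episodes = 0.8, 0.95, 2000

    for i in range(num_episodes):
        s = env.reset()
        done = False
        while not done:
            # Greedy action plus decaying random noise so the agent keeps exploring early on.
            a = np.argmax(Q[s, :] + np.random.randn(1, env.action_space.n) * (1.0 / (i + 1)))
            s1, r, done, _ = env.step(a)
            # Bellman update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a').
            Q[s, a] = Q[s, a] + lr * (r + gamma * np.max(Q[s1, :]) - Q[s, a])
            s = s1
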
More from my Simple Reinforcement Learning with Tensorflow series: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @Unity3D & Cognitive Neuroscience PhD student. Exploring frontier technology through the lens of artificial intelligence, data science, and the shape of things to come
Dhruv Parthasarathy
4.3K
12
https://blog.athelas.com/a-brief-history-of-cnns-in-image-segmentation-from-r-cnn-to-mask-r-cnn-34ea83205de4?source=tag_archive---------5----------------
A Brief History of CNNs in Image Segmentation: From R-CNN to Mask R-CNN
At Athelas, we use Convolutional Neural Networks (CNNs) for a lot more than just classification! In this post, we’ll see how CNNs can be used, with great results, in image instance segmentation. Ever since Alex Krizhevsky, Geoff Hinton, and Ilya Sutskever won ImageNet in 2012, Convolutional Neural Networks (CNNs) have become the gold standard for image classification. In fact, since then, CNNs have improved to the point where they now outperform humans on the ImageNet challenge! While these results are impressive, image classification is far simpler than the complexity and diversity of true human visual understanding. In classification, there’s generally an image with a single object as the focus and the task is to say what that image is (see above). But when we look at the world around us, we carry out far more complex tasks. We see complicated sights with multiple overlapping objects, and different backgrounds and we not only classify these different objects but also identify their boundaries, differences, and relations to one another! Can CNNs help us with such complex tasks? Namely, given a more complicated image, can we use CNNs to identify the different objects in the image, and their boundaries? As has been shown by Ross Girshick and his peers over the last few years, the answer is conclusively yes. Through this post, we’ll cover the intuition behind some of the main techniques used in object detection and segmentation and see how they’ve evolved from one implementation to the next. In particular, we’ll cover R-CNN (Regional CNN), the original application of CNNs to this problem, along with its descendants Fast R-CNN, and Faster R-CNN. Finally, we’ll cover Mask R-CNN, a paper released recently by Facebook Research that extends such object detection techniques to provide pixel-level segmentation. Here are the papers referenced in this post: Inspired by the research of Hinton’s lab at the University of Toronto, a small team at UC Berkeley, led by Professor Jitendra Malik, asked themselves what today seems like an inevitable question: Object detection is the task of finding the different objects in an image and classifying them (as seen in the image above). The team, composed of Ross Girshick (a name we’ll see again), Jeff Donahue, and Trevor Darrell, found that this problem can be solved with Krizhevsky’s results by testing on the PASCAL VOC Challenge, a popular object detection challenge akin to ImageNet. They write, Let’s now take a moment to understand how their architecture, Regions With CNNs (R-CNN), works. Understanding R-CNN The goal of R-CNN is to take in an image, and correctly identify where the main objects in the image are (via a bounding box). But how do we find out where these bounding boxes are? R-CNN does what we might intuitively do as well - propose a bunch of boxes in the image and see if any of them actually correspond to an object. R-CNN creates these bounding boxes, or region proposals, using a process called Selective Search which you can read about here. At a high level, Selective Search (shown in the image above) looks at the image through windows of different sizes, and for each size tries to group together adjacent pixels by texture, color, or intensity to identify objects. Once the proposals are created, R-CNN warps each region to a standard square size and passes it through to a modified version of AlexNet (the winning submission to ImageNet 2012 that inspired R-CNN), as shown above. 
On the final layer of the CNN, R-CNN adds a Support Vector Machine (SVM) that simply classifies whether this is an object, and if so what object. This is step 4 in the image above. Improving the Bounding Boxes Now, having found the object in the box, can we tighten the box to fit the true dimensions of the object? We can, and this is the final step of R-CNN. R-CNN runs a simple linear regression on the region proposal to generate tighter bounding box coordinates to get our final result. Here are the inputs and outputs of this regression model: So, to summarize, R-CNN is just the following steps: R-CNN works really well, but is really quite slow for a few simple reasons: In 2015, Ross Girshick, the first author of R-CNN, solved both these problems, leading to the second algorithm in our short history - Fast R-CNN. Let’s now go over its main insights. Fast R-CNN Insight 1: RoI (Region of Interest) Pooling For the forward pass of the CNN, Girshick realized that for each image, a lot of proposed regions for the image invariably overlapped causing us to run the same CNN computation again and again (~2000 times!). His insight was simple — Why not run the CNN just once per image and then find a way to share that computation across the ~2000 proposals? This is exactly what Fast R-CNN does using a technique known as RoIPool (Region of Interest Pooling). At its core, RoIPool shares the forward pass of a CNN for an image across its subregions. In the image above, notice how the CNN features for each region are obtained by selecting a corresponding region from the CNN’s feature map. Then, the features in each region are pooled (usually using max pooling). So all it takes us is one pass of the original image as opposed to ~2000! Fast R-CNN Insight 2: Combine All Models into One Network The second insight of Fast R-CNN is to jointly train the CNN, classifier, and bounding box regressor in a single model. Where earlier we had different models to extract image features (CNN), classify (SVM), and tighten bounding boxes (regressor), Fast R-CNN instead used a single network to compute all three. You can see how this was done in the image above. Fast R-CNN replaced the SVM classifier with a softmax layer on top of the CNN to output a classification. It also added a linear regression layer parallel to the softmax layer to output bounding box coordinates. In this way, all the outputs needed came from one single network! Here are the inputs and outputs to this overall model: Even with all these advancements, there was still one remaining bottleneck in the Fast R-CNN process — the region proposer. As we saw, the very first step to detecting the locations of objects is generating a bunch of potential bounding boxes or regions of interest to test. In Fast R-CNN, these proposals were created using Selective Search, a fairly slow process that was found to be the bottleneck of the overall process. In the middle 2015, a team at Microsoft Research composed of Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, found a way to make the region proposal step almost cost free through an architecture they (creatively) named Faster R-CNN. The insight of Faster R-CNN was that region proposals depended on features of the image that were already calculated with the forward pass of the CNN (first step of classification). So why not reuse those same CNN results for region proposals instead of running a separate selective search algorithm? Indeed, this is just what the Faster R-CNN team achieved. 
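Before moving on, here is a crude numpy sketch of the RoI pooling idea introduced above (a single-channel feature map, made-up proposal coordinates, and none of the quantization details handled by real implementations):

    import numpy as np

    def roi_max_pool(feature_map, roi, output_size=(2, 2)):
        # Crop the feature map to the proposal, then max-pool the crop down to a fixed
        # size so every proposal yields features of the same shape.
        x1, y1, x2, y2 = roi
        region = feature_map[y1:y2, x1:x2]
        rows = np.array_split(region, output_size[0], axis=0)
        pooled = [[col.max() for col in np.array_split(row, output_size[1], axis=1)]
                  for row in rows]
        return np.array(pooled)

    fmap = np.random.rand(25, 25)                  # pretend CNN feature map
    print(roi_max_pool(fmap, roi=(3, 3, 11, 15)))  # 2x2 pooled features for this proposal
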
In the image above, you can see how a single CNN is used to both carry out region proposals and classification. This way, only one CNN needs to be trained and we get region proposals almost for free! The authors write: Here are the inputs and outputs of their model: How the Regions are Generated Let’s take a moment to see how Faster R-CNN generates these region proposals from CNN features. Faster R-CNN adds a Fully Convolutional Network on top of the features of the CNN creating what’s known as the Region Proposal Network. The Region Proposal Network works by passing a sliding window over the CNN feature map and at each window, outputting k potential bounding boxes and scores for how good each of those boxes is expected to be. What do these k boxes represent? Intuitively, we know that objects in an image should fit certain common aspect ratios and sizes. For instance, we know that we want some rectangular boxes that resemble the shapes of humans. Likewise, we know we won’t see many boxes that are very very thin. In such a way, we create k such common aspect ratios we call anchor boxes. For each such anchor box, we output one bounding box and score per position in the image. With these anchor boxes in mind, let’s take a look at the inputs and outputs to this Region Proposal Network: We then pass each such bounding box that is likely to be an object into Fast R-CNN to generate a classification and tightened bounding boxes. So far, we’ve seen how we’ve been able to use CNN features in many interesting ways to effectively locate different objects in an image with bounding boxes. Can we extend such techniques to go one step further and locate exact pixels of each object instead of just bounding boxes? This problem, known as image segmentation, is what Kaiming He and a team of researchers, including Girshick, explored at Facebook AI using an architecture known as Mask R-CNN. Much like Fast R-CNN, and Faster R-CNN, Mask R-CNN’s underlying intuition is straight forward. Given that Faster R-CNN works so well for object detection, could we extend it to also carry out pixel level segmentation? Mask R-CNN does this by adding a branch to Faster R-CNN that outputs a binary mask that says whether or not a given pixel is part of an object. The branch (in white in the above image), as before, is just a Fully Convolutional Network on top of a CNN based feature map. Here are its inputs and outputs: But the Mask R-CNN authors had to make one small adjustment to make this pipeline work as expected. RoiAlign - Realigning RoIPool to be More Accurate When run without modifications on the original Faster R-CNN architecture, the Mask R-CNN authors realized that the regions of the feature map selected by RoIPool were slightly misaligned from the regions of the original image. Since image segmentation requires pixel level specificity, unlike bounding boxes, this naturally led to inaccuracies. The authors were able to solve this problem by cleverly adjusting RoIPool to be more precisely aligned using a method known as RoIAlign. Imagine we have an image of size 128x128 and a feature map of size 25x25. Let’s imagine we want features the region corresponding to the top-left 15x15 pixels in the original image (see above). How might we select these pixels from the feature map? We know each pixel in the original image corresponds to ~ 25/128 pixels in the feature map. To select 15 pixels from the original image, we just select 15 * 25/128 ~= 2.93 pixels. 
In RoIPool, we would round this down and select 2 pixels causing a slight misalignment. However, in RoIAlign, we avoid such rounding. Instead, we use bilinear interpolation to get a precise idea of what would be at pixel 2.93. This, at a high level, is what allows us to avoid the misalignments caused by RoIPool. Once these masks are generated, Mask R-CNN combines them with the classifications and bounding boxes from Faster R-CNN to generate such wonderfully precise segmentations: If you’re interested in trying out these algorithms yourselves, here are relevant repositories: Faster R-CNN Mask R-CNN In just 3 years, we’ve seen how the research community has progressed from Krizhevsky et. al’s original result to R-CNN, and finally all the way to such powerful results as Mask R-CNN. Seen in isolation, results like Mask R-CNN seem like incredible leaps of genius that would be unapproachable. Yet, through this post, I hope you’ve seen how such advancements are really the sum of intuitive, incremental improvements through years of hard work and collaboration. Each of the ideas proposed by R-CNN, Fast R-CNN, Faster R-CNN, and finally Mask R-CNN were not necessarily quantum leaps, yet their sum products have led to really remarkable results that bring us closer to a human level understanding of sight. What particularly excites me, is that the time between R-CNN and Mask R-CNN was just three years! With continued funding, focus, and support, how much further can Computer Vision improve over the next three years? If you see any errors or issues in this post, please contact me at dhruv@getathelas.com and I”ll immediately correct them! If you’re interested in applying such techniques, come join us at Athelas where we apply Computer Vision to blood diagnostics daily: Other posts we’ve written: Thanks to Bharath Ramsundar, Pranav Ramkrishnan, Tanay Tandon, and Oliver Cameron for help with this post! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. @dhruvp. VP Eng @Athelas. MIT Math and CS Undergrad ’13. MIT CS Masters ’14. Previously: Director of AI Programs @ Udacity. Blood Diagnostics through Deep Learning http://athelas.com
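For the curious, a small numpy sketch of the bilinear sampling idea behind RoIAlign (a single-channel feature map and the 2.93 example from above; real implementations sample several points per output bin):

    import numpy as np

    def bilinear_sample(fmap, x, y):
        # Sample the feature map at a non-integer (x, y) by blending the four
        # surrounding cells, instead of rounding to the nearest cell like RoIPool.
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, fmap.shape[1] - 1), min(y0 + 1, fmap.shape[0] - 1)
        dx, dy = x - x0, y - y0
        top = fmap[y0, x0] * (1 - dx) + fmap[y0, x1] * dx
        bottom = fmap[y1, x0] * (1 - dx) + fmap[y1, x1] * dx
        return top * (1 - dy) + bottom * dy

    fmap = np.random.rand(25, 25)
    print(bilinear_sample(fmap, 2.93, 2.93))
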
Sebastian Heinz
4.4K
13
https://medium.com/mlreview/a-simple-deep-learning-model-for-stock-price-prediction-using-tensorflow-30505541d877?source=tag_archive---------6----------------
A simple deep learning model for stock price prediction using TensorFlow
For a recent hackathon that we did at STATWORX, some of our team members scraped minute-by-minute S&P 500 data from the Google Finance API. The data consisted of the index as well as the stock prices of the S&P 500’s constituents. Having this data at hand, the idea of developing a deep learning model for predicting the S&P 500 index based on the prices of the 500 constituents one minute earlier came immediately to mind. Playing around with the data and building the deep learning model with TensorFlow was fun and so I decided to write my first Medium.com story: a little TensorFlow tutorial on predicting S&P 500 stock prices. What you will read is not an in-depth tutorial, but more a high-level introduction to the important building blocks and concepts of TensorFlow models. The Python code I’ve created is not optimized for efficiency but for understandability. The dataset I’ve used can be downloaded from here (40MB). Our team exported the scraped stock data from our scraping server as a CSV file. The dataset contains n = 41,266 minutes of data ranging from April to August 2017 on 500 stocks as well as the total S&P 500 index price. Index and stocks are arranged in wide format. The data was already cleaned and prepared, meaning missing stock and index prices were LOCF’ed (last observation carried forward), so that the file did not contain any missing values. A quick look at the S&P time series using pyplot.plot(data['SP500']): Note: This is actually the lead of the S&P 500 index, meaning, its value is shifted 1 minute into the future. This operation is necessary since we want to predict the next minute of the index and not the current minute. The dataset was split into training and test data. The training data contained 80% of the total dataset. The data was not shuffled but sequentially sliced. The training data ranges from April to approximately the end of July 2017, and the test data ends at the end of August 2017. There are a lot of different approaches to time series cross-validation, such as rolling forecasts with and without refitting or more elaborate concepts such as time series bootstrap resampling. The latter involves repeated samples from the remainder of the seasonal decomposition of the time series in order to simulate samples that follow the same seasonal pattern as the original time series but are not exact copies of its values. Most neural network architectures benefit from scaling the inputs (sometimes also the output). Why? Because most common activation functions of the network’s neurons such as tanh or sigmoid are defined on the [-1, 1] or [0, 1] interval respectively. Nowadays, rectified linear unit (ReLU) activations are commonly used and are unbounded on the axis of possible activation values. However, we will scale both the inputs and targets anyway. Scaling can be easily accomplished in Python using sklearn’s MinMaxScaler. Remark: Caution must be exercised regarding which part of the data is scaled and when. A common mistake is to scale the whole dataset before the training and test split is applied. Why is this a mistake? Because scaling involves the calculation of statistics, e.g. the min/max of a variable. When performing time series forecasting in real life, you do not have information from future observations at the time of forecasting. Therefore, calculation of scaling statistics has to be conducted on training data and must then be applied to the test data. Otherwise, you use future information at the time of forecasting, which commonly biases forecasting metrics in a positive direction. 
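A minimal sketch of that scaling rule with scikit-learn (random stand-in data in place of the actual S&P 500 file):

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler

    data = np.random.rand(41266, 501)              # stand-in for the index plus 500 constituent prices
    split = int(0.8 * len(data))
    train, test = data[:split], data[split:]       # sequential split, no shuffling

    scaler = MinMaxScaler()
    train_scaled = scaler.fit_transform(train)     # fit the min/max statistics on training data only...
    test_scaled = scaler.transform(test)           # ...then apply those same statistics to the test data
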
TensorFlow is a great piece of software and currently the leading deep learning and neural network computation framework. It is based on a low-level C++ backend but is usually controlled via Python (there is also a neat TensorFlow library for R, maintained by RStudio). TensorFlow operates on a graph representation of the underlying computational task. This approach allows the user to specify mathematical operations as elements in a graph of data, variables and operators. Since neural networks are actually graphs of data and mathematical operations, TensorFlow is just perfect for neural networks and deep learning. Check out this simple example (stolen from our deep learning introduction from our blog): In the figure above, two numbers are supposed to be added. Those numbers are stored in two variables, a and b. The two values are flowing through the graph and arrive at the square node, where they are being added. The result of the addition is stored into another variable, c. Actually, a, b and c can be considered as placeholders. Any numbers that are fed into a and b get added and are stored into c. This is exactly how TensorFlow works. The user defines an abstract representation of the model (neural network) through placeholders and variables. Afterwards, the placeholders get "filled" with real data and the actual computations take place. The following code implements the toy example from above in TensorFlow (a sketch follows below): After having imported the TensorFlow library, two placeholders are defined using tf.placeholder(). They correspond to the two blue circles on the left of the image above. Afterwards, the mathematical addition is defined via tf.add(). The result of the computation is c = 9. With placeholders set up, the graph can be executed with any integer value for a and b. Of course, the former problem is just a toy example. The required graphs and computations in a neural network are much more complex. As mentioned before, it all starts with placeholders. We need two placeholders in order to fit our model: X contains the network's inputs (the stock prices of all S&P 500 constituents at time T = t) and Y the network's outputs (the index value of the S&P 500 at time T = t + 1). The shape of the placeholders corresponds to [None, n_stocks], with [None] meaning that the inputs are a 2-dimensional matrix and the outputs are a 1-dimensional vector. It is crucial to understand which input and output dimensions the neural net needs in order to design it properly. The None argument indicates that at this point we do not yet know the number of observations that flow through the neural net graph in each batch, so we keep it flexible. We will later define the variable batch_size that controls the number of observations per training batch. Besides placeholders, variables are another cornerstone of the TensorFlow universe. While placeholders are used to store input and target data in the graph, variables are used as flexible containers within the graph that are allowed to change during graph execution. Weights and biases are represented as variables in order to adapt during training. Variables need to be initialized prior to model training. We will get into that in more detail a little later. The model consists of four hidden layers. The first layer contains 1024 neurons, slightly more than double the size of the inputs. Subsequent hidden layers are always half the size of the previous layer, which means 512, 256 and finally 128 neurons. 
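A hedged TensorFlow 1.x-style sketch of the toy example and the placeholders just described (the fed integer values and the plain random-normal initialization of the first layer's weights are simplifications, not the original code):

    import tensorflow as tf   # written against the TensorFlow 1.x graph API used in this article

    # The toy example: two placeholders flowing into an addition node.
    a = tf.placeholder(dtype=tf.int32)
    b = tf.placeholder(dtype=tf.int32)
    c = tf.add(a, b)

    with tf.Session() as sess:
        print(sess.run(c, feed_dict={a: 5, b: 4}))   # -> 9

    # The model's placeholders: 500 constituent prices in, one index value out.
    n_stocks = 500
    X = tf.placeholder(dtype=tf.float32, shape=[None, n_stocks])
    Y = tf.placeholder(dtype=tf.float32, shape=[None])

    # Variables for the first hidden layer (1024 neurons), which adapt during training.
    W_hidden_1 = tf.Variable(tf.random_normal([n_stocks, 1024]))
    bias_hidden_1 = tf.Variable(tf.zeros([1024]))
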
A reduction of the number of neurons for each subsequent layer compresses the information the network identifies in the previous layers. Of course, other network architectures and neuron configurations are possible but are out of scope for this introduction-level article. It is important to understand the required variable dimensions between input, hidden and output layers. As a rule of thumb in multilayer perceptrons (MLPs, the type of networks used here), the second dimension of the previous layer is the first dimension in the current layer for weight matrices. This might sound complicated but is essentially just each layer passing its output as input to the next layer. The biases dimension equals the second dimension of the current layer’s weight matrix, which corresponds to the number of neurons in this layer. After definition of the required weight and bias variables, the network topology, the architecture of the network, needs to be specified. To do this, placeholders (data) and variables (weights and biases) need to be combined into a system of sequential matrix multiplications. Furthermore, the hidden layers of the network are transformed by activation functions. Activation functions are important elements of the network architecture since they introduce non-linearity to the system. There are dozens of possible activation functions out there; one of the most common is the rectified linear unit (ReLU), which will also be used in this model. The image below illustrates the network architecture. The model consists of three major building blocks: the input layer, the hidden layers and the output layer. This architecture is called a feedforward network. Feedforward indicates that the batch of data solely flows from left to right. Other network architectures, such as recurrent neural networks, also allow data to flow “backwards” in the network. The cost function of the network is used to generate a measure of deviation between the network’s predictions and the actual observed training targets. For regression problems, the mean squared error (MSE) function is commonly used. MSE computes the average squared deviation between predictions and targets. Basically, any differentiable function can be implemented in order to compute a deviation measure between predictions and targets. However, the MSE exhibits certain properties that are advantageous for the general optimization problem to be solved. The optimizer takes care of the necessary computations that are used to adapt the network’s weight and bias variables during training. Those computations involve the calculation of so-called gradients, which indicate the direction in which the weights and biases have to be changed during training in order to minimize the network’s cost function. The development of stable and speedy optimizers is a major field in neural network and deep learning research. Here the Adam Optimizer is used, which is one of the current default optimizers in deep learning development. Adam stands for “Adaptive Moment Estimation” and can be considered a combination of two other popular optimizers, AdaGrad and RMSProp. Initializers are used to initialize the network’s variables before training. Since neural networks are trained using numerical optimization techniques, the starting point of the optimization problem is one of the key factors in finding good solutions to the underlying problem. There are different initializers available in TensorFlow, each with different initialization approaches. 
Here, I use the tf.variance_scaling_initializer(), which is one of the default initialization strategies. Note that with TensorFlow it is possible to define multiple initialization functions for different variables within the graph. However, in most cases, a unified initialization is sufficient. After having defined the placeholders, variables, initializers, cost functions and optimizers of the network, the model needs to be trained. Usually, this is done by minibatch training. During minibatch training, random data samples of n = batch_size are drawn from the training data and fed into the network. The training dataset gets divided into n / batch_size batches that are sequentially fed into the network. At this point the placeholders X and Y come into play. They store the input and target data and present them to the network as inputs and targets. A sampled data batch of X flows through the network until it reaches the output layer. There, TensorFlow compares the model’s predictions against the actual observed targets Y in the current batch. Afterwards, TensorFlow conducts an optimization step and updates the network’s parameters, corresponding to the selected learning scheme. After having updated the weights and biases, the next batch is sampled and the process repeats itself. The procedure continues until all batches have been presented to the network. One full sweep over all batches is called an epoch. The training of the network stops once the maximum number of epochs is reached or another stopping criterion defined by the user applies. During the training, we evaluate the network’s predictions on the test set — the data which is not learned, but set aside — for every 5th batch and visualize it. Additionally, the images are exported to disk and later combined into a video animation of the training process (see below). The model quickly learns the shape and location of the time series in the test data and is able to produce an accurate prediction after some epochs. Nice! One can see that the network rapidly adapts to the basic shape of the time series and continues to learn finer patterns of the data. This also corresponds to the Adam learning scheme that lowers the learning rate during model training in order not to overshoot the optimization minimum. After 10 epochs, we have a pretty close fit to the test data! The final test MSE equals 0.00078 (it is very low, because the target is scaled). The mean absolute percentage error of the forecast on the test set is equal to 5.31%, which is pretty good. Note that this is just a fit to the test data, not an actual out-of-sample metric in a real-world scenario. Please note that there are tons of ways of further improving this result: design of layers and neurons, choosing different initialization and activation schemes, introducing dropout layers, early stopping and so on. Furthermore, different types of deep learning models, such as recurrent neural networks, might achieve better performance on this task. However, this is not the scope of this introductory post. The release of TensorFlow was a landmark event in deep learning research. Its flexibility and performance allows researchers to develop all kinds of sophisticated neural network architectures as well as other ML algorithms. However, flexibility comes at the cost of longer time-to-model cycles compared to higher level APIs such as Keras or MxNet. 
Nonetheless, I am sure that TensorFlow will become the de facto standard in neural network and deep learning development, both in research and in practical applications. Many of our customers are already using TensorFlow or are starting to develop projects that employ TensorFlow models. Our data science consultants at STATWORX also use TensorFlow heavily for deep learning and neural network research and development. Let's see what Google has planned for the future of TensorFlow. One thing that is missing, at least in my opinion, is a neat graphical user interface for designing and developing neural net architectures with a TensorFlow backend. Maybe this is something Google is already working on ;) If you have any comments or questions on my first Medium story, feel free to comment below! I will try to answer them. Also, feel free to use my code or share this story with your peers on social platforms of your choice. Update: I've added both the Python script and a (zipped) dataset to a GitHub repository. Feel free to clone and fork. Lastly, follow me on: Twitter | LinkedIn From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. CEO @ STATWORX. Doing data science, stats and ML for over a decade. Food, wine and cocktail enthusiast. Check our website: https://www.statworx.com Highlights from Machine Learning Research, Projects and Learning Materials. From and For ML Scientists, Engineers and Enthusiasts.
Max Pechyonkin
23K
8
https://medium.com/ai%C2%B3-theory-practice-business/understanding-hintons-capsule-networks-part-i-intuition-b4b559d1159b?source=tag_archive---------7----------------
Understanding Hinton’s Capsule Networks. Part I: Intuition.
Part I: Intuition (you are reading it now)Part II: How Capsules WorkPart III: Dynamic Routing Between CapsulesPart IV: CapsNet Architecture Quick announcement about our new publication AI3. We are getting the best writers together to talk about the Theory, Practice, and Business of AI and machine learning. Follow it to stay up to date on the latest trends. Last week, Geoffrey Hinton and his team published two papers that introduced a completely new type of neural network based on so-called capsules. In addition to that, the team published an algorithm, called dynamic routing between capsules, that allows to train such a network. For everyone in the deep learning community, this is huge news, and for several reasons. First of all, Hinton is one of the founders of deep learning and an inventor of numerous models and algorithms that are widely used today. Secondly, these papers introduce something completely new, and this is very exciting because it will most likely stimulate additional wave of research and very cool applications. In this post, I will explain why this new architecture is so important, as well as intuition behind it. In the following posts I will dive into technical details. However, before talking about capsules, we need to have a look at CNNs, which are the workhorse of today’s deep learning. CNNs (convolutional neural networks) are awesome. They are one of the reasons deep learning is so popular today. They can do amazing things that people used to think computers would not be capable of doing for a long, long time. Nonetheless, they have their limits and they have fundamental drawbacks. Let us consider a very simple and non-technical example. Imagine a face. What are the components? We have the face oval, two eyes, a nose and a mouth. For a CNN, a mere presence of these objects can be a very strong indicator to consider that there is a face in the image. Orientational and relative spatial relationships between these components are not very important to a CNN. How do CNNs work? The main component of a CNN is a convolutional layer. Its job is to detect important features in the image pixels. Layers that are deeper (closer to the input) will learn to detect simple features such as edges and color gradients, whereas higher layers will combine simple features into more complex features. Finally, dense layers at the top of the network will combine very high level features and produce classification predictions. An important thing to understand is that higher-level features combine lower-level features as a weighted sum: activations of a preceding layer are multiplied by the following layer neuron’s weights and added, before being passed to activation nonlinearity. Nowhere in this setup there is pose (translational and rotational) relationship between simpler features that make up a higher level feature. CNN approach to solve this issue is to use max pooling or successive convolutional layers that reduce spacial size of the data flowing through the network and therefore increase the “field of view” of higher layer’s neurons, thus allowing them to detect higher order features in a larger region of the input image. Max pooling is a crutch that made convolutional networks work surprisingly well, achieving superhuman performance in many areas. But do not be fooled by its performance: while CNNs work better than any model before them, max pooling nonetheless is losing valuable information. 
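As a concrete illustration of the convolution-plus-max-pooling pipeline described above, a minimal Keras classifier might look like the sketch below. The layer sizes and input shape are arbitrary and only meant to show where max pooling discards precise spatial information; this is not code from the capsule papers.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    # Early convolutional layers detect simple features such as edges
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    # Max pooling keeps only the strongest activation in each 2x2 window,
    # enlarging the "field of view" but throwing away precise pose information
    MaxPooling2D(pool_size=(2, 2)),
    # Later layers combine simple features into more complex features
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    # Dense layers at the top combine high-level features into class scores
    Flatten(),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()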
Hinton himself stated that the fact that max pooling is working so well is a big mistake and a disaster: Of course, you can do away with max pooling and still get good results with traditional CNNs, but they still do not solve the key problem: In the example above, a mere presence of 2 eyes, a mouth and a nose in a picture does not mean there is a face, we also need to know how these objects are oriented relative to each other. Computer graphics deals with constructing a visual image from some internal hierarchical representation of geometric data. Note that the structure of this representation needs to take into account relative positions of objects. That internal representation is stored in computer’s memory as arrays of geometrical objects and matrices that represent relative positions and orientation of these objects. Then, special software takes that representation and converts it into an image on the screen. This is called rendering. Inspired by this idea, Hinton argues that brains, in fact, do the opposite of rendering. He calls it inverse graphics: from visual information received by eyes, they deconstruct a hierarchical representation of the world around us and try to match it with already learned patterns and relationships stored in the brain. This is how recognition happens. And the key idea is that representation of objects in the brain does not depend on view angle. So at this point the question is: how do we model these hierarchical relationships inside of a neural network? The answer comes from computer graphics. In 3D graphics, relationships between 3D objects can be represented by a so-called pose, which is in essence translation plus rotation. Hinton argues that in order to correctly do classification and object recognition, it is important to preserve hierarchical pose relationships between object parts. This is the key intuition that will allow you to understand why capsule theory is so important. It incorporates relative relationships between objects and it is represented numerically as a 4D pose matrix. When these relationships are built into internal representation of data, it becomes very easy for a model to understand that the thing that it sees is just another view of something that it has seen before. Consider the image below. You can easily recognize that this is the Statue of Liberty, even though all the images show it from different angles. This is because internal representation of the Statue of Liberty in your brain does not depend on the view angle. You have probably never seen these exact pictures of it, but you still immediately knew what it was. For a CNN, this task is really hard because it does not have this built-in understanding of 3D space, but for a CapsNet it is much easier because these relationships are explicitly modeled. The paper that uses this approach was able to cut error rate by 45% as compared to the previous state of the art, which is a huge improvement. Another benefit of the capsule approach is that it is capable of learning to achieve state-of-the art performance by only using a fraction of the data that a CNN would use (Hinton mentions this in his famous talk about what is wrongs with CNNs). In this sense, the capsule theory is much closer to what the human brain does in practice. In order to learn to tell digits apart, the human brain needs to see only a couple of dozens of examples, hundreds at most. 
CNNs, on the other hand, need tens of thousands of examples to achieve very good performance, which seems like a brute force approach that is clearly inferior to what we do with our brains. The idea is really simple, there is no way no one has come up with it before! And the truth is, Hinton has been thinking about this for decades. The reason why there were no publications is simply because there was no technical way to make it work before. One of the reasons is that computers were just not powerful enough in the pre-GPU-based era before around 2012. Another reason is that there was no algorithm that allowed to implement and successfully learn a capsule network (in the same fashion the idea of artificial neurons was around since 1940-s, but it was not until mid 1980-s when backpropagation algorithm showed up and allowed to successfully train deep networks). In the same fashion, the idea of capsules itself is not that new and Hinton has mentioned it before, but there was no algorithm up until now to make it work. This algorithm is called “dynamic routing between capsules”. This algorithm allows capsules to communicate with each other and create representations similar to scene graphs in computer graphics. Capsules introduce a new building block that can be used in deep learning to better model hierarchical relationships inside of internal knowledge representation of a neural network. Intuition behind them is very simple and elegant. Hinton and his team proposed a way to train such a network made up of capsules and successfully trained it on a simple data set, achieving state-of-the-art performance. This is very encouraging. Nonetheless, there are challenges. Current implementations are much slower than other modern deep learning models. Time will show if capsule networks can be trained quickly and efficiently. In addition, we need to see if they work well on more difficult data sets and in different domains. In any case, the capsule network is a very interesting and already working model which will definitely get more developed over time and contribute to further expansion of deep learning application domain. This concludes part one of the series on capsule networks. In the Part II, more technical part, I will walk you through the CapsNet’s internal workings step by step. You can follow me on Twitter. Let’s also connect on LinkedIn. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning The AI revolution is here! Navigate the ever changing industry with our thoughtfully written articles whether your a researcher, engineer, or entrepreneur
Slav Ivanov
3.9K
17
https://blog.slavv.com/the-1700-great-deep-learning-box-assembly-setup-and-benchmarks-148c5ebe6415?source=tag_archive---------8----------------
The $1700 great Deep Learning box: Assembly, setup and benchmarks
Updated April 2018: Uses CUDA 9, cuDNN 7 and Tensorflow 1.5. After years of using a thin client in the form of increasingly thinner MacBooks, I had gotten used to it. So when I got into Deep Learning (DL), I went straight for the brand new at the time Amazon P2 cloud servers. No upfront cost, the ability to train many models simultaneously and the general coolness of having a machine learning model out there slowly teaching itself. However, as time passed, the AWS bills steadily grew larger, even as I switched to 10x cheaper Spot instances. Also, I didn’t find myself training more than one model at a time. Instead, I’d go to lunch/workout/etc. while the model was training, and come back later with a clear head to check on it. But eventually the model complexity grew and took longer to train. I’d often forget what I did differently on the model that had just completed its 2-day training. Nudged by the great experiences of the other folks on the Fast.AI Forum, I decided to settle down and to get a dedicated DL box at home. The most important reason was saving time while prototyping models — if they trained faster, the feedback time would be shorter. Thus it would be easier for my brain to connect the dots between the assumptions I had for the model and its results. Then I wanted to save money — I was using Amazon Web Services (AWS), which offered P2 instances with Nvidia K80 GPUs. Lately, the AWS bills were around $60–70/month with a tendency to get larger. Also, it is expensive to store large datasets, like ImageNet. And lastly, I haven’t had a desktop for over 10 years and wanted to see what has changed in the meantime (spoiler alert: mostly nothing). What follows are my choices, inner monologue, and gotchas: from choosing the components to benchmarking. A sensible budget for me would be about 2 years worth of my current compute spending. At $70/month for AWS, this put it at around $1700 for the whole thing. You can check out all the components used. The PC Part Picker site is also really helpful in detecting if some of the components don’t play well together. The GPU is the most crucial component in the box. It will train these deep networks fast, shortening the feedback cycle. Disclosure: The following are affiliate links, to help me pay for, well, more GPUs. The choice is between a few of Nvidia’s cards: GTX 1070, GTX 1070 Ti, GTX 1080, GTX 1080 Ti and finally the Titan X. The prices might fluctuate, especially because some GPUs are great for cryptocurrency mining (wink, 1070, wink). On performance side: GTX 1080 Ti and Titan X are similar. Roughly speaking the GTX 1080 is about 25% faster than GTX 1070. And GTX 1080 Ti is about 30% faster than GTX 1080. The new GTX 1070 Ti is very close in performance to GTX 1080. Tim Dettmers has a great article on picking a GPU for Deep Learning, which he regularly updates as new cards come on the market. Here are the things to consider when picking a GPU: Considering all of this, I picked the GTX 1080 Ti, mainly for the training speed boost. I plan to add a second 1080 Ti soonish. Even though the GPU is the MVP in deep learning, the CPU still matters. For example, data preparation is usually done on the CPU. The number of cores and threads per core is important if we want to parallelize all that data prep. To stay on budget, I picked a mid-range CPU, the Intel i5 7500. It’s relatively cheap but good enough to not slow things down. 
Edit: As a few people have pointed out: “probably the biggest gotcha that is unique to DL/multi-GPU is to pay attention to the PCIe lanes supported by the CPU/motherboard” (by Andrej Karpathy). We want to have each GPU have 16 PCIe lanes so it eats data as fast as possible (16 GB/s for PCIe 3.0). This means that for two cards we need 32 PCIe lanes. However, the CPU I have picked has only 16 lanes. So 2 GPUs would run in 2x8 mode (instead of 2x16). This might be a bottleneck, leading to less than ideal utilization of the graphics cards. Thus a CPU with 40 lines is recommended. Edit 2: However, Tim Dettmers points out that having 8 lanes per card should only decrease performance by “0–10%” for two GPUs. So currently, my recommendation is: Go with 16 PCIe lanes per video card unless it gets too expensive for you. Otherwise, 8 lanes should do as well. A good solution with to have for a double GPU machine would be an Intel Xeon processor like the E5–1620 v4 (40 PCIe lanes). Or if you want to splurge go for a higher end processor like the desktop i7–6850K. Memory (RAM) It’s nice to have a lot of memory if we are to be working with rather big datasets. I got 2 sticks of 16 GB, for a total of 32 GB of RAM, and plan to buy another 32 GB later. Following Jeremy Howard’s advice, I got a fast SSD disk to keep my OS and current data on, and then a slow spinning HDD for those huge datasets (like ImageNet).SSD: I remember when I got my first Macbook Air years ago, how blown away was I by the SSD speed. To my delight, a new generation of SSD called NVMe has made its way to market in the meantime. A 480 GB MyDigitalSSD NVMe drive was a great deal. This baby copies files at gigabytes per second. HDD: 2 TB Seagate. While SSDs have been getting fast, HDD have been getting cheap. To somebody who has used Macbooks with 128 GB disk for the last 7 years, having this much space feels almost obscene. The one thing that I kept in mind when picking a motherboard was the ability to support two GTX 1080 Ti, both in the number of PCI Express Lanes (the minimum is 2x8) and the physical size of 2 cards. Also, make sure it’s compatible with the chosen CPU. An Asus TUF Z270 did it for me. MSI — X99A SLI PLUS should work great if you got an Intel Xeon CPU. Rule of thumb: Power supply should provide enough juice for the CPU and the GPUs, plus 100 watts extra. The Intel i5 7500 processor uses 65W, and the GPUs (1080 Ti) need 250W each, so I got a Deepcool 750W Gold PSU (currently unavailable, EVGA 750 GQ is similar). The “Gold” here refers to the power efficiency, i.e how much of the power consumed is wasted as heat. The case should be the same form factor as the motherboard. Also having enough LEDs to embarrass a Burner is a bonus. A friend recommended the Thermaltake N23 case, which I promptly got. No LEDs sadly. Here is how much I spent on all the components (your costs may vary): $700 GTX 1080 Ti + $190 CPU + $230 RAM + $230 SSD + $66 HDD + $130 Motherboard + $75 PSU + $50 Case ============$1671 Total Adding tax and fees, this nicely matches my preset budget of $1700. If you don’t have much experience with hardware and fear you might break something, a professional assembly might be the best option. However, this was a great learning opportunity that I couldn’t pass (even though I’ve had my share of hardware-related horror stories). The first and important step is to read the installation manuals that came with each component. 
Especially important for me, as I’ve done this before once or twice, and I have just the right amount of inexperience to mess things up. This is done before installing the motherboard in the case. Next to the processor there is a lever that needs to be pulled up. The processor is then placed on the base (double-check the orientation). Finally, the lever comes down to fix the CPU in place. . . But I had a quite the difficulty doing this: once the CPU was in position the lever wouldn’t go down. I actually had a more hardware-capable friend of mine video walk me through the process. Turns out the amount of force required to get the lever locked down was more than what I was comfortable with. Next is fixing the fan on top of the CPU: the fan legs must be fully secured to the motherboard. Consider where the fan cable will go before installing. The processor I had came with thermal paste. If yours doesn’t, make sure to put some paste between the CPU and the cooling unit. Also, replace the paste if you take off the fan. I put the Power Supply Unit (PSU) in before the motherboard to get the power cables snugly placed in case back side. . . . . Pretty straight forward — carefully place it and screw it in. A magnetic screwdriver was really helpful. Then connect the power cables and the case buttons and LEDs. . Just slide it in the M2 slot and screw it in. Piece of cake. The memory proved quite hard to install, requiring too much effort to properly lock in. A few times I almost gave up, thinking I must be doing it wrong. Eventually one of the sticks clicked in and the other one promptly followed. At this point, I turned the computer on to make sure it works. To my relief, it started right away! Finally, the GPU slid in effortlessly. 14 pins of power later and it was running. NB: Do not plug your monitor in the external card right away. Most probably it needs drivers to function (see below). Finally, it’s complete! Now that we have the hardware in place, only the soft part remains. Out with the screwdriver, in with the keyboard. Note on dual booting: If you plan to install Windows (because, you know, for benchmarks, totally not for gaming), it would be wise to do Windows first and Linux second. I didn’t and had to reinstall Ubuntu because Windows messed up the boot partition. Livewire has a detailed article on dual boot. Most DL frameworks are designed to work on Linux first, and eventually support other operating systems. So I went for Ubuntu, my default Linux distribution. An old 2GB USB drive was laying around and worked great for the installation. UNetbootin (OSX) or Rufus (Windows) can prepare the Linux thumb drive. The default options worked fine during the Ubuntu install. At the time of writing, Ubuntu 17.04 was just released, so I opted for the previous version (16.04), whose quirks are much better documented online. Ubuntu Server or Desktop: The Server and Desktop editions of Ubuntu are almost identical, with the notable exception of the visual interface (called X) not being installed with Server. I installed the Desktop and disabled autostarting X so that the computer would boot it in terminal mode. If needed, one could launch the visual desktop later by typing startx. Let’s get our install up to date. From Jeremy Howard’s excellent install-gpu script: To deep learn on our machine, we need a stack of technologies to use our GPU: Download CUDA from Nvidia, or just run the code below: Updated to specify version 9 of CUDA. Thanks to @zhanwenchen for the tip. 
If you need to add later versions of CUDA, click here. After CUDA has been installed the following code will add the CUDA installation to the PATH variable: Now we can verify that CUDA has been installed successfully by running This should have installed the display driver as well. For me, nvidia-smi showed ERR as the device name, so I installed the latest Nvidia drivers (as of May 2018) to fix it: Removing CUDA/Nvidia drivers If at any point the drivers or CUDA seem broken (as they did for me — multiple times), it might be better to start over by running: Since version 1.5 Tensorflow supports CuDNN 7, so we install that. To download CuDNN, one needs to register for a (free) developer account. After downloading, install with the following: Anaconda is a great package manager for python. I’ve moved to python 3.6, so will be using the Anaconda 3 version: The popular DL framework by Google. Installation: Validate Tensorfow install: To make sure we have our stack running smoothly, I like to run the tensorflow MNIST example: We should see the loss decreasing during training: Keras is a great high-level neural networks framework, an absolute pleasure to work with. Installation can’t be easier too: PyTorch is a newcomer in the world of DL frameworks, but its API is modeled on the successful Torch, which was written in Lua. PyTorch feels new and exciting, mostly great, although some things are still to be implemented. We install it by running: Jupyter is a web-based IDE for Python, which is ideal for data sciency tasks. It’s installed with Anaconda, so we just configure and test it: Now if we open http://localhost:8888 we should see a Jupyter screen. Run Jupyter on boot Rather than running the notebook every time the computer is restarted, we can set it to autostart on boot. We will use crontab to do this, which we can edit by running crontab -e . Then add the following after the last line in the crontab file: I use my old trusty Macbook Air for development, so I’d like to be able to log into the DL box both from my home network, also when on the run. SSH Key: It’s way more secure to use a SSH key to login instead of a password. Digital Ocean has a great guide on how to setup this. SSH tunnel: If you want to access your jupyter notebook from another computer, the recommended way is to use SSH tunneling (instead of opening the notebook to the world and protecting with a password). Let’s see how we can do this: 2. Then to connect over SSH tunnel, run the following script on the client: To test this, open a browser and try http://localhost:8888 from the remote machine. Your Jupyter notebook should appear. Setup out-of-network access: Finally to access the DL box from the outside world, we need 3 things: Setting up out-of-network access depends on the router/network setup, so I’m not going into details. Now that we have everything running smoothly, let’s put it to the test. We’ll be comparing the newly built box to an AWS P2.xlarge instance, which is what I’ve used so far for DL. The tests are computer vision related, meaning convolutional networks with a fully connected model thrown in. We time training models on: AWS P2 instance GPU (K80), AWS P2 virtual CPU, the GTX 1080 Ti and Intel i5 7500 CPU. Andres Hernandez points out that my comparison does not use Tensorflow that is optimized for these CPUs, which would have helped the them perform better. Check his insightful comment for more details. The “Hello World” of computer vision. The MNIST database consists of 70,000 handwritten digits. 
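Before the benchmarks, here is a small sketch of the kind of sanity check mentioned above to confirm that TensorFlow actually sees the GPU; the exact validation commands from the original post are not reproduced in this copy, so treat this as an illustrative stand-in.

import tensorflow as tf
from tensorflow.python.client import device_lib

# List the devices TensorFlow can see; a '/device:GPU:0' entry should
# appear if the driver, CUDA and cuDNN are installed correctly.
print(device_lib.list_local_devices())

# Run a tiny computation and log where it is placed.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
    print(sess.run(tf.matmul(a, b)))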
We run the Keras example on MNIST which uses Multilayer Perceptron (MLP). The MLP means that we are using only fully connected layers, not convolutions. The model is trained for 20 epochs on this dataset, which achieves over 98% accuracy out of the box. We see that the GTX 1080 Ti is 2.4 times faster than the K80 on AWS P2 in training the model. This is rather surprising as these 2 cards should have about the same performance. I believe this is because of the virtualization or underclocking of the K80 on AWS. The CPUs perform 9 times slower than the GPUs. As we will see later, it’s a really good result for the processors. This is due to the small model which fails to fully utilize the parallel processing power of the GPUs. Interestingly, the desktop Intel i5–7500 achieves 2.3x speedup over the virtual CPU on Amazon. A VGG net will be finetuned for the Kaggle Dogs vs Cats competition. In this competition, we need to tell apart pictures of dogs and cats. Running the model on CPUs for the same number of batches wasn’t feasible. Therefore we finetune for 390 batches (1 epoch) on the GPUs and 10 batches on the CPUs. The code used is on github. The 1080 Ti is 5.5 times faster that the AWS GPU (K80). The difference in the CPUs performance is about the same as the previous experiment (i5 is 2.6x faster). However, it’s absolutely impractical to use CPUs for this task, as the CPUs were taking ~200x more time on this large model that includes 16 convolutional layers and a couple semi-wide (4096) fully connected layers on top. A GAN (Generative adversarial network) is a way to train a model to generate images. GAN achieves this by pitting two networks against each other: A Generator which learns to create better and better images, and a Discriminator that tries to tell which images are real and which are dreamt up by the Generator. The Wasserstein GAN is an improvement over the original GAN. We will use a PyTorch implementation, that is very similar to the one by the WGAN author. The models are trained for 50 steps, and the loss is all over the place which is often the case with GANs. CPUs aren’t considered. The GTX 1080 Ti finishes 5.5x faster than the AWS P2 K80, which is in line with the previous results. The final benchmark is on the original Style Transfer paper (Gatys et al.), implemented on Tensorflow (code available). Style Transfer is a technique that combines the style of one image (a painting for example) and the content of another image. Check out my previous post for more details on how Style Transfer works. The GTX 1080 Ti outperforms the AWS K80 by a factor of 4.3. This time the CPUs are 30-50 times slower than graphics cards. The slowdown is less than on the VGG Finetuning task but more than on the MNIST Perceptron experiment. The model uses mostly the earlier layers of the VGG network, and I suspect this was too shallow to fully utilize the GPUs. The DL box is in the next room and a large model is training on it. Was it a wise investment? Time will tell but it is beautiful to watch the glowing LEDs in the dark and to hear its quiet hum as models are trying to squeeze out that extra accuracy percentage point. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Entrepreneur / Hacker Machine learning, Deep learning and other types of learning.
Stefan Kojouharov
14.2K
7
https://becominghuman.ai/cheat-sheets-for-ai-neural-networks-machine-learning-deep-learning-big-data-678c51b4b463?source=tag_archive---------9----------------
Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Big Data
Over the past few months, I have been collecting AI cheat sheets. From time to time I share them with friends and colleagues and recently I have been getting asked a lot, so I decided to organize and share the entire collection. To make things more interesting and give context, I added descriptions and/or excerpts for each major topic. This is the most complete list and the Big-O is at the very end, enjoy... This machine learning cheat sheet will help you find the right estimator for the job which is the most difficult part. The flowchart will help you check the documentation and rough guide of each estimator that will help you to know more about the problems and how to solve it. Scikit-learn (formerly scikits.learn) is a free software machine learning library for the Python programming language. It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy. In May 2017 Google announced the second-generation of the TPU, as well as the availability of the TPUs in Google Compute Engine.[12] The second-generation TPUs deliver up to 180 teraflops of performance, and when organized into clusters of 64 TPUs provide up to 11.5 petaflops. In 2017, Google’s TensorFlow team decided to support Keras in TensorFlow’s core library. Chollet explained that Keras was conceived to be an interface rather than an end-to-end machine-learning framework. It presents a higher-level, more intuitive set of abstractions that make it easy to configure neural networks regardless of the backend scientific computing library. NumPy targets the CPython reference implementation of Python, which is a non-optimizing bytecode interpreter. Mathematical algorithms written for this version of Python often run much slower than compiled equivalents. NumPy address the slowness problem partly by providing multidimensional arrays and functions and operators that operate efficiently on arrays, requiring rewriting some code, mostly inner loops using NumPy. The name ‘Pandas’ is derived from the term “panel data”, an econometrics term for multidimensional structured data sets. The term “data wrangler” is starting to infiltrate pop culture. In the 2017 movie Kong: Skull Island, one of the characters, played by actor Marc Evan Jackson is introduced as “Steve Woodward, our data wrangler”. SciPy builds on the NumPy array object and is part of the NumPy stack which includes tools like Matplotlib, pandas and SymPy, and an expanding set of scientific computing libraries. This NumPy stack has similar users to other applications such as MATLAB, GNU Octave, and Scilab. The NumPy stack is also sometimes referred to as the SciPy stack.[3] matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+. There is also a procedural “pylab” interface based on a state machine (like OpenGL), designed to closely resemble that of MATLAB, though its use is discouraged.[2] SciPy makes use of matplotlib. pyplot is a matplotlib module which provides a MATLAB-like interface.[6] matplotlib is designed to be as usable as MATLAB, with the ability to use Python, with the advantage that it is free. >>> If you like this list, you can let me know here. 
<<< Stefan is the founder of Chatbot’s Life, a Chatbot media and consulting firm. Chatbot’s Life has grown to over 150k views per month and has become the premium place to learn about Bots & AI online. Chatbot’s Life has also consulted many of the top Bot companies like Swelly, Instavest, OutBrain, NearGroup and a number of Enterprises. Big-O Algorithm Cheat Sheet: http://bigocheatsheet.com/ Bokeh Cheat Sheet: https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Bokeh_Cheat_Sheet.pdf Data Science Cheat Sheet: https://www.datacamp.com/community/tutorials/python-data-science-cheat-sheet-basics Data Wrangling Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf Data Wrangling: https://en.wikipedia.org/wiki/Data_wrangling Ggplot Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf Keras Cheat Sheet: https://www.datacamp.com/community/blog/keras-cheat-sheet#gs.DRKeNMs Keras: https://en.wikipedia.org/wiki/Keras Machine Learning Cheat Sheet: https://ai.icymi.email/new-machinelearning-cheat-sheet-by-emily-barry-abdsc/ Machine Learning Cheat Sheet: https://docs.microsoft.com/en-in/azure/machine-learning/machine-learning-algorithm-cheat-sheet ML Cheat Sheet:: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html Matplotlib Cheat Sheet: https://www.datacamp.com/community/blog/python-matplotlib-cheat-sheet#gs.uEKySpY Matpotlib: https://en.wikipedia.org/wiki/Matplotlib Neural Networks Cheat Sheet: http://www.asimovinstitute.org/neural-network-zoo/ Neural Networks Graph Cheat Sheet: http://www.asimovinstitute.org/blog/ Neural Networks: https://www.quora.com/Where-can-find-a-cheat-sheet-for-neural-network Numpy Cheat Sheet: https://www.datacamp.com/community/blog/python-numpy-cheat-sheet#gs.AK5ZBgE NumPy: https://en.wikipedia.org/wiki/NumPy Pandas Cheat Sheet: https://www.datacamp.com/community/blog/python-pandas-cheat-sheet#gs.oundfxM Pandas: https://en.wikipedia.org/wiki/Pandas_(software) Pandas Cheat Sheet: https://www.datacamp.com/community/blog/pandas-cheat-sheet-python#gs.HPFoRIc Pyspark Cheat Sheet: https://www.datacamp.com/community/blog/pyspark-cheat-sheet-python#gs.L=J1zxQ Scikit Cheat Sheet: https://www.datacamp.com/community/blog/scikit-learn-cheat-sheet Scikit-learn: https://en.wikipedia.org/wiki/Scikit-learn Scikit-learn Cheat Sheet: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html Scipy Cheat Sheet: https://www.datacamp.com/community/blog/python-scipy-cheat-sheet#gs.JDSg3OI SciPy: https://en.wikipedia.org/wiki/SciPy TesorFlow Cheat Sheet: https://www.altoros.com/tensorflow-cheat-sheet.html Tensor Flow: https://en.wikipedia.org/wiki/TensorFlow From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder of Chatbots Life. I help Companies Create Great Chatbots & AI Systems and share my Insights along the way. Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity.
Netflix Technology Blog
99
11
https://medium.com/netflix-techblog/distributed-neural-networks-with-gpus-in-the-aws-cloud-ccf71e82056b?source=tag_archive---------0----------------
Distributed Neural Networks with GPUs in the AWS Cloud
by Alex Chen, Justin Basilico, and Xavier Amatriain As we have described previously on this blog, at Netflix we are constantly innovating by looking for better ways to find the best movies and TV shows for our members. When a new algorithmic technique such as Deep Learning shows promising results in other domains (e.g. Image Recognition, Neuro-imaging, Language Models, and Speech Recognition), it should not come as a surprise that we would try to figure out how to apply such techniques to improve our product. In this post, we will focus on what we have learned while building infrastructure for experimenting with these approaches at Netflix. We hope that this will be useful for others working on similar algorithms, especially if they are also leveraging the Amazon Web Services (AWS) infrastructure. However, we will not detail how we are using variants of Artificial Neural Networks for personalization, since it is an active area of research. Many researchers have pointed out that most of the algorithmic techniques used in the trendy Deep Learning approaches have been known and available for some time. Much of the more recent innovation in this area has been around making these techniques feasible for real-world applications. This involves designing and implementing architectures that can execute these techniques using a reasonable amount of resources in a reasonable amount of time. The first successful instance of large-scale Deep Learning made use of 16000 CPU cores in 1000 machines in order to train an Artificial Neural Network in a matter of days. While that was a remarkable milestone, the required infrastructure, cost, and computation time are still not practical. Andrew Ng and his team addressed this issue in follow up work . Their implementation used GPUs as a powerful yet cheap alternative to large clusters of CPUs. Using this architecture, they were able to train a model 6.5 times larger in a few days using only 3 machines. In another study, Schwenk et al. showed that training these models on GPUs can improve performance dramatically, even when comparing to high-end multicore CPUs. Given our well-known approach and leadership in cloud computing, we sought out to implement a large-scale Neural Network training system that leveraged both the advantages of GPUs and the AWS cloud. We wanted to use a reasonable number of machines to implement a powerful machine learning solution using a Neural Network approach. We also wanted to avoid needing special machines in a dedicated data center and instead leverage the full, on-demand computing power we can obtain from AWS. In architecting our approach for leveraging computing power in the cloud, we sought to strike a balance that would make it fast and easy to train Neural Networks by looking at the entire training process. For computing resources, we have the capacity to use many GPU cores, CPU cores, and AWS instances, which we would like to use efficiently. For an application such as this, we typically need to train not one, but multiple models either from different datasets or configurations (e.g. different international regions). For each configuration we need to perform hyperparameter tuning, where each combination of parameters requires training a separate Neural Network. In our solution, we take the approach of using GPU-based parallelism for training and using distributed computation for handling hyperparameter tuning and different configurations. 
Some of you might be thinking that the scenario described above is not what people think of as a distributed Machine Learning in the traditional sense. For instance, in the work by Ng et al. cited above, they distribute the learning algorithm itself between different machines. While that approach might make sense in some cases, we have found that to be not always the norm, especially when a dataset can be stored on a single instance. To understand why, we first need to explain the different levels at which a model training process can be distributed. In a standard scenario, we will have a particular model with multiple instances. Those instances might correspond to different partitions in your problem space. A typical situation is to have different models trained for different countries or regions since the feature distribution and even the item space might be very different from one region to the other. This represents the first initial level at which we can decide to distribute our learning process. We could have, for example, a separate machine train each of the 41 countries where Netflix operates, since each region can be trained entirely independently. However, as explained above, training a single instance actually implies training and testing several models, each corresponding to a different combinations of hyperparameters. This represents the second level at which the process can be distributed. This level is particularly interesting if there are many parameters to optimize and you have a good strategy to optimize them, like Bayesian optimization with Gaussian Processes. The only communication between runs are hyperparameter settings and test evaluation metrics. Finally, the algorithm training itself can be distributed. While this is also interesting, it comes at a cost. For example, training ANN is a comparatively communication-intensive process. Given that you are likely to have thousands of cores available in a single GPU instance, it is very convenient if you can squeeze the most out of that GPU and avoid getting into costly across-machine communication scenarios. This is because communication within a machine using memory is usually much faster than communication over a network. The following pseudo code below illustrates the three levels at which an algorithm training process like us can be distributed. In this post we will explain how we addressed level 1 and 2 distribution in our use case. Note that one of the reasons we did not need to address level 3 distribution is because our model has millions of parameters (compared to the billions in the original paper by Ng). Before we addressed distribution problem though, we had to make sure the GPU-based parallel training was efficient. We approached this by first getting a proof-of-concept to work on our own development machines and then addressing the issue of how to scale and use the cloud as a second stage. We started by using a Lenovo S20 workstation with a Nvidia Quadro 600 GPU. This GPU has 98 cores and provides a useful baseline for our experiments; especially considering that we planned on using a more powerful machine and GPU in the AWS cloud. Our first attempt to train our Neural Network model took 7 hours. We then ran the same code to train the model in on a EC2’s cg1.4xlarge instance, which has a more powerful Tesla M2050 with 448 cores. However, the training time jumped from 7 to over 20 hours. Profiling showed that most of the time was spent on the function calls to Nvidia Performance Primitive library, e.g. 
nppsMulC_32f_I, nppsExp_32f_I. Calling the npps functions repeatedly took 10x more system time on the cg1 instance than in the Lenovo S20. While we tried to uncover the root cause, we worked our way around the issue by reimplementing the npps functions using the customized cuda kernel, e.g. replace nppsMulC_32f_I function with: Replacing all npps functions in this way for the Neural Network code reduced the total training time on the cg1 instance from over 20 hours to just 47 minutes when training on 4 million samples. Training 1 million samples took 96 seconds of GPU time. Using the same approach on the Lenovo S20 the total training time also reduced from 7 hours to 2 hours. This makes us believe that the implementation of these functions is suboptimal regardless of the card specifics. While we were implementing this “hack”, we also worked with the AWS team to find a principled solution that would not require a kernel patch. In doing so, we found that the performance degradation was related to the NVreg_CheckPCIConfigSpace parameter of the kernel. According to RedHat, setting this parameter to 0 disables very slow accesses to the PCI configuration space. In a virtualized environment such as the AWS cloud, these accesses cause a trap in the hypervisor that results in even slower access. NVreg_CheckPCIConfigSpace is a parameter of kernel module nvidia-current, that can be set using: We tested the effect of changing this parameter using a benchmark that calls MulC repeatedly (128x1000 times). Below are the results (runtime in sec) on our cg1.4xlarge instances: As you can see, disabling accesses to PCI space had a spectacular effect in the original npps functions, decreasing the runtime by 95%. The effect was significant even in our optimized Kernel functions saving almost 25% in runtime. However, it is important to note that even when the PCI access is disabled, our customized functions performed almost 60% better than the default ones. We should also point out that there are other options, which we have not explored so far but could be useful for others. First, we could look at optimizing our code by applying a kernel fusion trick that combines several computation steps into one kernel to reduce the memory access. Finally, we could think about using Theano, the GPU Match compiler in Python, which is supposed to also improve performance in these cases. While our initial work was done using cg1.4xlarge EC2 instances, we were interested in moving to the new EC2 GPU g2.2xlarge instance type, which has a GRID K520 GPU (GK104 chip) with 1536 cores. Currently our application is also bounded by GPU memory bandwidth and the GRID K520‘s memory bandwidth is 198 GB/sec, which is an improvement over the Tesla M2050’s at 148 GB/sec. Of course, using a GPU with faster memory would also help (e.g. TITAN’s memory bandwidth is 288 GB/sec). We repeated the same comparison between the default npps functions and our customized ones (with and without PCI space access) on the g2.2xlarge instances. One initial surprise was that we measured worse performance for npps on the g2 instances than the cg1 when PCI space access was enabled. However, disabling it improved performance between 45% and 65% compared to the cg1 instances. Again, our KernelMulC customized functions are over 70% better, with benchmark times under a second. Thus, switching to G2 with the right configuration allowed us to run our experiments faster, or alternatively larger experiments in the same amount of time. 
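The custom kernel that replaced nppsMulC_32f_I is not reproduced in this copy of the post. Purely as an illustration of the idea, a PyCUDA sketch of an equivalent in-place multiply-by-constant kernel could look like this; the kernel name, block size and grid size are arbitrary choices, not Netflix's actual code.

import numpy as np
import pycuda.autoinit  # initializes a CUDA context
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# A minimal in-place "multiply by constant" kernel, i.e. the operation
# that nppsMulC_32f_I performs.
mod = SourceModule("""
__global__ void mulc_32f_i(float *x, float c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= c;
}
""")
mulc = mod.get_function("mulc_32f_i")

x = np.random.randn(1 << 20).astype(np.float32)
expected = x * 2.0

threads = 256
blocks = (x.size + threads - 1) // threads
mulc(drv.InOut(x), np.float32(2.0), np.int32(x.size),
     block=(threads, 1, 1), grid=(blocks, 1))

assert np.allclose(x, expected)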
Once we had optimized the single-node training and testing operations, we were ready to tackle the issue of hyperparameter optimization. If you are not familiar with this concept, here is a simple explanation: Most machine learning algorithms have parameters to tune, which are called often called hyperparameters to distinguish them from model parameters that are produced as a result of the learning algorithm. For example, in the case of a Neural Network, we can think about optimizing the number of hidden units, the learning rate, or the regularization weight. In order to tune these, you need to train and test several different combinations of hyperparameters and pick the best one for your final model. A naive approach is to simply perform an exhaustive grid search over the different possible combinations of reasonable hyperparameters. However, when faced with a complex model where training each one is time consuming and there are many hyperparameters to tune, it can be prohibitively costly to perform such exhaustive grid searches. Luckily, you can do better than this by thinking of parameter tuning as an optimization problem in itself. One way to do this is to use a Bayesian Optimization approach where an algorithm’s performance with respect to a set of hyperparameters is modeled as a sample from a Gaussian Process. Gaussian Processes are a very effective way to perform regression and while they can have trouble scaling to large problems, they work well when there is a limited amount of data, like what we encounter when performing hyperparameter optimization. We use package spearmint to perform Bayesian Optimization and find the best hyperparameters for the Neural Network training algorithm. We hook up spearmint with our training algorithm by having it choose the set of hyperparameters and then training a Neural Network with those parameters using our GPU-optimized code. This model is then tested and the test metric results used to update the next hyperparameter choices made by spearmint. We’ve squeezed high performance from our GPU but we only have 1–2 GPU cards per machine, so we would like to make use of the distributed computing power of the AWS cloud to perform the hyperparameter tuning for all configurations, such as different models per international region. To do this, we use the distributed task queue Celery to send work to each of the GPUs. Each worker process listens to the task queue and runs the training on one GPU. This allows us, for example, to tune, train, and update several models daily for all international regions. Although the Spearmint + Celery system is working, we are currently evaluating more complete and flexible solutions using HTCondor or StarCluster. HTCondor can be used to manage the workflow of any Directed Acyclic Graph (DAG). It handles input/output file transfer and resource management. In order to use Condor, we need each compute node register into the manager with a given ClassAd (e.g. SLOT1_HAS_GPU=TRUE; STARD_ATTRS=HAS_GPU). Then the user can submit a job with a configuration “Requirements=HAS_GPU” so that the job only runs on AWS instances that have an available GPU. The main advantage of using Condor is that it also manages the distribution of the data needed for the training of the different models. Condor also allows us to run the Spearmint Bayesian optimization on the Manager instead of having to run it on each of the workers. 
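Before turning to the StarCluster alternative below, here is a rough, hypothetical sketch of the Celery side of the Spearmint + Celery setup described earlier: a worker process bound to one GPU picks hyperparameter sets off the queue and returns the test metric. The broker URL, the module name mytrainer and the train_and_evaluate helper are illustrative placeholders, not Netflix's actual code.

from celery import Celery

# One worker process per GPU listens on this queue (broker URL is illustrative).
app = Celery('nn_tuning',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/0')

@app.task
def train_model(region, hyperparams):
    """Train one Neural Network configuration on the local GPU and return
    the test metric that Spearmint uses to pick the next trial."""
    # train_and_evaluate stands in for the GPU-optimized training code.
    from mytrainer import train_and_evaluate
    return train_and_evaluate(region=region,
                              learning_rate=hyperparams['learning_rate'],
                              n_hidden=hyperparams['n_hidden'])

# The driver (wrapped around Spearmint's suggestions) would dispatch work like:
# result = train_model.delay('US', {'learning_rate': 0.01, 'n_hidden': 512})
# print(result.get())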
Another alternative is to use StarCluster , which is an open source cluster computing framework for AWS EC2 developed at MIT. StarCluster runs on the Oracle Grid Engine (formerly Sun Grid Engine) in a fault-tolerant way and is fully supported by Spearmint. Finally, we are also looking into integrating Spearmint with Jobman in order to better manage the hyperparameter search workflow. Figure below illustrates the generalized setup using Spearmint plus Celery, Condor, or StarCluster: Implementing bleeding edge solutions such as using GPUs to train large-scale Neural Networks can be a daunting endeavour. If you need to do it in your own custom infrastructure, the cost and the complexity might be overwhelming. Levering the public AWS cloud can have obvious benefits, provided care is taken in the customization and use of the instance resources. By sharing our experience we hope to make it much easier and straightforward for others to develop similar applications. We are always looking for talented researchers and engineers to join our team. So if you are interested in solving these types of problems, please take a look at some of our open positions on the Netflix jobs page . Originally published at techblog.netflix.com on February 10, 2014. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Learn more about how Netflix designs, builds, and operates our systems and engineering organizations Learn about Netflix’s world class engineering efforts, company culture, product developments and more.
Francesco Gadaleta
3
4
https://hackernoon.com/gradient-descent-vs-coordinate-descent-9b5657f1c59f?source=tag_archive---------1----------------
Gradient descent vs coordinate descent – Hacker Noon
When it comes to function minimization, it's time to open a book on optimization and linear algebra. I am currently working on variable selection and lasso-based solutions in genetics. What lasso does is basically minimize the loss function plus an L1 penalty in order to set some regression coefficients to zero and select only those covariates that are really associated with the response. Phew, the shortest summary of lasso ever! We all know that, provided the function to be minimized is convex, a good direction to follow in order to find a local minimum is the negative gradient of the function. Now, my question is: how good or bad is following the negative gradient compared with a coordinate descent approach that loops across all dimensions and minimizes along each? There is no better way to find out than to try this with real code and start measuring. Hence, I wrote some code that implements both gradient descent and coordinate descent. The comparison might not be completely fair because the learning rate in the gradient descent procedure is fixed at 0.1 (which in some cases might indeed be slower). But even with some tuning (maybe with a line search) or adaptive learning rates, it's quite common to see coordinate descent outperform its brother gradient descent. This occurs much more often when the number of covariates becomes very high, as in many computational biology problems. In the figure below, I plot the analytical solution in red, the gradient descent minimization in blue and the coordinate descent in green, across a number of iterations. A small explanation is probably necessary to read the function that performs coordinate descent. For a more mathematical explanation refer to the original post. Coordinate descent updates each variable in a round-robin fashion. Setting aside the learning rate of the gradient descent procedure (tuning it could indeed speed up convergence), the comparison between the two is fair at least in terms of complexity: a full round-robin sweep of coordinate updates costs roughly the same number of operations as a single gradient descent step. The R code that performs this comparison and generates the plot above is given below. Feel free to download this code (remember to cite me and send me some cookies). Happy descent! Originally published at worldofpiggy.com on May 31, 2014. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Machine learning, math, crypto, blockchain, fitchain.io how hackers start their afternoons.
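The original R listing referenced above did not survive in this copy of the post. As a stand-in, here is a rough Python sketch of the same kind of experiment on a least-squares problem; the problem size, the step-size choice and the variable names are mine, not the author's (in particular, the step size here is derived from the Lipschitz constant so the sketch converges, whereas the post fixes it at 0.1).

import numpy as np

np.random.seed(0)
n, p = 100, 20
A = np.random.randn(n, p)
b = np.random.randn(n)

def loss(x):
    return 0.5 * np.sum((A @ x - b) ** 2)

x_star = np.linalg.lstsq(A, b, rcond=None)[0]  # analytical solution

# Gradient descent with a fixed step size
L = np.linalg.eigvalsh(A.T @ A).max()
step = 1.0 / L
x_gd = np.zeros(p)
gd_loss = []
for _ in range(100):
    x_gd -= step * (A.T @ (A @ x_gd - b))
    gd_loss.append(loss(x_gd))

# Coordinate descent: exact minimization along one coordinate at a time,
# sweeping over the coordinates in a round-robin fashion.
x_cd = np.zeros(p)
r = A @ x_cd - b              # residual kept up to date as coordinates change
col_sq = (A ** 2).sum(axis=0)
cd_loss = []
for _ in range(100):
    for j in range(p):
        delta = -(A[:, j] @ r) / col_sq[j]
        x_cd[j] += delta
        r += delta * A[:, j]
    cd_loss.append(loss(x_cd))

print("analytical loss:        ", loss(x_star))
print("gradient descent loss:  ", gd_loss[-1])
print("coordinate descent loss:", cd_loss[-1])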
Milo Spencer-Harper
2.2K
3
https://medium.com/technology-invention-and-more/how-to-build-a-multi-layered-neural-network-in-python-53ec3d1d326a?source=tag_archive---------0----------------
How to build a multi-layered neural network in Python
In my last blog post, thanks to an excellent blog post by Andrew Trask, I learned how to build a neural network for the first time. It was super simple. 9 lines of Python code modelling the behaviour of a single neuron. But what if we are faced with a more difficult problem? Can you guess what the ‘?’ should be? The trick is to notice that the third column is irrelevant, but the first two columns exhibit the behaviour of a XOR gate. If either the first column or the second column is 1, then the output is 1. However, if both columns are 0 or both columns are 1, then the output is 0. So the correct answer is 0. However, this would be too much for our single neuron to handle. This is considered a “nonlinear pattern” because there is no direct one-to-one relationship between the inputs and the output. Instead, we must create an additional hidden layer, consisting of four neurons (Layer 1). This layer enables the neural network to think about combinations of inputs. You can see from the diagram that the output of Layer 1 feeds into Layer 2. It is now possible for the neural network to discover correlations between the output of Layer 1 and the output in the training set. As the neural network learns, it will amplify those correlations by adjusting the weights in both layers. In fact, image recognition is very similar. There is no direct relationship between pixels and apples. But there is a direct relationship between combinations of pixels and apples. The process of adding more layers to a neural network, so it can think about combinations, is called “deep learning”. Ok, are we ready for the Python code? First I’ll give you the code and then I’ll explain further. Also available here: https://github.com/miloharper/multi-layer-neural-network This code is an adaptation from my previous neural network. So for a more comprehensive explanation, it’s worth looking back at my earlier blog post. What’s different this time, is that there are multiple layers. When the neural network calculates the error in layer 2, it propagates the error backwards to layer 1, adjusting the weights as it goes. This is called “back propagation”. Ok, let’s try running it using the Terminal command: python main.py You should get a result that looks like this: First the neural network assigned herself random weights to her synaptic connections, then she trained herself using the training set. Then she considered a new situation [1, 1, 0] that she hadn’t seen before and predicted 0.0078876. The correct answer is 0. So she was pretty close! You might have noticed that as my neural network has become smarter I’ve inadvertently personified her by using “she” instead of “it”. That’s pretty cool. But the computer is doing lots of matrix multiplication behind the scenes, which is hard to visualise. In my next blog post, I’ll visually represent our neural network with an animated diagram of her neurons and synaptic connections, so we can see her thinking. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Studied Economics at Oxford University. Founder of www.moju.io. Interested in politics and AI. Technology trends and New Invention? Follow this collection to update the latest trend! [UPDATE] As a collection editor, I don’t have any permission to add your articles in the wild. Please submit your article and I will approve. Also, follow this collection, please.
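The full listing lives in the linked GitHub repository; as a compact approximation of what it does, here is a NumPy sketch of a two-layer network with four hidden neurons trained by back propagation on this kind of pattern. The seven training rows below are illustrative examples (the output is the XOR of the first two columns and the third column is irrelevant), not necessarily the exact rows used in the article.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_deriv(s):
    # derivative of the sigmoid, expressed in terms of its output
    return s * (1 - s)

# Illustrative training set: output = XOR of the first two columns
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [0, 1, 0],
              [1, 0, 0],
              [1, 1, 1],
              [0, 0, 0]])
y = np.array([[0, 1, 1, 1, 1, 0, 0]]).T

np.random.seed(1)
w1 = 2 * np.random.random((3, 4)) - 1   # layer 1: 4 hidden neurons
w2 = 2 * np.random.random((4, 1)) - 1   # layer 2: 1 output neuron

for _ in range(60000):
    l1 = sigmoid(X @ w1)
    l2 = sigmoid(l1 @ w2)

    l2_delta = (y - l2) * sigmoid_deriv(l2)           # error at the output layer
    l1_delta = (l2_delta @ w2.T) * sigmoid_deriv(l1)  # error propagated back to layer 1

    w2 += l1.T @ l2_delta
    w1 += X.T @ l1_delta

# New situation [1, 1, 0]: the prediction should be close to 0
print(sigmoid(sigmoid(np.array([1, 1, 0]) @ w1) @ w2))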
Jim Fleming
294
3
https://medium.com/jim-fleming/loading-a-tensorflow-graph-with-the-c-api-4caaff88463f?source=tag_archive---------1----------------
Loading a TensorFlow graph with the C++ API – Jim Fleming – Medium
Check out the related post: Loading TensorFlow graphs from Node.js (using the C API). The current documentation around loading a graph with C++ is pretty sparse so I spent some time setting up a barebones example. In the TensorFlow repo there are more involved examples, such as building a graph in C++. However, the C++ API for constructing graphs is not as complete as the Python API. Many features (including automatic gradient computation) are not available from C++ yet. Another example in the repo demonstrates defining your own operations, but most users will never need this. I imagine the most common use case for the C++ API is loading pre-trained graphs to run standalone or embedded in other applications. Be aware, there are some caveats to this approach that I’ll cover at the end. Let’s start by creating a minimal TensorFlow graph and writing it out as a protobuf file. Make sure to assign names to your inputs and operations so they’re easier to reference when we execute the graph later. The nodes do have default names, but they aren’t very useful: Variable_1 or Mul_3. Here’s an example created with Jupyter: Let’s create a new folder like tensorflow/tensorflow/<my project name> for your binary or library to live in. I’m going to call the project loader since it will be loading a graph. Inside this project folder we’ll create a new file called <my project name>.cc (e.g. loader.cc). If you’re curious, the .cc extension is essentially the same as .cpp but is preferred by Google’s code guidelines. Inside loader.cc we’re going to do a few things: Now we create a BUILD file for our project. This tells Bazel what to compile. Inside we want to define a cc_binary for our program. You can also use the linkshared option on the binary to produce a shared library, or the cc_library rule if you’re going to link it using Bazel. Here’s the final directory structure: You could also call bazel run :loader to run the executable directly, however the working directory for bazel run is buried in a temporary folder and ReadBinaryProto looks in the current working directory for relative paths. And that should be all we need to do to compile and run C++ code for TensorFlow. The last thing to cover are the caveats I mentioned: Hopefully someone can shed some light on these last points so we can begin to embed TensorFlow graphs in applications. If you are that person, message me on Twitter or email. If you’d like help deploying TensorFlow in production, I do consulting.
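The “example created with Jupyter” mentioned above is not reproduced here, so the following is a hedged Python sketch of what exporting a small, explicitly named graph might look like. The node names a, b and c and the models/graph.pb path are placeholders of mine, and the exact function names shift between TensorFlow releases (the post predates TensorFlow 1.0, which used tf.mul and tf.initialize_all_variables).

import tensorflow as tf

# Name every node explicitly so the C++ side can look it up by string.
a = tf.Variable(5.0, name='a')
b = tf.Variable(6.0, name='b')
c = tf.multiply(a, b, name='c')

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    print(sess.run(c))  # 30.0
    # Serialize the GraphDef as a binary protobuf for ReadBinaryProto to load.
    # Note: this stores the graph structure only, not trained variable values.
    tf.train.write_graph(sess.graph_def, 'models/', 'graph.pb', as_text=False)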
Milo Spencer-Harper
1.8K
4
https://medium.com/deep-learning-101/how-to-generate-a-video-of-a-neural-network-learning-in-python-62f5c520e85c?source=tag_archive---------2----------------
Video of a neural network learning – Deep Learning 101 – Medium
As part of my quest to learn about AI, I generated a video of a neural network learning. Many of the examples on the Internet use matrices (grids of numbers) to represent a neural network. This method is favoured, because it is: However, it’s difficult to understand what is happening. From a learning perspective, being able to visually see a neural network is hugely beneficial. The video you are about to see, shows a neural network trying to solve this pattern. Can you work it out? It’s the same problem I posed in my previous blog post. The trick is to notice that the third column is irrelevant, but the first two columns exhibit the behaviour of a XOR gate. If either the first column or the second column is 1, then the output is 1. However, if both columns are 0 or both columns are 1, then the output is 0. So the correct answer is 0. Our neural network will cycle through these 7 examples, 60,000 times. To speed up the video, I will only show you 13 of these cycles, pausing for a second on each frame. Why the number 13? It ensures the video lasts exactly as long as the music. Each time she considers an example in the training set, you will see her think (you will see her neurons and her synaptic connections glow). She will then calculate the error (the difference between the output and the desired output). She will then propagate this error backwards, adjusting her synaptic connections. Green synaptic connections represent positive weights (a signal flowing through this synapse will excite the next neuron to fire). Red synaptic connections represent negative weights (a signal flowing through this synapse will inhibit the next neuron from firing). Thicker synapses represent stronger connections (larger weights). In the beginning, her synaptic weights are randomly assigned. Notice how some synapses are green (positive) and others are red (negative). If these synapses turn out to be beneficial in calculating the right answer, she will strengthen them over time. However, if they are unhelpful, these synapses will wither. It’s even possible for a synapse which was originally positive to become negative, and vice versa. An example of this, is the first synapse into the output neuron — early on in the video it turns from red to green. In the beginning her brain looks like this: Did you notice that all her neurons are dark? This is because she isn’t currently thinking about anything. The numbers to the right of each neuron, represent the level of neural activity and vary between 0 and 1. Ok. Now she is going to think about the pattern we saw earlier. Watch the video carefully to see her synapses grow thicker as she learns. Did you notice how I slowed the video down at the beginning, by skipping only a small number of cycles? When I first shot the video, I didn’t do this. However, I realised that learning is subject to the ‘Law of diminishing returns’. The neural network changes more rapidly during the initial stage of training, which is why I slowed this bit down. Now that she has learned about the pattern using the 7 examples in the training set, let’s examine her brain again. Do you see how she has strengthened some of her synapses, at the expense of others? For instance, do you remember how the third column in the training set is irrelevant in determining the answer? You can see she has discovered this, because the synapses coming out of her third input neuron have almost withered away, relative to the others. Let’s give her a new situation [1, 1, 0] to think about. 
You can see her neural pathways light up. She has estimated 0.01. The correct answer is 0. So she was very close! Pretty cool. Traditional computer programs can’t learn. But neural networks can learn and adapt to new situations. Just like the human mind! How did I do it? I used the Python library matplotlib, which provides methods for drawing and animation. I created the glow effects using alpha transparency. You can view my full source code here: Thanks for reading! If you enjoyed reading this article, please click the heart icon to ‘Recommend’.
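The full source is linked above rather than inlined, so here is a small, hypothetical matplotlib helper in the same spirit: neurons drawn as circles whose alpha transparency (the glow) tracks activation, and synapses drawn green or red with line width proportional to the weight. The layout, names and numbers are illustrative and are not the author’s code.

import numpy as np
import matplotlib.pyplot as plt

def draw_frame(weights, activations, layer1, layer2, filename):
    """Render one frame of the 'thinking' animation.

    weights     -- array of shape (len(layer1), len(layer2))
    activations -- one activation in [0, 1] per neuron, layer1 first
    layer1/2    -- lists of (x, y) positions for the neurons
    """
    w = np.asarray(weights)
    fig, ax = plt.subplots(figsize=(6, 6))
    ax.set_facecolor('black')
    ax.axis('off')
    for i, (x1, y1) in enumerate(layer1):
        for j, (x2, y2) in enumerate(layer2):
            colour = 'green' if w[i, j] > 0 else 'red'          # sign of the weight
            ax.plot([x1, x2], [y1, y2], color=colour,
                    linewidth=1 + 3 * abs(w[i, j]), alpha=0.7)  # thicker = stronger
    for (x, y), a in zip(list(layer1) + list(layer2), activations):
        # Alpha transparency creates the glow: brighter means more active.
        ax.scatter([x], [y], s=600, color='yellow', alpha=0.2 + 0.8 * a, zorder=3)
    fig.savefig(filename, facecolor='black')
    plt.close(fig)

# Example: 3 input neurons feeding 1 output neuron, thinking about [1, 1, 0].
inputs, outputs = [(0, 2), (0, 1), (0, 0)], [(1, 1)]
draw_frame(np.array([[0.9], [-1.2], [0.05]]), [1, 1, 0, 0.01],
           inputs, outputs, 'frame_000.png')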
Christian Hernandez
364
7
https://medium.com/crossing-the-pond/into-the-age-of-context-f0aed15171d7?source=tag_archive---------3----------------
Into the Age of Context – Crossing the Pond – Medium
I spent most of my early career proclaiming that “This!” was the “year of mobile”. The year of mobile was actually 2007 when the iPhone launched and accelerated a revolution around mobile computing. As The Economist recently put it “Just eight years later Apple’s iPhone exemplifies the early 21st century’s defining technology.” It’s not a question of whether Smartphones have become our primary computing interaction device, it’s a question of by how much relative to other interaction mediums. So let’s agree that we are currently living in the Era of Mobile. Looking forward to the next 5 year though, I personally believe we will move from the Era of Mobile to the Age of Context. (credit to Robert Scoble and Shel Israel for their book with that same term). Let me first define what I mean by Age of Context. In the Age of Context personal data (ex: calendar and email, location and time) is integrated with publicly available data (ex: traffic data, pollution level) and app-level data (ex: Uber surge pricing, number of steps tracked by my FitBit) to intelligently drive me towards an action (ex: getting me to walk to my next meeting instead of ordering a car). It is an age in which we, and the devices and sensors around us, generate massive reams of data and in which self-teaching algorithms drill into that data to derive insight and recommend or auto-generate an action. It is an era in which our biological computational capacity and actions, are enhanced (and improved) by digital services. The Age of Context is being brought about by a number of technology trends which have been accelerating in a parallel and are now coming together. The first, and most obvious trend, is the proliferation of supercomputers in our pockets. Industry analysts forecast 1.87 billion phones will be shipped by 2018. These devices carry not only a growing amount of processing power, but also the ecosystem of applications and services which integrate with sensors and functionality on the device to allow us to, literally, remote control our life. In the evolution from the current Era of Mobile to the future Age of Context, the supercomputers in our pocket evolve from information delivery and application interaction layers, to notification context-aware action drivers. Smartphones will soon be complemented by wearable computing devices (be that the Apple Watch or a future evolution of Google Glass). These new form factors are ideally suited for an Era in which data needs to be compiled into succinct notifications and action enablers. In the last 10 years, the “web” has evolved into a social web on top of which identities and deep insight into each of us powers services and experiences. It allows Goodreads to associate books with my identity, Vivino to determine that I like earthy red wines, Unilever to best target me for an ad on Facebook and Netflix to mine my data to then commission a show it knows I will like. This identity layer is now being overlayed with a financial layer in which, associated with my digital identity, I also have a secure digital payment mechanism. This transactional financial layer will begin to enable seamless transactions. 
In the Age of Context, the Starbuck app will know that I usually emerge from the tube at 9:10am and walk to their local store to order a “tall Americano, extra shot.” At 9:11, as I reach street level my phone, or watch, or wearable computing device will know where I am (close to Starbucks and to the office), know my routine, have my payment information stored and simply generate an action-driver that says “Tall Americano, extra shot. Order?” A few minutes later I can pick up my coffee, which has already been paid for. These services are already possible today. A parallel and accelerating trend which will power the Age of Context is the proliferation of intelligent and connected sensors around us. Call that Internet of Things or call it simply a democratization and consumerization of devices that capture data (for now) and act on data (eventually). While the end number varies, industry analysts all believe the number of connected devices starts to get very big very fast. Gartner predicts that by 2020 there will be 25 billion connected devices with the vast majority of those being consumer-centric. Today my Jawbone is a fairly basic data collection device. It knows that I walked 8,000 steps and slept too little, but it doesn’t drive me to action other than providing me with a visualization of the data. In the Age of Context this will change, as larger and larger data sets of sensor data, combined with other data combined with intelligent analytics allows data to become actionable. In the future my Jawbone won’t simply count my steps, it will also be able to integrate with other data sets to generate personal health insights. It will have tracked over time that my blood pressure rises every morning at 9:20 after I have consumed the third coffee of the day. Comparing my blood rate to thousands of others of my age range and demographic background it will know that the levels are unhealthy and it will help me take a conscious decision not to consume that extra coffee through a notification. Data will derive insight and that insight will, hopefully, drive action. One could argue that the parallel trends of mobile, sensors and the social web are already mainstream. What then is bringing them together to weave the Age of Context? The glue is data. The massive amounts of data the growing number of internet users and connected devices generate each day. More critically, the cost of storing this data has dropped to nearly zero. Deloitte estimated that in 1992 the cost of storing a Gigabyte of data was $569 and that by 2012 the cost had dropped to $0.03. But data by itself is just bits and bytes. The second key trend that is weaving the Age of Context is the breakthroughs in algorithms and models to analyze this data in close-to-real-time. For the Age of Context to come about, systems must know how to query and how to act on all the possible contextual data points to drive the simplified actions outlined in the examples above. The advances (and investment) into machine learning and AI are the final piece of the puzzle needed to turn data from information to action. The most visible example of the Age of Context today is Google Now. Google has a lot of information about me: it knows what “work” is as I spend most of the time there between 9am and 7pm, it knows what “home” is as I spend most of the evenings there. Since I use Google Apps it knows what my first meeting is. Since I search for Duke Basketball on a regular basis it knows I care about the scores. 
Since I usually take the tube and Google has access to the London TfL data, it knows that I will be late to my next meeting. But even though Google Now recently opened up its API to third party developers, it is still fairly Google-biased and Google-optimized. For the Age of Context to thrive the platforms that power it must be interlinked across data and applications. Whether this age comes about through intelligent agents (like Siri or Viv or the character from Her) or a “meta-app” layer sitting across vertical apps and services is still unclear. The missing piece for much of this to come about is a common meta-language for vertical and punctual apps to share data and actions. This common language will likely be an evolution of the various deep-linking standards being developed. Facebook has a flavour, Android has a flavour, and a myriad of startups have flavours. An emerging standard will not only enable the Age of Context but also probably crown the champion of this new era as the standard will also own the interactions, the interlinkages and the paths to monetization across devices and experiences. The trends above are all happening around us, the standards and algorithms are all being built by brilliant minds across the world, the interface layers and devices are already with us. The Age of Context is being created at an accelerating pace and I can’t wait to see what gets built and how our day to day lives are enhanced by this new era. Thanks to John Henderson for his feedback and thoughts on this post. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Co-Founder and Managing Partner @whitestarvc Former product and mobile guy at smallish companies that became big. Salvadoran-born Londoner. #YGL of the @wef Stories from the White Star Capital team and our portfolio companies on entrepreneurship and scaling globally
Venture Scanner
207
5
https://medium.com/@VentureScanner/the-state-of-artificial-intelligence-in-six-visuals-8bc6e9bf8f32?source=tag_archive---------4----------------
The State of Artificial Intelligence in Six Visuals
We cover many emerging markets in the startup ecosystem. Previously, we published posts that summarized Financial Technology, Internet of Things, Bitcoin, and MarTech in six visuals. This week, we do the same with Artificial Intelligence (AI). At this time, we are tracking 855 AI companies across 13 categories, with a combined funding amount of $8.75billion. To see all of our AI related posts, check out our blog! The six Artificial Intelligence visuals below help make sense of this dynamic market: Deep Learning/Machine Learning Applications: Machine learning is the technology of computer algorithms that operate based on its learnings from existing data. Deep learning is a subset of machine learning that focuses on deeply layered neural networks. The following companies utilize deep learning/machine learning technology in a specific way or use-case in their products. Computer Vision/Image Recognition: Computer vision is the method of processing and analyzing images to understand and produce information from them. Image recognition is the process of scanning images to identify objects and faces. The following companies either build computer vision/image recognition technology or utilize it as the core offering in their products. Deep Learning/Machine Learning (General): Machine learning is the technology of computer algorithms that operate based on its learning from existing data. Deep learning is a subset of machine learning that focuses on deeply layered neural networks. The following companies either build deep learning/machine learning technology or utilize it as the core offering of their products. Natural Language Processing: Natural language processing is the method through which computers process human language input and convert into understandable representations to derive meaning from them. The following companies either build natural language processing technology or utilize it as the core offering in their products (excluding all speech recognition companies). Smart Robots: Smart robot companies build robots that can learn from their experience and act and react autonomously based on the conditions of their environment. Virtual Personal Assistants: Virtual personal assistants are software agents that use artificial intelligence to perform tasks and services for an individual, such as customer service, etc. Natural Language Processing (Speech Recognition): Speech recognition is a subset of natural language processing that focuses on processing a sound clip of human speech and deriving meaning from it. Computer Vision/Image Recognition: Computer vision is the method of processing and analyzing images to understand and produce information from them. Image recognition is the process of scanning images to identify objects and faces. The following companies utilize computer vision/image recognition technology in a specific way or use-case in their products. Recommendation Engines and Collaborative Filtering: Recommendation engines are systems that predict the preferences and interests of users for certain items (movies, restaurants) and deliver personalized recommendations to them. Collaborative filtering is a method of predicting a user’s preferences and interests by collecting the preference information from many other similar users. Gesture Control: Gesture control is the process through which humans interact and communicate with computers with their gestures, which are recognized and interpreted by the computers. 
Video Automatic Content Recognition: Video automatic content recognition is the process through which the computer compares a sampling of video content with a source content file to identify what the content is through its unique characteristics. Context Aware Computing: Context aware computing is the process through which computers become aware of their environment and their context of use, such as location, orientation and lighting, and adapt their behavior accordingly. Speech to Speech Translation: Speech to speech translation is the process through which human speech in one language is processed by the computer and translated into another language instantly. The bar graph above summarizes the number of companies in each Artificial Intelligence category to show which are dominating the current market. Currently, the “Deep Learning/Machine Learning Applications” category is leading the way with a total of 200 companies, followed by “Natural Language Processing (Speech Recognition)” with 130 companies. The bar graph above summarizes the average company funding per Artificial Intelligence category. Again, the “Deep Learning/Machine Learning Applications” category leads the way with an average of $13.8M per funded company. The graph above compares total venture funding in Artificial Intelligence to the number of companies in each category. “Deep Learning/Machine Learning Applications” seems to be the category with the most traction. The following infographic is an updated heat map indicating where Artificial Intelligence startups exist across 62 countries. Currently, the United States is leading the way with 415 companies. The United Kingdom is second with 67 companies, followed by Canada with 29. The bar graph above summarizes Artificial Intelligence by the median age of each category. The “Speech Recognition” and “Video Content Recognition” categories have the highest median age at 8 years, followed by “Computer Vision (General)” at 6.5 years. As Artificial Intelligence continues to develop, so too will its moving parts. We hope this post provides some big picture clarity on this booming industry. Venture Scanner enables corporations to research, identify, and connect with the most innovative technologies and companies. We do this through a unique combination of our data, technology, and expert analysts. If you have any questions, reach out to info@venturescanner.com.
Illia Polosukhin
108
3
https://medium.com/@ilblackdragon/tensorflow-tutorial-part-2-9ffe47049c92?source=tag_archive---------5----------------
Tensorflow Tutorial — Part 2 – Illia Polosukhin – Medium
In the previous Part 1 of this tutorial, I introduced a bit of TensorFlow and Scikit Flow and showed how to build a simple logistic regression model on the Titanic dataset. In this part let’s go deeper and try multi-layer fully connected neural networks, write a custom model to plug into Scikit Flow, and top it off by trying out convolutional networks. Of course, there is not much point in yet another linear/logistic regression framework. An idea behind TensorFlow (and many other deep learning frameworks) is to be able to connect differentiable parts of the model together and optimize them given the same cost (or loss) function. Scikit Flow already implements a convenient wrapper around the TensorFlow API for creating many layers of fully connected units, so it’s simple to start with a deep model by just swapping the classifier in our previous model to TensorFlowDNNClassifier and specifying hidden units per layer: This will create 3 layers of fully connected units with 10, 20 and 10 hidden units respectively, with default rectified linear unit activations. We will be able to customize this setup in the next part. I didn’t play much with hyperparameters, but the DNN model actually yielded worse accuracy than the logistic regression. We can explore whether this is due to overfitting or under-fitting in a separate post. For the sake of this example, though, I want to show how to switch to a custom model where you can have more control. This model is very similar to the previous one, but we changed the activation function from a rectified linear unit to a hyperbolic tangent (rectified linear unit and hyperbolic tangent are the most popular activation functions for neural networks). As you can see, creating a custom model is as easy as writing a function that takes X and y inputs (which are Tensors) and returns two tensors: predictions and loss. This is where you can start learning TensorFlow APIs to create parts of the sub-graph. What kind of TensorFlow tutorial would this be without an example of digit recognition? :) This is just an example of how you can try different types of datasets and models, not limiting yourself to only floating-point features. Here, we take the digits dataset and write a custom model: We’ve created a conv_model function that, given tensors X and y, runs a 2D convolutional layer with the simplest max pooling — just the maximum. The result is passed as features to skflow.models.logistic_regression, which handles classification to the required number of classes by attaching a softmax over the classes and computing the cross-entropy loss. It’s easy now to modify this code to add as many layers as you want (some state-of-the-art image recognition models are a hundred-plus layers of convolutions, max pooling, dropout, etc.). Part 3 expands the model for the Titanic dataset by handling categorical variables. PS. Thanks to Vlad Frolov for helping with missing articles and pointing out mistakes in the draft :)
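As a rough sketch of the two approaches described above (swapping the classifier for TensorFlowDNNClassifier, then writing a custom model function with tanh activations), here is how it might look with the skflow API names of that era. skflow was later folded into tf.contrib.learn, so the exact signatures may have changed; the random arrays are placeholders standing in for the Titanic features.

import numpy as np
import tensorflow as tf
import skflow
from sklearn import metrics

# Placeholder data standing in for the Titanic features and labels.
X_train = np.random.rand(100, 5).astype(np.float32)
y_train = np.random.randint(0, 2, 100)

# 1) Three fully connected layers with 10, 20 and 10 hidden units (default ReLU).
classifier = skflow.TensorFlowDNNClassifier(hidden_units=[10, 20, 10], n_classes=2)
classifier.fit(X_train, y_train)
print(metrics.accuracy_score(y_train, classifier.predict(X_train)))

# 2) A custom model: the same layers, but with tanh activations.
def dnn_tanh(X, y):
    layers = skflow.ops.dnn(X, [10, 20, 10], tf.tanh)
    return skflow.models.logistic_regression(layers, y)  # (predictions, loss)

custom = skflow.TensorFlowEstimator(model_fn=dnn_tanh, n_classes=2)
custom.fit(X_train, y_train)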
Derrick Harris
124
9
https://medium.com/s-c-a-l-e/how-baidu-mastered-mandarin-with-deep-learning-and-lots-of-data-1d94032564a5?source=tag_archive---------6----------------
Baidu explains how it’s mastering Mandarin with deep learning
On Aug. 8 at the International Neural Network Society conference on big data in San Francisco, Baidu senior research engineer Awni Hannun presented on a new model that the Chinese search giant has developed for handling voice queries in Mandarin. The model, which is accurate 94 percent of the time in tests, is based on a powerful deep learning system called Deep Speech that Baidu first unveiled in December 2014. In this lightly edited interview, Hannun explains why his new research is important, why Mandarin is such a tough language to learn and where we can expect to see future advances in deep learning methods. SCALE: How accurate is Deep Speech at translating Mandarin? AWNI HANNUN: It has a 6 percent character error rate, which essentially means that it gets wrong 6 out of 100 characters. To put that in context, this is in my opinion — and to the best of our lab’s knowledge — the best system at transcribing Mandarin voice queries in the world. In fact, we ran an experiment where we had a few people at the lab who speak Chinese transcribe some of the examples that we were testing the system on. It turned out that our system was better at transcribing examples than they were — if we restricted it to transcribing without the help of the internet and such things. What is it about Mandarin that makes it such a challenge compared with other languages? There are a couple of differences with Mandarin that made us think it would be very difficult to have our English speech system work well with it. One is that it’s a tonal language, so when you say a word in a different pitch, it changes the meaning of the word, which is definitely not the case in English. In traditional speech recognition, it’s actually a desirable property that there is some pitch invariance, which essentially means that it tries to ignore pitch when it does the transcription. So you have to change a bunch of things to get a system to work with Mandarin, or any Chinese for that matter. However, for us, it was not the case that we had to change a whole bunch of things, because our pipeline is much simpler than the traditional speech pipeline. We don’t do a whole lot of pre-processing on the audio in order to make it pitch-invariant, but rather just let the model learn what’s relevant from the data to most effectively transcribe it properly. It was actually able to do that fine in Mandarin without having to change the input. The other thing that is very different about Chinese — Mandarin, in this case — is the character set. The English alphabet is 26 letters, whereas in Chinese it’s something like 80,000 different characters. Our system directly outputs a character at a time as it’s building its transcription, so we speculated it would be very challenging to have to do that on 80,000 characters at each step versus 26. That’s a challenge we were able to overcome just by using characters that people commonly say, which is a smaller subset. Baidu has been handling a fairly high volume of voice searches for a while now. How is the Deep Speech system better than the previous system for handling queries in Mandarin? Baidu has a very active system for voice search in Mandarin, and it works pretty well. I think in terms of total query activity, it’s still a relatively small percentage. We want to make that share larger, or at least enable people to use it more by making the accuracy of the system better. 
Can you describe the difference between a search-based system like Deep Speech and something like Microsoft’s Skype Translate, which is also based on deep learning? Typically, the way it’s done is there are three modules in the pipeline. The first is a speech-transcription module, the second is the machine-translation module and the third would be the speech-synthesis module. What we’re talking about, specifically, is just the speech-transcription module, and I’m sure Microsoft has one as part of Skype Translate. Our system is different than that system in that it’s more what we call end-to-end. Rather than having a lot of human-engineered components that have been developed over decades of speech research — by looking at the system and saying what what features are important or which phonemes the model should predict — we just have some input data, which is an audio .WAV file on which we do very little pre-processing. And then we have a big, deep neural network that outputs directly to characters. We give it enough data that it’s able to learn what’s relevant from the input to correctly transcribe the output, with as little human intervention as possible. One thing that’s pleasantly surprising to us is that we had to do very little changing to it — other than scaling it and giving it the right data — to make this system we showed in December that worked really well on English work remarkably well in Chinese, as well. What’s the usual timeline to get this type of system from R&D into production? It’s not an easy process, but I think it’s easier than the process of getting a model to be very accurate — in the sense that it’s more of an engineering problem than a research problem. We’re actively working on that now, and I’m hopeful our research system will be in production in the near term. Baidu has plans — and products — in other areas, including wearables and other embedded forms of speech recognition. Does the work you’re doing on search relate to these other initiatives? We want to build a speech system that can be used as the interface to any smart device, not just voice search. It turns out that voice search is a very important part of Baidu’s ecosystem, so that’s one place we can have a lot of impact right now. Is the pace of progress and significant advances in deep learning as fast it seems? I think right now, it does feel like the pace is increasing because people are recognizing that if you take tasks where you have some input and are trying to produce some output, you can apply deep learning to that task. If it was some old machine learning task such as machine translation or speech recognition, which has been heavily engineered for the past several decades, you can make significant advances if you try to simplify that pipeline with deep learning and increase the amount of data. We’re just on the crest of that. In particular, processing sequential data with deep learning is something that we’re just figuring out how to do really well. We’ve come up with models that seem to work well, and we’re at the point where we’re going to start squeezing a lot of performance out of these models. And then you’ll see that right and left, benchmarks will be dropping when it comes to sequential data. Beyond that, I don’t know. It’s possible we’ll start to plateau or we’ll start inventing new architectures to do new tasks. I think the moral of this story is: Where there’s a lot of data and where it makes sense to use a deep learning model, success is with high probability going to happen. 
That’s why it feels like progress is happening so rapidly right now. It really becomes a story of “How can we get the right data?” when deep learning is involved. That becomes the big challenge. Architecturally, Deep Speech runs on a powerful GPU-based system. Where are the opportunities to move deep learning algorithms onto smaller systems, such as smartphones, in order to offload processing from Baidu’s (or anyone else’s) servers? That’s something I think about a lot, actually, and I think the future is bright in that regard. It’s certainly the case that deep learning models are getting bigger and bigger but, typically, it has also been the case that the size and expressivity of the model is more necessary during training than it is during testing. There has been a lot of work that shows that if you take a model that has been trained at, say, 32-bit floating point precision and then compress it to 8-bit fixed point precision, it works just as well at test time. Or it works almost as well. You can reduce it by a factor of four and still have it work just as well. There’s also a lot of work in compressing existing models, like how can we take a giant model that we’ve trained to soak up a lot of data and then, say, train another, much smaller model to duplicate what that large model does. But that small model we can actually put into an embedded device somewhere. Often, the hard part is in training the system. In those cases, it needs to be really big and the servers have to be really beefy. But I do think there’s a lot of promising work with which we can make the models a lot smaller, and there’s a future in terms of embedding them in different places. Of course, something like search has to go back to cloud servers unless you’ve somehow indexed the whole web on your smartphone, right? Yeah, that would be challenging. For some additional context on just how powerful a system Deep Speech is — and why Baidu puts so much emphasis on systems architecture for its deep learning efforts — consider this explanation offered by Baidu systems research scientist Bryan Catanzaro:
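For readers unfamiliar with the 6 percent character error rate quoted above (roughly 6 wrong characters per 100), here is a small self-contained sketch of how the metric is commonly computed: edit distance between the reference and the hypothesis, divided by the reference length. The example strings are made up and this is not Baidu’s evaluation code.

def edit_distance(ref, hyp):
    """Levenshtein distance between two character sequences."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)]

def character_error_rate(reference, hypothesis):
    return edit_distance(reference, hypothesis) / max(len(reference), 1)

print(character_error_rate('hello world', 'hello word'))  # 1 edit / 11 chars = 0.09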
Kyle McDonald
109
6
https://medium.com/@kcimc/comparing-artificial-artists-7d889428fce4?source=tag_archive---------7----------------
Comparing Artificial Artists – Kyle McDonald – Medium
Last Wednesday, “A Neural Algorithm of Artistic Style” was posted to arXiv, featuring some of the most compelling imagery generated by deep convolutional neural networks (DCNNs) since Google Research’s “DeepDream” post. On Sunday, Kai Sheng Tai posted the first public implementation. I immediately stopped working on my implementation and started playing with his. Unfortunately, his results don’t quite match the paper, and it’s unclear why. I’m just getting started with this topic, so as I learn I want to share my understanding of the algorithm here, along with some results I got from testing his code. In two parts, the paper describes an algorithm for rendering a photo in the style of a given painting: 1. Run the images through a pretrained DCNN and try to match the photo’s feature activations at a chosen layer. They call this “content reconstruction”, and it preserves the arrangement of the scene. 2. Instead of trying to match the activations exactly, try to match the correlation of the activations. They call this “style reconstruction”, and depending on the layer you reconstruct you get varying levels of abstraction. The correlation feature they use is called a Gram matrix: the dot product between the vectorized feature activation matrix and its transpose. If this sounds confusing, see the footnotes. Finally, instead of optimizing for just one of these things, they optimize for both simultaneously: the style of one image, and the content of another image. Here is an attempt to recreate the results from the paper using Kai’s implementation: Not quite the same, and possibly explained by a few differences between Kai’s implementation and the original paper: As a final comparison, consider the images Andrej Karpathy posted from his own implementation. The same large-scale, high-level features are missing here, just like in the style reconstruction of “Seated Nude” above. Besides Kai’s, I’ve seen one more implementation from a PhD student named Satoshi: a brief example in Python with Chainer. I haven’t spent as much time with it, as I had to adapt it to run on my CPU due to lack of memory. But I did notice: After running Tübingen in the style of The Starry Night with a 1:10e3 ratio and 100 iterations, it seems to converge on something matching the general structure but lacking the overall palette: I’d like to understand this algorithm well enough to generalize it to other media (mainly thinking about sound right now), so if you have any insights or other implementations please share them in the comments! I’ve started testing another implementation that popped up this morning from Justin Johnson. His follows the original paper very closely, except for using unequal weights when balancing different layers used for style reconstruction. All the following examples were run for 100 iterations with the default ratio of 1:10e0. Justin switched his implementation to use L-BFGS and equally weighted layers, and to my eyes this matches the results in the original paper. Here are his results for one of the harder content/style pairs: Other implementations that look great, but I haven’t tested enough: The definition of the Gram matrix confused me at first, so I wrote it out as code. Using a literal translation of equation 3 in the paper, you would write in Python, with numpy: It turns out that the original description is computed more efficiently than this literal translation. For example, Kai writes in Lua, with Torch: Satoshi computes it for all the layers simultaneously in Python with Chainer: Or again in Python, with numpy and Caffe layers:
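The numpy, Torch and Chainer snippets referenced in the footnotes above were embedded in the original post and are not shown here, so the following is an independent numpy sketch of the Gram matrix the text describes: the dot product between the vectorized feature activation matrix and its transpose. The (channels, height, width) shape convention and the absence of normalization are assumptions of mine, not necessarily the literal form of equation 3.

import numpy as np

def gram_matrix(feature_maps):
    """Gram matrix of one layer's activations.

    feature_maps -- array of shape (channels, height, width) from a conv layer
    Returns a (channels, channels) matrix of correlations between feature maps.
    """
    c, h, w = feature_maps.shape
    F = feature_maps.reshape(c, h * w)  # vectorize each feature map
    return F @ F.T                      # dot product with its own transpose

# Toy example: 64 feature maps of size 32x32.
activations = np.random.rand(64, 32, 32)
print(gram_matrix(activations).shape)   # (64, 64)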
Jim Fleming
165
4
https://medium.com/jim-fleming/highway-networks-with-tensorflow-1e6dfa667daa?source=tag_archive---------8----------------
Highway Networks with TensorFlow – Jim Fleming – Medium
This week I implemented highway networks to get an intuition for how they work. Highway networks, inspired by LSTMs, are a method of constructing networks with hundreds, even thousands, of layers. Let’s see how we construct them using TensorFlow. TL;DR Fully-connected highway repo and convolutional highway repo. For comparison, let’s start with a standard fully-connected (or “dense”) layer. We need a weight matrix and a bias vector, then we’ll compute the following for the layer output: Here’s what a dense layer looks like as a graph in TensorBoard: For the highway layer what we want are two “gates” that control the flow of information. The “transform” gate controls how much of the activation we pass through, and the “carry” gate controls how much of the unmodified input we pass through. Otherwise, the layer largely resembles a dense layer with a few additions: What happens is that when the transform gate is 1, we pass through our activation (H) and suppress the carry gate (since it will be 0). When the carry gate is 1, we pass through the unmodified input (x), while the activation is suppressed. Here’s what the highway layer graph looks like in TensorBoard: Using a highway layer in a network is also straightforward. One detail to keep in mind is that consecutive highway layers must be the same size, but you can use fully-connected layers to change dimensionality. This becomes especially complicated in convolutional layers where each layer can change the output dimensions. We can use padding (‘SAME’) to maintain each layer’s dimensionality. Otherwise, by simply using hyperparameters from the TensorFlow docs (i.e. no hyperparameter search), the fully-connected highway network performed much better than a fully-connected network. Using MNIST as my simple trial: Now that we have a highway network, I wanted to answer a few questions that came up for me while reading the paper. For instance, how deep can the network go and still converge? The paper briefly mentions 1000 layers: Can we train with 1000 layers on MNIST? Yes, also reaching around 95% accuracy. Try it out with a carry bias around -20.0 for MNIST (from the paper the network will only utilize ~15 layers anyway). The network can probably even go deeper since it’s just learning to carry the last 980 layers or so. We can’t do much useful at or past 1000 layers so that seems sufficient for now. What happens if you set very low or very high carry biases? In either extreme the network simply fails to converge in a reasonable amount of time. In the case of low biases (more positive), the network starts as if the carry gates aren’t present at all. In the case of high biases (more negative), we’re putting more emphasis on carrying and the network can take a long time to overcome that. Otherwise, the biases don’t seem to need to be exact, at least on this simple example. When in doubt start with high biases (more negative) since it’s easier to learn to overcome carrying than without carry gates (which is just a plain network). Overall I was happy with how easy highway networks were to implement. They’re fully differentiable with only a single additional hyperparameter for the initial carry bias. One downside is that highway layers do require additional parameters for the transform weights and biases. However, since we can go deeper, the layers do not need to be as wide, which can compensate. Here are the complete notebooks if you want to play with the code: fully-connected highway repo and convolutional highway repo.
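To make the gating arithmetic concrete, here is a minimal sketch of a single fully connected highway layer in TensorFlow 1.x-style code. The initializers and the default carry bias of -2.0 are illustrative choices of mine, not the values used in the linked notebooks.

import tensorflow as tf

def highway_layer(x, size, carry_bias=-2.0):
    """One fully connected highway layer: y = H(x) * T(x) + x * (1 - T(x)).
    T is the transform gate; (1 - T) acts as the carry gate."""
    W_h = tf.Variable(tf.truncated_normal([size, size], stddev=0.1))
    b_h = tf.Variable(tf.constant(0.1, shape=[size]))
    W_t = tf.Variable(tf.truncated_normal([size, size], stddev=0.1))
    # A negative bias means the gates start out mostly carrying the input,
    # which is what lets very deep stacks begin training at all.
    b_t = tf.Variable(tf.constant(carry_bias, shape=[size]))

    H = tf.nn.relu(tf.matmul(x, W_h) + b_h)  # the usual dense activation
    T = tf.sigmoid(tf.matmul(x, W_t) + b_t)  # transform gate, in (0, 1)
    return H * T + x * (1.0 - T)

# Consecutive highway layers must share a size, so stacking is just:
# h = highway_layer(highway_layer(x, 50), 50)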
Follow me on Twitter for more posts like these. If you’d like help building very deep networks in production, I do consulting.
Nathan Benaich
264
10
https://medium.com/@NathanBenaich/investing-in-artificial-intelligence-a-vc-perspective-afaf6adc82ea?source=tag_archive---------9----------------
Investing in Artificial Intelligence – Nathan Benaich – Medium
My (expanded) talking points from a presentation I gave at the Re.Work Investing in Deep Learning dinner in London on 1st December 2015. TL;DR Check out the slides here. It’s my belief that artificial intelligence is one of the most exciting and transformative opportunities of our time. There’s a few reasons why that’s so. Consumers worldwide carry 2 billion smartphones, they’re increasingly addicted to these devices and 40% of the world is online (KPCB). This means we’re creating new data assets that never existed before (user behavior, preferences, interests, knowledge, connections). The costs of compute and storage are both plummeting by orders of magnitude, while the computational capacity of today’s processors is growing. We’ve seen improvements in learning methods, architectures and software infrastructure. The pace of innovation can therefore only be accelerating. Indeed, we don’t fully appreciate what tomorrow will look and feel like. AI-driven products are already out in the wild and improving the performance of search engines, recommender systems (e.g. e-commerce, music), ad serving and financial trading (amongst others). Companies with the resources to invest in AI are already creating an impetus for others to follow suit or risk not having a competitive seat at the table. Together, therefore, the community has a better understanding and is equipped with more capable tools with which to build learning systems for a wide range of increasingly complex tasks. More on this discussion here. A key consideration, in my view, is that the open sourcing of technologies by large incumbents (Google, Microsoft, Intel, IBM) and the range of companies productising technologies for cheap means that technical barriers are eroding fast. What ends up moving the needle are: proprietary data access/creation, experienced talent and addictive products. Operational Commercial Financial There are two big factors that make involving the user in an AI-driven product paramount. 1) Machines don’t yet recapitulate human cognition. In order to pick up where software falls short, we need to call on the user for help. 2) Buyers/users of software products have more choice today than ever. As such, they’re often fickle (avg. 90-day retention for apps is 35%). Returning expected value out of the box is key to building habits (hyperparameter optimisation can help). Here are some great examples of products which prove that involving the user-in-the-loop improves performance: We can even go a step further, I think, by explaining how machine-generated results are obtained. For example, IBM Watson surfaces relevant literature when supporting a patient diagnosis in the oncology clinic. Doing so improves user satisfaction and helps build confidence in the system to encourage longer term use and investment. Remember, it’s generally hard for us to trust something we don’t truly understand. To put this discussion into context, let’s first look at the global VC market. Q1-Q3 2015 saw $47.2bn invested, a volume higher than each of the full year totals for 17 of the last 20 years (NVCA). We’re likely to breach $55bn by year end. There are circa 900 companies working in the AI field, most of which tackle problems in business intelligence, finance and security. Q4 2014 saw a flurry of deals into AI companies started by well respected and achieved academics: Vicarious, Scaled Inference, MetaMind and Sentient Technologies. 
So far, we’ve seen circa 300 deals into AI companies (defined as businesses whose description includes keywords: artificial intelligence, machine learning, computer vision, NLP, data science, neural network, deep learning from Jan 1st 2015 thru 1st Dec 2015, CB Insights). In the UK, companies like Ravelin, Signal and Gluru raised seed rounds. Circa $2bn was invested, albeit bloated by large venture debt or credit lines for consumer/business loan providers Avant ($339m debt+credit), ZestFinance ($150m debt), LiftForward ($250m credit) and Argon Credit ($75m credit). Importantly, 80% of deals were < $5m in size and 90% of the cash was invested into US companies vs. 13% in Europe. 75% of rounds were in the US. The exit market has seen 33 M&A transactions and 1 IPO (Adgorithms on the LSE). Six events were for European companies, 1 in Asia and the rest were accounted for by American companies. The largest transactions were TellApart/Twitter ($532m; $17m raised), Elastica/Blue Coat Systems ($280m; $45m raised) and SupersonicAds/IronSource ($150m; $21m raised), which return solid multiples of invested capital. The remaining transactions were mostly for talent, given that median team size at the time of the acquisition was 7ppl median. Altogether, AI investments will have accounted for circa 5% of total VC investments for 2015. That’s higher than the 2% claimed in 2013, but still tracking far behind competing categories like adtech, mobile and BI software. The key takeaway points are a) the financing and exit markets for AI companies are still nascent, as exemplified by the small rounds and low deal volumes, and b) the vast majority of activity takes place in the US. Businesses must therefore have exposure to this market. I spent a number of summers in university and 3 years in grad school researching the genetic factors governing the spread of cancer around the body. A key takeaway I left with is the following: therapeutic development is a very challenging, expensive, lengthy, regulated and ultimately offers a transient solution to treating disease. Instead, I truly believe that what we need to improve healthcare outcomes is granular and longitudinal monitoring of physiology and lifestyle. This should enable early detection of health conditions in near real-time, drive down cost of care over a patient’s lifetime, while consequently improving outcomes. Consider the digitally connected lifestyles we lead today. The devices some of us interact with on a daily basis are able to track our movements, vital signs, exercise, sleep and even reproductive health. We’re disconnected for fewer hours of the day than we’re online and I think we’re less apprehensive to storing various data types in the cloud (where they can be accessed, with consent, by 3rd parties). Sure, the news might paint a different, but the fact is that we’re still using the web and it’s wealth of products. On a population level, therefore, we have the chance to interrogate data sets that have never before existed. From these, we could glean insights into how nature and nurture influence the genesis and development of disease. That’s huge. Look at today’s clinical model: a patient presents into the hospital when they feel something is wrong. The doctor has to conduct a battery of tests to derive a diagnosis. These tests address a single (often late stage) time point, at which moment little can be done to reverse damage (e.g. in the case of cancer). Now imagine the future. 
In a world of continuous, non-invasive monitoring of physiology and lifestyle, we could predict disease onset and outcome, understand which condition a patient likely suffers from and how they’ll respond to various therapeutic modalities. There’s loads of applications for artificial intelligence here: intelligence sensors, signal processing, anomaly detection, multivariate classifiers, deep learning on molecular interactions... Some companies are already hacking away at this problem: A point worth noting is that the UK has a slight leg up on the data access front. Initiatives like the UK Biobank (500k patient records), Genomics England (100k genomes sequenced), HipSci (stem cells) and the NHS care.data programme are leading the way in creating centralised data repositories for public health and therapeutic research. Cheers for pointing out, Hari Arul. Could businesses ever conceivably run themselves? AI-enabled automation of knowledge work could cut employment costs by $9tn by 2020 (BAML). Coupled to the efficiency gains worth $1.9tn driven by robots, I reckon there’s a chance for near complete automation of core, repetitive businesses functions in the future. Think of all the productised SaaS tools that are available off the shelf for CRM, marketing, billing/payments, logistics, web development, customer interactions, finance, hiring and BI. Then consider tools like Zapier or Tray.io, which help connect applications and program business logic. These could be further expanded by leveraging contextual data points that inform decision making. Perhaps we could eventually re-image the new eBay, where you’ll have fully automated inventory procurement, pricing, listing generation, translation, recommendations, transaction processing, customer interaction, packaging, fulfilment and shipping. Of course, probably a ways off :) I’m bullish on the value to be created with artificial intelligence across our personal and professional lives. I think there’s currently low VC risk tolerance for this sector, especially given shortening investment horizons for value to be created. More support is needed for companies driving long term innovation, especially that far less is occurring within Universities. VC was born to fund moonshots. We must remember that access to technology will, over time, become commoditised. It’s therefore key to understand your use case, your user, the value you bring and how it’s experience and assessed. This gets to the point of finding a strategy to build a sustainable advantage such that others find it hard to replicate your offering. Aspects of this strategy may in fact be non-AI and non-technical in nature (e.g. the user experience layer — thanks for highlighting this Hari Arul). As such, there’s a renewed focused on core principles: build a solution to an unsolved/poorly served high-value, persistent problem for consumers or businesses. Finally, you must have exposure to the US market where the lion’s share of value is created and realised. We have an opportunity to catalyse the growth of the AI sector in Europe, but not without keeping close tabs on what works/doesn’t work across the pond first-hand. Working in the space? We’d love to get to know you :) Sign up to my newsletter covering AI news and analysis from the tech world, research lab and private/public company market. I’m an investor at Playfair Capital, a London-based investment firm focusing on early stage technology companies that change the way we live, work and play. 
We invest across Europe and the US and our focus is on core technologies and user experiences. 25% of our portfolio is AI: Mapillary, DueDil, Jukedeck, Seldon, Clarify, Gluru and Ravelin. We want to take risk on technologists creating new markets or reinventing existing ones.
Tal Perry
2.6K
17
https://medium.com/@TalPerry/deep-learning-the-stock-market-df853d139e02?source=tag_archive---------3----------------
Deep Learning the Stock Market – Tal Perry – Medium
Update 25.1.17 — Took me a while but here is an ipython notebook with a rough implementation In the past few months I’ve been fascinated with “Deep Learning”, especially its applications to language and text. I’ve spent the bulk of my career in financial technologies, mostly in algorithmic trading and alternative data services. You can see where this is going. I wrote this to get my ideas straight in my head. While I’ve become a “Deep Learning” enthusiast, I don’t have too many opportunities to brain dump an idea in most of its messy glory. I think that a decent indication of a clear thought is the ability to articulate it to people not from the field. I hope that I’ve succeeded in doing that and that my articulation is also a pleasurable read. Why NLP is relevant to Stock prediction In many NLP problems we end up taking a sequence and encoding it into a single fixed size representation, then decoding that representation into another sequence. For example, we might tag entities in the text, translate from English to French or convert audio frequencies to text. There is a torrent of work coming out in these areas and a lot of the results are achieving state of the art performance. In my mind the biggest difference between the NLP and financial analysis is that language has some guarantee of structure, it’s just that the rules of the structure are vague. Markets, on the other hand, don’t come with a promise of a learnable structure, that such a structure exists is the assumption that this project would prove or disprove (rather it might prove or disprove if I can find that structure). Assuming the structure is there, the idea of summarizing the current state of the market in the same way we encode the semantics of a paragraph seems plausible to me. If that doesn’t make sense yet, keep reading. It will. You shall know a word by the company it keeps (Firth, J. R. 1957:11) There is tons of literature on word embeddings. Richard Socher’s lecture is a great place to start. In short, we can make a geometry of all the words in our language, and that geometry captures the meaning of words and relationships between them. You may have seen the example of “King-man +woman=Queen” or something of the sort. Embeddings are cool because they let us represent information in a condensed way. The old way of representing words was holding a vector (a big list of numbers) that was as long as the number of words we know, and setting a 1 in a particular place if that was the current word we are looking at. That is not an efficient approach, nor does it capture any meaning. With embeddings, we can represent all of the words in a fixed number of dimensions (300 seems to be plenty, 50 works great) and then leverage their higher dimensional geometry to understand them. The picture below shows an example. An embedding was trained on more or less the entire internet. After a few days of intensive calculations, each word was embedded in some high dimensional space. This “space” has a geometry, concepts like distance, and so we can ask which words are close together. The authors/inventors of that method made an example. Here are the words that are closest to Frog. But we can embed more than just words. We can do, say , stock market embeddings. Market2Vec The first word embedding algorithm I heard about was word2vec. I want to get the same effect for the market, though I’ll be using a different algorithm. 
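As a quick illustration of the nearest-neighbour and king - man + woman analogies described above, here is what those queries look like with gensim and a pretrained set of vectors. The file name is a placeholder for whichever word2vec-format vectors you have on disk; this is not the embedding training itself.

from gensim.models import KeyedVectors

# Placeholder path: any pretrained vectors in word2vec format will do.
vectors = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin',
                                            binary=True)

# Words closest to "frog" in the embedding space.
print(vectors.most_similar('frog', topn=5))

# The classic analogy: king - man + woman is closest to queen.
print(vectors.most_similar(positive=['king', 'woman'], negative=['man'], topn=1))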
My input data is a csv, the first column is the date, and there are 4*1000 columns corresponding to the High Low Open Closing price of 1000 stocks. That is, my input vector is 4000-dimensional, which is too big. So the first thing I’m going to do is stuff it into a lower dimensional space, say 300 because I liked the movie. Taking something in 4000 dimensions and stuffing it into a 300-dimensional space may sound hard but it’s actually easy. We just need to multiply matrices. A matrix is a big excel spreadsheet that has numbers in every cell and no formatting problems. Imagine an excel table with 4000 columns and 300 rows, and when we basically bang it against the vector a new vector comes out that is only of size 300. I wish that’s how they would have explained it in college. The fanciness starts here as we’re going to set the numbers in our matrix at random, and part of the “deep learning” is to update those numbers so that our excel spreadsheet changes. Eventually this matrix spreadsheet (I’ll stick with matrix from now on) will have numbers in it that bang our original 4000 dimensional vector into a concise 300 dimensional summary of itself. We’re going to get a little fancier here and apply what they call an activation function. We’re going to take a function, and apply it to each number in the vector individually so that they all end up between 0 and 1 (or 0 and infinity, it depends). Why? It makes our vector more special, and makes our learning process able to understand more complicated things. How? So what? What I’m expecting to find is that the new embedding of the market prices (the vector) into a smaller space captures all the essential information for the task at hand, without wasting time on the other stuff. So I’d expect it to capture correlations between stocks, perhaps notice when a certain sector is declining or when the market is very hot. I don’t know what traits it will find, but I assume they’ll be useful. Now What Let’s put aside our market vectors for a moment and talk about language models. Andrej Karpathy wrote the epic post “The Unreasonable Effectiveness of Recurrent Neural Networks”. If I’d summarize the post in the most liberal fashion, it boils down to: train an RNN on a sequence of characters to predict the next character. And then as a punchline, he generated a bunch of text that looks like Shakespeare. And then he did it again with the Linux source code. And then again with a textbook on Algebraic geometry. So I’ll get back to the mechanics of that magic box in a second, but let me remind you that we want to predict the future market based on the past just like he predicted the next word based on the previous one. Where Karpathy used characters, we’re going to use our market vectors and feed them into the magic black box. We haven’t decided what we want it to predict yet, but that is okay, we won’t be feeding its output back into it either. Going deeper I want to point out that this is where we start to get into the deep part of deep learning. So far we just have a single layer of learning, that excel spreadsheet that condenses the market. Now we’re going to add a few more layers and stack them, to make a “deep” something. That’s the deep in deep learning. So Karpathy shows us some sample output from the Linux source code, this is stuff his black box wrote. Notice that it knows how to open and close parentheses, and respects indentation conventions; the contents of the function are properly indented and the multi-line printk statement has an inner indentation. That means that this magic box understands long range dependencies. 
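As a minimal numpy sketch of the projection just described (the shapes follow the text; the ReLU activation, the scale of the random initialization, and the variable names are my assumptions, not the author's code):

```python
import numpy as np

n_stocks, n_features, embed_dim = 1000, 4, 300
market_input = np.random.randn(n_stocks * n_features)  # one day's 4000-dim price vector

W = np.random.randn(n_stocks * n_features, embed_dim) * 0.01  # the "excel spreadsheet",
b = np.zeros(embed_dim)                                       # initialized at random

def relu(x):
    return np.maximum(0.0, x)   # activation: squashes each number into [0, infinity)

market_vector = relu(market_input @ W + b)  # the 300-dim MarketVector
print(market_vector.shape)                  # (300,)
```

During training, backpropagation (discussed later in the post) is what updates W and b; here they simply stay random.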
When it’s indenting within the print statement it knows it’s in a print statement and also remembers that it’s in a function (or at least another indented scope). That’s nuts. It’s easy to gloss over that but an algorithm that has the ability to capture and remember long term dependencies is super useful because... We want to find long term dependencies in the market. Inside the magical black box What’s inside this magical black box? It is a type of Recurrent Neural Network (RNN) called an LSTM. An RNN is a deep learning algorithm that operates on sequences (like sequences of characters). At every step, it takes a representation of the next character (like the embeddings we talked about before) and operates on the representation with a matrix, like we saw before. The thing is, the RNN has some form of internal memory, so it remembers what it saw previously. It uses that memory to decide how exactly it should operate on the next input. Using that memory, the RNN can “remember” that it is inside of an indented scope and that is how we get properly nested output text. A fancy version of an RNN is called a Long Short Term Memory (LSTM). An LSTM has cleverly designed memory that allows it to decide what to keep and what to forget. So an LSTM can see a “{“ and say to itself “Oh yeah, that’s important I should remember that” and when it does, it essentially remembers an indication that it is in a nested scope. Once it sees the corresponding “}” it can decide to forget the original opening brace and thus forget that it is in a nested scope. We can have the LSTM learn more abstract concepts by stacking a few of them on top of each other, that would make us “Deep” again. Now each output of the previous LSTM becomes the input of the next LSTM, and each one goes on to learn higher abstractions of the data coming in. In the example above (and this is just illustrative speculation), the first layer of LSTMs might learn that characters separated by a space are “words”. The next layer might learn word types like (static void action_new_function). The next layer might learn the concept of a function and its arguments and so on. It’s hard to tell exactly what each layer is doing, though Karpathy’s blog has a really nice example of how he visualized exactly that. Connecting Market2Vec and LSTMs The studious reader will notice that Karpathy used characters as his inputs, not embeddings (technically a one-hot encoding of characters). But Lars Eidnes actually used word embeddings when he wrote Auto-Generating Clickbait With Recurrent Neural Networks. The figure above shows the network he used. Ignore the SoftMax part (we’ll get to it later). For the moment, check out how he puts a sequence of word vectors in at the bottom. (Remember, a “word vector” is a representation of a word in the form of a bunch of numbers, like we saw in the beginning of this post). Lars inputs a sequence of word vectors, and each one of them passes through the stacked LSTM layers so the network can predict the next word. We’re going to do the same thing with one difference: instead of word vectors we’ll input “MarketVectors”, those market vectors we described before. To recap, the MarketVectors should contain a summary of what’s happening in the market at a given point in time. By putting a sequence of them through LSTMs I hope to capture the long term dynamics that have been happening in the market. By stacking together a few layers of LSTMs I hope to capture higher level abstractions of the market’s behavior, as sketched below. 
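A minimal sketch of the stacked-LSTM encoder described above, written with today's tf.keras API (the original post predates it, so treat this as an illustration rather than the author's architecture); the sequence length and layer sizes are arbitrary.

```python
import tensorflow as tf

seq_len, market_vector_dim = 60, 300  # 60 time-steps of 300-dim MarketVectors

model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len, market_vector_dim)),
    tf.keras.layers.LSTM(128, return_sequences=True),  # lower layer: short-range structure
    tf.keras.layers.LSTM(128),                          # upper layer: higher abstractions
    tf.keras.layers.Dense(1, activation="sigmoid"),     # probability of the target event
])
model.summary()
```

The sigmoid head anticipates the single-class prediction discussed in the next section.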
What Comes out Thus far we haven’t talked at all about how the algorithm actually learns anything, we just talked about all the clever transformations we’ll do on the data. We’ll defer that conversation to a few paragraphs down, but please keep this part in mind as it is the set-up for the punch line that makes everything else worthwhile. In Karpathy’s example, the output of the LSTMs is a vector that represents the next character in some abstract representation. In Eidnes’ example, the output of the LSTMs is a vector that represents what the next word will be in some abstract space. The next step in both cases is to change that abstract representation into a probability vector, that is a list that says how likely each character or word, respectively, is to appear next. That’s the job of the SoftMax function. Once we have a list of likelihoods we select the character or word that is the most likely to appear next. In our case of “predicting the market”, we need to ask ourselves what exactly we want the model to predict. Some of the options that I thought about were: 1 and 2 are regression problems, where we have to predict an actual number instead of the likelihood of a specific event (like the letter n appearing or the market going up). Those are fine but not what I want to do. 3 and 4 are fairly similar, they both ask to predict an event (in technical jargon, a class label). An event could be the letter n appearing next or it could be Moved up 5% while not going down more than 3% in the last 10 minutes. The trade-off between 3 and 4 is that 3 is much more common and thus easier to learn about while 4 is more valuable as not only is it an indicator of profit but it also has some constraint on risk. 5 is the one we’ll continue with for this article because it’s similar to 3 and 4 but has mechanics that are easier to follow. The VIX is sometimes called the Fear Index and it represents how volatile the stocks in the S&P500 are. It is derived by observing the implied volatility for specific options on each of the stocks in the index. Sidenote — Why predict the VIX What makes the VIX an interesting target is that Back to our LSTM outputs and the SoftMax How do we use the formulations we saw before to predict changes in the VIX a few minutes in the future? For each point in our dataset, we’ll look at what happened to the VIX 5 minutes later. If it went up by more than 1% without going down more than 0.5% during that time we’ll output a 1, otherwise a 0 (a small sketch of this labelling rule follows below). Then we’ll get a sequence that looks like: We want to take the vector that our LSTMs output and squish it so that it gives us the probability of the next item in our sequence being a 1. The squishing happens in the SoftMax part of the diagram above. (Technically, since we only have 1 class now, we use a sigmoid). So before we get into how this thing learns, let’s recap what we’ve done so far How does this thing learn? Now the fun part. Everything we did until now was called the forward pass, we’d do all of those steps while we train the algorithm and also when we use it in production. Here we’ll talk about the backward pass, the part we do only while in training that makes our algorithm learn. So during training, not only did we prepare years’ worth of historical data, we also prepared a sequence of prediction targets, that list of 0s and 1s that showed whether the VIX moved the way we wanted it to or not after each observation in our data. To learn, we’ll feed the market data to our network and compare its output to what we calculated. 
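A hedged pandas sketch of the labelling rule described above, assuming a minute-level pandas Series of VIX values called vix; the thresholds mirror the text, everything else (function name, horizon handling) is my own.

```python
import numpy as np
import pandas as pd

def make_labels(vix: pd.Series, horizon: int = 5,
                up: float = 0.01, down: float = 0.005) -> pd.Series:
    """1 if, within `horizon` steps, the VIX rises more than `up`
    without also falling more than `down`; otherwise 0."""
    values = vix.to_numpy(dtype=float)
    labels = []
    for i in range(len(values) - horizon):
        returns = values[i + 1 : i + 1 + horizon] / values[i] - 1.0
        went_up = bool((returns > up).any())
        went_down = bool((returns < -down).any())
        labels.append(int(went_up and not went_down))
    return pd.Series(labels, index=vix.index[: len(labels)])
```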
Comparing in our case will be simple subtraction; that is, we’ll say that our model’s error is error = √((actual − predicted)²), or in English, the square root of the square of the difference between what actually happened and what we predicted. Here’s the beauty. That’s a differentiable function, that is, we can tell by how much the error would have changed if our prediction had changed a little. Our prediction is the outcome of a differentiable function, the SoftMax. The inputs to the softmax, the LSTMs, are all mathematical functions that are differentiable. Now all of these functions are full of parameters, those big excel spreadsheets I talked about ages ago. So at this stage what we do is take the derivative of the error with respect to every one of the millions of parameters in all of those excel spreadsheets we have in our model. When we do that we can see how the error will change when we change each parameter, so we’ll change each parameter in a way that will reduce the error. This procedure propagates all the way to the beginning of the model. It tweaks the way we embed the inputs into MarketVectors so that our MarketVectors represent the most significant information for our task. It tweaks when and what each LSTM chooses to remember so that their outputs are the most relevant to our task. It tweaks the abstractions our LSTMs learn so that they learn the most important abstractions for our task. Which in my opinion is amazing because we have all of this complexity and abstraction that we never had to specify anywhere. It’s all inferred MathaMagically from the specification of what we consider to be an error (a toy sketch of one such gradient step follows below). What’s next Now that I’ve laid this out in writing and it still makes sense to me, I want to go and implement it. So, if you’ve come this far please point out my errors and share your inputs. Other thoughts Here are some mostly more advanced thoughts about this project, what other things I might try and why it makes sense to me that this may actually work. Liquidity and efficient use of capital Generally, the more liquid a particular market is, the more efficient it is. I think this is due to a chicken-and-egg cycle: as a market becomes more liquid it is able to absorb more capital moving in and out without that capital hurting itself. As a market becomes more liquid and more capital can be used in it, you’ll find more sophisticated players moving in. This is because it is expensive to be sophisticated, so you need to make returns on a large chunk of capital in order to justify your operational costs. A quick corollary is that in less liquid markets the competition isn’t quite as sophisticated and so the opportunities a system like this can bring may not have been traded away. The point being, were I to try to trade this, I would try to trade it on less liquid segments of the market, that is maybe the TASE 100 instead of the S&P 500. This stuff is new The knowledge of these algorithms, the frameworks to execute them and the computing power to train them are all new, at least in the sense that they are available to the average Joe such as myself. I’d assume that top players figured this stuff out years ago and have had the capacity to execute for just as long but, as I mention in the above paragraph, they are likely executing in liquid markets that can support their size. The next tier of market participants, I assume, have a slower velocity of technological assimilation and in that sense, there is or soon will be a race to execute on this in as yet untapped markets. 
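Before the more advanced thoughts, here is a toy numpy sketch of the gradient step described in the "How does this thing learn?" section above: one parameter vector, the squared-error loss from the text, and a hand-derived chain rule. Real frameworks differentiate the whole graph automatically; the numbers and the learning rate are arbitrary.

```python
import numpy as np

x = np.array([0.2, -0.1, 0.4])   # a toy input
y_true = 1.0                     # what actually happened (the 0/1 target)
w = np.random.randn(3)           # one tiny "excel spreadsheet" of parameters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(100):
    y_pred = sigmoid(x @ w)                  # forward pass
    error = (y_true - y_pred) ** 2           # squared difference, as in the text
    # backward pass: d(error)/dw via the chain rule
    grad = -2.0 * (y_true - y_pred) * y_pred * (1.0 - y_pred) * x
    w -= 0.5 * grad                          # nudge the parameters to reduce the error

print(error)  # small after a hundred steps
```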
Multiple Time Frames While I mentioned a single stream of inputs in the above, I imagine that a more efficient way to train would be to train market vectors (at least) on multiple time frames and feed them in at the inference stage. That is, my lowest time frame would be sampled every 30 seconds and I’d expect the network to learn dependencies that stretch hours at most. I don’t know if they are relevant or not but I think there are patterns on multiple time frames and if the cost of computation can be brought low enough then it is worthwhile to incorporate them into the model. I’m still wrestling with how best to represent these on the computational graph and perhaps it is not mandatory to start with. MarketVectors When using word vectors in NLP we usually start with a pretrained model and continue adjusting the embeddings during training of our model. In my case, there are no pretrained market vectors available nor is there a clear algorithm for training them. My original consideration was to use an auto-encoder like in this paper but end-to-end training is cooler. A more serious consideration is the success of sequence to sequence models in translation and speech recognition, where a sequence is eventually encoded as a single vector and then decoded into a different representation (like from speech to text or from English to French). In that view, the entire architecture I described is essentially the encoder and I haven’t really laid out a decoder. But, I want to achieve something specific with the first layer, the one that takes as input the 4000 dimensional vector and outputs a 300 dimensional one. I want it to find correlations or relations between various stocks and compose features about them. The alternative is to run each input through an LSTM, perhaps concatenate all of the output vectors and consider that the output of the encoder stage. I think this would be inefficient as the interactions and correlations between instruments and their features would be lost, and there would be 10x more computation required. On the other hand, such an architecture could naively be parallelized across multiple GPUs and hosts, which is an advantage. CNNs Recently there has been a spate of papers on character-level machine translation. This paper caught my eye as they manage to capture long range dependencies with a convolutional layer rather than an RNN. I haven’t given it more than a brief read but I think that a modification where I’d treat each stock as a channel and convolve over channels first (like in RGB images) would be another way to capture the market dynamics, in the same way that they essentially encode semantic meaning from characters. Founder of https://LightTag.io, platform to annotate text for NLP. Google developer expert in ML. I do deep learning on text for a living and for fun.
Andrej Karpathy
9.2K
7
https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b?source=tag_archive---------4----------------
Yes you should understand backprop – Andrej Karpathy – Medium
When we offered CS231n (Deep Learning class) at Stanford, we intentionally designed the programming assignments to include explicit calculations involved in backpropagation on the lowest level. The students had to implement the forward and the backward pass of each layer in raw numpy. Inevitably, some students complained on the class message boards: This is seemingly a perfectly sensible appeal - if you’re never going to write backward passes once the class is over, why practice writing them? Are we just torturing the students for our own amusement? Some easy answers could make arguments along the lines of “it’s worth knowing what’s under the hood as an intellectual curiosity”, or perhaps “you might want to improve on the core algorithm later”, but there is a much stronger and practical argument, which I wanted to devote a whole post to: > The problem with Backpropagation is that it is a leaky abstraction. In other words, it is easy to fall into the trap of abstracting away the learning process — believing that you can simply stack arbitrary layers together and backprop will “magically make them work” on your data. So lets look at a few explicit examples where this is not the case in quite unintuitive ways. We’re starting off easy here. At one point it was fashionable to use sigmoid (or tanh) non-linearities in the fully connected layers. The tricky part people might not realize until they think about the backward pass is that if you are sloppy with the weight initialization or data preprocessing these non-linearities can “saturate” and entirely stop learning — your training loss will be flat and refuse to go down. For example, a fully connected layer with sigmoid non-linearity computes (using raw numpy): If your weight matrix W is initialized too large, the output of the matrix multiply could have a very large range (e.g. numbers between -400 and 400), which will make all outputs in the vector z almost binary: either 1 or 0. But if that is the case, z*(1-z), which is local gradient of the sigmoid non-linearity, will in both cases become zero (“vanish”), making the gradient for both x and W be zero. The rest of the backward pass will come out all zero from this point on due to multiplication in the chain rule. Another non-obvious fun fact about sigmoid is that its local gradient (z*(1-z)) achieves a maximum at 0.25, when z = 0.5. That means that every time the gradient signal flows through a sigmoid gate, its magnitude always diminishes by one quarter (or more). If you’re using basic SGD, this would make the lower layers of a network train much slower than the higher ones. TLDR: if you’re using sigmoids or tanh non-linearities in your network and you understand backpropagation you should always be nervous about making sure that the initialization doesn’t cause them to be fully saturated. See a longer explanation in this CS231n lecture video. Another fun non-linearity is the ReLU, which thresholds neurons at zero from below. The forward and backward pass for a fully connected layer that uses ReLU would at the core include: If you stare at this for a while you’ll see that if a neuron gets clamped to zero in the forward pass (i.e. z=0, it doesn’t “fire”), then its weights will get zero gradient. This can lead to what is called the “dead ReLU” problem, where if a ReLU neuron is unfortunately initialized such that it never fires, or if a neuron’s weights ever get knocked off with a large update during training into this regime, then this neuron will remain permanently dead. 
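The numpy snippet referenced above ("computes (using raw numpy):") was an embedded gist that did not survive the copy. Here is a reconstruction in the spirit of the post, not necessarily its exact code; the oversized weight scale is deliberate to show the saturation being described.

```python
import numpy as np

x = np.random.randn(100)            # input vector
W = np.random.randn(50, 100) * 10   # initialized too large: outputs will saturate

z = 1.0 / (1.0 + np.exp(-np.dot(W, x)))   # forward pass: sigmoid(Wx), almost all 0s and 1s

# backward pass, given some upstream gradient dz flowing into z
dz = np.random.randn(*z.shape)
local = z * (1 - z)                 # sigmoid's local gradient -- near zero when saturated
dW = np.outer(dz * local, x)        # gradient on the weights
dx = np.dot(W.T, dz * local)        # gradient on the input
print(np.abs(dW).max())             # tiny: learning has effectively stopped
```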
It’s like permanent, irrecoverable brain damage. Sometimes you can forward the entire training set through a trained network and find that a large fraction (e.g. 40%) of your neurons were zero the entire time. TLDR: If you understand backpropagation and your network has ReLUs, you’re always nervous about dead ReLUs. These are neurons that never turn on for any example in your entire training set, and will remain permanently dead. Neurons can also die during training, usually as a symptom of aggressive learning rates. See a longer explanation in CS231n lecture video. Vanilla RNNs feature another good example of unintuitive effects of backpropagation. I’ll copy paste a slide from CS231n that has a simplified RNN that does not take any input x, and only computes the recurrence on the hidden state (equivalently, the input x could always be zero): This RNN is unrolled for T time steps. When you stare at what the backward pass is doing, you’ll see that the gradient signal going backwards in time through all the hidden states is always being multiplied by the same matrix (the recurrence matrix Whh), interspersed with non-linearity backprop. What happens when you take one number a and start multiplying it by some other number b (i.e. a*b*b*b*b*b*b...)? This sequence either goes to zero if |b| < 1, or explodes to infinity when |b| > 1. The same thing happens in the backward pass of an RNN, except b is a matrix and not just a number, so we have to reason about its largest eigenvalue instead. TLDR: If you understand backpropagation and you’re using RNNs you are nervous about having to do gradient clipping, or you prefer to use an LSTM. See a longer explanation in this CS231n lecture video. Let’s look at one more — the one that actually inspired this post. Yesterday I was browsing for a Deep Q Learning implementation in TensorFlow (to see how others deal with computing the numpy equivalent of Q[:, a], where a is an integer vector — turns out this trivial operation is not supported in TF). Anyway, I searched “dqn tensorflow”, clicked the first link, and found the core code. Here is an excerpt: If you’re familiar with DQN, you can see that there is the target_q_t, which is just reward + \gamma \max_a Q(s’,a), and then there is q_acted, which is Q(s,a) of the action that was taken. The authors here subtract the two into variable delta, which they then want to minimize on line 295 with the L2 loss with tf.reduce_mean(tf.square()). So far so good. The problem is on line 291. The authors are trying to be robust to outliers, so if the delta is too large, they clip it with tf.clip_by_value. This is well-intentioned and looks sensible from the perspective of the forward pass, but it introduces a major bug if you think about the backward pass. The clip_by_value function has a local gradient of zero outside of the range min_delta to max_delta, so whenever the delta is above min/max_delta, the gradient becomes exactly zero during backprop. The authors are clipping the raw Q delta, when they are likely trying to clip the gradient for added robustness. In that case the correct thing to do is to use the Huber loss in place of tf.square: It’s a bit gross in TensorFlow because all we want to do is clip the gradient if it is above a threshold, but since we can’t meddle with the gradients directly we have to do it in this round-about way of defining the Huber loss. In Torch this would be much simpler. I submitted an issue on the DQN repo and this was promptly fixed. 
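The Huber-loss snippet referenced above ("in place of tf.square:") was also lost in the copy. A common TensorFlow formulation, offered here as a reconstruction rather than the post's exact code, is:

```python
import tensorflow as tf

def huber_loss(delta, max_grad=1.0):
    """Quadratic for |delta| <= max_grad, linear beyond it,
    so the gradient is effectively clipped at +/- max_grad."""
    abs_delta = tf.abs(delta)
    quadratic = 0.5 * tf.square(delta)
    linear = max_grad * (abs_delta - 0.5 * max_grad)
    return tf.where(abs_delta <= max_grad, quadratic, linear)

# loss = tf.reduce_mean(huber_loss(target_q_t - q_acted))  # instead of squaring a clipped delta
```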
Backpropagation is a leaky abstraction; it is a credit assignment scheme with non-trivial consequences. If you try to ignore how it works under the hood because “TensorFlow automagically makes my networks learn”, you will not be ready to wrestle with the dangers it presents, and you will be much less effective at building and debugging neural networks. The good news is that backpropagation is not that difficult to understand, if presented properly. I have relatively strong feelings on this topic because it seems to me that 95% of backpropagation materials out there present it all wrong, filling pages with mechanical math. Instead, I would recommend the CS231n lecture on backprop which emphasizes intuition (yay for shameless self-advertising). And if you can spare the time, as a bonus, work through the CS231n assignments, which get you to write backprop manually and help you solidify your understanding. That’s it for now! I hope you’ll be much more suspicious of backpropagation going forward and think carefully through what the backward pass is doing. Also, I’m aware that this post has (unintentionally!) turned into several CS231n ads. Apologies for that :) Director of AI at Tesla. Previously Research Scientist at OpenAI and PhD student at Stanford. I like to train deep neural nets on large datasets.
Erik Hallström
2.5K
7
https://medium.com/@erikhallstrm/hello-world-rnn-83cd7105b767?source=tag_archive---------5----------------
How to build a Recurrent Neural Network in TensorFlow (1/7)
In this tutorial I’ll explain how to build a simple working Recurrent Neural Network in TensorFlow. This is the first in a series of seven parts where various aspects and techniques of building Recurrent Neural Networks in TensorFlow are covered. A short introduction to TensorFlow is available here. For now, let’s get started with the RNN! It is short for “Recurrent Neural Network”, and is basically a neural network that can be used when your data is treated as a sequence, where the particular order of the data-points matter. More importantly, this sequence can be of arbitrary length. The most straight-forward example is perhaps a time-series of numbers, where the task is to predict the next value given previous values. The input to the RNN at every time-step is the current value as well as a state vector which represent what the network has “seen” at time-steps before. This state-vector is the encoded memory of the RNN, initially set to zero. The best and most comprehensive article explaining RNN:s I’ve found so far is this article by researchers at UCSD, highly recommended. For now you only need to understand the basics, read it until the “Modern RNN architectures”-section. That will be covered later. Although this article contains some explanations, it is mostly focused on the practical part, how to build it. You are encouraged to look up more theory on the Internet, there are plenty of good explanations. We will build a simple Echo-RNN that remembers the input data and then echoes it after a few time-steps. First let’s set some constants we’ll need, what they mean will become clear in a moment. Now generate the training data, the input is basically a random binary vector. The output will be the “echo” of the input, shifted echo_step steps to the right. Notice the reshaping of the data into a matrix with batch_size rows. Neural networks are trained by approximating the gradient of loss function with respect to the neuron-weights, by looking at only a small subset of the data, also known as a mini-batch. The theoretical reason for doing this is further elaborated in this question. The reshaping takes the whole dataset and puts it into a matrix, that later will be sliced up into these mini-batches. TensorFlow works by first building up a computational graph, that specifies what operations will be done. The input and output of this graph is typically multidimensional arrays, also known as tensors. The graph, or parts of it can then be executed iteratively in a session, this can either be done on the CPU, GPU or even a resource on a remote server. The two basic TensorFlow data-structures that will be used in this example are placeholders and variables. On each run the batch data is fed to the placeholders, which are “starting nodes” of the computational graph. Also the RNN-state is supplied in a placeholder, which is saved from the output of the previous run. The weights and biases of the network are declared as TensorFlow variables, which makes them persistent across runs and enables them to be updated incrementally for each batch. The figure below shows the input data-matrix, and the current batch batchX_placeholder is in the dashed rectangle. As we will see later, this “batch window” is slided truncated_backprop_length steps to the right at each run, hence the arrow. In our example below batch_size = 3, truncated_backprop_length = 3, and total_series_length = 36. Note that these numbers are just for visualization purposes, the values are different in the code. 
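The code blocks in this series were embedded gists that did not survive the copy. A minimal reconstruction of the constants and data-generation step described above (the exact constant values are my guesses, but the shapes and the echo logic follow the text):

```python
import numpy as np

total_series_length = 50000
echo_step = 3
batch_size = 5

def generate_data():
    # Random binary input; the output is the same series shifted echo_step to the right.
    x = np.random.choice(2, total_series_length, p=[0.5, 0.5])
    y = np.roll(x, echo_step)
    y[0:echo_step] = 0
    # Reshape into batch_size rows so it can later be sliced into mini-batches.
    x = x.reshape((batch_size, -1))
    y = y.reshape((batch_size, -1))
    return x, y

x, y = generate_data()
print(x.shape, y.shape)   # (5, 10000) each
```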
The series order index is shown as numbers in a few of the data-points. Now it’s time to build the part of the graph that resembles the actual RNN computation, first we want to split the batch data into adjacent time-steps. As you can see in the picture below that is done by unpacking the columns (axis = 1) of the batch into a Python list. The RNN will simultaneously be training on different parts in the time-series; steps 4 to 6, 16 to 18 and 28 to 30 in the current batch-example. The reason for using the variable names “plural”_”series” is to emphasize that the variable is a list that represents a time-series with multiple entries at each step. The fact that the training is done on three places simultaneously in our time-series requires us to save three instances of states when propagating forward. That has already been accounted for, as you see that the init_state placeholder has batch_size rows. Next let’s build the part of the graph that does the actual RNN computation. Notice the concatenation on line 6, what we actually want to do is calculate the sum of two affine transforms current_input * Wa + current_state * Wb in the figure below. By concatenating those two tensors you will only use one matrix multiplication. The addition of the bias b is broadcasted on all samples in the batch (a sketch of this step follows below). You may wonder what the variable name truncated_backprop_length is supposed to mean. When an RNN is trained, it is actually treated as a deep neural network with reoccurring weights in every layer. These layers will not be unrolled to the beginning of time, that would be too computationally expensive, and are therefore truncated at a limited number of time-steps. In our sample schematics above, the error is backpropagated three steps in our batch. This is the final part of the graph, a fully connected softmax layer from the state to the output that will make the classes one-hot encoded, and then calculating the loss of the batch. The last line is adding the training functionality, TensorFlow will perform back-propagation for us automatically — the computation graph is executed once for each mini-batch and the network-weights are updated incrementally. Notice the API call to sparse_softmax_cross_entropy_with_logits, it automatically calculates the softmax internally and then computes the cross-entropy. In our example the classes are mutually exclusive (they are either zero or one), which is the reason for using the “Sparse-softmax”, you can read more about it in the API. The usage is to have logits of shape [batch_size, num_classes] and labels of shape [batch_size]. There is a visualization function so we can see what’s going on in the network as we train. It will plot the loss over time, show training input, training output and the current predictions by the network on different sample series in a training batch. It’s time to wrap up and train the network, in TensorFlow the graph is executed in a session. New data is generated on each epoch (not the usual way to do it, but it works in this case since everything is predictable). You can see that we are moving truncated_backprop_length steps forward on each iteration (line 15–19), but it is possible to have different strides. This subject is further elaborated in this article. The downside with doing this is that truncated_backprop_length needs to be significantly larger than the time dependencies (three steps in our case) in order to encapsulate the relevant training data. Otherwise there might be a lot of “misses”, as you can see in the figure below. 
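As a sketch of the single forward step just described, concatenating the current input column with the previous state and applying one affine transform, written in the TF 1.x style the series uses (sizes and names are illustrative):

```python
import numpy as np
import tensorflow as tf   # TF 1.x style, matching the series

state_size = 4

batchX = tf.placeholder(tf.float32, [None, 1])              # current input column
init_state = tf.placeholder(tf.float32, [None, state_size]) # previous state

W = tf.Variable(np.random.rand(state_size + 1, state_size), dtype=tf.float32)
b = tf.Variable(np.zeros((1, state_size)), dtype=tf.float32)

# current_input * Wa + current_state * Wb, done as a single matmul on the concatenation;
# the bias b is broadcast across all samples in the batch.
input_and_state = tf.concat([batchX, init_state], axis=1)
next_state = tf.tanh(tf.matmul(input_and_state, W) + b)
```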
Also realize that this is just a simple example to explain how an RNN works, this functionality could easily be programmed in just a few lines of code. The network will be able to exactly learn the echo behavior so there is no need for testing data. The program will update the plot as training progresses, shown in the picture below. Blue bars denote a training input signal (binary one), red bars show echoes in the training output and green bars are the echoes the net is generating. The different bar plots show different sample series in the current batch. Our algorithm will fairly quickly learn the task. The graph in the top-left corner shows the output of the loss function, but why are there spikes in the curve? Think of it for a moment, the answer is below. The reason for the spikes is that we are starting on a new epoch, and generating new data. Since the matrix is reshaped, the first element on each row is adjacent to the last element in the previous row. The first few elements on all rows (except the first) have dependencies that will not be included in the state, so the net will always perform badly on the first batch. This is the whole runnable program, just copy-paste and run. After each part in the article series the whole runnable program will be presented. If a line is referenced by number, these are the line numbers that we mean. In the next post in this series we will simplify the computational graph creation by using the native TensorFlow RNN API. Studied Engineering Physics and Machine Learning at the Royal Institute of Technology in Stockholm. Also been living in Taiwan, studying Chinese. Interested in Deep Learning.
Stefan Kojouharov
1.5K
23
https://chatbotslife.com/ultimate-guide-to-leveraging-nlp-machine-learning-for-you-chatbot-531ff2dd870c?source=tag_archive---------6----------------
Ultimate Guide to Leveraging NLP & Machine Learning for your Chatbot
Code Snippets and Github Included Over the past few months I have been collecting the best resources on NLP and how to apply NLP and Deep Learning to Chatbots. Every once in awhile, I would run across an exceptional piece of content and I quickly started putting together a master list. Soon I found myself sharing this list and some of the most useful articles with developers and other people in the bot community. In the process, my list became a Guide and after some urging, I have decided to share it, or at least a condensed version of it, for length reasons. This guide is mostly based on the work done by Denny Britz who has done a phenomenal job exploring the depths of Deep Learning for Bots. Code Snippets and Github included! Without further ado... Let us Begin! Chatbots are a hot topic and many companies are hoping to develop bots to have natural conversations indistinguishable from human ones, and many are claiming to be using NLP and Deep Learning techniques to make this possible. But with all the hype around AI it’s sometimes difficult to tell fact from fiction. In this series I want to go over some of the Deep Learning techniques that are used to build conversational agents, starting off by explaining where we are right now, what’s possible, and what will stay nearly impossible for at least a little while. Retrieval-based models (easier) use a repository of predefined responses and some kind of heuristic to pick an appropriate response based on the input and context. The heuristic could be as simple as a rule-based expression match, or as complex as an ensemble of Machine Learning classifiers. These systems don’t generate any new text, they just pick a response from a fixed set. Generative models (harder) don’t rely on pre-defined responses. They generate new responses from scratch. Generative models are typically based on Machine Translation techniques, but instead of translating from one language to another, we “translate” from an input to an output (response). Both approaches have some obvious pros and cons. Due to the repository of handcrafted responses, retrieval-based methods don’t make grammatical mistakes. However, they may be unable to handle unseen cases for which no appropriate predefined response exists. For the same reasons, these models can’t refer back to contextual entity information like names mentioned earlier in the conversation. Generative models are “smarter”. They can refer back to entities in the input and give the impression that you’re talking to a human. However, these models are hard to train, are quite likely to make grammatical mistakes (especially on longer sentences), and typically require huge amounts of training data. Deep Learning techniques can be used for both retrieval-based and generative models, but research seems to be moving into the generative direction. Deep Learning architectures like Sequence to Sequence are uniquely suited for generating text and researchers are hoping to make rapid progress in this area. However, we’re still at the early stages of building generative models that work reasonably well. Production systems are more likely to be retrieval-based for now. LONG VS. SHORT CONVERSATIONS The longer the conversation the more difficult it is to automate. On one side of the spectrum are Short-Text Conversations (easier) where the goal is to create a single response to a single input. For example, you may receive a specific question from a user and reply with an appropriate answer. 
Then there are long conversations (harder) where you go through multiple turns and need to keep track of what has been said. Customer support conversations are typically long conversational threads with multiple questions. In an open domain (harder) setting the user can take the conversation anywhere. There isn’t necessarily a well-defined goal or intention. Conversations on social media sites like Twitter and Reddit are typically open domain — they can go into all kinds of directions. The infinite number of topics and the fact that a certain amount of world knowledge is required to create reasonable responses makes this a hard problem. In a closed domain (easier) setting the space of possible inputs and outputs is somewhat limited because the system is trying to achieve a very specific goal. Technical Customer Support or Shopping Assistants are examples of closed domain problems. These systems don’t need to be able to talk about politics, they just need to fulfill their specific task as efficiently as possible. Sure, users can still take the conversation anywhere they want, but the system isn’t required to handle all these cases — and the users don’t expect it to. There are some obvious and not-so-obvious challenges when building conversational agents, most of which are active research areas. To produce sensible responses systems may need to incorporate both linguistic context and physical context. In long dialogs people keep track of what has been said and what information has been exchanged. That’s an example of linguistic context. The most common approach is to embed the conversation into a vector, but doing that with long conversations is challenging. Experiments in Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models and Attention with Intention for a Neural Network Conversation Model both go into that direction. One may also need to incorporate other kinds of contextual data such as date/time, location, or information about a user. When generating responses the agent should ideally produce consistent answers to semantically identical inputs. For example, you want to get the same reply to “How old are you?” and “What is your age?”. This may sound simple, but incorporating such fixed knowledge or “personality” into models is very much a research problem. Many systems learn to generate linguistically plausible responses, but they are not trained to generate semantically consistent ones. Usually that’s because they are trained on a lot of data from multiple different users. Models like that in A Persona-Based Neural Conversation Model are making first steps into the direction of explicitly modeling a personality. The ideal way to evaluate a conversational agent is to measure whether or not it is fulfilling its task, e.g. solve a customer support problem, in a given conversation. But such labels are expensive to obtain because they require human judgment and evaluation. Sometimes there is no well-defined goal, as is the case with open-domain models. Common metrics such as BLEU that are used for Machine Translation and are based on text matching aren’t well suited because sensible responses can contain completely different words or phrases. In fact, in How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation researchers find that none of the commonly used metrics really correlate with human judgment. 
A common problem with generative systems is that they tend to produce generic responses like “That’s great!” or “I don’t know” that work for a lot of input cases. Early versions of Google’s Smart Reply tended to respond with “I love you” to almost anything. That’s partly a result of how these systems are trained, both in terms of data and in terms of actual training objective/algorithm. Some researchers have tried to artificially promote diversity through various objective functions. However, humans typically produce responses that are specific to the input and carry an intention. Because generative systems (and particularly open-domain systems) aren’t trained to have specific intentions they lack this kind of diversity. Given all the cutting edge research right now, where are we and how well do these systems actually work? Let’s consider our taxonomy again. A retrieval-based open domain system is obviously impossible because you can never handcraft enough responses to cover all cases. A generative open-domain system is almost Artificial General Intelligence (AGI) because it needs to handle all possible scenarios. We’re very far away from that as well (but a lot of research is going on in that area). This leaves us with problems in restricted domains where both generative and retrieval based methods are appropriate. The longer the conversations and the more important the context, the more difficult the problem becomes. In a recent interview, Andrew Ng, now chief scientist of Baidu, puts it well: Many companies start off by outsourcing their conversations to human workers and promise that they can “automate” it once they’ve collected enough data. That’s likely to happen only if they are operating in a pretty narrow domain — like a chat interface to call an Uber for example. Anything that’s a bit more open domain (like sales emails) is beyond what we can currently do. However, we can also use these systems to assist human workers by proposing and correcting responses. That’s much more feasible. Grammatical mistakes in production systems are very costly and may drive away users. That’s why most systems are probably best off using retrieval-based methods that are free of grammatical errors and offensive responses. If companies can somehow get their hands on huge amounts of data then generative models become feasible — but they must be assisted by other techniques to prevent them from going off the rails like Microsoft’s Tay did. The Code and data for this tutorial is on Github. The vast majority of production systems today are retrieval-based, or a combination of retrieval-based and generative. Google’s Smart Reply is a good example. Generative models are an active area of research, but we’re not quite there yet. If you want to build a conversational agent today your best bet is most likely a retrieval-based model. In this post we’ll work with the Ubuntu Dialog Corpus (paper, github). The Ubuntu Dialog Corpus (UDC) is one of the largest public dialog datasets available. It’s based on chat logs from the Ubuntu channels on a public IRC network. The paper goes into detail on how exactly the corpus was created, so I won’t repeat that here. However, it’s important to understand what kind of data we’re working with, so let’s do some exploration first. The training data consists of 1,000,000 examples, 50% positive (label 1) and 50% negative (label 0). Each example consists of a context, the conversation up to this point, and an utterance, a response to the context. 
A positive label means that an utterance was an actual response to a context, and a negative label means that the utterance wasn’t — it was picked randomly from somewhere in the corpus. Here is some sample data. Note that the dataset generation script has already done a bunch of preprocessing for us — it has tokenized, stemmed, and lemmatized the output using the NLTK tool. The script also replaced entities like names, locations, organizations, URLs, and system paths with special tokens. This preprocessing isn’t strictly necessary, but it’s likely to improve performance by a few percent. The average context is 86 words long and the average utterance is 17 words long. Check out the Jupyter notebook to see the data analysis. The data set comes with test and validation sets. The format of these is different from that of the training data. Each record in the test/validation set consists of a context, a ground truth utterance (the real response) and 9 incorrect utterances called distractors. The goal of the model is to assign the highest score to the true utterance, and lower scores to wrong utterances. There are various ways to evaluate how well our model does. A commonly used metric is recall@k. Recall@k means that we let the model pick the k best responses out of the 10 possible responses (1 true and 9 distractors). If the correct one is among the picked ones we mark that test example as correct. So, a larger k means that the task becomes easier. If we set k=10 we get a recall of 100% because we only have 10 responses to pick from. If we set k=1 the model has only one chance to pick the right response. At this point you may be wondering how the 9 distractors were chosen. In this data set the 9 distractors were picked at random. However, in the real world you may have millions of possible responses and you don’t know which one is correct. You can’t possibly evaluate a million potential responses to pick the one with the highest score — that’d be too expensive. Google’s Smart Reply uses clustering techniques to come up with a set of possible responses to choose from first. Or, if you only have a few hundred potential responses in total you could just evaluate all of them. Before starting with fancy Neural Network models let’s build some simple baseline models to help us understand what kind of performance we can expect. We’ll use the following function to evaluate our recall@k metric (a reconstruction of it is shown below): Here, y is a list of our predictions sorted by score in descending order, and y_test is the actual label. For example, a y of [0,3,1,2,5,6,4,7,8,9] would mean that utterance number 0 got the highest score, and utterance 9 got the lowest score. Remember that we have 10 utterances for each test example, and the first one (index 0) is always the correct one because the utterance column comes before the distractor columns in our data. Intuitively, a completely random predictor should get a score of 10% for recall@1, a score of 20% for recall@2, and so on. Let’s see if that’s the case. Great, seems to work. Of course we don’t just want a random predictor. Another baseline that was discussed in the original paper is a tf-idf predictor. tf-idf stands for “term frequency — inverse document frequency” and it measures how important a word in a document is relative to the whole corpus. Without going into too much detail (you can find many tutorials about tf-idf on the web), documents that have similar content will have similar tf-idf vectors. 
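The recall@k helper referred to above ("We'll use the following function...") was an embedded snippet that did not survive the copy; a reconstruction consistent with the description (names are assumptions) is:

```python
import numpy as np

def evaluate_recall(y, y_test, k=1):
    """y: for each example, candidate indices sorted by predicted score (best first).
    y_test: the index of the true utterance, which is always 0 in this data set."""
    num_correct = 0
    for predictions, label in zip(y, y_test):
        if label in predictions[:k]:
            num_correct += 1
    return num_correct / len(y)

# Sanity check: a random predictor should score roughly k/10.
y_random = [np.random.permutation(10) for _ in range(1000)]
y_test = np.zeros(1000, dtype=int)
print(evaluate_recall(y_random, y_test, k=1))  # ~0.1
print(evaluate_recall(y_random, y_test, k=2))  # ~0.2
```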
Intuitively, if a context and a response have similar words they are more likely to be a correct pair. At least more likely than random. Many libraries out there (such as scikit-learn) come with built-in tf-idf functions, so it’s very easy to use. Let’s build a tf-idf predictor and see how well it performs. We can see that the tf-idf model performs significantly better than the random model. It’s far from perfect though. The assumptions we made aren’t that great. First of all, a response doesn’t necessarily need to be similar to the context to be correct. Secondly, tf-idf ignores word order, which can be an important signal. With a Neural Network model we can do a bit better. The Deep Learning model we will build in this post is called a Dual Encoder LSTM network. This type of network is just one of many we could apply to this problem and it’s not necessarily the best one. You can come up with all kinds of Deep Learning architectures that haven’t been tried yet — it’s an active research area. For example, the seq2seq model often used in Machine Translation would probably do well on this task. The reason we are going for the Dual Encoder is because it has been reported to give decent performance on this data set. This means we know what to expect and can be sure that our implementation is correct. Applying other models to this problem would be an interesting project. The Dual Encoder LSTM we’ll build looks like this (paper): It roughly works as follows: To train the network, we also need a loss (cost) function. We’ll use the binary cross-entropy loss common for classification problems. Let’s call our true label for a context-response pair y. This can be either 1 (actual response) or 0 (incorrect response). Let’s call our predicted probability from 4. above y’. Then, the cross entropy loss is calculated as L= −y * ln(y’) − (1 − y) * ln(1−y’). The intuition behind this formula is simple. If y=1 we are left with L = -ln(y’), which penalizes a prediction far away from 1, and if y=0 we are left with L= −ln(1−y’), which penalizes a prediction far away from 0. For our implementation we’ll use a combination of numpy, pandas, Tensorflow and TF Learn (a combination of high-level convenience functions for Tensorflow). The dataset originally comes in CSV format. We could work directly with CSVs, but it’s better to convert our data into Tensorflow’s proprietary Example format. (Quick side note: There’s alsotf.SequenceExample but it doesn’t seem to be supported by tf.learn yet). The main benefit of this format is that it allows us to load tensors directly from the input files and let Tensorflow handle all the shuffling, batching and queuing of inputs. As part of the preprocessing we also create a vocabulary. This means we map each word to an integer number, e.g. “cat” may become 2631. The TFRecord files we will generate store these integer numbers instead of the word strings. We will also save the vocabulary so that we can map back from integers to words later on. Each Example contains the following fields: The preprocessing is done by the prepare_data.py Python script, which generates 3 files:train.tfrecords, validation.tfrecords and test.tfrecords. You can run the script yourself or download the data files here. In order to use Tensorflow’s built-in support for training and evaluation we need to create an input function — a function that returns batches of our input data. In fact, because our training and test data have different formats, we need different input functions for them. 
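The tf-idf baseline mentioned above ("Let's build a tf-idf predictor...") was another lost snippet; a sketch using scikit-learn, with the Context and Utterance column names assumed from the corpus description:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

class TFIDFPredictor:
    def __init__(self):
        self.vectorizer = TfidfVectorizer()

    def train(self, data):
        # Fit the vocabulary and idf weights on contexts and utterances together.
        self.vectorizer.fit(np.append(data.Context.values, data.Utterance.values))

    def predict(self, context, utterances):
        # Score each candidate utterance by its overlap with the context vector.
        vector_context = self.vectorizer.transform([context])
        vector_doc = self.vectorizer.transform(utterances)
        scores = np.asarray(vector_doc.dot(vector_context.T).todense()).flatten()
        # Return candidate indices sorted by score, best first.
        return np.argsort(scores)[::-1]
```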
The input function should return a batch of features and labels (if available). Something along the lines of: Because we need different input functions during training and evaluation and because we hate code duplication we create a wrapper called create_input_fn that creates an input function for the appropriate mode. It also takes a few other parameters. Here’s the definition we’re using: The complete code can be found in udc_inputs.py. On a high level, the function does the following: We already mentioned that we want to use the recall@k metric to evaluate our model. Luckily, Tensorflow already comes with many standard evaluation metrics that we can use, including recall@k. To use these metrics we need to create a dictionary that maps from a metric name to a function that takes the predictions and label as arguments (a sketch of this dictionary follows below): Above, we use functools.partial to convert a function that takes 3 arguments to one that only takes 2 arguments. Don’t let the name streaming_sparse_recall_at_k confuse you. Streaming just means that the metric is accumulated over multiple batches, and sparse refers to the format of our labels. This brings us to an important point: What exactly is the format of our predictions during evaluation? During training, we predict the probability of the example being correct. But during evaluation our goal is to score the utterance and 9 distractors and pick the best one — we don’t simply predict correct/incorrect. This means that during evaluation each example should result in a vector of 10 scores, e.g. [0.34, 0.11, 0.22, 0.45, 0.01, 0.02, 0.03, 0.08, 0.33, 0.11], where the scores correspond to the true response and the 9 distractors respectively. Each utterance is scored independently, so the probabilities don’t need to add up to 1. Because the true response is always element 0 in the array, the label for each example is 0. The example above would be counted as classified incorrectly by recall@1 because the third distractor got a probability of 0.45 while the true response only got 0.34. It would be scored as correct by recall@2 however. Before writing the actual neural network code I like to write the boilerplate code for training and evaluating the model. That’s because, as long as you adhere to the right interfaces, it’s easy to swap out what kind of network you are using. Let’s assume we have a model function model_fn that takes as inputs our batched features, labels and mode (train or evaluation) and returns the predictions. Then we can write general-purpose code to train our model as follows: Here we create an estimator for our model_fn, two input functions for training and evaluation data, and our evaluation metrics dictionary. We also define a monitor that evaluates our model every FLAGS.eval_every steps during training. Finally, we train the model. The training runs indefinitely, but Tensorflow automatically saves checkpoint files in MODEL_DIR, so you can stop the training at any time. A fancier technique would be to use early stopping, which means you automatically stop training when a validation set metric stops improving (i.e. you are starting to overfit). You can see the full code in udc_train.py. Two things I want to mention briefly are the usage of FLAGS and hparams. FLAGS is a way to give command line parameters to the program (similar to Python’s argparse). hparams is a custom object we create in hparams.py that holds hyperparameters, knobs we can tweak, of our model. This hparams object is given to the model when we instantiate it. 
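A sketch of the metrics dictionary described above, using the tf.contrib path the post names (TF 1.x era; treat the exact module location as an assumption):

```python
import functools
import tensorflow as tf

def create_evaluation_metrics():
    eval_metrics = {}
    for k in [1, 2, 5, 10]:
        # Bind k so the 3-argument metric becomes a 2-argument (predictions, labels) function.
        eval_metrics["recall_at_%d" % k] = functools.partial(
            tf.contrib.metrics.streaming_sparse_recall_at_k, k=k)
    return eval_metrics
```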
Now that we have set up the boilerplate code around inputs, parsing, evaluation and training it’s time to write code for our Dual LSTM neural network. Because we have different formats of training and evaluation data I’ve written a create_model_fn wrapper that takes care of bringing the data into the right format for us. It takes a model_impl argument, which is a function that actually makes predictions. In our case it’s the Dual Encoder LSTM we described above, but we could easily swap it out for some other neural network. Let’s see what that looks like: The full code is in dual_encoder.py. Given this, we can now instantiate our model function in the main routine in udc_train.py that we defined earlier. That’s it! We can now run python udc_train.py and it should start training our network, occasionally evaluating recall on our validation data (you can choose how often you want to evaluate using the --eval_every switch). To get a complete list of all available command line flags that we defined using tf.flags and hparams you can run python udc_train.py --help. ... INFO:tensorflow:Results after 270 steps (0.248 sec/batch): recall_at_1 = 0.507581018519, recall_at_2 = 0.689699074074, recall_at_5 = 0.913020833333, recall_at_10 = 1.0, loss = 0.5383 ... After you’ve trained the model you can evaluate it on the test set using python udc_test.py --model_dir=$MODEL_DIR_FROM_TRAINING, e.g. python udc_test.py --model_dir=~/github/chatbot-retrieval/runs/1467389151. This will run the recall@k evaluation metrics on the test set instead of the validation set. Note that you must call udc_test.py with the same parameters you used during training. So, if you trained with --embedding_size=128 you need to call the test script with the same. After training for about 20,000 steps (around an hour on a fast GPU) our model gets the following results on the test set: While recall@1 is close to our TFIDF model, recall@2 and recall@5 are significantly better, suggesting that our neural network assigns higher scores to the correct answers. The original paper reported 0.55, 0.72 and 0.92 for recall@1, recall@2, and recall@5 respectively, but I haven’t been able to reproduce scores quite as high. Perhaps additional data preprocessing or hyperparameter optimization may bump scores up a bit more. You can modify and run udc_predict.py to get probability scores for unseen data. For example python udc_predict.py --model_dir=./runs/1467576365/ outputs: You could imagine feeding in 100 potential responses to a context and then picking the one with the highest score. In this post we’ve implemented a retrieval-based neural network model that can assign scores to potential responses given a conversation context. There is still a lot of room for improvement, however. One can imagine that other neural networks do better on this task than a dual LSTM encoder. There is also a lot of room for hyperparameter optimization, or improvements to the preprocessing step. The Code and data for this tutorial are on Github, so check it out. Denny’s Blogs: http://blog.dennybritz.com/ & http://www.wildml.com/ Mark Clark: https://www.linkedin.com/in/markwclark I hope you have found this Condensed NLP Guide helpful. I wanted to publish a longer version (imagine if this were 5x longer) but I don’t want to scare the readers away. 
As someone who develops the front end of bots (user experience, personality, flow, etc.), I find it extremely helpful to understand the stack and know the technological pros and cons, so as to be able to design effectively around NLP/NLU limitations. Ultimately, a lot of the issues bots face today (e.g. context) can be designed around effectively. If you have any suggestions regarding this article and how it can be improved, feel free to drop me a line. Creator of 10+ bots, including Smart Notes Bot. Founder of Chatbot’s Life, where we help companies create great chatbots and share our insights along the way. Want to Talk Bots? The best way to chat directly and see my latest projects is via my Personal Bot: Stefan’s Bot. Currently, I’m consulting with a number of companies on their chatbot projects. To get feedback on your Chatbot project or to start a Chatbot project, contact me.
Arthur Juliani
3.5K
8
https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-8-asynchronous-actor-critic-agents-a3c-c88f72a5e9f2?source=tag_archive---------7----------------
Simple Reinforcement Learning with Tensorflow Part 8: Asynchronous Actor-Critic Agents (A3C)
In this article I want to provide a tutorial on implementing the Asynchronous Advantage Actor-Critic (A3C) algorithm in Tensorflow. We will use it to solve a simple challenge in a 3D Doom environment! With the holidays right around the corner, this will be my final post for the year, and I hope it will serve as a culmination of all the previous topics in the series. If you haven’t yet, or are new to Deep Learning and Reinforcement Learning, I suggest checking out the earlier entries in the series before going through this post in order to understand all the building blocks which will be utilized here. If you have been following the series: thank you! I have learned so much about RL in the past year, and am happy to have shared it with everyone through this article series. So what is A3C? The A3C algorithm was released by Google’s DeepMind group earlier this year, and it made a splash by... essentially obsoleting DQN. It was faster, simpler, more robust, and able to achieve much better scores on the standard battery of Deep RL tasks. On top of all that it could work in continuous as well as discrete action spaces. Given this, it has become the go-to Deep RL algorithm for new challenging problems with complex state and action spaces. In fact, OpenAI just released a version of A3C as their “universal starter agent” for working with their new (and very diverse) set of Universe environments. Asynchronous Advantage Actor-Critic is quite a mouthful. Let’s start by unpacking the name, and from there, begin to unpack the mechanics of the algorithm itself. Asynchronous: Unlike DQN, where a single agent represented by a single neural network interacts with a single environment, A3C utilizes multiple incarnations of the above in order to learn more efficiently. In A3C there is a global network, and multiple worker agents which each have their own set of network parameters. Each of these agents interacts with it’s own copy of the environment at the same time as the other agents are interacting with their environments. The reason this works better than having a single agent (beyond the speedup of getting more work done), is that the experience of each agent is independent of the experience of the others. In this way the overall experience available for training becomes more diverse. Actor-Critic: So far this series has focused on value-iteration methods such as Q-learning, or policy-iteration methods such as Policy Gradient. Actor-Critic combines the benefits of both approaches. In the case of A3C, our network will estimate both a value function V(s) (how good a certain state is to be in) and a policy π(s) (a set of action probability outputs). These will each be separate fully-connected layers sitting at the top of the network. Critically, the agent uses the value estimate (the critic) to update the policy (the actor) more intelligently than traditional policy gradient methods. Advantage: If we think back to our implementation of Policy Gradient, the update rule used the discounted returns from a set of experiences in order to tell the agent which of its actions were “good” and which were “bad.” The network was then updated in order to encourage and discourage actions appropriately. The insight of using advantage estimates rather than just discounted returns is to allow the agent to determine not just how good its actions were, but how much better they turned out to be than expected. Intuitively, this allows the algorithm to focus on where the network’s predictions were lacking. 
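To make the advantage idea concrete before unpacking the rest of the algorithm, here is a minimal NumPy sketch of how discounted returns and a simple advantage estimate could be computed from a short rollout. The numbers are illustrative, and the full implementation in the accompanying repository uses a lower-variance variant (Generalized Advantage Estimation) rather than this naive version:

```python
import numpy as np

def discount(rewards, gamma=0.99):
    """Compute discounted returns R_t = r_t + gamma * R_{t+1} for a rollout."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Example rollout: per-step rewards and the critic's value estimates V(s_t).
rewards = np.array([0.0, 0.0, 1.0])
values = np.array([0.2, 0.5, 0.9])

returns = discount(rewards)        # used as an estimate of Q(s_t, a_t)
advantages = returns - values      # A(s_t, a_t) ~= R_t - V(s_t)
print(returns, advantages)
```

Positive advantages mark actions that turned out better than the critic expected, and those are the ones the policy update reinforces.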
If you recall from the Dueling Q-Network architecture, the advantage function is as follows: A(s, a) = Q(s, a) - V(s). Since we won’t be determining the Q values directly in A3C, we can use the discounted returns (R) as an estimate of Q(s,a) to allow us to generate an estimate of the advantage. In this tutorial, we will go even further, and utilize a slightly different version of advantage estimation with lower variance referred to as Generalized Advantage Estimation. In the process of building this implementation of the A3C algorithm, I used as reference the quality implementations by DennyBritz and OpenAI, both of which I highly recommend if you’d like to see alternatives to my code here. Each section embedded here is taken out of context for instructional purposes, and won’t run on its own. To view and run the full, functional A3C implementation, see my Github repository. The general outline of the code architecture is: The A3C algorithm begins by constructing the global network. This network will consist of convolutional layers to process spatial dependencies, followed by an LSTM layer to process temporal dependencies, and finally, value and policy output layers. Below is example code for establishing the network graph itself. Next, a set of worker agents, each with its own network and environment, is created. Each of these workers is run on a separate processor thread, so there should be no more workers than there are threads on your CPU. ~ From here we go asynchronous ~ Each worker begins by setting its network parameters to those of the global network. We can do this by constructing a Tensorflow op which sets each variable in the local worker network to the equivalent variable value in the global network. Each worker then interacts with its own copy of the environment and collects experience. Each keeps a list of experience tuples (observation, action, reward, done, value) that is constantly added to from interactions with the environment. Once the worker’s experience history is large enough, we use it to determine the discounted return and advantage, and use those to calculate value and policy losses. We also calculate an entropy (H) of the policy. This corresponds to the spread of action probabilities. If the policy outputs actions with relatively similar probabilities, then entropy will be high, but if the policy suggests a single action with a large probability then entropy will be low. We use the entropy as a means of improving exploration, by encouraging the model to be conservative regarding its sureness of the correct action. A worker then uses these losses to obtain gradients with respect to its network parameters. Each of these gradients is typically clipped in order to prevent overly-large parameter updates which can destabilize the policy. A worker then uses the gradients to update the global network parameters. In this way, the global network is constantly being updated by each of the agents, as they interact with their environment. Once a successful update is made to the global network, the whole process repeats! The worker then resets its own network parameters to those of the global network, and the process begins again. To view the full and functional code, see the Github repository here. The robustness of A3C allows us to tackle a new generation of reinforcement learning challenges, one of which is 3D environments! We have come a long way from multi-armed bandits and grid-worlds, and in this tutorial, I have set up the code to allow for playing through the first VizDoom challenge. 
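Before moving on to the Doom environment itself, here is a hedged sketch of the parameter-copying op described above, which sets each local worker variable to the corresponding global value. The variable scope names are illustrative, not necessarily the ones used in the repository:

```python
import tensorflow as tf

def update_target_graph(from_scope, to_scope):
    """Build ops that copy trainable variables from one variable scope to another."""
    from_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, from_scope)
    to_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, to_scope)
    # One assign op per variable pair; running them syncs the worker with the global net.
    return [to_var.assign(from_var)
            for from_var, to_var in zip(from_vars, to_vars)]

# At the start of each worker episode (hypothetical scope names):
# sess.run(update_target_graph('global', 'worker_0'))
```

The same helper can be reused for every worker by passing that worker’s scope name, which is what makes the asynchronous setup cheap to wire up.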
VizDoom is a system to allow for RL research using the classic Doom game engine. The maintainers of VizDoom recently created a pip package, so installing it is as simple as: pip install vizdoom Once it is installed, we will be using the basic.wad environment, which is provided in the Github repository, and needs to be placed in the working directory. The challenge consists of controlling an avatar from a first person perspective in a single square room. There is a single enemy on the opposite side of the room, which appears in a random location each episode. The agent can only move to the left or right, and fire a gun. The goal is to shoot the enemy as quickly as possible using as few bullets as possible. The agent has 300 time steps per episode to shoot the enemy. Shooting the enemy yields a reward of 1, and each time step as well as each shot yields a small penalty. After about 500 episodes per worker agent, the network learns a policy to quickly solve the challenge. Feel free to adjust parameters such as learning rate, clipping magnitude, update frequency, etc. to attempt to achieve ever greater performance or utilize A3C in your own RL tasks. I hope this tutorial has been helpful to those new to A3C and asynchronous reinforcement learning! Now go forth and build AIs. (There are a lot of moving parts in A3C, so if you discover a bug, or find a better way to do something, please don’t hesitate to bring it up here or in the Github. I am more than happy to incorporate changes and feedback to improve the algorithm.) If you’d like to follow my writing on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on twitter @awjuliani. If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated! More from my Simple Reinforcement Learning with Tensorflow series: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @Unity3D & Cognitive Neuroscience PhD student. Exploring frontier technology through the lens of artificial intelligence, data science, and the shape of things to come
Alexandr Honchar
1.91K
7
https://medium.com/machine-learning-world/neural-networks-for-algorithmic-trading-part-one-simple-time-series-forecasting-f992daa1045a?source=tag_archive---------8----------------
Neural networks for algorithmic trading. Simple time series forecasting
Ciao, people! This is the first part of my experiments on the application of deep learning to finance, in particular to algorithmic trading. I want to implement a trading system from scratch based only on deep learning approaches, so for any problem we have here (price prediction, trading strategy, risk management) we are going to use different variations of artificial neural networks (ANNs) and check how well they can handle it. For now I plan to work on the following sections: I highly recommend you check out the code and IPython Notebook in this repository. In this first part, I want to show how MLPs, CNNs and RNNs can be used for financial time series prediction. In this part we are not going to use any feature engineering. Let’s just consider a historical dataset of S&P 500 index price movements. We have information from 1950 to 2016 about open, close, high and low prices for every day of the year, as well as the volume of trades. First, we will try just to predict the close price at the end of the next day; second, we will try to predict the return (close price - open price). Download the dataset from Yahoo Finance or from this repository. We will consider our problem as 1) a regression problem (trying to forecast the exact close price or return for the next day) and 2) a binary classification problem (the price will go up [1; 0] or down [0; 1]). For training the NNs we are going to use the Keras framework. First let’s prepare our data for training. We want to predict the t+1 value based on the previous N days of information. For example, having close prices from the past 30 days on the market, we want to predict what the price will be tomorrow, on the 31st day. We use the first 90% of the time series as the training set (consider it as historical data) and the last 10% as the testing set for model evaluation. Here is an example of loading, splitting into training samples and preprocessing the raw input data: It will be just a 2-hidden-layer perceptron. The number of hidden neurons is chosen empirically; we will work on hyperparameter optimization in the next sections. Between the two hidden layers we add one Dropout layer to prevent overfitting. The important things are Dense(1), Activation(‘linear’) and ‘mse’ in the compile section. We want one output that can be in any range (we predict a real value) and our loss function is defined as the mean squared error. Let’s see what happens if we just pass chunks of 20 days of close prices and predict the price on the 21st day. The final MSE = 46.3635263557, but that’s not very representative information. Below is a plot of the predictions for the first 150 points of the test dataset. The black line is the actual data, the blue one the predictions. We can clearly see that our algorithm is not even close in value, but it can learn the trend. Let’s scale our data using sklearn’s preprocessing.scale() method so that our time series has zero mean and unit variance, and train the same MLP. Now we have MSE = 0.0040424330518 (but it is on scaled data). On the plot below you can see the actual scaled time series (black) and our forecast (blue) for it: To use this model in the real world we should return to the unscaled time series. We can do this by multiplying our prediction by the standard deviation of the time series we used to make the prediction (20 unscaled time steps) and adding its mean value: The MSE in this case equals 937.963649937. Here is the plot of the restored predictions (red) and real data (green): Not bad, is it? But let’s try more sophisticated algorithms for this problem! 
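As a reference point before the more sophisticated models, here is a minimal sketch of the kind of MLP regressor described above. The hidden layer sizes, dropout rate and optimizer are illustrative choices for the sketch rather than values taken from the repository:

```python
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation

def build_mlp(window_size=20):
    """2-hidden-layer perceptron with a linear output for price regression."""
    model = Sequential()
    model.add(Dense(64, input_dim=window_size, activation='relu'))
    model.add(Dropout(0.5))                # between the hidden layers, against overfitting
    model.add(Dense(64, activation='relu'))
    model.add(Dense(1))
    model.add(Activation('linear'))        # unbounded real-valued output
    model.compile(optimizer='adam', loss='mse')
    return model

# Hypothetical usage, with X_train of shape (num_samples, 20):
# model = build_mlp()
# model.fit(X_train, y_train, epochs=10, batch_size=128, validation_split=0.1)
```

The Dense(1) plus linear activation and the 'mse' loss are exactly the pieces the text calls out as important for the regression setup.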
I am not going to dive into the theory of convolutional neural networks; you can check out these amazing resources: Let’s define a 2-layer convolutional neural network (a combination of convolution and max-pooling layers) with one fully-connected layer and the same output as earlier: Let’s check out the results. The MSEs for the scaled and restored data are: 0.227074542433; 935.520550172. Plots are below: Even looking at the MSE on scaled data, this network learned much less well. Most probably, the deeper architecture needs more data for training, or it just overfitted due to too high a number of filters or layers. We will consider this issue later. As a recurrent architecture I want to use two stacked LSTM layers (read more about LSTMs here). Plots of the forecasts are below, with MSEs = 0.0246238639582; 939.948636707. The RNN forecast looks more like a moving average model; it can’t learn and predict all the fluctuations. So, it’s a somewhat unexpected result, but we can see that MLPs work better for this time series forecasting task. Let’s check out what happens if we switch from a regression to a classification problem. Now we will use not close prices but the daily return (close price - open price), and we want to predict whether the close price is higher or lower than the open price based on the returns of the last 20 days. The code changes just a bit: we change our last Dense layer to have output [0; 1] or [1; 0] and add a softmax output to get probabilistic output. To load binary outputs, change the following line in the code: Also, we change the loss function to binary cross-entropy and add an accuracy metric. Oh, it’s no better than random guessing (50% accuracy); let’s try something better. Check out the results below. We can see that treating financial time series prediction as a regression problem is the better approach; it can learn the trend and predict prices close to the actual ones. What was surprising to me was that MLPs handle this sequence data better than CNNs or RNNs, which are supposed to work better with time series. I explain this by the pretty small dataset (~16k time stamps) and the naive choice of hyperparameters. You can reproduce the results and improve on them using the code from the repository. I think we can get better results in both regression and classification by using different features (not only the scaled time series), like technical indicators or trading volume. Also, we can try more frequent data, say minute-by-minute ticks, to have more training data. All these things I’m going to do later, so stay tuned :) 🇺🇦 🇮🇹 AI entrepreneur, blogger and researcher. Making machines work 💻, learn 📕 and like 👍, but humans create 🎨, discover 🚀 and love ❤️ The best about Machine Learning, Computer Vision, Deep Learning, Natural language processing and other.
Arthur Juliani
1.7K
8
https://medium.com/@awjuliani/simple-reinforcement-learning-with-tensorflow-part-4-deep-q-networks-and-beyond-8438a3e2b8df?source=tag_archive---------9----------------
Simple Reinforcement Learning with Tensorflow Part 4: Deep Q-Networks and Beyond
Welcome to the latest installment of my Reinforcement Learning series. In this tutorial we will be walking through the creation of a Deep Q-Network. It will be built upon the simple one layer Q-network we created in Part 0, so I would recommend reading that first if you are new to reinforcement learning. While our ordinary Q-network was able to barely perform as well as the Q-Table in a simple game environment, Deep Q-Networks are much more capable. In order to transform an ordinary Q-Network into a DQN we will be making the following improvements: It was these three innovations that allowed the Google DeepMind team to achieve superhuman performance on dozens of Atari games using their DQN agent. We will be walking through each individual improvement, and showing how to implement it. We won’t stop there though. The pace of Deep Learning research is extremely fast, and the DQN of 2014 is no longer the most advanced agent around anymore. I will discuss two simple additional improvements to the DQN architecture, Double DQN and Dueling DQN, that allow for improved performance, stability, and faster training time. In the end we will have a network that can tackle a number of challenging Atari games, and we will demonstrate how to train the DQN to learn a basic navigation task. Since our agent is going to be learning to play video games, it has to be able to make sense of the game’s screen output in a way that is at least similar to how humans or other intelligent animals are able to. Instead of considering each pixel independently, convolutional layers allow us to consider regions of an image, and maintain spatial relationships between the objects on the screen as we send information up to higher levels of the network. In this way, they act similarly to human receptive fields. Indeed there is a body of research showing that convolutional neural network learn representations that are similar to those of the primate visual cortex. As such, they are ideal for the first few elements within our network. In Tensorflow, we can utilize the tf.contrib.layers.convolution2d function to easily create a convolutional layer. We write for function as follows: Here num_outs refers to how many filters we would like to apply to the previous layer. kernel_size refers to how large a window we would like to slide over the previous layer. Stride refers to how many pixels we want to skip as we slide the window across the layer. Finally, padding refers to whether we want our window to slide over just the bottom layer (“VALID”) or add padding around it (“SAME”) in order to ensure that the convolutional layer has the same dimensions as the previous layer. For more information, see the Tensorflow documentation. The second major addition to make DQNs work is Experience Replay. The basic idea is that by storing an agent’s experiences, and then randomly drawing batches of them to train the network, we can more robustly learn to perform well in the task. By keeping the experiences we draw random, we prevent the network from only learning about what it is immediately doing in the environment, and allow it to learn from a more varied array of past experiences. Each of these experiences are stored as a tuple of <state,action,reward,next state>. The Experience Replay buffer stores a fixed number of recent memories, and as new ones come in, old ones are removed. When the time comes to train, we simply draw a uniform batch of random memories from the buffer, and train our network with them. 
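Here is a minimal sketch of such a replay buffer, a simplified stand-in rather than the tutorial’s exact class:

```python
import random

class ExperienceBuffer:
    """Fixed-size store of (state, action, reward, next_state) experience tuples."""

    def __init__(self, buffer_size=50000):
        self.buffer = []
        self.buffer_size = buffer_size

    def add(self, experience):
        # Once the buffer is full, drop the oldest memory to make room for the new one.
        if len(self.buffer) >= self.buffer_size:
            self.buffer.pop(0)
        self.buffer.append(experience)

    def sample(self, batch_size):
        # Draw a uniform random batch of past experiences for a training step.
        return random.sample(self.buffer, batch_size)
```

Sampling uniformly across old and new memories is what breaks the correlation between consecutive frames and keeps the updates from chasing only the most recent behavior.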
For our DQN, we will build a simple class along these lines that handles storing and retrieving memories. The third major addition to the DQN that makes it unique is the utilization of a second network during the training procedure. This second network is used to generate the target-Q values that will be used to compute the loss for every action during training. Why not just use one network for both estimations? The issue is that at every step of training, the Q-network’s values shift, and if we are using a constantly shifting set of values to adjust our network values, then the value estimations can easily spiral out of control. The network can become destabilized by falling into feedback loops between the target and estimated Q-values. In order to mitigate that risk, the target network’s weights are fixed, and only periodically or slowly updated to the primary Q-network’s values. In this way training can proceed in a more stable manner. Instead of updating the target network periodically and all at once, we will be updating it frequently, but slowly. This technique was introduced in another DeepMind paper earlier this year, where they found that it stabilized the training process. With the additions above, we have everything we need to replicate the DQN of 2014. But the world moves fast, and a number of improvements above and beyond the DQN architecture described by DeepMind have allowed for even greater performance and stability. Before training your new DQN on your favorite ATARI game, I would suggest checking the newer additions out. I will provide a description and some code for two of them: Double DQN, and Dueling DQN. Both are simple to implement, and by combining both techniques, we can achieve better performance with faster training times. The main intuition behind Double DQN is that the regular DQN often overestimates the Q-values of the potential actions to take in a given state. While this would be fine if all actions were always overestimated equally, there was reason to believe this wasn’t the case. You can easily imagine that if certain suboptimal actions were regularly given higher Q-values than optimal actions, the agent would have a hard time ever learning the ideal policy. In order to correct for this, the authors of the DDQN paper propose a simple trick: instead of taking the max over Q-values when computing the target-Q value for our training step, we use our primary network to choose an action, and our target network to generate the target Q-value for that action. By decoupling the action choice from the target Q-value generation, we are able to substantially reduce the overestimation, and train faster and more reliably. The new DDQN equation for updating the target value becomes Q-target = r + γ * Q_target(s', argmax_a Q_primary(s', a)). In order to explain the reasoning behind the architecture changes that Dueling DQN makes, we first need to explain a few additional reinforcement learning terms. The Q-values that we have been discussing so far correspond to how good it is to take a certain action given a certain state. This can be written as Q(s,a). This action-given-state value can actually be decomposed into two more fundamental notions of value. The first is the value function V(s), which says simply how good it is to be in any given state. The second is the advantage function A(a), which tells how much better taking a certain action would be compared to the others. We can then think of Q as being the combination of V and A. 
More formally: The goal of Dueling DQN is to have a network that separately computes the advantage and value functions, and combines them back into a single Q-function only at the final layer. It may seem somewhat pointless to do this at first glance. Why decompose a function that we will just put back together? The key to realizing the benefit is to appreciate that our reinforcement learning agent may not need to care about both value and advantage at any given time. For example: imagine sitting outside in a park watching the sunset. It is beautiful, and highly rewarding to be sitting there. No action needs to be taken, and it doesn’t really make sense to think of the value of sitting there as being conditioned on anything beyond the environmental state you are in. We can achieve more robust estimates of state value by decoupling it from the necessity of being attached to specific actions. Now that we have learned all the tricks to get the most out of our DQN, let’s actually try it on a game environment! While the DQN we have described above could learn ATARI games with enough training, getting the network to perform well on those games takes at least a day of training on a powerful machine. For educational purposes, I have built a simple game environment which our DQN learns to master in a couple hours on a moderately powerful machine (I am using a GTX970). In the environment the agent controls a blue square, and the goal is to navigate to the green squares (reward +1) while avoiding the red squares (reward -1). At the start of each episode all squares are randomly placed within a 5x5 grid-world. The agent has 50 steps to achieve as large a reward as possible. Because they are randomly positioned, the agent needs to do more than simply learn a fixed path, as was the case in the FrozenLake environment from Tutorial 0. Instead the agent must learn a notion of spatial relationships between the blocks. And indeed, it is able to do just that! The game environment outputs 84x84x3 color images, and uses function calls as similar to the OpenAI gym as possible. In doing so, it should be easy to modify this code to work on any of the OpenAI atari games. I encourage those with the time and computing resources necessary to try getting the agent to perform well in an ATARI game. The hyperparameters may need some tuning, but it is definitely possible. Good luck! If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated! If you’d like to follow my work on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on twitter @awjliani. More from my Simple Reinforcement Learning with Tensorflow series: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @Unity3D & Cognitive Neuroscience PhD student.
Vishal Maini
32K
10
https://medium.com/machine-learning-for-humans/why-machine-learning-matters-6164faf1df12?source=tag_archive---------0----------------
A Beginner’s Guide to AI/ML 🤖👶 – Machine Learning for Humans – Medium
Part 1: Why Machine Learning Matters. The big picture of artificial intelligence and machine learning — past, present, and future. Part 2.1: Supervised Learning. Learning with an answer key. Introducing linear regression, loss functions, overfitting, and gradient descent. Part 2.2: Supervised Learning II. Two methods of classification: logistic regression and SVMs. Part 2.3: Supervised Learning III. Non-parametric learners: k-nearest neighbors, decision trees, random forests. Introducing cross-validation, hyperparameter tuning, and ensemble models. Part 3: Unsupervised Learning. Clustering: k-means, hierarchical. Dimensionality reduction: principal components analysis (PCA), singular value decomposition (SVD). Part 4: Neural Networks & Deep Learning. Why, where, and how deep learning works. Drawing inspiration from the brain. Convolutional neural networks (CNNs), recurrent neural networks (RNNs). Real-world applications. Part 5: Reinforcement Learning. Exploration and exploitation. Markov decision processes. Q-learning, policy learning, and deep reinforcement learning. The value learning problem. Appendix: The Best Machine Learning Resources. A curated list of resources for creating your machine learning curriculum. This guide is intended to be accessible to anyone. Basic concepts in probability, statistics, programming, linear algebra, and calculus will be discussed, but it isn’t necessary to have prior knowledge of them to gain value from this series. Artificial intelligence will shape our future more powerfully than any other innovation this century. Anyone who does not understand it will soon find themselves feeling left behind, waking up in a world full of technology that feels more and more like magic. The rate of acceleration is already astounding. After a couple of AI winters and periods of false hope over the past four decades, rapid advances in data storage and computer processing power have dramatically changed the game in recent years. In 2015, Google trained a conversational agent (AI) that could not only convincingly interact with humans as a tech support helpdesk, but also discuss morality, express opinions, and answer general facts-based questions. The same year, DeepMind developed an agent that surpassed human-level performance at 49 Atari games, receiving only the pixels and game score as inputs. Soon after, in 2016, DeepMind obsoleted their own achievement by releasing a new state-of-the-art gameplay method called A3C. Meanwhile, AlphaGo defeated one of the best human players at Go — an extraordinary achievement in a game dominated by humans for two decades after machines first conquered chess. Many masters could not fathom how it would be possible for a machine to grasp the full nuance and complexity of this ancient Chinese war strategy game, with its 10^170 possible board positions (there are only 10^80 atoms in the universe). In March 2017, OpenAI created agents that invented their own language to cooperate and more effectively achieve their goal. Soon after, Facebook reportedly succeeded in training agents to negotiate and even lie. Just a few days ago (as of this writing), on August 11, 2017, OpenAI reached yet another incredible milestone by defeating the world’s top professionals in 1v1 matches of the online multiplayer game Dota 2. Much of our day-to-day technology is powered by artificial intelligence. Point your camera at the menu during your next trip to Taiwan and the restaurant’s selections will magically appear in English via the Google Translate app. 
Today AI is used to design evidence-based treatment plans for cancer patients, instantly analyze results from medical tests to escalate to the appropriate specialist immediately, and conduct scientific research for drug discovery. In everyday life, it’s increasingly commonplace to discover machines in roles traditionally occupied by humans. Really, don’t be surprised if a little housekeeping delivery bot shows up instead of a human next time you call the hotel desk to send up some toothpaste. In this series, we’ll explore the core machine learning concepts behind these technologies. By the end, you should be able to describe how they work at a conceptual level and be equipped with the tools to start building similar applications yourself. Artificial intelligence is the study of agents that perceive the world around them, form plans, and make decisions to achieve their goals. Its foundations include mathematics, logic, philosophy, probability, linguistics, neuroscience, and decision theory. Many fields fall under the umbrella of AI, such as computer vision, robotics, machine learning, and natural language processing. Machine learning is a subfield of artificial intelligence. Its goal is to enable computers to learn on their own. A machine’s learning algorithm enables it to identify patterns in observed data, build models that explain the world, and predict things without having explicit pre-programmed rules and models. The technologies discussed above are examples of artificial narrow intelligence (ANI), which can effectively perform a narrowly defined task. Meanwhile, we’re continuing to make foundational advances towards human-level artificial general intelligence (AGI), also known as strong AI. The definition of an AGI is an artificial intelligence that can successfully perform any intellectual task that a human being can, including learning, planning and decision-making under uncertainty, communicating in natural language, making jokes, manipulating people, trading stocks, or... reprogramming itself. And this last one is a big deal. Once we create an AI that can improve itself, it will unlock a cycle of recursive self-improvement that could lead to an intelligence explosion over some unknown time period, ranging from many decades to a single day. You may have heard this point referred to as the singularity. The term is borrowed from the gravitational singularity that occurs at the center of a black hole, an infinitely dense one-dimensional point where the laws of physics as we understand them start to break down. A recent report by the Future of Humanity Institute surveyed a panel of AI researchers on timelines for AGI, and found that “researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years” (Grace et al, 2017). We’ve personally spoken with a number of sane and reasonable AI practitioners who predict much longer timelines (the upper limit being “never”), and others whose timelines are alarmingly short — as little as a few years. The advent of greater-than-human-level artificial superintelligence (ASI) could be one of the best or worst things to happen to our species. It carries with it the immense challenge of specifying what AIs will want in a way that is friendly to humans. While it’s impossible to say what the future holds, one thing is certain: 2017 is a good time to start understanding how machines think. 
To go beyond the abstractions of a philosopher in an armchair and intelligently shape our roadmaps and policies with respect to AI, we must engage with the details of how machines see the world — what they “want”, their potential biases and failure modes, their temperamental quirks — just as we study psychology and neuroscience to understand how humans learn, decide, act, and feel. Machine learning is at the core of our journey towards artificial general intelligence, and in the meantime, it will change every industry and have a massive impact on our day-to-day lives. That’s why we believe it’s worth understanding machine learning, at least at a conceptual level — and we designed this series to be the best place to start. You don’t necessarily need to read the series cover-to-cover to get value out of it. Here are three suggestions on how to approach it, depending on your interests and how much time you have: Vishal most recently led growth at Upstart, a lending platform that utilizes machine learning to price credit, automate the borrowing process, and acquire users. He spends his time thinking about startups, applied cognitive science, moral philosophy, and the ethics of artificial intelligence. Samer is a Master’s student in Computer Science and Engineering at UCSD and co-founder of Conigo Labs. Prior to grad school, he founded TableScribe, a business intelligence tool for SMBs, and spent two years advising Fortune 100 companies at McKinsey. Samer previously studied Computer Science and Ethics, Politics, and Economics at Yale. Most of this series was written during a 10-day trip to the United Kingdom in a frantic blur of trains, planes, cafes, pubs and wherever else we could find a dry place to sit. Our aim was to solidify our own understanding of artificial intelligence, machine learning, and how the methods therein fit together — and hopefully create something worth sharing in the process. And now, without further ado, let’s dive into machine learning with Part 2.1: Supervised Learning! More from Machine Learning for Humans 🤖👶 A special thanks to Jonathan Eng, Edoardo Conti, Grant Schneider, Sunny Kumar, Stephanie He, Tarun Wadhwa, and Sachin Maini (series editor) for their significant contributions and feedback. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Research comms @DeepMindAI. Previously @Upstart, @Yale, @TrueVenturesTEC. Demystifying artificial intelligence & machine learning. Discussions on safe and intentional application of AI for positive social impact.
Tim Anglade
7K
23
https://medium.com/@timanglade/how-hbos-silicon-valley-built-not-hotdog-with-mobile-tensorflow-keras-react-native-ef03260747f3?source=tag_archive---------1----------------
How HBO’s Silicon Valley built “Not Hotdog” with mobile TensorFlow, Keras & React Native
The HBO show Silicon Valley released a real AI app that identifies hotdogs — and not hotdogs — like the one shown on season 4’s 4th episode (the app is now available on Android as well as iOS!) To achieve this, we designed a bespoke neural architecture that runs directly on your phone, and trained it with Tensorflow, Keras & Nvidia GPUs. While the use-case is farcical, the app is an approachable example of both deep learning, and edge computing. All AI work is powered 100% by the user’s device, and images are processed without ever leaving their phone. This provides users with a snappier experience (no round trip to the cloud), offline availability, and better privacy. This also allows us to run the app at a cost of $0, even under the load of a million users, providing significant savings compared to traditional cloud-based AI approaches. The app was developed in-house by the show, by a single developer, running on a single laptop & attached GPU, using hand-curated data. In that respect, it may provide a sense of what can be achieved today, with a limited amount of time & resources, by non-technical companies, individual developers, and hobbyists alike. In that spirit, this article attempts to give a detailed overview of steps involved to help others build their own apps. If you haven’t seen the show or tried the app (you should!), the app lets you snap a picture and then tells you whether it thinks that image is of a hotdog or not. It’s a straightforward use-case, that pays homage to recent AI research and applications, in particular ImageNet. While we’ve probably dedicated more engineering resources to recognizing hotdogs than anyone else, the app still fails in horrible and/or subtle ways. Conversely, it’s also sometimes able to recognize hotdogs in complex situations... According to Engadget, “It’s incredible. I’ve had more success identifying food with the app in 20 minutes than I have had tagging and identifying songs with Shazam in the past two years.” Have you ever found yourself reading Hacker News, thinking “they raised a 10M series A for that? I could build it in one weekend!” This app probably feels a lot like that, and the initial prototype was indeed built in a single weekend using Google Cloud Platform’s Vision API, and React Native. But the final app we ended up releasing on the app store required months of additional (part-time) work, to deliver meaningful improvements that would be difficult for an outsider to appreciate. We spent weeks optimizing overall accuracy, training time, inference time, iterating on our setup & tooling so we could have a faster development iterations, and spent a whole weekend optimizing the user experience around iOS & Android permissions (don’t even get me started on that one). All too often technical blog posts or academic papers skip over this part, preferring to present the final chosen solution. In the interest of helping others learn from our mistake & choices, we will present an abridged view of the approaches that didn’t work for us, before we describe the final architecture we ended up shipping in the next section. We chose React Native to build the prototype as it would give us an easy sandbox to experiment with, and would help us quickly support many devices. The experience ended up being a good one and we kept React Native for the remainder of the project: it didn’t always make things easy, and the design for the app was purposefully limited, but in the end React Native got the job done. 
The other main component we used for the prototype — Google Cloud’s Vision API was quickly abandoned. There were 3 main factors: For these reasons, we started experimenting with what’s trendily called “edge computing”, which for our purposes meant that after training our neural network on our laptop, we would export it and embed it directly into our mobile app, so that the neural network execution phase (or inference) would run directly inside the user’s phone. Through a chance encounter with Pete Warden of the TensorFlow team, we had become aware of its ability to run TensorFlow directly embedded on an iOS device, and started exploring that path. After React Native, TensorFlow became the second fixed part of our stack. It only took a day of work to integrate TensorFlow’s Objective-C++ camera example in our React Native shell. It took slightly longer to use their transfer learning script, which helps you retrain the Inception architecture to deal with a more specific image problem. Inception is the name of a family of neural architectures built by Google to deal with image recognition problems. Inception is available “pre-trained” which means the training phase has been completed and the weights are set. Most often for image recognition networks, they have been trained on ImageNet, a dataset containing over 20,000 different types of objects (hotdogs are one of them). However, much like Google Cloud’s Vision API, ImageNet training rewards breadth as much as depth here, and out-of-the-box accuracy on a single one of the 20,000+ categories can be lacking. As such, retraining (also called “transfer learning”) aims to take a full-trained neural net, and retrain it to perform better on the specific problem you’d like to handle. This usually involves some degree of “forgetting”, either by excising entire layers from the stack, or by slowly erasing the network’s ability to distinguish a type of object (e.g. chairs) in favor of better accuracy at recognizing the one you care about (i.e. hotdogs). While the network (Inception in this case) may have been trained on the 14M images contained in ImageNet, we were able to retrain it on a just a few thousand hotdog images to get drastically enhanced hotdog recognition. The big advantage of transfer learning are you will get better results much faster, and with less data than if you train from scratch. A full training might take months on multiple GPUs and require millions of images, while retraining can conceivably be done in hours on a laptop with a couple thousand images. One of the biggest challenges we encountered was understanding exactly what should count as a hotdog and what should not. Defining what a “hotdog” is ends up being surprisingly difficult (do cut up sausages count, and if so, which kinds?) and subject to cultural interpretation. Similarly, the “open world” nature of our problem meant we had to deal with an almost infinite number of inputs. While certain computer-vision problems have relatively limited inputs (say, x-rays of bolts with or without a mechanical default), we had to prepare the app to be fed selfies, nature shots and any number of foods. Suffice to say, this approach was promising, and did lead to some improved results, however, it had to be abandoned for a couple of reasons. First The nature of our problem meant a strong imbalance in training data: there are many more examples of things that are not hotdogs, than things that are hotdogs. 
In practice this means that if you train your algorithm on 3 hotdog images and 97 non-hotdog images, and it recognizes 0% of the former but 100% of the latter, it will still score 97% accuracy by default! This was not straightforward to solve out of the box using TensorFlow’s retrain tool, and basically necessitated setting up a deep learning model from scratch, import weights, and train in a more controlled manner. At this point we decided to bite the bullet and get something started with Keras, a deep learning library that provides nicer, easier-to-use abstractions on top of TensorFlow, including pretty awesome training tools, and a class_weights option which is ideal to deal with this sort of dataset imbalance we were dealing with. We used that opportunity to try other popular neural architectures like VGG, but one problem remained. None of them could comfortably fit on an iPhone. They consumed too much memory, which led to app crashes, and would sometime takes up to 10 seconds to compute, which was not ideal from a UX standpoint. Many things were attempted to mitigate that, but in the end it these architectures were just too big to run efficiently on mobile. To give you a context out of time, this was roughly the mid-way point of the project. By that time, the UI was 90%+ done and very little of it was going to change. But in hindsight, the neural net was at best 20% done. We had a good sense of challenges & a good dataset, but 0 lines of the final neural architecture had been written, none of our neural code could reliably run on mobile, and even our accuracy was going to improve drastically in the weeks to come. The problem directly ahead of us was simple: if Inception and VGG were too big, was there a simpler, pre-trained neural network we could retrain? At the suggestion of the always excellent Jeremy P. Howard (where has that guy been all our life?), we explored Xception, Enet and SqueezeNet. We quickly settled on SqueezeNet due to its explicit positioning as a solution for embedded deep learning, and the availability of a pre-trained Keras model on GitHub (yay open-source). So how big of a difference does this make? An architecture like VGG uses about 138 million parameters (essentially the number of numbers necessary to model the neurons and values between them). Inception is already a massive improvement, requiring only 23 million parameters. SqueezeNet, in comparison only requires 1.25 million. This has two advantages: There are tradeoffs of course: During this phase, we started experimenting with tuning the neural network architecture. In particular, we started using Batch Normalization and trying different activation functions. After adding Batch Normalization and ELU to SqueezeNet, we were able to train neural network that achieve 90%+ accuracy when training from scratch, however, they were relatively brittle meaning the same network would overfit in some cases, or underfit in others when confronted to real-life testing. Even adding more examples to the dataset and playing with data augmentation failed to deliver a network that met expectations. So while this phase was promising, and for the first time gave us a functioning app that could work entirely on an iPhone, in less than a second, we eventually moved to our 4th & final architecture. Our final architecture was spurred in large part by the publication on April 17 of Google’s MobileNets paper, promising a new neural architecture with Inception-like accuracy on simple problems like ours, with only 4M or so parameters. 
This meant it sat in an interesting sweet spot between a SqueezeNet that had maybe been overly simplistic for our purposes, and the possibly overwrought elephant-trying-to-squeeze-in-a-tutu of using Inception or VGG on Mobile. The paper introduced some capacity to tune the size & complexity of network specifically to trade memory/CPU consumption against accuracy, which was very much top of mind for us at the time. With less than a month to go before the app had to launch we endeavored to reproduce the paper’s results. This was entirely anticlimactic as within a day of the paper being published a Keras implementation was already offered publicly on GitHub by Refik Can Malli, a student at Istanbul Technical University, whose work we had already benefitted from when we took inspiration from his excellent Keras SqueezeNet implementation. The depth & openness of the deep learning community, and the presence of talented minds like R.C. is what makes deep learning viable for applications today — but they also make working in this field more thrilling than any tech trend we’ve been involved with. Our final architecture ended up making significant departures from the MobileNets architecture or from convention, in particular: So how does this stack work exactly? Deep Learning often gets a bad rap for being a “black box”, and while it’s true many components of it can be mysterious, the networks we use often leak information about how some of their magic work. We can look at the layers of this stack and how they activate on specific input images, giving us a sense of each layer’s ability to recognize sausage, buns, or other particularly salient hotdog features. Data quality was of the utmost importance. A neural network can only be as good as the data that trained it, and improving training set quality was probably one of the top 3 things we spent time on during this project. The key things we did to improve this were: The final composition of our dataset was 150k images, of which only 3k were hotdogs: there are only so many hotdogs you can look at, but there are many not hotdogs to look at. The 49:1 imbalance was dealt with by saying a Keras class weight of 49:1 in favor of hotdogs. Of the remaining 147k images, most were of food, with just 3k photos of non-food items, to help the network generalize a bit more and not get tricked into seeing a hotdog if presented with an image of a human in a red outfit. Our data augmentation rules were as follows: These numbers were derived intuitively, based on experiments and our understanding of the real-life usage of our app, as opposed to careful experimentation. The final key to our data pipeline was using Patrick Rodriguez’s multiprocess image data generator for Keras. While Keras does have a built-in multi-threaded and multiprocess implementation, we found Patrick’s library to be consistently faster in our experiments, for reasons we did not have time to investigate. This library cut our training time to a third of what it used to be. The network was trained using a 2015 MacBook Pro and attached external GPU (eGPU), specifically an Nvidia GTX 980 Ti (we’d probably buy a 1080 Ti if we were starting today). We were able to train the network on batches of 128 images at a time. The network was trained for a total of 240 epochs, meaning we ran all 150k images through the network 240 times. This took about 80 hours. 
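To make the class-weighting and augmentation setup concrete, here is a hedged Keras sketch. The augmentation values, image size and directory layout are illustrative guesses rather than the app’s actual numbers, and the real pipeline used a custom multiprocess generator instead of Keras’ built-in one:

```python
from keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation, in the spirit of the rules described above.
train_datagen = ImageDataGenerator(
    rotation_range=30,
    width_shift_range=0.2,
    height_shift_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    rescale=1.0 / 255)

train_generator = train_datagen.flow_from_directory(
    'data/train',                  # hypothetical layout: data/train/{hotdog,not_hotdog}
    target_size=(224, 224),
    batch_size=128,
    class_mode='binary')

# 49:1 weighting in favor of the rare hotdog class (assumed here to be label 1).
class_weight = {0: 1.0, 1: 49.0}

# Hypothetical training call, with `model` defined elsewhere:
# model.fit_generator(train_generator, steps_per_epoch=1000, epochs=240,
#                     class_weight=class_weight)
```

The class_weight dictionary is what lets the loss treat each hotdog example as 49 times more important, compensating for the 3k-vs-147k imbalance in the dataset.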
We trained the network in 3 phases: While learning rates were identified by running the linear experiment recommended by the CLR paper, they seem to intuitively make sense, in that the max for each phase is within a factor of 2 of the previous minimum, which is aligned with the industry standard recommendation of halving your learning rate if your accuracy plateaus during training. In the interest of time we performed some training runs on a Paperspace P5000 instance running Ubuntu. In those cases, we were able to double the batch size, and found that optimal learning rates for each phase were roughly double as well. Even having designed a relatively compact neural architecture, and having trained it to handle situations it may find in a mobile context, we had a lot of work left to make it run properly. Trying to run a top-of-the-line neural net architecture out of the box can quickly burns hundreds megabytes of RAM, which few mobile devices can spare today. Beyond network optimizations, it turns out the way you handle images or even load TensorFlow itself can have a huge impact on how quickly your network runs, how little RAM it uses, and how crash-free the experience will be for your users. This was maybe the most mysterious part of this project. Relatively little information can be found about it, possibly due to the dearth of production deep learning applications running on mobile devices as of today. However, we must commend the Tensorflow team, and particularly Pete Warden, Andrew Harp and Chad Whipkey for the existing documentation and their kindness in answering our inquiries. Instead of using TensorFlow on iOS, we looked at using Apple’s built-in deep learning libraries instead (BNNS, MPSCNN and later on, CoreML). We would have designed the network in Keras, trained it with TensorFlow, exported all the weight values, re-implemented the network with BNNS or MPSCNN (or imported it via CoreML), and loaded the parameters into that new implementation. However, the biggest obstacle was that these new Apple libraries are only available on iOS 10+, and we wanted to support older versions of iOS. As iOS 10+ adoption and these frameworks continue to improve, there may not be a case for using TensorFlow on device in the near future. If you think injecting JavaScript into your app on the fly is cool, try injecting neural nets into your app! The last production trick we used was to leverage CodePush and Apple’s relatively permissive terms of service, to live-inject new versions of our neural networks after submission to the app store. While this was mostly done to help us quickly deliver accuracy improvements to our users after release, you could conceivably use this approach to drastically expand or alter the feature set of your app without going through an app store review again. There are a lot of things that didn’t work or we didn’t have time to do, and these are the ideas we’d investigate in the future: Finally, we’d be remiss not to mention the obvious and important influence of User Experience, Developer Experience and built-in biases in developing an AI app. Each probably deserve their own post (or their own book) but here are the very concrete impacts of these 3 things in our experience. UX (User Experience) is arguably more critical at every stage of the development of an AI app than for a traditional application. 
There are no Deep Learning algorithms that will give you perfect results right now, but there are many situations where the right mix of Deep Learning + UX will lead to results that are indistinguishable from perfect. Proper UX expectations are irreplaceable when it comes to setting developers on the right path to design their neural networks, setting the proper expectations for users when they use the app, and gracefully handling the inevitable AI failures. Building AI apps without a UX-first mindset is like training a neural net without Stochastic Gradient Descent: you will end up stuck in the local minima of the Uncanny Valley on your way to building the perfect AI use-case. DX (Developer Experience) is extremely important as well, because deep learning training time is the new horsing around while waiting for your program to compile. We suggest you heavily favor DX first (hence Keras), as it’s always possible to optimize runtime for later runs (manual GPU parallelization, multi-process data augmentation, TensorFlow pipeline, even re-implementing for caffe2 / pyTorch). Even projects with relatively obtuse APIs & documentation like TensorFlow greatly improve DX by providing a highly-tested, highly-used, well-maintained environment for training & running neural networks. For the same reason, it’s hard to beat both the cost as well as the flexibility of having your own local GPU for development. Being able to look at / edit images locally, edit code with your preferred tool without delays greatly improves the development quality & speed of building AI projects. Most AI apps will hit more critical cultural biases than ours, but as an example, even our straightforward use-case, caught us flat-footed with built-in biases in our initial dataset, that made the app unable to recognize French-style hotdogs, Asian hotdogs, and more oddities we did not have immediate personal experience with. It’s critical to remember that AI do not make “better” decisions than humans — they are infected by the same human biases we fall prey to, via the training sets humans provide. Thanks to: Mike Judge, Alec Berg, Clay Tarver, Todd Silverstein, Jonathan Dotan, Lisa Schomas, Amy Solomon, Dorothy Street & Rich Toyon, and all the writers of the show — the app would simply not exist without them.Meaghan, Dana, David, Jay, and everyone at HBO. Scale Venture Partners & GitLab. Rachel Thomas and Jeremy Howard & Fast AI for all that they have taught me, and for kindly reviewing a draft of this post. Check out their free online Deep Learning course, it’s awesome! JP Simard for his help on iOS. And finally, the TensorFlow team & r/MachineLearning for their help & inspiration. ... And thanks to everyone who used & shared the app! It made staring at pictures of hotdogs for months on end totally worth it 😅 From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. A.I., Startups & HBO’s Silicon Valley. Get in touch: timanglade@gmail.com
Dhruv Parthasarathy
4.3K
12
https://blog.athelas.com/a-brief-history-of-cnns-in-image-segmentation-from-r-cnn-to-mask-r-cnn-34ea83205de4?source=tag_archive---------2----------------
A Brief History of CNNs in Image Segmentation: From R-CNN to Mask R-CNN
At Athelas, we use Convolutional Neural Networks (CNNs) for a lot more than just classification! In this post, we’ll see how CNNs can be used, with great results, for image instance segmentation. Ever since Alex Krizhevsky, Geoff Hinton, and Ilya Sutskever won ImageNet in 2012, Convolutional Neural Networks (CNNs) have become the gold standard for image classification. In fact, since then, CNNs have improved to the point where they now outperform humans on the ImageNet challenge! While these results are impressive, image classification is far simpler than the full complexity and diversity of true human visual understanding. In classification, there’s generally an image with a single object as the focus, and the task is to say what that image is (see above). But when we look at the world around us, we carry out far more complex tasks. We see complicated scenes with multiple overlapping objects and different backgrounds, and we not only classify these different objects but also identify their boundaries, differences, and relations to one another! Can CNNs help us with such complex tasks? Namely, given a more complicated image, can we use CNNs to identify the different objects in the image, and their boundaries? As has been shown by Ross Girshick and his peers over the last few years, the answer is conclusively yes. Through this post, we’ll cover the intuition behind some of the main techniques used in object detection and segmentation and see how they’ve evolved from one implementation to the next. In particular, we’ll cover R-CNN (Regional CNN), the original application of CNNs to this problem, along with its descendants Fast R-CNN and Faster R-CNN. Finally, we’ll cover Mask R-CNN, a paper released recently by Facebook Research that extends such object detection techniques to provide pixel-level segmentation. Here are the papers referenced in this post: Inspired by the research of Hinton’s lab at the University of Toronto, a small team at UC Berkeley, led by Professor Jitendra Malik, asked themselves what today seems like an inevitable question: could Krizhevsky’s results be extended to object detection? Object detection is the task of finding the different objects in an image and classifying them (as seen in the image above). The team, composed of Ross Girshick (a name we’ll see again), Jeff Donahue, and Trevor Darrell, found that this problem can indeed be solved with Krizhevsky’s results, testing their approach on the PASCAL VOC Challenge, a popular object detection challenge akin to ImageNet. They write, Let’s now take a moment to understand how their architecture, Regions With CNNs (R-CNN), works. Understanding R-CNN The goal of R-CNN is to take in an image and correctly identify where the main objects in the image are (via bounding boxes). But how do we find out where these bounding boxes are? R-CNN does what we might intuitively do as well - propose a bunch of boxes in the image and see if any of them actually correspond to an object. R-CNN creates these bounding boxes, or region proposals, using a process called Selective Search, which you can read about here. At a high level, Selective Search (shown in the image above) looks at the image through windows of different sizes, and for each size tries to group together adjacent pixels by texture, color, or intensity to identify objects. Once the proposals are created, R-CNN warps each region to a standard square size and passes it through a modified version of AlexNet (the winning submission to ImageNet 2012 that inspired R-CNN), as shown above. 
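To make the pipeline described so far concrete, here is an illustrative Python sketch of the proposal, warp and forward steps. The `proposals` list and the `cnn` callable are stand-ins (Selective Search boxes and a pretrained AlexNet-style feature extractor), not the authors’ actual code.

```python
import numpy as np
import cv2  # used only to warp region crops to a fixed square size

def extract_proposal_features(image, proposals, cnn, input_size=224):
    """Warp each region proposal to a standard square and run it through a CNN,
    as R-CNN does. `proposals` is a list of (x, y, w, h) boxes; `cnn` maps a
    batch of warped crops to feature vectors."""
    features = []
    for (x, y, w, h) in proposals:                        # typically ~2000 boxes per image
        crop = image[y:y + h, x:x + w]
        warped = cv2.resize(crop, (input_size, input_size))           # warp to a standard square
        features.append(cnn(warped[np.newaxis].astype(np.float32)))   # one forward pass per box
    return features
```

Note that the CNN runs once per proposal; this per-box cost is exactly the inefficiency that Fast R-CNN removes later in the story.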
On the final layer of the CNN, R-CNN adds a Support Vector Machine (SVM) that simply classifies whether this is an object, and if so, which object. This is step 4 in the image above. Improving the Bounding Boxes Now, having found the object in the box, can we tighten the box to fit the true dimensions of the object? We can, and this is the final step of R-CNN. R-CNN runs a simple linear regression on the region proposal to generate tighter bounding box coordinates and get our final result. Here are the inputs and outputs of this regression model: So, to summarize, R-CNN is just the following steps: R-CNN works really well, but is really quite slow for a few simple reasons: In 2015, Ross Girshick, the first author of R-CNN, solved both these problems, leading to the second algorithm in our short history - Fast R-CNN. Let’s now go over its main insights. Fast R-CNN Insight 1: RoI (Region of Interest) Pooling For the forward pass of the CNN, Girshick realized that for each image, many of the proposed regions invariably overlapped, causing us to run the same CNN computation again and again (~2000 times!). His insight was simple: why not run the CNN just once per image and then find a way to share that computation across the ~2000 proposals? This is exactly what Fast R-CNN does, using a technique known as RoIPool (Region of Interest Pooling). At its core, RoIPool shares the forward pass of a CNN for an image across its subregions. In the image above, notice how the CNN features for each region are obtained by selecting a corresponding region from the CNN’s feature map. Then, the features in each region are pooled (usually using max pooling). So all it takes is one pass over the original image as opposed to ~2000! (A minimal sketch of RoI pooling appears further below.) Fast R-CNN Insight 2: Combine All Models into One Network The second insight of Fast R-CNN is to jointly train the CNN, classifier, and bounding box regressor in a single model. Where earlier we had different models to extract image features (CNN), classify (SVM), and tighten bounding boxes (regressor), Fast R-CNN instead uses a single network to compute all three. You can see how this was done in the image above. Fast R-CNN replaced the SVM classifier with a softmax layer on top of the CNN to output a classification. It also added a linear regression layer parallel to the softmax layer to output bounding box coordinates. In this way, all the outputs needed come from one single network! Here are the inputs and outputs of this overall model: Even with all these advancements, there was still one remaining bottleneck in the Fast R-CNN process: the region proposer. As we saw, the very first step in detecting the locations of objects is generating a bunch of potential bounding boxes, or regions of interest, to test. In Fast R-CNN, these proposals were created using Selective Search, a fairly slow procedure that turned out to be the bottleneck of the overall pipeline. In mid-2015, a team at Microsoft Research composed of Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun found a way to make the region proposal step almost cost-free, through an architecture they (creatively) named Faster R-CNN. The insight of Faster R-CNN was that region proposals depend on features of the image that were already calculated during the forward pass of the CNN (the first step of classification). So why not reuse those same CNN results for region proposals instead of running a separate Selective Search algorithm? Indeed, this is just what the Faster R-CNN team achieved. 
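As referenced above, here is a minimal NumPy sketch of the RoI pooling idea: crop a region out of the shared feature map and max-pool it down to a fixed-size grid. It is an illustration of the concept, not the Fast R-CNN implementation (which, among other things, also handles regions smaller than the output grid).

```python
import numpy as np

def roi_pool(feature_map, roi, output_size=(7, 7)):
    """`feature_map` has shape (H, W, C); `roi` is (x0, y0, x1, y1) in
    feature-map coordinates and is assumed to be at least `output_size`."""
    x0, y0, x1, y1 = roi
    region = feature_map[y0:y1, x0:x1, :]                 # crop from the shared feature map
    row_bins = np.array_split(np.arange(region.shape[0]), output_size[0])
    col_bins = np.array_split(np.arange(region.shape[1]), output_size[1])
    pooled = np.zeros(output_size + (feature_map.shape[2],), dtype=feature_map.dtype)
    for i, rows in enumerate(row_bins):
        for j, cols in enumerate(col_bins):
            # max-pool each sub-window so every RoI yields the same fixed-size output
            pooled[i, j] = region[rows][:, cols].max(axis=(0, 1))
    return pooled
```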
In the image above, you can see how a single CNN is used to carry out both region proposals and classification. This way, only one CNN needs to be trained, and we get region proposals almost for free! The authors write: Here are the inputs and outputs of their model: How the Regions are Generated Let’s take a moment to see how Faster R-CNN generates these region proposals from CNN features. Faster R-CNN adds a Fully Convolutional Network on top of the features of the CNN, creating what’s known as the Region Proposal Network. The Region Proposal Network works by passing a sliding window over the CNN feature map and, at each window, outputting k potential bounding boxes along with scores for how good each of those boxes is expected to be. What do these k boxes represent? Intuitively, we know that objects in an image should fit certain common aspect ratios and sizes. For instance, we know that we want some rectangular boxes that resemble the shapes of humans. Likewise, we know we won’t see many boxes that are very, very thin. In this way, we create k such common aspect ratios and sizes, which we call anchor boxes. For each such anchor box, we output one bounding box and score per position in the image. With these anchor boxes in mind, let’s take a look at the inputs and outputs of this Region Proposal Network: We then pass each bounding box that is likely to be an object into Fast R-CNN to generate a classification and a tightened bounding box. So far, we’ve seen how we’ve been able to use CNN features in many interesting ways to effectively locate different objects in an image with bounding boxes. Can we extend such techniques to go one step further and locate the exact pixels of each object instead of just bounding boxes? This problem, known as image segmentation, is what Kaiming He and a team of researchers, including Girshick, explored at Facebook AI using an architecture known as Mask R-CNN. Much like Fast R-CNN and Faster R-CNN, Mask R-CNN’s underlying intuition is straightforward. Given that Faster R-CNN works so well for object detection, could we extend it to also carry out pixel-level segmentation? Mask R-CNN does this by adding a branch to Faster R-CNN that outputs a binary mask saying whether or not a given pixel is part of an object. The branch (in white in the above image) is, as before, just a Fully Convolutional Network on top of a CNN-based feature map. Here are its inputs and outputs: But the Mask R-CNN authors had to make one small adjustment to make this pipeline work as expected. RoIAlign - Realigning RoIPool to Be More Accurate When running the original Faster R-CNN architecture without modifications, the Mask R-CNN authors realized that the regions of the feature map selected by RoIPool were slightly misaligned from the regions of the original image. Since image segmentation requires pixel-level specificity, unlike bounding boxes, this naturally led to inaccuracies. The authors were able to solve this problem by cleverly adjusting RoIPool to be more precisely aligned, using a method known as RoIAlign. Imagine we have an image of size 128x128 and a feature map of size 25x25. Let’s imagine we want the features for the region corresponding to the top-left 15x15 pixels of the original image (see above). How might we select these pixels from the feature map? We know each pixel in the original image corresponds to ~25/128 pixels in the feature map. To cover 15 pixels of the original image, we therefore need to select 15 * 25/128 ~= 2.93 pixels of the feature map. 
In RoIPool, we would round this down and select 2 pixels, causing a slight misalignment. In RoIAlign, however, we avoid such rounding. Instead, we use bilinear interpolation to get a precise idea of what would be at pixel 2.93. This, at a high level, is what allows us to avoid the misalignments caused by RoIPool. Once these masks are generated, Mask R-CNN combines them with the classifications and bounding boxes from Faster R-CNN to generate wonderfully precise segmentations: If you’re interested in trying out these algorithms yourself, here are the relevant repositories: Faster R-CNN Mask R-CNN In just 3 years, we’ve seen how the research community has progressed from Krizhevsky et al.’s original result to R-CNN, and finally all the way to such powerful results as Mask R-CNN. Seen in isolation, results like Mask R-CNN seem like incredible leaps of genius that would be unapproachable. Yet, through this post, I hope you’ve seen how such advancements are really the sum of intuitive, incremental improvements through years of hard work and collaboration. Each of the ideas proposed by R-CNN, Fast R-CNN, Faster R-CNN, and finally Mask R-CNN was not necessarily a quantum leap, yet together they have produced really remarkable results that bring us closer to a human-level understanding of sight. What particularly excites me is that the time between R-CNN and Mask R-CNN was just three years! With continued funding, focus, and support, how much further can Computer Vision improve over the next three years? If you see any errors or issues in this post, please contact me at dhruv@getathelas.com and I’ll immediately correct them! If you’re interested in applying such techniques, come join us at Athelas, where we apply Computer Vision to blood diagnostics daily: Other posts we’ve written: Thanks to Bharath Ramsundar, Pranav Ramkrishnan, Tanay Tandon, and Oliver Cameron for help with this post! @dhruvp. VP Eng @Athelas. MIT Math and CS Undergrad ’13. MIT CS Masters ’14. Previously: Director of AI Programs @ Udacity. Blood Diagnostics through Deep Learning http://athelas.com
Sebastian Heinz
4.4K
13
https://medium.com/mlreview/a-simple-deep-learning-model-for-stock-price-prediction-using-tensorflow-30505541d877?source=tag_archive---------3----------------
A simple deep learning model for stock price prediction using TensorFlow
For a recent hackathon that we did at STATWORX, some of our team members scraped minute-by-minute S&P 500 data from the Google Finance API. The data consisted of the index as well as the stock prices of the S&P 500’s constituents. Having this data at hand, the idea of developing a deep learning model for predicting the S&P 500 index based on the 500 constituents’ prices one minute earlier immediately came to mind. Playing around with the data and building the deep learning model with TensorFlow was fun, so I decided to write my first Medium.com story: a little TensorFlow tutorial on predicting S&P 500 stock prices. What you will read is not an in-depth tutorial, but more a high-level introduction to the important building blocks and concepts of TensorFlow models. The Python code I’ve created is not optimized for efficiency but for understandability. The dataset I’ve used can be downloaded from here (40MB). Our team exported the scraped stock data from our scraping server as a csv file. The dataset contains n = 41,266 minutes of data ranging from April to August 2017 on 500 stocks as well as the total S&P 500 index price. Index and stocks are arranged in wide format. The data was already cleaned and prepared, meaning missing stock and index prices were LOCF’ed (last observation carried forward), so that the file did not contain any missing values. A quick look at the S&P time series using pyplot.plot(data['SP500']): Note: this is actually the lead of the S&P 500 index, meaning its value is shifted 1 minute into the future. This operation is necessary because we want to predict the next minute of the index, not the current minute. The dataset was split into training and test data. The training data contains 80% of the total dataset. The data was not shuffled but sequentially sliced. The training data ranges from April to approximately the end of July 2017, and the test data runs through the end of August 2017. There are a lot of different approaches to time series cross-validation, such as rolling forecasts with and without refitting, or more elaborate concepts such as time series bootstrap resampling. The latter involves repeated samples from the remainder of the seasonal decomposition of the time series, in order to simulate samples that follow the same seasonal pattern as the original time series but are not exact copies of its values. Most neural network architectures benefit from scaling the inputs (and sometimes also the outputs). Why? Because the most common activation functions of the network’s neurons, such as tanh or sigmoid, are defined on the [-1, 1] or [0, 1] interval, respectively. Nowadays, rectified linear unit (ReLU) activations are commonly used, and these are unbounded above. However, we will scale both the inputs and targets anyway. Scaling can be easily accomplished in Python using sklearn’s MinMaxScaler. Remark: caution must be taken regarding what part of the data is scaled and when. A common mistake is to scale the whole dataset before the training and test split is applied. Why is this a mistake? Because scaling involves the calculation of statistics, e.g. the min/max of a variable. When performing time series forecasting in real life, you do not have information from future observations at the time of forecasting. Therefore, the scaling statistics have to be computed on the training data and must then be applied to the test data. Otherwise, you use future information at the time of forecasting, which commonly biases forecasting metrics in a positive direction. 
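Here is a minimal sketch of the split-then-scale order just described. The file name and column layout (the 'SP500' target in the first column, the 500 constituent prices in the remaining columns) are assumptions for illustration, not necessarily the original script.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

data = pd.read_csv('sp500_minutely.csv')    # placeholder file name

# Sequential (non-shuffled) 80/20 split.
n = data.shape[0]
train_end = int(np.floor(0.8 * n))
data_train = data.iloc[:train_end].values
data_test = data.iloc[train_end:].values

# Fit the scaler on the training data only, then apply it to both sets,
# so no statistics from the test period leak into the scaling.
scaler = MinMaxScaler()
data_train = scaler.fit_transform(data_train)
data_test = scaler.transform(data_test)

# Assumed layout: target (the lead of the S&P 500) in column 0, inputs after it.
X_train, y_train = data_train[:, 1:], data_train[:, 0]
X_test, y_test = data_test[:, 1:], data_test[:, 0]
```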
TensorFlow is a great piece of software and currently the leading deep learning and neural network computation framework. It is based on a low-level C++ backend but is usually controlled via Python (there is also a neat TensorFlow library for R, maintained by RStudio). TensorFlow operates on a graph representation of the underlying computational task. This approach allows the user to specify mathematical operations as elements in a graph of data, variables and operators. Since neural networks are actually graphs of data and mathematical operations, TensorFlow is just perfect for neural networks and deep learning. Check out this simple example (borrowed from the deep learning introduction on our blog): In the figure above, two numbers are supposed to be added. Those numbers are stored in two variables, a and b. The two values flow through the graph and arrive at the square node, where they are added. The result of the addition is stored in another variable, c. Actually, a, b and c can be considered placeholders. Any numbers that are fed into a and b get added, and the result is stored in c. This is exactly how TensorFlow works. The user defines an abstract representation of the model (neural network) through placeholders and variables. Afterwards, the placeholders get "filled" with real data and the actual computations take place. The toy example from above takes only a few lines of TensorFlow (sketched below): after importing the TensorFlow library, two placeholders are defined using tf.placeholder(). They correspond to the two blue circles on the left of the image above. Afterwards, the mathematical addition is defined via tf.add(). The result of the computation is c = 9. With the placeholders set up, the graph can be executed with any integer values for a and b. Of course, this addition problem is just a toy example. The required graphs and computations in a neural network are much more complex. As mentioned before, it all starts with placeholders. We need two placeholders in order to fit our model: X contains the network's inputs (the stock prices of all S&P 500 constituents at time T = t) and Y the network's output (the index value of the S&P 500 at time T = t + 1). The shapes of the placeholders are [None, n_stocks] for X and [None] for Y, meaning that the inputs are a 2-dimensional matrix and the outputs a 1-dimensional vector. It is crucial to understand which input and output dimensions the neural net needs in order to design it properly. The None argument indicates that at this point we do not yet know the number of observations that will flow through the neural net graph in each batch, so we keep it flexible. We will later define the variable batch_size that controls the number of observations per training batch. Besides placeholders, variables are another cornerstone of the TensorFlow universe. While placeholders are used to store input and target data in the graph, variables are used as flexible containers within the graph that are allowed to change during graph execution. Weights and biases are represented as variables in order to adapt during training. Variables need to be initialized prior to model training. We will get into that a little later in more detail. The model consists of four hidden layers. The first layer contains 1024 neurons, slightly more than double the size of the inputs. Subsequent hidden layers are always half the size of the previous layer, which means 512, 256 and finally 128 neurons. 
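The toy example and the model placeholders described above, written out as a minimal TF1-style sketch (the dtypes are assumptions; the rest follows the text):

```python
import tensorflow as tf

# Toy graph from above: two placeholders flow into an add node.
a = tf.placeholder(dtype=tf.int8)
b = tf.placeholder(dtype=tf.int8)
c = tf.add(a, b)

with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: 5, b: 4}))   # prints 9

# The model's placeholders: inputs X (all constituent prices at time t) and
# targets Y (the S&P 500 index at time t + 1). `None` keeps the number of
# observations per batch flexible.
n_stocks = 500
X = tf.placeholder(dtype=tf.float32, shape=[None, n_stocks])
Y = tf.placeholder(dtype=tf.float32, shape=[None])
```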
A reduction in the number of neurons for each subsequent layer compresses the information the network identifies in the previous layers. Of course, other network architectures and neuron configurations are possible but are out of scope for this introduction-level article. It is important to understand the required variable dimensions between input, hidden and output layers. As a rule of thumb in multilayer perceptrons (MLPs, the type of networks used here), the second dimension of the previous layer's weight matrix is the first dimension of the current layer's weight matrix. This might sound complicated but is essentially just each layer passing its output as input to the next layer. The bias dimension equals the second dimension of the current layer's weight matrix, which corresponds to the number of neurons in this layer. After defining the required weight and bias variables, the network topology, i.e. the architecture of the network, needs to be specified. Hereby, placeholders (data) and variables (weights and biases) are combined into a system of sequential matrix multiplications. Furthermore, the hidden layers of the network are transformed by activation functions. Activation functions are important elements of the network architecture since they introduce non-linearity to the system. There are dozens of possible activation functions out there; one of the most common is the rectified linear unit (ReLU), which is also used in this model. The image below illustrates the network architecture. The model consists of three major building blocks: the input layer, the hidden layers and the output layer. This architecture is called a feedforward network. Feedforward indicates that the batch of data flows solely from left to right. Other network architectures, such as recurrent neural networks, also allow data to flow "backwards" through the network. The cost function of the network is used to generate a measure of deviation between the network's predictions and the actually observed training targets. For regression problems, the mean squared error (MSE) function is commonly used. MSE computes the average squared deviation between predictions and targets. Basically, any differentiable function can be implemented in order to compute a deviation measure between predictions and targets. However, the MSE exhibits certain properties that are advantageous for the general optimization problem to be solved. The optimizer takes care of the necessary computations that are used to adapt the network's weight and bias variables during training. Those computations invoke the calculation of so-called gradients, which indicate the direction in which the weights and biases have to be changed during training in order to minimize the network's cost function. The development of stable and speedy optimizers is a major field in neural network and deep learning research. Here the Adam optimizer is used, which is one of the current default optimizers in deep learning development. Adam stands for "Adaptive Moment Estimation" and can be considered a combination of two other popular optimizers, AdaGrad and RMSProp. Initializers are used to initialize the network's variables before training. Since neural networks are trained using numerical optimization techniques, the starting point of the optimization problem is one of the key factors in finding good solutions to the underlying problem. There are different initializers available in TensorFlow, each with a different initialization approach. 
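Continuing the sketch above, the variables, topology, cost function and optimizer just described might look as follows. The layer sizes come from the text, while the exact wiring is illustrative rather than the author's original code.

```python
# Weight and bias variables. Note the rule of thumb from above: the second
# dimension of one layer's weight matrix is the first dimension of the next.
weight_initializer = tf.variance_scaling_initializer()
bias_initializer = tf.zeros_initializer()

n_neurons = [1024, 512, 256, 128]
W1 = tf.Variable(weight_initializer([n_stocks, n_neurons[0]]))
b1 = tf.Variable(bias_initializer([n_neurons[0]]))
W2 = tf.Variable(weight_initializer([n_neurons[0], n_neurons[1]]))
b2 = tf.Variable(bias_initializer([n_neurons[1]]))
W3 = tf.Variable(weight_initializer([n_neurons[1], n_neurons[2]]))
b3 = tf.Variable(bias_initializer([n_neurons[2]]))
W4 = tf.Variable(weight_initializer([n_neurons[2], n_neurons[3]]))
b4 = tf.Variable(bias_initializer([n_neurons[3]]))
W_out = tf.Variable(weight_initializer([n_neurons[3], 1]))
b_out = tf.Variable(bias_initializer([1]))

# Feedforward topology: four ReLU hidden layers and a linear output.
hidden_1 = tf.nn.relu(tf.add(tf.matmul(X, W1), b1))
hidden_2 = tf.nn.relu(tf.add(tf.matmul(hidden_1, W2), b2))
hidden_3 = tf.nn.relu(tf.add(tf.matmul(hidden_2, W3), b3))
hidden_4 = tf.nn.relu(tf.add(tf.matmul(hidden_3, W4), b4))
out = tf.squeeze(tf.add(tf.matmul(hidden_4, W_out), b_out), axis=1)  # match Y's 1-D shape

# Mean squared error cost and the Adam optimizer.
mse = tf.reduce_mean(tf.squared_difference(out, Y))
opt = tf.train.AdamOptimizer().minimize(mse)
```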
Here, I use the tf.variance_scaling_initializer(), which is one of the default initialization strategies. Note that with TensorFlow it is possible to define multiple initialization functions for different variables within the graph. However, in most cases, a unified initialization is sufficient. After having defined the placeholders, variables, initializers, cost functions and optimizers of the network, the model needs to be trained. Usually, this is done by minibatch training. During minibatch training, random data samples of n = batch_size are drawn from the training data and fed into the network. The training dataset gets divided into n / batch_size batches that are sequentially fed into the network. At this point the placeholders X and Y come into play. They store the input and target data and present them to the network as inputs and targets. A sampled data batch of X flows through the network until it reaches the output layer. There, TensorFlow compares the model's predictions against the actually observed targets Y in the current batch. Afterwards, TensorFlow conducts an optimization step and updates the network's parameters according to the selected learning scheme. After the weights and biases have been updated, the next batch is sampled and the process repeats itself. The procedure continues until all batches have been presented to the network. One full sweep over all batches is called an epoch. The training of the network stops once the maximum number of epochs is reached or another stopping criterion defined by the user applies (a minimal version of this training loop is sketched below). During training, we evaluate the network's predictions on the test set (the data that is not learned, but set aside) every 5th batch and visualize them. Additionally, the images are exported to disk and later combined into a video animation of the training process (see below). The model quickly learns the shape and location of the time series in the test data and is able to produce an accurate prediction after some epochs. Nice! One can see that the network rapidly adapts to the basic shape of the time series and continues to learn finer patterns in the data. This also corresponds to the Adam learning scheme, which lowers the learning rate during model training in order not to overshoot the optimization minimum. After 10 epochs, we have a pretty close fit to the test data! The final test MSE equals 0.00078 (it is very low because the target is scaled). The mean absolute percentage error of the forecast on the test set is equal to 5.31%, which is pretty good. Note that this is just a fit to the test data, not an actual out-of-sample metric for a real-world scenario. Please note that there are tons of ways to further improve this result: the design of layers and neurons, choosing different initialization and activation schemes, introducing dropout layers, early stopping and so on. Furthermore, different types of deep learning models, such as recurrent neural networks, might achieve better performance on this task. However, this is not the scope of this introductory post. The release of TensorFlow was a landmark event in deep learning research. Its flexibility and performance allow researchers to develop all kinds of sophisticated neural network architectures as well as other ML algorithms. However, flexibility comes at the cost of longer time-to-model cycles compared to higher-level APIs such as Keras or MxNet. 
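A minimal version of the minibatch loop referenced above, continuing the earlier sketches; the epoch count, batch size and evaluation cadence are illustrative rather than the original settings.

```python
import numpy as np

epochs = 10
batch_size = 256

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())          # initialize weights and biases
    for epoch in range(epochs):
        # Draw random minibatches by shuffling the training data each epoch.
        shuffle_idx = np.random.permutation(len(y_train))
        X_shuf, y_shuf = X_train[shuffle_idx], y_train[shuffle_idx]
        for i, start in enumerate(range(0, len(y_shuf), batch_size)):
            batch_x = X_shuf[start:start + batch_size]
            batch_y = y_shuf[start:start + batch_size]
            sess.run(opt, feed_dict={X: batch_x, Y: batch_y})
            if i % 5 == 0:                                # every 5th batch: check test MSE
                test_mse = sess.run(mse, feed_dict={X: X_test, Y: y_test})
                print('Epoch {} batch {}: test MSE {:.6f}'.format(epoch, i, test_mse))
```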
Nonetheless, I am sure that TensorFlow will make its way to becoming the de-facto standard in neural network and deep learning development, both in research and in practical applications. Many of our customers are already using TensorFlow or starting to develop projects that employ TensorFlow models. Our data science consultants at STATWORX are also using TensorFlow heavily for deep learning and neural net research and development. Let's see what Google has planned for the future of TensorFlow. One thing that is missing, at least in my opinion, is a neat graphical user interface for designing and developing neural net architectures with a TensorFlow backend. Maybe this is something Google is already working on ;) If you have any comments or questions on my first Medium story, feel free to comment below! I will try to answer them. Also, feel free to use my code or share this story with your peers on social platforms of your choice. Update: I've added both the Python script as well as a (zipped) dataset to a GitHub repository. Feel free to clone and fork. Lastly, follow me on: Twitter | LinkedIn CEO @ STATWORX. Doing data science, stats and ML for over a decade. Food, wine and cocktail enthusiast. Check our website: https://www.statworx.com