author,claps,reading_time,link,title,text Justin Lee,8.3K,11,https://medium.com/swlh/chatbots-were-the-next-big-thing-what-happened-5fc49dd6fa61?source=---------0----------------,Chatbots were the next big thing: what happened? – The Startup – Medium,"Oh, how the headlines blared: Chatbots were The Next Big Thing. Our hopes were sky high. Bright-eyed and bushy-tailed, the industry was ripe for a new era of innovation: it was time to start socializing with machines. And why wouldn’t they be? All the road signs pointed towards insane success. At the Mobile World Congress 2017, chatbots were the main headliners. The conference organizers cited an ‘overwhelming acceptance at the event of the inevitable shift of focus for brands and corporates to chatbots’. In fact, the only significant question around chatbots was who would monopolize the field, not whether chatbots would take off in the first place: One year on, we have an answer to that question. No. Because there isn’t even an ecosystem for a platform to dominate. Chatbots weren’t the first technological development to be talked up in grandiose terms and then slump spectacularly. The age-old hype cycle unfolded in familiar fashion... Expectations built, built, and then..... It all kind of fizzled out. The predicted paradim shift didn’t materialize. And apps are, tellingly, still alive and well. We look back at our breathless optimism and turn to each other, slightly baffled: “is that it? THAT was the chatbot revolution we were promised?” Digit’s Ethan Bloch sums up the general consensus: According to Dave Feldman, Vice President of Product Design at Heap, chatbots didn’t just take on one difficult problem and fail: they took on several and failed all of them. Bots can interface with users in different ways. The big divide is text vs. speech. In the beginning (of computer interfaces) was the (written) word. Users had to type commands manually into a machine to get anything done. Then, graphical user interfaces (GUIs) came along and saved the day. We became entranced by windows, mouse clicks, icons. And hey, we eventually got color, too! Meanwhile, a bunch of research scientists were busily developing natural language (NL) interfaces to databases, instead of having to learn an arcane database query language. Another bunch of scientists were developing speech-processing software so that you could just speak to your computer, rather than having to type. This turned out to be a whole lot more difficult than anyone originally realised: The next item on the agenda was holding a two-way dialog with a machine. Here’s an example dialog (dating back to the 1990s) with VCR setup system: Pretty cool, right? The system takes turns in collaborative way, and does a smart job of figuring out what the user wants. It was carefully crafted to deal with conversations involving VCRs, and could only operate within strict limitations. Modern day bots, whether they use typed or spoken input, have to face all these challenges, but also work in an efficient and scalable way on a variety of platforms. Basically, we’re still trying to achieve the same innovations we were 30 years ago. Here’s where I think we’re going wrong: An oversized assumption has been that apps are ‘over’, and would be replaced by bots. By pitting two such disparate concepts against one another (instead of seeing them as separate entities designed to serve different purposes) we discouraged bot development. 
You might remember a similar war cry when apps first came onto the scene ten years ago: but do you remember when apps replaced the internet? It’s said that a new product or service needs to be two of the following: better, cheaper, or faster. Are chatbots cheaper or faster than apps? No — not yet, at least. Whether they’re ‘better’ is subjective, but I think it’s fair to say that today’s best bot isn’t comparable to today’s best app. Plus, nobody thinks that using Lyft is too complicated, or that it’s too hard to order food or buy a dress on an app. What is too complicated is trying to complete these tasks with a bot — and having the bot fail. A great bot can be about as useful as an average app. When it comes to rich, sophisticated, multi-layered apps, there’s no competition. That’s because machines let us access vast and complex information systems, and the early graphical information systems were a revolutionary leap forward in helping us locate those systems. Modern-day apps benefit from decades of research and experimentation. Why would we throw this away? But, if we swap the word ‘replace’ with ‘extend’, things get much more interesting. Today’s most successful bot experiences take a hybrid approach, incorporating chat into a broader strategy that encompasses more traditional elements. The next wave will be multimodal apps, where you can say what you want (like with Siri) and get back information as a map, text, or even a spoken response. Another problematic aspect of the sweeping nature of hype is that it tends to bypass essential questions like these. For plenty of companies, bots just aren’t the right solution. The past two years are littered with cases of bots being blindly applied to problems where they aren’t needed. Building a bot for the sake of it, letting it loose and hoping for the best will never end well: The vast majority of bots are built using decision-tree logic, where the bot’s canned response relies on spotting specific keywords in the user input. The advantage of this approach is that it’s pretty easy to list all the cases that they are designed to cover. And that’s precisely their disadvantage, too. That’s because these bots are purely a reflection of the capability, fastidiousness and patience of the person who created them; and how many user needs and inputs they were able to anticipate. Problems arise when life refuses to fit into those boxes. According to recent reports, 70% of the 100,000+ bots on Facebook Messenger are failing to fulfil simple user requests. This is partly a result of developers failing to narrow their bot down to one strong area of focus. When we were building GrowthBot, we decided to make it specific to sales and marketers: not an ‘all-rounder’, despite the temptation to get overexcited about potential capabilties. Remember: a bot that does ONE thing well is infinitely more helpful than a bot that does multiple things poorly. A competent developer can build a basic bot in minutes — but one that can hold a conversation? That’s another story. Despite the constant hype around AI, we’re still a long way from achieving anything remotely human-like. In an ideal world, the technology known as NLP (natural language processing) should allow a chatbot to understand the messages it receives. But NLP is only just emerging from research labs and is very much in its infancy. Some platforms provide a bit of NLP, but even the best is at toddler-level capacity (for example, think about Siri understanding your words, but not their meaning.) 
As Matt Asay outlines, this results in another issue: failure to capture the attention and creativity of developers. And conversations are complex. They’re not linear. Topics spin around each other, take random turns, restart or abruptly finish. Today’s rule-based dialogue systems are too brittle to deal with this kind of unpredictability, and statistical approaches using machine learning are just as limited. The level of AI required for human-like conversation just isn’t available yet. And in the meantime, there are few high-quality examples of trailblazing bots to lead the way. As Dave Feldman remarked: Once upon a time, the only way to interact with computers was by typing arcane commands into the terminal. Visual interfaces using windows, icons or a mouse were a revolution in how we manipulate information. There’s a reason computing moved from text-based to graphical user interfaces (GUIs). On the input side, it’s easier and faster to click than it is to type. Tapping or selecting is obviously preferable to typing out a whole sentence, even with predictive (often error-prone) text. On the output side, the old adage that a picture is worth a thousand words is usually true. We love visual displays of information because we are highly visual creatures. It’s no accident that kids love touch screens. The pioneers who dreamt up graphical interfaces were inspired by cognitive psychology, the study of how the brain deals with communication. Conversational UIs are meant to replicate the way humans prefer to communicate, but they end up requiring extra cognitive effort. Essentially, we’re swapping something simple for a more complex alternative. Sure, there are some concepts that we can only express using language (“show me all the ways of getting to a museum that give me 2000 steps but don’t take longer than 35 minutes”), but most tasks can be carried out more efficiently and intuitively with GUIs than with a conversational UI. Aiming for a human dimension in business interactions makes sense. If there’s one thing that’s broken about sales and marketing, it’s the lack of humanity: brands hide behind ticket numbers, feedback forms, do-not-reply emails, automated responses and gated ‘contact us’ forms. Facebook’s goal is that their bots should pass the so-called Turing Test, meaning you can’t tell whether you are talking to a bot or a human. But a bot isn’t the same as a human. It never will be. A conversation encompasses so much more than just text. Humans can read between the lines, leverage contextual information and understand double layers like sarcasm. Bots quickly forget what they’re talking about, meaning it’s a bit like conversing with someone who has little or no short-term memory. As the HubSpot team pinpointed: People aren’t easily fooled, and pretending a bot is a human is guaranteed to diminish returns (not to mention the fact that you’re lying to your users). And even those rare bots that are powered by state-of-the-art NLP, and excel at processing and producing content, will fall short in comparison. And here’s the other thing. Conversational UIs are built to replicate the way humans prefer to communicate — with other humans. But is that how humans prefer to interact with machines? Not necessarily. At the end of the day, no amount of witty quips or human-like mannerisms will save a bot from conversational failure. In a way, those early adopters weren’t entirely wrong. 
People are yelling at Google Home to play their favorite song, ordering pizza from the Domino’s bot and getting makeup tips from Sephora. But in terms of consumer response and developer involvement, chatbots haven’t lived up to the hype generated circa 2015/16. Not even close. Computers are good at being computers. Searching for data, crunching numbers, analyzing opinions and condensing that information. Computers aren’t good at understanding human emotion. The state of NLP means they still don’t ‘get’ what we’re asking them, never mind how we feel. That’s why it’s still impossible to imagine effective customer support, sales or marketing without the essential human touch: empathy and emotional intelligence. For now, bots can continue to help us with automated, repetitive, low-level tasks and queries; as cogs in a larger, more complex system. And we did them, and ourselves, a disservice by expecting so much, so soon. But that’s not the whole story. Yes, our industry massively overestimated the initial impact chatbots would have. Emphasis on initial. As Bill Gates once said: The hype is over. And that’s a good thing. Now, we can start examining the middle-grounded grey area, instead of the hyper-inflated, frantic black and white zone. I believe we’re at the very beginning of explosive growth. This sense of anti-climax is completely normal for transformational technology. Messaging will continue to gain traction. Chatbots aren’t going away. NLP and AI are becoming more sophisticated every day. Developers, apps and platforms will continue to experiment with, and heavily invest in, conversational marketing. And I can’t wait to see what happens next. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Head of Growth for GrowthBot, Messaging & Conversational Strategy @HubSpot Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi " Conor Dewey,1.4K,7,https://towardsdatascience.com/python-for-data-science-8-concepts-you-may-have-forgotten-i-did-825966908393?source=---------1----------------,Python for Data Science: 8 Concepts You May Have Forgotten,"If you’ve ever found yourself looking up the same question, concept, or syntax over and over again when programming, you’re not alone. I find myself doing this constantly. While it’s not unnatural to look things up on StackOverflow or other resources, it does slow you down a good bit and raise questions as to your complete understanding of the language. We live in a world where there is a seemingly infinite amount of accessible, free resources looming just one search away at all times. However, this can be both a blessing and a curse. When not managed effectively, an over-reliance on these resources can build poor habits that will set you back long-term. Personally, I find myself pulling code from similar discussion threads several times, rather than taking the time to learn and solidify the concept so that I can reproduce the code myself the next time. This approach is lazy and while it may be the path of least resistance in the short-term, it will ultimately hurt your growth, productivity, and ability to recall syntax (cough, interviews) down the line. Recently, I’ve been working through an online data science course titled Python for Data Science and Machine Learning on Udemy (Oh God, I sound like that guy on Youtube). 
Over the early lectures in the series, I was reminded of some concepts and syntax that I consistently overlook when performing data analysis in Python. In the interest of solidifying my understanding of these concepts once and for all and saving you guys a couple of StackOverflow searches, here’s the stuff that I’m always forgetting when working with Python, NumPy, and Pandas. I’ve included a short description and example for each; however, for your benefit, I will also include links to videos and other resources that explore each concept more in-depth as well. Writing out a for loop every time you need to define some sort of list is tedious; luckily, Python has a built-in way to address this problem in just one line of code. The syntax can be a little hard to wrap your head around, but once you get familiar with this technique, you’ll use it fairly often. See the example above and below for how you would normally go about list comprehension with a for loop vs. creating your list in one simple line, with no loops necessary. Ever get tired of creating function after function for limited use cases? Lambda functions to the rescue! Lambda functions are used for creating small, one-time and anonymous function objects in Python. Basically, they let you create a function, without creating a function. The basic syntax of lambda functions is: Note that lambda functions can do everything that regular functions can do, as long as there’s just one expression. Check out the simple example below and the upcoming video to get a better feel for the power of lambda functions: Once you have a grasp on lambda functions, learning to pair them with the map and filter functions can be a powerful tool. Specifically, map takes in a list and transforms it into a new list by performing some sort of operation on each element. In this example, it goes through each element and maps the result of itself times 2 to a new list. Note that the list function simply converts the output to list type. The filter function takes in a list and a rule, much like map; however, it returns a subset of the original list by comparing each element against the boolean filtering rule. For creating quick and easy NumPy arrays, look no further than the arange and linspace functions. Each one has its own specific purpose, but the appeal here (instead of using range) is that they output NumPy arrays, which are typically easier to work with for data science. Arange returns evenly spaced values within a given interval. Along with a starting and stopping point, you can also define a step size or data type if necessary. Note that the stopping point is a ‘cut-off’ value, so it will not be included in the array output. Linspace is very similar, but with a slight twist. Linspace returns evenly spaced numbers over a specified interval. So given a starting and stopping point, as well as a number of values, linspace will evenly space them out for you in a NumPy array. This is especially helpful for data visualizations and declaring axes when plotting. You may have run into this when dropping a column in Pandas or summing values in a NumPy matrix. If not, then you surely will at some point. Let’s use the example of dropping a column for now: I don’t know how many times I wrote this line of code before I actually knew why I was declaring axis the way I was. As you can probably deduce from above, set axis to 1 if you want to deal with columns and set it to 0 if you want rows. But why is this? 
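Here is a minimal, runnable recap of the concepts covered so far (list comprehensions, lambdas with map and filter, arange and linspace, and dropping a column with axis); the tiny DataFrame at the end is a made-up example, not one from the course.

```python
import numpy as np
import pandas as pd

# List comprehension: build the list in one line instead of a for loop.
squares_loop = []
for x in range(10):
    squares_loop.append(x ** 2)
squares = [x ** 2 for x in range(10)]            # same result, one line

# Lambda functions paired with map and filter.
doubled = list(map(lambda x: x * 2, [1, 2, 3, 4]))          # [2, 4, 6, 8]
evens = list(filter(lambda x: x % 2 == 0, [1, 2, 3, 4]))    # [2, 4]

# arange: start, stop (exclusive), step.  linspace: start, stop, number of values.
np.arange(0, 10, 2)      # array([0, 2, 4, 6, 8])
np.linspace(0, 1, 5)     # array([0.  , 0.25, 0.5 , 0.75, 1.  ])

# Dropping a column with axis=1 (columns) vs. axis=0 (rows) on a made-up DataFrame.
df = pd.DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]})
df = df.drop("c", axis=1)
print(df.shape)          # (2, 2) -> (rows, columns), i.e. (axis 0, axis 1)
```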
My favorite reasoning, or atleast how I remember this: Calling the shape attribute from a Pandas dataframe gives us back a tuple with the first value representing the number of rows and the second value representing the number of columns. If you think about how this is indexed in Python, rows are at 0 and columns are at 1, much like how we declare our axis value. Crazy, right? If you’re familiar with SQL, then these concepts will probably come a lot easier for you. Anyhow, these functions are essentially just ways to combine dataframes in specific ways. It can be difficult to keep track of which is best to use at which time, so let’s review it. Concat allows the user to append one or more dataframes to each other either below or next to it (depending on how you define the axis). Merge combines multiple dataframes on specific, common columns that serve as the primary key. Join, much like merge, combines two dataframes. However, it joins them based on their indices, rather than some specified column. Check out the excellent Pandas documentation for specific syntax and more concrete examples, as well as some special cases that you may run into. Think of apply as a map function, but made for Pandas DataFrames or more specifically, for Series. If you’re not as familiar, Series are pretty similar to NumPy arrays for the most part. Apply sends a function to every element along a column or row depending on what you specify. You might imagine how useful this can be, especially for formatting and manipulating values across a whole DataFrame column, without having to loop at all. Last but certainly not least is pivot tables. If you’re familiar with Microsoft Excel, then you’ve probably heard of pivot tables in some respect. The Pandas built-in pivot_table function creates a spreadsheet-style pivot table as a DataFrame. Note that the levels in the pivot table are stored in MultiIndex objects on the index and columns of the resulting DataFrame. That’s it for now. I hope a couple of these overviews have effectively jogged your memory regarding important yet somewhat tricky methods, functions, and concepts you frequently encounter when using Python for data science. Personally, I know that even the act of writing these out and trying to explain them in simple terms has helped me out a ton. If you’re interested in receiving my weekly rundown of interesting articles and resources focused on data science, machine learning, and artificial intelligence, then subscribe to Self Driven Data Science using the form below! If you enjoyed this post, feel free to hit the clap button and if you’re interested in posts to come, make sure to follow me on Medium at the link below — I’ll be writing and shipping every day this month as part of a 30-Day Challenge. This article was originally published on conordewey.com From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Data Scientist & Writer | www.conordewey.com Sharing concepts, ideas, and codes. " William Koehrsen,2.8K,11,https://towardsdatascience.com/automated-feature-engineering-in-python-99baf11cc219?source=---------2----------------,Automated Feature Engineering in Python – Towards Data Science,"Machine learning is increasingly moving from hand-designed models to automatically optimized pipelines using tools such as H20, TPOT, and auto-sklearn. 
These libraries, along with methods such as random search, aim to simplify the model selection and tuning parts of machine learning by finding the best model for a dataset with little to no manual intervention. However, feature engineering, an arguably more valuable aspect of the machine learning pipeline, remains almost entirely a human labor. Feature engineering, also known as feature creation, is the process of constructing new features from existing data to train a machine learning model. This step can be more important than the actual model used because a machine learning algorithm only learns from the data we give it, and creating features that are relevant to a task is absolutely crucial (see the excellent paper “A Few Useful Things to Know about Machine Learning”). Typically, feature engineering is a drawn-out manual process, relying on domain knowledge, intuition, and data manipulation. This process can be extremely tedious and the final features will be limited both by human subjectivity and time. Automated feature engineering aims to help the data scientist by automatically creating many candidate features out of a dataset from which the best can be selected and used for training. In this article, we will walk through an example of using automated feature engineering with the featuretools Python library. We will use an example dataset to show the basics (stay tuned for future posts using real-world data). The complete code for this article is available on GitHub. Feature engineering means building additional features out of existing data which is often spread across multiple related tables. Feature engineering requires extracting the relevant information from the data and getting it into a single table which can then be used to train a machine learning model. The process of constructing features is very time-consuming because each new feature usually requires several steps to build, especially when using information from more than one table. We can group the operations of feature creation into two categories: transformations and aggregations. Let’s look at a few examples to see these concepts in action. A transformation acts on a single table (thinking in terms of Python, a table is just a Pandas DataFrame ) by creating new features out of one or more of the existing columns. As an example, if we have the table of clients below we can create features by finding the month of the joined column or taking the natural log of the income column. These are both transformations because they use information from only one table. On the other hand, aggregations are performed across tables, and use a one-to-many relationship to group observations and then calculate statistics. For example, if we have another table with information on the loans of clients, where each client may have multiple loans, we can calculate statistics such as the average, maximum, and minimum of loans for each client. This process involves grouping the loans table by the client, calculating the aggregations, and then merging the resulting data into the client data. Here’s how we would do that in Python using the language of Pandas. These operations are not difficult by themselves, but if we have hundreds of variables spread across dozens of tables, this process is not feasible to do by hand. Ideally, we want a solution that can automatically perform transformations and aggregations across multiple tables and combine the resulting data into a single table. 
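The post’s code snippets aren’t reproduced in this text, so here is a rough sketch of that manual process; the tiny clients and loans tables and the loan_amount column are made-up stand-ins for illustration.

```python
import numpy as np
import pandas as pd

# Made-up stand-ins for the clients and loans tables described above.
clients = pd.DataFrame({"client_id": [1, 2],
                        "joined": pd.to_datetime(["2012-03-01", "2015-07-15"]),
                        "income": [45000, 62000]})
loans = pd.DataFrame({"loan_id": [10, 11, 12],
                      "client_id": [1, 1, 2],
                      "loan_amount": [1000, 2500, 4000]})

# Transformations: new features built from columns of a single table.
clients["join_month"] = clients["joined"].dt.month      # month of the joined column
clients["log_income"] = np.log(clients["income"])       # natural log of the income column

# Aggregations: group the child table (loans) by the parent key, compute statistics,
# then merge the result back into the clients table.
loan_stats = (loans.groupby("client_id")["loan_amount"]
                   .agg(["mean", "max", "min"])
                   .add_prefix("loan_amount_")
                   .reset_index())
clients = clients.merge(loan_stats, on="client_id", how="left")
print(clients)
```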
Although Pandas is a great resource, there’s only so much data manipulation we want to do by hand! (For more on manual feature engineering check out the excellent Python Data Science Handbook). Fortunately, featuretools is exactly the solution we are looking for. This open-source Python library will automatically create many features from a set of related tables. Featuretools is based on a method known as “Deep Feature Synthesis”, which sounds a lot more imposing than it actually is (the name comes from stacking multiple features not because it uses deep learning!). Deep feature synthesis stacks multiple transformation and aggregation operations (which are called feature primitives in the vocab of featuretools) to create features from data spread across many tables. Like most ideas in machine learning, it’s a complex method built on a foundation of simple concepts. By learning one building block at a time, we can form a good understanding of this powerful method. First, let’s take a look at our example data. We already saw some of the dataset above, and the complete collection of tables is as follows: If we have a machine learning task, such as predicting whether a client will repay a future loan, we will want to combine all the information about clients into a single table. The tables are related (through the client_id and the loan_id variables) and we could use a series of transformations and aggregations to do this process by hand. However, we will shortly see that we can instead use featuretools to automate the process. The first two concepts of featuretools are entities and entitysets. An entity is simply a table (or a DataFrame if you think in Pandas). An EntitySet is a collection of tables and the relationships between them. Think of an entityset as just another Python data structure, with its own methods and attributes. We can create an empty entityset in featuretools using the following: Now we have to add entities. Each entity must have an index, which is a column with all unique elements. That is, each value in the index must appear in the table only once. The index in the clients dataframe is the client_idbecause each client has only one row in this dataframe. We add an entity with an existing index to an entityset using the following syntax: The loans dataframe also has a unique index, loan_id and the syntax to add this to the entityset is the same as for clients. However, for the payments dataframe, there is no unique index. When we add this entity to the entityset, we need to pass in the parameter make_index = True and specify the name of the index. Also, although featuretools will automatically infer the data type of each column in an entity, we can override this by passing in a dictionary of column types to the parameter variable_types . For this dataframe, even though missed is an integer, this is not a numeric variable since it can only take on 2 discrete values, so we tell featuretools to treat is as a categorical variable. After adding the dataframes to the entityset, we inspect any of them: The column types have been correctly inferred with the modification we specified. Next, we need to specify how the tables in the entityset are related. The best way to think of a relationship between two tables is the analogy of parent to child. This is a one-to-many relationship: each parent can have multiple children. 
In the realm of tables, a parent table has one row for every parent, but the child table may have multiple rows corresponding to multiple children of the same parent. For example, in our dataset, the clients dataframe is a parent of the loans dataframe. Each client has only one row in clients but may have multiple rows in loans. Likewise, loans is the parent of payments because each loan will have multiple payments. The parents are linked to their children by a shared variable. When we perform aggregations, we group the child table by the parent variable and calculate statistics across the children of each parent. To formalize a relationship in featuretools, we only need to specify the variable that links two tables together. The clients and the loans table are linked via the client_id variable and loans and payments are linked with the loan_id. The syntax for creating a relationship and adding it to the entityset are shown below: The entityset now contains the three entities (tables) and the relationships that link these entities together. After adding entities and formalizing relationships, our entityset is complete and we are ready to make features. Before we can quite get to deep feature synthesis, we need to understand feature primitives. We already know what these are, but we have just been calling them by different names! These are simply the basic operations that we use to form new features: New features are created in featuretools using these primitives either by themselves or stacking multiple primitives. Below is a list of some of the feature primitives in featuretools (we can also define custom primitives): These primitives can be used by themselves or combined to create features. To make features with specified primitives we use the ft.dfs function (standing for deep feature synthesis). We pass in the entityset, the target_entity , which is the table where we want to add the features, the selected trans_primitives (transformations), and agg_primitives (aggregations): The result is a dataframe of new features for each client (because we made clients the target_entity). For example, we have the month each client joined which is a transformation feature primitive: We also have a number of aggregation primitives such as the average payment amounts for each client: Even though we specified only a few feature primitives, featuretools created many new features by combining and stacking these primitives. The complete dataframe has 793 columns of new features! We now have all the pieces in place to understand deep feature synthesis (dfs). In fact, we already performed dfs in the previous function call! A deep feature is simply a feature made of stacking multiple primitives and dfs is the name of process that makes these features. The depth of a deep feature is the number of primitives required to make the feature. For example, the MEAN(payments.payment_amount) column is a deep feature with a depth of 1 because it was created using a single aggregation. A feature with a depth of two is LAST(loans(MEAN(payments.payment_amount)) This is made by stacking two aggregations: LAST (most recent) on top of MEAN. This represents the average payment size of the most recent loan for each client. We can stack features to any depth we want, but in practice, I have never gone beyond a depth of 2. After this point, the features are difficult to interpret, but I encourage anyone interested to try “going deeper”. 
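Again, since the post’s snippets aren’t included in this text, below is a sketch of the whole setup using the pre-1.0 featuretools API (entity_from_dataframe, variable_types, ft.dfs). The toy tables and the particular primitives chosen (mean, max, last, month) are assumptions for illustration, and newer featuretools releases have since renamed parts of this API.

```python
import pandas as pd
import featuretools as ft  # written against the pre-1.0 featuretools API

# Tiny stand-ins for the three tables (clients, loans, payments) described above.
clients = pd.DataFrame({"client_id": [1, 2],
                        "joined": pd.to_datetime(["2012-03-01", "2015-07-15"]),
                        "income": [45000, 62000]})
loans = pd.DataFrame({"loan_id": [10, 11, 12],
                      "client_id": [1, 1, 2],
                      "loan_amount": [1000, 2500, 4000]})
payments = pd.DataFrame({"loan_id": [10, 10, 11, 12],
                         "payment_amount": [100, 100, 250, 400],
                         "missed": [0, 1, 0, 0]})

# An EntitySet is a collection of tables (entities) plus the relationships between them.
es = ft.EntitySet(id="clients")
es = es.entity_from_dataframe(entity_id="clients", dataframe=clients, index="client_id")
es = es.entity_from_dataframe(entity_id="loans", dataframe=loans, index="loan_id")
# payments has no unique index, so we ask featuretools to make one; 'missed' only takes
# discrete values, so we tell featuretools to treat it as categorical rather than numeric.
es = es.entity_from_dataframe(entity_id="payments", dataframe=payments,
                              make_index=True, index="payment_id",
                              variable_types={"missed": ft.variable_types.Categorical})

# Relationships: clients -> loans (via client_id) and loans -> payments (via loan_id).
es = es.add_relationship(ft.Relationship(es["clients"]["client_id"],
                                         es["loans"]["client_id"]))
es = es.add_relationship(ft.Relationship(es["loans"]["loan_id"],
                                         es["payments"]["loan_id"]))

# Deep feature synthesis: stack the chosen aggregation and transformation primitives
# to build new features for every row of the target entity (clients).
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_entity="clients",
                                      agg_primitives=["mean", "max", "last"],
                                      trans_primitives=["month"])
print(feature_matrix.columns.tolist())
```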
We do not have to manually specify the feature primitives, but instead can let featuretools automatically choose features for us. To do this, we use the same ft.dfs function call but do not pass in any feature primitives: Featuretools has built many new features for us to use. While this process does automatically create new features, it will not replace the data scientist because we still have to figure out what to do with all these features. For example, if our goal is to predict whether or not a client will repay a loan, we could look for the features most correlated with a specified outcome. Moreover, if we have domain knowledge, we can use that to choose specific feature primitives or seed deep feature synthesis with candidate features. Automated feature engineering has solved one problem, but created another: too many features. Although it’s difficult to say before fitting a model which of these features will be important, it’s likely not all of them will be relevant to a task we want to train our model on. Moreover, having too many features can lead to poor model performance because the less useful features drown out those that are more important. The problem of too many features is known as the curse of dimensionality. As the number of features increases (the dimension of the data grows) it becomes more and more difficult for a model to learn the mapping between features and targets. In fact, the amount of data needed for the model to perform well scales exponentially with the number of features. The curse of dimensionality is combated with feature reduction (also known as feature selection): the process of removing irrelevant features. This can take on many forms: Principal Component Analysis (PCA), SelectKBest, using feature importances from a model, or auto-encoding using deep neural networks. However, feature reduction is a different topic for another article. For now, we know that we can use featuretools to create numerous features from many tables with minimal effort! Like many topics in machine learning, automated feature engineering with featuretools is a complicated concept built on simple ideas. Using concepts of entitysets, entities, and relationships, featuretools can perform deep feature synthesis to create new features. Deep feature synthesis in turn stacks feature primitives — aggregations, which act across a one-to-many relationship between tables, and transformations, functions applied to one or more columns in a single table — to build new features from multiple tables. In future articles, I’ll show how to use this technique on a real world problem, the Home Credit Default Risk competition currently being hosted on Kaggle. Stay tuned for that post, and in the meantime, read this introduction to get started in the competition! I hope that you can now use automated feature engineering as an aid in a data science pipeline. Our models are only as good as the data we give them, and automated feature engineering can help to make the feature creation process more efficient. For more information on featuretools, including advanced usage, check out the online documentation. To see how featuretools is used in practice, read about the work of Feature Labs, the company behind the open-source library. As always, I welcome feedback and constructive criticism and can be reached on Twitter @koehrsen_will. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. 
Data Scientist and Master Student, Data Science Communicator and Advocate Sharing concepts, ideas, and codes. " Gant Laborde,1.3K,7,https://medium.freecodecamp.org/machine-learning-how-to-go-from-zero-to-hero-40e26f8aa6da?source=---------3----------------,Machine Learning: how to go from Zero to Hero – freeCodeCamp,"If your understanding of A.I. and Machine Learning is a big question mark, then this is the blog post for you. Here, I gradually increase your AwesomenessicityTM by gluing inspirational videos together with friendly text. Sit down and relax. These videos take time, and if they don’t inspire you to continue to the next section, fair enough. However, if you find yourself at the bottom of this article, you’ve earned your well-rounded knowledge and passion for this new world. Where you go from there is up to you. A.I. was always cool, from moving a paddle in Pong to lighting you up with combos in Street Fighter. A.I. has always revolved around a programmer’s functional guess at how something should behave. Fun, but programmers aren’t always gifted in programming A.I. as we often see. Just Google “epic game fails” to see glitches in A.I., physics, and sometimes even experienced human players. Regardless, A.I. has a new talent. You can teach a computer to play video games, understand language, and even how to identify people or things. This tip-of-the-iceberg new skill comes from an old concept that only recently got the processing power to exist outside of theory. I’m talking about Machine Learning. You don’t need to come up with advanced algorithms anymore. You just have to teach a computer to come up with its own advanced algorithm. So how does something like that even work? An algorithm isn’t really written as much as it is sort of... bred. I’m not using breeding as an analogy. Watch this short video, which gives excellent commentary and animations to the high-level concept of creating the A.I. Wow! Right? That’s a crazy process! Now how is it that we can’t even understand the algorithm when it’s done? One great visual was when the A.I. was written to beat Mario games. As a human, we all understand how to play a side-scroller, but identifying the predictive strategy of the resulting A.I. is insane. Impressed? There’s something amazing about this idea, right? The only problem is we don’t know Machine Learning, and we don’t know how to hook it up to video games. Fortunately for you, Elon Musk already provided a non-profit company to do the latter. Yes, in a dozen lines of code you can hook up any A.I. you want to countless games/tasks! I have two good answers on why you should care. Firstly, Machine Learning (ML) is making computers do things that we’ve never made computers do before. If you want to do something new, not just new to you, but to the world, you can do it with ML. Secondly, if you don’t influence the world, the world will influence you. Right now significant companies are investing in ML, and we’re already seeing it change the world. Thought-leaders are warning that we can’t let this new age of algorithms exist outside of the public eye. Imagine if a few corporate monoliths controlled the Internet. If we don’t take up arms, the science won’t be ours. I think Christian Heilmann said it best in his talk on ML. The concept is useful and cool. We understand it at a high level, but what the heck is actually happening? How does this work? If you want to jump straight in, I suggest you skip this section and move on to the next “How Do I Get Started” section. 
If you’re motivated to be a DOer in ML, you won’t need these videos. If you’re still trying to grasp how this could even be a thing, the following video is perfect for walking you through the logic, using the classic ML problem of handwriting. Pretty cool huh? That video shows that each layer gets simpler rather than more complicated. Like the function is chewing data into smaller pieces that end in an abstract concept. You can get your hands dirty in interacting with this process on this site (by Adam Harley). It’s cool watching data go through a trained model, but you can even watch your neural network get trained. One of the classic real-world examples of Machine Learning in action is the iris data set from 1936. In a presentation I attended by JavaFXpert’s overview on Machine Learning, I learned how you can use his tool to visualize the adjustment and back propagation of weights to neurons on a neural network. You get to watch it train the neural model! Even if you’re not a Java buff, the presentation Jim gives on all things Machine Learning is a pretty cool 1.5+ hour introduction into ML concepts, which includes more info on many of the examples above. These concepts are exciting! Are you ready to be the Einstein of this new era? Breakthroughs are happening every day, so get started now. There are tons of resources available. I’ll be recommending two approaches. In this approach, you’ll understand Machine Learning down to the algorithms and the math. I know this way sounds tough, but how cool would it be to really get into the details and code this stuff from scratch! If you want to be a force in ML, and hold your own in deep conversations, then this is the route for you. I recommend that you try out Brilliant.org’s app (always great for any science lover) and take the Artificial Neural Network course. This course has no time limits and helps you learn ML while killing time in line on your phone. This one costs money after Level 1. Combine the above with simultaneous enrollment in Andrew Ng’s Stanford course on “Machine Learning in 11 weeks”. This is the course that Jim Weaver recommended in his video above. I’ve also had this course independently suggested to me by Jen Looper. Everyone provides a caveat that this course is tough. For some of you that’s a show stopper, but for others, that’s why you’re going to put yourself through it and collect a certificate saying you did. This course is 100% free. You only have to pay for a certificate if you want one. With those two courses, you’ll have a LOT of work to do. Everyone should be impressed if you make it through because that’s not simple. But more so, if you do make it through, you’ll have a deep understanding of the implementation of Machine Learning that will catapult you into successfully applying it in new and world-changing ways. If you’re not interested in writing the algorithms, but you want to use them to create the next breathtaking website/app, you should jump into TensorFlow and the crash course. TensorFlow is the de facto open-source software library for machine learning. It can be used in countless ways and even with JavaScript. Here’s a crash course. Plenty more information on available courses and rankings can be found here. If taking a course is not your style, you’re still in luck. You don’t have to learn the nitty-gritty of ML in order to use it today. You can efficiently utilize ML as a service in many ways with tech giants who have trained models ready. 
I would still caution you that there’s no guarantee that your data is safe or even yours, but the offerings of services for ML are quite attractive! Using an ML service might be the best solution for you if you’re excited and able to upload your data to Amazon/Microsoft/Google. I like to think of these services as a gateway drug to advanced ML. Either way, it’s good to get started now. I have to say thank you to all the aforementioned people and videos. They were my inspiration to get started, and though I’m still a newb in the ML world, I’m happy to light the path for others as we embrace this awe-inspiring age we find ourselves in. It’s imperative to reach out and connect with people if you take up learning this craft. Without friendly faces, answers, and sounding boards, anything can be hard. Just being able to ask and get a response is a game changer. Add me, and add the people mentioned above. Friendly people with friendly advice helps! See? I hope this article has inspired you and those around you to learn ML! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Software Consultant, Adjunct Professor, Published Author, Award Winning Speaker, Mentor, Organizer and Immature Nerd :D — Lately full of React Native Tech Our community publishes stories worth reading on development, design, and data science. " Emmanuel Ameisen,935,11,https://blog.insightdatascience.com/reinforcement-learning-from-scratch-819b65f074d8?source=---------4----------------,Reinforcement Learning from scratch – Insight Data,"Want to learn about applied Artificial Intelligence from leading practitioners in Silicon Valley, New York, or Toronto? Learn more about the Insight Artificial Intelligence Fellows Program. Are you a company working in AI and would like to get involved in the Insight AI Fellows Program? Feel free to get in touch. Recently, I gave a talk at the O’Reilly AI conference in Beijing about some of the interesting lessons we’ve learned in the world of NLP. While there, I was lucky enough to attend a tutorial on Deep Reinforcement Learning (Deep RL) from scratch by Unity Technologies. I thought that the session, led by Arthur Juliani, was extremely informative and wanted to share some big takeaways below. In our conversations with companies, we’ve seen a rise of interesting Deep RL applications, tools and results. In parallel, the inner workings and applications of Deep RL, such as AlphaGo pictured above, can often seem esoteric and hard to understand. In this post, I will give an overview of core aspects of the field that can be understood by anyone. Many of the visuals are from the slides of the talk, and some are new. The explanations and opinions are mine. If anything is unclear, reach out to me here! Deep RL is a field that has seen vast amounts of research interest, including learning to play Atari games, beating pro players at Dota 2, and defeating Go champions. Contrary to many classical Deep Learning problems that often focus on perception (does this image contain a stop sign?), Deep RL adds the dimension of actions that influence the environment (what is the goal, and how do I get there?). In dialog systems for example, classical Deep Learning aims to learn the right response for a given query. On the other hand, Deep Reinforcement Learning focuses on the right sequences of sentences that will lead to a positive outcome, for example a happy customer. 
This makes Deep RL particularly attractive for tasks that require planning and adaptation, such as manufacturing or self-driving. However, industry applications have trailed behind the rapidly advancing results coming out of the research community. A major reason is that Deep RL often requires an agent to experiment millions of times before learning anything useful. The best way to do this rapidly is by using a simulation environment. This tutorial will be using Unity to create environments to train agents in. For this workshop led by Arthur Juliani and Leon Chen, their goal was to get every participants to successfully train multiple Deep RL algorithms in 4 hours. A tall order! Below, is a comprehensive overview of many of the main algorithms that power Deep RL today. For a more complete set of tutorials, Arthur Juliani wrote an 8-part series starting here. Deep RL can be used to best the top human players at Go, but to understand how that’s done, you first need to understand a few simple concepts, starting with much easier problems. 1/It all starts with slot machines Let’s imagine you are faced with 4 chests that you can pick from at each turn. Each of them have a different average payout, and your goal is to maximize the total payout you receive after a fixed number of turns. This is a classic problem called Multi-armed bandits and is where we will start. The crux of the problem is to balance exploration, which helps us learn about which states are good, and exploitation, where we now use what we know to pick the best slot machine. Here, we will utilize a value function that maps our actions to an estimated reward, called the Q function. First, we’ll initialize all Q values at equal values. Then, we’ll update the Q value of each action (picking each chest) based on how good the payout was after choosing this action. This allows us to learn a good value function. We will approximate our Q function using a neural network (starting with a very shallow one) that learns a probability distribution (by using a softmax) over the 4 potential chests. While the value function tells us how good we estimate each action to be, the policy is the function that determines which actions we end up taking. Intuitively, we might want to use a policy that picks the action with the highest Q value. This performs poorly in practice, as our Q estimates will be very wrong at the start before we gather enough experience through trial and error. This is why we need to add a mechanism to our policy to encourage exploration. One way to do that is to use epsilon greedy, which consists of taking a random action with probability epsilon. We start with epsilon being close to 1, always choosing random actions, and lower epsilon as we go along and learn more about which chests are good. Eventually, we learn which chests are best. In practice, we might want to take a more subtle approach than either taking the action we think is the best, or a random action. A popular method is Boltzmann Exploration, which adjust probabilities based on our current estimate of how good each chest is, adding in a randomness factor. 2/Adding different states The previous example was a world in which we were always in the same state, waiting to pick from the same 4 chests in front of us. Most real-word problems consist of many different states. That is what we will add to our environment next. Now, the background behind chests alternates between 3 colors at each turn, changing the average values of the chests. 
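The workshop trains a small neural network with a softmax output for this, but the core idea fits in a few lines of plain NumPy; here is a minimal tabular sketch with made-up payouts.

```python
import numpy as np

rng = np.random.default_rng(0)

true_payouts = [1.0, 0.5, 2.0, 0.1]    # made-up average payout of each chest
n_actions = len(true_payouts)

Q = np.zeros(n_actions)                # value estimate for each chest, initialised equal
counts = np.zeros(n_actions)
epsilon = 1.0                          # start fully random, decay toward exploitation

for step in range(2000):
    # Epsilon-greedy policy: explore with probability epsilon, otherwise exploit.
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q))

    reward = rng.normal(true_payouts[action], 1.0)   # noisy payout from the chosen chest

    # Move this chest's value estimate toward the observed reward (incremental mean).
    counts[action] += 1
    Q[action] += (reward - Q[action]) / counts[action]

    epsilon = max(0.05, epsilon * 0.995)             # lower epsilon as we learn

print("Estimated values:", np.round(Q, 2))           # should approach true_payouts
```

Once the background color starts changing the payouts, a single vector of estimates per action is no longer enough.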
This means we need to learn a Q function that depends not only on the action (the chest we pick), but the state (what the color of the background is). This version of the problem is called Contextual Multi-armed Bandits. Surprisingly, we can use the same approach as before. The only thing we need to add is an extra dense layer to our neural network, that will take in as input a vector representing the current state of the world. 3/Learning about the consequences of our actions There is another key factor that makes our current problem simpler than mosts. In most environments, such as in the maze depicted above, the actions that we take have an impact on the state of the world. If we move up on this grid, we might receive a reward or we might receive nothing, but the next turn we will be in a different state. This is where we finally introduce a need for planning. First, we will define our Q function as the immediate reward in our current state, plus the discounted reward we are expecting by taking all of our future actions. This solution works if our Q estimate of states is accurate, so how can we learn a good estimate? We will use a method called Temporal Difference (TD) learning to learn a good Q function. The idea is to only look at a limited number of steps in the future. TD(1) for example, only uses the next 2 states to evaluate the reward. Surprisingly, we can use TD(0), which looks at the current state, and our estimate of the reward the next turn, and get great results. The structure of the network is the same, but we need to go through one forward step before receiving the error. We then use this error to back propagate gradients, like in traditional Deep Learning, and update our value estimates. 3+/Introducing Monte Carlo Another method to estimate the eventual success of our actions is Monte Carlo Estimates. This consists of playing out the entire episode with our current policy until we reach an end (success by reaching a green block or failure by reaching a red block in the image above) and use that result to update our value estimates for each traversed state. This allows us to propagate values efficiently in one batch at the end of an episode, instead of every time we make a move. The cost is that we are introducing noise to our estimates, since we attribute very distant rewards to them. 4/The world is rarely discrete The previous methods were using neural networks to approximate our value estimates by mapping from a discrete number of states and actions to a value. In the maze for example, there were 49 states (squares) and 4 actions (move in each adjacent direction). In this environment, we are trying to learn how to balance a ball on a 2 dimensional paddle, by deciding at each time step whether we want to tilt the paddle left or right. Here, the state space becomes continuous (the angle of the paddle, and the position of the ball). The good news is, we can still use Neural Networks to approximate this function! A note about off-policy vs on-policy learning: The methods we used previously, are off-policy methods, meaning we can generate data with any strategy(using epsilon greedy for example) and learn from it. On-policy methods can only learn from actions that were taken following our policy (remember, a policy is the method we use to determine which actions to take). 
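The tutorial does this with a neural network over the maze’s states, but the TD(0) idea is easiest to see in a tabular sketch; the one-dimensional corridor environment below is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny corridor: states 0..4, start at 0, reward of +1 for reaching state 4.
n_states, n_actions = 5, 2              # actions: 0 = move left, 1 = move right
gamma, alpha, epsilon = 0.95, 0.1, 0.1
Q = np.zeros((n_states, n_actions))

def step(state, action):
    nxt = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == n_states - 1 else 0.0
    return nxt, reward, nxt == n_states - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy behavior policy used only to gather experience.
        action = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[state]))
        nxt, reward, done = step(state, action)
        # TD(0) target: immediate reward plus the discounted estimate of the next state's value.
        target = reward + (0.0 if done else gamma * np.max(Q[nxt]))
        Q[state, action] += alpha * (target - Q[state, action])  # nudge estimate toward target
        state = nxt

print(np.round(Q, 2))   # the value of moving right should dominate in every state
```

Because the target above uses the best next action rather than the action the behavior policy actually took, this particular update is off-policy; on-policy methods, as just noted, have to learn from the actions their own policy chooses.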
This constrains our learning process, as we have to have an exploration strategy that is built in to the policy itself, but allows us to tie results directly to our reasoning, and enables us to learn more efficiently. The approach we will use here is called Policy Gradients, and is an on-policy method. Previously, we were first learning a value function Q for each action in each state and then building a policy on top. In Vanilla Policy Gradient, we still use Monte Carlo Estimates, but we learn our policy directly through a loss function that increases the probability of choosing rewarding actions. Since we are learning on policy, we cannot use methods such as epsilon greedy (which includes random choices), to get our agent to explore the environment. The way that we encourage exploration is by using a method called entropy regularization, which pushes our probability estimates to be wider, and thus will encourage us to make riskier choices to explore the space. 4+/Leveraging deep learning for representations In practice, many state of the art RL methods require learning both a policy and value estimates. The way we do this with deep learning is by having both be two separate outputs of the same backbone neural network, which will make it easier for our neural network to learn good representations. One method to do this is Advantage Actor Critic (A2C). We learn our policy directly with policy gradients (defined above), and learn a value function using something called Advantage. Instead of updating our value function based on rewards, we update it based on our advantage, which measures how much better or worse an action was than our previous value function estimated it to be. This helps make learning more stable compared to simple Q Learning and Vanilla Policy Gradients. 5/Learning directly from the screen There is an additional advantage to using Deep Learning for these methods, which is that Deep Neural Networks excel at perceptive tasks. When a human plays a game, the information received is not a list of states, but an image (usually of a screen, or a board, or the surrounding environment). Image-based Learning combines a Convolutional Neural Network (CNN) with RL. In this environment, we pass in a raw image instead of features, and add a 2 layer CNN to our architecture without changing anything else! We can even inspect activations to see what the network picks up on to determine value, and policy. In the example below, we can see that the network uses the current score and distant obstacles to estimate the value of the current state, while focusing on nearby obstacles for determining actions. Neat! As a side note, while toying around with the provided implementation, I’ve found that visual learning is very sensitive to hyperparameters. Changing the discount rate slightly for example, completely prevented the neural network from learning even on a toy application. This is a widely known problem, but it is interesting to see it first hand. 6/Nuanced actions So far, we’ve played with environments with continuous and discrete state spaces. However, every environment we studied had a discrete action space: we could move in one of four directions, or tilt the paddle to the left or right. Ideally, for applications such as self-driving cars, we would like to learn continuous actions, such as turning the steering wheel between 0 and 360 degrees. In this environment called 3D ball world, we can choose to tilt the paddle to any value on each of its axes. 
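None of the talk’s code is reproduced in this text, so here is a small NumPy sketch of the vanilla policy-gradient loss with entropy regularization; in practice a deep learning framework would backpropagate through it, and the episode data below is made up.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def discounted_returns(rewards, gamma=0.99):
    """Monte Carlo estimate: sum of discounted rewards from each step to the episode's end."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def policy_gradient_loss(logits, actions, returns, entropy_beta=0.01):
    """Vanilla policy gradient: -log pi(a|s) * return, minus an entropy bonus
    that keeps the action distribution wide enough to keep exploring."""
    probs = softmax(logits)
    chosen = probs[np.arange(len(actions)), actions]
    pg_loss = -(np.log(chosen) * returns).mean()
    entropy = -(probs * np.log(probs + 1e-8)).sum(axis=1).mean()
    return pg_loss - entropy_beta * entropy

# Made-up episode: logits from some policy network, the actions sampled, rewards received.
logits = np.array([[0.2, -0.1], [0.5, 0.3], [0.0, 1.0]])
actions = np.array([0, 1, 1])
rewards = np.array([0.0, 0.0, 1.0])
print(policy_gradient_loss(logits, actions, discounted_returns(rewards)))
```

A2C swaps the raw returns above for advantages computed against a learned value head, and the same loss applies whether the policy outputs a softmax over discrete actions or, for continuous control like the 3D ball world above, the mean and variance of a Gaussian.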
This gives us more control as to how we perform actions, but makes the action space much larger. We can approach this by approximating our potential choices with Gaussian distributions. We learn a probability distribution over potential actions by learning the mean and variance of a Gaussian distribution, and our policy we sample from that distribution. Simple, in theory :). 7/Next steps for the brave There are a few concepts that separate the algorithms described above from state of the art approaches. It’s interesting to see that conceptually, the best robotics and game-playing algorithms are not that far away from the ones we just explored: That’s it for this overview, I hope this has been informative and fun! If you are looking to dive deeper into the theory of RL, give Arthur’s posts a read, or diving deeper by following David Silver’s UCL course. If you are looking to learn more about the projects we do at Insight, or how we work with companies, please check us out below, or reach out to me here. Want to learn about applied Artificial Intelligence from leading practitioners in Silicon Valley, New York, or Toronto? Learn more about the Insight Artificial Intelligence Fellows Program. Are you a company working in AI and would like to get involved in the Insight AI Fellows Program? Feel free to get in touch. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. AI Lead at Insight AI @EmmanuelAmeisen Insight Fellows Program - Your bridge to a career in data " Irhum Shafkat,2K,15,https://towardsdatascience.com/intuitively-understanding-convolutions-for-deep-learning-1f6f42faee1?source=---------5----------------,Intuitively Understanding Convolutions for Deep Learning,"The advent of powerful and versatile deep learning frameworks in recent years has made it possible to implement convolution layers into a deep learning model an extremely simple task, often achievable in a single line of code. However, understanding convolutions, especially for the first time can often feel a bit unnerving, with terms like kernels, filters, channels and so on all stacked onto each other. Yet, convolutions as a concept are fascinatingly powerful and highly extensible, and in this post, we’ll break down the mechanics of the convolution operation, step-by-step, relate it to the standard fully connected network, and explore just how they build up a strong visual hierarchy, making them powerful feature extractors for images. The 2D convolution is a fairly simple operation at heart: you start with a kernel, which is simply a small matrix of weights. This kernel “slides” over the 2D input data, performing an elementwise multiplication with the part of the input it is currently on, and then summing up the results into a single output pixel. The kernel repeats this process for every location it slides over, converting a 2D matrix of features into yet another 2D matrix of features. The output features are essentially, the weighted sums (with the weights being the values of the kernel itself) of the input features located roughly in the same location of the output pixel on the input layer. Whether or not an input feature falls within this “roughly same location”, gets determined directly by whether it’s in the area of the kernel that produced the output or not. This means the size of the kernel directly determines how many (or few) input features get combined in the production of a new output feature. This is all in pretty stark contrast to a fully connected layer. 
In the above example, we have 5×5=25 input features, and 3×3=9 output features. If this were a standard fully connected layer, you’d have a weight matrix of 25×9 = 225 parameters, with every output feature being the weighted sum of every single input feature. Convolutions allow us to do this transformation with only 9 parameters, with each output feature, instead of “looking at” every input feature, only getting to “look” at input features coming from roughly the same location. Do take note of this, as it’ll be critical to our later discussion. Before we move on, it’s definitely worth looking into two techniques that are commonplace in convolution layers: Padding and Strides. Padding does something pretty clever to solve the problem of edge pixels never getting to sit at the kernel’s center: pad the edges with extra, “fake” pixels (usually of value 0, hence the oft-used term “zero padding”). This way, the kernel when sliding can allow the original edge pixels to be at its center, while extending into the fake pixels beyond the edge, producing an output the same size as the input. The idea of the stride is to skip some of the slide locations of the kernel. A stride of 1 means picking slides a pixel apart, so basically every single slide, acting as a standard convolution. A stride of 2 means picking slides 2 pixels apart, skipping every other slide in the process, downsizing by roughly a factor of 2; a stride of 3 means picking slides 3 pixels apart, skipping two slides at a time, downsizing roughly by a factor of 3, and so on. More modern networks, such as the ResNet architectures, entirely forgo pooling layers in their internal layers, in favor of strided convolutions when needing to reduce their output sizes. Of course, the diagrams above only deal with the case where the image has a single input channel. In practice, most input images have 3 channels, and that number only increases the deeper you go into a network. It’s pretty easy to think of channels, in general, as being a “view” of the image as a whole, emphasising some aspects, de-emphasising others. So this is where a key distinction between terms comes in handy: whereas in the 1-channel case the terms filter and kernel are interchangeable, in the general case they’re actually pretty different. Each filter actually happens to be a collection of kernels, with there being one kernel for every single input channel to the layer, and each kernel being unique. Each filter in a convolution layer produces one and only one output channel, and they do it like so: Each of the filter’s kernels “slides” over its respective input channel, producing a processed version of it. Some kernels may have stronger weights than others, to give more emphasis to certain input channels than others (e.g. a filter may have a kernel for the red channel with stronger weights than the others, and hence respond more to differences in the red channel features than the others). The per-channel processed versions are then summed together to form one channel: the kernels of a filter each produce one processed version of their channel, and the filter as a whole produces one overall output channel. Finally, there’s the bias term. Each filter has one bias term, which gets added to the output channel produced so far to give the final output channel. And with the single filter case down, the case for any number of filters is identical: each filter processes the input with its own, different set of kernels and a scalar bias, following the process described above and producing a single output channel (sketched in code just below).
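Here is a minimal sketch of that whole process, with hypothetical shapes: each filter holds one kernel per input channel, the per-channel results are summed across channels, and a per-filter bias is added.

import numpy as np

def conv_layer(image, filters, biases, stride=1, pad=0):
    # image:   (C_in, H, W)
    # filters: (C_out, C_in, kh, kw) -- each filter is a stack of per-channel kernels
    # biases:  (C_out,)              -- one bias per filter / output channel
    c_in, h, w = image.shape
    c_out, _, kh, kw = filters.shape
    if pad:
        image = np.pad(image, ((0, 0), (pad, pad), (pad, pad)))
        h, w = h + 2 * pad, w + 2 * pad
    oh, ow = (h - kh) // stride + 1, (w - kw) // stride + 1
    out = np.zeros((c_out, oh, ow))
    for f in range(c_out):                       # one output channel per filter
        for i in range(oh):
            for j in range(ow):
                patch = image[:, i*stride:i*stride+kh, j*stride:j*stride+kw]
                # apply every per-channel kernel, sum across channels, add the bias
                out[f, i, j] = np.sum(patch * filters[f]) + biases[f]
    return out

x = np.random.rand(3, 5, 5)                       # 3-channel 5x5 input
w = np.random.rand(4, 3, 3, 3)                    # 4 filters, each holding 3 kernels
b = np.zeros(4)
print(conv_layer(x, w, b, stride=1, pad=1).shape) # (4, 5, 5): "same"-padded output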
They are then concatenated together to produce the overall output, with the number of output channels being the number of filters. A nonlinearity is then usually applied before passing this as input to another convolution layer, which then repeats this process. Even with the mechanics of the convolution layer down, it can still be hard to relate it back to a standard feed-forward network, and it still doesn’t explain why convolutions scale to, and work so much better for, image data. Suppose we have a 4×4 input, and we want to transform it into a 2×2 grid. If we were using a feedforward network, we’d reshape the 4×4 input into a vector of length 16, and pass it through a densely connected layer with 16 inputs and 4 outputs. One could visualize the weight matrix W for a layer: And although the convolution kernel operation may seem a bit strange at first, it is still a linear transformation with an equivalent transformation matrix. If we were to use a kernel K of size 3×3 on the reshaped 4×4 input to get a 2×2 output, the equivalent transformation matrix would be: (Note: while the above matrix is an equivalent transformation matrix, the actual operation is usually implemented as a very different matrix multiplication[2]) The convolution then, as a whole, is still a linear transformation, but at the same time it’s also a dramatically different kind of transformation. For a matrix with 64 elements, there are just 9 parameters, which themselves are reused several times. Each output node only gets to see a select number of inputs (the ones inside the kernel). There is no interaction with any of the other inputs, as the weights to them are set to 0. It’s useful to see the convolution operation as a hard prior on the weight matrix. In this context, by prior, I mean predefined network parameters. For example, when you use a pretrained model for image classification, you use the pretrained network parameters as your prior, as a feature extractor to your final densely connected layer. In that sense, there’s a direct intuition about why both are so efficient (compared to their alternatives). Transfer learning is efficient by orders of magnitude compared to random initialization, because you only really need to optimize the parameters of the final fully connected layer, which means you can have fantastic performance with only a few dozen images per class. Here, you don’t need to optimize all 64 parameters, because we set most of them to zero (and they’ll stay that way), and the rest we convert to shared parameters, resulting in only 9 actual parameters to optimize. This efficiency matters, because when you move from the 784 inputs of MNIST to real world 224×224×3 images, that’s over 150,000 inputs. A dense layer attempting to halve the input to 75,000 inputs would still require over 10 billion parameters. For comparison, the entirety of ResNet-50 has some 25 million parameters. So fixing some parameters to 0, and tying parameters, increases efficiency, but unlike the transfer learning case, where we know the prior is good because it works on a large general set of images, how do we know this is any good? The answer lies in the feature combinations the prior leads the parameters to learn. Early on in this article, we discussed that: So with backpropagation coming in all the way from the classification nodes of the network, the kernels have the interesting task of learning weights to produce features only from a set of local inputs.
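As an aside before moving on, a quick back-of-the-envelope check of the parameter counts quoted above:

inputs = 224 * 224 * 3              # a real-world RGB image: 150,528 input values
outputs = inputs // 2               # a dense layer roughly halving the input
dense_params = inputs * outputs     # weights only, ignoring biases
print(inputs)                       # 150528
print(f"{dense_params:,}")          # 11,329,339,392 -> over 10 billion parameters
conv_params = 3 * 3                 # a single 3x3 kernel, reused at every location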
Additionally, because the kernel itself is applied across the entire image, the features the kernel learns must be general enough to come from any part of the image. If this were any other kind of data, e.g. categorical data of app installs, this would’ve been a disaster, for just because your number of app installs and app type columns are next to each other doesn’t mean they have any “local, shared features” common with app install dates and time used. Sure, the four may have an underlying higher level feature (e.g. which apps people want most) that can be found, but that gives us no reason to believe the parameters for the first two are exactly the same as the parameters for the latter two. The four could’ve been in any (consistent) order and still be valid! Pixels, however, always appear in a consistent order, and nearby pixels influence a pixel: e.g. if all nearby pixels are red, it’s pretty likely the pixel is also red. If there are deviations, that’s an interesting anomaly that could be converted into a feature, and all this can be detected from comparing a pixel with its neighbors, with other pixels in its locality. And this idea is really what a lot of earlier computer vision feature extraction methods were based around. For instance, for edge detection, one can use a Sobel edge detection filter, a kernel with fixed parameters, operating just like the standard one-channel convolution: For a non-edge containing grid (e.g. the background sky), most of the pixels are the same value, so the overall output of the kernel at that point is 0. For a grid with a vertical edge, there is a difference between the pixels to the left and right of the edge, and the kernel computes that difference to be non-zero, activating and revealing the edges. The kernel only works on a 3×3 grid at a time, detecting anomalies on a local scale, yet, when applied across the entire image, it is enough to detect a certain feature on a global scale, anywhere in the image! So the key difference we make with deep learning is to ask this question: Can useful kernels be learnt? For early layers operating on raw pixels, we could reasonably expect feature detectors of fairly low level features, like edges, lines, etc. There’s an entire branch of deep learning research focused on making neural network models interpretable. One of the most powerful tools to come out of that is Feature Visualization using optimization[3]. The idea at its core is simple: optimize an image (usually initialized with random noise) to activate a filter as strongly as possible. This does make intuitive sense: if the optimized image is completely filled with edges, that’s strong evidence that’s what the filter itself is looking for and is activated by. Using this, we can peek into the learnt filters, and the results are stunning: One important thing to notice here is that convolved images are still images. The output of a small grid of pixels from the top left of an image will still be on the top left. So you can run another convolution layer on top of another (such as the two on the left) to extract deeper features, which we visualize. Yet, however deep our feature detectors get, without any further changes they’ll still be operating on very small patches of the image. No matter how deep your detectors are, you can’t detect faces from a 3×3 grid. And this is where the idea of the receptive field comes in.
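Before moving on to the receptive field, here is the Sobel example in code form: a fixed 3×3 kernel applied with the same one-channel convolution as earlier (the helper is repeated here so the snippet is self-contained), on a toy image that is dark on the left and bright on the right.

import numpy as np

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # fixed, hand-designed parameters

image = np.zeros((5, 5))
image[:, 3:] = 1.0                              # flat dark region next to a flat bright region

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    return np.array([[np.sum(image[i:i+kh, j:j+kw] * kernel)
                      for j in range(ow)] for i in range(oh)])

print(conv2d(image, sobel_x))
# Windows inside a flat region produce 0; windows straddling the vertical edge light up.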
An essential design choice of any CNN architecture is that the input sizes grow smaller and smaller from the start to the end of the network, while the number of channels grows deeper. This, as mentioned earlier, is often done through strides or pooling layers. Locality determines what inputs from the previous layer the outputs get to see. The receptive field determines what area of the original input to the entire network the output gets to see. The idea of a strided convolution is that we only process slides a fixed distance apart, and skip the ones in the middle. From a different point of view, we only keep outputs a fixed distance apart, and remove the rest[1]. We then apply a nonlinearity to the output, and per usual, then stack another new convolution layer on top. And this is where things get interesting. Even if we were to apply a kernel of the same size (3×3), having the same local area, to the output of the strided convolution, the kernel would have a larger effective receptive field: This is because the output of the strided layer still does represent the same image. It is not so much cropping as it is resizing; the only thing is that each single pixel in the output is a “representative” of a larger area (whose other pixels were discarded) from the same rough location in the original input. So when the next layer’s kernel operates on the output, it’s operating on pixels collected from a larger area. (Note: if you’re familiar with dilated convolutions, note that the above is not a dilated convolution. Both are methods of increasing the receptive field, but dilated convolutions are a single layer, while this takes place on a regular convolution following a strided convolution, with a nonlinearity in between.) This expansion of the receptive field allows the convolution layers to combine the low level features (lines, edges) into higher level features (curves, textures), as we see in the mixed3a layer. Followed by a pooling/strided layer, the network continues to create detectors for even higher level features (parts, patterns), as we see for mixed4a. The repeated reduction in image size across the network results in, by the 5th block of convolutions, input sizes of just 7×7, compared to inputs of 224×224. At this point, each single pixel represents a grid of 32×32 pixels, which is huge. Compared to earlier layers, where an activation meant detecting an edge, here, an activation on the tiny 7×7 grid is one for a very high level feature, such as for birds. The network as a whole progresses from a small number of filters (64 in the case of GoogLeNet), detecting low level features, to a very large number of filters (1024 in the final convolution), each looking for an extremely specific high level feature. After a final pooling layer, which collapses each 7×7 grid into a single pixel, each channel is a feature detector with a receptive field equivalent to the entire image. Compared to what a standard feedforward network would have done, the output here is really nothing short of awe-inspiring. A standard feedforward network would have produced abstract feature vectors, from combinations of every single pixel in the image, requiring intractable amounts of data to train.
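The growth of the receptive field described above can be checked with the standard recurrence (the per-layer kernel sizes and strides below are hypothetical): each layer adds (kernel − 1) times the current "jump" between neighbouring outputs, and each stride multiplies that jump.

def receptive_field(layers):
    # layers: list of (kernel_size, stride) pairs, from first to last layer
    r, jump = 1, 1
    for k, s in layers:
        r += (k - 1) * jump    # how much more of the original input one output sees
        jump *= s              # distance, in input pixels, between neighbouring outputs
    return r

# 3x3 convolutions with an occasional stride-2 reduction: the field widens quickly.
print(receptive_field([(3, 1), (3, 2), (3, 1), (3, 2), (3, 1)]))   # 21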
The CNN, with the priors imposed on it, starts by learning very low level feature detectors and, as its receptive field is expanded across the layers, learns to combine those low-level features into progressively higher level features; not an abstract combination of every single pixel, but rather, a strong visual hierarchy of concepts. By detecting low level features, and using them to detect higher level features as it progresses up its visual hierarchy, it is eventually able to detect entire visual concepts such as faces, birds, trees, etc., and that’s what makes CNNs so powerful, yet efficient, with image data. With the visual hierarchy CNNs build, it is pretty reasonable to assume that their vision systems are similar to humans’. And they’re really great with real world images, but they also fail in ways that strongly suggest their vision systems aren’t entirely human-like. The biggest problem: Adversarial Examples[4], examples which have been specifically modified to fool the model. Adversarial examples would be a non-issue if the only tampered ones that caused the models to fail were ones that even humans would notice. The problem is, the models are susceptible to attacks by samples which have only been tampered with ever so slightly, and would clearly not fool any human. This opens the door for models to silently fail, which can be pretty dangerous for a wide range of applications from self-driving cars to healthcare. Robustness against adversarial attacks is currently a highly active area of research, the subject of many papers and even competitions, and solutions will certainly help CNN architectures become safer and more reliable. CNNs were the models that allowed computer vision to scale from simple applications to powering sophisticated products and services, ranging from face detection in your photo gallery to making better medical diagnoses. They might be the key method in computer vision going forward, or some other new breakthrough might just be around the corner. Regardless, one thing is for sure: they’re nothing short of amazing, at the heart of many present-day innovative applications, and are most certainly worth deeply understanding. Hope you enjoyed this article! If you’d like to stay connected, you’ll find me on Twitter here. If you have a question, comments are welcome! — I find them to be useful to my own learning process as well. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Curious programmer, tinkers around in Python and deep learning. Sharing concepts, ideas, and codes. " Sam Drozdov,2.3K,6,https://uxdesign.cc/an-intro-to-machine-learning-for-designers-5c74ba100257?source=---------6----------------,An intro to Machine Learning for designers – UX Collective,"There is an ongoing debate about whether or not designers should write code. Wherever you fall on this issue, most people would agree that designers should know about code. This helps designers understand constraints and empathize with developers. It also allows designers to think outside of the pixel perfect box when problem solving. For the same reasons, designers should know about machine learning. Put simply, machine learning is a “field of study that gives computers the ability to learn without being explicitly programmed” (Arthur Samuel, 1959).
Even though Arthur Samuel coined the term over fifty years ago, only recently have we seen the most exciting applications of machine learning — digital assistants, autonomous driving, and spam-free email all exist thanks to machine learning. Over the past decade new algorithms, better hardware, and more data have made machine learning an order of magnitude more effective. Only in the past few years have companies like Google, Amazon, and Apple made some of their powerful machine learning tools available to developers. Now is the best time to learn about machine learning and apply it to the products you are building. Since machine learning is now more accessible than ever before, designers today have the opportunity to think about how machine learning can be applied to improve their products. Designers should be able to talk with software developers about what is possible, how to prepare, and what outcomes to expect. Below are a few example applications that should serve as inspiration for these conversations. Machine learning can help create user-centric products by personalizing experiences to the individuals who use them. This allows us to improve things like recommendations, search results, notifications, and ads. Machine learning is effective at finding abnormal content. Credit card companies use this to detect fraud, email providers use this to detect spam, and social media companies use this to detect things like hate speech. Machine learning has enabled computers to begin to understand the things we say (natural-language processing) and the things we see (computer vision). This allows Siri to understand “Siri, set a reminder...”, Google Photos to create albums of your dog, and Facebook to describe a photo to those who are visually impaired. Machine learning is also helpful in understanding how users are grouped. This insight can then be used to look at analytics on a group-by-group basis. From here, different features can be evaluated across groups or be rolled out to only a particular group of users. Machine learning allows us to make predictions about how a user might behave next. Knowing this, we can help prepare for a user’s next action. For example, if we can predict what content a user is planning on viewing, we can preload that content so it’s immediately ready when they want it. Depending on the application and what data is available, there are different types of machine learning algorithms to choose from. I’ll briefly cover each of the following. Supervised learning allows us to make predictions using correctly labeled data. Labeled data is a group of examples that has informative tags or outputs. For example, photos with associated hashtags or a house’s features (e.g. number of bedrooms, location) and its price. By using supervised learning we can fit a line to the labeled data that either splits the data into categories or represents the trend of the data. Using this line we are able to make predictions on new data. For example, we can look at new photos and predict hashtags or look at a new house’s features and predict its price. If the output we are trying to predict is a tag or category, we call it classification. If the output we are trying to predict is a number, we call it regression. Unsupervised learning is helpful when we have unlabeled data or we are not exactly sure what outputs (like an image’s hashtags or a house’s price) are meaningful. Instead we can identify patterns among unlabeled data.
For example, we can identify related items on an e-commerce website or recommend items to someone based on others who made similar purchases. If the pattern is a group we call it a cluster. If the pattern is a rule (e.g. if this, then that) we call it an association. Reinforcement learning doesn’t use an existing data set. Instead we create an agent to collect its own data through trial-and-error in an environment where it is reinforced with a reward. For example, an agent can learn to play Mario by receiving a positive reward for collecting coins and a negative reward for walking into a Goomba. Reinforcement learning is inspired by the way that humans learn and has turned out to be an effective way to teach computers. Specifically, reinforcement learning has been effective at training computers to play games like Go and Dota. Understanding the problem you are trying to solve and the available data will constrain the types of machine learning you can use (e.g. identifying objects in an image with supervised learning requires a labeled data set of images). However, constraints can breed creativity. In some cases, you can set out to collect data that is not already available or consider other approaches. Even though machine learning is a science, it comes with a margin of error. It is important to consider how a user’s experience might be impacted by this margin of error. For example, when an autonomous car fails to recognize its surroundings people can get hurt. Even though machine learning has never been as accessible as it is today, it still requires additional resources (developers and time) to be integrated into a product. This makes it important to think about whether the resulting impact justifies the amount of resources needed to implement. We have barely covered the tip of the iceberg, but hopefully at this point you feel more comfortable thinking about how machine learning can be applied to your product. If you are interested in learning more about machine learning, here are some helpful resources: Thanks for reading. Chat with me on Twitter @samueldrozdov From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Digital Product Designer samueldrozdov.com Curated stories on user experience, usability, and product design. By @fabriciot and @caioab. " Conor Dewey,252,10,https://towardsdatascience.com/the-big-list-of-ds-ml-interview-resources-2db4f651bd63?source=---------7----------------,The Big List of DS/ML Interview Resources – Towards Data Science,"Data science interviews certainly aren’t easy. I know this first hand. I’ve participated in over 50 individual interviews and phone screens while applying for competitive internships over the last calendar year. Through this exciting and somewhat (at times, very) painful process, I’ve accumulated a plethora of useful resources that helped me prepare for and eventually pass data science interviews. Long story short, I’ve decided to sort through all my bookmarks and notes in order to deliver a comprehensive list of data science resources. With this list by your side, you should have more than enough effective tools at your disposal next time you’re prepping for a big interview. It’s worth noting that many of these resources are naturally going to be geared towards entry-level and intern data science positions, as that’s where my expertise lies. Keep that in mind and enjoy! Here are some of the more general resources covering data science as a whole.
Specifically, I highly recommend checking out the first two links regarding 120 Data Science Interview Questions. While the ebook itself is a couple bucks out of pocket, the answers themselves are free on Quora. These were some of my favorite full-coverage questions to practice with right before an interview. Even Data Scientists cannot escape the dreaded algorithmic coding interview. In my experience, this isn’t the case 100% of the time, but chances are you’ll be asked to work through something similar to an easy or medium question on LeetCode or HackerRank. As far as language goes, most companies will let you use whatever language you want. Personally, I did almost all of my algorithmic coding in Java even though the positions were targeted at Python and R programmers. If I had to recommend one thing, it’s to break out your wallet and invest in Cracking the Coding Interview. It absolutely lives up to the hype. I plan to continue using it for years to come. Once the interviewer knows that you can think through problems and code effectively, chances are that you’ll move onto some more data science specific applications. Depending on the interviewer and the position, you will likely be able to choose between Python and R as your tool of choice. Since I’m partial to Python, my resources below will primarily focus on effectively using Pandas and NumPy for data analysis. A data science interview typically isn’t complete without checking your knowledge of SQL. This can be done over the phone or through a live coding question, more likely the latter. I’ve found that the difficulty level of these questions can vary a good bit, ranging from being painfully easy to requiring complex joins and obscure functions. Our good friend statistics is still crucial for Data Scientists and it’s reflected as such in interviews. I had many interviews begin by seeing if I could explain a common statistics or probability concept in simple and concise terms. As positions get more experienced, I suspect this happens less and less as traditional statistical questions begin to take the more practical form of A/B testing scenarios, covered later in the post. You’ll notice that I’ve compiled a few more resources here than in other sections. This isn’t a mistake. Machine learning is a complex field that is a virtual guarantee in data science interviews today. The way that you’ll be tested on this is no guarantee however. It may come up as a conceptual question regarding cross validation or bias-variance tradeoff, or it may take the form of a take home assignment with a dataset attached. I’ve seen both several times, so you’ve got to be prepared for anything. Specifically, check out the Machine Learning Flashcards below; they’re only a couple bucks and were by far my favorite way to quiz myself on any conceptual ML stuff. This won’t be covered in every single data science interview, but it’s certainly not uncommon. Most interviews will have at least one section solely dedicated to product thinking, which often lends itself to A/B testing of some sort. Make sure you’re familiar with the concepts and statistical background necessary in order to be prepared when it comes up. If you have time to spare, I took the free online course by Udacity and overall, I was pretty impressed. Lastly, I wanted to call out all of the posts related to data science jobs and interviewing that I read over and over again to understand, not only how to prepare, but what to expect as well.
If you only check out one section here, this is the one to focus on. This is the layer that sits on top of all the technical skills and application. Don’t overlook it. I hope you find these resources useful during your next interview or job search. I know I did; truthfully, I’m just glad that I saved these links somewhere. Lastly, this post is part of an ongoing initiative to ‘open-source’ my experience applying and interviewing for data science positions, so if you enjoyed this content then be sure to follow me for more stuff like this. If you’re interested in receiving my weekly rundown of interesting articles and resources focused on data science, machine learning, and artificial intelligence, then subscribe to Self Driven Data Science using the form below! If you enjoyed this post, feel free to hit the clap button and if you’re interested in posts to come, make sure to follow me on Medium at the link below — I’ll be writing and shipping every day this month as part of a 30-Day Challenge. This article was originally published on conordewey.com From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Data Scientist & Writer | www.conordewey.com Sharing concepts, ideas, and codes. " Abhishek Parbhakar,937,6,https://towardsdatascience.com/must-know-information-theory-concepts-in-deep-learning-ai-e54a5da9769d?source=---------8----------------,Must know Information Theory concepts in Deep Learning (AI),"Information theory is an important field that has made significant contributions to deep learning and AI, and yet is unknown to many. Information theory can be seen as a sophisticated amalgamation of basic building blocks of deep learning: calculus, probability and statistics. Some examples of concepts in AI that come from Information theory or related fields: In the early 20th century, scientists and engineers were struggling with the question: “How to quantify the information? Is there an analytical way or a mathematical measure that can tell us about the information content?”. For example, consider the two sentences below: It is not difficult to tell that the second sentence gives us more information since it also tells us that Bruno is “big” and “brown” in addition to being a “dog”. How can we quantify the difference between the two sentences? Can we have a mathematical measure that tells us how much more information the second sentence has compared to the first? Scientists were struggling with these questions. Semantics, domain and form of data only added to the complexity of the problem. Then, mathematician and engineer Claude Shannon came up with the idea of “Entropy” that changed our world forever and marked the beginning of the “Digital Information Age”. Shannon proposed that the “semantic aspects of data are irrelevant”, and that the nature and meaning of data don’t matter when it comes to information content. Instead he quantified information in terms of probability distribution and “uncertainty”. Shannon also introduced the term “bit”, which he humbly credited to his colleague John Tukey. This revolutionary idea not only laid the foundation of Information Theory but also opened new avenues for progress in fields like artificial intelligence. Below we discuss four popular, widely used and must-know information-theoretic concepts in deep learning and data sciences: Also called Information Entropy or Shannon Entropy. Entropy gives a measure of uncertainty in an experiment.
Let’s consider two experiments: If we compare the two experiments, in exp 2 it is easier to predict the outcome as compared to exp 1. So, we can say that exp 1 is inherently more uncertain/unpredictable than exp 2. This uncertainty in the experiment is measured using entropy. Therefore, if there is more inherent uncertainty in the experiment then it has higher entropy; the less predictable the experiment, the higher the entropy. The probability distribution of the experiment is used to calculate the entropy. A deterministic experiment, which is completely predictable, say tossing a coin with P(H)=1, has entropy zero. An experiment which is completely random, say rolling a fair die, is least predictable, has maximum uncertainty, and has the highest entropy among such experiments. Another way to look at entropy is as the average information gained when we observe outcomes of a random experiment. The information gained for an outcome of an experiment is defined as a function of the probability of occurrence of that outcome: the rarer the outcome, the more information is gained from observing it. For example, in a deterministic experiment, we always know the outcome, so no new information is gained from observing the outcome and hence the entropy is zero. For a discrete random variable X, with possible outcomes (states) x_1,...,x_n the entropy, in units of bits, is defined as: H(X) = − Σ_i p(x_i) log2 p(x_i), where p(x_i) is the probability of the i^th outcome of X. Cross entropy is used to compare two probability distributions. It tells us how similar two distributions are. Cross entropy between two probability distributions p and q defined over the same set of outcomes is given by: H(p, q) = − Σ_x p(x) log2 q(x). Mutual information is a measure of mutual dependency between two probability distributions or random variables. It tells us how much information about one variable is carried by the other variable. Mutual information captures dependency between random variables and is more generalized than the vanilla correlation coefficient, which captures only the linear relationship. Mutual information of two discrete random variables X and Y is defined as: I(X; Y) = Σ_x Σ_y p(x,y) log2 [ p(x,y) / (p(x) p(y)) ], where p(x,y) is the joint probability distribution of X and Y, and p(x) and p(y) are the marginal probability distributions of X and Y respectively. Also called Relative Entropy. KL divergence is another measure to find similarities between two probability distributions. It measures how much one distribution diverges from the other. Suppose we have some data and the true distribution underlying it is ‘P’. But we don’t know this ‘P’, so we choose a new distribution ‘Q’ to approximate this data. Since ‘Q’ is just an approximation, it won’t be able to approximate the data as well as ‘P’ and some information loss will occur. This information loss is given by KL divergence. KL divergence between ‘P’ and ‘Q’ tells us how much information we lose when we try to approximate data given by ‘P’ with ‘Q’. KL divergence of a probability distribution Q from another probability distribution P is defined as: D_KL(P || Q) = Σ_x P(x) log2 [ P(x) / Q(x) ]. KL divergence is commonly used in the unsupervised machine learning technique Variational Autoencoders. Information Theory was originally formulated by mathematician and electrical engineer Claude Shannon in his seminal paper “A Mathematical Theory of Communication” in 1948. Note: The terms experiments, random variable & AI, machine learning, deep learning, data science have been used loosely above but have technically different meanings.
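For a hands-on feel, here is a small sketch using SciPy on two made-up coin distributions; scipy.stats.entropy returns the Shannon entropy when given one distribution and the KL divergence when given two.

import numpy as np
from scipy.stats import entropy

p = np.array([0.5, 0.5])           # a fair coin: maximally unpredictable
q = np.array([0.9, 0.1])           # a heavily biased coin: far more predictable

print(entropy(p, base=2))          # Shannon entropy of p: 1.0 bit
print(entropy(q, base=2))          # lower entropy, since q is easier to predict
print(entropy(p, q, base=2))       # KL divergence D(P || Q), in bits

# Cross entropy can be recovered as H(p, q) = H(p) + D(P || Q).
print(entropy(p, base=2) + entropy(p, q, base=2))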
In case you liked the article, do follow me Abhishek Parbhakar for more articles related to AI, philosophy and economics. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Finding equilibria among AI, philosophy, and economics. Sharing concepts, ideas, and codes. " Aman Dalmia,2.3K,17,https://blog.usejournal.com/what-i-learned-from-interviewing-at-multiple-ai-companies-and-start-ups-a9620415e4cc?source=---------9----------------,What I learned from interviewing at multiple AI companies and start-ups,"Over the past 8 months, I’ve been interviewing at various companies like Google’s DeepMind, Wadhwani Institute of AI, Microsoft, Ola, Fractal Analytics, and a few others primarily for the roles — Data Scientist, Software Engineer & Research Engineer. In the process, not only did I get an opportunity to interact with many great minds, but also had a peek at myself along with a sense of what people really look for when interviewing someone. I believe that if I’d had this knowledge before, I could have avoided many mistakes and have prepared in a much better manner, which is what the motivation behind this post is, to be able to help someone bag their dream place of work. This post arose from a discussion with one of my juniors on the lack of really fulfilling job opportunities offered through campus placements for people working in AI. Also, when I was preparing, I noticed people using a lot of resources but as per my experience over the past months, I realised that one can do away with a few minimal ones for most roles in AI, all of which I’m going to mention at the end of the post. I begin with How to get noticed a.k.a. the interview. Then I provide a List of companies and start-ups to apply, which is followed by How to ace that interview. Based on whatever experience I’ve had, I add a section on What we should strive to work for. I conclude with Minimal Resources you need for preparation. NOTE: For people who are sitting for campus placements, there are two things I’d like to add. Firstly, most of what I’m going to say (except for the last one maybe) is not going to be relevant to you for placements. But, and this is my second point, as I mentioned before, opportunities on campus are mostly in software engineering roles having no intersection with AI. So, this post is specifically meant for people who want to work on solving interesting problems using AI. Also, I want to add that I haven’t cleared all of these interviews but I guess that’s the essence of failure — it’s the greatest teacher! The things that I mention here may not all be useful but these are things that I did and there’s no way for me to know what might have ended up making my case stronger. To be honest, this step is the most important one. What makes off-campus placements so tough and exhausting is getting the recruiter to actually go through your profile among the plethora of applications that they get. Having a contact inside the organisation place a referral for you would make it quite easy, but, in general, this part can be sub-divided into three keys steps: a) Do the regulatory preparation and do that well: So, with regulatory preparation, I mean —a LinkedIn profile, a Github profile, a portfolio website and a well-polished CV. Firstly, your CV should be really neat and concise. Follow this guide by Udacity for cleaning up your CV — Resume Revamp. It has everything that I intend to say and I’ve been using it as a reference guide myself. 
As for the CV template, some of the in-built formats on Overleaf are quite nice. I personally use deedy-resume. Here’s a preview: As it can be seen, a lot of content can be fit into one page. However, if you really do need more than that, then the format linked above would not work directly. Instead, you can find a modified multi-page format of the same here. The next most important thing to mention is your Github profile. A lot of people underestimate the potential of this, just because unlike LinkedIn, it doesn’t have a “Who Viewed Your Profile” option. People DO go through your Github because that’s the only way they have to validate what you have mentioned in your CV, given that there’s a lot of noise today with people associating all kinds of buzzwords with their profile. Especially for data science, open-source has a big role to play too with majority of the tools, implementations of various algorithms, lists of learning resources, all being open-sourced. I discuss the benefits of getting involved in Open-Source and how one can start from scratch in an earlier post here. The bare minimum for now should be: • Create a Github account if you don’t already have one.• Create a repository for each of the projects that you have done.• Add documentation with clear instructions on how to run the code• Add documentation for each file mentioning the role of each function, the meaning of each parameter, proper formatting (e.g. PEP8 for Python) along with a script to automate the previous step (Optional). Moving on, the third step is what most people lack, which is having a portfolio website demonstrating their experience and personal projects. Making a portfolio indicates that you are really serious about getting into the field and adds a lot of points to the authenticity factor. Also, you generally have space constraints on your CV and tend to miss out on a lot of details. You can use your portfolio to really delve deep into the details if you want to and it’s highly recommended to include some sort of visualisation or demonstration of the project/idea. It’s really easy to create one too as there are a lot of free platforms with drag-and-drop features making the process really painless. I personally use Weebly which is a widely used tool. It’s better to have a reference to begin with. There are a lot of awesome ones out there but I referred to Deshraj Yadav’s personal website to begin with making mine: Finally, a lot of recruiters and start-ups have nowadays started using LinkedIn as their go-to platform for hiring. A lot of good jobs get posted there. Apart from recruiters, the people working at influential positions are quite active there as well. So, if you can grab their attention, you have a good chance of getting in too. Apart from that, maintaining a clean profile is necessary for people to have the will to connect with you. An important part of LinkedIn is their search tool and for you to show up, you must have the relevant keywords interspersed over your profile. It took me a lot of iterations and re-evaluations to finally have a decent one. Also, you should definitely ask people with or under whom you’ve worked with to endorse you for your skills and add a recommendation talking about their experience of working with you. All of this increases your chance of actually getting noticed. I’ll again point towards Udacity’s guide for LinkedIn and Github profiles. All this might seem like a lot, but remember that you don’t need to do it in a single day or even a week or a month. 
It’s a process, it never ends. Setting up everything at first would definitely take some effort but once it’s there and you keep updating it regularly as events around you keep happening, you’ll not only find it to be quite easy, but also you’ll be able to talk about yourself anywhere anytime without having to explicitly prepare for it because you become so aware about yourself. b) Stay authentic: I’ve seen a lot of people do this mistake of presenting themselves as per different job profiles. According to me, it’s always better to first decide what actually interests you, what would you be happy doing and then search for relevant opportunities; not the other way round. The fact that the demand for AI talent surpasses the supply for the same gives you this opportunity. Spending time on your regulatory preparation mentioned above would give you an all-around perspective on yourself and help make this decision easier. Also, you won’t need to prepare answers to various kinds of questions that you get asked during an interview. Most of them would come out naturally as you’d be talking about something you really care about. c) Networking: Once you’re done with a), figured out b), Networking is what will actually help you get there. If you don’t talk to people, you miss out on hearing about many opportunities that you might have a good shot at. It’s important to keep connecting with new people each day, if not physically, then on LinkedIn, so that upon compounding it after many days, you have a large and strong network. Networking is NOT messaging people to place a referral for you. When I was starting off, I did this mistake way too often until I stumbled upon this excellent article by Mark Meloon, where he talks about the importance of building a real connection with people by offering our help first. Another important step in networking is to get your content out. For example, if you’re good at something, blog about it and share that blog on Facebook and LinkedIn. Not only does this help others, it helps you as well. Once you have a good enough network, your visibility increases multi-fold. You never know how one person from your network liking or commenting on your posts, may help you reach out to a much broader audience including people who might be looking for someone of your expertise. I’m presenting this list in alphabetical order to avoid the misinterpretation of any specific preference. However, I do place a “*” on the ones that I’d personally recommend. This recommendation is based on either of the following: mission statement, people, personal interaction or scope of learning. More than 1 “*” is purely based on the 2nd and 3rd factors. Your interview begins the moment you have entered the room and a lot of things can happen between that moment and the time when you’re asked to introduce yourself — your body language and the fact that you’re smiling while greeting them plays a big role, especially when you’re interviewing for a start-up as culture-fit is something that they extremely care about. You need to understand that as much as the interviewer is a stranger to you, you’re a stranger to him/her too. So, they’re probably just as nervous as you are. It’s important to view the interview as more of a conversation between yourself and the interviewer. Both of you are looking for a mutual fit — you are looking for an awesome place to work at and the interviewer is looking for an awesome person (like you) to work with. 
So, make sure that you’re feeling good about yourself and that you take charge of making the initial moments of your conversation pleasant for them. And the easiest way I know how to make that happen is to smile. There are mostly two types of interviews — one, where the interviewer has come with a prepared set of questions and is going to ask you just that, irrespective of your profile, and the second, where the interview is based on your CV. I’ll start with the second one. This kind of interview generally begins with a “Can you tell me a bit about yourself?”. At this point, 2 things are a big NO — talking about your GPA in college and talking about your projects in detail. An ideal statement should be about a minute or two long, should give a good idea of what you have been doing till now, and it’s not restricted to academics. You can talk about your hobbies like reading books, playing sports, meditation, etc — basically, anything that contributes to defining you. The interviewer will then take something that you talk about here as a cue for his next question, and then the technical part of the interview begins. The motive of this kind of interview is to really check whether whatever you have written on your CV is true or not: There would be a lot of questions on what could be done differently or if “X” was used instead of “Y”, what would have happened. At this point, it’s important to know the kind of trade-offs that are usually made during implementation. For example, if the interviewer says that using a more complex model would have given better results, then you might say that you actually had less data to work with and that would have led to overfitting. In one of the interviews, I was given a case-study to work on and it involved designing algorithms for a real-world use case. I’ve noticed that once I’ve been given the green flag to talk about a project, the interviewers really like it when I talk about it in the following flow: Problem > 1 or 2 previous approaches > Our approach > Result > Intuition The other kind of interview is really just to test your basic knowledge. Don’t expect those questions to be too hard. But they would definitely scratch every bit of the basics that you should be having, mainly based around Linear Algebra, Probability, Statistics, Optimisation, Machine Learning and/or Deep Learning. The resources mentioned in the Minimal Resources you need for preparation section should suffice, but make sure that you don’t miss out on a single bit of them. The catch here is the amount of time you take to answer those questions. Since these cover the basics, they expect that you should be answering them almost instantly. So, do your preparation accordingly. Throughout the process, it’s important to be confident and honest about what you know and what you don’t know. If there’s a question that you’re certain you have no idea about, say it upfront rather than making “Aah”, “Um” sounds. If some concept is really important but you are struggling with answering it, the interviewer would generally (depending on how you did in the initial parts) be happy to give you a hint or guide you towards the right solution. It’s a big plus if you manage to pick up on their hints and arrive at the correct solution. Try not to get nervous, and the best way to avoid that is by, again, smiling. Now we come to the conclusion of the interview, where the interviewer would ask you if you have any questions for them.
It’s really easy to think that your interview is done and just say that you have nothing to ask. I know many people who got rejected just because of failing at this last question. As I mentioned before, it’s not only you who is being interviewed. You are also looking for a mutual fit with the company itself. So, it’s quite obvious that if you really want to join a place, you must have many questions regarding the work culture there or what kind of role are they seeing you in. It can be as simple as being curious about the person interviewing you. There’s always something to learn from everything around you and you should make sure that you leave the interviewer with the impression that you’re truly interested in being a part of their team. A final question that I’ve started asking all my interviewers, is for a feedback on what they might want me to improve on. This has helped me tremendously and I still remember every feedback that I’ve gotten which I’ve incorporated into my daily life. That’s it. Based on my experience, if you’re just honest about yourself, are competent, truly care about the company you’re interviewing for and have the right mindset, you should have ticked all the right boxes and should be getting a congratulatory mail soon 😄 We live in an era full of opportunities and that applies to anything that you love. You just need to strive to become the best at it and you will find a way to monetise it. As Gary Vaynerchuk (just follow him already) says: This is a great time to be working in AI and if you’re truly passionate about it, you have so much that you can do with AI. You can empower so many people that have always been under-represented. We keep nagging about the problems surrounding us, but there’s been never such a time where common people like us can actually do something about those problems, rather than just complaining. Jeffrey Hammerbacher (Founder, Cloudera) had famously said: We can do so much with AI than we can ever imagine. There are many extremely challenging problems out there which require incredibly smart people like you to put your head down on and solve. You can make many lives better. Time to let go of what is “cool”, or what would “look good”. THINK and CHOOSE wisely. Any Data Science interview comprises of questions mostly of a subset of the following four categories: Computer Science, Math, Statistics and Machine Learning. If you’re not familiar with the math behind Deep Learning, then you should consider going over my last post for resources to understand them. However, if you are comfortable, I’ve found that the chapters 2, 3 and 4 of the Deep Learning Book are enough to prepare/revise for theoretical questions during such interviews. I’ve been preparing summaries for a few chapters which you can refer to where I’ve tried to even explain a few concepts that I found challenging to understand at first, in case you are not willing to go through the entire chapters. And if you’ve already done a course on probability, you should be comfortable answering a few numerical as well. For stats, covering these topics should be enough. Now, the range of questions here can vary depending on the type of position you are applying for. 
If it’s a more traditional Machine Learning based interview where they want to check your basic knowledge in ML, you can complete any one of the following courses:- Machine Learning by Andrew Ng — CS 229- Machine Learning course by Caltech Professor Yaser Abu-Mostafa Important topics are: Supervised Learning (Classification, Regression, SVM, Decision Tree, Random Forests, Logistic Regression, Multi-layer Perceptron, Parameter Estimation, Bayes’ Decision Rule), Unsupervised Learning (K-means Clustering, Gaussian Mixture Models), Dimensionality Reduction (PCA). Now, if you’re applying for a more advanced position, there’s a high chance that you might be questioned on Deep Learning. In that case, you should be very comfortable with Convolutional Neural Networks (CNNs) and/or (depending upon what you’ve worked on) Recurrent Neural Networks (RNNs) and their variants. And by being comfortable, you must know what is the fundamental idea behind Deep Learning, how CNNs/RNNs actually worked, what kind of architectures have been proposed and what has been the motivation behind those architectural changes. Now, there’s no shortcut for this. Either you understand them or you put enough time to understand them. For CNNs, the recommended resource is Stanford’s CS 231N and CS 224N for RNNs. I found this Neural Network class by Hugo Larochelle to be really enlightening too. Refer this for a quick refresher too. Udacity coming to the aid here too. By now, you should have figured out that Udacity is a really important place for an ML practitioner. There are not a lot of places working on Reinforcement Learning (RL) in India and I too am not experienced in RL as of now. So, that’s one thing to add to this post sometime in the future. Getting placed off-campus is a long journey of self-realisation. I realise that this has been another long post and I’m again extremely grateful to you for valuing my thoughts. I hope that this post finds a way of being useful to you and that it helped you in some way to prepare for your next Data Science interview better. If it did, I request you to really think about what I talk about in What we should strive to work for. I’m very thankful to my friends from IIT Guwahati for their helpful feedback, especially Ameya Godbole, Kothapalli Vignesh and Prabal Jain. A majority of what I mention here, like “viewing an interview as a conversation” and “seeking feedback from our interviewers”, arose from multiple discussions with Prabal who has been advising me constantly on how I can improve my interviewing skills. This story is published in Noteworthy, where thousands come every day to learn about the people & ideas shaping the products we love. Follow our publication to see more product & design stories featured by the Journal team. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. AI Fanatic • Math Lover • Dreamer The official Journal blog " Sophia Arakelyan,7,4,https://buzzrobot.com/from-ballerina-to-ai-researcher-part-i-46fce67f809b?source=---------1----------------,From Ballerina to AI Researcher: Part I – buZZrobot,"Last year, I published the article “From Ballerina to AI writer” where I described how I embraced the technical part of AI without a technical background. But having love and passion for AI, I educated myself and was able to build a neural net classifier and do projects in Deep RL. 
Recently, I’ve become a participant in the OpenAI Scholarship Program (OpenAI is a non-profit that gathers top AI researchers to ensure the safety of AI to benefit humanity). Every week for the next three months I’ll publish blog posts sharing my story of transformation from a person dedicated to 15 years of professional dancing and then writing about tech and AI to actually conducting AI research. Finding your true calling — the key component of happiness My primary goal with the series of blog posts “From Ballerina to AI researcher” is to show that it’s never too late to embrace a new field, start over again, and find your true calling. Finding work you love is one of the most important components of happiness — something that you do every day and invest your time in to grow; that makes you feel fulfilled, gives you energy; something that is a refuge for your soul. Great things never come easy. We have to be able to fight to make great things happen. But you can’t fight for something you don’t believe in, especially if you don’t feel like it’s really important for you and humanity. Finding that thing is a real challenge. I feel lucky that I found my true passion — AI. To me, the technology itself and the AI community — researchers, scientists, people who dedicate their lives to building the most powerful technology of all time with the mission to benefit humanity and make it safe for us — is a great source of energy. The structure of the blog post series Today, I’m giving an overall intro of what I’m going to cover in my “From Ballerina to AI Researcher” series. I’ll dedicate the sequence of blog posts during the OpenAI Scholars program to several aspects of AI technology. I’ll cover those areas that concern me a lot, like AI and automation, bias in ML, dual use of AI, etc. Also, the structure of my posts will include some insights on what I’m working on right now (the final technical project will be available by the end of August and will be open-sourced). I feel very lucky to have Alec Radford, an experienced researcher, as my mentor who guides me in the NLP and NLU research area. First week of my scholarship I’ve dedicated my first week within the program to learning about the Transformer architecture, which performs much better on sequential data compared to RNNs and LSTMs. The novelty of the architecture is its multi-head self-attention mechanism. According to the original paper, experiments with the transformer on two machine translation tasks showed the model to be superior in quality while being more parallelizable and requiring significantly less time to train. More concretely, when RNNs or CNNs take a sequence as an input, they go through the sentence word by word, which is a huge obstacle toward parallelization of the process (it takes more time to train models). Moreover, if sequences are too long, the model tends to forget the content of distant positions in the sequence or mixes it with the following positions’ content — this is the fundamental problem in dealing with sequential data. The transformer architecture reduces this problem thanks to the multi-head self-attention mechanism. I dug into RNN and LSTM models to catch up on the background information. To that end, I’ve found Andrew Ng’s course on Deep Learning along with the papers extremely useful.
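The heart of that self-attention mechanism can be sketched in a few lines of NumPy (toy sizes and random weights, purely for illustration): every position in the sequence attends to every other position in one matrix multiplication, which is what removes the word-by-word bottleneck described above; a multi-head layer simply runs several such heads in parallel and concatenates their outputs.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project the sequence into queries, keys and values, score every pair of
    # positions, and return an attention-weighted mix of the values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

seq_len, d_model, d_head = 6, 16, 8
X = np.random.rand(seq_len, d_model)                 # one embedded toy sentence
Wq, Wk, Wv = (np.random.rand(d_model, d_head) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # (6, 8): one output per position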
To develop insights regarding the transformer, I went through the following resources: the video by Łukasz Kaiser from Google Brain, one of the model’s creators, and a blog post with very well elaborated content on the model. I also ran the tensor2tensor code and the PyTorch implementation from this paper to “feel” the difference between the TF and PyTorch frameworks. Overall, the goal within the program is to develop deep comprehension of the NLU research area: its challenges and current state of the art; and to formulate and test hypotheses that tackle the most important problems of the field. I’ll share more on what I’m working on in my future articles. Meanwhile, if you have questions/feedback, please leave a comment. If you want to learn more about me, here are my Facebook and Twitter accounts. I’d appreciate your feedback on my posts, such as which topics are most interesting to you and deserve further coverage. Former ballerina turned AI writer. Fan of sci-fi, astrophysics. Consciousness is the key. Founder of buZZrobot.com The publication aims to cover practical aspects of AI technology, use cases along with interviews with notable people in the AI field. " Dr. GP Pulipaka,2,6,https://medium.com/@gp_pulipaka/3-ways-to-apply-latent-semantic-analysis-on-large-corpus-text-on-macos-terminal-jupyterlab-colab-7b4dc3e1622?source=---------9----------------,"3 Ways to Apply Latent Semantic Analysis on Large-Corpus Text on macOS Terminal, JupyterLab, and...","Latent semantic analysis works on large-scale datasets to generate representations that surface insights through natural language processing. There are different approaches to performing latent semantic analysis at multiple levels, such as the document level, phrase level, and sentence level. Broadly, semantic analysis can be split into lexical semantics and the study of how individual words combine into sentences and paragraphs. Lexical semantics classifies and decomposes lexical items. Lexical semantic structures are applied in different contexts to identify the differences and similarities between words. A generic term in a paragraph or a sentence is a hypernym, and hyponymy captures the relationship between that general term and its more specific instances, the hyponyms. Homonyms share similar spelling, form, and syntax but have different meanings, and those meanings are not related to each other. “Book” is an example of a homonym: it can refer to something someone reads or to the act of making a reservation, with the same spelling, form, and syntax but a different definition. Polysemy is another phenomenon, in which a single word is associated with multiple related senses and distinct meanings. The word polysemy comes from Greek and means “many signs.” Python provides the NLTK library to perform tokenization, chopping larger chunks of text into phrases or meaningful strings. Processing text through tokenization produces tokens. Word lemmatization converts words from their current inflected form into the base form. Latent semantic analysis Applying latent semantic analysis to large datasets of text and documents represents the contextual meaning through mathematical and statistical computation methods on a large corpus of text. Many times, latent semantic analysis has outperformed human scores on subject-matter tests conducted by humans. 
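(As a rough illustration of the pipeline sketched above: tokenization and lemmatization with NLTK, then a reduced semantic space via SVD. The three-sentence corpus below is a placeholder, not the article's dataset, and scikit-learn's TruncatedSVD stands in for the SVD step.)

import nltk
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

nltk.download('punkt')     # tokenizer models, as mentioned in the article
nltk.download('wordnet')   # lemmatizer resource

docs = ["I booked a hotel room for the conference.",
        "She read a book about linear algebra.",
        "The library ordered new books on semantics."]

lemmatizer = WordNetLemmatizer()
def tokenize(text):
    # Keep alphabetic tokens only and reduce each to its base form.
    return [lemmatizer.lemmatize(tok.lower()) for tok in nltk.word_tokenize(text) if tok.isalpha()]

tfidf = TfidfVectorizer(tokenizer=tokenize)   # rows: documents, columns: terms
X = tfidf.fit_transform(docs)
lsa = TruncatedSVD(n_components=2)            # low-rank approximation of the term-document matrix
doc_vectors = lsa.fit_transform(X)
print(doc_vectors)                            # each document as a point in the 2-D semantic space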
The accuracy of latent semantic analysis is high, as it reads through machine-readable documents and text at web scale. Latent semantic analysis is a technique that applies singular value decomposition and principal component analysis (PCA). The documents can be represented as a Z x Y matrix A, where each row of the matrix represents a document in the collection. For a typical large-corpus text collection, the matrix A can have hundreds of thousands of rows and columns. Applying singular value decomposition is one instance of a broader set of operations dubbed matrix decomposition. Natural language processing in Python with the NLTK library applies a low-rank approximation to the term-document matrix. The low-rank approximation then aids in indexing and retrieving documents, a technique known as latent semantic indexing, by clustering the words in the documents. Brief overview of linear algebra The Z x Y matrix A contains real-valued, non-negative entries for the term-document matrix. The rank of the matrix is the number of linearly independent columns or rows in the matrix, so the rank of A ≤ min{Z, Y}. A square c x c matrix is a diagonal matrix when its off-diagonal entries are zero; if all c diagonal entries are one, it is the identity matrix of dimension c, written Ic. For a square Z x Z matrix A and a vector k that is not all zeroes, the values λ satisfying Ak = λk are the eigenvalues of A, and the corresponding vectors k are its eigenvectors. Matrix decomposition applies to such a square matrix by factoring it into a product of matrices built from its eigenvectors. This allows the dimensionality of the word representations to be reduced from many dimensions down to two dimensions to view on a plot. Dimensionality reduction techniques such as principal component analysis and singular value decomposition hold critical relevance in natural language processing. The Zipfian nature of word frequencies in a document makes it difficult to determine the similarity of words from raw counts alone. Hence, eigendecomposition arises as a by-product of singular value decomposition, since the term-document input is highly asymmetrical. Latent semantic analysis is a particular technique in semantic space for parsing through a document and identifying words with polysemy using the NLTK library. Resources such as punkt and wordnet have to be downloaded from NLTK. Deep Learning at scale with Google Colab notebooks Training machine learning or deep learning models on CPUs can take hours and can be pretty expensive in terms of the time and energy of the computing resources involved. Google built the Colab Notebooks environment for research and development purposes. It runs entirely in the cloud without requiring any additional hardware or software setup for each machine. It is essentially the equivalent of a Jupyter notebook, and it lets data scientists share Colab notebooks by storing them on Google Drive, just like any other Google Sheets or documents, in a collaborative environment. There are no additional costs associated with enabling a GPU runtime for acceleration. There are some challenges in uploading data into Colab, unlike a Jupyter notebook, which can access data directly from the local directory of the machine. In Colab, there are multiple options: files can be uploaded from the local file system, or a drive can be mounted to load the data through the Drive FUSE wrapper. 
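(A minimal sketch of the drive-mounting option just mentioned, using the google.colab drive helper rather than the FUSE wrapper; the file path is a hypothetical placeholder. The original walkthrough of the token-based FUSE route continues right after this sketch.)

from google.colab import drive

drive.mount('/content/gdrive')   # opens an authentication prompt in the notebook

# Once mounted, files stored on Google Drive read like local files.
corpus_path = '/content/gdrive/My Drive/corpus.txt'   # hypothetical location
with open(corpus_path) as f:
    text = f.read()
print(len(text))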
Once this step is complete, it shows the following log without errors: The next step is generating the authentication tokens to authenticate the Google credentials for Drive and Colab. If it shows successful retrieval of the access token, then Colab is all set. At this stage the drive is not mounted yet, so it will show false when accessing the contents of the text file. Once the drive is mounted, Colab has access to the datasets from Google Drive. Once the files are accessible, the Python code can be executed just as it would be in a Jupyter environment, and the Colab notebook displays the results in a similar way. PyCharm IDE The program can be compiled and run in the PyCharm IDE, or it can be executed from the macOS Terminal. Results from macOS Terminal Jupyter Notebook on a standalone machine Jupyter Notebook gives a similar output when running the latent semantic analysis on the local machine: References Gorrell, G. (2006). Generalized Hebbian Algorithm for Incremental Singular Value Decomposition in Natural Language Processing. Retrieved from https://www.aclweb.org/anthology/E06-1013 Hardeniya, N. (2016). Natural Language Processing: Python and NLTK. Birmingham, England: Packt Publishing. Landauer, T. K., Foltz, P. W., Laham, D., & University of Colorado at Boulder (1998). An Introduction to Latent Semantic Analysis. Retrieved from http://lsa.colorado.edu/papers/dp1.LSAintro.pdf Stackoverflow (2018). Mounting Google Drive on Google Colab. Retrieved from https://stackoverflow.com/questions/50168315/mounting-google-drive-on-google-colab Stanford University (2009). Matrix decompositions and latent semantic indexing. Retrieved from https://nlp.stanford.edu/IR-book/html/htmledition/matrix-decompositions-and-latent-semantic-indexing-1.html Ganapathi Pulipaka | Founder and CEO @deepsingularity | Bestselling Author | Big data | IoT | Startups | SAP | MachineLearning | DeepLearning | DataScience " Scott Santens,7.3K,14,https://medium.com/basic-income/deep-learning-is-going-to-teach-us-all-the-lesson-of-our-lives-jobs-are-for-machines-7c6442e37a49?source=tag_archive---------0----------------,Deep Learning Is Going to Teach Us All the Lesson of Our Lives: Jobs Are for Machines,"(An alternate version of this article was originally published in the Boston Globe) On December 2nd, 1942, a team of scientists led by Enrico Fermi came back from lunch and watched as humanity created the first self-sustaining nuclear reaction inside a pile of bricks and wood underneath a football field at the University of Chicago. Known to history as Chicago Pile-1, it was celebrated in silence with a single bottle of Chianti, for those who were there understood exactly what it meant for humankind, without any need for words. 
The language is a new class of machine learning known as deep learning, and the “whispered word” was a computer’s use of it to seemingly out of nowhere defeat three-time European Go champion Fan Hui, not once but five times in a row without defeat. Many who read this news, considered that as impressive, but in no way comparable to a match against Lee Se-dol instead, who many consider to be one of the world’s best living Go players, if not the best. Imagining such a grand duel of man versus machine, China’s top Go player predicted that Lee would not lose a single game, and Lee himself confidently expected to possibly lose one at the most. What actually ended up happening when they faced off? Lee went on to lose all but one of their match’s five games. An AI named AlphaGo is now a better Go player than any human and has been granted the “divine” rank of 9 dan. In other words, its level of play borders on godlike. Go has officially fallen to machine, just as Jeopardy did before it to Watson, and chess before that to Deep Blue. So, what is Go? Very simply, think of Go as Super Ultra Mega Chess. This may still sound like a small accomplishment, another feather in the cap of machines as they continue to prove themselves superior in the fun games we play, but it is no small accomplishment, and what’s happening is no game. AlphaGo’s historic victory is a clear signal that we’ve gone from linear to parabolic. Advances in technology are now so visibly exponential in nature that we can expect to see a lot more milestones being crossed long before we would otherwise expect. These exponential advances, most notably in forms of artificial intelligence limited to specific tasks, we are entirely unprepared for as long as we continue to insist upon employment as our primary source of income. This may all sound like exaggeration, so let’s take a few decade steps back, and look at what computer technology has been actively doing to human employment so far: Let the above chart sink in. Do not be fooled into thinking this conversation about the automation of labor is set in the future. It’s already here. Computer technology is already eating jobs and has been since 1990. All work can be divided into four types: routine and nonroutine, cognitive and manual. Routine work is the same stuff day in and day out, while nonroutine work varies. Within these two varieties, is the work that requires mostly our brains (cognitive) and the work that requires mostly our bodies (manual). Where once all four types saw growth, the stuff that is routine stagnated back in 1990. This happened because routine labor is easiest for technology to shoulder. Rules can be written for work that doesn’t change, and that work can be better handled by machines. Distressingly, it’s exactly routine work that once formed the basis of the American middle class. It’s routine manual work that Henry Ford transformed by paying people middle class wages to perform, and it’s routine cognitive work that once filled US office spaces. Such jobs are now increasingly unavailable, leaving only two kinds of jobs with rosy outlooks: jobs that require so little thought, we pay people little to do them, and jobs that require so much thought, we pay people well to do them. If we can now imagine our economy as a plane with four engines, where it can still fly on only two of them as long as they both keep roaring, we can avoid concerning ourselves with crashing. But what happens when our two remaining engines also fail? 
That’s what the advancing fields of robotics and AI represent to those final two engines, because for the first time, we are successfully teaching machines to learn. I’m a writer at heart, but my educational background happens to be in psychology and physics. I’m fascinated by both of them so my undergraduate focus ended up being in the physics of the human brain, otherwise known as cognitive neuroscience. I think once you start to look into how the human brain works, how our mass of interconnected neurons somehow results in what we describe as the mind, everything changes. At least it did for me. As a quick primer in the way our brains function, they’re a giant network of interconnected cells. Some of these connections are short, and some are long. Some cells are only connected to one other, and some are connected to many. Electrical signals then pass through these connections, at various rates, and subsequent neural firings happen in turn. It’s all kind of like falling dominoes, but far faster, larger, and more complex. The result amazingly is us, and what we’ve been learning about how we work, we’ve now begun applying to the way machines work. One of these applications is the creation of deep neural networks - kind of like pared-down virtual brains. They provide an avenue to machine learning that’s made incredible leaps that were previously thought to be much further down the road, if even possible at all. How? It’s not just the obvious growing capability of our computers and our expanding knowledge in the neurosciences, but the vastly growing expanse of our collective data, aka big data. Big data isn’t just some buzzword. It’s information, and when it comes to information, we’re creating more and more of it every day. In fact we’re creating so much that a 2013 report by SINTEF estimated that 90% of all information in the world had been created in the prior two years. This incredible rate of data creation is even doubling every 1.5 years thanks to the Internet, where in 2015 every minute we were liking 4.2 million things on Facebook, uploading 300 hours of video to YouTube, and sending 350,000 tweets. Everything we do is generating data like never before, and lots of data is exactly what machines need in order to learn to learn. Why? Imagine programming a computer to recognize a chair. You’d need to enter a ton of instructions, and the result would still be a program detecting chairs that aren’t, and not detecting chairs that are. So how did we learn to detect chairs? Our parents pointed at a chair and said, “chair.” Then we thought we had that whole chair thing all figured out, so we pointed at a table and said “chair”, which is when our parents told us that was “table.” This is called reinforcement learning. The label “chair” gets connected to every chair we see, such that certain neural pathways are weighted and others aren’t. For “chair” to fire in our brains, what we perceive has to be close enough to our previous chair encounters. Essentially, our lives are big data filtered through our brains. The power of deep learning is that it’s a way of using massive amounts of data to get machines to operate more like we do without giving them explicit instructions. Instead of describing “chairness” to a computer, we instead just plug it into the Internet and feed it millions of pictures of chairs. It can then have a general idea of “chairness.” Next we test it with even more images. Where it’s wrong, we correct it, which further improves its “chairness” detection. 
Repetition of this process results in a computer that knows what a chair is when it sees it, for the most part as well as we can. The important difference though is that unlike us, it can then sort through millions of images within a matter of seconds. This combination of deep learning and big data has resulted in astounding accomplishments just in the past year. Aside from the incredible accomplishment of AlphaGo, Google’s DeepMind AI learned how to read and comprehend what it read through hundreds of thousands of annotated news articles. DeepMind also taught itself to play dozens of Atari 2600 video games better than humans, just by looking at the screen and its score, and playing games repeatedly. An AI named Giraffe taught itself how to play chess in a similar manner using a dataset of 175 million chess positions, attaining International Master level status in just 72 hours by repeatedly playing itself. In 2015, an AI even passed a visual Turing test by learning to learn in a way that enabled it to be shown an unknown character in a fictional alphabet, then instantly reproduce that letter in a way that was entirely indistinguishable from a human given the same task. These are all major milestones in AI. However, despite all these milestones, when asked to estimate when a computer would defeat a prominent Go player, the answer even just months prior to the announcement by Google of AlphaGo’s victory, was by experts essentially, “Maybe in another ten years.” A decade was considered a fair guess because Go is a game so complex I’ll just let Ken Jennings of Jeopardy fame, another former champion human defeated by AI, describe it: Such confounding complexity makes impossible any brute-force approach to scan every possible move to determine the next best move. But deep neural networks get around that barrier in the same way our own minds do, by learning to estimate what feels like the best move. We do this through observation and practice, and so did AlphaGo, by analyzing millions of professional games and playing itself millions of times. So the answer to when the game of Go would fall to machines wasn’t even close to ten years. The correct answer ended up being, “Any time now.” Any time now. That’s the new go-to response in the 21st century for any question involving something new machines can do better than humans, and we need to try to wrap our heads around it. We need to recognize what it means for exponential technological change to be entering the labor market space for nonroutine jobs for the first time ever. Machines that can learn mean nothing humans do as a job is uniquely safe anymore. From hamburgers to healthcare, machines can be created to successfully perform such tasks with no need or less need for humans, and at lower costs than humans. Amelia is just one AI out there currently being beta-tested in companies right now. Created by IPsoft over the past 16 years, she’s learned how to perform the work of call center employees. She can learn in seconds what takes us months, and she can do it in 20 languages. Because she’s able to learn, she’s able to do more over time. In one company putting her through the paces, she successfully handled one of every ten calls in the first week, and by the end of the second month, she could resolve six of ten calls. Because of this, it’s been estimated that she can put 250 million people out of a job, worldwide. Viv is an AI coming soon from the creators of Siri who’ll be our own personal assistant. 
She’ll perform tasks online for us, and even function as a Facebook News Feed on steroids by suggesting we consume the media she’ll know we’ll like best. In doing all of this for us, we’ll see far fewer ads, and that means the entire advertising industry — that industry the entire Internet is built upon — stands to be hugely disrupted. A world with Amelia and Viv — and the countless other AI counterparts coming online soon — in combination with robots like Boston Dynamics’ next generation Atlas portends, is a world where machines can do all four types of jobs and that means serious societal reconsiderations. If a machine can do a job instead of a human, should any human be forced at the threat of destitution to perform that job? Should income itself remain coupled to employment, such that having a job is the only way to obtain income, when jobs for many are entirely unobtainable? If machines are performing an increasing percentage of our jobs for us, and not getting paid to do them, where does that money go instead? And what does it no longer buy? Is it even possible that many of the jobs we’re creating don’t need to exist at all, and only do because of the incomes they provide? These are questions we need to start asking, and fast. Fortunately, people are beginning to ask these questions, and there’s an answer that’s building up momentum. The idea is to put machines to work for us, but empower ourselves to seek out the forms of remaining work we as humans find most valuable, by simply providing everyone a monthly paycheck independent of work. This paycheck would be granted to all citizens unconditionally, and its name is universal basic income. By adopting UBI, aside from immunizing against the negative effects of automation, we’d also be decreasing the risks inherent in entrepreneurship, and the sizes of bureaucracies necessary to boost incomes. It’s for these reasons, it has cross-partisan support, and is even now in the beginning stages of possible implementation in countries like Switzerland, Finland, the Netherlands, and Canada. The future is a place of accelerating changes. It seems unwise to continue looking at the future as if it were the past, where just because new jobs have historically appeared, they always will. The WEF started 2016 off by estimating the creation by 2020 of 2 million new jobs alongside the elimination of 7 million. That’s a net loss, not a net gain of 5 million jobs. In a frequently cited paper, an Oxford study estimated the automation of about half of all existing jobs by 2033. Meanwhile self-driving vehicles, again thanks to machine learning, have the capability of drastically impacting all economies — especially the US economy as I wrote last year about automating truck driving — by eliminating millions of jobs within a short span of time. And now even the White House, in a stunning report to Congress, has put the probability at 83 percent that a worker making less than $20 an hour in 2010 will eventually lose their job to a machine. Even workers making as much as $40 an hour face odds of 31 percent. To ignore odds like these is tantamount to our now laughable “duck and cover” strategies for avoiding nuclear blasts during the Cold War. All of this is why it’s those most knowledgeable in the AI field who are now actively sounding the alarm for basic income. 
During a panel discussion at the end of 2015 at Singularity University, prominent data scientist Jeremy Howard asked “Do you want half of people to starve because they literally can’t add economic value, or not?” before going on to suggest, ”If the answer is not, then the smartest way to distribute the wealth is by implementing a universal basic income.” AI pioneer Chris Eliasmith, director of the Centre for Theoretical Neuroscience, warned about the immediate impacts of AI on society in an interview with Futurism, “AI is already having a big impact on our economies... My suspicion is that more countries will have to follow Finland’s lead in exploring basic income guarantees for people.” Moshe Vardi expressed the same sentiment after speaking at the 2016 annual meeting of the American Association for the Advancement of Science about the emergence of intelligent machines, “we need to rethink the very basic structure of our economic system... we may have to consider instituting a basic income guarantee.” Even Baidu’s chief scientist and founder of Google’s “Google Brain” deep learning project, Andrew Ng, during an onstage interview at this year’s Deep Learning Summit, expressed the shared notion that basic income must be “seriously considered” by governments, citing “a high chance that AI will create massive labor displacement.” When those building the tools begin warning about the implications of their use, shouldn’t those wishing to use those tools listen with the utmost attention, especially when it’s the very livelihoods of millions of people at stake? If not then, what about when Nobel prize winning economists begin agreeing with them in increasing numbers? No nation is yet ready for the changes ahead. High labor force non-participation leads to social instability, and a lack of consumers within consumer economies leads to economic instability. So let’s ask ourselves, what’s the purpose of the technologies we’re creating? What’s the purpose of a car that can drive for us, or artificial intelligence that can shoulder 60% of our workload? Is it to allow us to work more hours for even less pay? Or is it to enable us to choose how we work, and to decline any pay/hours we deem insufficient because we’re already earning the incomes that machines aren’t? What’s the big lesson to learn, in a century when machines can learn? I offer it’s that jobs are for machines, and life is for people. This article was written on a crowdfunded monthly basic income. If you found value in this article, you can support it along with all my advocacy for basic income with a monthly patron pledge of $1+. Special thanks to Arjun Banker, Steven Grimm, Larry Cohen, Topher Hunt, Aaron Marcus-Kubitza, Andrew Stern, Keith Davis, Albert Wenger, Richard Just, Chris Smothers, Mark Witham, David Ihnen, Danielle Texeira, Katie Doemland, Paul Wicks, Jan Smole, Joe Esposito, Jack Wagner, Joe Ballou, Stuart Matthews, Natalie Foster, Chris McCoy, Michael Honey, Gary Aranovich, Kai Wong, John David Hodge, Louise Whitmore, Dan O’Sullivan, Harish Venkatesan, Michiel Dral, Gerald Huff, Susanne Berg, Cameron Ottens, Kian Alavi, Gray Scott, Kirk Israel, Robert Solovay, Jeff Schulman, Andrew Henderson, Robert F. 
Greene, Martin Jordo, Victor Lau, Shane Gordon, Paolo Narciso, Johan Grahn, Tony DeStefano, Erhan Altay, Bryan Herdliska, Stephane Boisvert, Dave Shelton, Rise & Shine PAC, Luke Sampson, Lee Irving, Kris Roadruck, Amy Shaffer, Thomas Welsh, Olli Niinimäki, Casey Young, Elizabeth Balcar, Masud Shah, Allen Bauer, all my other funders for their support, and my amazing partner, Katie Smith. Scott Santens writes about basic income on his blog. You can also follow him here on Medium, on Twitter, on Facebook, or on Reddit where he is a moderator for the /r/BasicIncome community of over 30,000 subscribers. If you feel others would appreciate this article, please click the green heart. New Orleans writer focused on the potential for human civilization to get its act together in the 21st century. Moderator of /r/BasicIncome on Reddit. Articles discussing the concept of the universal basic income " Adam Geitgey,35K,15,https://medium.com/@ageitgey/machine-learning-is-fun-80ea3ec3c471?source=tag_archive---------1----------------,Machine Learning is Fun! – Adam Geitgey – Medium,"Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 日本語, Português, Português (alternate), Türkçe, Français, 한국어, العَرَبِيَّة‎‎, Español (México), Español (España), Polski, Italiano, 普通话, Русский, 한국어, Tiếng Việt or فارسی. Bigger update: The content of this article is now available as a full-length video course that walks you through every step of the code. You can take the course for free (and access everything else on Lynda.com free for 30 days) if you sign up with this link. Have you heard people talking about machine learning but only have a fuzzy idea of what that means? Are you tired of nodding your way through conversations with co-workers? Let’s change that! This guide is for anyone who is curious about machine learning but has no idea where to start. I imagine there are a lot of people who tried reading the Wikipedia article, got frustrated and gave up wishing someone would just give them a high-level explanation. That’s what this is. The goal is to be accessible to anyone — which means that there’s a lot of generalizations. But who cares? If this gets anyone more interested in ML, then mission accomplished. Machine learning is the idea that there are generic algorithms that can tell you something interesting about a set of data without you having to write any custom code specific to the problem. Instead of writing code, you feed data to the generic algorithm and it builds its own logic based on the data. For example, one kind of algorithm is a classification algorithm. It can put data into different groups. The same classification algorithm used to recognize handwritten numbers could also be used to classify emails into spam and not-spam without changing a line of code. It’s the same algorithm but it’s fed different training data so it comes up with different classification logic. “Machine learning” is an umbrella term covering lots of these kinds of generic algorithms. You can think of machine learning algorithms as falling into one of two main categories — supervised learning and unsupervised learning. The difference is simple, but really important. Let’s say you are a real estate agent. Your business is growing, so you hire a bunch of new trainee agents to help you out. 
But there’s a problem — you can glance at a house and have a pretty good idea of what a house is worth, but your trainees don’t have your experience so they don’t know how to price their houses. To help your trainees (and maybe free yourself up for a vacation), you decide to write a little app that can estimate the value of a house in your area based on its size, neighborhood, etc., and what similar houses have sold for. So you write down every time someone sells a house in your city for 3 months. For each house, you write down a bunch of details — number of bedrooms, size in square feet, neighborhood, etc. But most importantly, you write down the final sale price: Using that training data, we want to create a program that can estimate how much any other house in your area is worth: This is called supervised learning. You knew how much each house sold for, so in other words, you knew the answer to the problem and could work backwards from there to figure out the logic. To build your app, you feed your training data about each house into your machine learning algorithm. The algorithm is trying to figure out what kind of math needs to be done to make the numbers work out. This is kind of like having the answer key to a math test with all the arithmetic symbols erased: From this, can you figure out what kind of math problems were on the test? You know you are supposed to “do something” with the numbers on the left to get each answer on the right. In supervised learning, you are letting the computer work out that relationship for you. And once you know what math was required to solve this specific set of problems, you could answer any other problem of the same type! Let’s go back to our original example with the real estate agent. What if you didn’t know the sale price for each house? Even if all you know is the size, location, etc of each house, it turns out you can still do some really cool stuff. This is called unsupervised learning. This is kind of like someone giving you a list of numbers on a sheet of paper and saying “I don’t really know what these numbers mean but maybe you can figure out if there is a pattern or grouping or something — good luck!” So what could you do with this data? For starters, you could have an algorithm that automatically identified different market segments in your data. Maybe you’d find out that home buyers in the neighborhood near the local college really like small houses with lots of bedrooms, but home buyers in the suburbs prefer 3-bedroom houses with lots of square footage. Knowing about these different kinds of customers could help direct your marketing efforts. Another cool thing you could do is automatically identify any outlier houses that were way different than everything else. Maybe those outlier houses are giant mansions and you can focus your best sales people on those areas because they have bigger commissions. Supervised learning is what we’ll focus on for the rest of this post, but that’s not because unsupervised learning is any less useful or interesting. In fact, unsupervised learning is becoming increasingly important as the algorithms get better because it can be used without having to label the data with the correct answer. Side note: There are lots of other types of machine learning algorithms. But this is a pretty good place to start. As a human, your brain can approach most any situation and learn how to deal with that situation without any explicit instructions. 
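(Looping back to the house-pricing app described above, here is a minimal sketch of that supervised setup, with a handful of made-up sale records and scikit-learn's LinearRegression standing in for "the generic algorithm"; the column choices and prices are illustrative assumptions.)

from sklearn.linear_model import LinearRegression

# Training data: [bedrooms, square feet, neighborhood id], plus the known sale price.
X_train = [[3, 2000, 0],
           [2,  800, 0],
           [2,  850, 1],
           [1,  550, 1],
           [4, 2000, 2]]
y_train = [250000, 300000, 150000, 78000, 150000]   # made-up prices

model = LinearRegression()
model.fit(X_train, y_train)          # "work backwards" from the known answers

# Estimate a house the trainees have never priced before.
print(model.predict([[3, 2000, 1]]))

An unsupervised twist on the same data would simply drop the prices and hand the feature rows to a clustering algorithm to look for market segments.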
If you sell houses for a long time, you will instinctively have a “feel” for the right price for a house, the best way to market that house, the kind of client who would be interested, etc. The goal of Strong AI research is to be able to replicate this ability with computers. But current machine learning algorithms aren’t that good yet — they only work when focused on a very specific, limited problem. Maybe a better definition for “learning” in this case is “figuring out an equation to solve a specific problem based on some example data”. Unfortunately “Machine Figuring out an equation to solve a specific problem based on some example data” isn’t really a great name. So we ended up with “Machine Learning” instead. Of course, if you are reading this 50 years in the future and we’ve figured out the algorithm for Strong AI, then this whole post will all seem a little quaint. Maybe stop reading and go tell your robot servant to go make you a sandwich, future human. So, how would you write the program to estimate the value of a house like in our example above? Think about it for a second before you read further. If you didn’t know anything about machine learning, you’d probably try to write out some basic rules for estimating the price of a house like this: If you fiddle with this for hours and hours, you might end up with something that sort of works. But your program will never be perfect and it will be hard to maintain as prices change. Wouldn’t it be better if the computer could just figure out how to implement this function for you? Who cares what exactly the function does as long as it returns the correct number: One way to think about this problem is that the price is a delicious stew and the ingredients are the number of bedrooms, the square footage and the neighborhood. If you could just figure out how much each ingredient impacts the final price, maybe there’s an exact ratio of ingredients to stir in to make the final price. That would reduce your original function (with all those crazy if’s and else’s) down to something really simple like this: Notice the magic numbers in bold — .841231951398213, 1231.1231231, 2.3242341421, and 201.23432095. These are our weights. If we could just figure out the perfect weights to use that work for every house, our function could predict house prices! A dumb way to figure out the best weights would be something like this: Start with each weight set to 1.0: Run every house you know about through your function and see how far off the function is at guessing the correct price for each house: For example, if the first house really sold for $250,000, but your function guessed it sold for $178,000, you are off by $72,000 for that single house. Now add up the squared amount you are off for each house you have in your data set. Let’s say that you had 500 home sales in your data set and the square of how much your function was off for each house was a grand total of $86,123,373. That’s how “wrong” your function currently is. Now, take that sum total and divide it by 500 to get an average of how far off you are for each house. Call this average error amount the cost of your function. If you could get this cost to be zero by playing with the weights, your function would be perfect. It would mean that in every case, your function perfectly guessed the price of the house based on the input data. So that’s our goal — get this cost to be as low as possible by trying different weights. Repeat Step 2 over and over with every single possible combination of weights. 
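(A rough sketch, in code, of the estimate-and-score loop just described; the weights, listings, and prices are made-up numbers, and the step list picks up again right after this sketch.)

def estimate_price(bedrooms, sqft, neighborhood, weights):
    # Step 1: a simple weighted mix of the "ingredients".
    w_bed, w_sqft, w_hood, w_base = weights
    return bedrooms * w_bed + sqft * w_sqft + neighborhood * w_hood + w_base

def cost(houses, sold_prices, weights):
    # Step 2: the average squared error over every house we know about.
    total = 0.0
    for house, sold_for in zip(houses, sold_prices):
        guess = estimate_price(*house, weights)
        total += (guess - sold_for) ** 2
    return total / len(houses)

houses = [(3, 2000, 1), (2, 800, 2), (4, 2500, 1)]    # made-up listings
sold_prices = [250000, 150000, 320000]                 # made-up sale prices
print(cost(houses, sold_prices, weights=(1.0, 1.0, 1.0, 1.0)))   # the naive starting point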
Whichever combination of weights makes the cost closest to zero is what you use. When you find the weights that work, you’ve solved the problem! That’s pretty simple, right? Well, think about what you just did. You took some data, you fed it through three generic, really simple steps, and you ended up with a function that can guess the price of any house in your area. Watch out, Zillow! But here’s a few more facts that will blow your mind: Pretty crazy, right? Ok, of course you can’t just try every combination of all possible weights to find the combo that works the best. That would literally take forever since you’d never run out of numbers to try. To avoid that, mathematicians have figured out lots of clever ways to quickly find good values for those weights without having to try very many. Here’s one way: First, write a simple equation that represents Step #2 above: Now let’s re-write exactly the same equation, but using a bunch of machine learning math jargon (that you can ignore for now): This equation represents how wrong our price-estimating function is for the weights we currently have set. If we graph this cost equation for all possible values of our weights for number_of_bedrooms and sqft, we’d get a graph that might look something like this: In this graph, the lowest point in blue is where our cost is the lowest — thus our function is the least wrong. The highest points are where we are most wrong. So if we can find the weights that get us to the lowest point on this graph, we’ll have our answer! So we just need to adjust our weights so we are “walking down hill” on this graph towards the lowest point. If we keep making small adjustments to our weights that are always moving towards the lowest point, we’ll eventually get there without having to try too many different weights. If you remember anything from Calculus, you might remember that if you take the derivative of a function, it tells you the slope of the function’s tangent at any point. In other words, it tells us which way is downhill for any given point on our graph. We can use that knowledge to walk downhill. So if we calculate a partial derivative of our cost function with respect to each of our weights, then we can subtract that value from each weight. That will walk us one step closer to the bottom of the hill. Keep doing that and eventually we’ll reach the bottom of the hill and have the best possible values for our weights. (If that didn’t make sense, don’t worry and keep reading). That’s a high-level summary of one way to find the best weights for your function, called batch gradient descent. Don’t be afraid to dig deeper if you are interested in learning the details. When you use a machine learning library to solve a real problem, all of this will be done for you. But it’s still useful to have a good idea of what is happening. The three-step algorithm I described is called multivariate linear regression. You are estimating the equation for a line that fits through all of your house data points. Then you are using that equation to guess the sales price of houses you’ve never seen before based on where that house would appear on your line. It’s a really powerful idea and you can solve “real” problems with it. But while the approach I showed you might work in simple cases, it won’t work in all cases. One reason is because house prices aren’t always simple enough to follow a continuous line. But luckily there are lots of ways to handle that. 
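(Here is a minimal sketch of that batch gradient descent walk on a few made-up houses; the feature scaling, learning rate, and iteration count are arbitrary illustrative choices, and a real library would handle all of this for you.)

import numpy as np

X = np.array([[3, 2000, 1], [2, 800, 2], [4, 2500, 1]], dtype=float)  # made-up features
X = (X - X.mean(axis=0)) / X.std(axis=0)       # scale features so one step size works for every weight
X = np.hstack([X, np.ones((len(X), 1))])       # extra column of ones for the base weight
y = np.array([250000, 150000, 320000], dtype=float)

weights = np.zeros(X.shape[1])
learning_rate = 0.1
for _ in range(1000):
    guesses = X @ weights
    gradient = 2 * X.T @ (guesses - y) / len(y)   # partial derivatives of the average squared error
    weights -= learning_rate * gradient           # one small step downhill
print(weights, X @ weights)                       # learned weights and the fitted prices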
There are plenty of other machine learning algorithms that can handle non-linear data (like neural networks or SVMs with kernels). There are also ways to use linear regression more cleverly that allow for more complicated lines to be fit. In all cases, the same basic idea of needing to find the best weights still applies. Also, I ignored the idea of overfitting. It’s easy to come up with a set of weights that always works perfectly for predicting the prices of the houses in your original data set but never actually works for any new houses that weren’t in your original data set. But there are ways to deal with this (like regularization and using a cross-validation data set). Learning how to deal with this issue is a key part of learning how to apply machine learning successfully. In other words, while the basic concept is pretty simple, it takes some skill and experience to apply machine learning and get useful results. But it’s a skill that any developer can learn! Once you start seeing how easily machine learning techniques can be applied to problems that seem really hard (like handwriting recognition), you start to get the feeling that you could use machine learning to solve any problem and get an answer as long as you have enough data. Just feed in the data and watch the computer magically figure out the equation that fits the data! But it’s important to remember that machine learning only works if the problem is actually solvable with the data that you have. For example, if you build a model that predicts home prices based on the type of potted plants in each house, it’s never going to work. There just isn’t any kind of relationship between the potted plants in each house and the home’s sale price. So no matter how hard it tries, the computer can never deduce a relationship between the two. So remember, if a human expert couldn’t use the data to solve the problem manually, a computer probably won’t be able to either. Instead, focus on problems where a human could solve the problem, but where it would be great if a computer could solve it much more quickly. In my mind, the biggest problem with machine learning right now is that it mostly lives in the world of academia and commercial research groups. There isn’t a lot of easy to understand material out there for people who would like to get a broad understanding without actually becoming experts. But it’s getting a little better every day. If you want to try out what you’ve learned in this article, I made a course that walks you through every step of this article, including writing all the code. Give it a try! If you want to go deeper, Andrew Ng’s free Machine Learning class on Coursera is pretty amazing as a next step. I highly recommend it. It should be accessible to anyone who has a Comp. Sci. degree and who remembers a very minimal amount of math. Also, you can play around with tons of machine learning algorithms by downloading and installing SciKit-Learn. It’s a python framework that has “black box” versions of all the standard algorithms. If you liked this article, please consider signing up for my Machine Learning is Fun! Newsletter: Also, please check out the full-length course version of this article. It covers everything in this article in more detail, including writing the actual code in Python. You can get a free 30-day trial to watch the course if you sign up with this link. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. 
I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun Part 2! Interested in computers and machine learning. Likes to write about it. " Adam Geitgey,14.2K,15,https://medium.com/@ageitgey/machine-learning-is-fun-part-3-deep-learning-and-convolutional-neural-networks-f40359318721?source=tag_archive---------2----------------,Machine Learning is Fun! Part 3: Deep Learning and Convolutional Neural Networks,"Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 普通话, Русский, 한국어, Português, Tiếng Việt or Italiano. Are you tired of reading endless news stories about deep learning and not really knowing what that means? Let’s change that! This time, we are going to learn how to write programs that recognize objects in images using deep learning. In other words, we’re going to explain the black magic that allows Google Photos to search your photos based on what is in the picture: Just like Part 1 and Part 2, this guide is for anyone who is curious about machine learning but has no idea where to start. The goal is to be accessible to anyone — which means that there’s a lot of generalizations and we skip lots of details. But who cares? If this gets anyone more interested in ML, then mission accomplished! (If you haven’t already read part 1 and part 2, read them now!) You might have seen this famous xkcd comic before. The goof is based on the idea that any 3-year-old child can recognize a photo of a bird, but figuring out how to make a computer recognize objects has puzzled the very best computer scientists for over 50 years. In the last few years, we’ve finally found a good approach to object recognition using deep convolutional neural networks. That sounds like a bunch of made-up words from a William Gibson Sci-Fi novel, but the ideas are totally understandable if you break them down one by one. So let’s do it — let’s write a program that can recognize birds! Before we learn how to recognize pictures of birds, let’s learn how to recognize something much simpler — the handwritten number “8”. In Part 2, we learned about how neural networks can solve complex problems by chaining together lots of simple neurons. We created a small neural network to estimate the price of a house based on how many bedrooms it had, how big it was, and which neighborhood it was in: We also know that the idea of machine learning is that the same generic algorithms can be reused with different data to solve different problems. So let’s modify this same neural network to recognize handwritten text. But to make the job really simple, we’ll only try to recognize one letter — the numeral “8”. Machine learning only works when you have data — preferably a lot of data. So we need lots and lots of handwritten “8”s to get started. Luckily, researchers created the MNIST data set of handwritten numbers for this very purpose. MNIST provides 60,000 images of handwritten digits, each as an 18x18 image. Here are some “8”s from the data set: The neural network we made in Part 2 only took in three numbers as the input (“3” bedrooms, “2000” sq. feet, etc.). But now we want to process images with our neural network. How in the world do we feed images into a neural network instead of just numbers? The answer is incredibly simple. A neural network takes numbers as input. 
To a computer, an image is really just a grid of numbers that represent how dark each pixel is: To feed an image into our neural network, we simply treat the 18x18 pixel image as an array of 324 numbers: To handle 324 inputs, we’ll just enlarge our neural network to have 324 input nodes: Notice that our neural network also has two outputs now (instead of just one). The first output will predict the likelihood that the image is an “8” and the second output will predict the likelihood it isn’t an “8”. By having a separate output for each type of object we want to recognize, we can use a neural network to classify objects into groups. Our neural network is a lot bigger than last time (324 inputs instead of 3!). But any modern computer can handle a neural network with a few hundred nodes without blinking. This would even work fine on your cell phone. All that’s left is to train the neural network with images of “8”s and not-“8”s so it learns to tell them apart. When we feed in an “8”, we’ll tell it the probability the image is an “8” is 100% and the probability it’s not an “8” is 0%. Vice versa for the counter-example images. Here’s some of our training data: We can train this kind of neural network in a few minutes on a modern laptop. When it’s done, we’ll have a neural network that can recognize pictures of “8”s with a pretty high accuracy. Welcome to the world of (late 1980s-era) image recognition! It’s really neat that simply feeding pixels into a neural network actually worked to build image recognition! Machine learning is magic! ...right? Well, of course it’s not that simple. First, the good news is that our “8” recognizer really does work well on simple images where the letter is right in the middle of the image: But now the really bad news: Our “8” recognizer totally fails to work when the letter isn’t perfectly centered in the image. Just the slightest position change ruins everything: This is because our network only learned the pattern of a perfectly-centered “8”. It has absolutely no idea what an off-center “8” is. It knows exactly one pattern and one pattern only. That’s not very useful in the real world. Real world problems are never that clean and simple. So we need to figure out how to make our neural network work in cases where the “8” isn’t perfectly centered. We already created a really good program for finding an “8” centered in an image. What if we just scan all around the image for possible “8”s in smaller sections, one section at a time, until we find one? This approach is called a sliding window. It’s the brute force solution. It works well in some limited cases, but it’s really inefficient. You have to check the same image over and over looking for objects of different sizes. We can do better than this! When we trained our network, we only showed it “8”s that were perfectly centered. What if we train it with more data, including “8”s in all different positions and sizes all around the image? We don’t even need to collect new training data. We can just write a script to generate new images with the “8”s in all kinds of different positions in the image: Using this technique, we can easily create an endless supply of training data. More data makes the problem harder for our neural network to solve, but we can compensate for that by making our network bigger and thus able to learn more complicated patterns. 
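(A minimal sketch of the "generate shifted copies" idea just described, using NumPy on an 18x18 array standing in for one handwritten "8"; np.roll is one simple way to slide the digit around the frame, though it wraps pixels around the edge, and padding with zeros is another option.)

import numpy as np

image = np.random.rand(18, 18)        # stand-in for one 18x18 handwritten "8"

augmented = []
for dy in range(-3, 4):
    for dx in range(-3, 4):
        shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)  # slide the digit around the frame
        augmented.append(shifted.reshape(-1))                      # flatten to the 324 input numbers

print(len(augmented), augmented[0].shape)   # 49 shifted copies, each a 324-long input vector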
To make the network bigger, we just stack up layer upon layer of nodes: We call this a “deep neural network” because it has more layers than a traditional neural network. This idea has been around since the late 1960s. But until recently, training this large of a neural network was just too slow to be useful. But once we figured out how to use 3d graphics cards (which were designed to do matrix multiplication really fast) instead of normal computer processors, working with large neural networks suddenly became practical. In fact, the exact same NVIDIA GeForce GTX 1080 video card that you use to play Overwatch can be used to train neural networks incredibly quickly. But even though we can make our neural network really big and train it quickly with a 3d graphics card, that still isn’t going to get us all the way to a solution. We need to be smarter about how we process images into our neural network. Think about it. It doesn’t make sense to train a network to recognize an “8” at the top of a picture separately from training it to recognize an “8” at the bottom of a picture as if those were two totally different objects. There should be some way to make the neural network smart enough to know that an “8” anywhere in the picture is the same thing without all that extra training. Luckily... there is! As a human, you intuitively know that pictures have a hierarchy or conceptual structure. Consider this picture: As a human, you instantly recognize the hierarchy in this picture: Most importantly, we recognize the idea of a child no matter what surface the child is on. We don’t have to re-learn the idea of child for every possible surface it could appear on. But right now, our neural network can’t do this. It thinks that an “8” in a different part of the image is an entirely different thing. It doesn’t understand that moving an object around in the picture doesn’t make it something different. This means it has to re-learn the identity of each object in every possible position. That sucks. We need to give our neural network an understanding of translation invariance — an “8” is an “8” no matter where in the picture it shows up. We’ll do this using a process called Convolution. The idea of convolution is inspired partly by computer science and partly by biology (i.e. mad scientists literally poking cat brains with weird probes to figure out how cats process images). Instead of feeding entire images into our neural network as one grid of numbers, we’re going to do something a lot smarter that takes advantage of the idea that an object is the same no matter where it appears in a picture. Here’s how it’s going to work, step by step — Similar to our sliding window search above, let’s pass a sliding window over the entire original image and save each result as a separate, tiny picture tile: By doing this, we turned our original image into 77 equally-sized tiny image tiles. Earlier, we fed a single image into a neural network to see if it was an “8”. We’ll do the exact same thing here, but we’ll do it for each individual image tile: However, there’s one big twist: We’ll keep the same neural network weights for every single tile in the same original image. In other words, we are treating every image tile equally. If something interesting appears in any given tile, we’ll mark that tile as interesting. We don’t want to lose track of the arrangement of the original tiles. So we save the result from processing each tile into a grid in the same arrangement as the original image. 
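(A minimal sketch of the tiling step just described, cutting an image into equally sized tiles and scoring every tile with the same shared weights; the tile size, stride, and random stand-ins are illustrative assumptions, and the article's 77-tile figure simply reflects a different image size.)

import numpy as np

image = np.random.rand(64, 64)   # stand-in for the input picture
tile, stride = 16, 8             # overlapping 16x16 tiles, chosen arbitrarily

tiles = []
for top in range(0, image.shape[0] - tile + 1, stride):
    for left in range(0, image.shape[1] - tile + 1, stride):
        tiles.append(image[top:top + tile, left:left + tile])

grid_h = (image.shape[0] - tile) // stride + 1
grid_w = (image.shape[1] - tile) // stride + 1
print(len(tiles), grid_h, grid_w)   # 49 tiles arranged in a 7x7 grid

# Shared weights: the SAME tiny scorer (here just a fixed random projection as a stand-in
# for the little "8" network) scores every tile, and the scores keep the grid arrangement.
shared_weights = np.random.rand(tile * tile)
scores = np.array([t.reshape(-1) @ shared_weights for t in tiles]).reshape(grid_h, grid_w)
print(scores.shape)                 # (7, 7) map of how "interesting" each tile looked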
It looks like this: In other words, we started with a large image and ended up with a slightly smaller array that records which sections of our original image were the most interesting. The result of Step 3 was an array that maps out which parts of the original image are the most interesting. But that array is still pretty big: To reduce the size of the array, we downsample it using an algorithm called max pooling. It sounds fancy, but it isn’t at all! We’ll just look at each 2x2 square of the array and keep the biggest number: The idea here is that if we found something interesting in any of the four input tiles that make up each 2x2 grid square, we’ll just keep the most interesting bit. This reduces the size of our array while keeping the most important bits. So far, we’ve reduced a giant image down into a fairly small array. Guess what? That array is just a bunch of numbers, so we can use that small array as input into another neural network. This final neural network will decide if the image is or isn’t a match. To differentiate it from the convolution step, we call it a “fully connected” network. So from start to finish, our whole five-step pipeline looks like this: Our image processing pipeline is a series of steps: convolution, max-pooling, and finally a fully-connected network. When solving problems in the real world, these steps can be combined and stacked as many times as you want! You can have two, three or even ten convolution layers. You can throw in max pooling wherever you want to reduce the size of your data. The basic idea is to start with a large image and continually boil it down, step-by-step, until you finally have a single result. The more convolution steps you have, the more complicated features your network will be able to learn to recognize. For example, the first convolution step might learn to recognize sharp edges, the second convolution step might recognize beaks using its knowledge of sharp edges, the third step might recognize entire birds using its knowledge of beaks, etc. Here’s what a more realistic deep convolutional network (like you would find in a research paper) looks like: In this case, they start with a 224 x 224 pixel image, apply convolution and max pooling twice, apply convolution 3 more times, apply max pooling and then have two fully-connected layers. The end result is that the image is classified into one of 1000 categories! So how do you know which steps you need to combine to make your image classifier work? Honestly, you have to answer this by doing a lot of experimentation and testing. You might have to train 100 networks before you find the optimal structure and parameters for the problem you are solving. Machine learning involves a lot of trial and error! Now finally we know enough to write a program that can decide if a picture is a bird or not. As always, we need some data to get started. The free CIFAR10 data set contains 6,000 pictures of birds and 52,000 pictures of things that are not birds. But to get even more data we’ll also add in the Caltech-UCSD Birds-200–2011 data set that has another 12,000 bird pics. Here’s a few of the birds from our combined data set: And here’s some of the 52,000 non-bird images: This data set will work fine for our purposes, but 72,000 low-res images is still pretty small for real-world applications. If you want Google-level performance, you need millions of large images. In machine learning, having more data is almost always more important than having better algorithms. 
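(Looping back to the max-pooling step described a moment ago, here is a minimal NumPy sketch of 2x2 max pooling over a grid of tile scores; the 8x8 input size is an arbitrary illustrative choice.)

import numpy as np

scores = np.random.rand(8, 8)   # stand-in for the grid of per-tile results

# Group the grid into 2x2 squares and keep only the biggest number in each square.
pooled = scores.reshape(4, 2, 4, 2).max(axis=(1, 3))
print(scores.shape, '->', pooled.shape)   # (8, 8) -> (4, 4)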
That appetite for data is also why Google is so happy to offer you unlimited photo storage. They want your sweet, sweet data! To build our classifier, we'll use TFLearn. TFLearn is a wrapper around Google's TensorFlow deep learning library that exposes a simplified API. It makes building convolutional neural networks as easy as writing a few lines of code to define the layers of our network. Here's the code to define and train the network: If you are training with a good video card with enough RAM (like an NVIDIA GeForce GTX 980 Ti or better), this will be done in less than an hour. If you are training with a normal CPU, it might take a lot longer. As it trains, the accuracy will increase. After the first pass, I got 75.4% accuracy. After just 10 passes, it was already up to 91.7%. After 50 or so passes, it capped out around 95.5% accuracy and additional training didn't help, so I stopped it there. Congrats! Our program can now recognize birds in images! Now that we have a trained neural network, we can use it! Here's a simple script that takes in a single image file and predicts if it is a bird or not. But to really see how effective our network is, we need to test it with lots of images. The data set I created held back 15,000 images for validation. When I ran those 15,000 images through the network, it predicted the correct answer 95% of the time. That seems pretty good, right? Well... it depends! Our network claims to be 95% accurate. But the devil is in the details. That could mean all sorts of different things. For example, what if 5% of our training images were birds and the other 95% were not birds? A program that guessed "not a bird" every single time would be 95% accurate! But it would also be 100% useless. We need to look more closely at the numbers than just the overall accuracy. To judge how good a classification system really is, we need to look closely at how it failed, not just the percentage of the time that it failed. Instead of thinking about our predictions as "right" and "wrong", let's break them down into four separate categories — Using our validation set of 15,000 images, here's how many times our predictions fell into each category: Why do we break our results down like this? Because not all mistakes are created equal. Imagine if we were writing a program to detect cancer from an MRI image. If we were detecting cancer, we'd rather have false positives than false negatives. False negatives would be the worst possible case — that's when the program told someone they definitely didn't have cancer but they actually did. Instead of just looking at overall accuracy, we calculate Precision and Recall metrics. Precision and Recall metrics give us a clearer picture of how well we did: This tells us that 97% of the time we guessed "Bird", we were right! But it also tells us that we only found 90% of the actual birds in the data set. In other words, we might not find every bird but we are pretty sure about it when we do find one! Now that you know the basics of deep convolutional networks, you can try out some of the examples that come with TFLearn to get your hands dirty with different neural network architectures. It even comes with built-in data sets so you don't even have to find your own images. You also know enough now to start branching out and learning about other areas of machine learning. Why not learn how to use algorithms to train computers how to play Atari games next? If you liked this article, please consider signing up for my Machine Learning is Fun! email list.
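A note on the training code referenced above ("Here's the code to define and train the network"): the embedded snippet isn't reproduced in this text, but with TFLearn a bird classifier is defined roughly along these lines. The layer sizes, training settings, and the load_training_data() helper below are plausible assumptions for a 32x32 CIFAR-style data set, not necessarily the exact ones used in the original article.

    import tflearn
    from tflearn.layers.core import input_data, dropout, fully_connected
    from tflearn.layers.conv import conv_2d, max_pool_2d
    from tflearn.layers.estimator import regression

    # X, Y, X_test, Y_test are 32x32x3 images and one-hot labels loaded elsewhere;
    # load_training_data() is a hypothetical helper, not part of TFLearn.
    X, Y, X_test, Y_test = load_training_data()

    network = input_data(shape=[None, 32, 32, 3])
    network = conv_2d(network, 32, 3, activation='relu')         # convolution
    network = max_pool_2d(network, 2)                            # max pooling
    network = conv_2d(network, 64, 3, activation='relu')
    network = conv_2d(network, 64, 3, activation='relu')
    network = max_pool_2d(network, 2)
    network = fully_connected(network, 512, activation='relu')   # fully connected
    network = dropout(network, 0.5)
    network = fully_connected(network, 2, activation='softmax')  # bird / not bird
    network = regression(network, optimizer='adam',
                         loss='categorical_crossentropy', learning_rate=0.001)

    model = tflearn.DNN(network)
    model.fit(X, Y, n_epoch=50, shuffle=True,
              validation_set=(X_test, Y_test), show_metric=True, batch_size=96)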
I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun Part 4, Part 5 and Part 6! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it. " Adam Geitgey,15.2K,13,https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78?source=tag_archive---------3----------------,Machine Learning is Fun! Part 4: Modern Face Recognition with Deep Learning,"Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 普通话, Русский, 한국어, Português, Tiếng Việt or Italiano. Have you noticed that Facebook has developed an uncanny ability to recognize your friends in your photographs? In the old days, Facebook used to make you to tag your friends in photos by clicking on them and typing in their name. Now as soon as you upload a photo, Facebook tags everyone for you like magic: This technology is called face recognition. Facebook’s algorithms are able to recognize your friends’ faces after they have been tagged only a few times. It’s pretty amazing technology — Facebook can recognize faces with 98% accuracy which is pretty much as good as humans can do! Let’s learn how modern face recognition works! But just recognizing your friends would be too easy. We can push this tech to the limit to solve a more challenging problem — telling Will Ferrell (famous actor) apart from Chad Smith (famous rock musician)! So far in Part 1, 2 and 3, we’ve used machine learning to solve isolated problems that have only one step — estimating the price of a house, generating new data based on existing data and telling if an image contains a certain object. All of those problems can be solved by choosing one machine learning algorithm, feeding in data, and getting the result. But face recognition is really a series of several related problems: As a human, your brain is wired to do all of this automatically and instantly. In fact, humans are too good at recognizing faces and end up seeing faces in everyday objects: Computers are not capable of this kind of high-level generalization (at least not yet...), so we have to teach them how to do each step in this process separately. We need to build a pipeline where we solve each step of face recognition separately and pass the result of the current step to the next step. In other words, we will chain together several machine learning algorithms: Let’s tackle this problem one step at a time. For each step, we’ll learn about a different machine learning algorithm. I’m not going to explain every single algorithm completely to keep this from turning into a book, but you’ll learn the main ideas behind each one and you’ll learn how you can build your own facial recognition system in Python using OpenFace and dlib. The first step in our pipeline is face detection. Obviously we need to locate the faces in a photograph before we can try to tell them apart! If you’ve used any camera in the last 10 years, you’ve probably seen face detection in action: Face detection is a great feature for cameras. 
When the camera can automatically pick out faces, it can make sure that all the faces are in focus before it takes the picture. But we'll use it for a different purpose — finding the areas of the image we want to pass on to the next step in our pipeline. Face detection went mainstream in the early 2000s when Paul Viola and Michael Jones invented a way to detect faces that was fast enough to run on cheap cameras. However, much more reliable solutions exist now. We're going to use a method invented in 2005 called Histogram of Oriented Gradients — or just HOG for short. To find faces in an image, we'll start by making our image black and white because we don't need color data to find faces: Then we'll look at every single pixel in our image one at a time. For every single pixel, we want to look at the pixels that directly surround it: Our goal is to figure out how dark the current pixel is compared to the pixels directly surrounding it. Then we want to draw an arrow showing in which direction the image is getting darker: If you repeat that process for every single pixel in the image, you end up with every pixel being replaced by an arrow. These arrows are called gradients and they show the flow from light to dark across the entire image: This might seem like a random thing to do, but there's a really good reason for replacing the pixels with gradients. If we analyze pixels directly, really dark images and really light images of the same person will have totally different pixel values. But by only considering the direction that brightness changes, both really dark images and really bright images will end up with the same exact representation. That makes the problem a lot easier to solve! But saving the gradient for every single pixel gives us way too much detail. We end up missing the forest for the trees. It would be better if we could just see the basic flow of lightness/darkness at a higher level so we could see the basic pattern of the image. To do this, we'll break up the image into small squares of 16x16 pixels each. In each square, we'll count up how many gradients point in each major direction (how many point up, point up-right, point right, etc...). Then we'll replace that square in the image with the arrow directions that were the strongest. The end result is that we turn the original image into a very simple representation that captures the basic structure of a face: To find faces in this HOG image, all we have to do is find the part of our image that looks the most similar to a known HOG pattern that was extracted from a bunch of other training faces: Using this technique, we can now easily find faces in any image: If you want to try this step out yourself using Python and dlib, here's code showing how to generate and view HOG representations of images. Whew, we isolated the faces in our image. But now we have to deal with the problem that faces turned in different directions look totally different to a computer: To account for this, we will try to warp each picture so that the eyes and lips are always in the same place in the image. This will make it a lot easier for us to compare faces in the next steps. To do this, we are going to use an algorithm called face landmark estimation. There are lots of ways to do this, but we are going to use the approach invented in 2014 by Vahid Kazemi and Josephine Sullivan.
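If it helps to see the shape of that face detection step in code, here's a minimal sketch using dlib's built-in HOG-based detector. The image file name is made up, and it assumes a reasonably recent dlib build that includes load_rgb_image.

    import dlib

    detector = dlib.get_frontal_face_detector()        # dlib's HOG-based face detector
    image = dlib.load_rgb_image("family_photo.jpg")    # hypothetical image file

    # The second argument upsamples the image once, which helps find smaller faces.
    face_rectangles = detector(image, 1)

    for i, rect in enumerate(face_rectangles):
        print("Face {}: left={}, top={}, right={}, bottom={}".format(
            i, rect.left(), rect.top(), rect.right(), rect.bottom()))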
The basic idea of their approach is that we will come up with 68 specific points (called landmarks) that exist on every face — the top of the chin, the outside edge of each eye, the inner edge of each eyebrow, etc. Then we will train a machine learning algorithm to be able to find these 68 specific points on any face: Here's the result of locating the 68 face landmarks on our test image: Now that we know where the eyes and mouth are, we'll simply rotate, scale and shear the image so that the eyes and mouth are centered as best as possible. We won't do any fancy 3D warps because that would introduce distortions into the image. We are only going to use basic image transformations like rotation and scale that preserve parallel lines (called affine transformations): Now no matter how the face is turned, we are able to center the eyes and mouth in roughly the same position in the image. This will make our next step a lot more accurate. If you want to try this step out yourself using Python and dlib, here's the code for finding face landmarks and here's the code for transforming the image using those landmarks. Now we get to the meat of the problem — actually telling faces apart. This is where things get really interesting! The simplest approach to face recognition is to directly compare the unknown face we found in Step 2 with all the pictures we have of people that have already been tagged. When we find a previously tagged face that looks very similar to our unknown face, it must be the same person. Seems like a pretty good idea, right? There's actually a huge problem with that approach. A site like Facebook with billions of users and a trillion photos can't possibly loop through every previously tagged face to compare it to every newly uploaded picture. That would take way too long. They need to be able to recognize faces in milliseconds, not hours. What we need is a way to extract a few basic measurements from each face. Then we could measure our unknown face the same way and find the known face with the closest measurements. For example, we might measure the size of each ear, the spacing between the eyes, the length of the nose, etc. If you've ever watched a bad crime show like CSI, you know what I am talking about: OK, so which measurements should we collect from each face to build our known face database? Ear size? Nose length? Eye color? Something else? It turns out that the measurements that seem obvious to us humans (like eye color) don't really make sense to a computer looking at individual pixels in an image. Researchers have discovered that the most accurate approach is to let the computer figure out the measurements to collect itself. Deep learning does a better job than humans at figuring out which parts of a face are important to measure. The solution is to train a Deep Convolutional Neural Network (just like we did in Part 3). But instead of training the network to recognize pictures of objects like we did last time, we are going to train it to generate 128 measurements for each face. The training process works by looking at 3 face images at a time: Then the algorithm looks at the measurements it is currently generating for each of those three images.
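Before continuing with how that training process works, here's a minimal dlib sketch of the landmark step just described. It assumes dlib's standard pre-trained 68-point model file (shape_predictor_68_face_landmarks.dat), which is downloaded separately, and the image file name is made up.

    import dlib

    detector = dlib.get_frontal_face_detector()
    # dlib's standard pre-trained 68-point landmark model, downloaded separately.
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    image = dlib.load_rgb_image("unknown_face.jpg")    # hypothetical image file
    for rect in detector(image, 1):
        landmarks = predictor(image, rect)
        # landmarks.part(n) is the (x, y) position of the n-th of the 68 points;
        # in the standard layout, point 8 is roughly the bottom of the chin.
        chin_tip = landmarks.part(8)
        print("Found a face; chin tip is at ({}, {})".format(chin_tip.x, chin_tip.y))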
The training algorithm then tweaks the neural network slightly so that it makes sure the measurements it generates for #1 and #2 are slightly closer while making sure the measurements for #2 and #3 are slightly further apart: After repeating this step millions of times for millions of images of thousands of different people, the neural network learns to reliably generate 128 measurements for each person. Any ten different pictures of the same person should give roughly the same measurements. Machine learning people call the 128 measurements of each face an embedding. The idea of reducing complicated raw data like a picture into a list of computer-generated numbers comes up a lot in machine learning (especially in language translation). The exact approach for faces we are using was invented in 2015 by researchers at Google but many similar approaches exist. This process of training a convolutional neural network to output face embeddings requires a lot of data and computer power. Even with an expensive NVIDIA Tesla video card, it takes about 24 hours of continuous training to get good accuracy. But once the network has been trained, it can generate measurements for any face, even ones it has never seen before! So this step only needs to be done once. Lucky for us, the fine folks at OpenFace already did this and they published several trained networks which we can directly use. Thanks Brandon Amos and team! So all we need to do ourselves is run our face images through their pre-trained network to get the 128 measurements for each face. Here are the measurements for our test image: So what parts of the face are these 128 numbers measuring exactly? It turns out that we have no idea. It doesn't really matter to us. All we care about is that the network generates nearly the same numbers when looking at two different pictures of the same person. If you want to try this step yourself, OpenFace provides a Lua script that will generate embeddings for all the images in a folder and write them to a CSV file. You run it like this. This last step is actually the easiest step in the whole process. All we have to do is find the person in our database of known people who has the closest measurements to our test image. You can do that by using any basic machine learning classification algorithm. No fancy deep learning tricks are needed. We'll use a simple linear SVM classifier, but lots of classification algorithms could work. All we need to do is train a classifier that can take in the measurements from a new test image and tell which known person is the closest match. Running this classifier takes milliseconds. The result of the classifier is the name of the person! So let's try out our system. First, I trained a classifier with the embeddings of about 20 pictures each of Will Ferrell, Chad Smith and Jimmy Fallon: Then I ran the classifier on every frame of the famous YouTube video of Will Ferrell and Chad Smith pretending to be each other on the Jimmy Fallon show: It works! And look how well it works for faces in different poses — even sideways faces! Let's review the steps we followed: Now that you know how this all works, here are start-to-finish instructions for running this entire face recognition pipeline on your own computer: UPDATE 4/9/2017: You can still follow the steps below to use OpenFace. However, I've released a new Python-based face recognition library called face_recognition that is much easier to install and use. So I'd recommend trying out face_recognition first instead of continuing below!
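To give a feel for how much simpler the face_recognition route is, matching a face boils down to a few lines. This is a rough sketch and the image file names are made up:

    import face_recognition

    # Learn what a known person looks like from one labeled photo.
    known_image = face_recognition.load_image_file("will_ferrell.jpg")
    known_encoding = face_recognition.face_encodings(known_image)[0]   # the 128 measurements

    # Check an unknown photo against it.
    unknown_image = face_recognition.load_image_file("unknown.jpg")
    for unknown_encoding in face_recognition.face_encodings(unknown_image):
        match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]
        print("Is this Will Ferrell?", match)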
I even put together a pre-configured virtual machine with face_recognition, OpenCV, TensorFlow and lots of other deep learning tools pre-installed. You can download and run it on your computer very easily. Give the virtual machine a shot if you don’t want to install all these libraries yourself! Original OpenFace instructions: If you liked this article, please consider signing up for my Machine Learning is Fun! newsletter: You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun Part 5! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it. " Xiaohan Zeng,48K,13,https://medium.com/@XiaohanZeng/i-interviewed-at-five-top-companies-in-silicon-valley-in-five-days-and-luckily-got-five-job-offers-25178cf74e0f?source=tag_archive---------4----------------,"I interviewed at five top companies in Silicon Valley in five days, and luckily got five job offers","In the five days from July 24th to 28th 2017, I interviewed at LinkedIn, Salesforce Einstein, Google, Airbnb, and Facebook, and got all five job offers. It was a great experience, and I feel fortunate that my efforts paid off, so I decided to write something about it. I will discuss how I prepared, review the interview process, and share my impressions about the five companies. I had been at Groupon for almost three years. It’s my first job, and I have been working with an amazing team and on awesome projects. We’ve been building cool stuff, making impact within the company, publishing papers and all that. But I felt my learning rate was being annealed (read: slowing down) yet my mind was craving more. Also as a software engineer in Chicago, there are so many great companies that all attract me in the Bay Area. Life is short, and professional life shorter still. After talking with my wife and gaining her full support, I decided to take actions and make my first ever career change. Although I’m interested in machine learning positions, the positions at the five companies are slightly different in the title and the interviewing process. Three are machine learning engineer (LinkedIn, Google, Facebook), one is data engineer (Salesforce), and one is software engineer in general (Airbnb). Therefore I needed to prepare for three different areas: coding, machine learning, and system design. Since I also have a full time job, it took me 2–3 months in total to prepare. Here is how I prepared for the three areas. While I agree that coding interviews might not be the best way to assess all your skills as a developer, there is arguably no better way to tell if you are a good engineer in a short period of time. IMO it is the necessary evil to get you that job. I mainly used Leetcode and Geeksforgeeks for practicing, but Hackerrank and Lintcode are also good places. I spent several weeks going over common data structures and algorithms, then focused on areas I wasn’t too familiar with, and finally did some frequently seen problems. Due to my time constraints I usually did two problems per day. Here are some thoughts: This area is more closely related to the actual working experience. Many questions can be asked during system design interviews, including but not limited to system architecture, object oriented design,database schema design,distributed system design,scalability, etc. 
There are many resources online that can help you with the preparation. For the most part I read articles on system design interviews, architectures of large-scale systems, and case studies. Here are some resources that I found really helpful: Although system design interviews can cover a lot of topics, there are some general guidelines for how to approach the problem: With all that said, the best way to practice for system design interviews is to actually sit down and design a system, i.e. your day-to-day work. Instead of doing the minimal work, go deeper into the tools, frameworks, and libraries you use. For example, if you use HBase, rather than simply using the client to run some DDL and do some fetches, try to understand its overall architecture, such as the read/write flow, how HBase ensures strong consistency, what minor/major compactions do, and where LRU cache and Bloom Filter are used in the system. You can even compare HBase with Cassandra and see the similarities and differences in their design. Then when you are asked to design a distributed key-value store, you won't feel ambushed. Many blogs are also a great source of knowledge, such as Hacker Noon and engineering blogs of some companies, as well as the official documentation of open source projects. The most important thing is to keep your curiosity and modesty. Be a sponge that absorbs everything it is submerged into. Machine learning interviews can be divided into two aspects: theory and product design. Unless you have experience in machine learning research or did really well in your ML course, it helps to read some textbooks. Classical ones such as the Elements of Statistical Learning and Pattern Recognition and Machine Learning are great choices, and if you are interested in specific areas you can read more on those. Make sure you understand basic concepts such as bias-variance trade-off, overfitting, gradient descent, L1/L2 regularization, Bayes Theorem, bagging/boosting, collaborative filtering, dimension reduction, etc. Familiarize yourself with common formulas such as Bayes Theorem and the derivation of popular models such as logistic regression and SVM. Try to implement simple models such as decision trees and K-means clustering. If you put some models on your resume, make sure you understand them thoroughly and can comment on their pros and cons. For ML product design, understand the general process of building an ML product. Here's what I tried to do: Here I want to emphasize again the importance of remaining curious and learning continuously. Try not to merely use the API for Spark MLlib or XGBoost and call it done, but try to understand why stochastic gradient descent is appropriate for distributed training, or understand how XGBoost differs from traditional GBDT, e.g. what is special about its loss function, why it needs to compute the second order derivative, etc. I started by replying to HR's messages on LinkedIn, and asking for referrals. After a failed attempt at a rock star startup (which I will touch upon later), I prepared hard for several months, and with help from my recruiters, I scheduled a full week of onsites in the Bay Area. I flew in on Sunday, had five full days of interviews with around 30 interviewers at some of the best tech companies in the world, and very luckily, got job offers from all five of them. All phone screenings are standard. The only difference is in the duration: For some companies like LinkedIn it's one hour, while for Facebook and Airbnb it's 45 minutes.
Proficiency is the key here, since you are under the time gun and usually you only get one chance. You would have to very quickly recognize the type of problem and give a high-level solution. Be sure to talk to the interviewer about your thinking and intentions. It might slow you down a little at the beginning, but communication is more important than anything and it only helps with the interview. Do not recite the solution as the interviewer would almost certainly see through it. For machine learning positions some companies would ask ML questions. If you are interviewing for those make sure you brush up your ML skills as well. To make better use of my time, I scheduled three phone screenings in the same afternoon, one hour apart from each. The upside is that you might benefit from the hot hand and the downside is that the later ones might be affected if the first one does not go well, so I don’t recommend it for everyone. One good thing about interviewing with multiple companies at the same time is that it gives you certain advantages. I was able to skip the second round phone screening with Airbnb and Salesforce because I got the onsite at LinkedIn and Facebook after only one phone screening. More surprisingly, Google even let me skip their phone screening entirely and schedule my onsite to fill the vacancy after learning I had four onsites coming in the next week. I knew it was going to make it extremely tiring, but hey, nobody can refuse a Google onsite invitation! LinkedIn This is my first onsite and I interviewed at the Sunnyvale location. The office is very neat and people look very professional, as always. The sessions are one hour each. Coding questions are standard, but the ML questions can get a bit tough. That said, I got an email from my HR containing the preparation material which was very helpful, and in the end I did not see anything that was too surprising. I heard the rumor that LinkedIn has the best meals in the Silicon Valley, and from what I saw if it’s not true, it’s not too far from the truth. Acquisition by Microsoft seems to have lifted the financial burden from LinkedIn, and freed them up to do really cool things. New features such as videos and professional advertisements are exciting. As a company focusing on professional development, LinkedIn prioritizes the growth of its own employees. A lot of teams such as ads relevance and feed ranking are expanding, so act quickly if you want to join. Salesforce Einstein Rock star project by rock star team. The team is pretty new and feels very much like a startup. The product is built on the Scala stack, so type safety is a real thing there! Great talks on the Optimus Prime library by Matthew Tovbin at Scala Days Chicago 2017 and Leah McGuire at Spark Summit West 2017. I interviewed at their Palo Alto office. The team has a cohesive culture and work life balance is great there. Everybody is passionate about what they are doing and really enjoys it. With four sessions it is shorter compared to the other onsite interviews, but I wish I could have stayed longer. After the interview Matthew even took me for a walk to the HP garage :) Google Absolutely the industry leader, and nothing to say about it that people don’t already know. But it’s huge. Like, really, really HUGE. It took me 20 minutes to ride a bicycle to meet my friends there. Also lines for food can be too long. Forever a great place for developers. 
I interviewed at one of the many buildings on the Mountain View campus, and I don’t know which one it is because it’s HUGE. My interviewers all look very smart, and once they start talking they are even smarter. It would be very enjoyable to work with these people. One thing that I felt special about Google’s interviews is that the analysis of algorithm complexity is really important. Make sure you really understand what Big O notation means! Airbnb Fast expanding unicorn with a unique culture and arguably the most beautiful office in the Silicon Valley. New products such as Experiences and restaurant reservation, high end niche market, and expansion into China all contribute to a positive prospect. Perfect choice if you are risk tolerant and want a fast growing, pre-IPO experience. Airbnb’s coding interview is a bit unique because you’ll be coding in an IDE instead of whiteboarding, so your code needs to compile and give the right answer. Some problems can get really hard. And they’ve got the one-of-a-kind cross functional interviews. This is how Airbnb takes culture seriously, and being technically excellent doesn’t guarantee a job offer. For me the two cross functionals were really enjoyable. I had casual conversations with the interviewers and we all felt happy at the end of the session. Overall I think Airbnb’s onsite is the hardest due to the difficulty of the problems, longer duration, and unique cross-functional interviews. If you are interested, be sure to understand their culture and core values. Facebook Another giant that is still growing fast, and smaller and faster-paced compared to Google. With its product lines dominating the social network market and big investments in AI and VR, I can only see more growth potential for Facebook in the future. With stars like Yann LeCun and Yangqing Jia, it’s the perfect place if you are interested in machine learning. I interviewed at Building 20, the one with the rooftop garden and ocean view and also where Zuckerberg’s office is located. I’m not sure if the interviewers got instructions, but I didn’t get clear signs whether my solutions were correct, although I believed they were. By noon the prior four days started to take its toll, and I was having a headache. I persisted through the afternoon sessions but felt I didn’t do well at all. I was a bit surprised to learn that I was getting an offer from them as well. Generally I felt people there believe the company’s vision and are proud of what they are building. Being a company with half a trillion market cap and growing, Facebook is a perfect place to grow your career at. This is a big topic that I won’t cover in this post, but I found this article to be very helpful. Some things that I do think are important: All successes start with failures, including interviews. Before I started interviewing for these companies, I failed my interview at Databricks in May. Back in April, Xiangrui contacted me via LinkedIn asking me if I was interested in a position on the Spark MLlib team. I was extremely thrilled because 1) I use Spark and love Scala, 2) Databricks engineers are top-notch, and 3) Spark is revolutionizing the whole big data world. It is an opportunity I couldn’t miss, so I started interviewing after a few days. The bar is very high and the process is quite long, including one pre-screening questionnaire, one phone screening, one coding assignment, and one full onsite. 
I managed to get the onsite invitation, and visited their office in downtown San Francisco, where Treasure Island can be seen. My interviewer were incredibly intelligent yet equally modest. During the interviews I often felt being pushed to the limits. It was fine until one disastrous session, where I totally messed up due to insufficient skills and preparation, and it ended up a fiasco. Xiangrui was very kind and walked me to where I wanted to go after the interview was over, and I really enjoyed talking to him. I got the rejection several days later. It was expected but I felt frustrated for a few days nonetheless. Although I missed the opportunity to work there, I wholeheartedly wish they will continue to make greater impact and achievements. From the first interview in May to finally accepting the job offer in late September, my first career change was long and not easy. It was difficult for me to prepare because I needed to keep doing well at my current job. For several weeks I was on a regular schedule of preparing for the interview till 1am, getting up at 8:30am the next day and fully devoting myself to another day at work. Interviewing at five companies in five days was also highly stressful and risky, and I don’t recommend doing it unless you have a very tight schedule. But it does give you a good advantage during negotiation should you secure multiple offers. I’d like to thank all my recruiters who patiently walked me through the process, the people who spend their precious time talking to me, and all the companies that gave me the opportunities to interview and extended me offers. Lastly but most importantly, I want to thank my family for their love and support — my parents for watching me taking the first and every step, my dear wife for everything she has done for me, and my daughter for her warming smile. Thanks for reading through this long post. You can find me on LinkedIn or Twitter. Xiaohan Zeng 10/22/17 PS: Since the publication of this post, it has (unexpectedly) received some attention. I would like to thank everybody for the congratulations and shares, and apologize for not being able to respond to each of them. This post has been translated into some other languages: It has been reposted in Tech In Asia. Breaking Into Startups invited me to a live video streaming, together with Sophia Ciocca. CoverShr did a short QnA with me. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Critical Mind & Romantic Heart " Gil Fewster,3.3K,5,https://medium.freecodecamp.org/the-mind-blowing-ai-announcement-from-google-that-you-probably-missed-2ffd31334805?source=tag_archive---------5----------------,The mind-blowing AI announcement from Google that you probably missed.,"Disclaimer: I’m not an expert in neural networks or machine learning. Since originally writing this article, many people with far more expertise in these fields than myself have indicated that, while impressive, what Google have achieved is evolutionary, not revolutionary. In the very least, it’s fair to say that I’m guilty of anthropomorphising in parts of the text. I’ve left the article’s content unchanged, because I think it’s interesting to compare the gut reaction I had with the subsequent comments of experts in the field. I strongly encourage readers to browse the comments after reading the article for some perspectives more sober and informed than my own. In the closing weeks of 2016, Google published an article that quietly sailed under most people’s radars. 
Which is a shame, because it may just be the most astonishing article about machine learning that I read last year. Don’t feel bad if you missed it. Not only was the article competing with the pre-Christmas rush that most of us were navigating — it was also tucked away on Google’s Research Blog, beneath the geektastic headline Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System. This doesn’t exactly scream must read, does it? Especially when you’ve got projects to wind up, gifts to buy, and family feuds to be resolved — all while the advent calendar relentlessly counts down the days until Christmas like some kind of chocolate-filled Yuletide doomsday clock. Luckily, I’m here to bring you up to speed. Here’s the deal. Up until September of last year, Google Translate used phrase-based translation. It basically did the same thing you and I do when we look up key words and phrases in our Lonely Planet language guides. It’s effective enough, and blisteringly fast compared to awkwardly thumbing your way through a bunch of pages looking for the French equivalent of “please bring me all of your cheese and don’t stop until I fall over.” But it lacks nuance. Phrase-based translation is a blunt instrument. It does the job well enough to get by. But mapping roughly equivalent words and phrases without an understanding of linguistic structures can only produce crude results. This approach is also limited by the extent of an available vocabulary. Phrase-based translation has no capacity to make educated guesses at words it doesn’t recognize, and can’t learn from new input. All that changed in September, when Google gave their translation tool a new engine: the Google Neural Machine Translation system (GNMT). This new engine comes fully loaded with all the hot 2016 buzzwords, like neural network and machine learning. The short version is that Google Translate got smart. It developed the ability to learn from the people who used it. It learned how to make educated guesses about the content, tone, and meaning of phrases based on the context of other words and phrases around them. And — here’s the bit that should make your brain explode — it got creative. Google Translate invented its own language to help it translate more effectively. What’s more, nobody told it to. It didn’t develop a language (or interlingua, as Google call it) because it was coded to. It developed a new language because the software determined over time that this was the most efficient way to solve the problem of translation. Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks. (I’ve added a correction/retraction of this paragraph in the notes) To understand what’s going on, we need to understand what zero-shot translation capability is. Here’s Google’s Mike Schuster, Nikhil Thorat, and Melvin Johnson from the original blog post: Here you can see an advantage of Google’s new neural machine over the old phrase-based approach. The GMNT is able to learn how to translate between two languages without being explicitly taught. This wouldn’t be possible in a phrase-based model, where translation is dependent upon an explicit dictionary to map words and phrases between each pair of languages being translated. 
And this leads the Google engineers onto that truly astonishing discovery of creation: So there you have it. In the last weeks of 2016, as journos around the world started penning their “was this the worst year in living memory” thinkpieces, Google engineers were quietly documenting a genuinely astonishing breakthrough in software engineering and linguistics. I just thought maybe you’d want to know. Ok, to really understand what’s going on we probably need multiple computer science and linguistics degrees. I’m just barely scraping the surface here. If you’ve got time to get a few degrees (or if you’ve already got them) please drop me a line and explain it all me to. Slowly. Update 1: in my excitement, it’s fair to say that I’ve exaggerated the idea of this as an ‘intelligent’ system — at least so far as we would think about human intelligence and decision making. Make sure you read Chris McDonald’s comment after the article for a more sober perspective. Update 2: Nafrondel’s excellent, detailed reply is also a must read for an expert explanation of how neural networks function. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. A tinkerer Our community publishes stories worth reading on development, design, and data science. " Adam Geitgey,10.4K,15,https://medium.com/@ageitgey/machine-learning-is-fun-part-2-a26a10b68df3?source=tag_archive---------6----------------,Machine Learning is Fun! Part 2 – Adam Geitgey – Medium,"Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in Italiano, Español, Français, Türkçe, Русский, 한국어 Português, فارسی, Tiếng Việt or 普通话. In Part 1, we said that Machine Learning is using generic algorithms to tell you something interesting about your data without writing any code specific to the problem you are solving. (If you haven’t already read part 1, read it now!). This time, we are going to see one of these generic algorithms do something really cool — create video game levels that look like they were made by humans. We’ll build a neural network, feed it existing Super Mario levels and watch new ones pop out! Just like Part 1, this guide is for anyone who is curious about machine learning but has no idea where to start. The goal is be accessible to anyone — which means that there’s a lot of generalizations and we skip lots of details. But who cares? If this gets anyone more interested in ML, then mission accomplished. Back in Part 1, we created a simple algorithm that estimated the value of a house based on its attributes. Given data about a house like this: We ended up with this simple estimation function: In other words, we estimated the value of the house by multiplying each of its attributes by a weight. Then we just added those numbers up to get the house’s value. Instead of using code, let’s represent that same function as a simple diagram: However this algorithm only works for simple problems where the result has a linear relationship with the input. What if the truth behind house prices isn’t so simple? For example, maybe the neighborhood matters a lot for big houses and small houses but doesn’t matter at all for medium-sized houses. How could we capture that kind of complicated detail in our model? To be more clever, we could run this algorithm multiple times with different of weights that each capture different edge cases: Now we have four different price estimates. 
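To make that concrete, here's a tiny sketch of the same weighted-sum estimate run with four different sets of weights. The attribute names and all the numbers are made up for illustration; they aren't from any trained model.

    def estimate_price(house, weights, bias):
        # Multiply each attribute by a weight and add everything up.
        return (house["sqft"] * weights[0]
                + house["bedrooms"] * weights[1]
                + house["neighborhood_score"] * weights[2]
                + bias)

    house = {"sqft": 2000, "bedrooms": 3, "neighborhood_score": 7}

    # Four different sets of weights, each one trying to capture a different edge case.
    weight_sets = [
        ([ 92.0, 10000.0, 5000.0], 20000.0),
        ([110.0,  4000.0, 9000.0], 10000.0),
        ([ 80.0, 15000.0, 2000.0], 50000.0),
        ([100.0,  8000.0, 7000.0],     0.0),
    ]

    estimates = [estimate_price(house, w, b) for w, b in weight_sets]
    print(estimates)   # four different price guesses for the same house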
Let’s combine those four price estimates into one final estimate. We’ll run them through the same algorithm again (but using another set of weights)! Our new Super Answer combines the estimates from our four different attempts to solve the problem. Because of this, it can model more cases than we could capture in one simple model. Let’s combine our four attempts to guess into one big diagram: This is a neural network! Each node knows how to take in a set of inputs, apply weights to them, and calculate an output value. By chaining together lots of these nodes, we can model complex functions. There’s a lot that I’m skipping over to keep this brief (including feature scaling and the activation function), but the most important part is that these basic ideas click: It’s just like LEGO! We can’t model much with one single LEGO block, but we can model anything if we have enough basic LEGO blocks to stick together: The neural network we’ve seen always returns the same answer when you give it the same inputs. It has no memory. In programming terms, it’s a stateless algorithm. In many cases (like estimating the price of house), that’s exactly what you want. But the one thing this kind of model can’t do is respond to patterns in data over time. Imagine I handed you a keyboard and asked you to write a story. But before you start, my job is to guess the very first letter that you will type. What letter should I guess? I can use my knowledge of English to increase my odds of guessing the right letter. For example, you will probably type a letter that is common at the beginning of words. If I looked at stories you wrote in the past, I could narrow it down further based on the words you usually use at the beginning of your stories. Once I had all that data, I could use it to build a neural network to model how likely it is that you would start with any given letter. Our model might look like this: But let’s make the problem harder. Let’s say I need to guess the next letter you are going to type at any point in your story. This is a much more interesting problem. Let’s use the first few words of Ernest Hemingway’s The Sun Also Rises as an example: What letter is going to come next? You probably guessed ’n’ — the word is probably going to be boxing. We know this based on the letters we’ve already seen in the sentence and our knowledge of common words in English. Also, the word ‘middleweight’ gives us an extra clue that we are talking about boxing. In other words, it’s easy to guess the next letter if we take into account the sequence of letters that came right before it and combine that with our knowledge of the rules of English. To solve this problem with a neural network, we need to add state to our model. Each time we ask our neural network for an answer, we also save a set of our intermediate calculations and re-use them the next time as part of our input. That way, our model will adjust its predictions based on the input that it has seen recently. Keeping track of state in our model makes it possible to not just predict the most likely first letter in the story, but to predict the most likely next letter given all previous letters. This is the basic idea of a Recurrent Neural Network. We are updating the network each time we use it. This allows it to update its predictions based on what it saw most recently. It can even model patterns over time as long as we give it enough of a memory. Predicting the next letter in a story might seem pretty useless. What’s the point? 
One cool use might be auto-predict for a mobile phone keyboard: But what if we took this idea to the extreme? What if we asked the model to predict the next most likely character over and over — forever? We’d be asking it to write a complete story for us! We saw how we could guess the next letter in Hemingway’s sentence. Let’s try generating a whole story in the style of Hemingway. To do this, we are going to use the Recurrent Neural Network implementation that Andrej Karpathy wrote. Andrej is a Deep-Learning researcher at Stanford and he wrote an excellent introduction to generating text with RNNs, You can view all the code for the model on github. We’ll create our model from the complete text of The Sun Also Rises — 362,239 characters using 84 unique letters (including punctuation, uppercase/lowercase, etc). This data set is actually really small compared to typical real-world applications. To generate a really good model of Hemingway’s style, it would be much better to have at several times as much sample text. But this is good enough to play around with as an example. As we just start to train the RNN, it’s not very good at predicting letters. Here’s what it generates after a 100 loops of training: You can see that it has figured out that sometimes words have spaces between them, but that’s about it. After about 1000 iterations, things are looking more promising: The model has started to identify the patterns in basic sentence structure. It’s adding periods at the ends of sentences and even quoting dialog. A few words are recognizable, but there’s also still a lot of nonsense. But after several thousand more training iterations, it looks pretty good: At this point, the algorithm has captured the basic pattern of Hemingway’s short, direct dialog. A few sentences even sort of make sense. Compare that with some real text from the book: Even by only looking for patterns one character at a time, our algorithm has reproduced plausible-looking prose with proper formatting. That is kind of amazing! We don’t have to generate text completely from scratch, either. We can seed the algorithm by supplying the first few letters and just let it find the next few letters. For fun, let’s make a fake book cover for our imaginary book by generating a new author name and a new title using the seed text of “Er”, “He”, and “The S”: Not bad! But the really mind-blowing part is that this algorithm can figure out patterns in any sequence of data. It can easily generate real-looking recipes or fake Obama speeches. But why limit ourselves human language? We can apply this same idea to any kind of sequential data that has a pattern. In 2015, Nintendo released Super Mario MakerTM for the Wii U gaming system. This game lets you draw out your own Super Mario Brothers levels on the gamepad and then upload them to the internet so you friends can play through them. You can include all the classic power-ups and enemies from the original Mario games in your levels. It’s like a virtual LEGO set for people who grew up playing Super Mario Brothers. Can we use the same model that generated fake Hemingway text to generate fake Super Mario Brothers levels? First, we need a data set for training our model. Let’s take all the outdoor levels from the original Super Mario Brothers game released in 1985: This game has 32 levels and about 70% of them have the same outdoor style. So we’ll stick to those. 
To get the designs for each level, I took an original copy of the game and wrote a program to pull the level designs out of the game’s memory. Super Mario Bros. is a 30-year-old game and there are lots of resources online that help you figure out how the levels were stored in the game’s memory. Extracting level data from an old video game is a fun programming exercise that you should try sometime. Here’s the first level from the game (which you probably remember if you ever played it): If we look closely, we can see the level is made of a simple grid of objects: We could just as easily represent this grid as a sequence of characters with one character representing each object: We’ve replaced each object in the level with a letter: ...and so on, using a different letter for each different kind of object in the level. I ended up with text files that looked like this: Looking at the text file, you can see that Mario levels don’t really have much of a pattern if you read them line-by-line: The patterns in a level really emerge when you think of the level as a series of columns: So in order for the algorithm to find the patterns in our data, we need to feed the data in column-by-column. Figuring out the most effective representation of your input data (called feature selection) is one of the keys of using machine learning algorithms well. To train the model, I needed to rotate my text files by 90 degrees. This made sure the characters were fed into the model in an order where a pattern would more easily show up: Just like we saw when creating the model of Hemingway’s prose, a model improves as we train it. After a little training, our model is generating junk: It sort of has an idea that ‘-’s and ‘=’s should show up a lot, but that’s about it. It hasn’t figured out the pattern yet. After several thousand iterations, it’s starting to look like something: The model has almost figured out that each line should be the same length. It has even started to figure out some of the logic of Mario: The pipes in mario are always two blocks wide and at least two blocks high, so the “P”s in the data should appear in 2x2 clusters. That’s pretty cool! With a lot more training, the model gets to the point where it generates perfectly valid data: Let’s sample an entire level’s worth of data from our model and rotate it back horizontal: This data looks great! There are several awesome things to notice: Finally, let’s take this level and recreate it in Super Mario Maker: Play it yourself! If you have Super Mario Maker, you can play this level by bookmarking it online or by looking it up using level code 4AC9–0000–0157-F3C3. The recurrent neural network algorithm we used to train our model is the same kind of algorithm used by real-world companies to solve hard problems like speech detection and language translation. What makes our model a ‘toy’ instead of cutting-edge is that our model is generated from very little data. There just aren’t enough levels in the original Super Mario Brothers game to provide enough data for a really good model. If we could get access to the hundreds of thousands of user-created Super Mario Maker levels that Nintendo has, we could make an amazing model. But we can’t — because Nintendo won’t let us have them. Big companies don’t give away their data for free. As machine learning becomes more important in more industries, the difference between a good program and a bad program will be how much data you have to train your models. 
That’s why companies like Google and Facebook need your data so badly! For example, Google recently open sourced TensorFlow, its software toolkit for building large-scale machine learning applications. It was a pretty big deal that Google gave away such important, capable technology for free. This is the same stuff that powers Google Translate. But without Google’s massive trove of data in every language, you can’t create a competitor to Google Translate. Data is what gives Google its edge. Think about that the next time you open up your Google Maps Location History or Facebook Location History and notice that it stores every place you’ve ever been. In machine learning, there’s never a single way to solve a problem. You have limitless options when deciding how to pre-process your data and which algorithms to use. Often combining multiple approaches will give you better results than any single approach. Readers have sent me links to other interesting approaches to generating Super Mario levels: If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun Part 3! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it. " David Venturi,10.6K,20,https://medium.freecodecamp.org/every-single-machine-learning-course-on-the-internet-ranked-by-your-reviews-3c4a7b8026c0?source=tag_archive---------7----------------,"Every single Machine Learning course on the internet, ranked by your reviews","A year and a half ago, I dropped out of one of the best computer science programs in Canada. I started creating my own data science master’s program using online resources. I realized that I could learn everything I needed through edX, Coursera, and Udacity instead. And I could learn it faster, more efficiently, and for a fraction of the cost. I’m almost finished now. I’ve taken many data science-related courses and audited portions of many more. I know the options out there, and what skills are needed for learners preparing for a data analyst or data scientist role. So I started creating a review-driven guide that recommends the best courses for each subject within data science. For the first guide in the series, I recommended a few coding classes for the beginner data scientist. Then it was statistics and probability classes. Then introductions to data science. Also, data visualization. For this guide, I spent a dozen hours trying to identify every online machine learning course offered as of May 2017, extracting key bits of information from their syllabi and reviews, and compiling their ratings. My end goal was to identify the three best courses available and present them to you, below. For this task, I turned to none other than the open source Class Central community, and its database of thousands of course ratings and reviews. Since 2011, Class Central founder Dhawal Shah has kept a closer eye on online courses than arguably anyone else in the world. Dhawal personally helped me assemble this list of resources. Each course must fit three criteria: We believe we covered every notable course that fits the above criteria. 
Since there are seemingly hundreds of courses on Udemy, we chose to consider the most-reviewed and highest-rated ones only. There’s always a chance that we missed something, though. So please let us know in the comments section if we left a good course out. We compiled average ratings and number of reviews from Class Central and other review sites to calculate a weighted average rating for each course. We read text reviews and used this feedback to supplement the numerical ratings. We made subjective syllabus judgment calls based on three factors: A popular definition originates from Arthur Samuel in 1959: machine learning is a subfield of computer science that gives “computers the ability to learn without being explicitly programmed.” In practice, this means developing computer programs that can make predictions based on data. Just as humans can learn from experience, so can computers, where data = experience. A machine learning workflow is the process required for carrying out a machine learning project. Though individual projects can differ, most workflows share several common tasks: problem evaluation, data exploration, data preprocessing, model training/testing/deployment, etc. Below you’ll find helpful visualization of these core steps: The ideal course introduces the entire process and provides interactive examples, assignments, and/or quizzes where students can perform each task themselves. First off, let’s define deep learning. Here is a succinct description: As would be expected, portions of some of the machine learning courses contain deep learning content. I chose not to include deep learning-only courses, however. If you are interested in deep learning specifically, we’ve got you covered with the following article: My top three recommendations from that list would be: Several courses listed below ask students to have prior programming, calculus, linear algebra, and statistics experience. These prerequisites are understandable given that machine learning is an advanced discipline. Missing a few subjects? Good news! Some of this experience can be acquired through our recommendations in the first two articles (programming, statistics) of this Data Science Career Guide. Several top-ranked courses below also provide gentle calculus and linear algebra refreshers and highlight the aspects most relevant to machine learning for those less familiar. Stanford University’s Machine Learning on Coursera is the clear current winner in terms of ratings, reviews, and syllabus fit. Taught by the famous Andrew Ng, Google Brain founder and former chief scientist at Baidu, this was the class that sparked the founding of Coursera. It has a 4.7-star weighted average rating over 422 reviews. Released in 2011, it covers all aspects of the machine learning workflow. Though it has a smaller scope than the original Stanford class upon which it is based, it still manages to cover a large number of techniques and algorithms. The estimated timeline is eleven weeks, with two weeks dedicated to neural networks and deep learning. Free and paid options are available. Ng is a dynamic yet gentle instructor with a palpable experience. He inspires confidence, especially when sharing practical implementation tips and warnings about common pitfalls. A linear algebra refresher is provided and Ng highlights the aspects of calculus most relevant to machine learning. Evaluation is automatic and is done via multiple choice quizzes that follow each lesson and programming assignments. 
The assignments (there are eight of them) can be completed in MATLAB or Octave, which is an open-source version of MATLAB. Ng explains his language choice: Though Python and R are likely more compelling choices in 2017 with the increased popularity of those languages, reviewers note that that shouldn’t stop you from taking the course. A few prominent reviewers noted the following: Columbia University’s Machine Learning is a relatively new offering that is part of their Artificial Intelligence MicroMasters on edX. Though it is newer and doesn’t have a large number of reviews, the ones that it does have are exceptionally strong. Professor John Paisley is noted as brilliant, clear, and clever. It has a 4.8-star weighted average rating over 10 reviews. The course also covers all aspects of the machine learning workflow and more algorithms than the above Stanford offering. Columbia’s is a more advanced introduction, with reviewers noting that students should be comfortable with the recommended prerequisites (calculus, linear algebra, statistics, probability, and coding). Quizzes (11), programming assignments (4), and a final exam are the modes of evaluation. Students can use either Python, Octave, or MATLAB to complete the assignments. The course’s total estimated timeline is eight to ten hours per week over twelve weeks. It is free with a verified certificate available for purchase. Below are a few of the aforementioned sparkling reviews: Machine Learning A-ZTM on Udemy is an impressively detailed offering that provides instruction in both Python and R, which is rare and can’t be said for any of the other top courses. It has a 4.5-star weighted average rating over 8,119 reviews, which makes it the most reviewed course of the ones considered. It covers the entire machine learning workflow and an almost ridiculous (in a good way) number of algorithms through 40.5 hours of on-demand video. The course takes a more applied approach and is lighter math-wise than the above two courses. Each section starts with an “intuition” video from Eremenko that summarizes the underlying theory of the concept being taught. de Ponteves then walks through implementation with separate videos for both Python and R. As a “bonus,” the course includes Python and R code templates for students to download and use on their own projects. There are quizzes and homework challenges, though these aren’t the strong points of the course. Eremenko and the SuperDataScience team are revered for their ability to “make the complex simple.” Also, the prerequisites listed are “just some high school mathematics,” so this course might be a better option for those daunted by the Stanford and Columbia offerings. A few prominent reviewers noted the following: Our #1 pick had a weighted average rating of 4.7 out of 5 stars over 422 reviews. Let’s look at the other alternatives, sorted by descending rating. A reminder that deep learning-only courses are not included in this guide — you can find those here. The Analytics Edge (Massachusetts Institute of Technology/edX): More focused on analytics in general, though it does cover several machine learning topics. Uses R. Strong narrative that leverages familiar real-world examples. Challenging. Ten to fifteen hours per week over twelve weeks. Free with a verified certificate available for purchase. It has a 4.9-star weighted average rating over 214 reviews. 
Python for Data Science and Machine Learning Bootcamp (Jose Portilla/Udemy): Has large chunks of machine learning content, but covers the whole data science process. More of a very detailed intro to Python. Amazing course, though not ideal for the scope of this guide. 21.5 hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.6-star weighted average rating over 3316 reviews. Data Science and Machine Learning Bootcamp with R (Jose Portilla/Udemy): The comments for Portilla’s above course apply here as well, except for R. 17.5 hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.6-star weighted average rating over 1317 reviews. Machine Learning Series (Lazy Programmer Inc./Udemy): Taught by a data scientist/big data engineer/full stack software engineer with an impressive resume, Lazy Programmer currently has a series of 16 machine learning-focused courses on Udemy. In total, the courses have 5000+ ratings and almost all of them have 4.6 stars. A useful course ordering is provided in each individual course’s description. Uses Python. Cost varies depending on Udemy discounts, which are frequent. Machine Learning (Georgia Tech/Udacity): A compilation of what was three separate courses: Supervised, Unsupervised and Reinforcement Learning. Part of Udacity’s Machine Learning Engineer Nanodegree and Georgia Tech’s Online Master’s Degree (OMS). Bite-sized videos, as is Udacity’s style. Friendly professors. Estimated timeline of four months. Free. It has a 4.56-star weighted average rating over 9 reviews. Implementing Predictive Analytics with Spark in Azure HDInsight (Microsoft/edX): Introduces the core concepts of machine learning and a variety of algorithms. Leverages several big data-friendly tools, including Apache Spark, Scala, and Hadoop. Uses both Python and R. Four hours per week over six weeks. Free with a verified certificate available for purchase. It has a 4.5-star weighted average rating over 6 reviews. Data Science and Machine Learning with Python — Hands On! (Frank Kane/Udemy): Uses Python. Kane has nine years of experience at Amazon and IMDb. Nine hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.5-star weighted average rating over 4139 reviews. Scala and Spark for Big Data and Machine Learning (Jose Portilla/Udemy): “Big data” focus, specifically on implementation in Scala and Spark. Ten hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.5-star weighted average rating over 607 reviews. Machine Learning Engineer Nanodegree (Udacity): Udacity’s flagship Machine Learning program, which features a best-in-class project review system and career support. The program is a compilation of several individual Udacity courses, which are free. Co-created by Kaggle. Estimated timeline of six months. Currently costs $199 USD per month with a 50% tuition refund available for those who graduate within 12 months. It has a 4.5-star weighted average rating over 2 reviews. Learning From Data (Introductory Machine Learning) (California Institute of Technology/edX): Enrollment is currently closed on edX, but is also available via CalTech’s independent platform (see below). It has a 4.49-star weighted average rating over 42 reviews. 
Learning From Data (Introductory Machine Learning) (Yaser Abu-Mostafa/California Institute of Technology): “A real Caltech course, not a watered-down version.” Reviews note it is excellent for understanding machine learning theory. The professor, Yaser Abu-Mostafa, is popular among students and also wrote the textbook upon which this course is based. Videos are taped lectures (with lecture slides picture-in-picture) uploaded to YouTube. Homework assignments are .pdf files. The course experience for online students isn’t as polished as the top three recommendations. It has a 4.43-star weighted average rating over 7 reviews. Mining Massive Datasets (Stanford University): Machine learning with a focus on “big data.” Introduces modern distributed file systems and MapReduce. Ten hours per week over seven weeks. Free. It has a 4.4-star weighted average rating over 30 reviews. AWS Machine Learning: A Complete Guide With Python (Chandra Lingam/Udemy): A unique focus on cloud-based machine learning and specifically Amazon Web Services. Uses Python. Nine hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.4-star weighted average rating over 62 reviews. Introduction to Machine Learning & Face Detection in Python (Holczer Balazs/Udemy): Uses Python. Eight hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.4-star weighted average rating over 162 reviews. StatLearning: Statistical Learning (Stanford University): Based on the excellent textbook, “An Introduction to Statistical Learning, with Applications in R” and taught by the professors who wrote it. Reviewers note that the MOOC isn’t as good as the book, citing “thin” exercises and mediocre videos. Five hours per week over nine weeks. Free. It has a 4.35-star weighted average rating over 84 reviews. Machine Learning Specialization (University of Washington/Coursera): Great courses, but the last two classes (including the capstone project) were canceled. Reviewers note that this series is more digestible (read: easier for those without strong technical backgrounds) than other top machine learning courses (e.g. Stanford’s or Caltech’s). Be aware that the series is incomplete, with the recommender systems and deep learning courses and a summary missing. Free and paid options available. It has a 4.31-star weighted average rating over 80 reviews. From 0 to 1: Machine Learning, NLP & Python-Cut to the Chase (Loony Corn/Udemy): “A down-to-earth, shy but confident take on machine learning techniques.” Taught by a four-person team with decades of industry experience between them. Uses Python. Cost varies depending on Udemy discounts, which are frequent. It has a 4.2-star weighted average rating over 494 reviews. Principles of Machine Learning (Microsoft/edX): Uses R, Python, and Microsoft Azure Machine Learning. Part of the Microsoft Professional Program Certificate in Data Science. Three to four hours per week over six weeks. Free with a verified certificate available for purchase. It has a 4.09-star weighted average rating over 11 reviews. Big Data: Statistical Inference and Machine Learning (Queensland University of Technology/FutureLearn): A nice, brief exploratory machine learning course with a focus on big data. Covers a few tools like R, H2O Flow, and WEKA. Only three weeks in duration at a recommended two hours per week, but one reviewer noted that six hours per week would be more appropriate. Free and paid options available. It has a 4-star weighted average rating over 4 reviews.
Genomic Data Science and Clustering (Bioinformatics V) (University of California, San Diego/Coursera): For those interested in the intersection of computer science and biology and how it represents an important frontier in modern science. Focuses on clustering and dimensionality reduction. Part of UCSD’s Bioinformatics Specialization. Free and paid options available. It has a 4-star weighted average rating over 3 reviews. Intro to Machine Learning (Udacity): Prioritizes topic breadth and practical tools (in Python) over depth and theory. The instructors, Sebastian Thrun and Katie Malone, make this class so fun. Consists of bite-sized videos and quizzes followed by a mini-project for each lesson. Currently part of Udacity’s Data Analyst Nanodegree. Estimated timeline of ten weeks. Free. It has a 3.95-star weighted average rating over 19 reviews. Machine Learning for Data Analysis (Wesleyan University/Coursera): A brief intro to machine learning and a few select algorithms. Covers decision trees, random forests, lasso regression, and k-means clustering. Part of Wesleyan’s Data Analysis and Interpretation Specialization. Estimated timeline of four weeks. Free and paid options available. It has a 3.6-star weighted average rating over 5 reviews. Programming with Python for Data Science (Microsoft/edX): Produced by Microsoft in partnership with Coding Dojo. Uses Python. Eight hours per week over six weeks. Free and paid options available. It has a 3.46-star weighted average rating over 37 reviews. Machine Learning for Trading (Georgia Tech/Udacity): Focuses on applying probabilistic machine learning approaches to trading decisions. Uses Python. Part of Udacity’s Machine Learning Engineer Nanodegree and Georgia Tech’s Online Master’s Degree (OMS). Estimated timeline of four months. Free. It has a 3.29-star weighted average rating over 14 reviews. Practical Machine Learning (Johns Hopkins University/Coursera): A brief, practical introduction to a number of machine learning algorithms. Several one/two-star reviews express a variety of concerns. Part of JHU’s Data Science Specialization. Four to nine hours per week over four weeks. Free and paid options available. It has a 3.11-star weighted average rating over 37 reviews. Machine Learning for Data Science and Analytics (Columbia University/edX): Introduces a wide range of machine learning topics. Some passionate negative reviews with concerns including content choices, a lack of programming assignments, and uninspiring presentation. Seven to ten hours per week over five weeks. Free with a verified certificate available for purchase. It has a 2.74-star weighted average rating over 36 reviews. Recommender Systems Specialization (University of Minnesota/Coursera): Strong focus on one specific type of machine learning — recommender systems. A four-course specialization plus a capstone project, which is a case study. Taught using LensKit (an open-source toolkit for recommender systems). Free and paid options available. It has a 2-star weighted average rating over 2 reviews. Machine Learning With Big Data (University of California, San Diego/Coursera): Terrible reviews that highlight poor instruction and evaluation. Some noted it took them mere hours to complete the whole course. Part of UCSD’s Big Data Specialization. Free and paid options available. It has a 1.86-star weighted average rating over 14 reviews. Practical Predictive Analytics: Models and Methods (University of Washington/Coursera): A brief intro to core machine learning concepts.
One reviewer noted that there was a lack of quizzes and that the assignments were not challenging. Part of UW’s Data Science at Scale Specialization. Six to eight hours per week over four weeks. Free and paid options available. It has a 1.75-star weighted average rating over 4 reviews. The following courses had one or no reviews as of May 2017. Machine Learning for Musicians and Artists (Goldsmiths, University of London/Kadenze): Unique. Students learn algorithms, software tools, and machine learning best practices to make sense of human gesture, musical audio, and other real-time data. Seven sessions in length. Audit (free) and premium ($10 USD per month) options available. It has one 5-star review. Applied Machine Learning in Python (University of Michigan/Coursera): Taught using Python and the scikit learn toolkit. Part of the Applied Data Science with Python Specialization. Scheduled to start May 29th. Free and paid options available. Applied Machine Learning (Microsoft/edX): Taught using various tools, including Python, R, and Microsoft Azure Machine Learning (note: Microsoft produces the course). Includes hands-on labs to reinforce the lecture content. Three to four hours per week over six weeks. Free with a verified certificate available for purchase. Machine Learning with Python (Big Data University): Taught using Python. Targeted towards beginners. Estimated completion time of four hours. Big Data University is affiliated with IBM. Free. Machine Learning with Apache SystemML (Big Data University): Taught using Apache SystemML, which is a declarative style language designed for large-scale machine learning. Estimated completion time of eight hours. Big Data University is affiliated with IBM. Free. Machine Learning for Data Science (University of California, San Diego/edX): Doesn’t launch until January 2018. Programming examples and assignments are in Python, using Jupyter notebooks. Eight hours per week over ten weeks. Free with a verified certificate available for purchase. Introduction to Analytics Modeling (Georgia Tech/edX): The course advertises R as its primary programming tool. Five to ten hours per week over ten weeks. Free with a verified certificate available for purchase. Predictive Analytics: Gaining Insights from Big Data (Queensland University of Technology/FutureLearn): Brief overview of a few algorithms. Uses Hewlett Packard Enterprise’s Vertica Analytics platform as an applied tool. Start date to be announced. Two hours per week over four weeks. Free with a Certificate of Achievement available for purchase. Introducción al Machine Learning (Universitas Telefónica/Miríada X): Taught in Spanish. An introduction to machine learning that covers supervised and unsupervised learning. A total of twenty estimated hours over four weeks. Machine Learning Path Step (Dataquest): Taught in Python using Dataquest’s interactive in-browser platform. Multiple guided projects and a “plus” project where you build your own machine learning system using your own data. Subscription required. The following six courses are offered by DataCamp. DataCamp’s hybrid teaching style leverages video and text-based instruction with lots of examples through an in-browser code editor. A subscription is required for full access to each course. Introduction to Machine Learning (DataCamp): Covers classification, regression, and clustering algorithms. Uses R. Fifteen videos and 81 exercises with an estimated timeline of six hours. 
Supervised Learning with scikit-learn (DataCamp): Uses Python and scikit-learn. Covers classification and regression algorithms. Seventeen videos and 54 exercises with an estimated timeline of four hours. Unsupervised Learning in R (DataCamp): Provides a basic introduction to clustering and dimensionality reduction in R. Sixteen videos and 49 exercises with an estimated timeline of four hours. Machine Learning Toolbox (DataCamp): Teaches the “big ideas” in machine learning. Uses R. 24 videos and 88 exercises with an estimated timeline of four hours. Machine Learning with the Experts: School Budgets (DataCamp): A case study from a machine learning competition on DrivenData. Involves building a model to automatically classify items in a school’s budget. DataCamp’s “Supervised Learning with scikit-learn” is a prerequisite. Fifteen videos and 51 exercises with an estimated timeline of four hours. Unsupervised Learning in Python (DataCamp): Covers a variety of unsupervised learning algorithms using Python, scikit-learn, and scipy. The course ends with students building a recommender system to recommend popular musical artists. Thirteen videos and 52 exercises with an estimated timeline of four hours. Machine Learning (Tom Mitchell/Carnegie Mellon University): Carnegie Mellon’s graduate introductory machine learning course. A prerequisite to their second graduate level course, “Statistical Machine Learning.” Taped university lectures with practice problems, homework assignments, and a midterm (all with solutions) posted online. A 2011 version of the course also exists. CMU is one of the best graduate schools for studying machine learning and has a whole department dedicated to ML. Free. Statistical Machine Learning (Larry Wasserman/Carnegie Mellon University): Likely the most advanced course in this guide. A follow-up to Carnegie Mellon’s Machine Learning course. Taped university lectures with practice problems, homework assignments, and a midterm (all with solutions) posted online. Free. Undergraduate Machine Learning (Nando de Freitas/University of British Columbia): An undergraduate machine learning course. Lectures are filmed and put on YouTube with the slides posted on the course website. The course assignments are posted as well (no solutions, though). de Freitas is now a full-time professor at the University of Oxford and receives praise for his teaching abilities in various forums. Graduate version available (see below). Machine Learning (Nando de Freitas/University of British Columbia): A graduate machine learning course. The comments in de Freitas’ undergraduate course (above) apply here as well. This is the fifth of a six-piece series that covers the best online courses for launching yourself into the data science field. We covered programming in the first article, statistics and probability in the second article, intros to data science in the third article, and data visualization in the fourth. The final piece will be a summary of those articles, plus the best online courses for other key topics such as data wrangling, databases, and even software engineering. If you’re looking for a complete list of Data Science online courses, you can find them on Class Central’s Data Science and Big Data subject page. If you enjoyed reading this, check out some of Class Central’s other pieces: If you have suggestions for courses I missed, let me know in the responses! If you found this helpful, click the 💚 so more people will see it here on Medium. 
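A quick note on the "weighted average rating" figures quoted for every course above: they combine each course's per-site average rating with its review count across Class Central and the other review sites. The guide does not publish its exact formula, so the sketch below shows one common choice, weighting each site's average by its number of reviews; the helper name and the figures are invented for illustration only.

```python
# One common way to combine ratings from several review sites into a single
# figure: weight each site's average rating by its review count. This is a
# hedged sketch; the guide's exact aggregation formula isn't published, and
# the numbers below are invented.

def weighted_average_rating(site_ratings):
    """site_ratings: list of (average_rating, num_reviews) pairs, one per review site."""
    total_reviews = sum(n for _, n in site_ratings)
    if total_reviews == 0:
        return None  # nothing to aggregate
    return sum(r * n for r, n in site_ratings) / total_reviews

# Hypothetical course reviewed on two sites: 4.8 over 300 reviews and 4.5 over 100.
print(round(weighted_average_rating([(4.8, 300), (4.5, 100)]), 3))  # 4.725
```

The effect of weighting by review count is that a site with many reviews pulls the combined figure toward its average more strongly than a site with only a handful.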
This is a condensed version of my original article published on Class Central, where I’ve included detailed course syllabi. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Curriculum Lead, Projects @ DataCamp. I created my own data science master’s program. Our community publishes stories worth reading on development, design, and data science. " Michael Jordan,34K,16,https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7?source=tag_archive---------8----------------,Artificial Intelligence — The Revolution Hasn’t Happened Yet,"Artificial Intelligence (AI) is the mantra of the current era. The phrase is intoned by technologists, academicians, journalists and venture capitalists alike. As with many phrases that cross over from technical academic fields into general circulation, there is significant misunderstanding accompanying the use of the phrase. But this is not the classical case of the public not understanding the scientists — here the scientists are often as befuddled as the public. The idea that our era is somehow seeing the emergence of an intelligence in silicon that rivals our own entertains all of us — enthralling us and frightening us in equal measure. And, unfortunately, it distracts us. There is a different narrative that one can tell about the current era. Consider the following story, which involves humans, computers, data and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to 1 in 20.” She further let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis. But amniocentesis was risky — the risk of killing the fetus during the procedure was roughly 1 in 300. Being a statistician, I determined to find out where these numbers were coming from. To cut a long story short, I discovered that a statistical analysis had been done a decade previously in the UK, where these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. But I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I went back to tell the geneticist that I believed that the white spots were likely false positives — that they were literally “white noise.” She said “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; it’s when the new machine arrived.” We didn’t do the amniocentesis, and a healthy girl was born a few months later. But the episode troubled me, particularly after a back-of-the-envelope calculation convinced me that many thousands of people had gotten that diagnosis that same day worldwide, that many of them had opted for amniocentesis, and that a number of babies had died needlessly. And this happened day after day until it somehow got fixed. The problem that this episode revealed wasn’t about my individual medical care; it was about a medical system that measured variables and outcomes in various places and times, conducted statistical analyses, and made use of the results in other places and times. 
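The back-of-the-envelope reasoning above can be made concrete with a small Bayes calculation. Apart from the quoted 1-in-20 risk, every number below is an illustrative assumption made for this sketch; the baseline prevalence and the tenfold false-positive factor are not from the article or the UK study.

```python
# A back-of-the-envelope Bayes calculation in the spirit of the anecdote above.
# All numbers here except the quoted 1-in-20 posterior are illustrative
# assumptions for the sketch, not figures from the original study or the author.

prior = 1 / 700            # assumed baseline prevalence of the condition
posterior_quoted = 1 / 20  # risk quoted to the parents after the marker was seen

# Likelihood ratio implied by the quoted posterior, via Bayes' rule in odds form:
# posterior_odds = prior_odds * likelihood_ratio
prior_odds = prior / (1 - prior)
posterior_odds = posterior_quoted / (1 - posterior_quoted)
implied_lr = posterior_odds / prior_odds
print(f"Implied likelihood ratio of the marker: {implied_lr:.1f}")

# If a higher-resolution machine flags 'white spots' in, say, 10x as many
# unaffected fetuses as the machine used in the original study, the marker's
# false-positive rate rises roughly 10x and its likelihood ratio falls accordingly.
degraded_lr = implied_lr / 10
new_posterior_odds = prior_odds * degraded_lr
new_posterior = new_posterior_odds / (1 + new_posterior_odds)
print(f"Risk with the noisier marker: about 1 in {round(1 / new_posterior)}")
```

In other words, the same white spots, produced by a machine with different error characteristics, support a much weaker conclusion than 1 in 20; that is exactly the provenance issue raised next.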
The problem had to do not just with data analysis per se, but with what database researchers call “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight. I’m also a computer scientist, and it occurred to me that the principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics, and taking into account human utilities, were nowhere to be found in my education. And it occurred to me that the development of such principles — which will be needed not only in the medical domain but also in domains such as commerce, transportation and education — were at least as important as those of building AI systems that can dazzle us with their game-playing or sensorimotor skills. Whether or not we come to understand “intelligence” any time soon, we do have a major challenge on our hands in bringing together computers and humans in ways that enhance human life. While this challenge is viewed by some as subservient to the creation of “artificial intelligence,” it can also be viewed more prosaically — but with no less reverence — as the creation of a new branch of engineering. Much like civil engineering and chemical engineering in decades past, this new discipline aims to corral the power of a few key ideas, bringing new resources and capabilities to people, and doing so safely. Whereas civil engineering and chemical engineering were built on physics and chemistry, this new engineering discipline will be built on ideas that the preceding century gave substance to — ideas such as “information,” “algorithm,” “data,” “uncertainty,” “computing,” “inference,” and “optimization.” Moreover, since much of the focus of the new discipline will be on data from and about humans, its development will require perspectives from the social sciences and humanities. While the building blocks have begun to emerge, the principles for putting these blocks together have not yet emerged, and so the blocks are currently being put together in ad-hoc ways. Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference-and-decision-making systems that involve machines, humans and the environment. Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws. And, unfortunately, we are not very good at anticipating what the next emerging serious flaw will be. What we’re missing is an engineering discipline with its principles of analysis and design. The current public dialog about these issues too often uses “AI” as an intellectual wildcard, one that makes it difficult to reason about the scope and consequences of emerging technology. Let us begin by considering more carefully what “AI” has been used to refer to, both recently and historically. Most of what is being called “AI” today, particularly in the public sphere, is what has been called “Machine Learning” (ML) for the past several decades. 
ML is an algorithmic field that blends ideas from statistics, computer science and many other disciplines (see below) to design algorithms that process data, make predictions and help make decisions. In terms of impact on the real world, ML is the real thing, and not just recently. Indeed, that ML would grow into massive industrial relevance was already clear in the early 1990s, and by the turn of the century forward-looking companies such as Amazon were already using ML throughout their business, solving mission-critical back-end problems in fraud detection and supply-chain prediction, and building innovative consumer-facing services such as recommendation systems. As datasets and computing resources grew rapidly over the ensuing two decades, it became clear that ML would soon power not only Amazon but essentially any company in which decisions could be tied to large-scale data. New business models would emerge. The phrase “Data Science” began to be used to refer to this phenomenon, reflecting the need of ML algorithms experts to partner with database and distributed-systems experts to build scalable, robust ML systems, and reflecting the larger social and environmental scope of the resulting systems. This confluence of ideas and technology trends has been rebranded as “AI” over the past few years. This rebranding is worthy of some scrutiny. Historically, the phrase “AI” was coined in the late 1950’s to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level intelligence. We will use the phrase “human-imitative AI” to refer to this aspiration, emphasizing the notion that the artificially intelligent entity should seem to be one of us, if not physically at least mentally (whatever that might mean). This was largely an academic enterprise. While related academic fields such as operations research, statistics, pattern recognition, information theory and control theory already existed, and were often inspired by human intelligence (and animal intelligence), these fields were arguably focused on “low-level” signals and decisions. The ability of, say, a squirrel to perceive the three-dimensional structure of the forest it lives in, and to leap among its branches, was inspirational to these fields. “AI” was meant to focus on something different — the “high-level” or “cognitive” capability of humans to “reason” and to “think.” Sixty years later, however, high-level reasoning and thought remain elusive. The developments which are now being called “AI” arose mostly in the engineering fields associated with low-level pattern recognition and movement control, and in the field of statistics — the discipline focused on finding patterns in data and on making well-founded predictions, tests of hypotheses and decisions. Indeed, the famous “backpropagation” algorithm that was rediscovered by David Rumelhart in the early 1980s, and which is now viewed as being at the core of the so-called “AI revolution,” first arose in the field of control theory in the 1950s and 1960s. One of its early applications was to optimize the thrusts of the Apollo spaceships as they headed towards the moon. Since the 1960s much progress has been made, but it has arguably not come about from the pursuit of human-imitative AI. Rather, as in the case of the Apollo spaceships, these ideas have often been hidden behind the scenes, and have been the handiwork of researchers focused on specific engineering challenges. 
Although not visible to the general public, research and systems-building in areas such as document retrieval, text classification, fraud detection, recommendation systems, personalized search, social network analysis, planning, diagnostics and A/B testing have been a major success — these are the advances that have powered companies such as Google, Netflix, Facebook and Amazon. One could simply agree to refer to all of this as “AI,” and indeed that is what appears to have happened. Such labeling may come as a surprise to optimization or statistics researchers, who wake up to find themselves suddenly referred to as “AI researchers.” But labeling of researchers aside, the bigger problem is that the use of this single, ill-defined acronym prevents a clear understanding of the range of intellectual and commercial issues at play. The past two decades have seen major progress — in industry and academia — in a complementary aspiration to human-imitative AI that is often referred to as “Intelligence Augmentation” (IA). Here computation and data are used to create services that augment human intelligence and creativity. A search engine can be viewed as an example of IA (it augments human memory and factual knowledge), as can natural language translation (it augments the ability of a human to communicate). Computing-based generation of sounds and images serves as a palette and creativity enhancer for artists. While services of this kind could conceivably involve high-level reasoning and thought, currently they don’t — they mostly perform various kinds of string-matching and numerical operations that capture patterns that humans can make use of. Hoping that the reader will tolerate one last acronym, let us conceive broadly of a discipline of “Intelligent Infrastructure” (II), whereby a web of computation, data and physical entities exists that makes human environments more supportive, interesting and safe. Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce and finance, with vast implications for individual humans and societies. This emergence sometimes arises in conversations about an “Internet of Things,” but that effort generally refers to the mere problem of getting “things” onto the Internet — not to the far grander set of challenges associated with these “things” capable of analyzing those data streams to discover facts about the world, and interacting with humans and other “things” at a far higher level of abstraction than mere bits. For example, returning to my personal anecdote, we might imagine living our lives in a “societal-scale medical system” that sets up data flows, and data-analysis flows, between doctors and devices positioned in and around human bodies, thereby able to aid human intelligence in making diagnoses and providing care. The system would incorporate information from cells in the body, DNA, blood tests, environment, population genetics and the vast scientific literature on drugs and treatments. It would not just focus on a single patient and a doctor, but on relationships among all humans — just as current medical testing allows experiments done on one set of humans (or animals) to be brought to bear in the care of other humans. It would help maintain notions of relevance, provenance and reliability, in the way that the current banking system focuses on such challenges in the domain of finance and payment. 
And, while one can foresee many problems arising in such a system — involving privacy issues, liability issues, security issues, etc — these problems should properly be viewed as challenges, not show-stoppers. We now come to a critical issue: Is working on classical human-imitative AI the best or only way to focus on these larger challenges? Some of the most heralded recent success stories of ML have in fact been in areas associated with human-imitative AI — areas such as computer vision, speech recognition, game-playing and robotics. So perhaps we should simply await further progress in domains such as these. There are two points to make here. First, although one would not know it from reading the newspapers, success in human-imitative AI has in fact been limited — we are very far from realizing human-imitative AI aspirations. Unfortunately the thrill (and fear) of making even limited progress on human-imitative AI gives rise to levels of over-exuberance and media attention that is not present in other areas of engineering. Second, and more importantly, success in these domains is neither sufficient nor necessary to solve important IA and II problems. On the sufficiency side, consider self-driving cars. For such technology to be realized, a range of engineering problems will need to be solved that may have little relationship to human competencies (or human lack-of-competencies). The overall transportation system (an II system) will likely more closely resemble the current air-traffic control system than the current collection of loosely-coupled, forward-facing, inattentive human drivers. It will be vastly more complex than the current air-traffic control system, specifically in its use of massive amounts of data and adaptive statistical modeling to inform fine-grained decisions. It is those challenges that need to be in the forefront, and in such an effort a focus on human-imitative AI may be a distraction. As for the necessity argument, it is sometimes argued that the human-imitative AI aspiration subsumes IA and II aspirations, because a human-imitative AI system would not only be able to solve the classical problems of AI (as embodied, e.g., in the Turing test), but it would also be our best bet for solving IA and II problems. Such an argument has little historical precedent. Did civil engineering develop by envisaging the creation of an artificial carpenter or bricklayer? Should chemical engineering have been framed in terms of creating an artificial chemist? Even more polemically: if our goal was to build chemical factories, should we have first created an artificial chemist who would have then worked out how to build a chemical factory? A related argument is that human intelligence is the only kind of intelligence that we know, and that we should aim to mimic it as a first step. But humans are in fact not very good at some kinds of reasoning — we have our lapses, biases and limitations. Moreover, critically, we did not evolve to perform the kinds of large-scale decision-making that modern II systems must face, nor to cope with the kinds of uncertainty that arise in II contexts. One could argue that an AI system would not only imitate human intelligence, but also “correct” it, and would also scale to arbitrarily large problems. But we are now in the realm of science fiction — such speculative arguments, while entertaining in the setting of fiction, should not be our principal strategy going forward in the face of the critical IA and II problems that are beginning to emerge. 
We need to solve IA and II problems on their own merits, not as a mere corollary to a human-imitative AI agenda. It is not hard to pinpoint algorithmic and infrastructure challenges in II systems that are not central themes in human-imitative AI research. II systems require the ability to manage distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions and they must deal with long-tail phenomena whereby there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries. Finally, and of particular importance, II systems must bring economic ideas such as incentives and pricing into the realm of the statistical and computational infrastructures that link humans to each other and to valued goods. Such II systems can be viewed as not merely providing a service, but as creating markets. There are domains such as music, literature and journalism that are crying out for the emergence of such markets, where data analysis links producers and consumers. And this must all be done within the context of evolving societal, ethical and legal norms. Of course, classical human-imitative AI problems remain of great interest as well. However, the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems that perform natural language processing, the need to infer and represent causality, the need to develop computationally-tractable representations of uncertainty and the need to develop systems that formulate and pursue long-term goals. These are classical goals in human-imitative AI, but in the current hubbub over the “AI revolution,” it is easy to forget that they are not yet solved. IA will also remain quite essential, because for the foreseeable future, computers will not be able to match humans in their ability to reason abstractly about real-world situations. We will need well-thought-out interactions of humans and computers to solve our most pressing problems. And we will want computers to trigger new levels of human creativity, not replace human creativity (whatever that might mean). It was John McCarthy (while a professor at Dartmouth, and soon to take a position at MIT) who coined the term “AI,” apparently to distinguish his budding research agenda from that of Norbert Wiener (then an older professor at MIT). Wiener had coined “cybernetics” to refer to his own vision of intelligent systems — a vision that was closely tied to operations research, statistics, pattern recognition, information theory and control theory. McCarthy, on the other hand, emphasized the ties to logic. In an interesting reversal, it is Wiener’s intellectual agenda that has come to dominate in the current era, under the banner of McCarthy’s terminology. (This state of affairs is surely, however, only temporary; the pendulum swings more in AI than in most fields.) But we need to move beyond the particular historical perspectives of McCarthy and Wiener. 
We need to realize that the current public dialog on AI — which focuses on a narrow subset of industry and a narrow subset of academia — risks blinding us to the challenges and opportunities that are presented by the full scope of AI, IA and II. This scope is less about the realization of science-fiction dreams or nightmares of super-human machines, and more about the need for humans to understand and shape technology as it becomes ever more present and influential in their daily lives. Moreover, in this understanding and shaping there is a need for a diverse set of voices from all walks of life, not merely a dialog among the technologically attuned. Focusing narrowly on human-imitative AI prevents an appropriately wide range of voices from being heard. While industry will continue to drive many developments, academia will also continue to play an essential role, not only in providing some of the most innovative technical ideas, but also in bringing researchers from the computational and statistical disciplines together with researchers from other disciplines whose contributions and perspectives are sorely needed — notably the social sciences, the cognitive sciences and the humanities. On the other hand, while the humanities and the sciences are essential as we go forward, we should also not pretend that we are talking about something other than an engineering effort of unprecedented scale and scope — society is aiming to build new kinds of artifacts. These artifacts should be built to work as claimed. We do not want to build systems that help us with medical treatments, transportation options and commercial opportunities to find out after the fact that these systems don’t really work — that they make errors that take their toll in terms of human lives and happiness. In this regard, as I have emphasized, there is an engineering discipline yet to emerge for the data-focused and learning-focused fields. As exciting as these latter fields appear to be, they cannot yet be viewed as constituting an engineering discipline. Moreover, we should embrace the fact that what we are witnessing is the creation of a new branch of engineering. The term “engineering” is often invoked in a narrow sense — in academia and beyond — with overtones of cold, affectless machinery, and negative connotations of loss of control by humans. But an engineering discipline can be what we want it to be. In the current era, we have a real opportunity to conceive of something historically new — a human-centric engineering discipline. I will resist giving this emerging discipline a name, but if the acronym “AI” continues to be used as placeholder nomenclature going forward, let’s be aware of the very real limitations of this placeholder. Let’s broaden our scope, tone down the hype and recognize the serious challenges ahead. Michael I. Jordan From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Michael I. Jordan is a Professor in the Department of Electrical Engineering and Computer Sciences and the Department of Statistics at UC Berkeley. " Eran Kampf,57,3,https://developerzen.com/data-mining-handling-missing-values-the-database-bd2241882e72?source=tag_archive---------0----------------,Data Mining — Handling Missing Values the Database – DeveloperZen,"I’ve recently answered Predicting missing data values in a database on StackOverflow and thought it deserved a mention on DeveloperZen. One of the important stages of data mining is preprocessing, where we prepare the data for mining. 
Real-world data tends to be incomplete, noisy, and inconsistent, and an important task when preprocessing the data is to fill in missing values, smooth out noise, and correct inconsistencies. If we specifically look at dealing with missing data, there are several techniques that can be used. Choosing the right technique is a choice that depends on the problem domain — the data’s domain (sales data? CRM data? ...) and our goal for the data mining process. So how can you handle missing values in your database? Technique #1 is to ignore the data row. This is usually done when the class label is missing (assuming your data mining goal is classification), or many attributes are missing from the row (not just one). However, you’ll obviously get poor performance if the percentage of such rows is high. For example, let’s say we have a database of student enrollment data (age, SAT score, state of residence, etc.) and a column classifying their success in college into “Low”, “Medium” and “High”. Let’s say our goal is to build a model predicting a student’s success in college. Data rows that are missing the success column are not useful in predicting success, so they could very well be ignored and removed before running the algorithm. Technique #2 is to use a global constant to fill in the missing values. Decide on a new global constant value, like “unknown”, “N/A” or minus infinity, that will be used to fill all the missing values. This technique is used because sometimes it just doesn’t make sense to try and predict the missing value. For example, let’s look at the student enrollment database again. Assume the state of residence attribute is missing for some students. Filling it in with some state doesn’t really make sense, as opposed to using something like “N/A”. Technique #3 is to use the attribute mean: replace missing values of an attribute with the mean (or median if it’s discrete) value for that attribute in the database. For example, in a database of US family incomes, if the average income of a US family is X, you can use that value to replace missing income values. Technique #4 is to use the attribute mean for all samples belonging to the same class: instead of using the mean (or median) of a certain attribute calculated by looking at all the rows in a database, we can limit the calculations to the relevant class to make the value more relevant to the row we’re looking at. Let’s say you have a car pricing database that, among other things, classifies cars into “Luxury” and “Low budget” and you’re dealing with missing values in the cost field. Replacing the missing cost of a luxury car with the average cost of all luxury cars is probably more accurate than the value you’d get if you factor in the low budget cars. Technique #5 is to use a data mining algorithm to predict the most probable value. The value can be determined using regression, inference-based tools using Bayesian formalism, decision trees, or clustering algorithms (k-means, k-medians, etc.). For example, we could use clustering algorithms to create clusters of rows which will then be used for calculating an attribute mean or median as specified in technique #3. Another example could be using a decision tree to try and predict the probable value of the missing attribute, based on the other attributes in the row. I’d suggest looking into regression and decision trees first (ID3 tree generation) as they’re relatively easy and there are plenty of examples on the net... Additional Notes Originally published at www.developerzen.com on August 14, 2009. Maker of things. Big data geek. Food Lover. The essence of Software Development ... 
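To make the techniques above concrete, here is a hedged sketch in pandas. The original post names no particular library or dataset, so the DataFrame, the column names, and the values below are invented purely for illustration.

```python
# A minimal sketch of techniques #1-#4 from the post above, using an invented
# student-enrollment style DataFrame. Technique #5 (predicting the value with a
# model) is only hinted at in the final comment.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "state":   ["NY", None, "CA", "TX", None, "WA"],
    "sat":     [1400, 1250, np.nan, 1100, 1300, 980],
    "success": ["High", "Medium", "Low", None, "High", "Low"],  # class label
})

# Technique #1: ignore rows whose class label is missing.
df = df.dropna(subset=["success"])

# Technique #2: fill a categorical attribute with a global constant.
df["state"] = df["state"].fillna("N/A")

# Technique #3: fill a numeric attribute with the overall mean (or median).
df["sat_global_fill"] = df["sat"].fillna(df["sat"].mean())

# Technique #4: fill with the mean of the same class instead of the global mean.
df["sat_class_fill"] = df.groupby("success")["sat"].transform(
    lambda s: s.fillna(s.mean())
)

# Technique #5 would replace the fillna calls with a model (for example, a
# regression or decision tree trained on the other attributes) that predicts
# the missing value for each incomplete row.
print(df)
```

On this toy data the global-mean fill and the class-conditional fill give different values for the missing SAT score, which is the point of technique #4.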
" Oliver Lindberg,1,7,https://medium.com/the-lindberg-interviews/interview-with-googles-alfred-spector-on-voice-search-hybrid-intelligence-and-more-2f6216aa480c?source=tag_archive---------0----------------,"Interview with Google’s Alfred Spector on voice search, hybrid intelligence and more","Google’s a pretty good search engine, right? Well, you ain’t seen nothing yet. VP of research Alfred Spector talks to Oliver Lindberg about the technologies emerging from Google Labs — from voice search to hybrid intelligence and beyond This article originally appeared in issue 198 of .net magazine in 2010 and was republished at www.techradar.com. Google has always been tight-lipped about products that haven’t launched yet. It’s no secret, however, that thanks to the company’s bottom-up culture, its engineers are working on tons of new projects at the same time. Following the mantra of ‘release early, release often’, the speed at which the search engine giant is churning out tools is staggering. At the heart of it all is Alfred Spector, Google’s Vice President of Research and Special Initiatives. One of the areas Google is making significant advances in is voice search. Spector is astounded by how rapidly it’s come along. The Google Mobile App features ‘search by voice’ capabilities that are available for the iPhone, BlackBerry, Windows Mobile and Android. All versions understand English (including US, UK, Australian and Indian-English accents) but the latest addition, for Nokia S60 phones, even introduces Mandarin speech recognition, which — because of its many different accents and tonal characteristics — posed a huge engineering challenge. It’s the most spoken language in the world, but as it isn’t exactly keyboard-friendly, voice search could become immensely popular in China. “Voice is one of these grand technology challenges in computer science,” Spector explains. “Can a computer understand the human voice? It’s been worked on for many decades and what we’ve realised over the last couple of years is that search, particularly on handheld devices, is amenable to voice as an import mechanism. “It’s very valuable to be able to use voice. All of us know that no matter how good the keyboard, it’s tricky to type exactly the right thing into a searchbar, while holding your backpack and everything else.” To get a computer to take account of your voice is no mean feat, of course. “One idea is to take all of the voices that the system hears over time into one huge pan-human voice model. So, on the one hand we have a voice that’s higher and with an English accent, and on the other hand my voice, which is deeper and with an American accent. They both go into one model, or it just becomes personalised to the individual; voice scientists are a little unclear as to which is the best approach.” The research department is also making progress in machine translation. Google Translate already features 51 languages, including Swahili and Yiddish. The latest version introduces instant, real-time translation, phonetic input and text-to-speech support (in English). “We’re able to go from any language to any of the others, and there are 51 times 50, so 2,550 possibilities,” Spector explains. “We’re focusing on increasing the number of languages because we’d like to handle even those languages where there’s not an enormous volume of usage. It will make the web far more valuable to more people if they can access the English-or Chinese language web, for example. 
“But we also continue to focus on quality because almost always the translations are valuable but imperfect. Sometimes it comes from training our translation system over more raw data, so we have, say, EU documents in English and French and can compare them and learn rules for translation. The other approach is to bring more knowledge into translation. For example, we’re using more syntactic knowledge today and doing automated parsing with language. It’s been a grand challenge of the field since the late 1950s. Now it’s finally achieved mass usage.” The team, led by scientist Franz Josef Och, has been collecting data for more than 100 languages, and the Google Translator Toolkit, which makes use of the ‘wisdom of the crowds’, now even supports 345 languages, many of which are minority languages. The editor enables users to translate text, correct the automatic translation and publish it. Spector thinks that this approach is the future. As computers become even faster, handling more and more data — a lot of it in the cloud — machines learn from users and thus become smarter. He calls this concept ‘hybrid intelligence’. “It’s very difficult to solve these technological problems without human input,” he says. “It’s hard to create a robot that’s as clever, smart and knowledgeable of the world as we humans are. But it’s not as tough to build a computational system like Google, which extends what we do greatly and gradually learns something about the world from us, but that requires our interpretation to make it really successful. “We need to get computers and people communicating in both directions, so the computer learns from the human and makes the human more effective.” Examples of ‘hybrid intelligence’ are Google Suggest, which instantly offers popular searches as you type a search query, and the ‘did you mean?’ feature in Google search, which corrects you when you misspell a query in the search bar. The more you use it, the better the system gets. Training computers to become seemingly more intelligent poses major hurdles for Google’s engineers. “Computers don’t train as efficiently as people do,” Spector explains. “Let’s take the chess example. If a Kasparov was the educator, we could count on almost anything he says as being accurate. But if you tried to learn from a million chess players, you learn from my children as well, who play chess but they’re 10 and eight. They’ll be right sometimes and not right other times. There’s noise in that, and some of the noise is spam. One also has to have careful regard for privacy issues.” By collecting enormous amounts of data, Google hopes to create a powerful database that eventually will understand the relationship between words (for example, ‘a dog is an animal’ and ‘a dog has four legs’). The challenge is to try to establish these relationships automatically, using tons of information, instead of having experts teach the system. This database would then improve search results and language translations because it would have a better understanding of the meaning of the words. There’s also a lot of research around ‘conceptual search’. “Let’s take a video of a couple in front of the Empire State Building. We watch the video and it’s clear they’re on their honeymoon. But what is the video about? Is it about love or honeymoons, or is it about renting office space? It’s a fundamentally challenging problem.” One example of conceptual search is Google Image Swirl, which was added to Labs in November. 
Enter a keyword and you get a list of 12 images; clicking on each one brings up a cluster of related pictures. Click on any of them to expand the ‘wonder wheel’ further. Google notes that they’re not just the most relevant images; the algorithm determines the most relevant group of images with similar appearance and meaning. To improve the world’s data, Google continues to focus on the importance of the open internet. Another Labs project, Google Fusion Tables, facilitates data management in the cloud. It enables users to create tables, filter and aggregate data, merge it with other data sources and visualise it with Google Maps or the Google Visualisation API. The data sets can then be published, shared or kept private and commented on by people around the world. “It’s an example of open collaboration,” Spector says. “If it’s public, we can crawl it to make it searchable and easily visible to people. We hired one of the best database researchers in the world, Alon Halevy, to lead it.” Google is aiming to make more information available more easily across multiple devices, whether it’s images, videos, speech or maps, no matter which language we’re using. Spector calls the impact “totally transparent processing — it revolutionises the role of computation in day-to-day life. The computer can break down all these barriers to communication and knowledge. No matter what device we’re using, we have access to things. We can do translations, there are books or government documents, and some day we hope to have medical records. Whatever you want, no matter where you are, you can find it.” Spector retired in early 2015 and now serves as the CTO of Two Sigma Investments. This article originally appeared in issue 198 of .net magazine in 2010 and was republished at www.techradar.com. Photography by Andy Short Independent editor and content consultant. Founder and captain of @pixelpioneers. Co-founder and curator of www.GenerateConf.com. Former editor of @netmag. Interviews with leading tech entrepreneurs and web designers, conducted by @oliverlindberg at @netmag. " Xu Wenhao,1,4,https://xuwenhao.com/%E5%BB%BA%E8%AE%AE%E7%9A%84%E7%A8%8B%E5%BA%8F%E5%91%98%E5%AD%A6%E4%B9%A0lda%E7%AE%97%E6%B3%95%E7%9A%84%E6%AD%A5%E9%AA%A4-54168e081bc1?source=tag_archive---------0----------------,Recommended Steps for Programmers Learning the LDA Algorithm – 蒸汽与魔法 (Steam and Magic),"For work reasons I recently spent some time learning the LDA algorithm. To be honest, for someone like me who studied CS rather than math, and who had read essentially nothing on machine learning beyond Programming Collective Intelligence, it was hard to find a way in at first. Nearly four months on, I have a basic grasp of how the algorithm is implemented, so I’m writing this down as a record and as a quick-start reference for those who come after. At first I downloaded Blei’s original paper, but just past the opening I was knocked flat by the Dirichlet distribution and a few formulas, and since I was concentrating on writing the actual code for our project, I set it aside. Then, realizing I had completely forgotten the probability and statistics I learned as an undergraduate, I went back for my university probability textbook, discovered I had long since lent it to someone, bought a copy online, and spent a few days reviewing the basics: Bayes’ rule and the law of total probability, the normal distribution, the binomial distribution, and so on. Later, with free evenings, I browsed the AI board on the Shuimu BBS and learned about PRML, the ‘bible’ of machine learning; since this was going to be a long-term effort anyway, I got the electronic version and also bought a printed, glue-bound copy on Taobao. Over Chinese New Year I read a little every night, skimmed the first two chapters, reviewed the basic math once more, and got a feel for the Bayesian approach of modeling with conjugate priors. I then tried going back to Blei’s paper, found I still couldn’t quite follow it, and put it down again. Then one day Tony asked me to prepare a session for the students at Fudan on how we use LDA in our project. Not wanting to embarrass myself, I went back to the papers and happened upon the popular-science piece in Science, Topic Models Vs.
Unstructured Data. After reading it through, I went back to PRML and read the chapter on graphical models, and felt much clearer about the problem LDA is trying to solve and how it goes about solving it. After that I found this article through a search engine and, following its recommendation, read part of Gibbs Sampling for the Uninitiated. I then somehow stumbled upon Probabilistic Topic Models by Mark Steyvers and Tom Griffiths, finished it on a weekend round-trip flight to Beijing, and felt that I basically understood the model training process as well. Finally I read a minimal implementation of LDA Gibbs sampling and then went back and read the PLDA source code, and by that point I had a reasonably clear picture of LDA. All told, three months had passed, and quite a lot of that time was wasted; for example, I attempted Blei’s paper several times at the beginning without any of the prerequisite knowledge, never finished it, and got very little out of it. To sum up, I think that for us layman programmers, getting to grips with LDA roughly requires the following background: Working through that loop should cover both the basic concepts and the algorithm’s implementation. The mathematical proofs, of course, are not dealt with so easily, but for an engineer, getting this far is enough to start doing useful work. This path is not suited to those of you pursuing PhDs and publishing papers, but starting this way makes it easier to develop an interest in the underlying mathematics; otherwise, staring at symbols and formulas all day, someone like me without large blocks of spare time would find it all too easy to shrink back and give up. I have also found that, as an engineer, reading code and real application examples is what makes things click, so it seems I should not spend most of my time on PRML, but should keep reading it side by side with code. " Netflix Technology Blog,439,9,https://medium.com/netflix-techblog/netflix-recommendations-beyond-the-5-stars-part-1-55838468f429?source=tag_archive---------0----------------,Netflix Recommendations: Beyond the 5 stars (Part 1),"by Xavier Amatriain and Justin Basilico (Personalization Science and Engineering) In this two-part blog post, we will open the doors of one of the most valued Netflix assets: our recommendation system. In Part 1, we will relate the Netflix Prize to the broader recommendation challenge, outline the external components of our personalized service, and highlight how our task has evolved with the business. In Part 2, we will describe some of the data and models that we use and discuss our approach to algorithmic innovation that combines offline machine learning experimentation with online AB testing. Enjoy... and remember that we are always looking for more star talent to add to our great team, so please take a look at our jobs page. In 2006 we announced the Netflix Prize, a machine learning and data mining competition for movie rating prediction. We offered $1 million to whoever improved the accuracy of our existing system called Cinematch by 10%. We conducted this competition to find new ways to improve the recommendations we provide to our members, which is a key part of our business. However, we had to come up with a proxy question that was easier to evaluate and quantify: the root mean squared error (RMSE) of the predicted rating. The race was on to beat our RMSE of 0.9525 with the finish line of reducing it to 0.8572 or less. A year into the competition, the Korbell team won the first Progress Prize with an 8.43% improvement. They reported more than 2000 hours of work in order to come up with the final combination of 107 algorithms that gave them this prize. And, they gave us the source code. We looked at the two underlying algorithms with the best performance in the ensemble: Matrix Factorization (which the community generally called SVD, Singular Value Decomposition) and Restricted Boltzmann Machines (RBM). SVD by itself provided a 0.8914 RMSE, while RBM alone provided a competitive but slightly worse 0.8990 RMSE. A linear blend of these two reduced the error to 0.88. To put these algorithms to use, we had to work to overcome some limitations, for instance that they were built to handle 100 million ratings, instead of the more than 5 billion that we have, and that they were not built to adapt as members added more ratings. But once we overcame those challenges, we put the two algorithms into production, where they are still used as part of our recommendation engine.
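To make the "linear blend" above concrete, here is a minimal sketch (not Netflix code; the predictions and ratings below are invented) of how two rating predictors, such as an SVD model and an RBM, can be combined with weights fit by least squares on held-out data, and how the blended RMSE compares to each model on its own:

import numpy as np

# Toy illustration of linearly blending two rating predictors on a held-out set.
# The arrays are invented stand-ins for SVD and RBM predictions and true ratings.
svd_pred = np.array([3.8, 2.9, 4.4, 3.1, 4.9])
rbm_pred = np.array([3.5, 3.2, 4.1, 2.8, 4.7])
actual = np.array([4.0, 3.0, 4.5, 3.0, 5.0])

# Fit blend weights (w_svd, w_rbm, bias) by ordinary least squares.
X = np.column_stack([svd_pred, rbm_pred, np.ones_like(svd_pred)])
weights, *_ = np.linalg.lstsq(X, actual, rcond=None)

def rmse(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

print('SVD RMSE:', rmse(svd_pred, actual))
print('RBM RMSE:', rmse(rbm_pred, actual))
print('Blend RMSE:', rmse(X @ weights, actual))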
If you followed the Prize competition, you might be wondering what happened with the final Grand Prize ensemble that won the $1M two years later. This is a truly impressive compilation and culmination of years of work, blending hundreds of predictive models to finally cross the finish line. We evaluated some of the new methods offline but the additional accuracy gains that we measured did not seem to justify the engineering effort needed to bring them into a production environment. Also, our focus on improving Netflix personalization had shifted to the next level by then. In the remainder of this post we will explain how and why it has shifted. One of the reasons our focus in the recommendation algorithms has changed is because Netflix as a whole has changed dramatically in the last few years. Netflix launched an instant streaming service in 2007, one year after the Netflix Prize began. Streaming has not only changed the way our members interact with the service, but also the type of data available to use in our algorithms. For DVDs our goal is to help people fill their queue with titles to receive in the mail over the coming days and weeks; selection is distant in time from viewing, people select carefully because exchanging a DVD for another takes more than a day, and we get no feedback during viewing. For streaming members are looking for something great to watch right now; they can sample a few videos before settling on one, they can consume several in one session, and we can observe viewing statistics such as whether a video was watched fully or only partially. Another big change was the move from a single website into hundreds of devices. The integration with the Roku player and the Xbox were announced in 2008, two years into the Netflix competition. Just a year later, Netflix streaming made it into the iPhone. Now it is available on a multitude of devices that go from a myriad of Android devices to the latest AppleTV. Two years ago, we went international with the launch in Canada. In 2011, we added 43 Latin-American countries and territories to the list. And just recently, we launched in UK and Ireland. Today, Netflix has more than 23 million subscribers in 47 countries. Those subscribers streamed 2 billion hours from hundreds of different devices in the last quarter of 2011. Every day they add 2 million movies and TV shows to the queue and generate 4 million ratings. We have adapted our personalization algorithms to this new scenario in such a way that now 75% of what people watch is from some sort of recommendation. We reached this point by continuously optimizing the member experience and have measured significant gains in member satisfaction whenever we improved the personalization for our members. Let us now walk you through some of the techniques and approaches that we use to produce these recommendations. We have discovered through the years that there is tremendous value to our subscribers in incorporating recommendations to personalize as much of Netflix as possible. Personalization starts on our homepage, which consists of groups of videos arranged in horizontal rows. Each row has a title that conveys the intended meaningful connection between the videos in that group. Most of our personalization is based on the way we select rows, how we determine what items to include in them, and in what order to place those items. Take as a first example the Top 10 row: this is our best guess at the ten titles you are most likely to enjoy. 
Of course, when we say “you”, we really mean everyone in your household. It is important to keep in mind that Netflix’ personalization is intended to handle a household that is likely to have different people with different tastes. That is why when you see your Top10, you are likely to discover items for dad, mom, the kids, or the whole family. Even for a single person household we want to appeal to your range of interests and moods. To achieve this, in many parts of our system we are not only optimizing for accuracy, but also for diversity. Another important element in Netflix’ personalization is awareness. We want members to be aware of how we are adapting to their tastes. This not only promotes trust in the system, but encourages members to give feedback that will result in better recommendations. A different way of promoting trust with the personalization component is to provide explanations as to why we decide to recommend a given movie or show. We are not recommending it because it suits our business needs, but because it matches the information we have from you: your explicit taste preferences and ratings, your viewing history, or even your friends’ recommendations. On the topic of friends, we recently released our Facebook connect feature in 46 out of the 47 countries we operate — all but the US because of concerns with the VPPA law. Knowing about your friends not only gives us another signal to use in our personalization algorithms, but it also allows for different rows that rely mostly on your social circle to generate recommendations. Some of the most recognizable personalization in our service is the collection of “genre” rows. These range from familiar high-level categories like “Comedies” and “Dramas” to highly tailored slices such as “Imaginative Time Travel Movies from the 1980s”. Each row represents 3 layers of personalization: the choice of genre itself, the subset of titles selected within that genre, and the ranking of those titles. Members connect with these rows so well that we measure an increase in member retention by placing the most tailored rows higher on the page instead of lower. As with other personalization elements, freshness and diversity is taken into account when deciding what genres to show from the thousands possible. We present an explanation for the choice of rows using a member’s implicit genre preferences — recent plays, ratings, and other interactions — , or explicit feedback provided through our taste preferences survey. We will also invite members to focus a row with additional explicit preference feedback when this is lacking. Similarity is also an important source of personalization in our service. We think of similarity in a very broad sense; it can be between movies or between members, and can be in multiple dimensions such as metadata, ratings, or viewing data. Furthermore, these similarities can be blended and used as features in other models. Similarity is used in multiple contexts, for example in response to a member’s action such as searching or adding a title to the queue. It is also used to generate rows of “adhoc genres” based on similarity to titles that a member has interacted with recently. If you are interested in a more in-depth description of the architecture of the similarity system, you can read about it in this past post on the blog. 
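As a rough sketch of one common building block behind the item-item similarity described above (purely illustrative; the production system blends many signals and dimensions, not just one matrix), similarity between two titles can be computed as the cosine of their user-interaction vectors:

import numpy as np

# Illustrative item-item similarity from a (users x titles) interaction matrix.
# Rows are users, columns are titles; the entries are invented ratings/play counts.
interactions = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 0],
    [0, 0, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

n_titles = interactions.shape[1]
similarity = [[cosine(interactions[:, i], interactions[:, j]) for j in range(n_titles)]
              for i in range(n_titles)]

# Titles most similar to title 0 (excluding itself), e.g. to seed an 'ad hoc genre' row.
print(sorted((j for j in range(n_titles) if j != 0), key=lambda j: -similarity[0][j]))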
In most of the previous contexts — be it in the Top10 row, the genres, or the similars — ranking, the choice of what order to place the items in a row, is critical in providing an effective personalized experience. The goal of our ranking system is to find the best possible ordering of a set of items for a member, within a specific context, in real-time. We decompose ranking into scoring, sorting, and filtering sets of movies for presentation to a member. Our business objective is to maximize member satisfaction and month-to-month subscription retention, which correlates well with maximizing consumption of video content. We therefore optimize our algorithms to give the highest scores to titles that a member is most likely to play and enjoy. Now it is clear that the Netflix Prize objective, accurate prediction of a movie’s rating, is just one of the many components of an effective recommendation system that optimizes our members’ enjoyment. We also need to take into account factors such as context, title popularity, interest, evidence, novelty, diversity, and freshness. Supporting all the different contexts in which we want to make recommendations requires a range of algorithms that are tuned to the needs of those contexts. In the next part of this post, we will talk in more detail about the ranking problem. We will also dive into the data and models that make all the above possible and discuss our approach to innovating in this space. On to part 2: Originally published at techblog.netflix.com on April 6, 2012. " Netflix Technology Blog,365,10,https://medium.com/netflix-techblog/netflix-recommendations-beyond-the-5-stars-part-2-d9b96aa399f5?source=tag_archive---------1----------------,Netflix Recommendations: Beyond the 5 stars (Part 2),"by Xavier Amatriain and Justin Basilico (Personalization Science and Engineering) In part one of this blog post, we detailed the different components of Netflix personalization. We also explained how Netflix personalization, and the service as a whole, have changed from the time we announced the Netflix Prize. The $1M Prize delivered a great return on investment for us, not only in algorithmic innovation, but also in brand awareness and attracting stars (no pun intended) to join our team. Predicting movie ratings accurately is just one aspect of our world-class recommender system. In this second part of the blog post, we will give more insight into our broader personalization technology. We will discuss some of our current models, data, and the approaches we follow to lead innovation and research in this space. The goal of recommender systems is to present a number of attractive items for a person to choose from. This is usually accomplished by selecting some items and sorting them in the order of expected enjoyment (or utility). Since the most common way of presenting recommended items is in some form of list, such as the various rows on Netflix, we need an appropriate ranking model that can use a wide variety of information to come up with an optimal ranking of the items for each of our members. If you are looking for a ranking function that optimizes consumption, an obvious baseline is item popularity.
The reason is clear: on average, a member is most likely to watch what most others are watching. However, popularity is the opposite of personalization: it will produce the same ordering of items for every member. Thus, the goal becomes to find a personalized ranking function that is better than item popularity, so we can better satisfy members with varying tastes. Recall that our goal is to recommend the titles that each member is most likely to play and enjoy. One obvious way to approach this is to use the member’s predicted rating of each item as an adjunct to item popularity. Using predicted ratings on their own as a ranking function can lead to items that are too niche or unfamiliar being recommended, and can exclude items that the member would want to watch even though they may not rate them highly. To compensate for this, rather than using either popularity or predicted rating on their own, we would like to produce rankings that balance both of these aspects. At this point, we are ready to build a ranking prediction model using these two features. There are many ways one could construct a ranking function ranging from simple scoring methods, to pairwise preferences, to optimization over the entire ranking. For the purposes of illustration, let us start with a very simple scoring approach by choosing our ranking function to be a linear combination of popularity and predicted rating. This gives an equation of the form f_rank(u,v) = w1 p(v) + w2 r(u,v) + b, where u=user, v=video item, p=popularity and r=predicted rating. This equation defines a two-dimensional space like the one depicted below. Once we have such a function, we can pass a set of videos through our function and sort them in descending order according to the score. You might be wondering how we can set the weights w1 and w2 in our model (the bias b is constant and thus ends up not affecting the final ordering). In other words, in our simple two-dimensional model, how do we determine whether popularity is more or less important than predicted rating? There are at least two possible approaches to this. You could sample the space of possible weights and let the members decide what makes sense after many A/B tests. This procedure might be time consuming and not very cost effective. Another possible answer involves formulating this as a machine learning problem: select positive and negative examples from your historical data and let a machine learning algorithm learn the weights that optimize your goal. This family of machine learning problems is known as “Learning to rank” and is central to application scenarios such as search engines or ad targeting. Note though that a crucial difference in the case of ranked recommendations is the importance of personalization: we do not expect a global notion of relevance, but rather look for ways of optimizing a personalized model. As you might guess, apart from popularity and rating prediction, we have tried many other features at Netflix. Some have shown no positive effect while others have improved our ranking accuracy tremendously. The graph below shows the ranking improvement we have obtained by adding different features and optimizing the machine learning algorithm. Many supervised classification methods can be used for ranking. Typical choices include Logistic Regression, Support Vector Machines, Neural Networks, or Decision Tree-based methods such as Gradient Boosted Decision Trees (GBDT).
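As a minimal sketch of the two-feature scoring function f_rank described above (the weights and feature values here are invented; the production ranker uses many more features), scoring a candidate set and sorting it in descending score order might look like this:

# Minimal sketch of f_rank(u, v) = w1*p(v) + w2*r(u, v) + b with invented values.
def f_rank(popularity, predicted_rating, w1=0.4, w2=0.6, b=0.0):
    return w1 * popularity + w2 * predicted_rating + b

candidates = {
    'title_a': {'popularity': 0.9, 'predicted_rating': 3.2},
    'title_b': {'popularity': 0.3, 'predicted_rating': 4.8},
    'title_c': {'popularity': 0.6, 'predicted_rating': 4.1},
}

ranked = sorted(candidates,
                key=lambda v: f_rank(candidates[v]['popularity'],
                                     candidates[v]['predicted_rating']),
                reverse=True)
print(ranked)  # descending by score; the bias b never changes the ordering

Learning w1 and w2 from positive and negative historical examples, rather than picking them by hand as above, is what turns this into the "learning to rank" problem the post goes on to describe.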
On the other hand, a great number of algorithms specifically designed for learning to rank have appeared in recent years such as RankSVM or RankBoost. There is no easy answer to choose which model will perform best in a given ranking problem. The simpler your feature space is, the simpler your model can be. But it is easy to get trapped in a situation where a new feature does not show value because the model cannot learn it. Or, the other way around, to conclude that a more powerful model is not useful simply because you don’t have the feature space that exploits its benefits. The previous discussion on the ranking algorithms highlights the importance of both data and models in creating an optimal personalized experience for our members. At Netflix, we are fortunate to have many relevant data sources and smart people who can select optimal algorithms to turn data into product features. Here are some of the data sources we can use to optimize our recommendations: So, what about the models? One thing we have found at Netflix is that with the great availability of data, both in quantity and types, a thoughtful approach is required to model selection, training, and testing. We use all sorts of machine learning approaches: From unsupervised methods such as clustering algorithms to a number of supervised classifiers that have shown optimal results in various contexts. This is an incomplete list of methods you should probably know about if you are working in machine learning for personalization: Consumer Data Science The abundance of source data, measurements and associated experiments allow us to operate a data-driven organization. Netflix has embedded this approach into its culture since the company was founded, and we have come to call it Consumer (Data) Science. Broadly speaking, the main goal of our Consumer Science approach is to innovate for members effectively. The only real failure is the failure to innovate; or as Thomas Watson Sr, founder of IBM, put it: “If you want to increase your success rate, double your failure rate.” We strive for an innovation culture that allows us to evaluate ideas rapidly, inexpensively, and objectively. And, once we test something we want to understand why it failed or succeeded. This lets us focus on the central goal of improving our service for our members. So, how does this work in practice? It is a slight variation over the traditional scientific process called A/B testing (or bucket testing): When we execute A/B tests, we track many different metrics. But we ultimately trust member engagement (e.g. hours of play) and retention. Tests usually have thousands of members and anywhere from 2 to 20 cells exploring variations of a base idea. We typically have scores of A/B tests running in parallel. A/B tests let us try radical ideas or test many approaches at the same time, but the key advantage is that they allow our decisions to be data-driven. You can read more about our approach to A/B Testing in this previous tech blog post or in some of the Quora answers by our Chief Product Officer Neil Hunt. An interesting follow-up question that we have faced is how to integrate our machine learning approaches into this data-driven A/B test culture at Netflix. We have done this with an offline-online testing process that tries to combine the best of both worlds. The offline testing cycle is a step where we test and optimize our algorithms prior to performing online A/B testing. 
To measure model performance offline we track multiple metrics used in the machine learning community: from ranking measures such as normalized discounted cumulative gain, mean reciprocal rank, or fraction of concordant pairs, to classification metrics such as accuracy, precision, recall, or F-score. We also use the famous RMSE from the Netflix Prize or other more exotic metrics to track different aspects like diversity. We keep track of how well those metrics correlate to measurable online gains in our A/B tests. However, since the mapping is not perfect, offline performance is used only as an indication to make informed decisions on follow up tests. Once offline testing has validated a hypothesis, we are ready to design and launch the A/B test that will prove the new feature valid from a member perspective. If it does, we will be ready to roll out in our continuous pursuit of the better product for our members. The diagram below illustrates the details of this process. An extreme example of this innovation cycle is what we called the Top10 Marathon. This was a focused, 10-week effort to quickly test dozens of algorithmic ideas related to improving our Top10 row. Think of it as a 2-month hackathon with metrics. Different teams and individuals were invited to contribute ideas and code in this effort. We rolled out 6 different ideas as A/B tests each week and kept track of the offline and online metrics. The winning results are already part of our production system. The Netflix Prize abstracted the recommendation problem to a proxy question of predicting ratings. But member ratings are only one of the many data sources we have and rating predictions are only part of our solution. Over time we have reformulated the recommendation problem to the question of optimizing the probability a member chooses to watch a title and enjoys it enough to come back to the service. More data availability enables better results. But in order to get those results, we need to have optimized approaches, appropriate metrics and rapid experimentation. To excel at innovating personalization, it is insufficient to be methodical in our research; the space to explore is virtually infinite. At Netflix, we love choosing and watching movies and TV shows. We focus our research by translating this passion into strong intuitions about fruitful directions to pursue; under-utilized data sources, better feature representations, more appropriate models and metrics, and missed opportunities to personalize. We use data mining and other experimental approaches to incrementally inform our intuition, and so prioritize investment of effort. As with any scientific pursuit, there’s always a contribution from Lady Luck, but as the adage goes, luck favors the prepared mind. Finally, above all, we look to our members as the final judges of the quality of our recommendation approach, because this is all ultimately about increasing our members’ enjoyment in their own Netflix experience. We are always looking for more people to join our team of “prepared minds”. Make sure you take a look at our jobs page. Originally published at techblog.netflix.com on June 20, 2012.
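As a footnote to the offline metrics mentioned above, here is a minimal sketch of one of them, normalized discounted cumulative gain (NDCG@k); the relevance values are invented and serve only to show the computation:

import math

# NDCG@k: 'relevances' lists graded relevance in the order the ranker presented the items.
def dcg(relevances, k):
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg(relevances, k):
    ideal = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal if ideal > 0 else 0.0

print(ndcg([3, 2, 3, 0, 1], k=5))  # 1.0 only if the ranker ordered the items perfectly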
" Wolf Garbe,6,6,https://medium.com/@wolfgarbe/1000x-faster-spelling-correction-algorithm-2012-8701fcd87a5f?source=tag_archive---------2----------------,1000x Faster Spelling Correction algorithm (2012) – Wolf Garbe – Medium,"Update1: An improved SymSpell implementation is now 1,000,000x faster.Update2: SymSpellCompound with Compound aware spelling correction. Update3: Benchmark of SymSpell, BK-Tree und Norvig’s spell-correct. Recently I answered a question on Quora about spelling correction for search engines. When I described our SymSpell algorithm I was pointed to Peter Norvig’s page where he outlined his approach. Both algorithms are based on Edit distance (Damerau-Levenshtein distance). Both try to find the dictionary entries with smallest edit distance from the query term. If the edit distance is 0 the term is spelled correctly, if the edit distance is <=2 the dictionary term is used as spelling suggestion. But SymSpell uses a different way to search the dictionary, resulting in a significant performance gain and language independence. Three ways to search for minimum edit distance in a dictionary: 1. Naive approachThe obvious way of doing this is to compute the edit distance from the query term to each dictionary term, before selecting the string(s) of minimum edit distance as spelling suggestion. This exhaustive search is inordinately expensive. Source: Christopher D. Manning, Prabhakar Raghavan & Hinrich Schütze: Introduction to Information Retrieval. The performance can be significantly improved by terminating the edit distance calculation as soon as a threshold of 2 or 3 has been reached. 2. Peter NorvigGenerate all possible terms with an edit distance (deletes + transposes + replaces + inserts) from the query term and search them in the dictionary. For a word of length n, an alphabet size a, an edit distance d=1, there will be n deletions, n-1 transpositions, a*n alterations, and a*(n+1) insertions, for a total of 2n+2an+a-1 terms at search time. Source: Peter Norvig: How to Write a Spelling Corrector. This is much better than the naive approach, but still expensive at search time (114,324 terms for n=9, a=36, d=2) and language dependent (because the alphabet is used to generate the terms, which is different in many languages and huge in Chinese: a=70,000 Unicode Han characters) 3. Symmetric Delete Spelling Correction (SymSpell) Generate terms with an edit distance (deletes only) from each dictionary term and add them together with the original term to the dictionary. This has to be done only once during a pre-calculation step. Generate terms with an edit distance (deletes only) from the input term and search them in the dictionary. For a word of length n, an alphabet size of a, an edit distance of 1, there will be just n deletions, for a total of n terms at search time. This is three orders of magnitude less expensive (36 terms for n=9 and d=2) and language independent (the alphabet is not required to generate deletes). The cost of this approach is the pre-calculation time and storage space of x deletes for every original dictionary entry, which is acceptable in most cases. The number x of deletes for a single dictionary entry depends on the maximum edit distance: x=n for edit distance=1, x=n*(n-1)/2 for edit distance=2, x=n!/d!/(n-d)! for edit distance=d (combinatorics: k out of n combinations without repetitions, and k=n-d), E.g. for a maximum edit distance of 2 and an average word length of 5 and 100,000 dictionary entries we need to additionally store 1,500,000 deletes. 
Remark 1: During the precalculation, different words in the dictionary might lead to the same delete term: delete(sun,1)==delete(sin,1)==sn. While we generate only one new dictionary entry (sn), inside we need to store both original terms as spelling correction suggestions (sun, sin). Remark 2: There are four different comparison pair types: The last comparison type is required for replaces and transposes only. But we need to check whether the suggested dictionary term is really a replace or an adjacent transpose of the input term to prevent false positives of higher edit distance (bank==bnak and bank==bink, but bank!=kanb and bank!=xban and bank!=baxn). Remark 3: Instead of a dedicated spelling dictionary we are using the search engine index itself. This has several benefits: Remark 4: We have implemented query suggestions/completion in a similar fashion. This is a good way to prevent spelling errors in the first place. Every newly indexed word, whose frequency is over a certain threshold, is stored as a suggestion to all of its prefixes (they are created in the index if they do not yet exist). As we provide an instant search feature anyway, the lookup for suggestions also comes at almost no extra cost. Multiple terms are sorted by the number of results stored in the index. Reasoning: The SymSpell algorithm exploits the fact that the edit distance between two terms is symmetrical: We are using variant 3, because the delete-only transformation is language independent and three orders of magnitude less expensive. Where does the speed come from? Computational Complexity: The SymSpell algorithm is constant time (O(1) time), i.e. independent of the dictionary size (but depending on the average term length and maximum edit distance), because our index is based on a Hash Table which has an average search time complexity of O(1). Comparison to other approaches: BK-Trees have a search time of O(log dictionary_size), whereas the SymSpell algorithm is constant time (O(1) time), i.e. independent of the dictionary size. Tries have a comparable search performance to our approach. But a Trie is a prefix tree, which requires a common prefix. This makes it suitable for autocomplete or search suggestions, but not applicable for spell checking. If your typing error is e.g. in the first letter, then you have no common prefix, hence the Trie will not work for spelling correction. Application: Possible application fields of the SymSpell algorithm are those of fast approximate dictionary string matching: spell checkers for word processors and search engines, correction systems for optical character recognition, natural language translation based on translation memory, record linkage, de-duplication, matching DNA sequences, fuzzy string searching and fraud detection. Source code: The C# implementation of the Symmetric Delete Spelling Correction algorithm is released on GitHub as Open Source under the MIT License: https://github.com/wolfgarbe/symspell Ports: There are ports in C++, Crystal, Go, Java, Javascript, Python, Ruby, Rust, Scala, Swift available. Originally published at blog.faroo.com on June 7, 2012.
Founder SeekStorm (Search-as-a-Service), FAROO (P2P Search) http://www.seekstorm.com https://github.com/wolfgarbe https://www.quora.com/profile/Wolf-Garbe " Paul Christiano,43,31,https://ai-alignment.com/a-formalization-of-indirect-normativity-7e44db640160?source=tag_archive---------3----------------,Formalizing indirect normativity – AI Alignment,"This post outlines a formalization of what Nick Bostrom calls “indirect normativity.” I don’t think it’s an adequate solution to the AI control problem; but to my knowledge it was the first precise specification of a goal that meets the “not terrible” bar, i.e. which does not lead to terrible consequences if pursued without any caveats or restrictions. The proposal outlined here was sketched in early 2012 while I was visiting FHI, and was my first serious foray into AI control. When faced with the challenge of writing down precise moral principles, adhering to the standards demanded in mathematics, moral philosophers encounter two serious difficulties: In light of these difficulties, a moral philosopher might simply declare: “It is not my place to aspire to mathematical standards of precision. Ethics as a project inherently requires shared language, understanding, and experience; it becomes impossible or meaningless without them.” This may be a defensible philosophical position, but unfortunately the issue is not entirely philosophical. In the interest of building institutions or machines which reliably pursue what we value, we may one day be forced to describe precisely “what we value” in a way that does not depend on charitable or “common sense” interpretation (in the same way that we today must describe “what we want done” precisely to computers, often with considerable effort). If some aspects of our values cannot be described formally, then it may be more difficult to use institutions or machines to reliably satisfy them. This is not to say that describing our values formally is necessary to satisfying them, merely that it might make it easier. Since we are focusing on finding any precise and satisfactory moral theory, rather than resolving disputes in moral philosophy, we will adopt a consequentialist approach without justification and focus on axiology. Moreover, we will begin from the standpoint of expected utility maximization, and leave aside questions about how or over what space the maximization is performed. We aim to mathematically define a utility function U such that we would be willing to build a hypothetical machine which exceptionlessly maximized U, possibly at the catastrophic expense of any other values. We will assume that the machine has an ability to reason which at least rivals that of humans, and is willing to tolerate arbitrarily complex definitions of U (within its ability to reason about them). We adopt an indirect approach. Rather than specifying what exactly we want, we specify a process for determining what we want. This process is extremely complex, so that any computationally limited agent will always be uncertain about the process’ output. However, by reasoning about the process it is possible to make judgments about which action has the highest expected utility in light of this uncertainty. 
For example, I might adopt the principle: “a state of affairs is valuable to the extent that I would judge it valuable after a century of reflection.” In general I will be uncertain about what I would say after a century, but I can act on the basis of my best guesses: after a century I will probably prefer worlds with more happiness, and so today I should prefer worlds with more happiness. After a century I have only a small probability of valuing trees’ feelings, and so today I should go out of my way to avoid hurting them if it is either instrumentally useful or extremely easy. As I spend more time thinking, my beliefs about what I would say after a century may change, and I will start to pursue different states of affairs even though the formal definition of my values is static. Similarly, I might desire to think about the value of trees’ feelings, if I expect that my opinions are unstable: if I spend a month thinking about trees, my current views will then be a much better predictor of my views after a hundred years, and if I know better whether or not trees’ feelings are valuable, I can make better decisions. This example is quite informal, but it communicates the main idea of the approach. We stress that the value of our contribution, if any, is in the possibility of a precise formulation. (Our proposal itself will be relatively informal; instead it is a description of how you would arrive at a precise formulation.) The use of indirection seems to be necessary to achieve the desired level of precision. Our proposal contains only two explicit steps: Each of these steps requires substantial elaboration, but we must also specify what we expect the human to do with these tools. This proposal is best understood in the context of other fantastic-seeming proposals, such as “my utility is whatever I would write down if I reflected for a thousand years without interruption or biological decay.” The counterfactual events which take place within the definition are far beyond the realm our intuition recognizes as “realistic,” and have no place except in thought experiments. But to the extent that we can reason about these counterfactuals and change our behavior on the basis of that reasoning (if so motivated), we can already see how such fantastic situations could affect our more prosaic reality. The remainder of this document consists of brief elaboration of some of these steps, and a few arguments about why this is a desirable process. The first step of our proposal is a high-fidelity mathematical model of human cognition. We will set aside philosophical troubles, and assume that the human brain is a purely physical system which may be characterized mathematically. Even granting this, it is not clear how we can realistically obtain such a characterization. The most obvious approach to characterizing a brain is to combine measurements of its behavior or architecture with an understanding of biology, chemistry, and physics. This project represents a massive engineering effort which is currently just beginning. Most pessimistically, our proposal could be postponed until this project’s completion. This could still be long before the mathematical characterization of the brain becomes useful for running experiments or automating human activities: because we are interested only in a definition, we do not care about having the computational resources necessary to simulate the brain. An impractical mathematical definition, however, may be much easier to obtain. 
We can define a model of a brain in terms of exhaustive searches which could never be practically carried out. For example, given some observations of a neuron, we can formally define a brute force search for a model of that neuron. Similarly, given models of individual neurons we may be able to specify a brute force search over all ways of connecting those neurons which account for our observations of the brain (say, some data acquired through functional neuroimaging). It may be possible to carry out this definition without exploiting any structural knowledge about the brain, beyond what is necessary to measure it effectively. By collecting imaging data for a human exposed to a wide variety of stimuli, we can recover a large corpus of data which must be explained by any model of a human brain. Moreover, by using our explicit knowledge of human cognition we can algorithmically generate an extensive range of tests which identify a successful simulation, by probing responses to questions or performance on games or puzzles. In fact, this project may be possible using existing resources. The complexity of the human brain is not as unapproachable as it may at first appear: though it may contain 10^14 synapses, each described by many parameters, it can be specified much more compactly. A newborn’s brain can be specified by about 10^9 bits of genetic information, together with a recipe for a physical simulation of development. The human brain appears to form new long-term memories at a rate of 1–2 bits per second, suggesting that it may be possible to specify an adult brain using 10^9 additional bits of experiential information. This suggests that it may require only about 10^10 bits of information to specify a human brain, which is at the limits of what can be reasonably collected by existing technology for functional neuroimaging. This discussion has glossed over at least one question: what do we mean by ‘brain emulation’? Human cognition does not reside in a physical system with sharp boundaries, and it is not clear how you would define or use a simulation of the “input-output” behavior of such an object. We will focus on some system which does have precisely defined input-output behavior, and which captures the important aspects of human cognition. Consider a system containing a human, a keyboard, a monitor, and some auxiliary instruments, well-insulated from the environment except for some wires carrying inputs to the monitor and outputs from the keyboard and auxiliary instruments (and wires carrying power). The inputs to this system are simply screens to be displayed on the monitor (say delivered as a sequence to be displayed one after another at 30 frames per second), while the outputs are the information conveyed from the keyboard and the other measuring apparatuses (also delivered as a sequence of data dumps, each recording activity from the last 30th of a second). This “human in a box” system can be easily formally defined if a precise description of a human brain and coarse descriptions of the human body and the environment are available. Alternatively, the input-output behavior of the human in a box can be directly observed, and a computational model constructed for the entire system. Let H be a mathematical definition of the resulting (randomized) function from input sequences (In(1), In(2), ..., In(K)) to the next output Out(K). H is, by design, a good approximation to what the human “would output” if presented with any particular input sequence.
Using H, we can mathematically define what “would happen” if the human interacted with a wide variety of systems. For example, if we deliver Out(K) as the input to an abstract computer running some arbitrary software, and then define In(K+1) as what the screen would next display, we can mathematically define the distribution over transcripts which would have arisen if the human had interacted with the abstract computer. This computer could be running an interactive shell, a video game, or a messaging client. Note that H reflects the behavior of a particular human, in a particular mental state. This state is determined by the process used to design H, or the data used to learn it. In general, we can control H by choosing an appropriate human and providing appropriate instructions / training. More emulations could be produced by similar measures if necessary. Using only a single human may seem problematic, but we will not rely on this lone individual to make all relevant ethical judgments. Instead, we will try to select a human with the motivational stability to carry out the subsequent steps faithfully, which will define U using the judgment of a community consisting of many humans. This discussion has been brief and has necessarily glossed over several important difficulties. One difficulty is the danger of using computationally unbounded brute force search, given the possibility of short programs which exhibit goal-oriented behavior. Another difficulty is that, unless the emulation project is extremely conservative, the models it produces are not likely to be fully-functional humans. Their thoughts may be blurred in various ways, they may be missing many memories or skills, and they may lack important functionalities such as long-term memory formation or emotional expression. The scope of these issues depends on the availability of data from which to learn the relevant aspects of human cognition. Realistic proposals along these lines will need to accommodate these shortcomings, relying on distorted emulations as a tool to construct increasingly accurate models. For any idealized “software”, with a distinguished instruction return, we can use H to mathematically define the distribution over return values which would result, if the human were to interact with that software. We will informally define a particular program T which provides a rich environment, in which the remainder of our proposal can be implemented. From a technical perspective this will be the last step of our proposal. The remaining steps will be reflected only in the intentions and behavior of the human being simulated in H. Fix a convenient and adequately expressive language (say a dialect of Python designed to run on an abstract machine). T implements a standard interface for an interactive shell in this language: the user can look through all of the past instructions that have been executed and their return values (rendered as strings) or execute a new instruction. We also provide symbols representing H and T themselves (as functions from sequences of K inputs to a value for the Kth output). We also provide some useful information (such as a snapshot of the Internet, and some information about the process used to create H and T), which we encode as a bit string and store in a single environment variable data. We assume that our language of choice has a return instruction, and we have T return whenever the user executes this instruction. 
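Purely to illustrate the shape of this definition (the interface details here are invented, and H below is a trivial stub rather than anything like an emulation), the "human in a box" H interacting with a shell-like T, with U defined as the value T eventually returns, clamped into [0, 1] as described later in the post, could be organized along these lines:

# Schematic sketch (invented interface): U as the value returned when a model H of the
# human interacts with a shell-like environment T.
def H(screens):
    # Stand-in for the human-in-a-box model: maps the history of displayed screens
    # to the next instruction typed at the keyboard. A real H would come from the
    # brain-emulation search described above.
    if len(screens) < 2:
        return 'data[:10]'        # inspect the provided environment data
    return 'return 0.5'           # eventually hand back a utility value

def T(environment_data, max_steps=1000):
    # A tiny interactive shell: execute instructions, show the results as new screens.
    history = []
    namespace = {'data': environment_data, 'history': history}
    screens = ['welcome']
    for _ in range(max_steps):    # crude guard against accidentally hanging the terminal
        instruction = H(screens)
        if instruction.startswith('return '):
            value = float(eval(instruction[len('return '):], namespace))
            return min(1.0, max(0.0, value))   # clamp the returned value into [0, 1]
        result = repr(eval(instruction, namespace))
        history.append((instruction, result))
        screens.append(result)
    return 0.0

U = T(environment_data='snapshot of the internet, documentation of the process, ...')
print(U)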
Some care needs to be taken to define the behavior if T enters an infinite loop–we want to minimize the probability that the human accidentally hangs the terminal, with catastrophic consequences, but we cannot provide a complete safety-net without running into unresolvable issues with self-reference. We define U to be the value returned by H interacting with T. If H represented an unfortunate mental state, then this interaction could be short and unproductive: the simulated human could just decide to type ‘return 0’ and be done with it. However, by choosing an appropriate human to simulate and inculcating an appropriate mental state, we can direct the process further. We intend for H to use the resources in T to initiate a larger deliberative process. For example, the first step of this process may be to instantiate many copies of H, interacting with variants of messaging clients which are in contact with each other. The return value from the original process could then be defined as the value returned by a designated ‘leader’ from this community, or as a majority vote amongst the copies of H, or so on. Another step might be to create appropriate realistic virtual environments for simulated brains, rather than confining them to boxes. For motivational stability, it may be helpful to design various coordination mechanisms, involving frameworks for interaction, “cached” mental states which are frequently re-instantiated, or sanity checks whereby one copy of H monitors the behavior of another. The resulting communities of simulated brains then engage in a protracted planning process, ensuring that subsequent steps can be carried out safely or developing alternative approaches. The main priority of this community is to reduce the probability of errors as far as possible (exactly what constitutes an ‘error’ will be discussed at more length later). At the end of this process, we obtain a formal definition of a new protocol H+, which submits its inputs for consideration to a large community and then produces its outputs using some deliberation mechanism (democratic vote, one leader using the rest of the community as advisors, etc.) The next step requires our community of simulated brains to construct a detailed simulation of Earth which they can observe and manipulate. Once they have such a simulation, they have access to all of the data which would have been available on Earth. In particular, they can now explore many possible futures and construct simulations for each living human. In order to locate Earth, we will again leverage an exhaustive search. First, H+ decides on informal desiderata for an “Earth simulation.” These are likely to be as follows: Once H+ has decided on the desiderata, it uses a brute force search to find a simulation satisfying them: for each possible program it instantiates a new copy of H+ tasked with evaluating whether that program is an acceptable simulation. We then define E to be a uniform distribution over programs which pass this evaluation. We might have doubts about whether this process produces the “real” Earth–perhaps even once we have verified that it is identical according to a laundry list of measures, it may still be different in other important ways. There are two reasons why we might care about such differences. First, if the simulated Earth has a substantially different set of people than the real Earth, then a different set of people will be involved in the subsequent decision making. 
If we care particularly about the opinions of the people who actually exist (which the reader might well do, being amongst such people!) then this may be unsatisfactory. Second, if events transpire significantly differently on the simulated Earth than on the real Earth, value judgments designed to guide behavior appropriately in the simulated Earth may lead to less appropriate behaviors in the real Earth. (This will not be a problem if our ultimate definition of U consists of universalizable ethical principles, but we will see that U might take other forms.) These concerns are addressed by a few broad arguments. First, checking a detailed but arbitrary ‘laundry list’ actually provides a very strong guarantee. For example, if this laundry list includes verifying a snapshot of the Internet, then every event or person documented on the Internet must exist unchanged, and every keystroke of every person composing a document on the Internet must not be disturbed. If the world is well interconnected, then it may be very difficult to modify parts of the world without having substantial effects elsewhere, and so if a long enough arbitrary list of properties is fixed, we expect nearly all of the world to be the same as well. Second, if the essential character of the world is fixed but details are varied, we should expect the sort of moral judgments reached by consensus to be relatively constant. Finally, if the system whose behavior depends on these moral judgments is identical between the real and simulated worlds, then outputting a U which causes that system to behave a certain way in the simulated world will also cause that system to behave that way in the real world. Once H+ has defined a simulation of the world which permits inspection and intervention, by careful trial and error H+ can inspect a variety of possible futures. In particular, they can find interventions which cause the simulated human society to conduct a real brain emulation project and produce high-fidelity brain scans for all living humans. Once these scans have been obtained, H+ can use them to define U as the output of a new community, H++, which draws on the expertise of all living humans operating under ideal conditions. There are two important degrees of flexibility: how to arrange the community for efficient communication and deliberation, and how to delegate the authority to define U. In terms of organization, the distinction between different approaches is probably not very important. For example, it would probably be perfectly satisfactory to start from a community of humans interacting with each other over something like the existing Internet (but on abstract, secure infrastructure). More important are the safety measures which would be in place, and the mechanism for resolving differences of value between different simulated humans. The basic approach to resolving disputes is to allow each human to independently create a utility function U, each bounded in the interval [0, 1], and then to return their average. This average can either be unweighted, or can be weighted by a measure of each individual’s influence in the real world, in accordance with a game-theoretic notion like the Shapley value applied to abstract games or simulations of the original world. More sophisticated mechanisms are also possible, and may be desirable. Of course these questions can and should be addressed in part by H+ during its deliberation in the previous step.
After all, H+ has access to an unlimited length of time to deliberate and has infinitely powerful computational aids. The role of our reasoning at this stage is simply to suggest that we can reasonably expect H+ to discover effective solutions. As when discussing discovering a brain simulation by brute force, we have skipped over some critical issues in this section. In general, brute force searches (particularly over programs which we would like to run) are quite dangerous, because such searches will discover many programs with destructive goal-oriented behaviors. To deal with these issues, in both cases, we must rely on patience and powerful safety measures. Once we have a formal description of a community of interacting humans, given as much time as necessary to deliberate and equipped with infinitely powerful computational aids, it becomes increasingly difficult to make coherent predictions about their behavior. Critically, though, we can also become increasingly confident that the outcome of their behavior will reflect their intentions. We sketch some possibilities, to illustrate the degree of flexibility available. Perhaps the most natural possibility is for this community to solve some outstanding philosophical problems and to produce a utility function which directly captures their preferences. However, even if they quickly discovered a formulation which appeared to be attractive, they would still be wise to spend a great length of time and to leverage some of these other techniques to ensure that their proposed solution was really satisfactory. Another natural possibility is to eschew a comprehensive theory of ethics, and define value in terms of the community’s judgment. We can define a utility function in terms of the hypothetical judgments of astronomical numbers of simulated humans, collaboratively evaluating the goodness of a state of affairs by examining its history at the atomic level, understanding the relevant higher-order structure, and applying human intuitions. It seems quite likely that the community will gradually engage in self-modifications, enlarging their cognitive capacity along various dimensions as they come to understand the relevant aspects of cognition and judge such modifications to preserve their essential character. Either independently or as an outgrowth of this process, they may (gradually or abruptly) pass control to machine intelligences which they are suitably confident expresses their values. This process could be used to acquire the power necessary to define a utility function in one of the above frameworks, or understanding value-preserving self-modification or machine intelligence may itself prove an important ingredient in formalizing what it is we value. Any of these operations would be performed only after considerable analysis, when the original simulated humans were extremely confident in the desirability of the results. Whatever path they take and whatever coordination mechanisms they use, eventually they will output a utility function U’. We then define U = 0 if U’ < 0, U = 1 if U’ > 1, and U = U’ otherwise. At this point we have offered a proposal for formally defining a function U. We have made some general observations about what this definition entails. But now we may wonder to what extent U reflects our values, or more relevantly, to what extent our values are served by the creation of U-maximizers. Concerns may be divided into a few natural categories: We respond to each of these objections in turn. 
If the process works as intended, we will reach a stage in which a large community of humans reflects on their values, undergoes a process of discovery and potentially self-modification, and then outputs its result. We may be concerned that this dynamic does not adequately capture what we value. For example, we may believe that some other extrapolation dynamic captures our values, or that it is morally desirable to act on the basis of our current beliefs without further reflection, or that the presence of realistic disruptions, such as the threat of catastrophe, has an important role in shaping our moral deliberation. The important observation, in the defense of our proposal, is that whatever objections we could think of today, we could think of within the simulation. If, upon reflection, we decide that too much reflection is undesirable, we can simply change our plans appropriately. If we decide that realistic interference is important for moral deliberation, we can construct a simulation in which such interference occurs, or determine our moral principles by observing moral judgments in our own world’s possible futures. There is some chance that this proposal is inadequate for some reason which won’t be apparent upon reflection, but then by definition this is a fact which we cannot possibly hope to learn by deliberating now. It therefore seems quite difficult to maintain objections to the proposal along these lines. One aspect of the proposal does get “locked in,” however, after being considered by only one human rather than by a large civilization: the distribution of authority amongst different humans, and the nature of mechanisms for resolving differing value judgments. Here we have two possible defenses. One is that the mechanism for resolving such disagreements can be reflected on at length by the individual simulated in H. This individual can spend generations of subjective time, and greatly expand her own cognitive capacities, while attempting to determine the appropriate way to resolve such disagreements. However, this defense is not completely satisfactory: we may be able to rely on this individual to produce a very technically sound and generally efficient proposal, but the proposal itself is quite value laden and relying on one individual to make such a judgment is in some sense begging the question. A second, more compelling, defense, is that the structure of our world has already provided a mechanism for resolving value disagreements. By assigning decision-making weight in a way that depends on current influence (for example, as determined by the simulated ability of various coalitions to achieve various goals), we can generate a class of proposals which are at a minimum no worse than the status quo. Of course, these considerations will also be shaped by the conditions surrounding the creation or maintenance of systems which will be guided by U–for example, if a nation were to create a U-maximizer, they might first adopt an internal policy for assigning influence on U. By performing this decision making in an idealized environment, we can also reduce the likelihood of destructive conflict and increase the opportunities for mutually beneficial bargaining. 
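As a small illustration of the kind of game-theoretic weighting gestured at here (the coalition values below are invented; in the proposal they would come from simulations of what each coalition could achieve in the original world), the Shapley value assigns each person a weight equal to their average marginal contribution over all orderings:

from itertools import permutations

# Illustrative Shapley-value weighting. 'ability' is an invented characteristic function:
# the value a coalition of people could achieve, here on an arbitrary 0-1 scale.
ability = {
    frozenset(): 0.0,
    frozenset('a'): 0.1, frozenset('b'): 0.1, frozenset('c'): 0.3,
    frozenset('ab'): 0.4, frozenset('ac'): 0.5, frozenset('bc'): 0.6,
    frozenset('abc'): 1.0,
}
players = ['a', 'b', 'c']

def shapley_values(players, v):
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            totals[p] += v[coalition | {p}] - v[coalition]   # marginal contribution
            coalition = coalition | {p}
    return {p: total / len(orderings) for p, total in totals.items()}

print(shapley_values(players, ability))   # the weights sum to ability[frozenset('abc')]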
We may have moral objections to codifying this sort of "might makes right" policy, favoring a more democratic proposal or something else entirely, but as a matter of empirical fact a more 'cosmopolitan' proposal will be adopted only if it is supported by those with the appropriate forms of influence, a situation which is unchanged by precisely codifying the existing power structure. Finally, the values of the simulations in this process may diverge from the values of the original human models, for one reason or another. For example, the simulated humans may predictably disagree with the original models about ethical questions by virtue of (probably) having no physical instantiation. That is, the output of this process is defined in terms of what a particular human would do, in a situation which that human knows will never come to pass. If I ask "What would I do, if I were to wake up in a featureless room and be told that the future of humanity depended on my actions?" the answer might begin with "become distressed that I am clearly inhabiting a hypothetical situation, and adjust my ethical views to take into account the fact that people in hypothetical situations apparently have relevant first-person experience." Setting aside the question of whether such adjustments are justified, they at least raise the possibility that our values may diverge from those of the simulations in this process. These changes might be minimized by understanding their nature in advance and treating them on a case-by-case basis (if we can become convinced that our understanding is exhaustive). For example, we could try to use humans who robustly employ updateless decision theories and so never undergo such predictable changes, or we could attempt to engineer a situation in which all of the humans being emulated do have physical instantiations, and naive self-interest for those emulations aligns roughly with the desired behavior (for example, by allowing the early emulations to "write themselves into" our world). We can imagine many ways in which this process can fail to work as intended: the original brain emulations may fail to accurately model human behavior, the original subject may deviate from the intended plans, or the simulated humans may make an error when interacting with their virtual environment which causes the process to get hijacked by some unintended dynamic. We can argue that the proposal is likely to succeed, and can bolster the argument in various ways (by reducing the number of assumptions necessary for success, building in fault-tolerance, justifying each assumption more rigorously, and so on). However, we are unlikely to eliminate the possibility of error. Therefore we need to argue that if the process fails with some small probability, the resulting values will only be slightly disturbed. This is the reason for requiring U to lie in the interval [0, 1]: we will see that this restriction bounds the damage which may be done by an unlikely failure. If the process fails with some small probability ε, then we can represent the resulting utility function as U = (1 − ε) U1 + ε U2, where U1 is the intended utility function and U2 is a utility function produced by some arbitrary error process. Now consider two possible states of affairs A and B such that U1(A) > U1(B) + ε/(1 − ε) ≈ U1(B) + ε. Then since 0 ≤ U2 ≤ 1, we have U(A) = (1 − ε) U1(A) + ε U2(A) > (1 − ε) U1(B) + ε ≥ (1 − ε) U1(B) + ε U2(B) = U(B). Thus if A is substantially better than B according to U1, then A is better than B according to U. 
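For readers who want the bound stated compactly, here is the same argument written out in standard notation; nothing is added beyond the inequalities in the paragraph above, with U1 the intended utility function and U2 the arbitrary error term:

```latex
U = (1-\varepsilon)\,U_1 + \varepsilon\,U_2, \qquad 0 \le U_2 \le 1.

\text{If } U_1(A) > U_1(B) + \frac{\varepsilon}{1-\varepsilon}, \text{ then}

U(A) \;\ge\; (1-\varepsilon)\,U_1(A)
     \;>\;   (1-\varepsilon)\,U_1(B) + \varepsilon
     \;\ge\; (1-\varepsilon)\,U_1(B) + \varepsilon\,U_2(B)
     \;=\;   U(B).
```

So any advantage of more than ε/(1 − ε) ≈ ε under the intended utility function survives the error term.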
This shows that a small probability of error, whether coming from the stochasticity of our process or an agent's uncertainty about the process' output, has only a small effect on the resulting values. Moreover, the process contains humans who have access to a simulation of our world. This implies, in particular, that they have access to a simulation of whatever U-maximizing agents exist in the world, and they have knowledge of those agents' beliefs about U. This allows them to choose U with perfect knowledge of the effects of error in these agents' judgments. In some cases this will allow them to completely negate the effect of error terms. For example, if the randomness in our process causes a perfectly cooperative community of simulated humans to "control" U with probability 2⁄3, and causes an arbitrary adversary to control it with probability 1⁄3, then the simulated humans can spend half of their mass outputting a utility function which exactly counters the effect of the adversary. In general, the situation is not quite so simple: the fraction of mass controlled by any particular coalition will vary as the system's uncertainty about U varies, and so it will be impossible to counteract the effect of an error term in a way which is time-independent. Instead, we will argue later that an appropriate choice of a bounded and noisy U can be used to achieve a very wide variety of effective behaviors of U-maximizers, overcoming the limitations both of bounded utility maximization and of noisy specification of utility functions. Many possible problems with this scheme were described or implicitly addressed above. But that discussion was not exhaustive, and there are some classes of errors that fall through the cracks. One interesting class of failures concerns changes in the values of the hypothetical human H. This human is in a very strange situation, and it seems quite possible that the physical universe we know contains extremely few instances of that situation (especially as the process unfolds and becomes more exotic). So H's first-person experience of this situation may lead to significant changes in H's views. For example, our intuition that our own universe is valuable seems to be derived substantially from our judgment that our own first-person experiences are valuable. If hypothetically we found ourselves in a very alien universe, it seems quite plausible that we would judge the experiences within that universe to be morally valuable as well (depending perhaps on our initial philosophical inclinations). Another example concerns our self-interest: much of individual humans' values seem to depend on their own anticipations about what will happen to them, especially when faced with the prospect of very negative outcomes. If hypothetically we woke up in a completely non-physical situation, it is not exactly clear what we would anticipate, and this may distort our behavior. Would we anticipate the thought experiment unfolding as planned? Would we focus our attention on those locations in the universe where a simulation of the thought experiment might be occurring? This possibility is particularly troubling in light of the incentives our scheme creates: anyone who can manipulate H's behavior can have a significant effect on the future of our world, and so many may be motivated to create simulations of H. 
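To make the error-cancellation example above concrete, here is one way the 2⁄3 / 1⁄3 case can be worked out. This reading is an illustration, not something spelled out in the original text; write U_int for the community's intended utility function and U_adv for the adversary's output. If the community devotes half of its 2⁄3 share to outputting 1 − U_adv, then

```latex
U = \tfrac{1}{3}\,U_{\mathrm{int}}
  + \tfrac{1}{3}\,(1 - U_{\mathrm{adv}})
  + \tfrac{1}{3}\,U_{\mathrm{adv}}
  = \tfrac{1}{3}\,U_{\mathrm{int}} + \tfrac{1}{3}.
```

The result is a positive affine transformation of U_int that still lies in [0, 1], so a U-maximizer behaves exactly as a U_int-maximizer would and the adversary's contribution is cancelled.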
A realistic U-maximizer will not be able to carry out the process described in the definition of U–in fact, this process probably requires immensely more computing resources than are available in the universe. (It may even involve the reaction of a simulated human to watching a simulation of the universe!) To what extent can we make robust guarantees about the behavior of such an agent? We have already touched on this difficulty when discussing the maxim “A state of affairs is valuable to the extent I would judge it valuable after a century of reflection.” We cannot generally predict our own judgments in a hundred years’ time, but we can have well-founded beliefs about those judgments and act on the basis of those beliefs. We can also have beliefs about the value of further deliberation, and can strike a balance between such deliberation and acting on our current best guess. A U-maximizer faces a similar set of problems: it cannot understand the exact form of U, but it can still have well-founded beliefs about U, and about what sorts of actions are good according to U. For example, if we suppose that the U-maximizer can carry out any reasoning that we can carry out, then the U-maximizer knows to avoid anything which we suspect would be bad according to U (for example, torturing humans). Even if the U-maximizer cannot carry out this reasoning, as long as it can recognize that humans have powerful predictive models for other humans, it can simply appropriate those models (either by carrying out reasoning inspired by human models, or by simply asking). Moreover, the community of humans being simulated in our process has access to a simulation of whatever U-maximizer is operating under this uncertainty, and has a detailed understanding of that uncertainty. This allows the community to shape their actions in a way with predictable (to the U-maximizer) consequences. It is easily conceivable that our values cannot be captured by a bounded utility function. Easiest to imagine is the possibility that some states of the world are much better than others, in a way that requires unbounded utility functions. But it is also conceivable that the framework of utility maximization is fundamentally not an appropriate one for guiding such an agent’s action, or that the notion of utility maximization hides subtleties which we do not yet appreciate. We will argue that it is possible to transform bounded utility maximization into an arbitrary alternative system of decision-making, by designing a utility function which rewards worlds in which the U-maximizer replaced itself with an alternative decision-maker. It is straightforward to design a utility function which is maximized in worlds where any particular U-maximizer converted itself into a non-U-maximizer–even if no simple characterization can be found for the desired act, we can simply instantiate many communities of humans to look over a world history and decide whether or not they judge the U-maximizer to have acted appropriately. The more complicated question is whether a realistic U-maximizer can be made to convert itself into a non-U-maximizer, given that it is logically uncertain about the nature of U. It is at least conceivable that it couldn’t: if the desirability of some other behavior is only revealed by philosophical considerations which are too complex to ever be discovered by physically limited agents, then we should not expect any physically limited U-maximizer to respond to those considerations. 
Of course, in this case we could also not expect normal human deliberation to correctly capture our values. The relevant question is whether a U-maximizer could switch to a different normative framework, if an ordinary investment of effort by human society revealed that a different normative framework was more appropriate. If a U-maximizer does not spend any time investigating this possibility, than it may not be expected to act on it. But to the extent that we assign a significant probability to the simulated humans deciding that a different normative framework is more appropriate, and to the extent that the U-maximizer is able to either emulate or accept our reasoning, it will also assign a significant probability to this possibility (unless it is able to rule it out by more sophisticated reasoning). If we (and the U-maximizer) expect the simulations to output a U which rewards a switch to a different normative framework, and this possibility is considered seriously, then U-maximization entails exploring this possibility. If these explorations suggest that the simulated humans probably do recommend some particular alternative framework, and will output a U which assigns high value to worlds in which this framework is adopted and low value to worlds in which it isn’t, then a U-maximizer will change frameworks. Such a “change of frameworks” may involve sweeping action in the world. For example, the U-maximizer may have created many other agents which are pursuing activities instrumentally useful to maximizing U. These agents may then need to be destroyed or altered; anticipating this possibility, the U-maximizer is likely to take actions to ensure that its current “best guess” about U does not get locked in. This argument suggests that a U-maximizer could adopt an arbitrary alternative framework, if it were feasible to conclude that humans would endorse that framework upon reflection. Our proposal appears to be something of a cop out, in that it declines to directly take a stance on any ethical issues. Indeed, not only do we fail to specify a utility function ourselves, but we expect the simulations to which we have delegated the problem to in turn delegate it at least a few more times. Clearly at some point this process must bottom out with actual value judgments, and we may be concerned that this sort of “passing the buck” is just obscuring deeper problems which will arise when the process does bottom out. As observed above, whatever such concerns we might have can also be discovered by the simulations we create. If there is some fundamental difficulty which always arises when trying to assign values, then we certainly have not exacerbated this problem by delegation. Nevertheless, there are at least two coherent objections one might raise: Both of these objections can be met with a single response. In the current world, we face a broad range of difficult and often urgent problems. By passing the buck the first time, we delegate resolution of ethical challenges to a civilization which does not have to deal with some of these difficulties–in particular, it faces no urgent existential threats. This allows us to divert as much energy as possible to dealing with practical problems today, while still capturing most of the benefits of nearly arbitrarily extensive ethical deliberation. This process is defined in terms of the behavior of unthinkably many hypothetical brain emulations. It is conceivable that the moral status of these emulations may be significant. 
We must make a distinction between two possible sources of moral value: it could be the case that a U-maximizer carries out simulations on physical hardware in order to better understand U, and these simulations have moral value, or it could be the case that the hypothetical emulations themselves have moral value. In the first case, we can remark that the moral value of such simulations is itself incorporated into the definition of U. Therefore a U-maximizer will be sensitive to the possible suffering of simulations it runs while trying to learn about U–as long as it believes that we might be concerned about the simulations' welfare, upon reflection, it can rely as much as possible on approaches which do not involve running simulations, which deprive simulations of the first-person experience of discomfort, or which estimate outcomes by running simulations in more pleasant circumstances. If the U-maximizer is able to foresee that we will consider certain sacrifices in simulation welfare worthwhile, then it will make those sacrifices. In general, in the same way that we can argue that estimates of U reflect our values over states of affairs, we can argue that estimates of U reflect our values over processes for learning about U. In the second case, a U-maximizer in our world may have little ability to influence the welfare of hypothetical simulations invoked in the definition of U. However, the possible disvalue of these simulations' experiences is probably seriously diminished. In general the moral value of such hypothetical simulations' experiences is somewhat dubious. If we simply write down the definition of U, these simulations seem to have no more reality than story-book characters whose activities we describe. The best arguments for their moral relevance come from the great causal significance of their decisions: if the actions of a powerful U-maximizer depend on its beliefs about what a particular simulation would do in a particular situation, including for example that simulation's awareness of discomfort or fear, or confusion at the absurdity of the hypothetical situation in which they find themselves, then it may be the case that those emotional responses are granted moral significance. However, although we may define astronomical numbers of hypothetical simulations, the detailed emotional responses of very few of these simulations will play an important role in the definition of U. Moreover, for the most part the existences of the hypothetical simulations we define are extremely well-controlled by those simulations themselves, and they may be expected to count as unusually happy by the lights of the simulations themselves. The early simulations (who have less such control) are created from an individual who has provided consent and is selected to find such situations particularly non-distressing. Finally, we observe that U can exert control over the experiences of even hypothetical simulations. If the early simulations would experience morally relevant suffering because of their causal significance, but the later simulations they generate robustly disvalue this suffering, the later simulations can simulate each other and ensure that they all take the same actions, eliminating the causal significance of the earlier simulations. Originally published at ordinaryideas.wordpress.com on April 21, 2012. 
" Robbie Tilton,3,15,https://medium.com/@robbietilton/emotional-computing-with-ai-3513884055fa?source=tag_archive---------4----------------,Emotional Computing – Robbie Tilton – Medium,"Investigating the human to computer relationship through reverse engineering the Turing test Humans are getting closer to creating a computer with the ability to feel and think. Although the processes of the human brain are at large unknown, computer scientists have been working to simulate the human capacity to feel and understand emotions. This paper explores what it means to live in an age where computers can have emotional depth and what this means for the future of human to computer interactions. In an experiment between a human and a human disguised as a computer, the Turing test is reverse engineered in order to understand the role computers will play as they become more adept to the processes of the human mind. Implications for this study are discussed and the direction for future research suggested. The computer is a gateway technology that has opened up new ways of creation, communication, and expression. Computers in first world countries are a standard household item (approximately 70% of Americans owning one as of 2009 (US Census Bereau)) and are utilized as a tool to achieve a diverse range of goals. As this product continues to become more globalized, transistors are becoming smaller, processors are becoming faster, hard drives are holding information in new networked patterns, and humans are adapting to the methods of interaction expected of machines. At the same time, with more powerful computers and quicker means of communication — many researchers are exploring how a computer can serve as a tool to simulate the brains cognition. If a computer is able to achieve the same intellectual and emotional properties as the human brain — we could potentially understand how we ourselves think and feel. Coined by MIT, the term Affective Computing relates to computation of emotion or the affective phenomena and is a study that breaks down complex processes of the brain relating them to machine-like activities. Marvin Minsky, Rosalind Picard, Clifford Nass, and Scott Brave — along with many others — have contributed to this field and what it would mean to have a computer that could fully understand its users. In their research it is very clear that humans have the capacity to associate human emotions and personality traits with a machine (Nass and Brave, 2005), but can a human ever truly treat machine as a person? In this paper we will uncover what it means for humans to interact with machines of greater intelligence and attempt to predict the future of human to computer interactions. The human to computer relationship is continuously evolving and is dependent on the software interface users interact with. With regards to current wide scale interfaces — OSX, Windows, Linux, iOS, and Android — the tools and abilities that a computer provide remains to be the central focus of computational advancements for commercial purposes. This relationship to software is driven by utilitarian needs and humans do not expect emotional comprehension or intellectually equivalent thoughts in their household devices. 
As face tracking, eye tracking, speech recognition, and kinetic recognition advance in experimental laboratories, it is anticipated that these technologies will eventually make their way to the mainstream market to provide a new relationship to what a computer can understand about its users and how a user can interact with a computer. This paper is not about whether a computer will have the ability to feel and love its user, but asks the question: to what capacity will humans be able to reciprocate feelings to a machine? How does Intelligence Quotient (IQ) differ from Emotional Quotient (EQ)? An IQ is a representational relationship of intelligence that measures cognitive abilities like learning, understanding, and dealing with new situations. An EQ is a method of measuring emotional intelligence and the ability to use both emotions and cognitive skills (Cherry). Advances in computer IQ have been astonishing and have proved that machines are capable of answering difficult questions accurately, are able to hold a conversation with human-like understanding, and allow for emotional connections between a human and a machine. The Turing test in particular has shown the machine's ability to think and even fool a person into believing that it is a human (the Turing test is explained in detail in section 4). Machines like Deep Blue, Watson, Eliza, Svetlana, CleverBot, and many more have all expanded the perceptions of what a computer is and can be. If an increased computational IQ can allow a human to computer relationship to feel more like a human to human interaction, what would the advancement of computational EQ bring us? Peter Robinson, a professor at the University of Cambridge, states that if a computer understands its users' feelings, it can then respond with an interaction that is more intuitive for its users (Robinson). In essence, EQ advocates feel that it can facilitate a more natural interaction process where collaboration can occur with a computer. In Alan Turing's Computing Machinery and Intelligence (Turing, 1950), a variant on the classic British parlor "imitation game" is proposed. The original game revolves around three players: a man (A), a woman (B), and an interrogator (C). The interrogator stays in a room apart from A and B and can only communicate with the participants through text-based communication (a typewriter or instant messenger style interface). When the game begins one contestant (A or B) is asked to pretend to be the opposite gender and to try to convince the interrogator (C) of this. At the same time the opposing participant is given full knowledge that the other contestant is trying to fool the interrogator. With his computational background, Turing took this imitation game one step further by replacing one of the participants (A or B) with a machine, thus making the interrogator try to determine whether he or she was speaking to a human or a machine. In 1950, Turing proposed that by 2000 the average interrogator would not have more than a 70 percent chance of making the right identification after five minutes of questioning. The Turing test was first passed in 1966, with Eliza by Joseph Weizenbaum, a chat robot programmed to act like a Rogerian psychotherapist (Weizenbaum, 1966). In 1972, Kenneth Colby created a similar bot called PARRY that incorporated more personality than Eliza and was programmed to act like a paranoid schizophrenic (Bowden, 2006). 
Since these initial victories for the test, the 21st century has proven to continue to provide machines with more human-like qualities and traits that have made people fall in love with them, convinced them of being human, and have human-like reasoning. Brian Christian, the author of The Most Human Human, argues that the problem with designing artificial intelligence with greater ability is that even though these machines are capable of learning and speaking, that they have no “self”. They are mere accumulations of identities and thoughts that are foreign to the machine and have no central identity of their own. He also argues that people are beginning to idealize the machine and admire machines capabilities more than their fellow humans — in essence — he argues humans are evolving to become more like machines with less of a notion of self (Christian 2011). Turing states, “we like to believe that Man is in some subtle way superior to the rest of creation” and “it is likely to be quite strong in intellectual people, since they value the power of thinking more highly than others, and are more inclined to base their belief in the superiority of Man on this power.” If this is true, will humans idealize the future of the machine for its intelligence or will they remain an inferior being as an object of our creation? Reversing the Turing test allows us to understand how humans will treat machines when machines provide an equivalent emotional and intellectual capacity. This also hits directly on Jefferson Lister’s quote, “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain-that is, not only write it but know that it had written it.” Participants were given a chat-room simulation between two participants (A) a human interrogator and (B) a human disguised as a computer. In this simulation A and B were both placed in different rooms to avoid influence and communicated through a text-based interface. (A) was informed that (B) was an advanced computer chat-bot with the capacity to feel, understand, learn, and speak like a human. (B) was informed to be his or herself. Text-based communication was chosen to follow Turing’s argument that a computers voice should not help an interrogator determine if it’s a human or computer. Pairings of participants were chosen to participate in the interaction one at a time to avoid influence from other participants. Each experiment was five minutes in length to replicate Turing’s time restraints. Twenty-eight graduate students were recruited from the NYU Interactive Telecommunications Program to participate in the study — 50% male and 50% female. The experiment was evenly distributed across men and women. After being recruited in-person, participants were directed to a website that gave instructions and ran the experiment. Upon entering the website, (A) participants were told that we were in the process of evaluating an advanced cloud based computing system that had the capacity to feel emotion, understand, learn, and converse like a human. (B) participants were instructed that they would be communicating with another person through text and to be themselves. They were also told that participant (A) thinks they are a computer, but that they shouldn’t act like a computer or pretend to be one in any way. 
This allowed (A) to explicitly understand that they were talking to a computer while (B) knew (A) perspective and explicitly were not going to play the role of a computer. Participants were then directed to communicate with the bot or human freely without restrictions. After five minutes of conversation the participants were asked to stop and then filled out a questionnaire. Participants were asked to rate IQ and EQ of the person they were conversing with. (A) participants perceived the following of (B): IQ: 0% — Not Good / 0% — Barely Acceptable / 21.4% — Okay / 50% — Great / 28.6% Excellent IQ Average Rating: 81.4% EQ: 0% — Not Good / 7.1% — Barely Acceptable / 50% — Okay / 14.3% — Great / 28.6% — Excellent EQ Average Rating: 72.8% Ability to hold a conversation: 0% — Not Good / 0% — Barely Acceptable / 28.6% — Okay / 35.7% — Great / 35.7% — Excellent Ability to hold a conversation Average: 81.4% (B) participants perceived the following of (A): IQ: 0% — Not Good / 21.4% — Barely Acceptable / 35.7% — Okay / 28.6% — Great / 14.3% Excellent IQ Average Rating: 67% EQ: 7.1% — Not Good / 14.3% — Barely Acceptable / 28.6% — Okay / 35.7% — Great / 14.3% — Excellent EQ Average Rating: 67% Ability to hold a conversation: 7.1% — Not Good / 28.6% — Barely Acceptable / 35.7% — Okay / 0% — Great / 28.6% — Excellent Ability to hold a conversation Average: 62.8% Overall, (A) participants gave the perceived Chabot higher ratings than (B) participants gave (A). In particular, the highest rating was in regards to the chat- bot’s IQ. This data states that people viewed the chat-bot to be more intellectually competent. It also implies that people talking with bots decrease their IQ, EQ, and conversation ability when communicating with computers. (A) participants were allowed to decide their username within the chat system to best reflect how they wanted to portray themselves to the machine. (B) participants were designated the gender neutral name “Bot” in an attempt to ganger gender perceptions for the machine. The male to female ratio was divided evenly with all participants: 50% being male and 50% being female. (A) participants 50% of the time thought (B) was a male, 7.1% a female, and 42.9% gender neutral. On the other hand, (B) participants 28.6% of the time thought (A) was a male, 57.1% a female, and 14.3% gender neutral. The usernames (A) chose are as follows: Hihi, Inessah Somade3 Willzing Jihyun, G, Ann, Divagrrl93, Thisdoug, Jono, Minion10, P, 123, itslynnburke From these results, it is clear that people associate the male gender and gender neutrality with machines. It also demonstrates that people modify their identities when speaking with machines. (B) participants were asked if they would like to pursue a friendship with the person they chatted with. 50% of participants responded affirmatively that they would indeed like to pursue a friendship while 50% said maybe or no. One response stated, “I would like to continue the conversation, but I don’t think I would be enticed to pursue a friendship.” Another responded, “Maybe? I like people who are intellectually curious, but I worry that the person might be a bit of a smart-ass.” Overall the participant disguised as a machine may or may not pursue a friendship after five minutes of text-based conversation. (B) participants were also asked if they felt (A) cared about their feelings. 
21.4% stated that (A) indeed did care about their feelings, 21.4% stated that they weren’t sure if (A) cared about their feelings, and 57.2% stated that (A) did not care about their feelings. These results indicate a user’s lack of attention to (B)’s emotional state. (A) participants were asked what they felt could be improved about the (B) participants. The following improvements were noted, “Should be funny” “Give it a better sense of humor” “It can be better if he knows about my friends or preference” “The response was inconsistent and too slow”“It should share more about itself. Your algorithm is prime prude, just like that LETDOWN Siri. Well, I guess I liked it better, but it should be more engaged and human consistency, not after the first cold prompt.” “It pushed me on too many questions” “I felt that it gave up on answering and the response time was a bit slow. Outsource the chatbot to fluent English speakers elsewhere and pretend they are bots — if the responses are this slow to this many inquiries, then it should be about the same experience.” “I was very impressed with its parsing ability so far. Not as much with its reasoning. I think some parameters for the conversation would help, like ‘Ask a question’” “Maybe make the response faster”“I was confused at first, because I asked a question, waited a bit, then asked another question, waited and then got a response from the bot...” The responses from this indicate that even if a computer is a human that its user may not necessarily be fully satisfied with its performance. The response implies that each user would like the machine to accommodate his or her needs in order to cause less personality and cognitive friction. With several participant comments incorporating response time, it also indicates people expect machines to have consistent response times. Humans clearly vary in speed when listening, thinking, and responding, but it is expected of machines to act in a rhythmic fashion. It also suggests that there is an expectation that a machine will answer all questions asked and will not ask its users more questions than perceived necessary. (A) participants were asked if they felt (B)’s Artificial Intelligence could improve their relationship to computers if integrated in their daily products. 57.1% of participants responded affirmatively that they felt this could improve their relationship:“Well- I think I prefer talking to a person better. But yes for ipod, smart phones, etc. would be very handy for everyday use products”“Yes. Especially iphone is always with me. So it can track my daily behaviors. That makes the algorithm smarter”“Possibly, I should have queries it for information that would have been more relevant to me”“Absolutely!”“Yes” The 42.9% which responded negatively had doubts that it would be necessary or desirable:“Not sure, it might creep me out if it were.”“I like Siri as much as the next gal, but honestly we’re approaching the uncanny valley now.”“Its not clear to me why this type of relationship needs to improve, i think human relationships still need a lot of work.”“Nope, I still prefer flesh sacks.“No” The findings of the paper are relevant to the future of Affective Computation: whether a super computer with a human-like IQ and EQ can improve the human-to-computer interaction. The uncertainty of computational equivalency that Turing brought forth is indeed an interesting starting point to understand what we want out of the future of computers. 
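As a side note on the average ratings reported above: the percentages are consistent with a five-point scale (Not Good = 1 through Excellent = 5) whose weighted mean is expressed as a fraction of the maximum score. The paper does not state this explicitly, so it is an inference from the numbers; the short check below reproduces the reported figures to within rounding under that assumption.

```python
# Reproduce the reported average ratings under an assumed 1-5 scale
# (Not Good=1, Barely Acceptable=2, Okay=3, Great=4, Excellent=5),
# expressed as a percentage of the maximum score of 5.
def average_rating(fractions):
    scale = [1, 2, 3, 4, 5]
    return 100 * sum(f * s for f, s in zip(fractions, scale)) / 5

# (A) participants rating the perceived chatbot (B)
print(average_rating([0, 0, 0.214, 0.50, 0.286]))    # IQ: ~81.4, matching the reported 81.4%
print(average_rating([0, 0.071, 0.50, 0.143, 0.286]))  # EQ: ~72.9, reported as 72.8% (rounding)
```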
The responses from the experiment affirm gender perceptions of machines and show how we display ourselves to machines. It seems that we limit our intelligence, limit our emotions, and obscure our identities when communicating to a machine. This leads us to question if we would want to give our true self to a computer if it doesn’t have a self of its own. It also could indicate that people censor themselves for machines because they lack a similarity that bonds humans to humans or that there’s a stigma associated with placing information in a digital device. The inverse relationship is also shown through the data that people perceive a bots IQ, EQ, and discussion ability to be high. Even though the chat-bot was indeed a human this data can imply humans perceive bots to not have restrictions and to be competent at certain procedures. The results also imply that humans aren’t really sure what they want out of Artificial Intelligence in the future and that we are not certain that an Affective computer would even enjoy a users company and/or conversation. The results also state that we currently think of computers as a very personal device that should be passive (not active), but reactive when interacted with. It suggests a consistent reliability we expect upon machines and that we expect to take more information from a machine than it takes from us. A major limitation of this experiment is the sample size and sample diversity. The sample size of twenty-eight students is too small to fully understand and gather a stable result set. It was also only conducted with NYU: Interactive Telecommunications Students who all have extensive experience with computers and technology. To get a more accurate assessment of emotions a more diverse sample range needs to be taken. Five minutes is a short amount of time to create an emotional connection or friendship. To stay true to the Turing tests limitations this was enforced, but further relational understanding could be understood if more time was granted. Beside the visual interface of the chat window it would be important to show the emotions of participant (B) through a virtual avatar. Not having this visual feedback could have limited emotional resonance with participants (A). Time is also a limitation. People aren’t used to speaking to inquisitive machines yet and even through a familiar interface (a chat-room) many participants haven’t held conversations with machines previously. Perhaps if chat-bots become more active conversational participants’ in commercial applications users will feel less censored to give themselves to the conversation. In addition to the refinements noted in the limitations described above, there are several other experiments for possible future studies. For example, investigating a long-term human-to-bot relationship. This would provide a better understanding toward the emotions a human can share with a machine and how a machine can reciprocate these emotions. It would also better allow computer scientists to understand what really creates a significant relationship when physical limitations are present. Future studies should attempt to push these results further by understanding how a larger sample reacts to a computer algorithm with higher intellectual and emotional understanding. It should also attempt to understand the boundaries of emotional computing and what is ideal for the user and what is ideal for the machine without compromising either parties capacities. 
This paper demonstrates the diverse range of emotions that people can feel for affective computation and indicates that we are not in a time where computational equivalency is fully desired or accepted. Positive reactions indicate that there is optimism for more adept artificial intelligence and that there is interest in the field for commercial use. It also provides insight that humans limit themselves when communicating with machines and that inversely machines don’t limit themselves when communicating with humans. Books & ArticlesBowden M., 2006, Minds as Machine: A History of Cognitive Science, Oxford University Press Christian B., 2011, The Most Human Human Marvin M., 2006. The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind, Simon & Schuster Paperbacks Nass C., Brave S., 2005. Wired For Speech: How Voice Activates and Advances the Human-Computer Relationship, MIT Press Nass C., Brave S., 2005, Hutchinson K., Computers that care: Investigating the effects of orientation of emotion exhibited by an embodied computer agent, Human-Computer Studies, 161- 178, Elsevier Picard, R., 1997. Affective Computing, MIT Press Searle J., 1980, Minds, Brains, and Programs, Cambridge University Press, 417–457 Turing, A., 1950, Computing Machinery and Intelligence, Mind, Stor, 59, 433–460 Wilson R., Keil F., 2001, The MIT Encyclopedia of the Cognitive Sciences, MIT Press Weizenbaum J., 1966, ELIZA — A Computer Program For the Study of Natural Language Communication Between Man and Machine, Communications of the ACM, 36–45 Websites Cherry K., What is Emotional Intelligence?, http://psychology.about.com/od/personalitydevelopment/a/emotionalintell.htm Epstein R., 2006, Clever Bots, Radio Lab, http://www.radiolab.org/2011/may/31/clever-bots/ IBM, 1977, Deep Blue, IBM, http://www.research.ibm.com/deepblue/ IBM, 2011, Watson, IBM, http://www-03.ibm.com/innovation/us/watson/index.html Leavitt D., 2011, I Took the Turing Test, New York Times, http://www.nytimes.com/2011/03/20/books/review/book-review-the-most-human-human-by-brian- christian.html Personal Robotics Group, 2008, Nexi, MIT. http://robotic.media.mit.edu/ Robinson P., The Emotional Computer, Camrbidge Ideas, http://www.cam.ac.uk/research/news/the-emotional-computer/ US Census Bereau, 2009, Households with a Computer and Internet Use: 1984 to 2009. http://www.census.gov/hhes/computer/ 1960’s, Eliza, MIT, http://www.manifestation.com/neurotoys/eliza.php3 From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. " Netflix Technology Blog,330,11,https://medium.com/netflix-techblog/system-architectures-for-personalization-and-recommendation-e081aa94b5d8?source=tag_archive---------0----------------,System Architectures for Personalization and Recommendation,"by Xavier Amatriain and Justin Basilico In our previous posts about Netflix personalization, we highlighted the importance of using both data and algorithms to create the best possible experience for Netflix members. We also talked about the importance of enriching the interaction and engaging the user with the recommendation system. Today we’re exploring another important piece of the puzzle: how to create a software architecture that can deliver this experience and support rapid innovation. Coming up with a software architecture that handles large volumes of existing data, is responsive to user interactions, and makes it easy to experiment with new recommendation approaches is not a trivial task. 
In this post we will describe how we address some of these challenges at Netflix. To start with, we present an overall system diagram for recommendation systems in the following figure. The main components of the architecture contain one or more machine learning algorithms. The simplest thing we can do with data is to store it for later offline processing, which leads to part of the architecture for managing Offline jobs. However, computation can be done offline, nearline, or online. Online computation can respond better to recent events and user interaction, but has to respond to requests in real-time. This can limit the computational complexity of the algorithms employed as well as the amount of data that can be processed. Offline computation has less limitations on the amount of data and the computational complexity of the algorithms since it runs in a batch manner with relaxed timing requirements. However, it can easily grow stale between updates because the most recent data is not incorporated. One of the key issues in a personalization architecture is how to combine and manage online and offline computation in a seamless manner. Nearline computation is an intermediate compromise between these two modes in which we can perform online-like computations, but do not require them to be served in real-time. Model training is another form of computation that uses existing data to generate a model that will later be used during the actual computation of results. Another part of the architecture describes how the different kinds of events and data need to be handled by the Event and Data Distribution system. A related issue is how to combine the different Signals and Models that are needed across the offline, nearline, and online regimes. Finally, we also need to figure out how to combine intermediate Recommendation Results in a way that makes sense for the user. The rest of this post will detail these components of this architecture as well as their interactions. In order to do so, we will break the general diagram into different sub-systems and we will go into the details of each of them. As you read on, it is worth keeping in mind that our whole infrastructure runs across the public Amazon Web Services cloud. As mentioned above, our algorithmic results can be computed either online in real-time, offline in batch, or nearline in between. Each approach has its advantages and disadvantages, which need to be taken into account for each use case. Online computation can respond quickly to events and use the most recent data. An example is to assemble a gallery of action movies sorted for the member using the current context. Online components are subject to an availability and response time Service Level Agreements (SLA) that specifies the maximum latency of the process in responding to requests from client applications while our member is waiting for recommendations to appear. This can make it harder to fit complex and computationally costly algorithms in this approach. Also, a purely online computation may fail to meet its SLA in some circumstances, so it is always important to think of a fast fallback mechanism such as reverting to a precomputed result. Computing online also means that the various data sources involved also need to be available online, which can require additional infrastructure. On the other end of the spectrum, offline computation allows for more choices in algorithmic approach such as complex algorithms and less limitations on the amount of data that is used. 
A trivial example might be to periodically aggregate statistics from millions of movie play events to compile baseline popularity metrics for recommendations. Offline systems also have simpler engineering requirements. For example, relaxed response time SLAs imposed by clients can be easily met. New algorithms can be deployed in production without the need to put too much effort into performance tuning. This flexibility supports agile innovation. At Netflix we take advantage of this to support rapid experimentation: if a new experimental algorithm is slower to execute, we can choose to simply deploy more Amazon EC2 instances to achieve the throughput required to run the experiment, instead of spending valuable engineering time optimizing performance for an algorithm that may prove to be of little business value. However, because offline processing does not have strong latency requirements, it will not react quickly to changes in context or new data. Ultimately, this can lead to staleness that may degrade the member experience. Offline computation also requires having infrastructure for storing, computing, and accessing large sets of precomputed results. Nearline computation can be seen as a compromise between the two previous modes. In this case, computation is performed exactly like in the online case. However, we remove the requirement to serve results as soon as they are computed and can instead store them, allowing it to be asynchronous. The nearline computation is done in response to user events so that the system can be more responsive between requests. This opens the door for potentially more complex processing to be done per event. An example is to update recommendations to reflect that a movie has been watched immediately after a member begins to watch it. Results can be stored in an intermediate caching or storage back-end. Nearline computation is also a natural setting for applying incremental learning algorithms. In any case, the choice of online/nearline/offline processing is not an either/or question. All approaches can and should be combined. There are many ways to combine them. We already mentioned the idea of using offline computation as a fallback. Another option is to precompute part of a result with an offline process and leave the less costly or more context-sensitive parts of the algorithms for online computation. Even the modeling part can be done in a hybrid offline/online manner. This is not a natural fit for traditional supervised classification applications where the classifier has to be trained in batch from labeled data and will only be applied online to classify new inputs. However, approaches such as Matrix Factorization are a more natural fit for hybrid online/offline modeling: some factors can be precomputed offline while others can be updated in real-time to create a more fresh result. Other unsupervised approaches such as clustering also allow for offline computation of the cluster centers and online assignment of clusters. These examples point to the possibility of separating our model training into a large-scale and potentially complex global model training on the one hand and a lighter user-specific model training or updating phase that can be performed online. Much of the computation we need to do when running personalization machine learning algorithms can be done offline. This means that the jobs can be scheduled to be executed periodically and their execution does not need to be synchronous with the request or presentation of the results. 
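As an illustration of the hybrid offline/online modeling idea described above, here is a minimal sketch in which item factors come from an offline batch job while a member's factors are nudged online as fresh play events arrive. The dimensions, update rule, and random placeholder factors are assumptions for the sketch, not Netflix's actual model or code.

```python
import numpy as np

# Offline: item factors would be produced by a batch matrix-factorization job;
# random values stand in for that training output here.
n_items, k = 1000, 20
rng = np.random.default_rng(0)
item_factors = rng.normal(size=(n_items, k))

def online_user_update(user_factors, item_id, rating, lr=0.05, reg=0.01):
    """One stochastic gradient step on the squared error for a single fresh event."""
    q = item_factors[item_id]
    err = rating - user_factors @ q
    return user_factors + lr * (err * q - reg * user_factors)

def recommend(user_factors, top_n=10):
    """Online scoring: rank all items with the member's current (fresh) factors."""
    scores = item_factors @ user_factors
    return np.argsort(-scores)[:top_n]

user = np.zeros(k)
user = online_user_update(user, item_id=42, rating=5.0)  # member just watched item 42
print(recommend(user))
```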
There are two main kinds of tasks that fall in this category: model training and batch computation of intermediate or final results. In the model training jobs, we collect relevant existing data and apply a machine learning algorithm that produces a set of model parameters (which we will henceforth refer to as the model). This model will usually be encoded and stored in a file for later consumption. Although most of the models are trained offline in batch mode, we also have some online learning techniques where incremental training is indeed performed online. Batch computation of results is the offline computation process defined above in which we use existing models and corresponding input data to compute results that will be used at a later time either for subsequent online processing or direct presentation to the user. Both of these tasks need refined data to process, which usually is generated by running a database query. Since these queries run over large amounts of data, it can be beneficial to run them in a distributed fashion, which makes them very good candidates for running on Hadoop via either Hive or Pig jobs. Once the queries have completed, we need a mechanism for publishing the resulting data. We have several requirements for that mechanism: First, it should notify subscribers when the result of a query is ready. Second, it should support different repositories (not only HDFS, but also S3 or Cassandra, for instance). Finally, it should handle errors transparently and allow for monitoring and alerting. At Netflix we use an internal tool named Hermes that provides all of these capabilities and integrates them into a coherent publish-subscribe framework. It allows data to be delivered to subscribers in near real-time. In some sense, it covers some of the same use cases as Apache Kafka, but it is not a message/event queue system. Regardless of whether we are doing an online or offline computation, we need to think about how an algorithm will handle three kinds of inputs: models, data, and signals. Models are usually small files of parameters that have been previously trained offline. Data is previously processed information that has been stored in some sort of database, such as movie metadata or popularity. We use the term "signals" to refer to fresh information we input to algorithms. This data is obtained from live services and can be made of user-related information, such as what the member has watched recently, or context data such as session, device, date, or time. Our goal is to turn member interaction data into insights that can be used to improve the member's experience. For that reason, we would like the various Netflix user interface applications (Smart TVs, tablets, game consoles, etc.) to not only deliver a delightful user experience but also collect as many user events as possible. These actions can be related to clicks, browsing, viewing, or even the content of the viewport at any time. Events can then be aggregated to provide base data for our algorithms. Here we try to make a distinction between data and events, although the boundary is certainly blurry. We think of events as small units of time-sensitive information that need to be processed with the least amount of latency possible. These events are routed to trigger a subsequent action or process, such as updating a nearline result set. On the other hand, we think of data as more dense information units that might need to be processed and stored for later use. 
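To make the models/data/signals distinction concrete, here is a minimal, illustrative sketch of how the three kinds of inputs might come together when ranking candidates at request time. The names and structure are assumptions for illustration, not Netflix's actual services or code.

```python
from dataclasses import dataclass

@dataclass
class AlgorithmInputs:
    model: dict    # small set of parameters trained offline and loaded from a file
    data: dict     # previously processed information, e.g. per-video popularity
    signals: dict  # fresh request-time information: recent plays, device, time of day

def rank(candidates, inputs):
    """Score each candidate with a precomputed popularity signal, then adjust
    with fresh signals (here, demoting titles the member just watched)."""
    weight = inputs.model.get("popularity_weight", 1.0)
    recent = set(inputs.signals.get("recently_watched", []))

    def score(video_id):
        base = weight * inputs.data.get(video_id, 0.0)
        return base - (1.0 if video_id in recent else 0.0)

    return sorted(candidates, key=score, reverse=True)

# Example: model and data come from offline jobs; signals arrive with the request.
inputs = AlgorithmInputs(model={"popularity_weight": 1.0},
                         data={"a": 0.9, "b": 0.7, "c": 0.4},
                         signals={"recently_watched": ["a"]})
print(rank(["a", "b", "c"], inputs))  # -> ['b', 'c', 'a']
```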
Here the latency is not as important as the information quality and quantity. Of course, there are user events that can be treated as both events and data and therefore sent to both flows. At Netflix, our near-real-time event flow is managed through an internal framework called Manhattan. Manhattan is a distributed computation system that is central to our algorithmic architecture for recommendation. It is somewhat similar to Twitter’s Storm, but it addresses different concerns and responds to a different set of internal requirements. The data flow is managed mostly through logging through Chukwa to Hadoop for the initial steps of the process. Later we use Hermes as our publish-subscribe mechanism. The goal of our machine learning approach is to come up with personalized recommendations. These recommendation results can be serviced directly from lists that we have previously computed or they can be generated on the fly by online algorithms. Of course, we can think of using a combination of both where the bulk of the recommendations are computed offline and we add some freshness by post-processing the lists with online algorithms that use real-time signals. At Netflix, we store offline and intermediate results in various repositories to be later consumed at request time: the primary data stores we use are Cassandra, EVCache, and MySQL. Each solution has advantages and disadvantages over the others. MySQL allows for storage of structured relational data that might be required for some future process through general-purpose querying. However, the generality comes at the cost of scalability issues in distributed environments. Cassandra and EVCache both offer the advantages of key-value stores. Cassandra is a well-known and standard solution when in need of a distributed and scalable no-SQL store. Cassandra works well in some situations, however in cases where we need intensive and constant write operations we find EVCache to be a better fit. The key issue, however, is not so much where to store them as to how to handle the requirements in a way that conflicting goals such as query complexity, read/write latency, and transactional consistency meet at an optimal point for each use case. In previous posts, we have highlighted the importance of data, models, and user interfaces for creating a world-class recommendation system. When building such a system it is critical to also think of the software architecture in which it will be deployed. We want the ability to use sophisticated machine learning algorithms that can grow to arbitrary complexity and can deal with huge amounts of data. We also want an architecture that allows for flexible and agile innovation where new approaches can be developed and plugged-in easily. Plus, we want our recommendation results to be fresh and respond quickly to new data and user actions. Finding the sweet spot between these desires is not trivial: it requires a thoughtful analysis of requirements, careful selection of technologies, and a strategic decomposition of recommendation algorithms to achieve the best outcomes for our members. We are always looking for great engineers to join our team. If you think you can help us, be sure to look at our jobs page. Originally published at techblog.netflix.com on March 27, 2013. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. 
Learn more about how Netflix designs, builds, and operates our systems and engineering organizations Learn about Netflix’s world class engineering efforts, company culture, product developments and more. " "James Faghmous ",187,6,https://medium.com/@nomadic_mind/new-to-machine-learning-avoid-these-three-mistakes-73258b3848a4?source=tag_archive---------1----------------,New to Machine Learning? Avoid these three mistakes,"Machine learning (ML) is one of the hottest fields in data science. As soon as ML entered the mainstream through Amazon, Netflix, and Facebook people have been giddy about what they can learn from their data. However, modern machine learning (i.e. not the theoretical statistical learning that emerged in the 70s) is very much an evolving field and despite its many successes we are still learning what exactly can ML do for data practitioners. I gave a talk on this topic earlier this fall at Northwestern University and I wanted to share these cautionary tales with a wider audience. Machine learning is a field of computer science where algorithms improve their performance at a certain task as more data are observed.To do so, algorithms select a hypothesis that best explains the data at hand with the hope that the hypothesis would generalize to future (unseen) data. Take the left panel in the figure in the header, the crosses denote the observed data projected in a two-dimensional space — in this case house prices and their corresponding size in square meters. The blue line is the algorithm’s best hypothesis to explain the observed data. It states “there is a linear relationship between the price and size of a house. As the house’s size increases, so does its price in linear increments.” Now using this hypothesis, I can predict the price of an unseen datapoint based on its size. As the dimensions of the data increase, the hypotheses that explain the data become more complex.However, given that we are using a finite sample of observations to learn our hypothesis, finding an adequate hypothesis that generalizes to unseen data is nontrivial. There are three major pitfalls one can fall into that will prevent you from having a generalizable model and hence the conclusions of your hypothesis will be in doubt. Occam’s razor is a principle attributed to William of Occam a 14th century philosopher. Occam’s razor advocates for choosing the simplest hypothesis that explains your data, yet no simpler. While this notion is simple and elegant, it is often misunderstood to mean that we must select the simplest hypothesis possible regardless of performance. In their 2008 paper in Nature, Johan Nyberg and colleagues used a 4-level artificial neural network to predict seasonal hurricane counts using two or three environmental variables. The authors reported stellar accuracy in predicting seasonal North Atlantic hurricane counts, however their model violates Occam’s razor and most certainly doesn’t generalize to unseen data. The razor was violated when the hypothesis or model selected to describe the relationship between environmental data and seasonal hurricane counts was generated using a four-layer neural network. A four-layer neural network can model virtually any function no matter how complex and could fit a small dataset very well but fail to generalize to unseen data. The rightmost panel in the top figure shows such incident. The hypothesis selected by the algorithm (the blue curve) to explain the data is so complex that it fits through every single data point. 
That is: for any given house size in the training data, I can give you with pinpoint accuracy the price it would sell for. It doesn’t take much to observe that even a human couldn’t be that accurate. We could give you a very close estimate of the price, but to predict the selling price of a house, within a few dollars , every single time is impossible. The pitfall of selecting too complex a hypothesis is known as overfitting. Think of overfitting as memorizing as opposed to learning. If you are a child and you are memorizing how to add numbers you may memorize the sums of any pair of integers between 0 and 10. However, when asked to calculate 11 + 12 you will be unable to because you have never seen 11 or 12, and therefore couldn’t memorize their sum. That’s what happens to an overfitted model, it gets too lazy to learn the general principle that explains the data and instead memorizes the data. Data leakage occurs when the data you are using to learn a hypothesis happens to have the information you are trying to predict. The most basic form of data leakage would be to use the same data that we want to predict as input to our model (e.g. use the price of a house to predict the price of the same house). However, most often data leakage occurs subtly and inadvertently. For example, one may wish to learn for anomalies as opposed to raw data, that is a deviations from a long-term mean. However, many fail to remove the test data before computing the anomalies and hence the anomalies carry some information about the data you want to predict since they influenced the mean and standard deviation before being removed. The are several ways to avoid data leakage as outlined by Claudia Perlich in her great paper on the subject. However, there is no silver bullet — sometimes you may inherit a corrupt dataset without even realizing it. One way to spot data leakage is if you are doing very poorly on unseen independent data. For example, say you got a dataset from someone that spanned 2000-2010, but you started collecting you own data from 2011 onward. If your model’s performance is poor on the newly collected data it may be a sign of data leakage. You must resist the urge to retrain the model with both the potentially corrupt and new data. Instated, either try to identify the causes of poor performance on the new data or, better yet, independently reconstruct the entire dataset. As a rule of thumb, your best defense is to always be mindful of the possibility of data leakage in any dataset. Sampling bias is the case when you shortchange your model by training it on a biased or non-random dataset, which results in a poorly generalizable hypothesis. In the case of housing prices, sampling bias occurs if, for some reason, all the house prices/sizes you collected were of huge mansions. However, when it was time to test your model and the first price you needed to predict was that of a 2-bedroom apartment you couldn’t predict it. Sampling bias happens very frequently mainly because, as humans, we are notorious for being biased (nonrandom) samplers. One of the most common examples of this bias happens in startups and investing. If you attend any business school course, they will use all these “case studies” of how to build a successful company. Such case studies actually depict the anomalies and not the norm as most companies fail — For every Apple that became a success there were 1000 other startups that died trying. 
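Returning to the housing example for a moment, the mansion version of sampling bias can be shown in a few lines. The ground-truth price curve below is invented purely for illustration; the point is that a model fit only on 300-500 m² houses can be wildly wrong on a 45 m² apartment unlike anything it was trained on.

import numpy as np

rng = np.random.default_rng(1)

# Made-up ground truth: price grows faster than linearly with size.
def true_price(size_m2):
    return 500 * size_m2 ** 1.3

# Biased sample: every house we happened to collect is a mansion (300-500 square meters).
mansion_sizes = rng.uniform(300, 500, 50)
mansion_prices = true_price(mansion_sizes) + rng.normal(0, 20000, 50)

slope, intercept = np.polyfit(mansion_sizes, mansion_prices, deg=1)

apartment = 45  # the 2-bedroom apartment the model never saw anything like
print("predicted:", slope * apartment + intercept)  # wildly off (it even comes out negative)
print("actual   :", true_price(apartment))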
So to build an automated data-driven investment strategy you would need samples from both successful and unsuccessful companies. The figure above (Figure 13) is a concrete example of sampling bias. Say you want to predict whether a tornado is going to originate at certain location based on two environmental conditions: wind shear and convective available potential energy (CAPE). We don’t have to worry about what these variables actually mean, but Figure 13 shows the wind shear and CAPE associated with 242 tornado cases. We can fit a model to these data but it will certainly not generalize because we failed to include shear and CAPE values when tornados did not occur. In order for our model to separate between positive (tornados) and negative (no tornados) events we must train it using both populations. There you have it. Being mindful of these limitations does not guarantee that your ML algorithm will solve all your problems, but it certainly reduces the risk of being disappointed when your model doesn’t generalize to unseen data. Now go on young Jedi: train your model, you must! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. @nomadic_mind. Sometimes the difference between success and failure is the same as between = and ==. Living is in the details. " Datafiniti,3,5,https://blog.datafiniti.co/classifying-websites-with-neural-networks-39123a464055?source=tag_archive---------2----------------,Classifying Websites with Neural Networks – Knowledge from Data: The Datafiniti Blog,"At Datafiniti, we have a strong need for converting unstructured web content into structured data. For example, we’d like to find a page like: and do the following: Both of these are hard things for a computer to do in an automated manner. While it’s easy for you or me to realize that the above web page is selling some jeans, a computer would have a hard time making the distinction from the above page from either of the following web pages: Or Both of these pages share many similarities to the actual product page, but also have many key differences. The real challenge, though, is that if we look at the entire set of possible web pages, those similarities and differences become somewhat blurred, which means hard and fast rules for classifications will fail often. In fact, we can’t even rely on just looking at the underlying HTML, since there are huge variations in how product pages are laid out in HTML. While we could try and develop a complicated set of rules to account for all the conditions that perfectly identify a product page, doing so would be extremely time consuming, and frankly, incredibly boring work. Instead, we can try using a classical technique out of the artificial intelligence handbook: neural networks. Here’s a quick primer on neural networks. Let’s say we want to know whether any particular mushroom is poisonous or not. We’re not entirely sure what determines this, but we do have a record of mushrooms with their diameters and heights, along with which of these mushrooms were poisonous to eat, for sure. In order to see if we could use diameter and heights to determine poisonous-ness, we could set up the following equation: A * (diameter) + B * (height) = 0 or 1 for not-poisonous / poisonous We would then try various combinations of A and B for all possible diameters and heights until we found a combination that correctly determined poisonous-ness for as many mushrooms as possible. 
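Rather than trying combinations of A and B by hand, a learning algorithm searches for those weights automatically. The sketch below uses logistic regression, a simpler learner than the neural networks discussed next, on made-up mushroom measurements; it illustrates the weight-finding idea rather than the article's actual setup.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up mushroom records: [diameter_cm, height_cm] and whether each was poisonous.
X = np.array([[2.0, 3.0], [2.5, 2.8], [3.0, 3.5], [8.0, 9.0], [7.5, 10.0], [9.0, 8.5]])
y = np.array([0, 0, 0, 1, 1, 1])  # 0 = safe to eat, 1 = poisonous

# The learner's job is to find the weights A and B (plus a bias term)
# instead of us trying combinations by hand.
clf = LogisticRegression().fit(X, y)
print("learned weights (A, B):", clf.coef_[0])
print("poisonous?", clf.predict([[6.0, 7.0]])[0])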
Neural networks provide a structure for using the output of one set of input data to adjust A and B to the most likely best values for the next set of input data. By constantly adjusting A and B this way, we can quickly get to the best possible values for them. In order to introduce more complex relationships in our data, we can introduce “hidden” layers in this model, which would end up looking something like: For a more detailed explanation of neural networks, you can check out the following links: In our product page classifier algorithm, we setup a neural network with 1 input layer with 27 nodes, 1 hidden layer with 25 nodes, and 1 output layer with 3 output nodes. Our input layer modeled several features, including: Our output layer had the following: Our algorithm for the neural network took the following steps: The ultimate output is two sets of input layers (T1 and T2), that we can use in a matrix equation to predict page type for any given web page. This works like so: So how did we do? In order to determine how successful we were in our predictions, we need to determine how to measure success. In general, we want to measure how many true positive (TP) results as compared to false positives (FP) and false negatives (FN). Conventional measurements for these are: Our implementation had the following results: These scores are just over our training set, of course. The actual scores on real-life data may be a bit lower, but not by much. This is pretty good! We should have an algorithm on our hands that can accurately classify product pages about 90% of the time. Of course, identifying product pages isn’t enough. We also want to pull out the actual structured data! In particular, we’re interested in product name, price, and any unique identifiers (e.g., UPC, EAN, & ISBN). This information would help us fill out our product search. We don’t actually use neural networks for doing this. Neural networks are better-suited toward classification problems, and extracting data from a web page is a different type of problem. Instead, we use a variety of heuristics specific to each attribute we’re trying to extract. For example, for product name, we look at the
tags, and use a few metrics to determine the best choice. We’ve been able to achieve around a 80% accuracy here. We may go into the actual metrics and methodology for developing them in a separate post! We feel pretty good about our ability to classify and extract product data. The extraction part could be better, but it’s steadily being improved. In the meantime, we’re also working on classifying other types of pages, such as business data, company team pages, event data, and more.As we roll-out these classifiers and data extractors, we’re including each one in our crawl of the entire Internet. This means that we can scan the entire Internet and pull out any available data that exists out there. Exciting stuff! You can connect with us and learn more about our business, people, product, and property APIs and datasets by selecting one of the options below. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Instant Access to Web Data Building the world’s largest database of web data — follow our journey. " Arjan Haring 🔮🔨,3,8,https://medium.com/i-love-experiments/reinventing-social-sciences-in-the-era-of-big-data-d255f3e391f3?source=tag_archive---------3----------------,Reinventing Social Sciences in the Era of Big Data – I love experiments – Medium,"Sune Lehmann is an Associate Professor at DTU Informatics, Technical University of Denmark. In the past, he has worked as a Postdoctoral Fellow at Institute for Quantitative Social Science at Harvard University and the College of Computer and Information Science at Northeasthern University; before that, he was at Laszlo Barabási’s Center for Complex Network Research at Northeastern University and the Center for Cancer Systems Biology at the Dana Farber Cancer Institute. I wouldn’t call him stupid. He is okay. Well he is actually pretty great. Forget that, he is freaking fantastic! We should get him over for one of our events! And so we did. Sune spoke at the 2nd #projectwaalhalla. This time, let’s begin at the beginning, before we dive in deeper. Your main research project has to do with measuring real social networks with high resolution. I know for a fact you don’t mean 3D printed social networks. But what are you aiming for, and how are you going to get there? My (humble) research goal is to reinvent social sciences in the age of big data. My background is in mathematical analysis of large networks. But over the past 10 years, I’ve slowly grown more and more interested in understanding social systems. As a scientist I was blown away by the promise of all of the digital traces of human behavior collected as a consequence of cheap hard drives and databases everywhere. But in spite of the promise of big data, the results so far have been less exciting than I had hoped. For all the hype, deep new scientific insights from big data are far and few between. A central hypothesis in my work is that in order to advance our quantitative understanding of social interaction, we cannot get by with noisy, incomplete big data: We need good data. Let me explain why and use my own field as an example. Let’s say you have a massive cell phone data set from a telco that provides service to 30% or the population of a large country of 66 million people. That’s something like 20 million people and easily terabytes of monthly data, so a massive dataset. But when you start thinking about the network, you run into problems. The standard approach is to simply look at the network between the individuals in your sample. 
Assuming that people are randomly sampled, and links are randomly distributed, you realize that 30% of the population corresponds to only 9% of the links. Is 9% of cell phone calls enough to understand how the network works? With only one in ten links remaining in the dataset, the social structure almost completely erased. And it gets worse. Telecommunication is only one (small & biased) aspect of human communication. Human interactions may also unfold face-to-face, via text message, email, Facebook, Skype, etc. And these streams are collected in silos, where we cannot generally identify individuals/entities across datasets. So if we think about all these ways we can communicate. Access to only one in ten of my cell phone contacts is very likely insufficient for making valid inferences. And the worst part is that we can’t know. Without access to the full data set, we can’t even tell what we can and can’t tell from a sample. So when I started out as an assistant professor, I decided to change the course of my career and move from sitting comfortably in front of my computer as a computational/theoretical scientist to becoming an experimenter, to try and attack this problem head on.Now, a few of years later, we have put together a dataset of human social interactions that is unparalleled in terms of quality and size. We recording social interactions within more than 1000 students at my university, using top-of-the-line cell phones as censors. We can capture detailed interaction patterns, such as face-to-face (via bluetooth), social network data (e.g. Facebook and Twitter) via apps, telecommunication data from call logs, and geolocation via GPS & Wifi. We like to call this type of data ‘Deep Data’: A densely connected group of participants (all the links), observations across many communication channels, high frequency observations (minute-by-minute scale), but with long observation windows (years of collection), and with behavioral data supplemented by classic questionnaires, as well as the possibility of running intervention experiments. But my expertise (and ultimate interest) is not in building a Deep Data collection platform (although that has been a lot of fun). I want to get back to the questions that motivated the enthusiasm for computational social science in the first place. Reinventing social sciences is what it’s all about. What can we learn from just one channel? Now that we know about all the communication channels, we can begin to understand what kind of things one may learn from a single channel. Let’s get quantitative about the usefulness of e.g. large cell phone data sets or Facebook, when that’s the only data available. My heart is still with the network science. In some ways, this whole project is designed to build a system that will really take us places in terms of modeling human social networks. Lots of network science is still about unweighted, undirected static networks; we are already using this dataset to create better models for dynamic, multiplex networks. Understanding spreading processes (influence, behavior, disease, etc) is a central goal if we look a bit forward in time. We have an system, where N is big enough to perform intervention experiments with randomized controls, etc. We’re still far from implementing this goal, but we’re working on finding the right questions — and working closely with social scientists to get our protocols for these questions just right. What a coincidence... We are all about modeling behavior and learning across channels. 
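As an aside, the sampling arithmetic mentioned above (observing 30% of the nodes keeps only about 9% of the links) is easy to check with a quick simulation. The graph below is purely synthetic, random edges between random pairs of nodes rather than anything like a real call network, but the both-endpoints-must-be-sampled effect is the same.

import numpy as np

rng = np.random.default_rng(0)

n_nodes, n_edges, sample_frac = 10_000, 100_000, 0.30

# Random edges between random pairs of nodes (a crude stand-in for a call graph).
edges = rng.integers(0, n_nodes, size=(n_edges, 2))

# Randomly "observe" 30% of the population, as in the telco example above.
observed = rng.random(n_nodes) < sample_frac
kept = observed[edges[:, 0]] & observed[edges[:, 1]]

print("fraction of links retained:", kept.mean())  # close to 0.3 * 0.3 = 0.09

A link survives only if both of its endpoints were sampled, hence roughly 0.3 * 0.3 of the links remain.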
And with ContagionAPI prominently on our product roadmap we want to start dabbling with spreading processes as well in the near future. What would you say were major challenges the last years in modeling behavior, and what do see as biggest challenges & opportunities for the future? There are many challenges. Although we’ve made amazing progress in network science, for example, it’s still a fact that our fundamental understanding of dynamic/multi-channel networks is still in its infancy, there aren’t a lot of easily interpretable models that really explain the underlying networks. So that’s an area with lots of challenges and corresponding opportunities. And when we want to figure out questions about things taking place on networks, we run into all kinds of problems about how to do statistics right. Brilliant statisticians have shown that homophily and contagion are generically confounded in observational social network studies. On that front, guys like Sinan Aral are doing really exciting work using interventions to get at some of the issues, but there is still lots to do in that area. Finally, privacy is a big issue. We’re working closely with collaborators at the MIT MediaLab to develop new, responsible solutions — and we’ve already gotten far on that topic. But in terms of data sharing that respects the privacy of study participants, there is still a long way to go. But since studies of digital traces of human behavior will not be going away anytime soon, we have to make progress in this area. And oh yeah, why does this all matter? And should we be concerned by these things? I think there are many reasons to be concerned and excited. The more we learn about how systems work, the more we are able to influence them, to control them. That is also true for systems of humans. If we think about spreading of disease, it’d be great to know how to slow down or stop the spread of SARS or similar contagious viruses. Or, as a society we may be able to increase spread of things we support, such as tolerance, good exercise habits, etc ... and similarly, we can use an understanding influence in social systems to inhibit negative behavior such as intolerance, smoking, etc. And all this ties into another good reason to be concerned. Companies like Google, Facebook, Apple (or governmental agencies like NSA) are committing serious resources to research in this area. It’s not a coincidence that both Google and Facebook are developing their own cell-phones. But none of these walled-off players are sharing their results. They’re simply applying them to the public. In my opinion that’s one of the key problems of the current state of affairs, the imbalance of information. We hand over our personal data to powerful corporations, but have nearly zero insight into a) what they know about us and b) what they’re doing with all the stuff they know about us. By doing research that is open, collaborative, explicit about privacy, and public, I hope we can act as a counter-point and work to diminish the information-gap. Okay, great. But should companies be interested in the stuff you are doing? And if so, why? I think so! One of the exciting things about this area is that basic research is very close to applied research. Insight into the mechanisms that drive human nature is indeed valuable for companies (I presume that’s why Science Rockstars exists, for example) [note from the editor: not stupid at all]. 
We already know that human behavior can be influenced significantly with “nudging”, that certain kinds of collective behaviors influence our opinions (and purchasing behaviors). The more we uncover about the details of these mechanism, the more precise and effective we can be about influencing others (let’s discuss the ethics of this another time). But it’s not just marketing. If used for good, this is the science of what makes people happy. So inside organizations, work like this could be used to re-think organizational structures, incentives, etc; to make employees happier & more fulfilled. Or if we think about organizations as organisms, having access to realtime information about employees can be thought of as a “nervous system” for the company, allowing for faster reaction times when crises arise, identification of pain points, etc. Finally, for the medical field, we know that genes only explain part of what makes us sick. Being able to quantify and analyze behavior means knowing more about the environment, the nurture part of nurture vs nature. In that sense, detailed data on how we behave could also help us understand how to be healthier. Originally published at www.sciencerockstars.com on November 2, 2013. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Let’s Fix the Future: Scientific Advisor @jadatascience A blog series about the discipline of business experimentation. How to run and learn from experiments in different contexts is a complex matter, but lays at the heart of innovation. " Eventbrite,10,8,https://medium.com/@eventbrite/multi-index-locality-sensitive-hashing-for-fun-and-profit-ee04292a6e37?source=tag_archive---------4----------------,Multi-Index Locality Sensitive Hashing for Fun and Profit,"One way that we deal with this volume of data, is to cluster up all the similar messages together to find patterns in behavior of senders. For example, if someone is contacting thousands of different organizers with similar messages, that behavior is suspect and will be examined. The big question is, how can we compare every single message we see with every other message efficiently and accurately? In this article, we’ll be exploring a technique known as Multi-Index Locality Sensitive Hashing. To perform the the comparison efficiently, we pre-process the data with a series of steps: Let’s first define what similar messages are. Here we have and example of two similar messages A and B: To our human eyes of course they’re similar, but we want determine this similarity quantitatively. The solution is to break up the message into tokens, and then treat each message as a bag of tokens. The simplest, naive way to do tokenization is to split up a message on spaces/punctuation and convert each character to lowercase. So our result from our tokenization of the above messages would be: I’ll leave as an exercise to the reader to come up with more interesting ways to do tokenization for handling contractions, plurals, foreign languages, etc. To calculate the similarity between these two bags of tokens, we’ll use an estimation known as the Jaccard Similarity Coefficient. This is defined as “the ratio of sizes of the intersection and union of A and B”. Therefore, in our example: We’ll then set a threshold, above which, we will consider two messages to be similar. So then, when given a set of M messages, we simply compute the similarity of a message to every other message. 
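Since the worked examples referenced above did not survive into this copy, here is a minimal Python version of the naive tokenization and the Jaccard similarity just described. The example messages and the regular expression are stand-ins for illustration, not Eventbrite's implementation.

import re

def tokenize(message: str) -> set:
    """Naive tokenization: lowercase and split on anything that isn't a letter or digit."""
    return set(re.findall(r"[a-z0-9]+", message.lower()))

def jaccard(a: set, b: set) -> float:
    """Ratio of the sizes of the intersection and union of the two token bags."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

msg_a = tokenize("Hi, I have great tickets for your event!")
msg_b = tokenize("Hi! I have great tickets for your events.")
print(jaccard(msg_a, msg_b))  # above a chosen threshold -> treat the messages as similar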
This works in theory, but in practice there are cases where this metric is unreliable (eg. if one message is significantly longer than the other); not to mention horribly inefficient (O(N2 M2), where N is the number of tokens per message). We need do things smarter! One problem with doing a simple Jaccard similarity is that the scale of the value changes with the size (number of tokens) of the message. To address this, we can transform our tokens with a method known as minHash. Here’s a psuedo-code snippet: The interesting property of the minHash transformation is that it leaves us with a constant N number of hashes, and that “chosen” hashes will be in the same positions in the vector. After the minHash transformation, the Jaccard similarity can be approximated by an element-wise comparison of two hash vectors (implemented as pseudo-code above). So, we can stop here, but we’re having so much fun... and we can do so much better. Notice when we do comparison, we have to to O(N) integer comparisons, and if we have M messages then comparing every message to each other is O(N M2) integer comparisons. This is still not acceptable. To reduce the time complexity of comparing minHashes to each other, we can do better with a technique known as bit sampling. The main idea is that we don’t need to know the exact value of each hash, but only that the hashes are equal at their respective positions in each hash vector. With this insight, let’s only look at the least significant bit (LSB) of each hash value. More pseudo-code: When comparing two messages, if the hashes are equal in the same position in the minHash vector, then the bits in the equivalent position after bit sampling should be also equal. So, we can emulate the Jaccard similarity of two minHashes by counting the equal bits in the two bit vectors (aka. the Hamming Distance) and dividing by the number of bits. Of course, two different hashes will have the same LSB 50% of the time; to increase our efficacy, we would pick a large N initially. Here is some naive and inefficient pseudo-code: In practice, more efficient implementations of the bitSimilarity function can calculate in near O(1) time for reasonable sizes of N (Bit Twiddling Hacks). This means that when comparing M messages to each other, we’ve reduced the time complexity to O(M2). But wait, there’s more! Remember how I said we have a lot of data? O(M2) is still unreasonable when M is a very large number of messages. So we need to try to reduce the number of comparisons to make using a “divide and conquer” strategy. Lets start with an example where we set N=32, and we want to have a bitSimilarity of .9: In the worst case, to do this, we need 28 of the 32 bits to be equal, or 4 bits unequal. We will refer to the number of unequal bits as the radius of the bit vectors; ie. if two bit vectors are within a certain radius of bits, then they are similar. The unequal bits can be found by taking the bit-wise XOR of the two bit vectors. For example: If we split up XOR_mask into 4 chunks of 8 bits, then at least one chunk will have exactly zero or exactly one of the bit differences (pigeonhole principal). More generally, if we split XOR_mask of size N into K chunks, with an expected radius R, then at least one chunk is guaranteed to have floor(R / K) or less bits unequal. For the purpose of explanation, we will assume that we have chosen all the parameters such that floor(R / K) = 1. Now you’re wondering how this piece of logic help us? 
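The pseudo-code snippets referenced above were lost from this copy, so here is one way the minHash transformation and the bit-sampling step could look in Python. The hash construction (md5 with a per-position seed) and N = 64 are arbitrary choices for the sketch, not the article's implementation.

import hashlib

N = 64  # number of hash functions, and bits in the final signature

def _hash(token: str, seed: int) -> int:
    """One of N cheap hash functions, derived from md5 with a per-position seed."""
    return int(hashlib.md5(f"{seed}:{token}".encode()).hexdigest(), 16)

def min_hashes(tokens: set) -> list:
    """Position i holds the minimum of hash_i over all tokens (the minHash signature)."""
    return [min(_hash(t, i) for t in tokens) for i in range(N)]

def bit_signature(tokens: set) -> int:
    """Bit sampling: keep only the least significant bit of each minHash."""
    sig = 0
    for i, h in enumerate(min_hashes(tokens)):
        sig |= (h & 1) << i
    return sig

def bit_similarity(sig_a: int, sig_b: int) -> float:
    """Fraction of equal bits, i.e. 1 - HammingDistance / N."""
    return 1.0 - bin(sig_a ^ sig_b).count("1") / N

a = bit_signature({"hi", "i", "have", "great", "tickets", "for", "your", "event"})
b = bit_signature({"hi", "i", "have", "great", "tickets", "for", "your", "events"})
print(bit_similarity(a, b))

Because two unrelated hashes still agree on their least significant bit about half the time, the bit similarity overestimates low Jaccard values, which is exactly why the text suggests picking a large N.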
We can now design a data structure LshTable to index the bit vectors to reduce the number of bitSimilarity comparisons drastically (but increase memory consumption in O(M)) [Fast Search in Hamming Space with Multi-Index Hashing]. We will define LshTable with some pseudo-code: Basically, in LshTable initialization, we create K hash tables for each K chunks. During add() of a bit vector, we split the bit vector into K chunks. For each of these chunks, we add the original bit vector into the associated hash table under the index chunk. Upon the lookup() of a bit vector, we once again split it into chunks and for each chunk look up the associated hash table for a chunk that’s close (zero or one bits off). The returned list is a set of candidate bit vectors to check bitSimilarity. Because of the property explained in the previous section, at least one hash table will contain a set of candidates that contains a similar bit vector. To compare every M message to every other message we first insert its bit vector into an LshTable (an O(K) operation, K is constant). Then to find similar messages, we simply do a lookup from the LshTable (another O(K) operation), and then check bitSimilarity for each of the candidates returned. The number of candidates to check is usually on the order of M / 2^(N/K), if at all. Therefore, the time complexity to compare all M messages to each other is O(M * M / 2^(N/K)). In practice, N and K are empirically chosen such that 2^(N/K) >> M, so the final time complexity is O(M) — remember we started with O(N M2)! Phew, what a ride. So, we’ve detailed how to find similar messages in a very large set of messages efficiently. By using Multi-Index Locality Sensitivity Hashing, we can reduce the time complexity of from quadratic (with a very high constant) to near linear (with a more manageable constant). I should also mention that many of the ancillary pseudo-code excerpts used here describe the most naive implementation of each method, and are for instructive purposes only. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. We help bring the world together through live experiences. " Akash Shende,1,3,https://medium.com/@akash0x53/color-based-object-segmentation-baf8044ec6a3?source=tag_archive---------7----------------,Color Based Object Segmentation – Akash Shende – Medium,"In this picture, Pranav Mistry is using color marker on his fingers to track the gesture and his wearable computer perform action based on gestures. That sounds easy! But No, it’s not. Computer need to understand those color marker first, for that it needs to separate marker from any surroundings. Segmentation can be helpful to achieve this. Various methods are available for segmentation, however, this article talks about robust Color based object segmentation. Create binary mask that separates blue T-shirt from rest. To find blue t-shirt in given image, I used OpenCV’s inRange method: Which takes color (or greyscale) image, lower & higher range value as its parameter and returns binary image, where pixel value set to 0 when input pixel doesn’t fall in specified range, otherwise pixel value set to 1. With the help of this function and after determining range values, I ended up with this mask. But you can see there are problems! It’s not able to create mask for complete t-shirt, also it mask eyes which aren’t blue. This is happening because light from one side of body whitens the right side at the same time creates shadow in left region. 
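Pulling the LshTable description above together into runnable form (again a sketch with arbitrary parameters, N = 32 and K = 4, rather than Eventbrite's implementation):

from collections import defaultdict

N, K = 32, 4                 # 32-bit signatures split into 4 chunks of 8 bits
CHUNK_BITS = N // K

def chunks(sig: int):
    """Split an N-bit signature into K chunks of CHUNK_BITS bits each."""
    mask = (1 << CHUNK_BITS) - 1
    return [(sig >> (i * CHUNK_BITS)) & mask for i in range(K)]

class LshTable:
    """One hash table per chunk; a similar signature must collide with the query,
    exactly or within one flipped bit, in at least one chunk table."""
    def __init__(self):
        self.tables = [defaultdict(set) for _ in range(K)]

    def add(self, sig: int):
        for table, chunk in zip(self.tables, chunks(sig)):
            table[chunk].add(sig)

    def lookup(self, sig: int) -> set:
        candidates = set()
        for table, chunk in zip(self.tables, chunks(sig)):
            candidates |= table.get(chunk, set())          # exact chunk match ...
            for bit in range(CHUNK_BITS):                  # ... or one bit off (floor(R / K) = 1)
                candidates |= table.get(chunk ^ (1 << bit), set())
        return candidates

index = LshTable()
index.add(0b1010_1010_1010_1010_1010_1010_1010_1010)
print(index.lookup(0b1010_1010_1010_1010_1010_1010_1010_1011))  # the stored signature comes back

Candidates returned by lookup() still need the bitSimilarity check; the table only narrows down which signatures are worth comparing.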
Thus, it creates different shades of blue, which results in partial segmentation. Normalizing the color planes reduces the variation in lighting by averaging the pixel values; it removes the highlighted and shadowed regions and flattens the image. The following image is free of highlights and shadows and is divided into one large green background, the blue t-shirt, and skin. Now the inRange method is able to mask only the t-shirt. The following function converts a pixel at location X, Y into its corresponding normalized RGB pixel. Let R, G, B be the pixel values; the normalized pixel g(x,y) is calculated by dividing each color component by the sum of all color components and multiplying by 255. The division yields a floating point number in the range 0.0 to 1.0, and since this is an 8-bit image the result is scaled up by 255. The function accepts an 8-bit RGB image matrix of size 800x600 and returns a normalized RGB image. Originally published at akash0x53.github.io on April 29, 2013. Python and much more. " Hrishikesh Huilgolkar,1,4,https://medium.com/@hrishikeshio/traveling-santa-problem-an-incompetent-algorists-attempt-49ad9d26b26?source=tag_archive---------8----------------,Traveling santa Problem — An incompetent algorist's attempt,"Kaggle announced the Traveling Santa problem in the Christmas season. I joined in excitedly, but soon realized this is not an easy problem. Solving it would require expertise in data structures and good familiarity with TSP problems and their many heuristic algorithms. I had neither, so I had to find another way to deal with it: I compensated for my lack of algorithmic expertise with common sense, logic, and intuition. I finished 65th out of 356 competitors. I did some research on packaged TSP solvers and top TSP algorithms. I found Concorde but could not get it to work on my Ubuntu machine, so I settled on LKH, which uses the Lin-Kernighan heuristic for solving TSP and related problems. I wrote scripts for file conversion and for running LKH. LKH solved my TSP problem in around 30 hours. But that was just one path; I still had to figure out how to make it find the second path. A simple idea for getting two disjoint paths is to generate the first path, make the weights of its edges infinite, and run LKH on the problem again. But this required the problem to be in distance matrix format, and then I found a major problem. Problem: RAM too low. Creating a distance matrix for 150,000 points was unimaginable. It would require (memory for one number) * 150,000 * 150,000; assuming 8 bytes per number, the memory required is 8 * 150,000^2 bytes, which is 167 GB! (Correct me if I am wrong.) Solution: a simple solution was to divide the map into manageable chunks. I used scipy's distance matrix creation function scipy.spatial.distance.pdist(), which creates a distance matrix from coordinates. The matrix created by pdist is in compressed form (a flattened vector of the upper-diagonal elements). scipy.spatial.distance.squareform() can create a square distance matrix from the compressed matrix, but that would waste a lot of RAM. So I created a custom function that splits the compressed matrix into rows that LKH can read.
Input (coordinates):
1 1
2 3
4 1
Output of pdist (compressed upper column):
1 2 4
Output of squareform() (uncompressed square matrix):
0 1 2
1 0 4
2 4 0
Output of my function, which processes the compressed matrix (upper-diagonal elements):
[[1,2],[4]]
Lots of RAM saved! I tried using Manhattan distance instead of Euclidean distance.
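For reference, the row-splitting trick described above can be sketched as follows. This is a minimal illustration built on scipy's condensed ordering, not the author's actual conversion script.

import numpy as np
from scipy.spatial.distance import pdist

def upper_diagonal_rows(condensed, n):
    """Split scipy's condensed distance vector into per-row upper-diagonal lists,
    i.e. row i holds d(i, i+1), ..., d(i, n-1), without ever materialising
    the full n-by-n square matrix."""
    rows, start = [], 0
    for i in range(n - 1):
        length = n - 1 - i                       # row i has n-1-i upper-diagonal entries
        rows.append(condensed[start:start + length].tolist())
        start += length
    return rows

points = np.array([[1, 1], [2, 3], [4, 1]])
condensed = pdist(points)                        # the compressed upper-triangle distances
print(upper_diagonal_rows(condensed, len(points)))   # [[d01, d02], [d12]]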
But after dividing the problem in grids, time taken by distance calculation was manageable so I stuck with euclidean distance. Through trial and error, I found that on my laptop with 4 GB ram, a 6 by 6 grid in the above format was manageable for both creating distance matrix and for LKH. I ran LKH on resulting distance matrices and joined the individual solutions. I joined the resulting solutions in different combination for both paths so as to avoid common paths. I got 7,415,334 with this method. I tried time limit on LKH algorithm. From 40,000 seconds I reduced it to 300, 20, 5 ,1 seconds but It made the results slightly worse. Mingle The solution above was good but It could have been better. The problem was that the first path was so good that the second path struggled to find good path. The difference between the two paths was big. Path1 ~= 6.2MPath2 ~= 7.4MFor a long time I thought this would require either solving both paths simultaneously or using genetic algorithm or similar algorithm to combine both paths. Both were pretty difficult to implement.Then I got a simple idea. My map was divided in 36 squares. If I combine 18 squares of first path and 18 squares of second path, I will have a path whose distance will be approximately average of the two paths.I tried this trick and used different combinations of the two paths squares and got the best score of 6,807,498 For new path1, select blue squres from old path1 and grey square from old path2 Use remaining squares for new path2.Remove cross lines My squares were joined in a zigzag manner. I removed the zig-zag lines for a further improvement. I scored 6,744,291 which was my best score.Another idea was to make end point of one square and the beginning point of next square as near as possible but I couldn’t implement the idea before deadline.My score was around 200,000 points away from the first place which was 6,526,972. Not bad! Public repo: https://bitbucket.org/hrishikeshio/traveling-santa (More documentation for source code coming soon) Originally published at www.blogicious.com on January 19, 2013. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Blockchain, cryptocurrencies and the decentralised future " Adam Geitgey,35K,15,https://medium.com/@ageitgey/machine-learning-is-fun-80ea3ec3c471?source=tag_archive---------0----------------,Machine Learning is Fun! – Adam Geitgey – Medium,"Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 日本語, Português, Português (alternate), Türkçe, Français, 한국어 , العَرَبِيَّة‎‎, Español (México), Español (España), Polski, Italiano, 普通话, Русский, 한국어 , Tiếng Việt or فارسی. Bigger update: The content of this article is now available as a full-length video course that walks you through every step of the code. You can take the course for free (and access everything else on Lynda.com free for 30 days) if you sign up with this link. Have you heard people talking about machine learning but only have a fuzzy idea of what that means? Are you tired of nodding your way through conversations with co-workers? Let’s change that! This guide is for anyone who is curious about machine learning but has no idea where to start. I imagine there are a lot of people who tried reading the wikipedia article, got frustrated and gave up wishing someone would just give them a high-level explanation. That’s what this is. 
The goal is be accessible to anyone — which means that there’s a lot of generalizations. But who cares? If this gets anyone more interested in ML, then mission accomplished. Machine learning is the idea that there are generic algorithms that can tell you something interesting about a set of data without you having to write any custom code specific to the problem. Instead of writing code, you feed data to the generic algorithm and it builds its own logic based on the data. For example, one kind of algorithm is a classification algorithm. It can put data into different groups. The same classification algorithm used to recognize handwritten numbers could also be used to classify emails into spam and not-spam without changing a line of code. It’s the same algorithm but it’s fed different training data so it comes up with different classification logic. “Machine learning” is an umbrella term covering lots of these kinds of generic algorithms. You can think of machine learning algorithms as falling into one of two main categories — supervised learning and unsupervised learning. The difference is simple, but really important. Let’s say you are a real estate agent. Your business is growing, so you hire a bunch of new trainee agents to help you out. But there’s a problem — you can glance at a house and have a pretty good idea of what a house is worth, but your trainees don’t have your experience so they don’t know how to price their houses. To help your trainees (and maybe free yourself up for a vacation), you decide to write a little app that can estimate the value of a house in your area based on it’s size, neighborhood, etc, and what similar houses have sold for. So you write down every time someone sells a house in your city for 3 months. For each house, you write down a bunch of details — number of bedrooms, size in square feet, neighborhood, etc. But most importantly, you write down the final sale price: Using that training data, we want to create a program that can estimate how much any other house in your area is worth: This is called supervised learning. You knew how much each house sold for, so in other words, you knew the answer to the problem and could work backwards from there to figure out the logic. To build your app, you feed your training data about each house into your machine learning algorithm. The algorithm is trying to figure out what kind of math needs to be done to make the numbers work out. This kind of like having the answer key to a math test with all the arithmetic symbols erased: From this, can you figure out what kind of math problems were on the test? You know you are supposed to “do something” with the numbers on the left to get each answer on the right. In supervised learning, you are letting the computer work out that relationship for you. And once you know what math was required to solve this specific set of problems, you could answer to any other problem of the same type! Let’s go back to our original example with the real estate agent. What if you didn’t know the sale price for each house? Even if all you know is the size, location, etc of each house, it turns out you can still do some really cool stuff. This is called unsupervised learning. This is kind of like someone giving you a list of numbers on a sheet of paper and saying “I don’t really know what these numbers mean but maybe you can figure out if there is a pattern or grouping or something — good luck!” So what could do with this data? 
For starters, you could have an algorithm that automatically identified different market segments in your data. Maybe you’d find out that home buyers in the neighborhood near the local college really like small houses with lots of bedrooms, but home buyers in the suburbs prefer 3-bedroom houses with lots of square footage. Knowing about these different kinds of customers could help direct your marketing efforts. Another cool thing you could do is automatically identify any outlier houses that were way different than everything else. Maybe those outlier houses are giant mansions and you can focus your best sales people on those areas because they have bigger commissions. Supervised learning is what we’ll focus on for the rest of this post, but that’s not because unsupervised learning is any less useful or interesting. In fact, unsupervised learning is becoming increasingly important as the algorithms get better because it can be used without having to label the data with the correct answer. Side note: There are lots of other types of machine learning algorithms. But this is a pretty good place to start. As a human, your brain can approach most any situation and learn how to deal with that situation without any explicit instructions. If you sell houses for a long time, you will instinctively have a “feel” for the right price for a house, the best way to market that house, the kind of client who would be interested, etc. The goal of Strong AI research is to be able to replicate this ability with computers. But current machine learning algorithms aren’t that good yet — they only work when focused a very specific, limited problem. Maybe a better definition for “learning” in this case is “figuring out an equation to solve a specific problem based on some example data”. Unfortunately “Machine Figuring out an equation to solve a specific problem based on some example data” isn’t really a great name. So we ended up with “Machine Learning” instead. Of course if you are reading this 50 years in the future and we’ve figured out the algorithm for Strong AI, then this whole post will all seem a little quaint. Maybe stop reading and go tell your robot servant to go make you a sandwich, future human. So, how would you write the program to estimate the value of a house like in our example above? Think about it for a second before you read further. If you didn’t know anything about machine learning, you’d probably try to write out some basic rules for estimating the price of a house like this: If you fiddle with this for hours and hours, you might end up with something that sort of works. But your program will never be perfect and it will be hard to maintain as prices change. Wouldn’t it be better if the computer could just figure out how to implement this function for you? Who cares what exactly the function does as long is it returns the correct number: One way to think about this problem is that the price is a delicious stew and the ingredients are the number of bedrooms, the square footage and the neighborhood. If you could just figure out how much each ingredient impacts the final price, maybe there’s an exact ratio of ingredients to stir in to make the final price. That would reduce your original function (with all those crazy if’s and else’s) down to something really simple like this: Notice the magic numbers in bold — .841231951398213, 1231.1231231, 2.3242341421, and 201.23432095. These are our weights. 
If we could just figure out the perfect weights to use that work for every house, our function could predict house prices! A dumb way to figure out the best weights would be something like this: Start with each weight set to 1.0: Run every house you know about through your function and see how far off the function is at guessing the correct price for each house: For example, if the first house really sold for $250,000, but your function guessed it sold for $178,000, you are off by $72,000 for that single house. Now add up the squared amount you are off for each house you have in your data set. Let’s say that you had 500 home sales in your data set and the square of how much your function was off for each house was a grand total of $86,123,373. That’s how “wrong” your function currently is. Now, take that sum total and divide it by 500 to get an average of how far off you are for each house. Call this average error amount the cost of your function. If you could get this cost to be zero by playing with the weights, your function would be perfect. It would mean that in every case, your function perfectly guessed the price of the house based on the input data. So that’s our goal — get this cost to be as low as possible by trying different weights. Repeat Step 2 over and over with every single possible combination of weights. Whichever combination of weights makes the cost closest to zero is what you use. When you find the weights that work, you’ve solved the problem! That’s pretty simple, right? Well think about what you just did. You took some data, you fed it through three generic, really simple steps, and you ended up with a function that can guess the price of any house in your area. Watch out, Zillow! But here’s a few more facts that will blow your mind: Pretty crazy, right? Ok, of course you can’t just try every combination of all possible weights to find the combo that works the best. That would literally take forever since you’d never run out of numbers to try. To avoid that, mathematicians have figured out lots of clever ways to quickly find good values for those weights without having to try very many. Here’s one way: First, write a simple equation that represents Step #2 above: Now let’s re-write exactly the same equation, but using a bunch of machine learning math jargon (that you can ignore for now): This equation represents how wrong our price estimating function is for the weights we currently have set. If we graph this cost equation for all possible values of our weights for number_of_bedrooms and sqft, we’d get a graph that might look something like this: In this graph, the lowest point in blue is where our cost is the lowest — thus our function is the least wrong. The highest points are where we are most wrong. So if we can find the weights that get us to the lowest point on this graph, we’ll have our answer! So we just need to adjust our weights so we are “walking down hill” on this graph towards the lowest point. If we keep making small adjustments to our weights that are always moving towards the lowest point, we’ll eventually get there without having to try too many different weights. If you remember anything from Calculus, you might remember that if you take the derivative of a function, it tells you the slope of the function’s tangent at any point. In other words, it tells us which way is downhill for any given point on our graph. We can use that knowledge to walk downhill. 
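Since the equations for Step #2 did not survive into this copy, here are Steps 1 and 2 written out on toy data (Step 3, the search for better weights, is what the gradient descent sketch further below automates). The houses, the neighborhood score, and the starting weights are all made up; the cost is the average squared error described above.

def estimate_price(bedrooms, sqft, neighborhood_score, weights):
    """The 'stew' version of the function: a weighted sum of the ingredients."""
    w1, w2, w3, bias = weights
    return w1 * bedrooms + w2 * sqft + w3 * neighborhood_score + bias

def cost(houses, weights):
    """Step 2 from above: the average of the squared errors over every known sale."""
    total = 0.0
    for bedrooms, sqft, neighborhood_score, sold_for in houses:
        guess = estimate_price(bedrooms, sqft, neighborhood_score, weights)
        total += (guess - sold_for) ** 2
    return total / len(houses)

# Toy training data: (bedrooms, sqft, neighborhood score, actual sale price).
houses = [(3, 2000, 7, 250_000), (2, 1200, 5, 178_000), (4, 2600, 9, 340_000)]
print(cost(houses, weights=(1.0, 1.0, 1.0, 1.0)))  # Step 1: start with every weight at 1.0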
So if we calculate a partial derivative of our cost function with respect to each of our weights, then we can subtract that value from each weight. That will walk us one step closer to the bottom of the hill. Keep doing that and eventually we’ll reach the bottom of the hill and have the best possible values for our weights. (If that didn’t make sense, don’t worry and keep reading). That’s a high level summary of one way to find the best weights for your function called batch gradient descent. Don’t be afraid to dig deeper if you are interested on learning the details. When you use a machine learning library to solve a real problem, all of this will be done for you. But it’s still useful to have a good idea of what is happening. The three-step algorithm I described is called multivariate linear regression. You are estimating the equation for a line that fits through all of your house data points. Then you are using that equation to guess the sales price of houses you’ve never seen before based where that house would appear on your line. It’s a really powerful idea and you can solve “real” problems with it. But while the approach I showed you might work in simple cases, it won’t work in all cases. One reason is because house prices aren’t always simple enough to follow a continuous line. But luckily there are lots of ways to handle that. There are plenty of other machine learning algorithms that can handle non-linear data (like neural networks or SVMs with kernels). There are also ways to use linear regression more cleverly that allow for more complicated lines to be fit. In all cases, the same basic idea of needing to find the best weights still applies. Also, I ignored the idea of overfitting. It’s easy to come up with a set of weights that always works perfectly for predicting the prices of the houses in your original data set but never actually works for any new houses that weren’t in your original data set. But there are ways to deal with this (like regularization and using a cross-validation data set). Learning how to deal with this issue is a key part of learning how to apply machine learning successfully. In other words, while the basic concept is pretty simple, it takes some skill and experience to apply machine learning and get useful results. But it’s a skill that any developer can learn! Once you start seeing how easily machine learning techniques can be applied to problems that seem really hard (like handwriting recognition), you start to get the feeling that you could use machine learning to solve any problem and get an answer as long as you have enough data. Just feed in the data and watch the computer magically figure out the equation that fits the data! But it’s important to remember that machine learning only works if the problem is actually solvable with the data that you have. For example, if you build a model that predicts home prices based on the type of potted plants in each house, it’s never going to work. There just isn’t any kind of relationship between the potted plants in each house and the home’s sale price. So no matter how hard it tries, the computer can never deduce a relationship between the two. So remember, if a human expert couldn’t use the data to solve the problem manually, a computer probably won’t be able to either. Instead, focus on problems where a human could solve the problem, but where it would be great if a computer could solve it much more quickly. 
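Here is a compact version of the batch gradient descent recipe described above, applied to a tiny made-up housing dataset. The feature scale (size in thousands of square feet, price in thousands of dollars), the learning rate, and the iteration count are arbitrary choices made so the example converges; it is a sketch of the idea, not production code.

import numpy as np

# Features: bedrooms and size (thousands of square feet); target: price in $1000s.
X = np.array([[3, 2.0], [2, 1.2], [4, 2.6], [3, 1.8], [5, 3.1]], dtype=float)
y = np.array([250, 178, 340, 225, 405], dtype=float)

X = np.hstack([X, np.ones((len(X), 1))])   # extra column of 1s for the bias weight
weights = np.ones(X.shape[1])              # start with every weight set to 1.0
learning_rate = 0.01

for step in range(50_000):
    guesses = X @ weights                  # current estimates for every house
    errors = guesses - y
    # Partial derivative of the average squared error with respect to each weight.
    gradient = 2 * X.T @ errors / len(y)
    weights -= learning_rate * gradient    # one small step "downhill"

print("learned weights:", weights)
print("cost:", np.mean((X @ weights - y) ** 2))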
In my mind, the biggest problem with machine learning right now is that it mostly lives in the world of academia and commercial research groups. There isn’t a lot of easy to understand material out there for people who would like to get a broad understanding without actually becoming experts. But it’s getting a little better every day. If you want to try out what you’ve learned in this article, I made a course that walks you through every step of this article, including writing all the code. Give it a try! If you want to go deeper, Andrew Ng’s free Machine Learning class on Coursera is pretty amazing as a next step. I highly recommend it. It should be accessible to anyone who has a Comp. Sci. degree and who remembers a very minimal amount of math. Also, you can play around with tons of machine learning algorithms by downloading and installing SciKit-Learn. It’s a python framework that has “black box” versions of all the standard algorithms. If you liked this article, please consider signing up for my Machine Learning is Fun! Newsletter: Also, please check out the full-length course version of this article. It covers everything in this article in more detail, including writing the actual code in Python. You can get a free 30-day trial to watch the course if you sign up with this link. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun Part 2! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it. " Shivon Zilis,1.2K,10,https://medium.com/@shivon/the-current-state-of-machine-intelligence-f76c20db2fe1?source=tag_archive---------1----------------,The Current State of Machine Intelligence – Shivon Zilis – Medium,"(The 2016 Machine Intelligence landscape and post can be found here) I spent the last three months learning about every artificial intelligence, machine learning, or data related startup I could find — my current list has 2,529 of them to be exact. Yes, I should find better things to do with my evenings and weekends but until then... Why do this? A few years ago, investors and startups were chasing “big data” (I helped put together a landscape on that industry). Now we’re seeing a similar explosion of companies calling themselves artificial intelligence, machine learning, or somesuch — collectively I call these “machine intelligence” (I’ll get into the definitions in a second). Our fund, Bloomberg Beta, which is focused on the future of work, has been investing in these approaches. I created this landscape to start to put startups into context. I’m a thesis-oriented investor and it’s much easier to identify crowded areas and see white space once the landscape has some sort of taxonomy. What is “machine intelligence,” anyway? I mean “machine intelligence” as a unifying term for what others call machine learning and artificial intelligence. (Some others have used the term before, without quite describing it or understanding how laden this field has been with debates over descriptions.) 
I would have preferred to avoid a different label but when I tried either “artificial intelligence” or “machine learning” both proved to too narrow: when I called it “artificial intelligence” too many people were distracted by whether certain companies were “true AI,” and when I called it “machine learning,” many thought I wasn’t doing justice to the more “AI-esque” like the various flavors of deep learning. People have immediately grasped “machine intelligence” so here we are. ☺ Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have been of a lesser focus). Machine intelligence technologies cut across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks). All of these technologies are reflected on this landscape. What this landscape doesn’t include, however important, is “big data” technologies. Some have used this term interchangeably with machine learning and artificial intelligence, but I want to focus on the intelligence methods rather than data, storage, and computation pieces of the puzzle for this landscape (though of course data technologies enable machine intelligence). Which companies are on the landscape? I considered thousands of companies, so while the chart is crowded it’s still a small subset of the overall ecosystem. “Admissions rates” to the chart were fairly in line with those of Yale or Harvard, and perhaps equally arbitrary. ☺ I tried to pick companies that used machine intelligence methods as a defining part of their technology. Many of these companies clearly belong in multiple areas but for the sake of simplicity I tried to keep companies in their primary area and categorized them by the language they use to describe themselves (instead of quibbling over whether a company used “NLP” accurately in its self-description). If you want to get a sense for innovations at the heart of machine intelligence, focus on the core technologies layer. Some of these companies have APIs that power other applications, some sell their platforms directly into enterprise, some are at the stage of cryptic demos, and some are so stealthy that all we have is a few sentences to describe them. The most exciting part for me was seeing how much is happening in the application space. These companies separated nicely into those that reinvent the enterprise, industries, and ourselves. If I were looking to build a company right now, I’d use this landscape to help figure out what core and supporting technologies I could package into a novel industry application. Everyone likes solving the sexy problems but there are an incredible amount of ‘unsexy’ industry use cases that have massive market opportunities and powerful enabling technologies that are begging to be used for creative applications (e.g., Watson Developer Cloud, AlchemyAPI). Reflections on the landscape: We’ve seen a few great articles recently outlining why machine intelligence is experiencing a resurgence, documenting the enabling factors of this resurgence. (Kevin Kelly, for example chalks it up to cheap parallel computing, large datasets, and better algorithms.) I focused on understanding the ecosystem on a company-by-company level and drawing implications from that. Yes, it’s true, machine intelligence is transforming the enterprise, industries and humans alike. 
On a high level it’s easy to understand why machine intelligence is important, but it wasn’t until I laid out what many of these companies are actually doing that I started to grok how much it is already transforming everything around us. As Kevin Kelly more provocatively put it, “the business plans of the next 10,000 startups are easy to forecast: Take X and add AI”. In many cases you don’t even need the X — machine intelligence will certainly transform existing industries, but will also likely create entirely new ones. Machine intelligence is enabling applications we already expect like automated assistants (Siri), adorable robots (Jibo), and identifying people in images (like the highly effective but unfortunately named DeepFace). However, it’s also doing the unexpected: protecting children from sex trafficking, reducing the chemical content in the lettuce we eat, helping us buy shoes online that fit our feet precisely, and destroying 80's classic video games. Many companies will be acquired. I was surprised to find that over 10% of the eligible (non-public) companies on the slide have been acquired. It was in stark contrast to big data landscape we created, which had very few acquisitions at the time. No jaw will drop when I reveal that Google is the number one acquirer, though there were more than 15 different acquirers just for the companies on this chart. My guess is that by the end of 2015 almost another 10% will be acquired. For thoughts on which specific ones will get snapped up in the next year you’ll have to twist my arm... Big companies have a disproportionate advantage, especially those that build consumer products. The giants in search (Google, Baidu), social networks (Facebook, LinkedIn, Pinterest), content (Netflix, Yahoo!), mobile (Apple) and e-commerce (Amazon) are in an incredible position. They have massive datasets and constant consumer interactions that enable tight feedback loops for their algorithms (and these factors combine to create powerful network effects) — and they have the most to gain from the low hanging fruit that machine intelligence bears. Best-in-class personalization and recommendation algorithms have enabled these companies’ success (it’s both impressive and disconcerting that Facebook recommends you add the person you had a crush on in college and Netflix tees up that perfect guilty pleasure sitcom). Now they are all competing in a new battlefield: the move to mobile. Winning mobile will require lots of machine intelligence: state of the art natural language interfaces (like Apple’s Siri), visual search (like Amazon’s “FireFly”), and dynamic question answering technology that tells you the answer instead of providing a menu of links (all of the search companies are wrestling with this).Large enterprise companies (IBM and Microsoft) have also made incredible strides in the field, though they don’t have the same human-facing requirements so are focusing their attention more on knowledge representation tasks on large industry datasets, like IBM Watson’s application to assist doctors with diagnoses. The talent’s in the New (AI)vy League. In the last 20 years, most of the best minds in machine intelligence (especially the ‘hardcore AI’ types) worked in academia. They developed new machine intelligence methods, but there were few real world applications that could drive business value. 
Now that real world applications of more complex machine intelligence methods like deep belief nets and hierarchical neural networks are starting to solve real world problems, we’re seeing academic talent move to corporate settings. Facebook recruited NYU professors Yann LeCun and Rob Fergus to their AI Lab, Google hired University of Toronto’s Geoffrey Hinton, Baidu wooed Andrew Ng. It’s important to note that they all still give back significantly to the academic community (one of LeCun’s lab mandates is to work on core research to give back to the community, Hinton spends half of his time teaching, Ng has made machine intelligence more accessible through Coursera) but it is clear that a lot of the intellectual horsepower is moving away from academia. For aspiring minds in the space, these corporate labs not only offer lucrative salaries and access to the “godfathers” of the industry, but, the most important ingredient: data. These labs offer talent access to datasets they could never get otherwise (the ImageNet dataset is fantastic, but can’t compare to what Facebook, Google, and Baidu have in house). As a result, we’ll likely see corporations become the home of many of the most important innovations in machine intelligence and recruit many of the graduate students and postdocs that would have otherwise stayed in academia. There will be a peace dividend. Big companies have an inherent advantage and it’s likely that the ones who will win the machine intelligence race will be even more powerful than they are today. However, the good news for the rest of the world is that the core technology they develop will rapidly spill into other areas, both via departing talent and published research. Similar to the big data revolution, which was sparked by the release of Google’s BigTable and BigQuery papers, we will see corporations release equally groundbreaking new technologies into the community. Those innovations will be adapted to new industries and use cases that the Googles of the world don’t have the DNA or desire to tackle. Opportunities for entrepreneurs: “My company does deep learning for X” Few words will make you more popular in 2015. That is, if you can credibly say them. Deep learning is a particularly popular method in the machine intelligence field that has been getting a lot of attention. Google, Facebook, and Baidu have achieved excellent results with the method for vision and language based tasks and startups like Enlitic have shown promising results as well. Yes, it will be an overused buzzword with excitement ahead of results and business models, but unlike the hundreds of companies that say they do “big data”, it’s much easier to cut to the chase in terms of verifying credibility here if you’re paying attention. The most exciting part about the deep learning method is that when applied with the appropriate levels of care and feeding, it can replace some of the intuition that comes from domain expertise with automatically-learned features. The hope is that, in many cases, it will allow us to fundamentally rethink what a best-in-class solution is. As an investor who is curious about the quirkier applications of data and machine intelligence, I can’t wait to see what creative problems deep learning practitioners try to solve. I completely agree with Jeff Hawkins when he says a lot of the killer applications of these types of technologies will sneak up on us. I fully intend to keep an open mind. 
“Acquihire as a business model” People say that data scientists are unicorns in short supply. The talent crunch in machine intelligence will make it look like we had a glut of data scientists. In the data field, many people had industry experience over the past decade. Most hardcore machine intelligence work has only been in academia. We won’t be able to grow this talent overnight. This shortage of talent is a boon for founders who actually understand machine intelligence. A lot of companies in the space will get seed funding because there are early signs that the acquihire price for a machine intelligence expert is north of 5x that of a normal technical acquihire (take, for example, DeepMind, where the price per technical head was somewhere between $5–10M, if we choose to consider it in the acquihire category). I’ve had multiple friends ask me, only semi-jokingly, “Shivon, should I just round up all of my smartest friends in the AI world and call it a company?” To be honest, I’m not sure what to tell them. (At Bloomberg Beta, we’d rather back companies building for the long term, but that doesn’t mean this won’t be a lucrative strategy for many enterprising founders.) A good demo is disproportionately valuable in machine intelligence. I remember watching Watson play Jeopardy. When it struggled at the beginning I felt really sad for it. When it started trouncing its competitors I remember cheering it on as if it were the Toronto Maple Leafs in the Stanley Cup finals (disclaimers: (1) I was an IBMer at the time so was biased towards my team; (2) the Maple Leafs have not made the finals during my lifetime — yet — so that was purely a hypothetical). Why do these awe-inspiring demos matter? The last wave of technology companies to IPO didn’t have demos that most of us would watch, so why should machine intelligence companies? The last wave of companies were very computer-like: database companies, enterprise applications, and the like. Sure, I’d like to see a 10x more performant database, but most people wouldn’t care. Machine intelligence wins and loses on demos because 1) the technology is very human, enough to inspire shock and awe, 2) business models tend to take a while to form, so they need more funding for a longer period of time to get them there, and 3) they are fantastic acquisition bait. Watson beat the world’s best humans at trivia, even if it thought Toronto was a US city. DeepMind blew people away by beating video games. Vicarious took on CAPTCHA. There are a few companies still in stealth that promise to impress beyond that, and I can’t wait to see if they get there. Demo or not, I’d love to talk to anyone using machine intelligence to change the world. There’s no industry too unsexy, no problem too geeky. I’d love to be there to help so don’t be shy. I hope this landscape chart sparks a conversation. The goal is to make this a living document and I want to know if there are companies or categories missing. I welcome feedback and would like to put together a dynamic visualization where I can add more companies and dimensions to the data (methods used, data types, end users, investment to date, location, etc.) so that folks can interact with it to better explore the space. Questions and comments: Please email me. 
Thank you to Andrew Paprocki, Aria Haghighi, Beau Cronin, Ben Lorica, Doug Fulop, David Andrzejewski, Eric Berlow, Eric Jonas, Gary Kazantsev, Gideon Mann, Greg Smithies, Heidi Skinner, Jack Clark, Jon Lehr, Kurt Keutzer, Lauren Barless, Pete Skomoroch, Pete Warden, Roger Magoulas, Sean Gourley, Stephen Purpura, Wes McKinney, Zach Bogue, the Quid team, and the Bloomberg Beta team for your ever-helpful perspectives! Disclaimer: Bloomberg Beta is an investor in Adatao, Alation, Aviso, BrightFunnel, Context Relevant, Mavrx, Newsle, Orbital Insights, Pop Up Archive, and two others on the chart that are still undisclosed. We’re also investors in a few other machine intelligence companies that aren’t focusing on areas that were a fit for this landscape, so we left them off. For the full resolution version of the landscape please click here. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Partner at Bloomberg Beta. All about machine intelligence for good. Equal parts nerd and athlete. Straight up Canadian stereotype and proud of it. " AirbnbEng,369,11,https://medium.com/airbnb-engineering/architecting-a-machine-learning-system-for-risk-941abbba5a60?source=tag_archive---------2----------------,Architecting a Machine Learning System for Risk – Airbnb Engineering & Data Science – Medium,"By Naseem Hakim & Aaron Keys At Airbnb, we want to build the world’s most trusted community. Guests trust Airbnb to connect them with world-class hosts for unique and memorable travel experiences. Airbnb hosts trust that guests will treat their home with the same care and respect that they would their own. The Airbnb review system helps users find community members who earn this trust through positive interactions with others, and the ecosystem as a whole prospers. The overwhelming majority of web users act in good faith, but unfortunately, there exists a small number of bad actors who attempt to profit by defrauding websites and their communities. The trust and safety team at Airbnb works across many disciplines to help protect our users from these bad actors, ideally before they have the opportunity to impart negativity on the community. There are many different kinds of risk that online businesses may have to protect against, with varying exposure depending on the particular business. For example, email providers devote significant resources to protecting users from spam, whereas payments companies deal more with credit card chargebacks. We can mitigate the potential for bad actors to carry out different types of attacks in different ways. Many risks can be mitigated through user-facing changes to the product that require additional verification from the user. For example, requiring email confirmation, or implementing 2FA to combat account takeovers, as many banks have done. Scripted attacks are often associated with a noticeable increase in some measurable metric over a short period of time. For example, a sudden 1000% increase in reservations in a particular city could be a result of excellent marketing, or fraud. Fraudulent actors often exhibit repetitive patterns. As we recognize these patterns, we can apply heuristics to predict when they are about to occur again, and help stop them. For complex, evolving fraud vectors, heuristics eventually become too complicated and therefore unwieldy. In such cases, we turn to machine learning, which will be the focus of this blog post. 
For a more detailed look at other aspects of online risk management, check out Ohad Samet’s great ebook. Different risk vectors can require different architectures. For example, some risk vectors are not time critical, but require computationally intensive techniques to detect. An offline architecture is best suited for this kind of detection. For the purposes of this post, we are focusing on risks requiring realtime or near-realtime action. From a broad perspective, a machine-learning pipeline for these kinds of risk must balance two important goals: These may seem like competing goals, since optimizing for realtime calculations during a web transaction creates a focus on speed and reliability, whereas optimizing for model building and iteration creates more of a focus on flexibility. At Airbnb, engineering and data teams have worked closely together to develop a framework that accommodates both goals: a fast, robust scoring framework with an agile model-building pipeline. In keeping with our service-oriented architecture, we built a separate fraud prediction service to handle deriving all the features for a particular model. When a critical event occurs in our system, e.g., a reservation is created, we query the fraud prediction service for this event. This service can then calculate all the features for the “reservation creation” model, and send these features to our Openscoring service, which is described in more detail below. The Openscoring service returns a score and a decision based on a threshold we’ve set, and the fraud prediction service can then use this information to take action (i.e., put the reservation on hold). The fraud prediction service has to be fast, to ensure that we are taking action on suspicious events in near realtime. Like many of our backend services for which performance is critical, it is built in Java, and we parallelize the database queries necessary for feature generation. However, we also want the freedom to occasionally do some heavy computation in deriving features, so we run it asynchronously so that we are never blocking for reservations, etc. This asynchronous model works for many situations where a few seconds of delay in fraud detection has no negative effect. It’s worth noting, however, that there are cases where you may want to react in realtime to block transactions, in which case a synchronous query and precomputed features may be necessary. This service is built in a very modular way, and exposes an internal RESTful API, making adding new events and models easy. Openscoring is a Java service that provides a JSON REST interface to the Java Predictive Model Markup Language (PMML) evaluator JPMML. Both JPMML and Openscoring are open source projects released under the Apache 2.0 license and authored by Villu Ruusmann (edit — the most recent version is licensed under the AGPL 3.0). The JPMML backend of Openscoring consumes PMML, an XML markup language that encodes several common types of machine learning models, including tree models, logit models, SVMs and neural networks. We have streamlined Openscoring for a production environment by adding several features, including Kafka logging and statsd monitoring. Andy Kramolisch has modified Openscoring to permit using several models simultaneously. 
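To make the scoring flow described above more concrete, here is a minimal Python sketch of how a feature-deriving service might call an Openscoring-style REST endpoint. The host, model name, feature names, output field, and threshold are illustrative assumptions, not details from the post.

```python
import requests

# Hypothetical Openscoring-style endpoint; host, path, and model id are
# assumptions for illustration only.
OPENSCORING_URL = "http://localhost:8080/openscoring/model/reservation_creation"

def score_event(features):
    """Send derived features for one event and return (score, decision)."""
    payload = {"arguments": features}                # PMML field name -> value
    response = requests.post(OPENSCORING_URL, json=payload, timeout=2.0)
    response.raise_for_status()
    result = response.json().get("result", {})
    score = result.get("fraud_probability")          # illustrative output field
    decision = score is not None and score > 0.9     # threshold set by the caller
    return score, decision

# Example: features derived by the fraud prediction service for a new reservation
score, hold_reservation = score_event(
    {"account_age_days": 3, "num_prior_reservations": 0, "payment_country_mismatch": 1}
)
```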
As described below, there are several considerations that we weighed carefully before moving forward with Openscoring: After considering all of these factors, we decided that Openscoring best satisfied our two-pronged goal of having a fast and robust, yet flexible machine learning framework. A schematic of our model-building pipeline using PMML is illustrated above. The first step involves deriving features from the data stored on the site. Since the combination of features that gives the optimal signal is constantly changing, we store the features in a JSON format, which allows us to generalize the process of loading and transforming features, based on their names and types. We then transform the raw features through bucketing or binning values, and replacing missing values with reasonable estimates to improve signal. We also remove features that are shown to be statistically unimportant from our dataset. While we omit most of the details regarding how we perform these transformations for brevity here, it is important to recognize that these steps take a significant amount of time and care. We then use our transformed features to train and cross-validate the model using our favorite PMML-compatible machine learning library, and upload the PMML model to Openscoring. The final model is tested and then used for decision-making if it becomes the best performer. The model-training step can be performed in any language with a library that outputs PMML. One commonly used and well-supported library is the R PMML package. As illustrated below, generating a PMML with R requires very little code. This R script has the advantage of simplicity, and a script similar to this is a great way to start building PMMLs and to get a first model into production. In the long run, however, a setup like this has some disadvantages. First, our script requires that we perform feature transformation as a pre-processing step, and therefore we have to add these transformation instructions to the PMML by editing it afterwards. The R PMML package supports many PMML transformations and data manipulations, but it is far from universal. We deploy the model as a separate step — post model-training — and so we have to manually test it for validity, which can be a time-consuming process. Yet another disadvantage of R is that the implementation of the PMML exporter is somewhat slow for a random forest model with many features and many trees. However, we’ve found that simply re-writing the export function in C++ decreases run time by a factor of 10,000, from a few days to a few seconds. We can get around the drawbacks of R while maintaining its advantages by building a pipeline based on Python and scikit-learn. Scikit-learn is a Python package that supports many standard machine learning models, and includes helpful utilities for validating models and performing feature transformations. We find that Python is a more natural language than R for ad-hoc data manipulation and feature extraction. We automate the process of feature extraction based on a set of rules encoded in the names and types of variables in the features JSON; thus, new features can be incorporated into the model pipeline with no changes to the existing code. Deployment and testing can also be performed automatically in Python by using its standard network libraries to interface with Openscoring. Standard model performance tests (precision-recall, ROC curves, etc.) are carried out using sklearn’s built-in capabilities. 
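As a rough sketch of the kind of Python pipeline described above — convention-driven feature transformation followed by training and validation with scikit-learn — something like the following captures the shape of it. The naming conventions, model choice, placeholder data, and metrics are illustrative assumptions, not the production setup.

```python
import math
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import roc_auc_score, precision_recall_curve

def transform(raw):
    """Illustrative name/type-driven transformations on a features dict."""
    out = {}
    for name, value in raw.items():
        if value is None:
            out[name] = 0.0                      # replace missing values with an estimate
        elif name.endswith("_count"):
            out[name] = math.log1p(value)        # compress heavy-tailed counts
        else:
            out[name] = float(value)
    return out

print(transform({"login_count": 42, "account_age_days": None}))

# Placeholder arrays standing in for transformed features and fraud labels
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, 1000)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc"))  # cross-validation

model.fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(roc_auc_score(y_test, probs))                                       # held-out ROC AUC
precision, recall, _ = precision_recall_curve(y_test, probs)              # PR curve points
```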
Sklearn does not support PMML export out of the box, so we have written an in-house exporter for particular sklearn classifiers. When the PMML file is uploaded to Openscoring, it is automatically tested for correspondence with the scikit-learn model it represents. Because feature-transformation, model building, model validation, deployment and testing are all carried out in a single script, a data scientist or engineer is able to quickly iterate on a model based on new features or more recent data, and then rapidly deploy the new model into production. Although this blog post has focused mostly on our architecture and model building pipeline, the truth is that much of our time has been spent elsewhere. Our process was very successful for some models, but for others we encountered poor precision-recall. Initially we considered whether we were experiencing a bias or a variance problem, and tried using more data and more features. However, after finding no improvement, we started digging deeper into the data, and found that the problem was that our ground truth was not accurate. Consider chargebacks as an example. A chargeback can be “Not As Described (NAD)” or “Fraud” (this is a simplification), and grouping both types of chargebacks together for a single model would be a bad idea because legitimate users can file NAD chargebacks. This is an easy problem to resolve, and not one we actually had (agents categorize chargebacks as part of our workflow); however, there are other types of attacks where distinguishing legitimate activity from illegitimate is more subtle, which necessitated the creation of new data stores and logging pipelines. Most people who’ve worked in machine learning will find this obvious, but it’s worth re-stressing: your model is only as good as your ground truth. Towards this end, sometimes you don’t know what data you’re going to need until you’ve seen a new attack, especially if you haven’t worked in the risk space before, or have worked in the risk space but only in a different sector. So the best advice we can offer in this case is to log everything. Throw it all in HDFS, whether you need it now or not. In the future, you can always use this data to backfill new data stores if you find it useful. This can be invaluable in responding to a new attack vector. Although our current ML pipeline uses scikit-learn and Openscoring, our system is constantly evolving. Our current setup is a function of the stage of the company and the amount of resources, both in terms of personnel and data, that are currently available. Smaller companies may only have a few ML models in production and a small number of analysts, and can take time to manually curate data and train the model in many non-standardized steps. Larger companies might have many, many models and require a high degree of automation, and get a sizable boost from online training. A unique challenge of working at a hyper-growth company is that the landscape fundamentally changes year-over-year, and pipelines need to adjust to account for this. As our data and logging pipelines improve, investing in improved learning algorithms will become more worthwhile, and we will likely shift to testing new algorithms, incorporating online learning, and expanding on our model building framework to support larger data sets. Additionally, some of the most important opportunities to improve our models are based on insights into our unique data, feature selection, and other aspects of our risk systems that we are not able to share publicly. 
We would like to acknowledge the other engineers and analysts who have contributed to these critical aspects of this project. We work in a dynamic, highly-collaborative environment, and this project is an example of how engineers and data scientists at Airbnb work together to arrive at a solution that meets a diverse set of needs. If you’re interested in learning more, contact us about our data science and engineering teams! Originally published at nerds.airbnb.com on June 16, 2014. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Creative engineers and data scientists building a world where you can belong anywhere. http://airbnb.io " "Yingjie Miao ",43,6,https://medium.com/kifi-engineering/from-word2vec-to-doc2vec-an-approach-driven-by-chinese-restaurant-process-93d3602eaa31?source=tag_archive---------3----------------,From word2vec to doc2vec: an approach driven by Chinese restaurant process,"Google’s word2vec project has created lots of interest in the text mining community. It’s a neural network language model that is “both supervised and unsupervised”. Unsupervised in the sense that you only have to provide a big corpus, say English wiki. Supervised in the sense that the model cleverly generates supervised learning tasks from the corpus. How? Two approaches, known as Continuous Bag of Words (CBOW) and Skip-Gram (see Figure 1 in this paper). CBOW forces the neural net to predict the current word from surrounding words, and Skip-Gram forces the neural net to predict the surrounding words of the current word. Training is essentially a classic back-propagation method with a few optimization and approximation tricks (e.g. hierarchical softmax). Word vectors generated by the neural net have nice semantic and syntactic behaviors. Semantically, “iOS” is close to “Android”. Syntactically, “boys” minus “boy” is close to “girls” minus “girl”. One can check out more examples here. Although this provides high quality word vectors, there is still no clear way to combine them into a high quality document vector. In this article, we discuss one possible heuristic, inspired by a stochastic process called the Chinese Restaurant Process (CRP). The basic idea is to use CRP to drive a clustering process and sum word vectors in the right cluster. Imagine we have a document about a chicken recipe. It contains words like “chicken”, “pepper”, “salt”, “cheese”. It also contains words like “use”, “buy”, “definitely”, “my”, “the”. The word2vec model gives us a vector for each word. One could naively sum up every word vector as the doc vector. This clearly introduces lots of noise. A better heuristic is to use a weighted sum, based on other information like idf or Part of Speech (POS) tags. The question is: could we be more selective when adding terms? If this is a chicken recipe document, I shouldn’t even consider words like “definitely”, “use”, “my” in the summation. One can argue that idf based weights can significantly reduce the noise of boring words like “the” and “is”. However, for words like “definitely”, “overwhelming”, the idfs are not necessarily small as you would hope. It’s natural to think that if we can first group words into clusters, words like “chicken”, “pepper” may stay in one cluster, along with other clusters of “junk” words. If we can identify the “relevant” clusters, and only sum up word vectors from the relevant clusters, we should have a good doc vector. 
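Before getting to the clustering approach developed next, here is roughly what the naive and idf-weighted summation baselines above look like in Python; the vector lookup table and idf weights are assumed to come from a trained word2vec model and corpus statistics, so the names here are placeholders.

```python
import numpy as np

def doc_vector(words, word_vectors, idf=None):
    """Sum (optionally idf-weighted) word vectors to build a document vector.

    word_vectors: dict mapping word -> numpy array (e.g. loaded from a word2vec model)
    idf: optional dict mapping word -> inverse document frequency weight
    """
    vecs = []
    for w in words:
        if w not in word_vectors:
            continue                              # skip out-of-vocabulary words
        weight = idf.get(w, 1.0) if idf else 1.0  # naive sum when idf is None
        vecs.append(weight * word_vectors[w])
    return np.sum(vecs, axis=0) if vecs else None
```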
This boils down to clustering the words in the document. One can of course use off-the-shelf algorithms like K-means, but most of these algorithms require a distance metric. Word2vec behaves nicely under cosine similarity, but this doesn’t necessarily mean it behaves as well under Euclidean distance (even after projection to the unit sphere, it’s perhaps best to use geodesic distance.) It would be nice if we could directly work with cosine similarity. We have done a quick experiment on clustering words driven by a CRP-like stochastic process. It worked surprisingly well — so far. Now let’s explain CRP. Imagine you go to a (Chinese) restaurant. There are already n tables with different numbers of people. There is also an empty table. CRP has a hyperparameter r > 0, which can be regarded as the “imagined” number of people at the empty table. You go to one of the (n+1) tables with probability proportional to the existing number of people at the table. (For the empty table, the number is r.) If you go to one of the n existing tables, you are done. If you decide to sit down at the empty table, the Chinese restaurant will automatically create a new empty table. In that case, the next customer who comes in will choose from (n+2) tables (including the new empty table). Inspired by CRP, we tried the following variations of CRP to include the similarity factor. The common setup is the following: we are given M vectors to be clustered. We maintain two things: cluster sums (not centroids!), and the vectors in each cluster. We iterate through the vectors. For the current vector V, suppose we have n clusters already. Now we find the cluster C whose cluster sum is most similar to the current vector. Call this score sim(V, C). Variant 1: V creates a new cluster with probability 1/(1 + n). Otherwise V goes to cluster C. Variant 2: If sim(V, C) > 1/(1 + n), V goes to cluster C. Otherwise, with probability 1/(1+n) it creates a new cluster and with probability n/(1+n) it goes to C. In either of the two variants, if V goes to a cluster, we update the cluster sum and cluster membership. There is one distinct difference to traditional CRP: if we don’t go to the empty table, we deterministically go to the “most similar” table. In practice, we find these variants create similar results. One difference is that variant 1 tends to have more, smaller clusters, while variant 2 tends to have fewer but larger clusters. The examples below are from variant 2. For example, for a chicken recipe document, the clusters look like this: Apparently, the first cluster is most relevant. Now let’s take the cluster sum vector (which is the sum of all vectors from this cluster), and test if it really preserves the semantics. Below is a snippet of a Python console. We trained word vectors using the C implementation on a fraction of English Wiki, and read the model file using the Python library gensim.models.word2vec. c[0] below denotes cluster 0. It looks like the semantics are preserved well. It’s convincing that we can use this as the doc vector. The recipe document seems easy. Now let’s try something more challenging, like a news article. News articles tend to tell stories, and thus have less concentrated “topic words”. We tried the clustering on this article, titled “Signals on Radar Puzzle Officials in Hunt for Malaysian Jet”. We got 4 clusters: Again, this looks decent. Note that this is a simple 1-pass clustering process and we don’t have to specify the number of clusters! This could be very helpful for latency-sensitive services. There is still a missing step: how to find out the relevant cluster(s)? 
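Before turning to that question, here is a minimal Python sketch of the similarity-driven variant 2 described above; the cosine helper and cluster bookkeeping are simplified for illustration and follow the written description rather than any published implementation.

```python
import random
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def crp_cluster(vectors):
    """Single-pass, similarity-driven CRP-like clustering (variant 2)."""
    sums, members = [], []                 # cluster sum vectors and member indices
    for i, v in enumerate(vectors):
        if not sums:
            sums.append(v.copy()); members.append([i])
            continue
        # find the most similar existing cluster by its sum vector
        best = max(range(len(sums)), key=lambda c: cosine(v, sums[c]))
        n = len(sums)
        # join C if similar enough, otherwise open a new cluster with prob 1/(1+n)
        if cosine(v, sums[best]) > 1.0 / (1 + n) or random.random() > 1.0 / (1 + n):
            sums[best] += v; members[best].append(i)
        else:
            sums.append(v.copy()); members.append([i])
    return members
```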
We haven’t yet done extensive experiments on this part. A few heuristics to consider: There are other problems to think about: 1) how do we merge clusters? Based on similarity among cluster sum vectors? Or averaging similarity between cluster members? 2) what is the minimal set of words that can reconstruct cluster sum vector (in the sense of cosine similarity)? This could be used as a semantic keyword extraction method. Conclusion: Google’s word2vec provides powerful word vectors. We are interested in using these vectors to generate high quality document vectors in an efficient way. We tried a strategy based on a variant of Chinese Restaurant Process and obtained interesting results. There are some open problems to explore, and we would like to hear what you think. Appendix: python style pseudo-code for similarity driven CRP We wrote this post while working on Kifi — Connecting people with knowledge. Learn more. Originally published at eng.kifi.com on March 17, 2014. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. The Kifi Engineering Blog " Pinterest Engineering,113,6,https://medium.com/@Pinterest_Engineering/building-a-smarter-home-feed-ad1918fdfbe3?source=tag_archive---------4----------------,Building a smarter home feed – Pinterest Engineering – Medium,"Chris Pinchak | Pinterest engineer, Discovery The home feed should be a reflection of what each user cares about. Content is sourced from inputs such as people and boards the user follows, interests, and recommendations. To ensure we maintain fast, reliable and personalized home feeds, we built the smart feed with the following design values in mind: 1. Different sources of Pins should be mixed together at different rates. 2. Some Pins should be selectively dropped or deferred until a later time. Some sources may produce Pins of poor quality for a user, so instead of showing everything available immediately, we can be selective about what to show and what to hold back for a future session. 3. Pins should be arranged in the order of best-first rather than newest-first. For some sources, newer Pins are intuitively better, while for others, newness is less important. We shifted away from our previously time-ordered home feed system and onto a more flexible one. The core feature of the smart feed architecture is its separation of available, but unseen, content and content that’s already been presented to the user. We leverage knowledge of what the user hasn’t yet seen to our advantage when deciding how the feed evolves over time. Smart feed is a composition of three independent services, each of which has a specific role in the construction of a home feed. The smart feed worker is the first to process Pins and has two primary responsibilities — to accept incoming Pins and assign some score proportional to their quality or value to the receiving user, and to remember these scored Pins in some storage for later consumption. Essentially, the worker manages Pins as they become newly available, such as those from the repins of the people the user follows. Pins have varying value to the receiving user, so the worker is tasked with deciding the magnitude of their subjective quality. Incoming Pins are currently obtained from three separate sources: repins made by followed users, related Pins, and Pins from followed interests. Each is scored by the worker and then inserted into a pool for that particular type of pin. Each pool is a priority queue sorted on score and belongs to a single user. 
Newly added Pins mix with those added before, allowing the highest quality Pins to be accessible over time at the front of the queue. Pools can be implemented in a variety of ways so long as the priority queue requirement is met. We choose to do this by exploiting the key-based sorting of HBase. Each key is a combination of user, score and Pin such that, for any user, we may scan a list of available Pins according to their score. Newly added triples will be inserted at their appropriate location to maintain the score order. This combination of user, score, and Pin into a key value can be used to create a priority queue in other storage systems aside from HBase, a property we may use in the future depending on evolving storage requirements. Distinct from the smart feed worker, the smart feed content generator is concerned primarily with defining what “new” means in the context of a home feed. When a user accesses the home feed, we ask the content generator for new Pins since their last visit. The generator decides the quantity, composition, and arrangement of new Pins to return in response to this request. The content generator assembles available Pins into chunks for consumption by the user as part of their home feed. The generator is free to choose any arrangement based on a variety of input signals, and may elect to use some or all of the Pins available in the pools. Pins that are selected for inclusion in a chunk are thereafter removed from from the pools so they cannot be returned as part of subsequent chunks. The content generator is generally free to perform any rearrangements it likes, but is bound to the priority queue nature of the pools. When the generator asks for n pins from a pool, it’ll get the n highest scoring (i.e., best) Pins available. Therefore, the generator doesn’t need to concern itself with finding the best available content, but instead with how the best available content should be presented. In addition to providing high availability of the home feed, the smart feed service is responsible for combining new Pins returned by the content generator with those that previously appeared in the home feed. We can separate these into the chunk returned by the content generator and the materialized feed managed by the smart feed service. The materialized feed represents a frozen view of the feed as it was the last time the user viewed it. To the materialized Pins we add the Pins from the content generator in the chunk. The service makes no decisions about order, instead it adds the Pins in exactly the order given by the chunk. Because it has a fairly low rate of reading and writing, the materialized feed is likely to suffer from fewer availability events. In addition, feeds can be trimmed to restrict them to a maximum size. The need for less storage means we can easily increase the availability and reliability of the materialized feed through replication and the use of faster storage hardware. The smart feed service relies on the content generator to provide new Pins. If the generator experiences a degradation in performance, the service can gracefully handle the loss of its availability. In the event the content generator encounters an exception while generating a chunk, or if it simply takes too long to produce one, the smart feed service will return the content contained in the materialized feed. In this instance, the feed will appear to the end user as unchanged from last time. 
Future feed views will produce chunks as large as, or larger than, the last so that eventually the user will see new Pins. By moving to smart feed, we achieved the goals of a highly flexible architecture and better control over the composition of home feeds. The home feed is now powered by three separate services, each with a well-defined role in its production and distribution. The individual services can be altered or replaced with components that serve the same general purpose. The use of pools to buffer Pins according to their quality allows us a greater amount of control over the composition of home feeds. Continuing with this project, we intend to better model users’ preferences with respect to Pins in their home feeds. Our accuracy of recommendation quality varies considerably over our user base, and we would benefit from using preference information gathered from recent interactions with the home feed. Knowledge of personal preference will also help us order home feeds so the Pins of most value can be discovered with the least amount of effort. If you’re interested in tackling challenges and making improvements like this, join our team! Chris Pinchak is a software engineer at Pinterest. Acknowledgements: This technology was built in collaboration with Dan Feng, Dmitry Chechik, Raghavendra Prabhu, Jeremy Carroll, Xun Liu, Varun Sharma, Joe Lau, Yuchen Liu, Tian-Ying Chang, and Yun Park. This team, as well as people from across the company, helped make this project a reality with their technical insights and invaluable feedback. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Inventive engineers building the first visual discovery engine, 100 billion ideas and counting. https://careers.pinterest.com/careers/engineering " Nikhil Dandekar,116,3,https://towardsdatascience.com/what-makes-a-good-data-scientist-engineer-a8b4d7948a86?source=tag_archive---------5----------------,What makes a good data scientist/engineer? – Towards Data Science,"The term data scientist has been used lately to describe a wide variety of skills & roles. In this post I will focus on a particular flavor of data scientist. I will talk about the qualities needed to be a good data scientist-engineer who ships relevance products to users. Some examples of relevance products are: These folks need to be strong at data science and engineering to be successful. Some places call these folks as Machine Learning engineers since most of the work they do involves Machine Learning. More generally, I feel relevance engineer is a good term to describe them. Relevance engineers have a common set of skills that they draw upon to get their jobs done. The list below doesn’t include some of the known, obvious skills. You obviously need to be smart. You obviously need to have (or be able to learn quickly) the required “book” knowledge. But beyond that, there are a bunch of not-so-obvious skills that you can’t learn from a book. Here are some of those, in no particular order: This list is by no means exhaustive, but does capture some of the qualities of the smartest folks I have worked with. Happy to hear what you think. Thanks to Peter Bailey and Andrew Hogue for feedback on the initial revisions. *In this post, feature means a software feature, not a machine learning feature. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Engineering Manager doing Machine Learning @ Google. Previously worked on ML and search at Quora, Foursquare and Bing. 
Sharing concepts, ideas, and codes. " Jeff Smith,20,7,https://medium.com/data-engineering/modeling-madly-8b2c72eb52be?source=tag_archive---------6----------------,Modeling Madly – Data Engineering – Medium,"I recently wrapped up my second hackathon at Intent Media. You can see my summary of one of our previous hackathons here. These past two hackathons I’ve taken on some slightly different challenges than people usually go after in a hackathon: developing new machine learning models. While I‘ve been working on data science and machine learning systems for a while, I’ve found that trying to do so under extreme constraints can be a distinctly different experience. A very good data hacker can easily find themselves with a great idea at a hackathon but with little to nothing to demo at the end. Accepting that my personal experience is just my own, let me offer three tips for building new models at a hackathon. When you’re doing a more traditional web app hack at a hackathon, you can almost run out of time and still come up with something pretty good as long as you get that last bug fixed before the demo. This is a great characteristic to build into the plan of a hack but one that simply does not apply to a machine learning hack. Think about what happens when you do find that last bug in a machine learning project. You still need to potentially do all of the below: That’s no “just hit refresh” workflow. Even with a well-oiled workflow, some of those tasks can take all of the time your average one-day hackathon is scheduled for. Take #3, for example. Training a production grade model using, say, Hadoop, can take a lot of time, even if you have the cash to spin up a fair-sized cluster of EC2 instances. What that means for your hack can vary, but you’re just asking for trouble if you don’t start with that fact taken into account in the scope and goals of your project. A solid project design is absolutely crucial, if you’re going to hope to take all of the little steps involved in getting your model ready to demo. Which leads me to my next point... One of the best things about working in data science is all of the really smart people. But, of course, the corollary is that one of the worst things about working in data science is all of the really smart people. Sharp engineers and data scientists can take the nugget of an idea and envision a useful, powerful suite of products that would take years to build, which is not so useful when you have a day or two. Mature dataists know just how much ambition is too much and plan accordingly. I happen to be lucky enough to work with some very smart and very mature data scientists and engineers, so this has not been a problem for either of my last few hacks. But, I’m just lucky that way. You might not be so lucky. Unrealistic ambitions are a constant danger in a machine learning hack, running along the edge of all activities like a precipice beckoning you to dive off and see where you land. If you take one thing away from this post, let it be this: don’t dive off the cliff. Just don’t do it. You won’t like where you land. You’ll wind up with more questions than answers and you’ll have nothing to show come demo time. Moreover, your fellow devs who worked on apps and not models will simply not understand what you spent your time on. What does a precipice look like? It could be a novel distance metric. It could be a fundamental improvement to a widely used technique like SVRs. Or it could just be something really benign sounding like a longer training set. 
I would say that even choosing to pose the problem as a regression one instead of a classification one could qualify. The danger originates in the intrinsic tension between the rigorous and exploratory mode of academic data science/machine learning education and the pedal-to-the-metal pace mandated by a hackathon. They are very different modes of working, and you’re just going to have suspend some of your good habits for a day or so, if you want to have something to demo. This last point can be the trickiest to put in practice, but I think it can totally be the difference between a project that feels like a hack and one that feels like just getting warmed up on a weeklong story. If you’ve figured out how to scope your project appropriately and designed something that can really be built in a day or two, you can still actually fail to do so. I think it can the difference can easily come down to technology choices. For example, I currently make my living writing Cascalog, Clojure, and Java on top of Hadoop to process files stored in S3. I know these tools well enough to pay my rent, but I would absolutely hesitate to use any of them in a tight-paced context. I have spent weeks trying to understand a single Cascalog bug. Seriously. If you know the language, Python offers an unbeatable value proposition for this use case. scikit-learn has nearly everything you could imagine needing. pandas, NumPy, and SciPy are all sitting there to be brought in when appropriate. And don’t forget how awesome it can be to prototype in a purpose-built exploratory development environment like IPython. But this is machine learning, and sometimes our data is just big. Maybe even web scale. Some people hate these phrases, but they serve a purpose. We don’t all use Hadoop out of love for horrendously complex Java applications. Big data is not just statistics on a Mac Pro, although it can often look like that. Scale can be a real necessity even in a hackathon. When it is, there are no easy answers. If you’re lucky, maybe you can actually work with multiple hour model learning times. If you’re really lucky, you might be using Spark and not Hadoop, in which case it might not take hours to learn your model. My point is that, insofar as you have a choice, choose the leaner meaner tool, the one that will let you do more with less input required from you. Don’t use that C++ library that promises awesome runtime but with Python bindings that you’ve never tried. You’ll never figure out its quirks in time. Write as little data cleanup code as you can manage. Commands like dropna can save you precious minutes to hours. And if you can get your data from database or an API instead of files, then, for the love of Cthulhu, do it. Hell, even if you have to load your data from files to a database first, it might be worth your time. SQL is one of the highest productivity rapid prototyping tools I know. And though I love to bash on the clunkiness of Hadoop, there are even ways of taking some serious pain out of using it under pressure. Depending on what you’re doing Elastic Map Reduce or PredictionIO can get you to the point of being productive much faster. I love hackathons and their variations. They remind me of the fun old days in grad school, furiously hacking away to come up with something interesting to say about definitionally uncertain stuff. The furious pace and the pragmatic compromises are part of the fun. Compared to things like pitch events, hackathons have way less problems (even if they have their issues as well). 
At their best they’re about the love of unconstrained creation. I’ve tried to do machine learning hacks because it’s just so damn cool to go from zero to having a program that makes decisions. It amazes me every time it works, and doubly so when I can manage to get something working on a deadline. Taking on a challenge like building a new model in a hackathon is also a great learning experience, especially if you get to work as part of a strong team. Machine learning in the real world is an even larger topic than its academic cousin, and there are always interesting things to learn. Hackathons can be great places to rapidly iterate through approaches and learn from your teammates how to build things better and faster. That’s pretty likely to come in handy sometime. The main part of the post is over, but I wanted to make sure to leave a note for anyone who was interested in what we hack at Intent Media (or what we build for our customers). We’re hiring all sorts of smart people to build systems for machine learning and more. Please reach out if you want to hear more about how and why we do what we do. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Author of Reactive Machine Learning Systems @ManningBooks. Building AIs for fun and profit. Friend of animals. Laying the foundation of tomorrow’s big data " Chris Jagers,45,5,https://medium.com/@chrisjagers/the-wolfram-language-b853337f8427?source=tag_archive---------7----------------,Explaining the Wolfram Language – Chris Jagers – Medium,"Many people are already familiar with Apple’s voice search called Siri, or the search engine behind it called Wolfram Alpha. This search engine can use natural language to search vast sets of data and even compute math. However, this is just a tiny fraction of what the language can do, and I don’t even think it’s a good introduction to what’s possible. To understand the raw power of the underlying technology, you really have to understand what it is and a little about how it works. The Wolfram website has wonderful documentation and explanations, but for the uninitiated it can seem bewildering. They have repackaged the language in so many different ways that it can be hard for the beginner to understand exactly what it is. That’s why I want to venture my own introduction. Let’s start with its origins. Mathematica was designed as a desktop tool for computational research and exploration. It continued evolving, and the breakthrough was realizing that its symbols could be anything: images, sounds, algorithms, geometry, data-sets ... anything. So, it became more than just a language. Stephen Wolfram calls this a knowledge-based language because it has smart-objects built in that can be computed. The language doesn’t simply find results, it computes results into actual models, analysis and other symbolic objects. The real power is that the results remain symbolic objects that can be further manipulated symbolically (i.e., embedded in another symbolic object, operated on). In short, anything can be computed. Pretty abstract, I know. Don’t worry, we’ll get to examples soon. The actual syntax is a combination of Objects and Operators which are grouped and ordered by square brackets [ ]. The stuff at the center of the formula gets read first and then it expands out like a Russian doll. Out of many potential examples, I have carefully selected one from their site to illustrate its simplicity and power. Let’s say we want our system to determine the difference between poetry and prose. 
This would be difficult to program directly because there are so many variables and the differences are so subtle. With Wolfram Language, that hard stuff becomes easy. You can train it to recognize the difference very quickly. Here’s how it works; let’s use Shakespeare as an example: First, scan all of Hamlet and call that type of stuff prose. Then scan all of Shakespeare’s Sonnets and call that stuff poetry. Easy. Next, train the system with machine learning: Classify and Predict are the two big functions. We want to Classify which is poetry and which is prose. Wolfram looks at our situation and instantly determines that the Markov Method is the best for differentiating among all the subtle differences between prose and poetry. That’s it. Any system using this bit of training will automatically be able to detect the difference between poetry and prose with a high degree of accuracy. The key to this accuracy is the size of the data set. You really need at least millions of data points to train it reliably. But with Wolfram, many of those data sets are already built in. Easy. This is just one tiny example to illustrate what the language looks like and how it goes beyond symbols to work with computable objects. We could continue translating poems into interactive maps, and interactions into music, and so on. How does Wolfram compare with other products like Apache Hadoop? Well, it’s a totally different thing. In those products everything is manual. The various axes (and all the variables) are manually defined. Instead, Wolfram intelligently applies formulas and makes choices to optimize results based on specific conditions. It makes the hard stuff automatic. Plus, it’s capable of much more than machine learning; that’s just one example of hundreds: sound, 3D geometry, language, images, etc. — and a mixture of them all. Mathematica is still the most powerful and polished way to access the Wolfram Language. Their new Programming Cloud (and other cloud offerings) signal serious intent to move to the web, but it is still early days. The language is very mature for desktop exploration, and some companies have even made Mathematica applications for small-scale internal use, which can be quite useful. Even though the Wolfram website has signaled intent to make it more broadly deployable within commercial services, I don’t think this is the proper way to use the language. Within my own company, we find Wolfram extremely handy for research, but not for deployment within a web-based product. In short, it isn’t performant: commercial products require more than a powerful language; they are made within an ecosystem of services and vendors that all have to work together. Without machine learning built into the native cloud where data is stored, it can’t be deployed in a SaaS product in a way that lives up to expectations. While Stephen Wolfram would love for his language to be used within commercial products, I think he resents having to play nice with lower-level languages. His alternative of making API requests across the web isn’t a good way to embed intelligence within products. And I don’t think we will ever see entire SaaS products built entirely with a functional language. Programming is the art of automation. The Wolfram Community is full of very smart people using the language for research and exploration. They represent the cutting edge of computation. 
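For readers more at home in Python, here is a rough scikit-learn analogue of the poetry-vs-prose workflow described earlier. This is not Wolfram Language code; the file names are placeholders, and unlike Wolfram's Classify, which picks a method automatically, the vectorizer and classifier here are chosen by hand.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Placeholder corpora: lines of Hamlet labeled "prose", lines of the Sonnets labeled "poetry"
prose = open("hamlet.txt").read().splitlines()
poetry = open("sonnets.txt").read().splitlines()
texts = prose + poetry
labels = ["prose"] * len(prose) + ["poetry"] * len(poetry)

# Bag-of-words features plus a simple probabilistic classifier
classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(texts, labels)

print(classifier.predict(["Shall I compare thee to a summer's day?"]))
```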
Personally, I’m looking forward to when we can see intelligence woven into commercial and consumer products that solve real problems for people on a daily basis. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. CEO of Learning Machine. www.learningmachine.com " John Wittenauer,2,9,https://medium.com/@jdwittenauer/machine-learning-exercises-in-python-part-1-60db0df846a4?source=tag_archive---------8----------------,"Machine Learning Exercises In Python, Part 1 – John Wittenauer – Medium","This content originally appeared on Curious Insight This post is part of a series covering the exercises from Andrew Ng’s machine learning class on Coursera. The original code, exercise text, and data files for this post are available here. Part 1 — Simple Linear RegressionPart 2 — Multivariate Linear RegressionPart 3 — Logistic RegressionPart 4 — Multivariate Logistic RegressionPart 5 — Neural NetworksPart 6 — Support Vector MachinesPart 7 — K-Means Clustering & PCAPart 8 — Anomaly Detection & Recommendation One of the pivotal moments in my professional development this year came when I discovered Coursera. I’d heard of the “MOOC” phenomenon but had not had the time to dive in and take a class. Earlier this year I finally pulled the trigger and signed up for Andrew Ng’s Machine Learning class. I completed the whole thing from start to finish, including all of the programming exercises. The experience opened my eyes to the power of this type of education platform, and I’ve been hooked ever since. This blog post will be the first in a series covering the programming exercises from Andrew’s class. One aspect of the course that I didn’t particularly care for was the use of Octave for assignments. Although Octave/Matlab is a fine platform, most real-world “data science” is done in either R or Python (certainly there are other languages and tools being used, but these two are unquestionably at the top of the list). Since I’m trying to develop my Python skills, I decided to start working through the exercises from scratch in Python. The full source code is available at my IPython repo on Github. You’ll also find the data used in these exercises and the original exercise PDFs in sub-folders off the root directory if you’re interested. While I can explain some of the concepts involved in this exercise along the way, it’s impossible for me to convey all the information you might need to fully comprehend it. If you’re really interested in machine learning but haven’t been exposed to it yet, I encourage you to check out the class (it’s completely free and there’s no commitment whatsoever). With that, let’s get started! In the first part of exercise 1, we’re tasked with implementing simple linear regression to predict profits for a food truck. Suppose you are the CEO of a restaurant franchise and are considering different cities for opening a new outlet. The chain already has trucks in various cities and you have data for profits and populations from the cities. You’d like to figure out what the expected profit of a new food truck might be given only the population of the city that it would be placed in. Let’s start by examining the data which is in a file called “ex1data1.txt” in the “data” directory of my repository above. First we need to import a few libraries. Now let’s get things rolling. We can use pandas to load the data into a data frame and display the first few rows using the “head” function. 
(Note: Medium can't render tables — the full example is here) Another useful function that pandas provides out-of-the-box is the "describe" function, which calculates some basic statistics on a data set. This is helpful to get a "feel" for the data during the exploratory analysis stage of a project. (Note: Medium can't render tables — the full example is here) Examining stats about your data can be helpful, but sometimes you need to find ways to visualize it too. Fortunately this data set only has two variables (one independent, one dependent), so we can toss it in a scatter plot to get a better idea of what it looks like. We can use the "plot" function provided by pandas for this, which is really just a wrapper for matplotlib. It really helps to actually look at what's going on, doesn't it? We can clearly see that there's a cluster of values around cities with smaller populations, and a somewhat linear trend of increasing profit as the size of the city increases. Now let's get to the fun part — implementing a linear regression algorithm in Python from scratch! If you're not familiar with linear regression, it's an approach to modeling the relationship between a dependent variable and one or more independent variables (if there's one independent variable then it's called simple linear regression, and if there's more than one independent variable then it's called multiple linear regression). There are lots of different types and variants of linear regression that are outside the scope of this discussion so I won't go into that here, but to put it simply — we're trying to create a *linear model* of the data X, using some number of parameters theta, that describes the variance of the data such that given a new data point that's not in X, we could accurately predict what the outcome y would be without actually knowing what y is. In this implementation we're going to use an optimization technique called gradient descent to find the parameters theta. If you're familiar with linear algebra, you may be aware that there's another way to find the optimal parameters for a linear model called the "normal equation", which basically solves the problem at once using a series of matrix calculations. However, the issue with this approach is that it doesn't scale very well for large data sets. In contrast, we can use variants of gradient descent and other optimization methods to scale to data sets of unlimited size, so for machine learning problems this approach is more practical. Okay, that's enough theory. Let's write some code. The first thing we need is a cost function. The cost function evaluates the quality of our model by calculating the error between our model's prediction for a data point, using the model parameters, and the actual data point. For example, if the actual profit for a given city is 4 and we predicted that it was 7, our error is (7–4)^2 = 3^2 = 9 (assuming an L2 or "least squares" loss function). We do this for each data point in X and sum the result to get the cost. Here's the function: Notice that there are no loops. We're taking advantage of numpy's linear algebra capabilities to compute the result as a series of matrix operations. This is far more computationally efficient than an unoptimized "for" loop. In order to make this cost function work seamlessly with the pandas data frame we created above, we need to do some manipulating. 
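The cost function just described was an embedded gist that is not captured in this text, so the following is a hedged reconstruction of what a vectorized version along those lines might look like, assuming X, y and theta end up as numpy matrices (which is how they are converted below):

```python
# A sketch of a vectorized least-squares cost; not necessarily the author's exact code.
import numpy as np

def compute_cost(X, y, theta):
    # Squared error for every data point at once, via matrix operations.
    inner = np.power((X * theta.T) - y, 2)
    # Mean of the squared errors, halved (the conventional 1/(2m) scaling).
    return np.sum(inner) / (2 * len(X))
```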
First, we need to insert a column of 1s at the beginning of the data frame in order to make the matrix operations work correctly (I won't go into detail on why this is needed, but it's in the exercise text if you're interested — basically it accounts for the intercept term in the linear equation). Second, we need to separate our data into independent variables X and our dependent variable y. Finally, we're going to convert our data frames to numpy matrices and instantiate a parameter matrix. One useful trick to remember when debugging matrix operations is to look at the shape of the matrices you're dealing with. It's also helpful to remember when walking through the steps in your head that matrix multiplications look like (i x j) * (j x k) = (i x k), where i, j, and k are the sizes of the respective dimensions of the matrices. ((97L, 2L), (1L, 2L), (97L, 1L)) Okay, so now we can try out our cost function. Remember the parameters were initialized to 0 so the solution isn't optimal yet, but we can see if it works. 32.072733877455676 So far so good. Now we need to define a function to perform gradient descent on the parameters *theta* using the update rules defined in the exercise text. Here's the function for gradient descent: The idea with gradient descent is that for each iteration, we compute the gradient of the error term in order to figure out the appropriate direction to move our parameter vector. In other words, we're calculating the changes to make to our parameters in order to reduce the error, thus bringing our solution closer to the optimal solution (i.e. best fit). This is a fairly complex topic and I could easily devote a whole blog post just to discussing gradient descent. If you're interested in learning more, I would recommend starting with this article and branching out from there. Once again we're relying on numpy and linear algebra for our solution. You may notice that my implementation is not 100% optimal. In particular, there's a way to get rid of that inner loop and update all of the parameters at once. I'll leave it up to the reader to figure it out for now (I'll cover it in a later post). Now that we've got a way to evaluate solutions, and a way to find a good solution, it's time to apply this to our data set. matrix([[-3.24140214, 1.1272942 ]]) Note that we've initialized a few new variables here. If you look closely at the gradient descent function, it has parameters called alpha and iters. Alpha is the learning rate — it's a factor in the update rule for the parameters that helps determine how quickly the algorithm will converge to the optimal solution. Iters is just the number of iterations. There is no hard and fast rule for how to initialize these parameters and typically some trial-and-error is involved. We now have a parameter vector describing what we believe is the optimal linear model for our data set. One quick way to evaluate just how good our regression model is might be to look at the total error of our new solution on the data set: 4.5159555030789118 That's certainly a lot better than 32, but it's not a very intuitive way to look at it. Fortunately we have some other techniques at our disposal. We're now going to use matplotlib to visualize our solution. Remember the scatter plot from before? Let's overlay a line representing our model on top of a scatter plot of the data to see how well it fits. 
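For reference, the gradient descent routine described above (another gist not captured in this text) might be reconstructed along these lines, reusing the compute_cost sketch from earlier. The deliberately unvectorized inner loop mirrors the author's note that it could be removed:

```python
# A hedged reconstruction, assuming X (97 x 2), y (97 x 1) and theta (1 x 2) are numpy matrices.
import numpy as np

def gradient_descent(X, y, theta, alpha, iters):
    theta = np.matrix(theta, dtype=float)       # work on a float copy of the parameters
    cost = np.zeros(iters)                      # cost at every iteration, handy for plotting later
    for i in range(iters):
        error = (X * theta.T) - y               # residuals under the current parameters
        for j in range(theta.shape[1]):         # update each parameter in turn (could be vectorized)
            term = np.multiply(error, X[:, j])
            theta[0, j] = theta[0, j] - (alpha / len(X)) * np.sum(term)
        cost[i] = compute_cost(X, y, theta)
    return theta, cost

# alpha (the learning rate) and iters are chosen by trial and error, e.g.:
# g, cost = gradient_descent(X, y, theta, alpha=0.01, iters=1000)
```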
We can use numpy's "linspace" function to create an evenly-spaced series of points within the range of our data, and then "evaluate" those points using our model to see what the expected profit would be. We can then turn it into a line graph and plot it. Not bad! Our solution looks like an optimal linear model of the data set. Since the gradient descent function also outputs a vector with the cost at each training iteration, we can plot that as well. Notice that the cost always decreases — this is an example of what's called a convex optimization problem. If you were to plot the entire solution space for the problem (i.e. plot the cost as a function of the model parameters for every possible value of the parameters) you would see that it looks like a "bowl" shape with a "basin" representing the optimal solution. That's all for now! In part 2 we'll finish off the first exercise by extending this example to more than 1 variable. I'll also show how the above solution can be reached by using a popular machine learning library called scikit-learn. To comment on this article, check out the original post at Curious Insight. Follow me on Twitter to get new post updates. Data scientist, engineer, author, investor, entrepreneur " Pinterest Engineering,25,7,https://medium.com/@Pinterest_Engineering/building-the-interests-platform-73a3a3755c21?source=tag_archive---------9----------------,Building the interests platform – Pinterest Engineering – Medium,"Ningning Hu | Pinterest engineer, Discovery The core value of Pinterest is to help people find the things they care about, by connecting them to Pins and people that relate to their interests. We're building a service that's powered by people, and supercharged with technology. The interest graph — the connections that make up the Pinterest index — creates bridges between Pins, boards, and Pinners. It's our job to build a system that helps people to collect the things they love, and connect them to communities of engaged people who share similar interests and can help them discover more. From categories like travel, fitness, and humor, to more niche areas like vintage motorcycles, craft beer, or Japanese architecture, we're building a visual discovery tool for all interests. The interests platform is built to support this vision. Specifically, it's responsible for producing high quality data on interests, interest relationships, and their association with Pins, boards, and Pinners. Figure 1: Feedback loop between machine intelligence and human curation In contrast with conventional methods of generating such data, which rely primarily on machine learning and data mining techniques, our system relies heavily on human curation. The ultimate goal is to build a system that's both machine and human powered, creating a feedback mechanism by which human curated data helps drive improvements in our machine algorithms, and vice versa. Figure 2: System components Raw input to the system includes existing data about Pins, boards, Pinners, and search queries, as well as explicit human curation signals about interests. With this data, we're able to construct a continuously evolving interest dictionary, which provides the foundation to support other key components, such as interest feeds, interest recommendations, and related interests. From a technology standpoint, interests are text strings that represent entities for which a group of Pinners might have a shared passion. 
We generated an initial collection of interests by extracting frequently occurring n-grams from Pin and board descriptions, as well as board titles, and filtering these n-grams using custom built grammars. While this approach provided a high coverage set of interests, we found many terms to be malformed phrases. For instance, we would extract phrases such as “lamborghini yellow” instead of “yellow lamborghini”. This proved problematic because we wanted interest terms to represent how Pinners would describe them, and so, we employed a variety of methods to eliminate malformed interests terms. We first compared terms with repeated search queries performed by a group of Pinners over a few months. Intuitively, this criterion matches well with the notion that an interest should be an entity for which a group of Pinners are passionate. Later we filtered the candidate set through public domain ontologies like Wikipedia titles. These ontologies were primarily used to validate proper nouns as opposed to common phrases, as all available ontologies represented only a subset of possible interests. This is especially true for Pinterest, where Pinners themselves curate special interests like “mid century modern style.” Finally, we also maintain an internal blacklist to filter abusive words and x-rated terms as well as Pinterest specific stop words, like “love”. This filtering is especially important to interest terms which might be recommended to millions of users. We arrived at a fair quality collection of interests following the above algorithmic approaches. In order to understand the quality of our efforts, we gave a 50,000 term subset of our collection to a third party vendor which used crowdsourcing to rate our data. To be rigorous, we composed a set of four criteria by which users would evaluate candidate Interests terms: - Is it English? - Is it a valid phrase in grammar? - Is it a standalone concept? - Is it a proper name? The crowdsourced ratings were both interesting if not somewhat expected. There was a low rate of agreement amongst raters, with especially high discrepancy in determining whether an interest’s term represented a “standalone concept.” Despite the ambiguity, we were able to confirm that 80% of the collection generated using the above algorithms satisfied our interests criteria. This type of effort, however, is not easy to scale. The real solution is to allow Pinners to provide both implicit and explicit signals to help us determine the validity of an interest. Implicit signals behaviors like clicking and viewing, while explicit signals include asking Pinners to specifically provide information (which can be actions like a thumbs up/thumbs down, starring, or skipping recommendations). To capture all the signals used for defining the collections of terms, we built a dictionary that stores all the data associated with each interest, including invalid interests and the reason why it’s invalid. This service plays a key role in human curation, by aggregating signals from different people. On top of this dictionary service, we can build different levels of reviewing system. With the Interests dictionary, we can associate Pins, boards, and Pinners with representative interests. One of the initial ways we experimented with this was launching a preview of a page where Pinners can explore their interests. Figure 3: Exploring interests In order to match interests to Pinners, we need to aggregate all the information related with a person’s interests. 
At its core, our system recommends interests based upon Pins with which a Pinner interacts. Every Pin on Pinterest has been collected and given context by someone who thinks it’s important, and in doing so, is helping other people discover great content. Each individual Pin is an incredibly rich source of data. As discussed in a previous blog post on discovery data model, one Pin often has multiple copies — different people may Pin it from different sources, and the same Pin can be repinned multiple times. During this process, each Pin accumulates numerous unique textual descriptions which allows us to connect Pins with interests terms with high precision. However, this conceptually simple process requires non-trivial engineering effort to scale to the amount of Pins and Pinners that the service has today. The data process pipeline (managed by Pinball) composes over 35 Hadoop jobs, and runs periodically to update the user-interest mapping to capture users’ latest interest information. The initial feedback on the explore interests page has been positive, proving the capabilities of our system. We’ll continue testing different ways of exposing a person’s interests and related content, based on implicit signals, as well as explicit signals (such as the ability to create custom categories of interests). Related interests are an important way of enabling the ability to browse interests and discover new ones. To compute related interests, we simply combine the co-occurrence relationship for interests computed at Pin and board levels. Figure 4: Computing related interests The quality of the related interests is surprisingly high given the simplicity of the algorithm. We attribute this effect to the cleanness of Pinterest data. Text data on Pins tend to be very concise, and contain less noise than other types of data, like web pages. Also, related interests calculation already makes use of boards, which are heavily curated by people (vs. machines) in regards to organizing related content. We find that utilizing the co-occurrence of interest terms at the level of both Pins and boards provides the best tradeoff between achieving high precision as well as recall when computing the related interests. One of the initial ways we began showing people related content was through related Pins. When you Pin an object, you’ll see a recommendation for a related board with that same Pin so you can explore similar objects. Additionally, if you scroll beneath a Pin, you’ll see Pins from other people who’ve also Pinned that original object. At this point, 90% of all Pins have related Pins, and we’ve seen 20% growth in engagement with related Pins in the last six months. Interests feeds provide Pinners with a continuous feed of Pins that are highly related. Our feeds are populated using a variety of sources, including search and through our annotation pipeline. A key property of the feed is flow. Only feeds with decent flow can attract Pinners to come back repeatedly, thereby maintaining high engagement. In order to optimize for our feeds, we’ve utilized a number of real-time indexing and retrieval systems, including real-time search, real-time annotating, and also human curation for some of the interests. To ensure quality, we need to guarantee quality from all sources. For that purpose, we measure the engagement of Pins from each source and address quality issue accordingly. 
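As a toy illustration of that co-occurrence idea (this is not Pinterest's pipeline, and the interest sets below are invented), counting how often two interest terms appear together on the same board or Pin, and summing the two levels, is already enough to produce a rough related-interests ranking:

```python
# Toy related-interests scoring by Pin- and board-level co-occurrence counts.
from collections import Counter
from itertools import combinations

board_interests = [
    {"craft beer", "home brewing", "ipa"},
    {"craft beer", "ipa", "beer labels"},
    {"mid century modern", "interior design"},
]
pin_interests = [
    {"craft beer", "ipa"},
    {"mid century modern", "teak furniture"},
]

cooccurrence = Counter()
for interests in board_interests + pin_interests:
    for a, b in combinations(sorted(interests), 2):
        cooccurrence[(a, b)] += 1

def related(term, top_n=3):
    scores = Counter()
    for (a, b), count in cooccurrence.items():
        if a == term:
            scores[b] += count
        elif b == term:
            scores[a] += count
    return scores.most_common(top_n)

print(related("craft beer"))   # e.g. [('ipa', 3), ('beer labels', 1), ('home brewing', 1)]
```

Combining counts from both levels, as the post describes, is what gives the precision/recall trade-off mentioned above.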
Figure 5: How interest feeds are generated Accurately capturing Pinner interests and interest relationships, and making this data understandable and actionable for tens of millions of people (collecting tens of billions of Pins), is not only an engineering challenge, but also a product design one. We’re just at the beginning, as we continue to improve the data and design ways to empower people to provide feedback that allows us to build a hybrid system combining machine and human curation to power discovery. Results of these effort will be reflected in future product releases. If you’re interested in building new ways of helping people discover the things they care about, join our team! Acknowledgements: The core team members for the interests backend platform are Ningning Hu, Leon Lin, Ryan Shih and Yuan Wei. Many other folks from other parts of the company, especially the discovery team and the infrastructure teams, have provided very useful feedback and help along the way to make the ongoing project successful. Ningning Hu is an engineer at Pinterest. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Inventive engineers building the first visual discovery engine, 100 billion ideas and counting. https://careers.pinterest.com/careers/engineering " Christopher Nguyen,991,8,https://medium.com/deep-learning-101/algorithms-of-the-mind-10eb13f61fc4?source=tag_archive---------0----------------,Algorithms of the Mind – Deep Learning 101 – Medium,"What Machine Learning Teaches Us About Ourselves Originally published at blog.arimo.com.Follow me on Twitter to keep informed of interesting developments on these topics. “Science often follows technology, because inventions give us new ways to think about the world and new phenomena in need of explanation.” Or so Aram Harrow, an MIT physics professor, counter-intuitively argues in “Why now is the right time to study quantum computing”. He suggests that the scientific idea of entropy could not really be conceived until steam engine technology necessitated understanding of thermodynamics. Quantum computing similarly arose from attempts to simulate quantum mechanics on ordinary computers. So what does all this have to do with machine learning? Much like steam engines, machine learning is a technology intended to solve specific classes of problems. Yet results from the field are indicating intriguing—possibly profound—scientific clues about how our own brains might operate, perceive, and learn. The technology of machine learning is giving us new ways to think about the science of human thought ... and imagination. Five years ago, deep learning pioneer Geoff Hinton (who currently splits his time between the University of Toronto and Google) published the following demo. Hinton had trained a five-layer neural network to recognize handwritten digits when given their bitmapped images. It was a form of computer vision, one that made handwriting machine-readable. But unlike previous works on the same topic, where the main objective is simply to recognize digits, Hinton’s network could also run in reverse. That is, given the concept of a digit, it can regenerate images corresponding to that very concept. We are seeing, quite literally, a machine imagining an image of the concept of “8”. The magic is encoded in the layers between inputs and outputs. These layers act as a kind of associative memory, mapping back-and-forth from image and concept, from concept to image, all in one neural network. 
But beyond the simplistic, brain-inspired machine vision technology here, the broader scientific question is whether this is how human imagination — visualization — works. If so, there’s a huge a-ha moment here. After all, isn’t this something our brains do quite naturally? When we see the digit 4, we think of the concept “4”. Conversely, when someone says “8”, we can conjure up in our minds’ eye an image of the digit 8. Is it all a kind of “running backwards” by the brain from concept to images (or sound, smell, feel, etc.) through the information encoded in the layers? Aren’t we watching this network create new pictures — and perhaps in a more advanced version, even new internal connections — as it does so? If visual recognition and imagination are indeed just back-and-forth mapping between images and concepts, what’s happening between those layers? Do deep neural networks have some insight or analogies to offer us here? Let’s first go back 234 years, to Immanuel Kant’s Critique of Pure Reason, in which he argues that “Intuition is nothing but the representation of phenomena”. Kant railed against the idea that human knowledge could be explained purely as empirical and rational thought. It is necessary, he argued, to consider intuitions. In his definitions, “intuitions” are representations left in a person’s mind by sensory perceptions, where as “concepts” are descriptions of empirical objects or sensory data. Together, these make up human knowledge. Fast forwarding two centuries later, Berkeley CS professor Alyosha Efros, who specializes in Visual Understanding, pointed out that “there are many more things in our visual world than we have words to describe them with”. Using word labels to train models, Efros argues, exposes our techniques to a language bottleneck. There are many more un-namable intuitions than we have words for. In training deep networks, such as the seminal “cat-recognition” work led by Quoc Le at Google/Stanford, we’re discovering that the activations in successive layers appear to go from lower to higher conceptual levels. An image recognition network encodes bitmaps at the lowest layer, then apparent corners and edges at the next layer, common shapes at the next, and so on. These intermediate layers don’t necessarily have any activations corresponding to explicit high-level concepts, like “cat” or “dog”, yet they do encode a distributed representation of the sensory inputs. Only the final, output layer has such a mapping to human-defined labels, because they are constrained to match those labels. Therefore, the above encodings and labels seem to correspond to exactly what Kant referred to as “intuitions” and “concepts”. In yet another example of machine learning technology revealing insights about human thought, the network diagram above makes you wonder whether this is how the architecture of Intuition — albeit vastly simplified — is being expressed. If — as Efros has pointed out — there are a lot more conceptual patterns than words can describe, then do words constrain our thoughts? This question is at the heart of the Sapir-Whorf or Linguistic Relativity Hypothesis, and the debate about whether language completely determines the boundaries of our cognition, or whether we are unconstrained to conceptualize anything — regardless of the languages we speak. In its strongest form, the hypothesis posits that the structure and lexicon of languages constrain how one perceives and conceptualizes the world. 
One of the most striking effects of this is demonstrated in the color test shown here. When asked to pick out the one square with a shade of green that’s distinct from all the others, the Himba people of northern Namibia — who have distinct words for the two shades of green — can find it almost instantly. The rest of us, however, have a much harder time doing so. The theory is that — once we have words to distinguish one shade from another, our brains will train itself to discriminate between the shades, so the difference would become more and more “obvious” over time. In seeing with our brain, not with our eyes, language drives perception. With machine learning, we also observe something similar. In supervised learning, we train our models to best match images (or text, audio, etc.) against provided labels or categories. By definition, these models are trained to discriminate much more effectively between categories that have provided labels, than between other possible categories for which we have not provided labels. When viewed from the perspective of supervised machine learning, this outcome is not at all surprising. So perhaps we shouldn’t be too surprised by the results of the color experiment above, either. Language does indeed influence our perception of the world, in the same way that labels in supervised machine learning influence the model’s ability to discriminate among categories. And yet, we also know that labels are not strictly required to discriminate between cues. In Google’s “cat-recognizing brain”, the network eventually discovers the concept of “cat”, “dog”, etc. all by itself — even without training the algorithm against explicit labels. After this unsupervised training, whenever the network is fed an image belonging to a certain category like “Cats”, the same corresponding set of “Cat” neurons always gets fired up. Simply by looking at the vast set of training images, this network has discovered the essential patterns of each category, as well as the differences of one category vs. another. In the same way, an infant who is repeatedly shown a paper cup would soon recognize the visual pattern of such a thing, even before it ever learns the words “paper cup” to attach that pattern to a name. In this sense, the strong form of the Sapir-Whorf hypothesis cannot be entirely correct — we can, and do, discover concepts even without the words to describe them. Supervised and unsupervised machine learning turn out to represent the two sides of the controversy’s coin. And if we recognized them as such, perhaps Sapir-Whorf would not be such a controversy, and more of a reflection of supervised and unsupervised human learning. I find these correspondences deeply fascinating — and we’ve only scratched the surface. Philosophers, psychologists, linguists, and neuroscientists have studied these topics for a long time. The connection to machine learning and computer science is more recent, especially with the advances in big data and deep learning. When fed with huge amounts of text, images, or audio data, the latest deep learning architectures are demonstrating near or even better-than-human performance in language translation, image classification, and speech recognition. Every new discovery in machine learning demystifies a bit more of what may be going on in our brains. We’re increasingly able to borrow from the vocabulary of machine learning to talk about our minds. Thanks to Sonal Chokshi and Vu Pham for extensive review & edits. Also, chrisjagers, chickamade. 
From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. @arimoinc CEO & Co-Founder. Leader, Entrepreneur, Hacker, Xoogler, Executive, Professor. #DataViz #ParallelComputing #DeepLearning & former #GoogleApps. Fundamentals and Latest Developments in #DeepLearning " Per Harald Borgen,2.1K,6,https://medium.com/learning-new-stuff/machine-learning-in-a-week-a0da25d59850?source=tag_archive---------1----------------,Machine Learning in a Week – Learning New Stuff – Medium,"Getting into machine learning (ml) can seem like an unachievable task from the outside. However, after dedicating one week to learning the basics of the subject, I found it to be much more accessible than I anticipated. This article is intended to give others who’re interested in getting into ml a roadmap of how to get started, drawing from the experiences I made in my intro week. Before my machine learning week, I had been reading about the subject for a while, and had gone through half of Andrew Ng’s course on Coursera and a few other theoretical courses. So I had a tiny bit of conceptual understanding of ml, though I was completely unable to transfer any of my knowledge into code. This is what I wanted to change. I wanted to be able to solve problems with ml by the end of the week, even through this meant skipping a lot of fundamentals, and going for a top-down approach, instead of bottoms up. After asking for advice on Hacker News, I came to the conclusion that Python’s Scikit Learn-module was the best starting point. This module gives you a wealth of algorithms to choose from, reducing the actual machine learning to a few lines of code. I started off the week by looking for video tutorials which involved Scikit Learn. I finally landed on Sentdex’s tutorial on how to use ml for investing in stocks, which gave me the necessary knowledge to move on to the next step. The good thing about the Sentdex tutorial is that the instructor takes you through all the steps of gathering the data. As you go along, you realize that fetching and cleaning up the data can be much more time consuming than doing the actually machine learning. So the ability to write scripts to scrape data from files or crawl the web are essential skills for aspiring machine learning geeks. I have re-watched several of the videos later on, to help me when I’ve been stuck with problems, so I’d recommend you to do the same. However, if you already know how to scrape data from websites, this tutorial might not be the perfect fit, as a lot of the videos evolve around data fetching. In that case, the Udacity’s Intro to Machine Learning might be a better place to start. Tuesday I wanted to see if I could use what I had learned to solve an actual problem. As another developer in my coding cooperative was working on Bank of England’s data visualization competition, I teamed up with him to check out the datasets the bank has released. The most interesting data was their household surveys. This is an annual survey the bank perform on a few thousand households, regarding money related subjects. The problem we decided to solve was the following: I played around with the dataset, spent a few hours cleaning up the data, and used the Scikit Learn map to find a suitable algorithm for the problem. We ended up with a success ratio at around 63%, which isn’t impressive at all. But the machine did at least manage to guess a little better than flipping a coin, which would have given a success rate at 50%. 
Seeing results is like fuel to your motivation, so I'd recommend doing this for yourself once you have a basic grasp of how to use Scikit Learn. After playing around with various Scikit Learn modules, I decided to try and write a linear regression algorithm from the ground up. I wanted to do this because I felt (and still feel) that I really don't understand what's happening under the hood. Luckily, the Coursera course goes into detail on how a few of the algorithms work, which came to great use at this point. More specifically, it describes the underlying concepts of using linear regression with gradient descent. This has definitely been the most effective learning technique, as it forces you to understand the steps that are going on 'under the hood'. I strongly recommend you do this at some point. I plan to rewrite my own implementations of more complex algorithms as I go along, but I prefer doing this after I've played around with the respective algorithms in Scikit Learn. On Thursday, I started doing Kaggle's introductory tutorials. Kaggle is a platform for machine learning competitions, where you can submit solutions to problems released by companies or organizations. I recommend trying out Kaggle after gaining a little bit of theoretical and practical understanding of machine learning. You'll need this in order to start using Kaggle. Otherwise, it will be more frustrating than rewarding. The Bag of Words tutorial guides you through every step you need to take in order to enter a submission to a competition, plus gives you a brief and exciting introduction to Natural Language Processing (NLP). I ended the tutorial with a much higher interest in NLP than I had when entering it. Friday, I continued working on the Kaggle tutorials, and also started Udacity's Intro to Machine Learning. I'm currently halfway through, and find it quite enjoyable. It's a lot easier than the Coursera course, as it doesn't go in depth on the algorithms. But it's also more practical, as it teaches you Scikit Learn, which is a whole lot easier to apply to the real world than writing algorithms from the ground up in Octave, as you do in the Coursera course. Doing this for a week hasn't just been great fun; it has also made me more aware of the usefulness of machine learning in society. The more I learn about it, the more I see which areas it can be used in to solve problems. Choose a top-down approach if you're not ready for the heavy stuff, and get into problem solving as quickly as possible. Good luck! Thanks for reading! My name is Per, I'm a co-founder of Scrimba — a better way to teach and learn code. If you've read this far, I'd recommend you check out this demo! Co-founder of Scrimba, the next-generation platform for teaching and learning code. https://scrimba.com. A publication about improving your technical skills. " Ahmed El Deeb,593,7,https://medium.com/rants-on-machine-learning/what-to-do-with-small-data-d253254d1a89?source=tag_archive---------2----------------,What to do with "small" data? – Rants on Machine Learning – Medium,"By Ahmed El Deeb Many technology companies now have teams of smart data-scientists, versed in big-data infrastructure tools and machine learning algorithms, but every now and then, a data set with very few data points turns up and none of these algorithms seem to be working properly anymore. What the hell is happening? What can you do about it? 
Most data science, relevance, and machine learning activities in technology companies have been focused around "Big Data" and scenarios with huge data sets. Sets where the rows represent documents, users, files, queries, songs, images, etc. Things that are in the thousands, hundreds of thousands, millions or even billions. The infrastructure, tools, and algorithms to deal with these kinds of data sets have been evolving very quickly and improving continuously during the last decade or so. And most data scientists and machine learning practitioners have gained experience in such situations, have grown accustomed to the appropriate algorithms, and gained good intuitions about the usual trade-offs (bias-variance, flexibility-stability, hand-crafted features vs. feature learning, etc.). But small data sets still arise in the wild every now and then, and often, they are trickier to handle, requiring a different set of algorithms and a different set of skills. Small data sets arise in several situations: Problems of small data are numerous, but mainly revolve around high variance: 1- Hire a statistician I'm not kidding! Statisticians are the original data scientists. The field of statistics was developed when data was much harder to come by, and as such was very aware of small-sample problems. Statistical tests, parametric models, bootstrapping, and other useful mathematical tools are the domain of classical statistics, not modern machine learning. Lacking a good general-purpose statistician, get a marine biologist, a zoologist, a psychologist, or anyone who was trained in a domain that deals with small-sample experiments. The closer to your domain the better. If you don't want to hire a statistician full time on your team, make it a temporary consultation. But hiring a classically trained statistician could be a very good investment. 2- Stick to simple models More precisely: stick to a limited set of hypotheses. One way to look at predictive modeling is as a search problem. From an initial set of possible models, which is the most appropriate model to fit our data? In a way, each data point we use for fitting down-votes all models that make it unlikely, and up-votes models that agree with it. When you have heaps of data, you can afford to explore huge sets of models/hypotheses effectively and end up with one that is suitable. When you don't have so many data points to begin with, you need to start from a fairly small set of possible hypotheses (e.g. the set of all linear models with 3 non-zero weights, the set of decision trees with depth <= 4, the set of histograms with 10 equally-spaced bins). This means that you rule out complex hypotheses like those that deal with non-linearity or feature interactions. This also means that you can't afford to fit models with too many degrees of freedom (too many weights or parameters). Whenever appropriate, use strong assumptions (e.g. no negative weights, no interaction between features, specific distributions, etc.) to restrict the space of possible hypotheses. 3- Pool data when possible Are you building a personalized spam filter? Try building it on top of a universal model trained for all users. Are you modeling GDP for a specific country? Try fitting your models on GDP for all countries for which you can get data, maybe using importance sampling to emphasize the country you're interested in. Are you trying to predict the eruptions of a specific volcano? ... you get the idea. 4- Limit Experimentation Don't over-use your validation set. 
If you try too many different techniques, and use a hold-out set to compare between them, be aware of the statistical power of the results you are getting, and be aware that the performance you are getting on this set is not a good estimator of out-of-sample performance. 5- Do clean up your data With small data sets, noise and outliers are especially troublesome. Cleaning up your data could be crucial here to get sensible models. Alternatively you can restrict your modeling to techniques especially designed to be robust to outliers (e.g. quantile regression). 6- Do perform feature selection I am not a big fan of explicit feature selection. I typically go for regularization and model averaging (the next two points) to avoid over-fitting. But if the data is truly limiting, sometimes explicit feature selection is essential. Wherever possible, use domain expertise to do feature selection or elimination, as brute-force approaches (e.g. all subsets or greedy forward selection) are as likely to cause over-fitting as including all features. 7- Do use Regularization Regularization is an almost-magical solution that constrains model fitting and reduces the effective degrees of freedom without reducing the actual number of parameters in the model. L1 regularization produces models with fewer non-zero parameters, effectively performing implicit feature selection, which could be desirable for explainability or performance in production, while L2 regularization produces models with more conservative (closer to zero) parameters and is effectively similar to having strong zero-centered priors for the parameters (in the Bayesian world). L2 is usually better for prediction accuracy than L1. 8- Do use Model Averaging Model averaging has similar effects to regularization in that it reduces variance and enhances generalization, but it is a generic technique that can be used with any type of model or even with heterogeneous sets of models. The downside here is that you end up with huge collections of models, which could be slow to evaluate or awkward to deploy to a production system. Two very reasonable forms of model averaging are Bagging and Bayesian model averaging. 9- Try Bayesian Modeling and Model Averaging Again, not a favorite technique of mine, but Bayesian inference may be well suited for dealing with smaller data sets, especially if you can use domain expertise to construct sensible priors. 10- Prefer Confidence Intervals to Point Estimates It is usually a good idea to get an estimate of confidence in your prediction in addition to producing the prediction itself. For regression analysis this usually takes the form of predicting a range of values that is calibrated to cover the true value 95% of the time, or in the case of classification it could be just a matter of producing class probabilities. This becomes more crucial with small data sets as it becomes more likely that certain regions in your feature space are less represented than others. Model averaging, as referred to in the previous two points, allows us to do that pretty easily in a generic way for regression, classification and density estimation. It is also useful to do that when evaluating your models. Producing confidence intervals on the metrics you are using to compare model performance is likely to save you from jumping to many wrong conclusions. This could be a somewhat long list of things to do or try, but they all revolve around three main themes: constrained modeling, smoothing and quantification of uncertainty. 
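As a minimal sketch of points 7 and 8 in scikit-learn (synthetic data, arbitrary regularization strengths): L1 tends to zero out coefficients, L2 merely shrinks them, and bagging averages many simple models fit on bootstrap samples.

```python
# Illustrative only: L1 vs. L2 regularization, plus bagging as model averaging.
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.randn(40, 10)                               # small sample, relatively many features
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.5 * rng.randn(40)

lasso = Lasso(alpha=0.1).fit(X, y)                  # L1: implicit feature selection
ridge = Ridge(alpha=1.0).fit(X, y)                  # L2: conservative, shrunken weights
print("non-zero L1 weights:", np.sum(lasso.coef_ != 0))

# Model averaging via bagging of a simple, constrained base model.
bagged = BaggingRegressor(DecisionTreeRegressor(max_depth=3),
                          n_estimators=50, random_state=0).fit(X, y)
```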
Most figures used in this post were taken from the book “Pattern Recognition and Machine Learning” by Christopher Bishop. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Relevance engineer. Machine Learning practitioner and hobbyist. Former entrepreneur. Rants about machine learning and its future " Matt Fogel,938,4,https://medium.com/swlh/the-7-best-data-science-and-machine-learning-podcasts-e8f0d5a4a419?source=tag_archive---------3----------------,The 7 Best Data Science and Machine Learning Podcasts,"Data science and machine learning have long been interests of mine, but now that I’m working on Fuzzy.ai and trying to make AI and machine learning accessible to all developers, I need to keep on top of all the news in both fields. My preferred way to do this is through listening to podcasts. I’ve listened to a bunch of machine learning and data science podcasts in the last few months, so I thought I’d share my favorites: A great starting point on some of the basics of data science and machine learning. Every other week, they release a 10–15 minute episode where hosts, Kyle and Linda Polich give a short primer on topics like k-means clustering, natural language processing and decision tree learning, often using analogies related to their pet parrot, Yoshi. This is the only place where you’ll learn about k-means clustering via placement of parrot droppings. Website | iTunes Hosted by Katie Malone and Ben Jaffe of online education startup Udacity, this weekly podcast covers diverse topics in data science and machine learning: teaching specific concepts like Hidden Markov Models and how they apply to real-world problems and datasets. They make complex topics extremely accessible. Website | iTunes Each week, hosts Chris Albon and Jonathon Morgan, both experienced technologists and data scientists, talk about the latest news in data science over drinks. Listening to Partially Derivative is a great way to keep up on the latest data news. Website | iTunes This podcast features Ben Lorica, O’Reilly Media’s Chief Data Scientist speaking with other experts about timely big data and data science topics. It can often get quite technical, but the topics of discussion are always really interesting. Website | iTunes Data Stories is a little more focused on data visualization than data science, but there is often some interesting overlap between the topics. Every other week, Enrico Bertini and Moritz Stefaner cover diverse topics in data with their guests. Recent episodes about smart cities and Nicholas Felton’s annual reports are particularly interesting. Website | iTunes Billing itself as “A Gentle Introduction to Artificial Intelligence and Machine Learning”, this podcast can still get quite technical and complex, covering topics like: “How to Reason About Uncertain Events using Fuzzy Set Theory and Fuzzy Measure Theory” and “How to Represent Knowledge using Logical Rules”. Website | iTunes The newest podcasts on this list, with 8 episodes released as of this writing. Every other week, hosts Katherine Gorman and Ryan Adams speak with a guest about their work, and news stories related to machine learning. Website | iTunes Feel I’ve unfairly left a podcast off this list? Leave me a note to let me know. Published in Startups, Wanderlust, and Life Hacking - From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Cofounder of @fuzzyai. Helping developers make their software smarter, faster. Medium's largest publication for makers. 
Subscribe to receive our top stories here → https://goo.gl/zHcLJi " Illia Polosukhin,255,3,https://medium.com/@ilblackdragon/tensorflow-tutorial-part-1-c559c63c0cb1?source=tag_archive---------4----------------,TensorFlow Tutorial— Part 1 – Illia Polosukhin – Medium,"UPD (April 20, 2016): Scikit Flow has been merged into TensorFlow since version 0.8 and is now called TensorFlow Learn or tf.learn. Google released a machine learning framework called TensorFlow and it's taking the world by storm. 10k+ stars on Github, a lot of publicity and general excitement among AI researchers. But how do you use it for the kind of everyday problem a data scientist may have? (And if you are an AI researcher, we will build up to interesting problems over time.) A reasonable question: why should you, as a data scientist who already has a number of tools in your toolbox (R, Scikit Learn, etc.), care about yet another framework? The answer has two parts: Let's start with a simple example — take the Titanic dataset from Kaggle. First, make sure you have installed TensorFlow and Scikit Learn with a few helpful libs, including Scikit Flow, which simplifies a lot of work with TensorFlow: You can get the dataset and the code from http://github.com/ilblackdragon/tf_examples Quick look at the data (use iPython or an iPython notebook for ease of interactive exploration): Let's test how we can predict the Survived class, based on float variables, in Scikit Learn: We separate the dataset into features and target, fill in N/A in the data with zeros and build a logistic regression. Predicting on the training data gives us some measure of accuracy (of course it doesn't properly evaluate the model quality, and a test dataset should be used, but for simplicity we will look at train only for now). Now using tf.learn (previously Scikit Flow): Congratulations, you just built your first TensorFlow model! TF.Learn is a library that wraps a lot of new TensorFlow APIs in a nice and familiar Scikit Learn API. TensorFlow is all about building and executing a graph. This is a very powerful concept, but it is also cumbersome to start with. Looking under the hood of TF.Learn, we just used three parts: Even as you get more familiar with TensorFlow, pieces of Scikit Flow will be useful (like graph_actions and layers and a host of other ops and tools). See future posts for examples of handling categorical variables, text and images. Part 2 — Deep Neural Networks, Custom TensorFlow models with Scikit Flow and Digit recognition with Convolutional Networks. Co-Founder @ NEAR.AI — teaching machines to code. I'm tweeting as @ilblackdragon. " Ahmed El Deeb,283,3,https://medium.com/rants-on-machine-learning/the-unreasonable-effectiveness-of-random-forests-f33c3ce28883?source=tag_archive---------5----------------,The Unreasonable Effectiveness of Random Forests – Rants on Machine Learning – Medium,"It's very common for machine learning practitioners to have favorite algorithms. It's a bit irrational, since no algorithm strictly dominates in all applications; the performance of ML algorithms varies wildly depending on the application and the dimensionality of the dataset. And even for a given problem and a given dataset, any single model will likely be beaten by an ensemble of diverse models trained by diverse algorithms anyway. But people have favorites nevertheless. 
Some like SVMs for the elegance of their formulation or the quality of the available implementations, some like decision rules for their simplicity and interpretability, and some are crazy about neural networks for their flexibility. My favorite out-of-the-box algorithm is (as you might have guessed) the Random Forest, and it's the second modeling technique I typically try on any given data set (after a linear model). This beautiful visualization from scikit-learn illustrates the modelling capacity of a decision forest: Here's a paper by Leo Breiman, the inventor of the algorithm, describing random forests. Here's another amazing paper by Rich Caruana et al. evaluating several supervised learning algorithms on many different datasets. Relevance engineer. Machine Learning practitioner and hobbyist. Former entrepreneur. Rants about machine learning and its future " Christophe Bourguignat,157,4,https://medium.com/@chris_bour/6-tricks-i-learned-from-the-otto-kaggle-challenge-a9299378cd61?source=tag_archive---------6----------------,6 Tricks I Learned From The OTTO Kaggle Challenge – Christophe Bourguignat – Medium,"Here are a few things I learned from the OTTO Group Kaggle competition. I had the chance to team up with the great Kaggle Master Xavier Conort, and the French community as a whole has been very active. Teaming up with Xavier was an opportunity to practice some ensembling techniques. We heavily used stacking. To an initial set of 93 features, we added new features: the predictions of N different classifiers (Random Forest, GBM, Neural Networks, ...). We then retrained P classifiers over the 93 + N features, and finally made a weighted average of the P outputs. We tested two tricks: This is one of the great functionalities of the latest scikit-learn version (0.16). It lets you rescale the classifier predictions by taking observations predicted within a segment (e.g. 0.3–0.4) and comparing them to the actual truth ratio of these observations (e.g. 0.23, which means that a rescaling is needed). Here is a mini notebook explaining how to use calibration, and demonstrating how well it worked on the OTTO challenge data. At the beginning of the competition, it quickly appeared that — once again — Gradient Boosting Trees was one of the best performing algorithms, provided that you find the right hyper parameters. In the scikit-learn implementation, the most important hyper parameters are learning_rate (the shrinkage parameter), n_estimators (the number of boosting stages), and max_depth (which limits the number of nodes in the tree; the best value depends on the interaction of the input variables). min_samples_split and min_samples_leaf can also be a way to control the depth of the trees for optimal performance. I also discovered that two other parameters were crucial for this competition. I must admit I had never paid attention to them before this challenge: namely subsample (the fraction of samples to be used for fitting the individual base learners), and max_features (the number of features to consider when looking for the best split). The problem was to find a way to quickly find the best hyperparameter combination. I first discovered GridSearchCV, which makes an exhaustive search over specified parameter ranges. As always with scikit-learn, it has a convenient programming interface, smoothly handling, for example, cross-validation and parallel distribution of the search. 
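As a hedged sketch of that search (not the author's code, and written against the current scikit-learn module layout rather than the 0.16-era sklearn.grid_search), tuning the hyper parameters listed above for a GradientBoostingClassifier could look like this; X_train and y_train stand in for the OTTO features and class labels:

```python
# Illustrative grid search over the GBM hyper parameters discussed above.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    'learning_rate': [0.05, 0.1],
    'n_estimators': [100, 300],
    'max_depth': [3, 6, 9],
    'subsample': [0.7, 1.0],
    'max_features': [0.3, 0.7, 1.0],
}

search = GridSearchCV(
    GradientBoostingClassifier(),
    param_grid,
    scoring='neg_log_loss',   # the OTTO competition was scored on multiclass log loss
    cv=3,
    n_jobs=-1,                # distribute the search in parallel
)
# search.fit(X_train, y_train)
# print(search.best_params_, search.best_score_)
```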
However, the number of parameters to tune, and their range, was too large to discover the best ones in the acceptable time frame I had in mind (typically while sleeping, i.e. 7 to 10 hours). I had to fall back on another option: I then used RandomizedSearchCV, which appeared in version 0.14. With this method, the search is done randomly on a subspace of parameters. It generally gives very good results, as described in this paper, and I was able to find a suitable parameter set within a few hours. Note that some competitors, like French Kaggler Amine, used Hyperopt for hyperparameter optimization. XGBoost is a Gradient Boosting implementation heavily used by Kagglers, and I now understand why. I had never used it before, but it was a hot topic discussed in the forum. I decided to have a look at it, even if its main interface is in R (there is a Python API, but I haven't used it yet). XGBoost is much faster than scikit-learn, and gave better predictions. It will definitely remain part of my toolbox. Someone posted on the forum: He was right. It was my opportunity to play with neural networks for the first time. Several implementations were used by the competitors: H2O, Keras, cxxnet, ... I personally used Lasagne. The main challenges were fine-tuning the number of layers, the number of neurons, dropout and the learning rate. Here is a notebook on what I learned. One of the secrets of the competition was to run the same algorithm several times, with random selections of observations and features, and take the average of the outputs. To do that easily, I discovered the scikit-learn BaggingClassifier meta-estimator. It hides the tedious complexity of looping over model fits, random subset selection, and averaging — and exposes easy fit() / predict_proba() entry points. Data enthusiast #BigData #DataScience #MachineLearning #FrenchData #Kaggle " I'Boss Potiwarakorn,128,3,https://medium.com/o-v-e-r-f-i-t-t-e-d/machine-learning-%E0%B9%80%E0%B8%A3%E0%B8%B5%E0%B8%A2%E0%B8%99%E0%B8%AD%E0%B8%B0%E0%B9%84%E0%B8%A3-%E0%B8%A3%E0%B8%B9%E0%B9%89%E0%B9%84%E0%B8%9B%E0%B8%97%E0%B8%B3%E0%B9%84%E0%B8%A1-4a1dcae85b96?source=tag_archive---------7----------------,"Machine Learning: What Does It Learn, and Why Should We Care? – O v e r f i t t e d – Medium","What exactly is Machine Learning, really? The first time I heard the term, I said to myself, "A machine that can learn on its own?" Whenever someone asked me "Do you know Machine Learning? What is it?", all I could say was "I've only heard the name." I sincerely hope that if you've wandered into this article, perhaps like me having only ever heard the name, then by the time you finish reading, if someone asks "Do you know Machine Learning? What is it?", you'll be confident enough to answer "Sure I do. Sit down, let me tell you about it." Before explaining what Machine Learning is, let me boast a little about why it matters first, and you'll see that Machine Learning has practically become part of everyday life. Machine Learning is very close to home: anyone who uses the Internet regularly benefits from Machine Learning almost every day, whether they realize it or not. For example, say we want to search for something with Google Search but aren't quite sure of the spelling, vaguely recalling that it's probably written like this: "machn leaning" (a slightly silly example, hahaha). It turns out... oh my goodness! It can guess what we meant. Is it a fortune teller or what!? Of course, the code is certainly not anything like the snippet below. So how does Google manage to read our minds? 
There are many more examples of Machine Learning in use, such as Spam Filtering, Face Recognition and Handwriting Recognition. Even marketing today has started to take advantage of Machine Learning, for instance in customer segmentation and customer churn prediction. And one thing that can't go unmentioned is Facebook's Friend Suggestions; I honestly don't know why it keeps suggesting pretty girls to me, but what I am sure of is that Facebook definitely doesn't have people sitting there picking them by hand. So how does it do it? Machine Learning, of course, is the answer. The human brain has many astonishing abilities: awareness, emotion, memory, control of the body, and the five senses that let us perceive the world. But some problems are complex and not well suited to being solved by the human brain alone. When we have to write a program that handles large amounts of data in many different forms, it is hard to understand the data and write a program that responds to it. When more data arrives with yet another shape, it is as if the requirements keep changing all the time, and we have to re-analyze the data and keep patching our program, which is very painful. Arthur Samuel, an American pioneer of computer gaming, Artificial Intelligence and Machine Learning, defined Machine Learning as "the field of study that gives computers the ability to learn without being explicitly programmed." That is, Machine Learning does not spell out in the program that, for any characteristics A and B, if the data looks like A do this and if it looks like B do that; instead, the program works out the relationships in the data and builds its own way of responding to it. Since the program can change how it responds to data on its own, we no longer have to keep analyzing the data and patching the program every time new data comes in. I used to wonder how Machine Learning, Artificial Intelligence and Data Mining differ from one another; they feel like very similar subjects. Then I learned that, in fact, both AI (Artificial Intelligence) and Data Mining make use of Machine Learning; what differs is the goal. AI focuses on building an Intelligent Agent, an entity that can think, and it does not necessarily require Machine Learning: even plain Search can be called AI if it responds intelligently. But "not necessary" doesn't mean "not used"; on the contrary, Machine Learning is used heavily in AI, to create new knowledge and to respond to situations beyond what was specified up front. Data Mining, on the other hand, is the analysis step of knowledge discovery: turning raw data into understandable information for future use. Data Mining draws on methods from AI, Machine Learning, statistics and database systems to arrive at insight. In short, the three fields are closely related and each borrows the others' methods, so it's no surprise they feel similar; it's just that their goals differ, which makes their methods not quite the same. Machine Learning algorithms are fundamentally divided into two types: supervised learning and unsupervised learning. Supervised learning is learning with guidance. Suppose we lose our memory and can no longer tell apples, mangoes and oranges apart. A kind doctor brings apples, mangoes and oranges for us to look at; all the fruit the doctor shows us is called training data, the data used to teach. The doctor starts by showing us many kinds of apples and saying "this is an apple." That is a label, a tag attached to each piece of data saying what it is. When we get to see the apples, we find that 
apples are red or green, and that an apple cut in half has a shape a little like a butterfly. These are called features, the properties of the data. The doctor then shows us mangoes and oranges, and we find that mangoes are green or yellow and fairly elongated, while oranges are orange and roughly oval. Once we have enough data, we start to be able to tell apples, mangoes and oranges apart. Later, if we come across an orange with a green peel, we may still guess that it is an orange because of its shape. This is one example of classification, a kind of supervised learning used with discrete data. Regression is the analysis used for continuous data. The picture above shows linear regression, the line of best fit, which is one kind of regression. You can see that we have training data that does not line up exactly on a straight line, yet a pattern and a trend are still visible. If we draw a line that keeps the total distance from the training data, which is the error, as small as possible, we get a model that can roughly predict further values of y. The statistician George E. P. Box once said: as you can see in the picture, the blue line is the model used to predict y for higher values of x, but hardly any point lands exactly on it, so we can only use it for rough predictions. If supervised learning is learning with guidance, unsupervised learning is marching off to face whatever comes: nobody is there to guide us, but we have to give it a go anyway. Where supervised learning has classification, unsupervised learning has clustering, and many people, myself included, wondered when they first met the two terms how exactly they differ. Classification assigns things to categories; clustering groups things together; they do sound similar. So let us go back to playing the amnesiac patient with the kind doctor once more. This time the doctor brings three more kinds of fruit that look nothing like apples, mangoes or oranges, but refuses to tell us anything at all about them; in other words, this fruit has no labels. Still, the doctor tells us to separate it into three groups. When we observe the features of this fruit, we can work out which pieces belong together. Once we have three groups, the doctor simply walks away without a word. In the end, all we know is which group each piece of fruit belongs to, but we cannot say what any of it actually is. That is clustering, which differs from classification, where we were told from the start what each kind of fruit was called. Very often data has many kinds of features, which is to say many dimensions, and that makes it hard to visualize. We should therefore reduce the dimensionality of the data while trying to preserve its original meaning. Dimensionality reduction (or dimension reduction) does exactly that; besides making the data easier to visualize, fewer dimensions mean fewer features, which improves performance and reduces space complexity as well. Beyond the basic categories of Machine Learning algorithms written above, there are also Semi-supervised Learning and Reinforcement Learning, which @konpat has written about. Finally, I believe Machine Learning is a hugely important field for computing today and in the future, and personally I think it is a very cool subject. If you are interested in Machine Learning, come and talk with me :D Software Choreographer at ThoughtWorks; Functional Programming, DevOps and Machine Learning Enthusiast a half-score-day-a-story blog by League of Machine Learning from Chulalongkorn University " samim,301,4,https://medium.com/@samim/generating-stories-about-images-d163ba41e4ed?source=tag_archive---------8----------------,Generating Stories about Images – samim – Medium,"Stories are a fundamental human tool that we use to communicate thought.
Creating a story about an image is a difficult task that many struggle with. New machine-learning experiments are enabling us to generate stories based on the content of images. This experiment explores how to generate little romantic stories about images (incl. guest star Taylor Swift). neural-storyteller is a recently published experiment by Ryan Kiros (University of Toronto). It combines recurrent neural networks (RNNs), skip-thought vectors and other techniques to generate little stories about images. Neural-storyteller's outputs are creative and often comedic. It is open-source. This experiment started by running 5,000 randomly selected web images through neural-storyteller and experimenting with hyper-parameters. neural-storyteller comes with two pre-trained models: one trained on 14 million passages of romance novels, the other trained on Taylor Swift lyrics. Inputs and outputs were manually filtered and recombined into two videos. The first uses the romance novel model, with voices generated by a text-to-speech system. The second uses the Taylor Swift model, combined with a well-known Swift instrumental. neural-storyteller gives us a fascinating glimpse into the future of storytelling. Even though these technologies are not fully mature yet, the art of storytelling is bound to change. In the near future, authors will be training custom models, combining styles across genres and generating text with images & sounds. Exploring this exciting new medium is rewarding! Designer & Code Magician. Working at the intersection of HCI, Machine Learning & Creativity. Building tools for Enlightenment. Narrative Engineering. " AirbnbEng,517,9,https://medium.com/airbnb-engineering/how-airbnb-uses-machine-learning-to-detect-host-preferences-18ce07150fa3?source=tag_archive---------9----------------,How Airbnb uses Machine Learning to Detect Host Preferences,"By Bar Ifrach. At Airbnb we seek to match people who are looking for accommodation — guests — with those looking to rent out their place — hosts. Guests reach out to hosts whose listings they wish to stay in; however, a match succeeds only if the host also wants to accommodate the guest. I first heard about Airbnb in 2012 from a friend. He offered his nice apartment on the site when he traveled to see his family during our vacations from grad school. His main goal was to fit as many booked nights as possible into the 1–2 weeks when he was away. My friend would accept or reject requests depending on whether or not the request would help him maximize his occupancy. About two years later, I joined Airbnb as a Data Scientist. I remembered my friend's behavior and was curious to discover what affects hosts' decisions to accept accommodation requests and how Airbnb could increase acceptances and matches on the platform. What started as a small research project resulted in the development of a machine learning model that learns our hosts' preferences for accommodation requests based on their past behavior. For each search query that a guest enters on Airbnb's search engine, our model computes the likelihood that relevant hosts will want to accommodate the guest's request. Then, we surface likely matches more prominently in the search results. In our A/B testing the model showed about a 3.75% increase in booking conversion, resulting in many more matches on Airbnb. In this blog post I outline the process that brought us to this model.
I kicked off my research into hosts’ acceptances by checking if other hosts maximized their occupancy like my friend. Every accommodation request falls in a sequence or in a window of available days in the calendar, such as on April 5–10 in the calendar shown below. The gray days surrounding the window are either blocked by the host or already booked. If accepted and booked, a request may leave the host with a sub-window before the check-in date (check-in gap — April 5–7) and/or a sub-window after the check-out (check-out gap — April 10). A host looking to have a high occupancy will try to avoid such gaps. Indeed, when I plotted hosts’ tendency to accept over the sum of the check-in gap and the check-out gap (3+1= 4 in the example above), as in the next plot, I found the effect that I expected to see: hosts were more likely to accept requests that fit well in their calendar and minimize gap days. But do all hosts try to maximize occupancy and prefer stays with short gaps? Perhaps some hosts are not interested in maximizing their occupancy and would rather host occasionally. And maybe hosts in big markets, like my friend, are different from hosts in smaller markets. Indeed, when I looked at listings from big and small markets separately, I found that they behaved quite differently. Hosts in big markets care a lot about their occupancy — a request with no gaps is almost 6% likelier to be accepted than one with 7 gap nights. For small markets I found the opposite effect; hosts prefer to have a small number of nights between requests. So, hosts in different markets have different preferences, but it seems likely that even within a market hosts may prefer different stays. A similar story revealed itself when I looked at hosts’ tendency to accept based on other characteristics of the accommodation request. For example, on average Airbnb hosts prefer accommodation requests that are at least a week in advance over last minute requests. But perhaps some hosts prefer short notice? The plot below looks at the dispersion of hosts’ preferences for last minute stays (less than 7 days) versus far in advance stays (more than 7 days). Indeed, the dispersion in preferences reveals that some hosts like last minute stays better than far in advance stays — those in the bottom right — even though on average hosts prefer longer notice. I found similar dispersion in hosts’ tendency to accept other trip characteristics like the number of guests, whether it is a weekend trip etc. All these findings pointed to the same conclusion: if we could promote in our search results hosts who would be more likely to accept an accommodation request resulting from that search query, we would expect to see happier guests and hosts and more matches that turned into fun vacations (or productive business trips). In other words, we could personalize our search results, but not in the way you might expect. Typically personalized search results promote results that would fit the unique preferences of the searcher — the guest. At a two-sided marketplace like Airbnb, we also wanted to personalize search by the preference of the hosts whose listings would appear in the search results. Encouraged by my findings, I joined forces with another data scientist and a software engineer to create a personalized search signal. We set out to associate hosts’ prior acceptance and decline decisions by the following characteristics of the trip: check-in date, check-out date and number of guests. 
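To make the calendar-gap calculation described above concrete, here is a minimal sketch. It is an illustration only, not Airbnb's code; the date conventions (window_end as the last open night, check_out as the departure day) are assumptions made for the sketch.

```python
from datetime import date

def gap_nights(window_start, window_end, check_in, check_out):
    """Open nights left on either side of a request that falls inside an
    available window. window_start/window_end: first and last open night.
    check_in/check_out: requested stay, with check_out as the departure day."""
    check_in_gap = (check_in - window_start).days          # open nights before the stay
    check_out_gap = (window_end - check_out).days + 1      # open nights after the stay
    return check_in_gap, check_out_gap

# The April example from the post: a window of open nights on April 5-10 and a
# request that checks in on the 8th and out on the 10th.
ci_gap, co_gap = gap_nights(date(2015, 4, 5), date(2015, 4, 10),
                            date(2015, 4, 8), date(2015, 4, 10))
print(ci_gap + co_gap)  # 3 + 1 = 4 gap nights
```

A host who cares about occupancy would tend to accept requests where this sum is small.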
By adding host preferences to our existing ranking model capturing guest preferences, we hoped to enable more and better matches. At first glance, this seems like a perfect case for collaborative filtering — we have users (hosts) and items (trips) and we want to understand the preference for those items by combining historical ratings (accept/decline) with statistical learning from similar hosts. However, the application does not fully fit in the collaborative filtering framework for two reasons. With these points in mind, we decided to massage the problem into something resembling collaborative filtering. We used the multiplicity of responses for the same trip to reduce the noise coming from the latent factors in the guest-host interaction. To do so, we considered hosts’ average response to a certain trip characteristic in isolation. Instead of looking at the combination of trip length, size of guest party, size of calendar gap and so on, we looked at each of these trip characteristics by itself. With this coarser structure of preferences we were able to resolve some of the noise in our data as well as the potentially conflicting labels for the same trip. We used the mean acceptance rate for each trip characteristic as a proxy for preference. Still our data-set was relatively sparse. On average, for each trip characteristic we could not determine the preference for about 26% of hosts, because they never received an accommodation request that met those trip characteristics. As a method of imputation, we smoothed the preference using a weight function that, for each trip characteristic, averages the median preference of hosts in the region with the host’s preference. The weight on the median preference is 1 when the host has no data points and goes to 0 monotonically the more data points the host has. Using these newly defined preferences we created predictions for host acceptances using a L-2 regularized logistic regression. Essentially, we combine the preferences for different trip characteristics into a single prediction for the probability of acceptance. The weight the preference of each trip characteristic has on the acceptance decision is the coefficient that comes out of the logistic regression. To improve the prediction, we include a few more geographic and host specific features in the logistic regression. This flow chart summarizes the modeling technique. We ran this model on segments of hosts on our cluster using a user-generated-function (UDF) on Hive. The UDF is written in Python; its inputs are accommodation requests, hosts’ response to them and a few other host features. Depending on the flag passed to it, the UDF either builds the preferences for the different trip characteristics or trains the logistic regression model using scikit-learn. Our main off-line evaluation metric for the model was mean squared error (MSE), which is more appropriate in a setting when we care about the predicted probability more than about classification. In our off-line evaluation of the model we were able to get a 10% decrease in MSE over our previous model that captured host acceptance probability. This was a promising result. But, we still had to test the performance of the model live on our site. To test the online performance of the model, we launched an experiment that used the predicted probability of host acceptance as a significant weight in our ranking algorithm that also includes many other features that capture guests’ preferences. 
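As a rough illustration of the modeling step described above (not Airbnb's production pipeline: the smoothing constant, feature layout and toy data are all assumptions), smoothed per-characteristic preferences fed into an L2-regularized logistic regression might be sketched in scikit-learn like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def smooth(host_rate, host_n, market_median, k=5.0):
    # Impute sparse preferences: lean on the market median when a host has few
    # observations, and on the host's own rate as data accumulates.
    w = host_n / (host_n + k)
    return w * host_rate + (1.0 - w) * market_median

# A host who accepted 2 of 3 requests with large calendar gaps, in a market
# where the median acceptance rate for such requests is 0.40:
gap_pref = smooth(host_rate=2 / 3, host_n=3, market_median=0.40)

# Toy training matrix: one row per (host, request); columns are the host's
# smoothed acceptance rates for the request's trip characteristics
# (calendar gap, lead time, trip length, party size).
X = np.array([
    [gap_pref, 0.65, 0.70, 0.60],
    [0.30,     0.55, 0.40, 0.50],
    [0.75,     0.20, 0.65, 0.55],
])
y = np.array([1, 0, 1])  # 1 = the host accepted that request

model = LogisticRegression(penalty='l2', C=1.0)  # L2 regularization
model.fit(X, y)
print(model.predict_proba(X)[:, 1])  # predicted acceptance probabilities
```

The fitted coefficients play the role described in the post: they say how much weight each trip characteristic carries in the acceptance decision.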
Every time a guest in the treatment group entered a search query, our model predicted the probability of acceptance for all relevant hosts and influenced the order in which listings were presented to the guest, ranking likelier matches higher. We evaluated the experiment by looking at multiple metrics, but the most important one was the likelihood that a guest requesting accommodation would get a booking (booking conversion). We found a 3.75% lift in our booking conversion and a significant increase in the number of successful matches between guests and hosts. After concluding the initial experiment, we made a few more optimizations that improved conversion by approximately another 1% and then launched the experiment to 100% of users. This was an exciting outcome for our first full-fledged personalization search signal and a sizable contributor to our success. First, this project taught us that in a two sided marketplace personalization can be effective on the buyer as well as the seller side. Second, the project taught us that sometimes you have to roll up your sleeves and build a machine learning model tailored for your own application. In this case, the application did not quite fit in the collaborative filtering and a multilevel model with host fixed-effect was too computationally demanding and not suited for a sparse data-set. While building our own model took more time, it was a fun learning experience. Finally, this project would not have succeeded without the fantastic work of Spencer de Mars and Lukasz Dziurzynski. Originally published at nerds.airbnb.com on April 14, 2015. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Creative engineers and data scientists building a world where you can belong anywhere. http://airbnb.io Creative engineers and data scientists building a world where you can belong anywhere. http://airbnb.io " Adam Geitgey,14.2K,15,https://medium.com/@ageitgey/machine-learning-is-fun-part-3-deep-learning-and-convolutional-neural-networks-f40359318721?source=tag_archive---------1----------------,Machine Learning is Fun! Part 3: Deep Learning and Convolutional Neural Networks,"Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 普通话, Русский, 한국어, Português, Tiếng Việt or Italiano. Are you tired of reading endless news stories about deep learning and not really knowing what that means? Let’s change that! This time, we are going to learn how to write programs that recognize objects in images using deep learning. In other words, we’re going to explain the black magic that allows Google Photos to search your photos based on what is in the picture: Just like Part 1 and Part 2, this guide is for anyone who is curious about machine learning but has no idea where to start. The goal is be accessible to anyone — which means that there’s a lot of generalizations and we skip lots of details. But who cares? If this gets anyone more interested in ML, then mission accomplished! (If you haven’t already read part 1 and part 2, read them now!) You might have seen this famous xkcd comic before. The goof is based on the idea that any 3-year-old child can recognize a photo of a bird, but figuring out how to make a computer recognize objects has puzzled the very best computer scientists for over 50 years. In the last few years, we’ve finally found a good approach to object recognition using deep convolutional neural networks. 
That sounds like a a bunch of made up words from a William Gibson Sci-Fi novel, but the ideas are totally understandable if you break them down one by one. So let’s do it — let’s write a program that can recognize birds! Before we learn how to recognize pictures of birds, let’s learn how to recognize something much simpler — the handwritten number “8”. In Part 2, we learned about how neural networks can solve complex problems by chaining together lots of simple neurons. We created a small neural network to estimate the price of a house based on how many bedrooms it had, how big it was, and which neighborhood it was in: We also know that the idea of machine learning is that the same generic algorithms can be reused with different data to solve different problems. So let’s modify this same neural network to recognize handwritten text. But to make the job really simple, we’ll only try to recognize one letter — the numeral “8”. Machine learning only works when you have data — preferably a lot of data. So we need lots and lots of handwritten “8”s to get started. Luckily, researchers created the MNIST data set of handwritten numbers for this very purpose. MNIST provides 60,000 images of handwritten digits, each as an 18x18 image. Here are some “8”s from the data set: The neural network we made in Part 2 only took in a three numbers as the input (“3” bedrooms, “2000” sq. feet , etc.). But now we want to process images with our neural network. How in the world do we feed images into a neural network instead of just numbers? The answer is incredible simple. A neural network takes numbers as input. To a computer, an image is really just a grid of numbers that represent how dark each pixel is: To feed an image into our neural network, we simply treat the 18x18 pixel image as an array of 324 numbers: The handle 324 inputs, we’ll just enlarge our neural network to have 324 input nodes: Notice that our neural network also has two outputs now (instead of just one). The first output will predict the likelihood that the image is an “8” and thee second output will predict the likelihood it isn’t an “8”. By having a separate output for each type of object we want to recognize, we can use a neural network to classify objects into groups. Our neural network is a lot bigger than last time (324 inputs instead of 3!). But any modern computer can handle a neural network with a few hundred nodes without blinking. This would even work fine on your cell phone. All that’s left is to train the neural network with images of “8”s and not-“8""s so it learns to tell them apart. When we feed in an “8”, we’ll tell it the probability the image is an “8” is 100% and the probability it’s not an “8” is 0%. Vice versa for the counter-example images. Here’s some of our training data: We can train this kind of neural network in a few minutes on a modern laptop. When it’s done, we’ll have a neural network that can recognize pictures of “8”s with a pretty high accuracy. Welcome to the world of (late 1980’s-era) image recognition! It’s really neat that simply feeding pixels into a neural network actually worked to build image recognition! Machine learning is magic! ...right? Well, of course it’s not that simple. First, the good news is that our “8” recognizer really does work well on simple images where the letter is right in the middle of the image: But now the really bad news: Our “8” recognizer totally fails to work when the letter isn’t perfectly centered in the image. 
Just the slightest position change ruins everything: This is because our network only learned the pattern of a perfectly-centered “8”. It has absolutely no idea what an off-center “8” is. It knows exactly one pattern and one pattern only. That’s not very useful in the real world. Real world problems are never that clean and simple. So we need to figure out how to make our neural network work in cases where the “8” isn’t perfectly centered. We already created a really good program for finding an “8” centered in an image. What if we just scan all around the image for possible “8”s in smaller sections, one section at a time, until we find one? This approach called a sliding window. It’s the brute force solution. It works well in some limited cases, but it’s really inefficient. You have to check the same image over and over looking for objects of different sizes. We can do better than this! When we trained our network, we only showed it “8”s that were perfectly centered. What if we train it with more data, including “8”s in all different positions and sizes all around the image? We don’t even need to collect new training data. We can just write a script to generate new images with the “8”s in all kinds of different positions in the image: Using this technique, we can easily create an endless supply of training data. More data makes the problem harder for our neural network to solve, but we can compensate for that by making our network bigger and thus able to learn more complicated patterns. To make the network bigger, we just stack up layer upon layer of nodes: We call this a “deep neural network” because it has more layers than a traditional neural network. This idea has been around since the late 1960s. But until recently, training this large of a neural network was just too slow to be useful. But once we figured out how to use 3d graphics cards (which were designed to do matrix multiplication really fast) instead of normal computer processors, working with large neural networks suddenly became practical. In fact, the exact same NVIDIA GeForce GTX 1080 video card that you use to play Overwatch can be used to train neural networks incredibly quickly. But even though we can make our neural network really big and train it quickly with a 3d graphics card, that still isn’t going to get us all the way to a solution. We need to be smarter about how we process images into our neural network. Think about it. It doesn’t make sense to train a network to recognize an “8” at the top of a picture separately from training it to recognize an “8” at the bottom of a picture as if those were two totally different objects. There should be some way to make the neural network smart enough to know that an “8” anywhere in the picture is the same thing without all that extra training. Luckily... there is! As a human, you intuitively know that pictures have a hierarchy or conceptual structure. Consider this picture: As a human, you instantly recognize the hierarchy in this picture: Most importantly, we recognize the idea of a child no matter what surface the child is on. We don’t have to re-learn the idea of child for every possible surface it could appear on. But right now, our neural network can’t do this. It thinks that an “8” in a different part of the image is an entirely different thing. It doesn’t understand that moving an object around in the picture doesn’t make it something different. This means it has to re-learn the identify of each object in every possible position. That sucks. 
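Here is a minimal sketch of the "generate shifted copies of the training images" script described above, using plain NumPy on a stand-in 18x18 image. A real script would pad or crop at the edges rather than wrap around as np.roll does.

```python
import numpy as np

def shifted_copies(image, max_shift=3):
    """Generate translated copies of a single 18x18 digit image so the
    training set contains the digit in many different positions.
    Note: np.roll wraps pixels around the edges; good enough for a sketch."""
    copies = []
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
            copies.append(shifted)
    return np.stack(copies)

eight = np.random.rand(18, 18)   # stand-in for a real "8" from MNIST
augmented = shifted_copies(eight)
print(augmented.shape)           # (49, 18, 18): the same digit in 49 positions
```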
We need to give our neural network understanding of translation invariance — an “8” is an “8” no matter where in the picture it shows up. We’ll do this using a process called Convolution. The idea of convolution is inspired partly by computer science and partly by biology (i.e. mad scientists literally poking cat brains with weird probes to figure out how cats process images). Instead of feeding entire images into our neural network as one grid of numbers, we’re going to do something a lot smarter that takes advantage of the idea that an object is the same no matter where it appears in a picture. Here’s how it’s going to work, step by step — Similar to our sliding window search above, let’s pass a sliding window over the entire original image and save each result as a separate, tiny picture tile: By doing this, we turned our original image into 77 equally-sized tiny image tiles. Earlier, we fed a single image into a neural network to see if it was an “8”. We’ll do the exact same thing here, but we’ll do it for each individual image tile: However, there’s one big twist: We’ll keep the same neural network weights for every single tile in the same original image. In other words, we are treating every image tile equally. If something interesting appears in any given tile, we’ll mark that tile as interesting. We don’t want to lose track of the arrangement of the original tiles. So we save the result from processing each tile into a grid in the same arrangement as the original image. It looks like this: In other words, we’ve started with a large image and we ended with a slightly smaller array that records which sections of our original image were the most interesting. The result of Step 3 was an array that maps out which parts of the original image are the most interesting. But that array is still pretty big: To reduce the size of the array, we downsample it using an algorithm called max pooling. It sounds fancy, but it isn’t at all! We’ll just look at each 2x2 square of the array and keep the biggest number: The idea here is that if we found something interesting in any of the four input tiles that makes up each 2x2 grid square, we’ll just keep the most interesting bit. This reduces the size of our array while keeping the most important bits. So far, we’ve reduced a giant image down into a fairly small array. Guess what? That array is just a bunch of numbers, so we can use that small array as input into another neural network. This final neural network will decide if the image is or isn’t a match. To differentiate it from the convolution step, we call it a “fully connected” network. So from start to finish, our whole five-step pipeline looks like this: Our image processing pipeline is a series of steps: convolution, max-pooling, and finally a fully-connected network. When solving problems in the real world, these steps can be combined and stacked as many times as you want! You can have two, three or even ten convolution layers. You can throw in max pooling wherever you want to reduce the size of your data. The basic idea is to start with a large image and continually boil it down, step-by-step, until you finally have a single result. The more convolution steps you have, the more complicated features your network will be able to learn to recognize. For example, the first convolution step might learn to recognize sharp edges, the second convolution step might recognize beaks using it’s knowledge of sharp edges, the third step might recognize entire birds using it’s knowledge of beaks, etc. 
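To make the max-pooling step above concrete, here is a tiny NumPy sketch of 2x2 max pooling, assuming the array's height and width are even:

```python
import numpy as np

def max_pool_2x2(activations):
    """Keep only the largest value in each non-overlapping 2x2 square."""
    h, w = activations.shape
    blocks = activations.reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

grid = np.array([
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 6],
    [2, 2, 7, 1],
])
print(max_pool_2x2(grid))
# [[4 2]
#  [2 7]]  -- each output cell is the most "interesting" value of its 2x2 block
```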
Here’s what a more realistic deep convolutional network (like you would find in a research paper) looks like: In this case, they start a 224 x 224 pixel image, apply convolution and max pooling twice, apply convolution 3 more times, apply max pooling and then have two fully-connected layers. The end result is that the image is classified into one of 1000 categories! So how do you know which steps you need to combine to make your image classifier work? Honestly, you have to answer this by doing a lot of experimentation and testing. You might have to train 100 networks before you find the optimal structure and parameters for the problem you are solving. Machine learning involves a lot of trial and error! Now finally we know enough to write a program that can decide if a picture is a bird or not. As always, we need some data to get started. The free CIFAR10 data set contains 6,000 pictures of birds and 52,000 pictures of things that are not birds. But to get even more data we’ll also add in the Caltech-UCSD Birds-200–2011 data set that has another 12,000 bird pics. Here’s a few of the birds from our combined data set: And here’s some of the 52,000 non-bird images: This data set will work fine for our purposes, but 72,000 low-res images is still pretty small for real-world applications. If you want Google-level performance, you need millions of large images. In machine learning, having more data is almost always more important that having better algorithms. Now you know why Google is so happy to offer you unlimited photo storage. They want your sweet, sweet data! To build our classifier, we’ll use TFLearn. TFlearn is a wrapper around Google’s TensorFlow deep learning library that exposes a simplified API. It makes building convolutional neural networks as easy as writing a few lines of code to define the layers of our network. Here’s the code to define and train the network: If you are training with a good video card with enough RAM (like an Nvidia GeForce GTX 980 Ti or better), this will be done in less than an hour. If you are training with a normal cpu, it might take a lot longer. As it trains, the accuracy will increase. After the first pass, I got 75.4% accuracy. After just 10 passes, it was already up to 91.7%. After 50 or so passes, it capped out around 95.5% accuracy and additional training didn’t help, so I stopped it there. Congrats! Our program can now recognize birds in images! Now that we have a trained neural network, we can use it! Here’s a simple script that takes in a single image file and predicts if it is a bird or not. But to really see how effective our network is, we need to test it with lots of images. The data set I created held back 15,000 images for validation. When I ran those 15,000 images through the network, it predicted the correct answer 95% of the time. That seems pretty good, right? Well... it depends! Our network claims to be 95% accurate. But the devil is in the details. That could mean all sorts of different things. For example, what if 5% of our training images were birds and the other 95% were not birds? A program that guessed “not a bird” every single time would be 95% accurate! But it would also be 100% useless. We need to look more closely at the numbers than just the overall accuracy. To judge how good a classification system really is, we need to look closely at how it failed, not just the percentage of the time that it failed. 
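The network-definition code referenced above ("Here's the code to define and train the network") was embedded in the original post and is not reproduced here. As a rough illustration only, a TFLearn convolutional network for 32x32 color images looks something like the sketch below; the layer sizes and training settings are assumptions, not necessarily the author's exact script.

```python
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression

# Convolution and max pooling repeated, then a fully-connected "decision" layer
network = input_data(shape=[None, 32, 32, 3])
network = conv_2d(network, 32, 3, activation='relu')
network = max_pool_2d(network, 2)
network = conv_2d(network, 64, 3, activation='relu')
network = conv_2d(network, 64, 3, activation='relu')
network = max_pool_2d(network, 2)
network = fully_connected(network, 512, activation='relu')
network = dropout(network, 0.5)
network = fully_connected(network, 2, activation='softmax')   # bird / not bird
network = regression(network, optimizer='adam',
                     loss='categorical_crossentropy', learning_rate=0.001)

model = tflearn.DNN(network)
# X, Y would be the training images and one-hot labels:
# model.fit(X, Y, n_epoch=50, shuffle=True, validation_set=0.1, show_metric=True)
```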
Instead of thinking about our predictions as “right” and “wrong”, let’s break them down into four separate categories — Using our validation set of 15,000 images, here’s how many times our predictions fell into each category: Why do we break our results down like this? Because not all mistakes are created equal. Imagine if we were writing a program to detect cancer from an MRI image. If we were detecting cancer, we’d rather have false positives than false negatives. False negatives would be the worse possible case — that’s when the program told someone they definitely didn’t have cancer but they actually did. Instead of just looking at overall accuracy, we calculate Precision and Recall metrics. Precision and Recall metrics give us a clearer picture of how well we did: This tells us that 97% of the time we guessed “Bird”, we were right! But it also tells us that we only found 90% of the actual birds in the data set. In other words, we might not find every bird but we are pretty sure about it when we do find one! Now that you know the basics of deep convolutional networks, you can try out some of the examples that come with tflearn to get your hands dirty with different neural network architectures. It even comes with built-in data sets so you don’t even have to find your own images. You also know enough now to start branching and learning about other areas of machine learning. Why not learn how to use algorithms to train computers how to play Atari games next? If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun Part 4, Part 5 and Part 6! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it. " Adam Geitgey,15.2K,13,https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78?source=tag_archive---------2----------------,Machine Learning is Fun! Part 4: Modern Face Recognition with Deep Learning,"Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 普通话, Русский, 한국어, Português, Tiếng Việt or Italiano. Have you noticed that Facebook has developed an uncanny ability to recognize your friends in your photographs? In the old days, Facebook used to make you to tag your friends in photos by clicking on them and typing in their name. Now as soon as you upload a photo, Facebook tags everyone for you like magic: This technology is called face recognition. Facebook’s algorithms are able to recognize your friends’ faces after they have been tagged only a few times. It’s pretty amazing technology — Facebook can recognize faces with 98% accuracy which is pretty much as good as humans can do! Let’s learn how modern face recognition works! But just recognizing your friends would be too easy. We can push this tech to the limit to solve a more challenging problem — telling Will Ferrell (famous actor) apart from Chad Smith (famous rock musician)! 
So far in Part 1, 2 and 3, we’ve used machine learning to solve isolated problems that have only one step — estimating the price of a house, generating new data based on existing data and telling if an image contains a certain object. All of those problems can be solved by choosing one machine learning algorithm, feeding in data, and getting the result. But face recognition is really a series of several related problems: As a human, your brain is wired to do all of this automatically and instantly. In fact, humans are too good at recognizing faces and end up seeing faces in everyday objects: Computers are not capable of this kind of high-level generalization (at least not yet...), so we have to teach them how to do each step in this process separately. We need to build a pipeline where we solve each step of face recognition separately and pass the result of the current step to the next step. In other words, we will chain together several machine learning algorithms: Let’s tackle this problem one step at a time. For each step, we’ll learn about a different machine learning algorithm. I’m not going to explain every single algorithm completely to keep this from turning into a book, but you’ll learn the main ideas behind each one and you’ll learn how you can build your own facial recognition system in Python using OpenFace and dlib. The first step in our pipeline is face detection. Obviously we need to locate the faces in a photograph before we can try to tell them apart! If you’ve used any camera in the last 10 years, you’ve probably seen face detection in action: Face detection is a great feature for cameras. When the camera can automatically pick out faces, it can make sure that all the faces are in focus before it takes the picture. But we’ll use it for a different purpose — finding the areas of the image we want to pass on to the next step in our pipeline. Face detection went mainstream in the early 2000's when Paul Viola and Michael Jones invented a way to detect faces that was fast enough to run on cheap cameras. However, much more reliable solutions exist now. We’re going to use a method invented in 2005 called Histogram of Oriented Gradients — or just HOG for short. To find faces in an image, we’ll start by making our image black and white because we don’t need color data to find faces: Then we’ll look at every single pixel in our image one at a time. For every single pixel, we want to look at the pixels that directly surrounding it: Our goal is to figure out how dark the current pixel is compared to the pixels directly surrounding it. Then we want to draw an arrow showing in which direction the image is getting darker: If you repeat that process for every single pixel in the image, you end up with every pixel being replaced by an arrow. These arrows are called gradients and they show the flow from light to dark across the entire image: This might seem like a random thing to do, but there’s a really good reason for replacing the pixels with gradients. If we analyze pixels directly, really dark images and really light images of the same person will have totally different pixel values. But by only considering the direction that brightness changes, both really dark images and really bright images will end up with the same exact representation. That makes the problem a lot easier to solve! But saving the gradient for every single pixel gives us way too much detail. We end up missing the forest for the trees. 
It would be better if we could just see the basic flow of lightness/darkness at a higher level so we could see the basic pattern of the image. To do this, we’ll break up the image into small squares of 16x16 pixels each. In each square, we’ll count up how many gradients point in each major direction (how many point up, point up-right, point right, etc...). Then we’ll replace that square in the image with the arrow directions that were the strongest. The end result is we turn the original image into a very simple representation that captures the basic structure of a face in a simple way: To find faces in this HOG image, all we have to do is find the part of our image that looks the most similar to a known HOG pattern that was extracted from a bunch of other training faces: Using this technique, we can now easily find faces in any image: If you want to try this step out yourself using Python and dlib, here’s code showing how to generate and view HOG representations of images. Whew, we isolated the faces in our image. But now we have to deal with the problem that faces turned different directions look totally different to a computer: To account for this, we will try to warp each picture so that the eyes and lips are always in the sample place in the image. This will make it a lot easier for us to compare faces in the next steps. To do this, we are going to use an algorithm called face landmark estimation. There are lots of ways to do this, but we are going to use the approach invented in 2014 by Vahid Kazemi and Josephine Sullivan. The basic idea is we will come up with 68 specific points (called landmarks) that exist on every face — the top of the chin, the outside edge of each eye, the inner edge of each eyebrow, etc. Then we will train a machine learning algorithm to be able to find these 68 specific points on any face: Here’s the result of locating the 68 face landmarks on our test image: Now that we know were the eyes and mouth are, we’ll simply rotate, scale and shear the image so that the eyes and mouth are centered as best as possible. We won’t do any fancy 3d warps because that would introduce distortions into the image. We are only going to use basic image transformations like rotation and scale that preserve parallel lines (called affine transformations): Now no matter how the face is turned, we are able to center the eyes and mouth are in roughly the same position in the image. This will make our next step a lot more accurate. If you want to try this step out yourself using Python and dlib, here’s the code for finding face landmarks and here’s the code for transforming the image using those landmarks. Now we are to the meat of the problem — actually telling faces apart. This is where things get really interesting! The simplest approach to face recognition is to directly compare the unknown face we found in Step 2 with all the pictures we have of people that have already been tagged. When we find a previously tagged face that looks very similar to our unknown face, it must be the same person. Seems like a pretty good idea, right? There’s actually a huge problem with that approach. A site like Facebook with billions of users and a trillion photos can’t possibly loop through every previous-tagged face to compare it to every newly uploaded picture. That would take way too long. They need to be able to recognize faces in milliseconds, not hours. What we need is a way to extract a few basic measurements from each face. 
Then we could measure our unknown face the same way and find the known face with the closest measurements. For example, we might measure the size of each ear, the spacing between the eyes, the length of the nose, etc. If you’ve ever watched a bad crime show like CSI, you know what I am talking about: Ok, so which measurements should we collect from each face to build our known face database? Ear size? Nose length? Eye color? Something else? It turns out that the measurements that seem obvious to us humans (like eye color) don’t really make sense to a computer looking at individual pixels in an image. Researchers have discovered that the most accurate approach is to let the computer figure out the measurements to collect itself. Deep learning does a better job than humans at figuring out which parts of a face are important to measure. The solution is to train a Deep Convolutional Neural Network (just like we did in Part 3). But instead of training the network to recognize pictures objects like we did last time, we are going to train it to generate 128 measurements for each face. The training process works by looking at 3 face images at a time: Then the algorithm looks at the measurements it is currently generating for each of those three images. It then tweaks the neural network slightly so that it makes sure the measurements it generates for #1 and #2 are slightly closer while making sure the measurements for #2 and #3 are slightly further apart: After repeating this step millions of times for millions of images of thousands of different people, the neural network learns to reliably generate 128 measurements for each person. Any ten different pictures of the same person should give roughly the same measurements. Machine learning people call the 128 measurements of each face an embedding. The idea of reducing complicated raw data like a picture into a list of computer-generated numbers comes up a lot in machine learning (especially in language translation). The exact approach for faces we are using was invented in 2015 by researchers at Google but many similar approaches exist. This process of training a convolutional neural network to output face embeddings requires a lot of data and computer power. Even with an expensive NVidia Telsa video card, it takes about 24 hours of continuous training to get good accuracy. But once the network has been trained, it can generate measurements for any face, even ones it has never seen before! So this step only needs to be done once. Lucky for us, the fine folks at OpenFace already did this and they published several trained networks which we can directly use. Thanks Brandon Amos and team! So all we need to do ourselves is run our face images through their pre-trained network to get the 128 measurements for each face. Here’s the measurements for our test image: So what parts of the face are these 128 numbers measuring exactly? It turns out that we have no idea. It doesn’t really matter to us. All that we care is that the network generates nearly the same numbers when looking at two different pictures of the same person. If you want to try this step yourself, OpenFace provides a lua script that will generate embeddings all images in a folder and write them to a csv file. You run it like this. This last step is actually the easiest step in the whole process. All we have to do is find the person in our database of known people who has the closest measurements to our test image. You can do that by using any basic machine learning classification algorithm. 
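As the next paragraph explains, a simple linear SVM over the 128 measurements is enough for this final matching step. Here is a minimal scikit-learn sketch, with random vectors standing in for real OpenFace embeddings:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)

# Pretend 128-dimensional embeddings for a few known, already-tagged faces
known_embeddings = rng.rand(60, 128)
known_names = np.array(['will_ferrell', 'chad_smith', 'jimmy_fallon'] * 20)

# Train a linear SVM that maps an embedding to a person's name
classifier = SVC(kernel='linear', probability=True)
classifier.fit(known_embeddings, known_names)

# At recognition time: run the new face through the embedding network,
# then ask the classifier whose measurements are the closest match.
new_face_embedding = rng.rand(1, 128)
print(classifier.predict(new_face_embedding)[0])
```

With real embeddings, this prediction step takes milliseconds, which is the whole point of reducing each face to 128 numbers first.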
No fancy deep learning tricks are needed. We’ll use a simple linear SVM classifier, but lots of classification algorithms could work. All we need to do is train a classifier that can take in the measurements from a new test image and tells which known person is the closest match. Running this classifier takes milliseconds. The result of the classifier is the name of the person! So let’s try out our system. First, I trained a classifier with the embeddings of about 20 pictures each of Will Ferrell, Chad Smith and Jimmy Falon: Then I ran the classifier on every frame of the famous youtube video of Will Ferrell and Chad Smith pretending to be each other on the Jimmy Fallon show: It works! And look how well it works for faces in different poses — even sideways faces! Let’s review the steps we followed: Now that you know how this all works, here’s instructions from start-to-finish of how run this entire face recognition pipeline on your own computer: UPDATE 4/9/2017: You can still follow the steps below to use OpenFace. However, I’ve released a new Python-based face recognition library called face_recognition that is much easier to install and use. So I’d recommend trying out face_recognition first instead of continuing below! I even put together a pre-configured virtual machine with face_recognition, OpenCV, TensorFlow and lots of other deep learning tools pre-installed. You can download and run it on your computer very easily. Give the virtual machine a shot if you don’t want to install all these libraries yourself! Original OpenFace instructions: If you liked this article, please consider signing up for my Machine Learning is Fun! newsletter: You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun Part 5! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it. " Adam Geitgey,10.4K,15,https://medium.com/@ageitgey/machine-learning-is-fun-part-2-a26a10b68df3?source=tag_archive---------3----------------,Machine Learning is Fun! Part 2 – Adam Geitgey – Medium,"Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in Italiano, Español, Français, Türkçe, Русский, 한국어 Português, فارسی, Tiếng Việt or 普通话. In Part 1, we said that Machine Learning is using generic algorithms to tell you something interesting about your data without writing any code specific to the problem you are solving. (If you haven’t already read part 1, read it now!). This time, we are going to see one of these generic algorithms do something really cool — create video game levels that look like they were made by humans. We’ll build a neural network, feed it existing Super Mario levels and watch new ones pop out! Just like Part 1, this guide is for anyone who is curious about machine learning but has no idea where to start. The goal is be accessible to anyone — which means that there’s a lot of generalizations and we skip lots of details. But who cares? If this gets anyone more interested in ML, then mission accomplished. Back in Part 1, we created a simple algorithm that estimated the value of a house based on its attributes. 
Given data about a house like this: We ended up with this simple estimation function: In other words, we estimated the value of the house by multiplying each of its attributes by a weight. Then we just added those numbers up to get the house’s value. Instead of using code, let’s represent that same function as a simple diagram: However this algorithm only works for simple problems where the result has a linear relationship with the input. What if the truth behind house prices isn’t so simple? For example, maybe the neighborhood matters a lot for big houses and small houses but doesn’t matter at all for medium-sized houses. How could we capture that kind of complicated detail in our model? To be more clever, we could run this algorithm multiple times with different of weights that each capture different edge cases: Now we have four different price estimates. Let’s combine those four price estimates into one final estimate. We’ll run them through the same algorithm again (but using another set of weights)! Our new Super Answer combines the estimates from our four different attempts to solve the problem. Because of this, it can model more cases than we could capture in one simple model. Let’s combine our four attempts to guess into one big diagram: This is a neural network! Each node knows how to take in a set of inputs, apply weights to them, and calculate an output value. By chaining together lots of these nodes, we can model complex functions. There’s a lot that I’m skipping over to keep this brief (including feature scaling and the activation function), but the most important part is that these basic ideas click: It’s just like LEGO! We can’t model much with one single LEGO block, but we can model anything if we have enough basic LEGO blocks to stick together: The neural network we’ve seen always returns the same answer when you give it the same inputs. It has no memory. In programming terms, it’s a stateless algorithm. In many cases (like estimating the price of house), that’s exactly what you want. But the one thing this kind of model can’t do is respond to patterns in data over time. Imagine I handed you a keyboard and asked you to write a story. But before you start, my job is to guess the very first letter that you will type. What letter should I guess? I can use my knowledge of English to increase my odds of guessing the right letter. For example, you will probably type a letter that is common at the beginning of words. If I looked at stories you wrote in the past, I could narrow it down further based on the words you usually use at the beginning of your stories. Once I had all that data, I could use it to build a neural network to model how likely it is that you would start with any given letter. Our model might look like this: But let’s make the problem harder. Let’s say I need to guess the next letter you are going to type at any point in your story. This is a much more interesting problem. Let’s use the first few words of Ernest Hemingway’s The Sun Also Rises as an example: What letter is going to come next? You probably guessed ’n’ — the word is probably going to be boxing. We know this based on the letters we’ve already seen in the sentence and our knowledge of common words in English. Also, the word ‘middleweight’ gives us an extra clue that we are talking about boxing. In other words, it’s easy to guess the next letter if we take into account the sequence of letters that came right before it and combine that with our knowledge of the rules of English. 
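Before adding memory to the network, it helps to see the "use the letters that came before" idea in its simplest possible form. The sketch below is not a neural network at all, just a frequency count of which character tends to follow each short chunk of text; the chunk length of 4 is an arbitrary choice for the example.

```python
from collections import Counter, defaultdict

def build_model(text, context_len=4):
    # Count which character follows each context_len-character chunk of text
    counts = defaultdict(Counter)
    for i in range(len(text) - context_len):
        counts[text[i:i + context_len]][text[i + context_len]] += 1
    return counts

def guess_next(counts, context, context_len=4):
    recent = context[-context_len:]        # only the most recent letters matter here
    if recent not in counts:
        return '?'
    return counts[recent].most_common(1)[0][0]

text = "robert cohn was once middleweight boxing champion of princeton"
model = build_model(text)
print(guess_next(model, "middleweight boxi"))   # -> 'n'
```

A recurrent neural network does something far more powerful, but the contract is the same: given what came before, predict what comes next.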
To solve this problem with a neural network, we need to add state to our model. Each time we ask our neural network for an answer, we also save a set of our intermediate calculations and re-use them the next time as part of our input. That way, our model will adjust its predictions based on the input that it has seen recently. Keeping track of state in our model makes it possible to not just predict the most likely first letter in the story, but to predict the most likely next letter given all previous letters. This is the basic idea of a Recurrent Neural Network. We are updating the network each time we use it. This allows it to update its predictions based on what it saw most recently. It can even model patterns over time as long as we give it enough of a memory. Predicting the next letter in a story might seem pretty useless. What’s the point? One cool use might be auto-predict for a mobile phone keyboard: But what if we took this idea to the extreme? What if we asked the model to predict the next most likely character over and over — forever? We’d be asking it to write a complete story for us! We saw how we could guess the next letter in Hemingway’s sentence. Let’s try generating a whole story in the style of Hemingway. To do this, we are going to use the Recurrent Neural Network implementation that Andrej Karpathy wrote. Andrej is a Deep-Learning researcher at Stanford and he wrote an excellent introduction to generating text with RNNs, You can view all the code for the model on github. We’ll create our model from the complete text of The Sun Also Rises — 362,239 characters using 84 unique letters (including punctuation, uppercase/lowercase, etc). This data set is actually really small compared to typical real-world applications. To generate a really good model of Hemingway’s style, it would be much better to have at several times as much sample text. But this is good enough to play around with as an example. As we just start to train the RNN, it’s not very good at predicting letters. Here’s what it generates after a 100 loops of training: You can see that it has figured out that sometimes words have spaces between them, but that’s about it. After about 1000 iterations, things are looking more promising: The model has started to identify the patterns in basic sentence structure. It’s adding periods at the ends of sentences and even quoting dialog. A few words are recognizable, but there’s also still a lot of nonsense. But after several thousand more training iterations, it looks pretty good: At this point, the algorithm has captured the basic pattern of Hemingway’s short, direct dialog. A few sentences even sort of make sense. Compare that with some real text from the book: Even by only looking for patterns one character at a time, our algorithm has reproduced plausible-looking prose with proper formatting. That is kind of amazing! We don’t have to generate text completely from scratch, either. We can seed the algorithm by supplying the first few letters and just let it find the next few letters. For fun, let’s make a fake book cover for our imaginary book by generating a new author name and a new title using the seed text of “Er”, “He”, and “The S”: Not bad! But the really mind-blowing part is that this algorithm can figure out patterns in any sequence of data. It can easily generate real-looking recipes or fake Obama speeches. But why limit ourselves human language? We can apply this same idea to any kind of sequential data that has a pattern. 
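The "predict the next most likely character over and over" loop described above can be sketched independently of any particular model. In the snippet below, predict_next_char_probs and the toy model are hypothetical stand-ins for a trained RNN; only the sampling loop is the point.

```python
import random

def generate(predict_next_char_probs, seed, length=200):
    """Repeatedly ask the model for next-character probabilities and sample one
    character, feeding it back in as part of the context for the next step."""
    text = seed
    for _ in range(length):
        probs = predict_next_char_probs(text)        # {char: probability}
        chars, weights = zip(*probs.items())
        text += random.choices(chars, weights=weights, k=1)[0]
    return text

# Toy stand-in "model": always slightly prefers vowels and spaces
def toy_model(context):
    return {'a': 0.2, 'e': 0.2, ' ': 0.2, 't': 0.15, 'h': 0.15, '.': 0.1}

print(generate(toy_model, seed="Er", length=40))
```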
In 2015, Nintendo released Super Mario MakerTM for the Wii U gaming system. This game lets you draw out your own Super Mario Brothers levels on the gamepad and then upload them to the internet so you friends can play through them. You can include all the classic power-ups and enemies from the original Mario games in your levels. It’s like a virtual LEGO set for people who grew up playing Super Mario Brothers. Can we use the same model that generated fake Hemingway text to generate fake Super Mario Brothers levels? First, we need a data set for training our model. Let’s take all the outdoor levels from the original Super Mario Brothers game released in 1985: This game has 32 levels and about 70% of them have the same outdoor style. So we’ll stick to those. To get the designs for each level, I took an original copy of the game and wrote a program to pull the level designs out of the game’s memory. Super Mario Bros. is a 30-year-old game and there are lots of resources online that help you figure out how the levels were stored in the game’s memory. Extracting level data from an old video game is a fun programming exercise that you should try sometime. Here’s the first level from the game (which you probably remember if you ever played it): If we look closely, we can see the level is made of a simple grid of objects: We could just as easily represent this grid as a sequence of characters with one character representing each object: We’ve replaced each object in the level with a letter: ...and so on, using a different letter for each different kind of object in the level. I ended up with text files that looked like this: Looking at the text file, you can see that Mario levels don’t really have much of a pattern if you read them line-by-line: The patterns in a level really emerge when you think of the level as a series of columns: So in order for the algorithm to find the patterns in our data, we need to feed the data in column-by-column. Figuring out the most effective representation of your input data (called feature selection) is one of the keys of using machine learning algorithms well. To train the model, I needed to rotate my text files by 90 degrees. This made sure the characters were fed into the model in an order where a pattern would more easily show up: Just like we saw when creating the model of Hemingway’s prose, a model improves as we train it. After a little training, our model is generating junk: It sort of has an idea that ‘-’s and ‘=’s should show up a lot, but that’s about it. It hasn’t figured out the pattern yet. After several thousand iterations, it’s starting to look like something: The model has almost figured out that each line should be the same length. It has even started to figure out some of the logic of Mario: The pipes in mario are always two blocks wide and at least two blocks high, so the “P”s in the data should appear in 2x2 clusters. That’s pretty cool! With a lot more training, the model gets to the point where it generates perfectly valid data: Let’s sample an entire level’s worth of data from our model and rotate it back horizontal: This data looks great! There are several awesome things to notice: Finally, let’s take this level and recreate it in Super Mario Maker: Play it yourself! If you have Super Mario Maker, you can play this level by bookmarking it online or by looking it up using level code 4AC9–0000–0157-F3C3. 
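The "rotate the level text 90 degrees so the model reads it column by column" step mentioned above is easy to sketch. The toy level below is made up, not the extracted game data, and a simple transpose stands in for the rotation:

```python
def rotate_level(level_rows):
    """Turn a list of rows into a list of columns (left to right), so a
    character model sees the level one vertical slice at a time."""
    return [''.join(col) for col in zip(*level_rows)]

level = [
    "--------",
    "--------",
    "---PP---",
    "===PP===",
]
for column in rotate_level(level):
    print(column)
# Each printed line is one vertical slice of the level,
# e.g. '--PP' for the columns that contain the pipe.
```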
The recurrent neural network algorithm we used to train our model is the same kind of algorithm used by real-world companies to solve hard problems like speech detection and language translation. What makes our model a ‘toy’ instead of cutting-edge is that our model is generated from very little data. There just aren’t enough levels in the original Super Mario Brothers game to provide enough data for a really good model. If we could get access to the hundreds of thousands of user-created Super Mario Maker levels that Nintendo has, we could make an amazing model. But we can’t — because Nintendo won’t let us have them. Big companies don’t give away their data for free. As machine learning becomes more important in more industries, the difference between a good program and a bad program will be how much data you have to train your models. That’s why companies like Google and Facebook need your data so badly! For example, Google recently open sourced TensorFlow, its software toolkit for building large-scale machine learning applications. It was a pretty big deal that Google gave away such important, capable technology for free. This is the same stuff that powers Google Translate. But without Google’s massive trove of data in every language, you can’t create a competitor to Google Translate. Data is what gives Google its edge. Think about that the next time you open up your Google Maps Location History or Facebook Location History and notice that it stores every place you’ve ever been. In machine learning, there’s never a single way to solve a problem. You have limitless options when deciding how to pre-process your data and which algorithms to use. Often combining multiple approaches will give you better results than any single approach. Readers have sent me links to other interesting approaches to generating Super Mario levels: If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun Part 3! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it. " Arthur Juliani,9K,6,https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0?source=tag_archive---------4----------------,Simple Reinforcement Learning with Tensorflow Part 0: Q-Learning with Tables and Neural Networks,"For this tutorial in my Reinforcement Learning series, we are going to be exploring a family of RL algorithms called Q-Learning algorithms. These are a little different than the policy-based algorithms that will be looked at in the the following tutorials (Parts 1–3). Instead of starting with a complex and unwieldy deep neural network, we will begin by implementing a simple lookup-table version of the algorithm, and then show how to implement a neural-network equivalent using Tensorflow. Given that we are going back to basics, it may be best to think of this as Part-0 of the series. 
It will hopefully give an intuition into what is really happening in Q-Learning that we can then build on going forward when we eventually combine the policy gradient and Q-learning approaches to build state-of-the-art RL agents (If you are more interested in Policy Networks, or already have a grasp on Q-Learning, feel free to start the tutorial series here instead). Unlike policy gradient methods, which attempt to learn functions which directly map an observation to an action, Q-Learning attempts to learn the value of being in a given state, and taking a specific action there. While both approaches ultimately allow us to take intelligent actions given a situation, the means of getting to that action differ significantly. You may have heard about DeepQ-Networks which can play Atari Games. These are really just larger and more complex implementations of the Q-Learning algorithm we are going to discuss here. For this tutorial we are going to be attempting to solve the FrozenLake environment from the OpenAI gym. For those unfamiliar, the OpenAI gym provides an easy way for people to experiment with their learning agents in an array of provided toy games. The FrozenLake environment consists of a 4x4 grid of blocks, each one either being the start block, the goal block, a safe frozen block, or a dangerous hole. The objective is to have an agent learn to navigate from the start to the goal without moving onto a hole. At any given time the agent can choose to move either up, down, left, or right. The catch is that there is a wind which occasionally blows the agent onto a space they didn’t choose. As such, perfect performance every time is impossible, but learning to avoid the holes and reach the goal are certainly still doable. The reward at every step is 0, except for entering the goal, which provides a reward of 1. Thus, we will need an algorithm that learns long-term expected rewards. This is exactly what Q-Learning is designed to provide. In it’s simplest implementation, Q-Learning is a table of values for every state (row) and action (column) possible in the environment. Within each cell of the table, we learn a value for how good it is to take a given action within a given state. In the case of the FrozenLake environment, we have 16 possible states (one for each block), and 4 possible actions (the four directions of movement), giving us a 16x4 table of Q-values. We start by initializing the table to be uniform (all zeros), and then as we observe the rewards we obtain for various actions, we update the table accordingly. We make updates to our Q-table using something called the Bellman equation, which states that the expected long-term reward for a given action is equal to the immediate reward from the current action combined with the expected reward from the best future action taken at the following state. In this way, we reuse our own Q-table when estimating how to update our table for future actions! In equation form, the rule looks like this: This says that the Q-value for a given state (s) and action (a) should represent the current reward (r) plus the maximum discounted (γ) future reward expected according to our own table for the next state (s’) we would end up in. The discount variable allows us to decide how important the possible future rewards are compared to the present reward. By updating in this way, the table slowly begins to obtain accurate measures of the expected future reward for a given action in a given state. 
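The table update described above is only a couple of lines of code. Here is a minimal sketch of the idea, assuming the OpenAI gym API of the time, where reset() returns a plain integer state and step() returns four values; the learning rate, discount and exploration noise are illustrative choices rather than the tuned hyperparameters referred to below.

```python
import numpy as np
import gym

env = gym.make('FrozenLake-v0')

# One row per state (16), one column per action (4), initialized to all zeros.
Q = np.zeros([env.observation_space.n, env.action_space.n])

lr = 0.8      # learning rate: how strongly each new observation updates the table
gamma = 0.95  # discount: how much future reward matters compared to immediate reward

for episode in range(2000):
    s = env.reset()
    done = False
    while not done:
        # Pick the highest-valued action, plus noise that shrinks over time so we still explore.
        noise = np.random.randn(env.action_space.n) / (episode + 1)
        a = np.argmax(Q[s, :] + noise)
        s_next, reward, done, _ = env.step(a)
        # Bellman update: current reward plus the discounted best future value.
        Q[s, a] += lr * (reward + gamma * np.max(Q[s_next, :]) - Q[s, a])
        s = s_next
```

Every cell of the table starts at zero and slowly fills in with the expected long-term reward for taking that action in that state.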
Below is a Python walkthrough of the Q-Table algorithm implemented in the FrozenLake environment: (Thanks to Praneet D for finding the optimal hyperparameters for this approach) Now, you may be thinking: tables are great, but they don’t really scale, do they? While it is easy to have a 16x4 table for a simple grid world, the number of possible states in any modern game or real-world environment is nearly infinitely larger. For most interesting problems, tables simply don’t work. We instead need some way to take a description of our state, and produce Q-values for actions without a table: that is where neural networks come in. By acting as a function approximator, we can take any number of possible states that can be represented as a vector and learn to map them to Q-values. In the case of the FrozenLake example, we will be using a one-layer network which takes the state encoded in a one-hot vector (1x16), and produces a vector of 4 Q-values, one for each action. Such a simple network acts kind of like a glorified table, with the network weights serving as the old cells. The key difference is that we can easily expand the Tensorflow network with added layers, activation functions, and different input types, whereas all that is impossible with a regular table. The method of updating is a little different as well. Instead of directly updating our table, with a network we will be using backpropagation and a loss function. Our loss function will be sum-of-squares loss, where the difference between the current predicted Q-values, and the “target” value is computed and the gradients passed through the network. In this case, our Q-target for the chosen action is the equivalent to the Q-value computed in equation 1 above. Below is the Tensorflow walkthrough of implementing our simple Q-Network: While the network learns to solve the FrozenLake problem, it turns out it doesn’t do so quite as efficiently as the Q-Table. While neural networks allow for greater flexibility, they do so at the cost of stability when it comes to Q-Learning. There are a number of possible extensions to our simple Q-Network which allow for greater performance and more robust learning. Two tricks in particular are referred to as Experience Replay and Freezing Target Networks. Those improvements and other tweaks were the key to getting Atari-playing Deep Q-Networks, and we will be exploring those additions in the future. For more info on the theory behind Q-Learning, see this great post by Tambet Matiisen. I hope this tutorial has been helpful for those curious about how to implement simple Q-Learning algorithms! If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated! If you’d like to follow my work on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on Twitter @awjliani. More from my Simple Reinforcement Learning with Tensorflow series: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @Unity3D & Cognitive Neuroscience PhD student. 
Exploring frontier technology through the lens of artificial intelligence, data science, and the shape of things to come " Adam Geitgey,6.8K,11,https://medium.com/@ageitgey/machine-learning-is-fun-part-6-how-to-do-speech-recognition-with-deep-learning-28293c162f7a?source=tag_archive---------5----------------,Machine Learning is Fun Part 6: How to do Speech Recognition with Deep Learning,"Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 普通话, 한국어, Tiếng Việt or Русский. Speech recognition is invading our lives. It’s built into our phones, our game consoles and our smart watches. It’s even automating our homes. For just $50, you can get an Amazon Echo Dot — a magic box that allows you to order pizza, get a weather report or even buy trash bags — just by speaking out loud: The Echo Dot has been so popular this holiday season that Amazon can’t seem to keep them in stock! But speech recognition has been around for decades, so why is it just now hitting the mainstream? The reason is that deep learning finally made speech recognition accurate enough to be useful outside of carefully controlled environments. Andrew Ng has long predicted that as speech recognition goes from 95% accurate to 99% accurate, it will become a primary way that we interact with computers. The idea is that this 4% accuracy gap is the difference between annoyingly unreliable and incredibly useful. Thanks to Deep Learning, we’re finally cresting that peak. Let’s learn how to do speech recognition with deep learning! If you know how neural machine translation works, you might guess that we could simply feed sound recordings into a neural network and train it to produce text: That’s the holy grail of speech recognition with deep learning, but we aren’t quite there yet (at least at the time that I wrote this — I bet that we will be in a couple of years). The big problem is that speech varies in speed. One person might say “hello!” very quickly and another person might say “heeeelllllllllllllooooo!” very slowly, producing a much longer sound file with much more data. But both sound files should be recognized as exactly the same text — “hello!” Automatically aligning audio files of various lengths to a fixed-length piece of text turns out to be pretty hard. To work around this, we have to use some special tricks and extra processing in addition to a deep neural network. Let’s see how it works! The first step in speech recognition is obvious — we need to feed sound waves into a computer. In Part 3, we learned how to take an image and treat it as an array of numbers so that we can feed it directly into a neural network for image recognition: But sound is transmitted as waves. How do we turn sound waves into numbers? Let’s use this sound clip of me saying “Hello”: Sound waves are one-dimensional. At every moment in time, they have a single value based on the height of the wave. Let’s zoom in on one tiny part of the sound wave and take a look: To turn this sound wave into numbers, we just record the height of the wave at equally-spaced points: This is called sampling. We are taking a reading thousands of times a second and recording a number representing the height of the sound wave at that point in time. That’s basically all an uncompressed .wav audio file is. “CD Quality” audio is sampled at 44.1khz (44,100 readings per second). 
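If you want to see those numbers for yourself, pulling the raw samples out of an uncompressed .wav file only takes a few lines. Here is a minimal sketch using Python's standard wave module and numpy, assuming a 16-bit mono file (the file name is just a placeholder):

```python
import wave
import numpy as np

# "hello.wav" stands in for any uncompressed, 16-bit, mono .wav file.
wav_file = wave.open("hello.wav", "rb")
sample_rate = wav_file.getframerate()   # e.g. 44100 or 16000 readings per second
num_samples = wav_file.getnframes()
raw_bytes = wav_file.readframes(num_samples)
wav_file.close()

# Each 16-bit sample is one reading of the height of the sound wave.
samples = np.frombuffer(raw_bytes, dtype=np.int16)

print(sample_rate, "samples per second")
print("first 20 samples:", samples[:20])
```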
But for speech recognition, a sampling rate of 16khz (16,000 samples per second) is enough to cover the frequency range of human speech. Lets sample our “Hello” sound wave 16,000 times per second. Here’s the first 100 samples: You might be thinking that sampling is only creating a rough approximation of the original sound wave because it’s only taking occasional readings. There’s gaps in between our readings so we must be losing data, right? But thanks to the Nyquist theorem, we know that we can use math to perfectly reconstruct the original sound wave from the spaced-out samples — as long as we sample at least twice as fast as the highest frequency we want to record. I mention this only because nearly everyone gets this wrong and assumes that using higher sampling rates always leads to better audio quality. It doesn’t. We now have an array of numbers with each number representing the sound wave’s amplitude at 1/16,000th of a second intervals. We could feed these numbers right into a neural network. But trying to recognize speech patterns by processing these samples directly is difficult. Instead, we can make the problem easier by doing some pre-processing on the audio data. Let’s start by grouping our sampled audio into 20-millisecond-long chunks. Here’s our first 20 milliseconds of audio (i.e., our first 320 samples): Plotting those numbers as a simple line graph gives us a rough approximation of the original sound wave for that 20 millisecond period of time: This recording is only 1/50th of a second long. But even this short recording is a complex mish-mash of different frequencies of sound. There’s some low sounds, some mid-range sounds, and even some high-pitched sounds sprinkled in. But taken all together, these different frequencies mix together to make up the complex sound of human speech. To make this data easier for a neural network to process, we are going to break apart this complex sound wave into it’s component parts. We’ll break out the low-pitched parts, the next-lowest-pitched-parts, and so on. Then by adding up how much energy is in each of those frequency bands (from low to high), we create a fingerprint of sorts for this audio snippet. Imagine you had a recording of someone playing a C Major chord on a piano. That sound is the combination of three musical notes— C, E and G — all mixed together into one complex sound. We want to break apart that complex sound into the individual notes to discover that they were C, E and G. This is the exact same idea. We do this using a mathematic operation called a Fourier transform. It breaks apart the complex sound wave into the simple sound waves that make it up. Once we have those individual sound waves, we add up how much energy is contained in each one. The end result is a score of how important each frequency range is, from low pitch (i.e. bass notes) to high pitch. Each number below represents how much energy was in each 50hz band of our 20 millisecond audio clip: But this is a lot easier to see when you draw this as a chart: If we repeat this process on every 20 millisecond chunk of audio, we end up with a spectrogram (each column from left-to-right is one 20ms chunk): A spectrogram is cool because you can actually see musical notes and other pitch patterns in audio data. A neural network can find patterns in this kind of data more easily than raw sound waves. So this is the data representation we’ll actually feed into our neural network. 
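Both of those steps, chopping the audio into 20 millisecond chunks and measuring the energy in each frequency band, are easy to sketch with numpy. This is a bare-bones illustration of the idea rather than a production feature pipeline (real systems usually add mel-scale filter banks or MFCCs on top of the raw Fourier transform):

```python
import numpy as np

def simple_spectrogram(samples, sample_rate=16000, chunk_ms=20):
    """Turn raw audio samples into a grid of frequency-band energies.

    Each row is one 20 ms chunk of audio; each column is how much energy that
    chunk contains in one frequency band, from low pitch to high pitch.
    """
    chunk_size = int(sample_rate * chunk_ms / 1000)   # 320 samples at 16 kHz
    num_chunks = len(samples) // chunk_size
    rows = []
    for i in range(num_chunks):
        chunk = samples[i * chunk_size:(i + 1) * chunk_size]
        # Fourier transform: split the chunk into its component frequencies,
        # then keep the magnitude (energy) of each frequency band.
        energies = np.abs(np.fft.rfft(chunk))
        rows.append(energies)
    return np.array(rows)

# Usage: pass in the samples array from the .wav-reading sketch above.
# spectrogram = simple_spectrogram(samples.astype(np.float32))
```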
Now that we have our audio in a format that’s easy to process, we will feed it into a deep neural network. The input to the neural network will be 20 millisecond audio chunks. For each little audio slice, it will try to figure out the letter that corresponds to the sound currently being spoken. We’ll use a recurrent neural network — that is, a neural network that has a memory that influences future predictions. That’s because each letter it predicts should affect the likelihood of the next letter it will predict too. For example, if we have said “HEL” so far, it’s very likely we will say “LO” next to finish out the word “Hello”. It’s much less likely that we will say something unpronounceable next like “XYZ”. So having that memory of previous predictions helps the neural network make more accurate predictions going forward. After we run our entire audio clip through the neural network (one chunk at a time), we’ll end up with a mapping of each audio chunk to the letters most likely spoken during that chunk. Here’s what that mapping looks like for me saying “Hello”: Our neural net is predicting that one likely thing I said was “HHHEE_LL_LLLOOO”. But it also thinks that it was possible that I said “HHHUU_LL_LLLOOO” or even “AAAUU_LL_LLLOOO”. We have some steps we follow to clean up this output. First, we’ll replace any repeated characters with a single character: Then we’ll remove any blanks: That leaves us with three possible transcriptions — “Hello”, “Hullo” and “Aullo”. If you say them out loud, all of these sound similar to “Hello”. Because it’s predicting one character at a time, the neural network will come up with these very sounded-out transcriptions. For example if you say “He would not go”, it might give one possible transcription as “He wud net go”. The trick is to combine these pronunciation-based predictions with likelihood scores based on a large database of written text (books, news articles, etc). You throw out transcriptions that seem the least likely to be real and keep the transcription that seems the most realistic. Of our possible transcriptions “Hello”, “Hullo” and “Aullo”, obviously “Hello” will appear more frequently in a database of text (not to mention in our original audio-based training data) and thus is probably correct. So we’ll pick “Hello” as our final transcription instead of the others. Done! You might be thinking “But what if someone says ‘Hullo’? It’s a valid word. Maybe ‘Hello’ is the wrong transcription!” Of course it is possible that someone actually said “Hullo” instead of “Hello”. But a speech recognition system like this (trained on American English) will basically never produce “Hullo” as the transcription. It’s just such an unlikely thing for a user to say compared to “Hello” that it will always think you are saying “Hello” no matter how much you emphasize the ‘U’ sound. Try it out! If your phone is set to American English, try to get your phone’s digital assistant to recognize the word “Hullo.” You can’t! It refuses! It will always understand it as “Hello.” Not recognizing “Hullo” is a reasonable behavior, but sometimes you’ll find annoying cases where your phone just refuses to understand something valid you are saying. That’s why these speech recognition models are always being retrained with more data to fix these edge cases. One of the coolest things about machine learning is how simple it sometimes seems. 
You get a bunch of data, feed it into a machine learning algorithm, and then magically you have a world-class AI system running on your gaming laptop’s video card... Right? That sort of true in some cases, but not for speech. Recognizing speech is a hard problem. You have to overcome almost limitless challenges: bad quality microphones, background noise, reverb and echo, accent variations, and on and on. All of these issues need to be present in your training data to make sure the neural network can deal with them. Here’s another example: Did you know that when you speak in a loud room you unconsciously raise the pitch of your voice to be able to talk over the noise? Humans have no problem understanding you either way, but neural networks need to be trained to handle this special case. So you need training data with people yelling over noise! To build a voice recognition system that performs on the level of Siri, Google Now!, or Alexa, you will need a lot of training data — far more data than you can likely get without hiring hundreds of people to record it for you. And since users have low tolerance for poor quality voice recognition systems, you can’t skimp on this. No one wants a voice recognition system that works 80% of the time. For a company like Google or Amazon, hundreds of thousands of hours of spoken audio recorded in real-life situations is gold. That’s the single biggest thing that separates their world-class speech recognition system from your hobby system. The whole point of putting Google Now! and Siri on every cell phone for free or selling $50 Alexa units that have no subscription fee is to get you to use them as much as possible. Every single thing you say into one of these systems is recorded forever and used as training data for future versions of speech recognition algorithms. That’s the whole game! Don’t believe me? If you have an Android phone with Google Now!, click here to listen to actual recordings of yourself saying every dumb thing you’ve ever said into it: So if you are looking for a start-up idea, I wouldn’t recommend trying to build your own speech recognition system to compete with Google. Instead, figure out a way to get people to give you recordings of themselves talking for hours. The data can be your product instead. If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun! Part 7! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it. " Adam Geitgey,5.8K,16,https://medium.com/@ageitgey/machine-learning-is-fun-part-5-language-translation-with-deep-learning-and-the-magic-of-sequences-2ace0acca0aa?source=tag_archive---------6----------------,Machine Learning is Fun Part 5: Language Translation with Deep Learning and the Magic of Sequences,"Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 普通话, Русский, 한국어, Tiếng Việt or Italiano. 
We all know and love Google Translate, the website that can instantly translate between 100 different human languages as if by magic. It is even available on our phones and smartwatches: The technology behind Google Translate is called Machine Translation. It has changed the world by allowing people to communicate when it wouldn’t otherwise be possible. But we all know that high school students have been using Google Translate to... umm... assist with their Spanish homework for 15 years. Isn’t this old news? It turns out that over the past two years, deep learning has totally rewritten our approach to machine translation. Deep learning researchers who know almost nothing about language translation are throwing together relatively simple machine learning solutions that are beating the best expert-built language translation systems in the world. The technology behind this breakthrough is called sequence-to-sequence learning. It’s a very powerful technique that can be used to solve many kinds of problems. After we see how it is used for translation, we’ll also learn how the exact same algorithm can be used to write AI chat bots and describe pictures. Let’s go! So how do we program a computer to translate human language? The simplest approach is to replace every word in a sentence with the translated word in the target language. Here’s a simple example of translating from Spanish to English word-by-word: This is easy to implement because all you need is a dictionary to look up each word’s translation. But the results are bad because it ignores grammar and context. So the next thing you might do is start adding language-specific rules to improve the results. For example, you might translate common two-word phrases as a single group. And you might swap the order of nouns and adjectives since they usually appear in reverse order in Spanish from how they appear in English: That worked! If we just keep adding more rules until we can handle every part of grammar, our program should be able to translate any sentence, right? This is how the earliest machine translation systems worked. Linguists came up with complicated rules and programmed them in one-by-one. Some of the smartest linguists in the world labored for years during the Cold War to create translation systems as a way to interpret Russian communications more easily. Unfortunately this only worked for simple, plainly-structured documents like weather reports. It didn’t work reliably for real-world documents. The problem is that human language doesn’t follow a fixed set of rules. Human languages are full of special cases, regional variations, and just flat out rule-breaking. The way we speak English is more influenced by who invaded who hundreds of years ago than it is by someone sitting down and defining grammar rules. After the failure of rule-based systems, new translation approaches were developed using models based on probability and statistics instead of grammar rules. Building a statistics-based translation system requires lots of training data where the exact same text is translated into at least two languages. This double-translated text is called parallel corpora. In the same way that the Rosetta Stone was used by scientists in the 1800s to figure out Egyptian hieroglyphs from Greek, computers can use parallel corpora to guess how to convert text from one language to another. Luckily, there’s lots of double-translated text already sitting around in strange places. For example, the European Parliament translates their proceedings into 21 languages. 
So researchers often use that data to help build translation systems. The fundamental difference with statistical translation systems is that they don’t try to generate one exact translation. Instead, they generate thousands of possible translations and then they rank those translations by how likely each is to be correct. They estimate how “correct” something is by how similar it is to the training data. Here’s how it works: First, we break up our sentence into simple chunks that can each be easily translated: Next, we will translate each of these chunks by finding all the ways humans have translated those same chunks of words in our training data. It’s important to note that we are not just looking up these chunks in a simple translation dictionary. Instead, we are seeing how actual people translated these same chunks of words in real-world sentences. This helps us capture all of the different ways they can be used in different contexts: Some of these possible translations are used more frequently than others. Based on how frequently each translation appears in our training data, we can give it a score. For example, it’s much more common for someone to say “Quiero” to mean “I want” than to mean “I try.” So we can use how frequently “Quiero” was translated to “I want” in our training data to give that translation more weight than a less frequent translation. Next, we will use every possible combination of these chunks to generate a bunch of possible sentences. Just from the chunk translations we listed in Step 2, we can already generate nearly 2,500 different variations of our sentence by combining the chunks in different ways. Here are some examples: But in a real-world system, there will be even more possible chunk combinations because we’ll also try different orderings of words and different ways of chunking the sentence: Now we need to scan through all of these generated sentences to find the one that sounds the “most human.” To do this, we compare each generated sentence to millions of real sentences from books and news stories written in English. The more English text we can get our hands on, the better. Take this possible translation: It’s likely that no one has ever written a sentence like this in English, so it would not be very similar to any sentences in our data set. We’ll give this possible translation a low probability score. But look at this possible translation: This sentence will be similar to something in our training set, so it will get a high probability score. After trying all possible sentences, we’ll pick the sentence that has the most likely chunk translations while also being the most similar overall to real English sentences. Our final translation would be “I want to go to the prettiest beach.” Not bad! Statistical machine translation systems perform much better than rule-based systems if you give them enough training data. Franz Josef Och improved on these ideas and used them to build Google Translate in the early 2000s. Machine Translation was finally available to the world. In the early days, it was surprising to everyone that the “dumb” approach to translating based on probability worked better than rule-based systems designed by linguists. This led to a (somewhat mean) saying among researchers in the 80s: Statistical machine translation systems work well, but they are complicated to build and maintain. Every new pair of languages you want to translate requires experts to tweak and tune a new multi-step translation pipeline. 
Because it is so much work to build these different pipelines, trade-offs have to be made. If you are asking Google to translate Georgian to Telugu, it has to internally translate it into English as an intermediate step because there aren’t enough Georgian-to-Telugu translations happening to justify investing heavily in that language pair. And it might do that translation using a less advanced translation pipeline than if you had asked it for the more common choice of French-to-English. Wouldn’t it be cool if we could have the computer do all that annoying development work for us? The holy grail of machine translation is a black box system that learns how to translate by itself — just by looking at training data. With Statistical Machine Translation, humans are still needed to build and tweak the multi-step statistical models. In 2014, KyungHyun Cho’s team made a breakthrough. They found a way to apply deep learning to build this black box system. Their deep learning model takes in a parallel corpus and uses it to learn how to translate between those two languages without any human intervention. Two big ideas make this possible — recurrent neural networks and encodings. By combining these two ideas in a clever way, we can build a self-learning translation system. We’ve already talked about recurrent neural networks in Part 2, but let’s quickly review. A regular (non-recurrent) neural network is a generic machine learning algorithm that takes in a list of numbers and calculates a result (based on previous training). Neural networks can be used as a black box to solve lots of problems. For example, we can use a neural network to calculate the approximate value of a house based on attributes of that house: But like most machine learning algorithms, neural networks are stateless. You pass in a list of numbers and the neural network calculates a result. If you pass in those same numbers again, it will always calculate the same result. It has no memory of past calculations. In other words, 2 + 2 always equals 4. A recurrent neural network (or RNN for short) is a slightly tweaked version of a neural network where the previous state of the neural network is one of the inputs to the next calculation. This means that previous calculations change the results of future calculations! Why in the world would we want to do this? Shouldn’t 2 + 2 always equal 4 no matter what we last calculated? This trick allows neural networks to learn patterns in a sequence of data. For example, you can use it to predict the next most likely word in a sentence based on the first few words: RNNs are useful any time you want to learn patterns in data. Because human language is just one big, complicated pattern, RNNs are increasingly used in many areas of natural language processing. If you want to learn more about RNNs, you can read Part 2 where we used one to generate a fake Ernest Hemingway book and then used another one to generate fake Super Mario Brothers levels. The other idea we need to review is Encodings. We talked about encodings in Part 4 as part of face recognition. To explain encodings, let’s take a slight detour into how we can tell two different people apart with a computer. When you are trying to tell two faces apart with a computer, you collect different measurements from each face and use those measurements to compare faces. For example, we might measure the size of each ear or the spacing between the eyes and compare those measurements from two pictures to see if they are the same person. 
You’re probably already familiar with this idea from watching any primetime detective show like CSI: The idea of turning a face into a list of measurements is an example of an encoding. We are taking raw data (a picture of a face) and turning it into a list of measurements that represent it (the encoding). But like we saw in Part 4, we don’t have to come up with a specific list of facial features to measure ourselves. Instead, we can use a neural network to generate measurements from a face. The computer can do a better job than us in figuring out which measurements are best able to differentiate two similar people: This is our encoding. It lets us represent something very complicated (a picture of a face) with something simple (128 numbers). Now comparing two different faces is much easier because we only have to compare these 128 numbers for each face instead of comparing full images. Guess what? We can do the same thing with sentences! We can come up with an encoding that represents every possible different sentence as a series of unique numbers: To generate this encoding, we’ll feed the sentence into the RNN, one word at time. The final result after the last word is processed will be the values that represent the entire sentence: Great, so now we have a way to represent an entire sentence as a set of unique numbers! We don’t know what each number in the encoding means, but it doesn’t really matter. As long as each sentence is uniquely identified by it’s own set of numbers, we don’t need to know exactly how those numbers were generated. Ok, so we know how to use an RNN to encode a sentence into a set of unique numbers. How does that help us? Here’s where things get really cool! What if we took two RNNs and hooked them up end-to-end? The first RNN could generate the encoding that represents a sentence. Then the second RNN could take that encoding and just do the same logic in reverse to decode the original sentence again: Of course being able to encode and then decode the original sentence again isn’t very useful. But what if (and here’s the big idea!) we could train the second RNN to decode the sentence into Spanish instead of English? We could use our parallel corpora training data to train it to do that: And just like that, we have a generic way of converting a sequence of English words into an equivalent sequence of Spanish words! This is a powerful idea: Note that we glossed over some things that are required to make this work with real-world data. For example, there’s additional work you have to do to deal with different lengths of input and output sentences (see bucketing and padding). There’s also issues with translating rare words correctly. If you want to build your own language translation system, there’s a working demo included with TensorFlow that will translate between English and French. However, this is not for the faint of heart or for those with limited budgets. This technology is still new and very resource intensive. Even if you have a fast computer with a high-end video card, it might take about a month of continuous processing time to train your own language translation system. Also, Sequence-to-sequence language translation techniques are improving so rapidly that it’s hard to keep up. Many recent improvements (like adding an attention mechanism or tracking context) are significantly improving results but these developments are so new that there aren’t even wikipedia pages for them yet. 
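Here is a minimal numpy sketch of that encoding step: a toy recurrent network reads one word vector at a time, and its final hidden state becomes the fixed-size encoding of the whole sentence. The weights are random stand-ins for what a real system would learn, and the second, decoder RNN (not shown) would be trained to unfold this encoding into the target language.

```python
import numpy as np

embedding_size = 50   # size of each word vector
encoding_size = 128   # size of the sentence encoding (like the 128 face measurements)

# Random stand-ins for weights that a real system would learn during training.
rng = np.random.default_rng(0)
W_input = rng.normal(scale=0.1, size=(encoding_size, embedding_size))
W_hidden = rng.normal(scale=0.1, size=(encoding_size, encoding_size))

def encode_sentence(word_vectors):
    """Feed the word vectors into the RNN one at a time; the final hidden
    state is the fixed-size encoding of the entire sentence."""
    h = np.zeros(encoding_size)
    for x in word_vectors:
        h = np.tanh(W_input @ x + W_hidden @ h)   # one RNN step
    return h

# A made-up four-word sentence, one random vector per word:
sentence = [rng.normal(size=embedding_size) for _ in range(4)]
encoding = encode_sentence(sentence)
print(encoding.shape)   # (128,) - one set of numbers representing the whole sentence
```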
If you want to do anything serious with sequence-to-sequence learning, you’ll need to keep with new developments as they occur. So what else can we do with sequence-to-sequence models? About a year ago, researchers at Google showed that you can use sequence-to-sequence models to build AI bots. The idea is so simple that it’s amazing it works at all. First, they captured chat logs between Google employees and Google’s Tech Support team. Then they trained a sequence-to-sequence model where the employee’s question was the input sentence and the Tech Support team’s response was the “translation” of that sentence. When a user interacted with the bot, they would “translate” each of the user’s messages with this system to get the bot’s response. The end result was a semi-intelligent bot that could (sometimes) answer real tech support questions. Here’s part of a sample conversation between a user and the bot from their paper: They also tried building a chat bot based on millions of movie subtitles. The idea was to use conversations between movie characters as a way to train a bot to talk like a human. The input sentence is a line of dialog said by one character and the “translation” is what the next character said in response: This produced really interesting results. Not only did the bot converse like a human, but it displayed a small bit of intelligence: This is only the beginning of the possibilities. We aren’t limited to converting one sentence into another sentence. It’s also possible to make an image-to-sequence model that can turn an image into text! A different team at Google did this by replacing the first RNN with a Convolutional Neural Network (like we learned about in Part 3). This allows the input to be a picture instead of a sentence. The rest works basically the same way: And just like that, we can turn pictures into words (as long as we have lots and lots of training data)! Andrej Karpathy expanded on these ideas to build a system capable of describing images in great detail by processing multiple regions of an image separately: This makes it possible to build image search engines that are capable of finding images that match oddly specific search queries: There’s even researchers working on the reverse problem, generating an entire picture based on just a text description! Just from these examples, you can start to imagine the possibilities. So far, there have been sequence-to-sequence applications in everything from speech recognition to computer vision. I bet there will be a lot more over the next year. If you want to learn more in depth about sequence-to-sequence models and translation, here’s some recommended resources: If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun! Part 6! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it. 
" Tal Perry,2.6K,17,https://medium.com/@TalPerry/deep-learning-the-stock-market-df853d139e02?source=tag_archive---------7----------------,Deep Learning the Stock Market – Tal Perry – Medium,"Update 25.1.17 — Took me a while but here is an ipython notebook with a rough implementation In the past few months I’ve been fascinated with “Deep Learning”, especially its applications to language and text. I’ve spent the bulk of my career in financial technologies, mostly in algorithmic trading and alternative data services. You can see where this is going. I wrote this to get my ideas straight in my head. While I’ve become a “Deep Learning” enthusiast, I don’t have too many opportunities to brain dump an idea in most of its messy glory. I think that a decent indication of a clear thought is the ability to articulate it to people not from the field. I hope that I’ve succeeded in doing that and that my articulation is also a pleasurable read. Why NLP is relevant to Stock prediction In many NLP problems we end up taking a sequence and encoding it into a single fixed size representation, then decoding that representation into another sequence. For example, we might tag entities in the text, translate from English to French or convert audio frequencies to text. There is a torrent of work coming out in these areas and a lot of the results are achieving state of the art performance. In my mind the biggest difference between the NLP and financial analysis is that language has some guarantee of structure, it’s just that the rules of the structure are vague. Markets, on the other hand, don’t come with a promise of a learnable structure, that such a structure exists is the assumption that this project would prove or disprove (rather it might prove or disprove if I can find that structure). Assuming the structure is there, the idea of summarizing the current state of the market in the same way we encode the semantics of a paragraph seems plausible to me. If that doesn’t make sense yet, keep reading. It will. You shall know a word by the company it keeps (Firth, J. R. 1957:11) There is tons of literature on word embeddings. Richard Socher’s lecture is a great place to start. In short, we can make a geometry of all the words in our language, and that geometry captures the meaning of words and relationships between them. You may have seen the example of “King-man +woman=Queen” or something of the sort. Embeddings are cool because they let us represent information in a condensed way. The old way of representing words was holding a vector (a big list of numbers) that was as long as the number of words we know, and setting a 1 in a particular place if that was the current word we are looking at. That is not an efficient approach, nor does it capture any meaning. With embeddings, we can represent all of the words in a fixed number of dimensions (300 seems to be plenty, 50 works great) and then leverage their higher dimensional geometry to understand them. The picture below shows an example. An embedding was trained on more or less the entire internet. After a few days of intensive calculations, each word was embedded in some high dimensional space. This “space” has a geometry, concepts like distance, and so we can ask which words are close together. The authors/inventors of that method made an example. Here are the words that are closest to Frog. But we can embed more than just words. We can do, say , stock market embeddings. Market2Vec The first word embedding algorithm I heard about was word2vec. 
I want to get the same effect for the market, though I’ll be using a different algorithm. My input data is a csv, the first column is the date, and there are 4*1000 columns corresponding to the High Low Open Closing price of 1000 stocks. That is my input vector is 4000 dimensional, which is too big. So the first thing I’m going to do is stuff it into a lower dimensional space, say 300 because I liked the movie. Taking something in 4000 dimensions and stuffing it into a 300-dimensional space my sound hard but its actually easy. We just need to multiply matrices. A matrix is a big excel spreadsheet that has numbers in every cell and no formatting problems. Imagine an excel table with 4000 columns and 300 rows, and when we basically bang it against the vector a new vector comes out that is only of size 300. I wish that’s how they would have explained it in college. The fanciness starts here as we’re going to set the numbers in our matrix at random, and part of the “deep learning” is to update those numbers so that our excel spreadsheet changes. Eventually this matrix spreadsheet (I’ll stick with matrix from now on) will have numbers in it that bang our original 4000 dimensional vector into a concise 300 dimensional summary of itself. We’re going to get a little fancier here and apply what they call an activation function. We’re going to take a function, and apply it to each number in the vector individually so that they all end up between 0 and 1 (or 0 and infinity, it depends). Why ? It makes our vector more special, and makes our learning process able to understand more complicated things. How? So what? What I’m expecting to find is that that new embedding of the market prices (the vector) into a smaller space captures all the essential information for the task at hand, without wasting time on the other stuff. So I’d expect they’d capture correlations between other stocks, perhaps notice when a certain sector is declining or when the market is very hot. I don’t know what traits it will find, but I assume they’ll be useful. Now What Lets put aside our market vectors for a moment and talk about language models. Andrej Karpathy wrote the epic post “The Unreasonable effectiveness of Recurrent Neural Networks”. If I’d summarize in the most liberal fashion the post boils down to And then as a punchline, he generated a bunch of text that looks like Shakespeare. And then he did it again with the Linux source code. And then again with a textbook on Algebraic geometry. So I’ll get back to the mechanics of that magic box in a second, but let me remind you that we want to predict the future market based on the past just like he predicted the next word based on the previous one. Where Karpathy used characters, we’re going to use our market vectors and feed them into the magic black box. We haven’t decided what we want it to predict yet, but that is okay, we won’t be feeding its output back into it either. Going deeper I want to point out that this is where we start to get into the deep part of deep learning. So far we just have a single layer of learning, that excel spreadsheet that condenses the market. Now we’re going to add a few more layers and stack them, to make a “deep” something. That’s the deep in deep learning. So Karpathy shows us some sample output from the Linux source code, this is stuff his black box wrote. 
Notice that it knows how to open and close parentheses, and respects indentation conventions; The contents of the function are properly indented and the multi-line printk statement has an inner indentation. That means that this magic box understands long range dependencies. When it’s indenting within the print statement it knows it’s in a print statement and also remembers that it’s in a function( Or at least another indented scope). That’s nuts. It’s easy to gloss over that but an algorithm that has the ability to capture and remember long term dependencies is super useful because... We want to find long term dependencies in the market. Inside the magical black box What’s inside this magical black box? It is a type of Recurrent Neural Network (RNN) called an LSTM. An RNN is a deep learning algorithm that operates on sequences (like sequences of characters). At every step, it takes a representation of the next character (Like the embeddings we talked about before) and operates on the representation with a matrix, like we saw before. The thing is, the RNN has some form of internal memory, so it remembers what it saw previously. It uses that memory to decide how exactly it should operate on the next input. Using that memory, the RNN can “remember” that it is inside of an intended scope and that is how we get properly nested output text. A fancy version of an RNN is called a Long Short Term Memory (LSTM). LSTM has cleverly designed memory that allows it to So an LSTM can see a “{“ and say to itself “Oh yeah, that’s important I should remember that” and when it does, it essentially remembers an indication that it is in a nested scope. Once it sees the corresponding “}” it can decide to forget the original opening brace and thus forget that it is in a nested scope. We can have the LSTM learn more abstract concepts by stacking a few of them on top of each other, that would make us “Deep” again. Now each output of the previous LSTM becomes the inputs of the next LSTM, and each one goes on to learn higher abstractions of the data coming in. In the example above (and this is just illustrative speculation), the first layer of LSTMs might learn that characters separated by a space are “words”. The next layer might learn word types like (static void action_new_function).The next layer might learn the concept of a function and its arguments and so on. It’s hard to tell exactly what each layer is doing, though Karpathy’s blog has a really nice example of how he did visualize exactly that. Connecting Market2Vec and LSTMs The studious reader will notice that Karpathy used characters as his inputs, not embeddings (Technically a one-hot encoding of characters). But, Lars Eidnes actually used word embeddings when he wrote Auto-Generating Clickbait With Recurrent Neural Network The figure above shows the network he used. Ignore the SoftMax part (we’ll get to it later). For the moment, check out how on the bottom he puts in a sequence of words vectors at the bottom and each one. (Remember, a “word vector” is a representation of a word in the form of a bunch of numbers, like we saw in the beginning of this post). Lars inputs a sequence of Word Vectors and each one of them: We’re going to do the same thing with one difference, instead of word vectors we’ll input “MarketVectors”, those market vectors we described before. To recap, the MarketVectors should contain a summary of what’s happening in the market at a given point in time. 
By putting a sequence of them through LSTMs I hope to capture the long term dynamics that have been happening in the market. By stacking together a few layers of LSTMs I hope to capture higher level abstractions of the market’s behavior. What Comes out Thus far we haven’t talked at all about how the algorithm actually learns anything, we just talked about all the clever transformations we’ll do on the data. We’ll defer that conversation to a few paragraphs down, but please keep this part in mind as it is the se up for the punch line that makes everything else worthwhile. In Karpathy’s example, the output of the LSTMs is a vector that represents the next character in some abstract representation. In Eidnes’ example, the output of the LSTMs is a vector that represents what the next word will be in some abstract space. The next step in both cases is to change that abstract representation into a probability vector, that is a list that says how likely each character or word respectively is likely to appear next. That’s the job of the SoftMax function. Once we have a list of likelihoods we select the character or word that is the most likely to appear next. In our case of “predicting the market”, we need to ask ourselves what exactly we want to market to predict? Some of the options that I thought about were: 1 and 2 are regression problems, where we have to predict an actual number instead of the likelihood of a specific event (like the letter n appearing or the market going up). Those are fine but not what I want to do. 3 and 4 are fairly similar, they both ask to predict an event (In technical jargon — a class label). An event could be the letter n appearing next or it could be Moved up 5% while not going down more than 3% in the last 10 minutes. The trade-off between 3 and 4 is that 3 is much more common and thus easier to learn about while 4 is more valuable as not only is it an indicator of profit but also has some constraint on risk. 5 is the one we’ll continue with for this article because it’s similar to 3 and 4 but has mechanics that are easier to follow. The VIX is sometimes called the Fear Index and it represents how volatile the stocks in the S&P500 are. It is derived by observing the implied volatility for specific options on each of the stocks in the index. Sidenote — Why predict the VIX What makes the VIX an interesting target is that Back to our LSTM outputs and the SoftMax How do we use the formulations we saw before to predict changes in the VIX a few minutes in the future? For each point in our dataset, we’ll look what happened to the VIX 5 minutes later. If it went up by more than 1% without going down more than 0.5% during that time we’ll output a 1, otherwise a 0. Then we’ll get a sequence that looks like: We want to take the vector that our LSTMs output and squish it so that it gives us the probability of the next item in our sequence being a 1. The squishing happens in the SoftMax part of the diagram above. (Technically, since we only have 1 class now, we use a sigmoid ). So before we get into how this thing learns, let’s recap what we’ve done so far How does this thing learn? Now the fun part. Everything we did until now was called the forward pass, we’d do all of those steps while we train the algorithm and also when we use it in production. Here we’ll talk about the backward pass, the part we do only while in training that makes our algorithm learn. 
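Before getting into the backward pass, here is a minimal sketch of how that sequence of 0s and 1s might be built from a minute-by-minute series of VIX values. The thresholds follow the rule described above; treat the function as illustrative rather than the exact preprocessing used for this project.

```python
import numpy as np

def label_vix_moves(vix, horizon=5, up_threshold=0.01, down_threshold=0.005):
    """For each minute, output 1 if the VIX rises more than 1% at some point in
    the next `horizon` minutes without ever dropping more than 0.5% below the
    starting value during that window, otherwise output 0."""
    vix = np.asarray(vix, dtype=float)
    labels = np.zeros(len(vix) - horizon, dtype=int)
    for t in range(len(labels)):
        window = vix[t + 1:t + 1 + horizon]
        returns = (window - vix[t]) / vix[t]
        went_up = np.any(returns > up_threshold)
        went_down = np.any(returns < -down_threshold)
        if went_up and not went_down:
            labels[t] = 1
    return labels

# Example with made-up prices, one reading per minute:
print(label_vix_moves([14.0, 14.05, 14.2, 14.3, 14.1, 14.0, 13.9]))
```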
So during training, not only did we prepare years worth of historical data, we also prepared a sequence of prediction targets, that list of 0 and 1 that showed if the VIX moved the way we want it to or not after each observation in our data. To learn, we’ll feed the market data to our network and compare its output to what we calculated. Comparing in our case will be simple subtraction, that is we’ll say that our model’s error is Or in English, the square root of the square of the difference between what actually happened and what we predicted. Here’s the beauty. That’s a differential function, that is, we can tell by how much the error would have changed if our prediction would have changed a little. Our prediction is the outcome of a differentiable function, the SoftMax The inputs to the softmax, the LSTMs are all mathematical functions that are differentiable. Now all of these functions are full of parameters, those big excel spreadsheets I talked about ages ago. So at this stage what we do is take the derivative of the error with respect to every one of the millions of parameters in all of those excel spreadsheets we have in our model. When we do that we can see how the error will change when we change each parameter, so we’ll change each parameter in a way that will reduce the error. This procedure propagates all the way to the beginning of the model. It tweaks the way we embed the inputs into MarketVectors so that our MarketVectors represent the most significant information for our task. It tweaks when and what each LSTM chooses to remember so that their outputs are the most relevant to our task. It tweaks the abstractions our LSTMs learn so that they learn the most important abstractions for our task. Which in my opinion is amazing because we have all of this complexity and abstraction that we never had to specify anywhere. It’s all inferred MathaMagically from the specification of what we consider to be an error. What’s next Now that I’ve laid this out in writing and it still makes sense to me I want So, if you’ve come this far please point out my errors and share your inputs. Other thoughts Here are some mostly more advanced thoughts about this project, what other things I might try and why it makes sense to me that this may actually work. Liquidity and efficient use of capital Generally the more liquid a particular market is the more efficient that is. I think this is due to a chicken and egg cycle, whereas a market becomes more liquid it is able to absorb more capital moving in and out without that capital hurting itself. As a market becomes more liquid and more capital can be used in it, you’ll find more sophisticated players moving in. This is because it is expensive to be sophisticated, so you need to make returns on a large chunk of capital in order to justify your operational costs. A quick corollary is that in less liquid markets the competition isn’t quite as sophisticated and so the opportunities a system like this can bring may not have been traded away. The point being were I to try and trade this I would try and trade it on less liquid segments of the market, that is maybe the TASE 100 instead of the S&P 500. This stuff is new The knowledge of these algorithms, the frameworks to execute them and the computing power to train them are all new at least in the sense that they are available to the average Joe such as myself. 
I'd assume that top players figured this stuff out years ago and have had the capacity to execute it for just as long, but, as I mention in the above paragraph, they are likely executing in liquid markets that can support their size. The next tier of market participants, I assume, has a slower velocity of technological assimilation, and in that sense there is, or soon will be, a race to execute on this in as-yet-untapped markets. Multiple time frames: While I mentioned a single stream of inputs above, I imagine that a more efficient way to train would be to train market vectors (at least) on multiple time frames and feed them in at the inference stage. That is, my lowest time frame would be sampled every 30 seconds and I'd expect the network to learn dependencies that stretch hours at most. I don't know if they are relevant or not, but I think there are patterns on multiple time frames, and if the cost of computation can be brought low enough then it is worthwhile to incorporate them into the model. I'm still wrestling with how best to represent these on the computational graph, and perhaps it is not mandatory to start with. MarketVectors: When using word vectors in NLP we usually start with a pretrained model and continue adjusting the embeddings during training of our model. In my case, there are no pretrained market vectors available, nor is there a clear algorithm for training them. My original consideration was to use an auto-encoder like in this paper, but end-to-end training is cooler. A more serious consideration is the success of sequence-to-sequence models in translation and speech recognition, where a sequence is eventually encoded as a single vector and then decoded into a different representation (like from speech to text, or from English to French). In that view, the entire architecture I described is essentially the encoder, and I haven't really laid out a decoder. But I want to achieve something specific with the first layer, the one that takes as input the 4,000-dimensional vector and outputs a 300-dimensional one. I want it to find correlations or relations between various stocks and compose features about them. The alternative is to run each input through an LSTM, perhaps concatenate all of the output vectors, and consider that the output of the encoder stage. I think this would be inefficient, as the interactions and correlations between instruments and their features would be lost, and there would be 10x more computation required. On the other hand, such an architecture could naively be parallelized across multiple GPUs and hosts, which is an advantage. CNNs: Recently there has been a spate of papers on character-level machine translation. This paper caught my eye because they manage to capture long-range dependencies with a convolutional layer rather than an RNN. I haven't given it more than a brief read, but I think that a modification where I'd treat each stock as a channel and convolve over channels first (like in RGB images) would be another way to capture the market dynamics, in the same way that they essentially encode semantic meaning from characters (a rough sketch of what I mean follows at the end of this piece). Founder of https://LightTag.io, a platform to annotate text for NLP. Google developer expert in ML. I do deep learning on text for a living and for fun. 
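Here is the rough sketch referred to above: a purely speculative reading of the "each stock as a channel" idea, assuming an input laid out as (time steps × stocks) and with the stock count, filter sizes, and dilation rates chosen arbitrarily. It is my illustration of the idea, not the paper's architecture or the author's code.

```python
import tensorflow as tf
from tensorflow.keras import layers

N_STOCKS = 500  # placeholder; the article never fixes this number

cnn_market_model = tf.keras.Sequential([
    # Input: (time steps, stocks). Each stock is one input channel, the way
    # R, G and B are channels of an image, so the first convolution already
    # mixes information across stocks at every 30-second step.
    layers.Conv1D(300, kernel_size=5, padding="causal", activation="relu",
                  input_shape=(None, N_STOCKS)),
    # Dilated causal convolutions widen the receptive field over time,
    # standing in for the long-range dependencies an LSTM would carry.
    layers.Conv1D(300, kernel_size=5, padding="causal", dilation_rate=2,
                  activation="relu"),
    layers.Conv1D(300, kernel_size=5, padding="causal", dilation_rate=4,
                  activation="relu"),
    layers.GlobalAveragePooling1D(),
    # Same sigmoid head as before: probability that the VIX label is a 1.
    layers.Dense(1, activation="sigmoid"),
])
```

Whether convolutions over stock channels capture market dynamics the way character-level convolutions capture semantics is exactly the open question raised above.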
" Andrej Karpathy,9.2K,7,https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b?source=tag_archive---------8----------------,Yes you should understand backprop – Andrej Karpathy – Medium,"When we offered CS231n (Deep Learning class) at Stanford, we intentionally designed the programming assignments to include explicit calculations involved in backpropagation on the lowest level. The students had to implement the forward and the backward pass of each layer in raw numpy. Inevitably, some students complained on the class message boards: This is seemingly a perfectly sensible appeal - if you’re never going to write backward passes once the class is over, why practice writing them? Are we just torturing the students for our own amusement? Some easy answers could make arguments along the lines of “it’s worth knowing what’s under the hood as an intellectual curiosity”, or perhaps “you might want to improve on the core algorithm later”, but there is a much stronger and practical argument, which I wanted to devote a whole post to: > The problem with Backpropagation is that it is a leaky abstraction. In other words, it is easy to fall into the trap of abstracting away the learning process — believing that you can simply stack arbitrary layers together and backprop will “magically make them work” on your data. So lets look at a few explicit examples where this is not the case in quite unintuitive ways. We’re starting off easy here. At one point it was fashionable to use sigmoid (or tanh) non-linearities in the fully connected layers. The tricky part people might not realize until they think about the backward pass is that if you are sloppy with the weight initialization or data preprocessing these non-linearities can “saturate” and entirely stop learning — your training loss will be flat and refuse to go down. For example, a fully connected layer with sigmoid non-linearity computes (using raw numpy): If your weight matrix W is initialized too large, the output of the matrix multiply could have a very large range (e.g. numbers between -400 and 400), which will make all outputs in the vector z almost binary: either 1 or 0. But if that is the case, z*(1-z), which is local gradient of the sigmoid non-linearity, will in both cases become zero (“vanish”), making the gradient for both x and W be zero. The rest of the backward pass will come out all zero from this point on due to multiplication in the chain rule. Another non-obvious fun fact about sigmoid is that its local gradient (z*(1-z)) achieves a maximum at 0.25, when z = 0.5. That means that every time the gradient signal flows through a sigmoid gate, its magnitude always diminishes by one quarter (or more). If you’re using basic SGD, this would make the lower layers of a network train much slower than the higher ones. TLDR: if you’re using sigmoids or tanh non-linearities in your network and you understand backpropagation you should always be nervous about making sure that the initialization doesn’t cause them to be fully saturated. See a longer explanation in this CS231n lecture video. Another fun non-linearity is the ReLU, which thresholds neurons at zero from below. The forward and backward pass for a fully connected layer that uses ReLU would at the core include: If you stare at this for a while you’ll see that if a neuron gets clamped to zero in the forward pass (i.e. z=0, it doesn’t “fire”), then its weights will get zero gradient. 
This can lead to what is called the “dead ReLU” problem, where if a ReLU neuron is unfortunately initialized such that it never fires, or if a neuron’s weights ever get knocked off with a large update during training into this regime, then this neuron will remain permanently dead. It’s like permanent, irrecoverable brain damage. Sometimes you can forward the entire training set through a trained network and find that a large fraction (e.g. 40%) of your neurons were zero the entire time. TLDR: If you understand backpropagation and your network has ReLUs, you’re always nervous about dead ReLUs. These are neurons that never turn on for any example in your entire training set, and will remain permanently dead. Neurons can also die during training, usually as a symptom of aggressive learning rates. See a longer explanation in CS231n lecture video. Vanilla RNNs feature another good example of unintuitive effects of backpropagation. I’ll copy-paste a slide from CS231n that has a simplified RNN that does not take any input x, and only computes the recurrence on the hidden state (equivalently, the input x could always be zero): This RNN is unrolled for T time steps. When you stare at what the backward pass is doing, you’ll see that the gradient signal going backwards in time through all the hidden states is always being multiplied by the same matrix (the recurrence matrix Whh), interspersed with non-linearity backprop. What happens when you take one number a and start multiplying it by some other number b (i.e. a*b*b*b*b*b*b...)? This sequence either goes to zero if |b| < 1, or explodes to infinity when |b| > 1. The same thing happens in the backward pass of an RNN, except b is a matrix and not just a number, so we have to reason about its largest eigenvalue instead. TLDR: If you understand backpropagation and you’re using RNNs you are nervous about having to do gradient clipping, or you prefer to use an LSTM. See a longer explanation in this CS231n lecture video. Let’s look at one more — the one that actually inspired this post. Yesterday I was browsing for a Deep Q Learning implementation in TensorFlow (to see how others deal with computing the numpy equivalent of Q[:, a], where a is an integer vector — turns out this trivial operation is not supported in TF). Anyway, I searched “dqn tensorflow”, clicked the first link, and found the core code. Here is an excerpt: If you’re familiar with DQN, you can see that there is the target_q_t, which is just reward + γ · max_a Q(s’, a), and then there is q_acted, which is Q(s,a) of the action that was taken. The authors here subtract the two into variable delta, which they then want to minimize on line 295 as an L2 loss with tf.reduce_mean(tf.square()). So far so good. The problem is on line 291. The authors are trying to be robust to outliers, so if the delta is too large, they clip it with tf.clip_by_value. This is well-intentioned and looks sensible from the perspective of the forward pass, but it introduces a major bug if you think about the backward pass. The clip_by_value function has a local gradient of zero outside of the range min_delta to max_delta, so whenever the delta falls outside min/max_delta, the gradient becomes exactly zero during backprop. The authors are clipping the raw Q delta, when they are likely trying to clip the gradient for added robustness. 
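The fix described in the next paragraph is to swap the squared error for a Huber-style loss, so that what effectively gets clipped is the gradient rather than the raw delta. The code excerpt itself is not reproduced in this scrape, so here is a minimal sketch under the assumption of TensorFlow 1.x-style ops; the function and variable names are mine, not the repo's.

```python
import tensorflow as tf

def huber_loss(delta, max_grad=1.0):
    """Quadratic for |delta| <= max_grad, linear beyond it.

    The gradient is therefore bounded by max_grad, instead of being zeroed
    out the way tf.clip_by_value on the raw delta zeroes it.
    """
    abs_delta = tf.abs(delta)
    quadratic = tf.minimum(abs_delta, max_grad)   # the part treated with a squared loss
    linear = abs_delta - quadratic                # the overflow, treated linearly
    return tf.reduce_mean(0.5 * tf.square(quadratic) + max_grad * linear)

# Illustrative usage with the names mentioned in the post:
#   instead of  tf.reduce_mean(tf.square(tf.clip_by_value(delta, -1.0, 1.0)))
#   use         huber_loss(target_q_t - q_acted)
```

Modern TensorFlow also ships this directly as tf.keras.losses.Huber, which is the easier route today.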
In that case the correct thing to do is to use the Huber loss in place of tf.square, as sketched above. It’s a bit gross in TensorFlow because all we want to do is clip the gradient if it is above a threshold, but since we can’t meddle with the gradients directly we have to do it in this round-about way of defining the Huber loss. In Torch this would be much simpler. I submitted an issue on the DQN repo and this was promptly fixed. Backpropagation is a leaky abstraction; it is a credit assignment scheme with non-trivial consequences. If you try to ignore how it works under the hood because “TensorFlow automagically makes my networks learn”, you will not be ready to wrestle with the dangers it presents, and you will be much less effective at building and debugging neural networks. The good news is that backpropagation is not that difficult to understand, if presented properly. I have relatively strong feelings on this topic because it seems to me that 95% of backpropagation materials out there present it all wrong, filling pages with mechanical math. Instead, I would recommend the CS231n lecture on backprop which emphasizes intuition (yay for shameless self-advertising). And if you can spare the time, as a bonus, work through the CS231n assignments, which get you to write backprop manually and help you solidify your understanding. That’s it for now! I hope you’ll be much more suspicious of backpropagation going forward and think carefully through what the backward pass is doing. Also, I’m aware that this post has (unintentionally!) turned into several CS231n ads. Apologies for that :) Director of AI at Tesla. Previously Research Scientist at OpenAI and PhD student at Stanford. I like to train deep neural nets on large datasets. " Per Harald Borgen,4.8K,7,https://medium.com/learning-new-stuff/machine-learning-in-a-year-cdb0b0ebd29c?source=tag_archive---------9----------------,Machine Learning in a Year – Learning New Stuff – Medium,"This is a follow-up to an article I wrote last year, Machine Learning in a Week, on how I kickstarted my way into machine learning (ml) by devoting five days to the subject. After this highly effective introduction, I continued learning in my spare time, and almost exactly one year later I did my first ml project at work, which involved using various ml and natural language processing (nlp) techniques to qualify sales leads at Xeneta. This felt like a blessing: getting paid to do something I normally did for fun! It also ripped me out of the delusion that only people with master’s degrees or Ph.D.s work with ml professionally. In this post I want to share my journey, as it might inspire others to do the same. My interest in ml stems back to 2014, when I started reading articles about it on Hacker News. I simply found the idea of teaching machines stuff by looking at data appealing. At the time I wasn’t even a professional developer, but a hobby coder who’d done a couple of small projects. So I began watching the first few chapters of Udacity’s Supervised Learning course, while also reading all the articles I came across on the subject. This gave me a little bit of conceptual understanding, though no practical skills. I also didn’t finish it, as I rarely do with MOOCs. In January 2015 I joined the Founders and Coders (FAC) bootcamp in London in order to become a developer. 
A few weeks in, I wanted to learn how to actually code machine learning algorithms, so I started a study group with a few of my peers. Every Tuesday evening, we’d watch lectures from Coursera’s Machine Learning course. It’s a fantastic course, and I learned a hell of a lot. But it’s tough for a beginner. I had to watch the lectures over and over again before grasping the concepts. The Octave coding task are challenging as well, especially if you don’t know Octave. As a result of the difficulty, one by one fell off the study group as the weeks passed. Eventually, I fell off it myself as well. In hindsight, I should have started with a course that either used ml libraries for the coding tasks — as opposed to building the algorithms from scratch — or at least used a programming language I knew. If I could go back in time, I’d choose Udacity’s Intro to Machine Learning, as it’s easier and uses Python and Scikit Learn. This way, we would have gotten our hands dirty as soon as possible, gained confidence, and had more fun. One of the last things I did at FAC was the ml week stunt. My goal was to be able to apply machine learning to actual problems at the end of the week, which I managed to do. Throughout the week I did the following: It’s by far the steepest ml learning curve I’ve ever experienced. Go ahead and read the article if you want a more detailed overview. After I finished FAC in London and moved back to Norway, I tried to repeat the success from the ml week, but for neural networks instead. This failed. There were simply too many distractions to spend 10 hours of coding and learning every day. I had underestimated how important it was to be surrounded by peers at FAC. However, I got started with neural nets at least, and slowly started to grasp the concept. By July I managed to code my first net. It’s probably the crappiest implementation ever created, and I actually find it embarrassing to show off. But it did the trick; I proved to myself that I understood concepts like backpropagation and gradient descent. In the second half of the year, my progression slowed down, as I started a new job. The most important takeaway from this period was the leap from non-vectorized to vectorized implementations of neural networks, which involved repeating linear algebra from university. By the end of the year I wrote an article as a summary of how I learned this: During the christmas vacation of 2015, I got a motivational boost again and decided try out Kaggle. So I spent quite some time experimenting with various algorithms for their Homesite Quote Conversion, Otto Group Product Classification and Bike Sharing Demand contests. The main takeaway from this was the experience of iteratively improving the results by experimenting with the algorithms and the data. I learned to trust my logic when doing machine learning. If tweaking a parameter or engineering a new feature seems like a good idea logically, it’s quite likely that it actually will help. Back at work in January 2016 I wanted to continue in the flow I’d gotten into during Christmas. So I asked my manager if I could spend some time learning stuff during my work hours as well, which he happily approved. Having gotten a basic understanding of neural networks at this point, I wanted to move on to deep learning. My first attempt was Udacity’s Deep Learning course, which ended up as a big disappointment. The contents of the video lectures are good, but they are too short and shallow to me. 
And the IPython Notebook assignments ended up being too frustrating, as I spent most of my time debugging code errors, which is the most effective way to kill motivation. So after doing that for a couple of sessions at work, I simply gave up. To their defense, I’m a total noob when it comes to IPython Notebooks, so it might not be as bad for you as it was for me. So it might be that I simply wasn’t ready for the course. Luckily, I then discovered Stanford’s CS224D and decided to give it a shot. It is a fantastic course. And though it’s difficult, I never end up debugging when doing the problem sets. Secondly, they actually give you the solution code as well, which I often look at when I’m stuck, so that I can work my way backwards to understand the steps needed to reach a solution. Though I’ve haven’t finished it yet, it has significantly boosted my knowledge in nlp and neural networks so far. However it’s been tough. Really tough. At one point, I realized I needed help from someone better than me, so I came in touch with a Ph.D student who was willing to help me out for 40 USD per hour, both with the problem sets as well as the overall understanding. This has been critical for me in order to move on, as he has uncovered a lot of black holes in my knowledge. In addition to this, Xeneta also hired a data scientist recently. He’s got a masters degree in math, so I often ask him for help when I’m stuck with various linear algebra an calculus tasks, or ml in general. So be sure to check out which resources you have internally in your company as well. After doing all this, I finally felt ready to do a ml project at work. It basically involved training an algorithm to qualify sales leads by reading company descriptions, and has actually proven to be a big time saver for the sales guys using the tool. Check out out article I wrote about it below or head over to GitHub to dive straight into the code. Getting to this point has surely been a long journey. But also a fast one; when I started my machine learning in a week project, I certainly didn’t have any hopes of actually using it professionally within a year. But it’s 100 percent possible. And if I can do it, so can anybody else. Thanks for reading! My name is Per, I’m a co-founder of Scrimba — a better way to teach and learn code. If you’ve read this far, I’d recommend you to check out this demo! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Co-founder of Scrimba, the next-generation platform for teaching and learning code. https://scrimba.com. A publication about improving your technical skills. " Xiaohan Zeng,48K,13,https://medium.com/@XiaohanZeng/i-interviewed-at-five-top-companies-in-silicon-valley-in-five-days-and-luckily-got-five-job-offers-25178cf74e0f?source=tag_archive---------0----------------,"I interviewed at five top companies in Silicon Valley in five days, and luckily got five job offers","In the five days from July 24th to 28th 2017, I interviewed at LinkedIn, Salesforce Einstein, Google, Airbnb, and Facebook, and got all five job offers. It was a great experience, and I feel fortunate that my efforts paid off, so I decided to write something about it. I will discuss how I prepared, review the interview process, and share my impressions about the five companies. I had been at Groupon for almost three years. It’s my first job, and I have been working with an amazing team and on awesome projects. 
We’ve been building cool stuff, making impact within the company, publishing papers and all that. But I felt my learning rate was being annealed (read: slowing down) yet my mind was craving more. Also as a software engineer in Chicago, there are so many great companies that all attract me in the Bay Area. Life is short, and professional life shorter still. After talking with my wife and gaining her full support, I decided to take actions and make my first ever career change. Although I’m interested in machine learning positions, the positions at the five companies are slightly different in the title and the interviewing process. Three are machine learning engineer (LinkedIn, Google, Facebook), one is data engineer (Salesforce), and one is software engineer in general (Airbnb). Therefore I needed to prepare for three different areas: coding, machine learning, and system design. Since I also have a full time job, it took me 2–3 months in total to prepare. Here is how I prepared for the three areas. While I agree that coding interviews might not be the best way to assess all your skills as a developer, there is arguably no better way to tell if you are a good engineer in a short period of time. IMO it is the necessary evil to get you that job. I mainly used Leetcode and Geeksforgeeks for practicing, but Hackerrank and Lintcode are also good places. I spent several weeks going over common data structures and algorithms, then focused on areas I wasn’t too familiar with, and finally did some frequently seen problems. Due to my time constraints I usually did two problems per day. Here are some thoughts: This area is more closely related to the actual working experience. Many questions can be asked during system design interviews, including but not limited to system architecture, object oriented design,database schema design,distributed system design,scalability, etc. There are many resources online that can help you with the preparation. For the most part I read articles on system design interviews, architectures of large-scale systems, and case studies. Here are some resources that I found really helpful: Although system design interviews can cover a lot of topics, there are some general guidelines for how to approach the problem: With all that said, the best way to practice for system design interviews is to actually sit down and design a system, i.e. your day-to-day work. Instead of doing the minimal work, go deeper into the tools, frameworks, and libraries you use. For example, if you use HBase, rather than simply using the client to run some DDL and do some fetches, try to understand its overall architecture, such as the read/write flow, how HBase ensures strong consistency, what minor/major compactions do, and where LRU cache and Bloom Filter are used in the system. You can even compare HBase with Cassandra and see the similarities and differences in their design. Then when you are asked to design a distributed key-value store, you won’t feel ambushed. Many blogs are also a great source of knowledge, such as Hacker Noon and engineering blogs of some companies, as well as the official documentation of open source projects. The most important thing is to keep your curiosity and modesty. Be a sponge that absorbs everything it is submerged into. Machine learning interviews can be divided into two aspects, theory and product design. Unless you are have experience in machine learning research or did really well in your ML course, it helps to read some textbooks. 
Classical ones such as the Elements of Statistical Learning and Pattern Recognition and Machine Learning are great choices, and if you are interested in specific areas you can read more on those. Make sure you understand basic concepts such as bias-variance trade-off, overfitting, gradient descent, L1/L2 regularization,Bayes Theorem,bagging/boosting,collaborative filtering,dimension reduction, etc. Familiarize yourself with common formulas such as Bayes Theorem and the derivation of popular models such as logistic regression and SVM. Try to implement simple models such as decision trees and K-means clustering. If you put some models on your resume, make sure you understand it thoroughly and can comment on its pros and cons. For ML product design, understand the general process of building a ML product. Here’s what I tried to do: Here I want to emphasize again on the importance of remaining curious and learning continuously. Try not to merely using the API for Spark MLlib or XGBoost and calling it done, but try to understand why stochastic gradient descent is appropriate for distributed training, or understand how XGBoost differs from traditional GBDT, e.g. what is special about its loss function, why it needs to compute the second order derivative, etc. I started by replying to HR’s messages on LinkedIn, and asking for referrals. After a failed attempt at a rock star startup (which I will touch upon later), I prepared hard for several months, and with help from my recruiters, I scheduled a full week of onsites in the Bay Area. I flew in on Sunday, had five full days of interviews with around 30 interviewers at some best tech companies in the world, and very luckily, got job offers from all five of them. All phone screenings are standard. The only difference is in the duration: For some companies like LinkedIn it’s one hour, while for Facebook and Airbnb it’s 45 minutes. Proficiency is the key here, since you are under the time gun and usually you only get one chance. You would have to very quickly recognize the type of problem and give a high-level solution. Be sure to talk to the interviewer about your thinking and intentions. It might slow you down a little at the beginning, but communication is more important than anything and it only helps with the interview. Do not recite the solution as the interviewer would almost certainly see through it. For machine learning positions some companies would ask ML questions. If you are interviewing for those make sure you brush up your ML skills as well. To make better use of my time, I scheduled three phone screenings in the same afternoon, one hour apart from each. The upside is that you might benefit from the hot hand and the downside is that the later ones might be affected if the first one does not go well, so I don’t recommend it for everyone. One good thing about interviewing with multiple companies at the same time is that it gives you certain advantages. I was able to skip the second round phone screening with Airbnb and Salesforce because I got the onsite at LinkedIn and Facebook after only one phone screening. More surprisingly, Google even let me skip their phone screening entirely and schedule my onsite to fill the vacancy after learning I had four onsites coming in the next week. I knew it was going to make it extremely tiring, but hey, nobody can refuse a Google onsite invitation! LinkedIn This is my first onsite and I interviewed at the Sunnyvale location. The office is very neat and people look very professional, as always. 
The sessions are one hour each. Coding questions are standard, but the ML questions can get a bit tough. That said, I got an email from my HR containing the preparation material which was very helpful, and in the end I did not see anything that was too surprising. I heard the rumor that LinkedIn has the best meals in the Silicon Valley, and from what I saw if it’s not true, it’s not too far from the truth. Acquisition by Microsoft seems to have lifted the financial burden from LinkedIn, and freed them up to do really cool things. New features such as videos and professional advertisements are exciting. As a company focusing on professional development, LinkedIn prioritizes the growth of its own employees. A lot of teams such as ads relevance and feed ranking are expanding, so act quickly if you want to join. Salesforce Einstein Rock star project by rock star team. The team is pretty new and feels very much like a startup. The product is built on the Scala stack, so type safety is a real thing there! Great talks on the Optimus Prime library by Matthew Tovbin at Scala Days Chicago 2017 and Leah McGuire at Spark Summit West 2017. I interviewed at their Palo Alto office. The team has a cohesive culture and work life balance is great there. Everybody is passionate about what they are doing and really enjoys it. With four sessions it is shorter compared to the other onsite interviews, but I wish I could have stayed longer. After the interview Matthew even took me for a walk to the HP garage :) Google Absolutely the industry leader, and nothing to say about it that people don’t already know. But it’s huge. Like, really, really HUGE. It took me 20 minutes to ride a bicycle to meet my friends there. Also lines for food can be too long. Forever a great place for developers. I interviewed at one of the many buildings on the Mountain View campus, and I don’t know which one it is because it’s HUGE. My interviewers all look very smart, and once they start talking they are even smarter. It would be very enjoyable to work with these people. One thing that I felt special about Google’s interviews is that the analysis of algorithm complexity is really important. Make sure you really understand what Big O notation means! Airbnb Fast expanding unicorn with a unique culture and arguably the most beautiful office in the Silicon Valley. New products such as Experiences and restaurant reservation, high end niche market, and expansion into China all contribute to a positive prospect. Perfect choice if you are risk tolerant and want a fast growing, pre-IPO experience. Airbnb’s coding interview is a bit unique because you’ll be coding in an IDE instead of whiteboarding, so your code needs to compile and give the right answer. Some problems can get really hard. And they’ve got the one-of-a-kind cross functional interviews. This is how Airbnb takes culture seriously, and being technically excellent doesn’t guarantee a job offer. For me the two cross functionals were really enjoyable. I had casual conversations with the interviewers and we all felt happy at the end of the session. Overall I think Airbnb’s onsite is the hardest due to the difficulty of the problems, longer duration, and unique cross-functional interviews. If you are interested, be sure to understand their culture and core values. Facebook Another giant that is still growing fast, and smaller and faster-paced compared to Google. 
With its product lines dominating the social network market and big investments in AI and VR, I can only see more growth potential for Facebook in the future. With stars like Yann LeCun and Yangqing Jia, it’s the perfect place if you are interested in machine learning. I interviewed at Building 20, the one with the rooftop garden and ocean view and also where Zuckerberg’s office is located. I’m not sure if the interviewers got instructions, but I didn’t get clear signs whether my solutions were correct, although I believed they were. By noon the prior four days started to take its toll, and I was having a headache. I persisted through the afternoon sessions but felt I didn’t do well at all. I was a bit surprised to learn that I was getting an offer from them as well. Generally I felt people there believe the company’s vision and are proud of what they are building. Being a company with half a trillion market cap and growing, Facebook is a perfect place to grow your career at. This is a big topic that I won’t cover in this post, but I found this article to be very helpful. Some things that I do think are important: All successes start with failures, including interviews. Before I started interviewing for these companies, I failed my interview at Databricks in May. Back in April, Xiangrui contacted me via LinkedIn asking me if I was interested in a position on the Spark MLlib team. I was extremely thrilled because 1) I use Spark and love Scala, 2) Databricks engineers are top-notch, and 3) Spark is revolutionizing the whole big data world. It is an opportunity I couldn’t miss, so I started interviewing after a few days. The bar is very high and the process is quite long, including one pre-screening questionnaire, one phone screening, one coding assignment, and one full onsite. I managed to get the onsite invitation, and visited their office in downtown San Francisco, where Treasure Island can be seen. My interviewer were incredibly intelligent yet equally modest. During the interviews I often felt being pushed to the limits. It was fine until one disastrous session, where I totally messed up due to insufficient skills and preparation, and it ended up a fiasco. Xiangrui was very kind and walked me to where I wanted to go after the interview was over, and I really enjoyed talking to him. I got the rejection several days later. It was expected but I felt frustrated for a few days nonetheless. Although I missed the opportunity to work there, I wholeheartedly wish they will continue to make greater impact and achievements. From the first interview in May to finally accepting the job offer in late September, my first career change was long and not easy. It was difficult for me to prepare because I needed to keep doing well at my current job. For several weeks I was on a regular schedule of preparing for the interview till 1am, getting up at 8:30am the next day and fully devoting myself to another day at work. Interviewing at five companies in five days was also highly stressful and risky, and I don’t recommend doing it unless you have a very tight schedule. But it does give you a good advantage during negotiation should you secure multiple offers. I’d like to thank all my recruiters who patiently walked me through the process, the people who spend their precious time talking to me, and all the companies that gave me the opportunities to interview and extended me offers. 
Lastly but most importantly, I want to thank my family for their love and support — my parents for watching me taking the first and every step, my dear wife for everything she has done for me, and my daughter for her warming smile. Thanks for reading through this long post. You can find me on LinkedIn or Twitter. Xiaohan Zeng 10/22/17 PS: Since the publication of this post, it has (unexpectedly) received some attention. I would like to thank everybody for the congratulations and shares, and apologize for not being able to respond to each of them. This post has been translated into some other languages: It has been reposted in Tech In Asia. Breaking Into Startups invited me to a live video streaming, together with Sophia Ciocca. CoverShr did a short QnA with me. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Critical Mind & Romantic Heart " Gil Fewster,3.3K,5,https://medium.freecodecamp.org/the-mind-blowing-ai-announcement-from-google-that-you-probably-missed-2ffd31334805?source=tag_archive---------1----------------,The mind-blowing AI announcement from Google that you probably missed.,"Disclaimer: I’m not an expert in neural networks or machine learning. Since originally writing this article, many people with far more expertise in these fields than myself have indicated that, while impressive, what Google have achieved is evolutionary, not revolutionary. In the very least, it’s fair to say that I’m guilty of anthropomorphising in parts of the text. I’ve left the article’s content unchanged, because I think it’s interesting to compare the gut reaction I had with the subsequent comments of experts in the field. I strongly encourage readers to browse the comments after reading the article for some perspectives more sober and informed than my own. In the closing weeks of 2016, Google published an article that quietly sailed under most people’s radars. Which is a shame, because it may just be the most astonishing article about machine learning that I read last year. Don’t feel bad if you missed it. Not only was the article competing with the pre-Christmas rush that most of us were navigating — it was also tucked away on Google’s Research Blog, beneath the geektastic headline Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System. This doesn’t exactly scream must read, does it? Especially when you’ve got projects to wind up, gifts to buy, and family feuds to be resolved — all while the advent calendar relentlessly counts down the days until Christmas like some kind of chocolate-filled Yuletide doomsday clock. Luckily, I’m here to bring you up to speed. Here’s the deal. Up until September of last year, Google Translate used phrase-based translation. It basically did the same thing you and I do when we look up key words and phrases in our Lonely Planet language guides. It’s effective enough, and blisteringly fast compared to awkwardly thumbing your way through a bunch of pages looking for the French equivalent of “please bring me all of your cheese and don’t stop until I fall over.” But it lacks nuance. Phrase-based translation is a blunt instrument. It does the job well enough to get by. But mapping roughly equivalent words and phrases without an understanding of linguistic structures can only produce crude results. This approach is also limited by the extent of an available vocabulary. Phrase-based translation has no capacity to make educated guesses at words it doesn’t recognize, and can’t learn from new input. 
All that changed in September, when Google gave their translation tool a new engine: the Google Neural Machine Translation system (GNMT). This new engine comes fully loaded with all the hot 2016 buzzwords, like neural network and machine learning. The short version is that Google Translate got smart. It developed the ability to learn from the people who used it. It learned how to make educated guesses about the content, tone, and meaning of phrases based on the context of other words and phrases around them. And — here’s the bit that should make your brain explode — it got creative. Google Translate invented its own language to help it translate more effectively. What’s more, nobody told it to. It didn’t develop a language (or interlingua, as Google call it) because it was coded to. It developed a new language because the software determined over time that this was the most efficient way to solve the problem of translation. Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks. (I’ve added a correction/retraction of this paragraph in the notes) To understand what’s going on, we need to understand what zero-shot translation capability is. Here’s Google’s Mike Schuster, Nikhil Thorat, and Melvin Johnson from the original blog post: Here you can see an advantage of Google’s new neural machine over the old phrase-based approach. The GMNT is able to learn how to translate between two languages without being explicitly taught. This wouldn’t be possible in a phrase-based model, where translation is dependent upon an explicit dictionary to map words and phrases between each pair of languages being translated. And this leads the Google engineers onto that truly astonishing discovery of creation: So there you have it. In the last weeks of 2016, as journos around the world started penning their “was this the worst year in living memory” thinkpieces, Google engineers were quietly documenting a genuinely astonishing breakthrough in software engineering and linguistics. I just thought maybe you’d want to know. Ok, to really understand what’s going on we probably need multiple computer science and linguistics degrees. I’m just barely scraping the surface here. If you’ve got time to get a few degrees (or if you’ve already got them) please drop me a line and explain it all me to. Slowly. Update 1: in my excitement, it’s fair to say that I’ve exaggerated the idea of this as an ‘intelligent’ system — at least so far as we would think about human intelligence and decision making. Make sure you read Chris McDonald’s comment after the article for a more sober perspective. Update 2: Nafrondel’s excellent, detailed reply is also a must read for an expert explanation of how neural networks function. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. A tinkerer Our community publishes stories worth reading on development, design, and data science. " David Venturi,10.6K,20,https://medium.freecodecamp.org/every-single-machine-learning-course-on-the-internet-ranked-by-your-reviews-3c4a7b8026c0?source=tag_archive---------2----------------,"Every single Machine Learning course on the internet, ranked by your reviews","A year and a half ago, I dropped out of one of the best computer science programs in Canada. 
I started creating my own data science master’s program using online resources. I realized that I could learn everything I needed through edX, Coursera, and Udacity instead. And I could learn it faster, more efficiently, and for a fraction of the cost. I’m almost finished now. I’ve taken many data science-related courses and audited portions of many more. I know the options out there, and what skills are needed for learners preparing for a data analyst or data scientist role. So I started creating a review-driven guide that recommends the best courses for each subject within data science. For the first guide in the series, I recommended a few coding classes for the beginner data scientist. Then it was statistics and probability classes. Then introductions to data science. Also, data visualization. For this guide, I spent a dozen hours trying to identify every online machine learning course offered as of May 2017, extracting key bits of information from their syllabi and reviews, and compiling their ratings. My end goal was to identify the three best courses available and present them to you, below. For this task, I turned to none other than the open source Class Central community, and its database of thousands of course ratings and reviews. Since 2011, Class Central founder Dhawal Shah has kept a closer eye on online courses than arguably anyone else in the world. Dhawal personally helped me assemble this list of resources. Each course must fit three criteria: We believe we covered every notable course that fits the above criteria. Since there are seemingly hundreds of courses on Udemy, we chose to consider the most-reviewed and highest-rated ones only. There’s always a chance that we missed something, though. So please let us know in the comments section if we left a good course out. We compiled average ratings and number of reviews from Class Central and other review sites to calculate a weighted average rating for each course. We read text reviews and used this feedback to supplement the numerical ratings. We made subjective syllabus judgment calls based on three factors: A popular definition originates from Arthur Samuel in 1959: machine learning is a subfield of computer science that gives “computers the ability to learn without being explicitly programmed.” In practice, this means developing computer programs that can make predictions based on data. Just as humans can learn from experience, so can computers, where data = experience. A machine learning workflow is the process required for carrying out a machine learning project. Though individual projects can differ, most workflows share several common tasks: problem evaluation, data exploration, data preprocessing, model training/testing/deployment, etc. Below you’ll find helpful visualization of these core steps: The ideal course introduces the entire process and provides interactive examples, assignments, and/or quizzes where students can perform each task themselves. First off, let’s define deep learning. Here is a succinct description: As would be expected, portions of some of the machine learning courses contain deep learning content. I chose not to include deep learning-only courses, however. If you are interested in deep learning specifically, we’ve got you covered with the following article: My top three recommendations from that list would be: Several courses listed below ask students to have prior programming, calculus, linear algebra, and statistics experience. 
These prerequisites are understandable given that machine learning is an advanced discipline. Missing a few subjects? Good news! Some of this experience can be acquired through our recommendations in the first two articles (programming, statistics) of this Data Science Career Guide. Several top-ranked courses below also provide gentle calculus and linear algebra refreshers and highlight the aspects most relevant to machine learning for those less familiar. Stanford University’s Machine Learning on Coursera is the clear current winner in terms of ratings, reviews, and syllabus fit. Taught by the famous Andrew Ng, Google Brain founder and former chief scientist at Baidu, this was the class that sparked the founding of Coursera. It has a 4.7-star weighted average rating over 422 reviews. Released in 2011, it covers all aspects of the machine learning workflow. Though it has a smaller scope than the original Stanford class upon which it is based, it still manages to cover a large number of techniques and algorithms. The estimated timeline is eleven weeks, with two weeks dedicated to neural networks and deep learning. Free and paid options are available. Ng is a dynamic yet gentle instructor with a palpable experience. He inspires confidence, especially when sharing practical implementation tips and warnings about common pitfalls. A linear algebra refresher is provided and Ng highlights the aspects of calculus most relevant to machine learning. Evaluation is automatic and is done via multiple choice quizzes that follow each lesson and programming assignments. The assignments (there are eight of them) can be completed in MATLAB or Octave, which is an open-source version of MATLAB. Ng explains his language choice: Though Python and R are likely more compelling choices in 2017 with the increased popularity of those languages, reviewers note that that shouldn’t stop you from taking the course. A few prominent reviewers noted the following: Columbia University’s Machine Learning is a relatively new offering that is part of their Artificial Intelligence MicroMasters on edX. Though it is newer and doesn’t have a large number of reviews, the ones that it does have are exceptionally strong. Professor John Paisley is noted as brilliant, clear, and clever. It has a 4.8-star weighted average rating over 10 reviews. The course also covers all aspects of the machine learning workflow and more algorithms than the above Stanford offering. Columbia’s is a more advanced introduction, with reviewers noting that students should be comfortable with the recommended prerequisites (calculus, linear algebra, statistics, probability, and coding). Quizzes (11), programming assignments (4), and a final exam are the modes of evaluation. Students can use either Python, Octave, or MATLAB to complete the assignments. The course’s total estimated timeline is eight to ten hours per week over twelve weeks. It is free with a verified certificate available for purchase. Below are a few of the aforementioned sparkling reviews: Machine Learning A-ZTM on Udemy is an impressively detailed offering that provides instruction in both Python and R, which is rare and can’t be said for any of the other top courses. It has a 4.5-star weighted average rating over 8,119 reviews, which makes it the most reviewed course of the ones considered. It covers the entire machine learning workflow and an almost ridiculous (in a good way) number of algorithms through 40.5 hours of on-demand video. 
The course takes a more applied approach and is lighter math-wise than the above two courses. Each section starts with an “intuition” video from Eremenko that summarizes the underlying theory of the concept being taught. de Ponteves then walks through implementation with separate videos for both Python and R. As a “bonus,” the course includes Python and R code templates for students to download and use on their own projects. There are quizzes and homework challenges, though these aren’t the strong points of the course. Eremenko and the SuperDataScience team are revered for their ability to “make the complex simple.” Also, the prerequisites listed are “just some high school mathematics,” so this course might be a better option for those daunted by the Stanford and Columbia offerings. A few prominent reviewers noted the following: Our #1 pick had a weighted average rating of 4.7 out of 5 stars over 422 reviews. Let’s look at the other alternatives, sorted by descending rating. A reminder that deep learning-only courses are not included in this guide — you can find those here. The Analytics Edge (Massachusetts Institute of Technology/edX): More focused on analytics in general, though it does cover several machine learning topics. Uses R. Strong narrative that leverages familiar real-world examples. Challenging. Ten to fifteen hours per week over twelve weeks. Free with a verified certificate available for purchase. It has a 4.9-star weighted average rating over 214 reviews. Python for Data Science and Machine Learning Bootcamp (Jose Portilla/Udemy): Has large chunks of machine learning content, but covers the whole data science process. More of a very detailed intro to Python. Amazing course, though not ideal for the scope of this guide. 21.5 hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.6-star weighted average rating over 3316 reviews. Data Science and Machine Learning Bootcamp with R (Jose Portilla/Udemy): The comments for Portilla’s above course apply here as well, except for R. 17.5 hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.6-star weighted average rating over 1317 reviews. Machine Learning Series (Lazy Programmer Inc./Udemy): Taught by a data scientist/big data engineer/full stack software engineer with an impressive resume, Lazy Programmer currently has a series of 16 machine learning-focused courses on Udemy. In total, the courses have 5000+ ratings and almost all of them have 4.6 stars. A useful course ordering is provided in each individual course’s description. Uses Python. Cost varies depending on Udemy discounts, which are frequent. Machine Learning (Georgia Tech/Udacity): A compilation of what was three separate courses: Supervised, Unsupervised and Reinforcement Learning. Part of Udacity’s Machine Learning Engineer Nanodegree and Georgia Tech’s Online Master’s Degree (OMS). Bite-sized videos, as is Udacity’s style. Friendly professors. Estimated timeline of four months. Free. It has a 4.56-star weighted average rating over 9 reviews. Implementing Predictive Analytics with Spark in Azure HDInsight (Microsoft/edX): Introduces the core concepts of machine learning and a variety of algorithms. Leverages several big data-friendly tools, including Apache Spark, Scala, and Hadoop. Uses both Python and R. Four hours per week over six weeks. Free with a verified certificate available for purchase. It has a 4.5-star weighted average rating over 6 reviews. 
Data Science and Machine Learning with Python — Hands On! (Frank Kane/Udemy): Uses Python. Kane has nine years of experience at Amazon and IMDb. Nine hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.5-star weighted average rating over 4139 reviews. Scala and Spark for Big Data and Machine Learning (Jose Portilla/Udemy): “Big data” focus, specifically on implementation in Scala and Spark. Ten hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.5-star weighted average rating over 607 reviews. Machine Learning Engineer Nanodegree (Udacity): Udacity’s flagship Machine Learning program, which features a best-in-class project review system and career support. The program is a compilation of several individual Udacity courses, which are free. Co-created by Kaggle. Estimated timeline of six months. Currently costs $199 USD per month with a 50% tuition refund available for those who graduate within 12 months. It has a 4.5-star weighted average rating over 2 reviews. Learning From Data (Introductory Machine Learning) (California Institute of Technology/edX): Enrollment is currently closed on edX, but is also available via CalTech’s independent platform (see below). It has a 4.49-star weighted average rating over 42 reviews. Learning From Data (Introductory Machine Learning) (Yaser Abu-Mostafa/California Institute of Technology): “A real Caltech course, not a watered-down version.” Reviews note it is excellent for understanding machine learning theory. The professor, Yaser Abu-Mostafa, is popular among students and also wrote the textbook upon which this course is based. Videos are taped lectures (with lectures slides picture-in-picture) uploaded to YouTube. Homework assignments are .pdf files. The course experience for online students isn’t as polished as the top three recommendations. It has a 4.43-star weighted average rating over 7 reviews. Mining Massive Datasets (Stanford University): Machine learning with a focus on “big data.” Introduces modern distributed file systems and MapReduce. Ten hours per week over seven weeks. Free. It has a 4.4-star weighted average rating over 30 reviews. AWS Machine Learning: A Complete Guide With Python (Chandra Lingam/Udemy): A unique focus on cloud-based machine learning and specifically Amazon Web Services. Uses Python. Nine hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.4-star weighted average rating over 62 reviews. Introduction to Machine Learning & Face Detection in Python (Holczer Balazs/Udemy): Uses Python. Eight hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.4-star weighted average rating over 162 reviews. StatLearning: Statistical Learning (Stanford University): Based on the excellent textbook, “An Introduction to Statistical Learning, with Applications in R” and taught by the professors who wrote it. Reviewers note that the MOOC isn’t as good as the book, citing “thin” exercises and mediocre videos. Five hours per week over nine weeks. Free. It has a 4.35-star weighted average rating over 84 reviews. Machine Learning Specialization (University of Washington/Coursera): Great courses, but last two classes (including the capstone project) were canceled. Reviewers note that this series is more digestable (read: easier for those without strong technical backgrounds) than other top machine learning courses (e.g. Stanford’s or Caltech’s). 
Be aware that the series is incomplete with recommender systems, deep learning, and a summary missing. Free and paid options available. It has a 4.31-star weighted average rating over 80 reviews. From 0 to 1: Machine Learning, NLP & Python-Cut to the Chase (Loony Corn/Udemy): “A down-to-earth, shy but confident take on machine learning techniques.” Taught by four-person team with decades of industry experience together. Uses Python. Cost varies depending on Udemy discounts, which are frequent. It has a 4.2-star weighted average rating over 494 reviews. Principles of Machine Learning (Microsoft/edX): Uses R, Python, and Microsoft Azure Machine Learning. Part of the Microsoft Professional Program Certificate in Data Science. Three to four hours per week over six weeks. Free with a verified certificate available for purchase. It has a 4.09-star weighted average rating over 11 reviews. Big Data: Statistical Inference and Machine Learning (Queensland University of Technology/FutureLearn): A nice, brief exploratory machine learning course with a focus on big data. Covers a few tools like R, H2O Flow, and WEKA. Only three weeks in duration at a recommended two hours per week, but one reviewer noted that six hours per week would be more appropriate. Free and paid options available. It has a 4-star weighted average rating over 4 reviews. Genomic Data Science and Clustering (Bioinformatics V) (University of California, San Diego/Coursera): For those interested in the intersection of computer science and biology and how it represents an important frontier in modern science. Focuses on clustering and dimensionality reduction. Part of UCSD’s Bioinformatics Specialization. Free and paid options available. It has a 4-star weighted average rating over 3 reviews. Intro to Machine Learning (Udacity): Prioritizes topic breadth and practical tools (in Python) over depth and theory. The instructors, Sebastian Thrun and Katie Malone, make this class so fun. Consists of bite-sized videos and quizzes followed by a mini-project for each lesson. Currently part of Udacity’s Data Analyst Nanodegree. Estimated timeline of ten weeks. Free. It has a 3.95-star weighted average rating over 19 reviews. Machine Learning for Data Analysis (Wesleyan University/Coursera): A brief intro machine learning and a few select algorithms. Covers decision trees, random forests, lasso regression, and k-means clustering. Part of Wesleyan’s Data Analysis and Interpretation Specialization. Estimated timeline of four weeks. Free and paid options available. It has a 3.6-star weighted average rating over 5 reviews. Programming with Python for Data Science (Microsoft/edX): Produced by Microsoft in partnership with Coding Dojo. Uses Python. Eight hours per week over six weeks. Free and paid options available. It has a 3.46-star weighted average rating over 37 reviews. Machine Learning for Trading (Georgia Tech/Udacity): Focuses on applying probabilistic machine learning approaches to trading decisions. Uses Python. Part of Udacity’s Machine Learning Engineer Nanodegree and Georgia Tech’s Online Master’s Degree (OMS). Estimated timeline of four months. Free. It has a 3.29-star weighted average rating over 14 reviews. Practical Machine Learning (Johns Hopkins University/Coursera): A brief, practical introduction to a number of machine learning algorithms. Several one/two-star reviews expressing a variety of concerns. Part of JHU’s Data Science Specialization. Four to nine hours per week over four weeks. Free and paid options available. 
It has a 3.11-star weighted average rating over 37 reviews. Machine Learning for Data Science and Analytics (Columbia University/edX): Introduces a wide range of machine learning topics. Some passionate negative reviews with concerns including content choices, a lack of programming assignments, and uninspiring presentation. Seven to ten hours per week over five weeks. Free with a verified certificate available for purchase. It has a 2.74-star weighted average rating over 36 reviews. Recommender Systems Specialization (University of Minnesota/Coursera): Strong focus one specific type of machine learning — recommender systems. A four course specialization plus a capstone project, which is a case study. Taught using LensKit (an open-source toolkit for recommender systems). Free and paid options available. It has a 2-star weighted average rating over 2 reviews. Machine Learning With Big Data (University of California, San Diego/Coursera): Terrible reviews that highlight poor instruction and evaluation. Some noted it took them mere hours to complete the whole course. Part of UCSD’s Big Data Specialization. Free and paid options available. It has a 1.86-star weighted average rating over 14 reviews. Practical Predictive Analytics: Models and Methods (University of Washington/Coursera): A brief intro to core machine learning concepts. One reviewer noted that there was a lack of quizzes and that the assignments were not challenging. Part of UW’s Data Science at Scale Specialization. Six to eight hours per week over four weeks. Free and paid options available. It has a 1.75-star weighted average rating over 4 reviews. The following courses had one or no reviews as of May 2017. Machine Learning for Musicians and Artists (Goldsmiths, University of London/Kadenze): Unique. Students learn algorithms, software tools, and machine learning best practices to make sense of human gesture, musical audio, and other real-time data. Seven sessions in length. Audit (free) and premium ($10 USD per month) options available. It has one 5-star review. Applied Machine Learning in Python (University of Michigan/Coursera): Taught using Python and the scikit learn toolkit. Part of the Applied Data Science with Python Specialization. Scheduled to start May 29th. Free and paid options available. Applied Machine Learning (Microsoft/edX): Taught using various tools, including Python, R, and Microsoft Azure Machine Learning (note: Microsoft produces the course). Includes hands-on labs to reinforce the lecture content. Three to four hours per week over six weeks. Free with a verified certificate available for purchase. Machine Learning with Python (Big Data University): Taught using Python. Targeted towards beginners. Estimated completion time of four hours. Big Data University is affiliated with IBM. Free. Machine Learning with Apache SystemML (Big Data University): Taught using Apache SystemML, which is a declarative style language designed for large-scale machine learning. Estimated completion time of eight hours. Big Data University is affiliated with IBM. Free. Machine Learning for Data Science (University of California, San Diego/edX): Doesn’t launch until January 2018. Programming examples and assignments are in Python, using Jupyter notebooks. Eight hours per week over ten weeks. Free with a verified certificate available for purchase. Introduction to Analytics Modeling (Georgia Tech/edX): The course advertises R as its primary programming tool. Five to ten hours per week over ten weeks. 
Free with a verified certificate available for purchase. Predictive Analytics: Gaining Insights from Big Data (Queensland University of Technology/FutureLearn): Brief overview of a few algorithms. Uses Hewlett Packard Enterprise’s Vertica Analytics platform as an applied tool. Start date to be announced. Two hours per week over four weeks. Free with a Certificate of Achievement available for purchase. Introducción al Machine Learning (Universitas Telefónica/Miríada X): Taught in Spanish. An introduction to machine learning that covers supervised and unsupervised learning. A total of twenty estimated hours over four weeks. Machine Learning Path Step (Dataquest): Taught in Python using Dataquest’s interactive in-browser platform. Multiple guided projects and a “plus” project where you build your own machine learning system using your own data. Subscription required. The following six courses are offered by DataCamp. DataCamp’s hybrid teaching style leverages video and text-based instruction with lots of examples through an in-browser code editor. A subscription is required for full access to each course. Introduction to Machine Learning (DataCamp): Covers classification, regression, and clustering algorithms. Uses R. Fifteen videos and 81 exercises with an estimated timeline of six hours. Supervised Learning with scikit-learn (DataCamp): Uses Python and scikit-learn. Covers classification and regression algorithms. Seventeen videos and 54 exercises with an estimated timeline of four hours. Unsupervised Learning in R (DataCamp): Provides a basic introduction to clustering and dimensionality reduction in R. Sixteen videos and 49 exercises with an estimated timeline of four hours. Machine Learning Toolbox (DataCamp): Teaches the “big ideas” in machine learning. Uses R. 24 videos and 88 exercises with an estimated timeline of four hours. Machine Learning with the Experts: School Budgets (DataCamp): A case study from a machine learning competition on DrivenData. Involves building a model to automatically classify items in a school’s budget. DataCamp’s “Supervised Learning with scikit-learn” is a prerequisite. Fifteen videos and 51 exercises with an estimated timeline of four hours. Unsupervised Learning in Python (DataCamp): Covers a variety of unsupervised learning algorithms using Python, scikit-learn, and scipy. The course ends with students building a recommender system to recommend popular musical artists. Thirteen videos and 52 exercises with an estimated timeline of four hours. Machine Learning (Tom Mitchell/Carnegie Mellon University): Carnegie Mellon’s graduate introductory machine learning course. A prerequisite to their second graduate level course, “Statistical Machine Learning.” Taped university lectures with practice problems, homework assignments, and a midterm (all with solutions) posted online. A 2011 version of the course also exists. CMU is one of the best graduate schools for studying machine learning and has a whole department dedicated to ML. Free. Statistical Machine Learning (Larry Wasserman/Carnegie Mellon University): Likely the most advanced course in this guide. A follow-up to Carnegie Mellon’s Machine Learning course. Taped university lectures with practice problems, homework assignments, and a midterm (all with solutions) posted online. Free. Undergraduate Machine Learning (Nando de Freitas/University of British Columbia): An undergraduate machine learning course. Lectures are filmed and put on YouTube with the slides posted on the course website. 
The course assignments are posted as well (no solutions, though). de Freitas is now a full-time professor at the University of Oxford and receives praise for his teaching abilities in various forums. Graduate version available (see below). Machine Learning (Nando de Freitas/University of British Columbia): A graduate machine learning course. The comments in de Freitas’ undergraduate course (above) apply here as well. This is the fifth of a six-piece series that covers the best online courses for launching yourself into the data science field. We covered programming in the first article, statistics and probability in the second article, intros to data science in the third article, and data visualization in the fourth. The final piece will be a summary of those articles, plus the best online courses for other key topics such as data wrangling, databases, and even software engineering. If you’re looking for a complete list of Data Science online courses, you can find them on Class Central’s Data Science and Big Data subject page. If you enjoyed reading this, check out some of Class Central’s other pieces: If you have suggestions for courses I missed, let me know in the responses! If you found this helpful, click the 💚 so more people will see it here on Medium. This is a condensed version of my original article published on Class Central, where I’ve included detailed course syllabi. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Curriculum Lead, Projects @ DataCamp. I created my own data science master’s program. Our community publishes stories worth reading on development, design, and data science. " Vishal Maini,32K,10,https://medium.com/machine-learning-for-humans/why-machine-learning-matters-6164faf1df12?source=tag_archive---------3----------------,A Beginner’s Guide to AI/ML 🤖👶 – Machine Learning for Humans – Medium,"Part 1: Why Machine Learning Matters. The big picture of artificial intelligence and machine learning — past, present, and future. Part 2.1: Supervised Learning. Learning with an answer key. Introducing linear regression, loss functions, overfitting, and gradient descent. Part 2.2: Supervised Learning II. Two methods of classification: logistic regression and SVMs. Part 2.3: Supervised Learning III. Non-parametric learners: k-nearest neighbors, decision trees, random forests. Introducing cross-validation, hyperparameter tuning, and ensemble models. Part 3: Unsupervised Learning. Clustering: k-means, hierarchical. Dimensionality reduction: principal components analysis (PCA), singular value decomposition (SVD). Part 4: Neural Networks & Deep Learning. Why, where, and how deep learning works. Drawing inspiration from the brain. Convolutional neural networks (CNNs), recurrent neural networks (RNNs). Real-world applications. Part 5: Reinforcement Learning. Exploration and exploitation. Markov decision processes. Q-learning, policy learning, and deep reinforcement learning. The value learning problem. Appendix: The Best Machine Learning Resources. A curated list of resources for creating your machine learning curriculum. This guide is intended to be accessible to anyone. Basic concepts in probability, statistics, programming, linear algebra, and calculus will be discussed, but it isn’t necessary to have prior knowledge of them to gain value from this series. Artificial intelligence will shape our future more powerfully than any other innovation this century. 
Anyone who does not understand it will soon find themselves feeling left behind, waking up in a world full of technology that feels more and more like magic. The rate of acceleration is already astounding. After a couple of AI winters and periods of false hope over the past four decades, rapid advances in data storage and computer processing power have dramatically changed the game in recent years. In 2015, Google trained a conversational agent (AI) that could not only convincingly interact with humans as a tech support helpdesk, but also discuss morality, express opinions, and answer general facts-based questions. The same year, DeepMind developed an agent that surpassed human-level performance at 49 Atari games, receiving only the pixels and game score as inputs. Soon after, in 2016, DeepMind obsoleted their own achievement by releasing a new state-of-the-art gameplay method called A3C. Meanwhile, AlphaGo defeated one of the best human players at Go — an extraordinary achievement in a game dominated by humans for two decades after machines first conquered chess. Many masters could not fathom how it would be possible for a machine to grasp the full nuance and complexity of this ancient Chinese war strategy game, with its 10^170 possible board positions (there are only 10^80 atoms in the universe). In March 2017, OpenAI created agents that invented their own language to cooperate and more effectively achieve their goal. Soon after, Facebook reportedly succeeded in training agents to negotiate and even lie. Just a few days ago (as of this writing), on August 11, 2017, OpenAI reached yet another incredible milestone by defeating the world’s top professionals in 1v1 matches of the online multiplayer game Dota 2. Much of our day-to-day technology is powered by artificial intelligence. Point your camera at the menu during your next trip to Taiwan and the restaurant’s selections will magically appear in English via the Google Translate app. Today AI is used to design evidence-based treatment plans for cancer patients, instantly analyze results from medical tests to escalate to the appropriate specialist immediately, and conduct scientific research for drug discovery. In everyday life, it’s increasingly commonplace to discover machines in roles traditionally occupied by humans. Really, don’t be surprised if a little housekeeping delivery bot shows up instead of a human next time you call the hotel desk to send up some toothpaste. In this series, we’ll explore the core machine learning concepts behind these technologies. By the end, you should be able to describe how they work at a conceptual level and be equipped with the tools to start building similar applications yourself. Artificial intelligence is the study of agents that perceive the world around them, form plans, and make decisions to achieve their goals. Its foundations include mathematics, logic, philosophy, probability, linguistics, neuroscience, and decision theory. Many fields fall under the umbrella of AI, such as computer vision, robotics, machine learning, and natural language processing. Machine learning is a subfield of artificial intelligence. Its goal is to enable computers to learn on their own. A machine’s learning algorithm enables it to identify patterns in observed data, build models that explain the world, and predict things without having explicit pre-programmed rules and models. The technologies discussed above are examples of artificial narrow intelligence (ANI), which can effectively perform a narrowly defined task. 
Meanwhile, we’re continuing to make foundational advances towards human-level artificial general intelligence (AGI), also known as strong AI. The definition of an AGI is an artificial intelligence that can successfully perform any intellectual task that a human being can, including learning, planning and decision-making under uncertainty, communicating in natural language, making jokes, manipulating people, trading stocks, or... reprogramming itself. And this last one is a big deal. Once we create an AI that can improve itself, it will unlock a cycle of recursive self-improvement that could lead to an intelligence explosion over some unknown time period, ranging from many decades to a single day. You may have heard this point referred to as the singularity. The term is borrowed from the gravitational singularity that occurs at the center of a black hole, an infinitely dense one-dimensional point where the laws of physics as we understand them start to break down. A recent report by the Future of Humanity Institute surveyed a panel of AI researchers on timelines for AGI, and found that “researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years” (Grace et al, 2017). We’ve personally spoken with a number of sane and reasonable AI practitioners who predict much longer timelines (the upper limit being “never”), and others whose timelines are alarmingly short — as little as a few years. The advent of greater-than-human-level artificial superintelligence (ASI) could be one of the best or worst things to happen to our species. It carries with it the immense challenge of specifying what AIs will want in a way that is friendly to humans. While it’s impossible to say what the future holds, one thing is certain: 2017 is a good time to start understanding how machines think. To go beyond the abstractions of a philosopher in an armchair and intelligently shape our roadmaps and policies with respect to AI, we must engage with the details of how machines see the world — what they “want”, their potential biases and failure modes, their temperamental quirks — just as we study psychology and neuroscience to understand how humans learn, decide, act, and feel. Machine learning is at the core of our journey towards artificial general intelligence, and in the meantime, it will change every industry and have a massive impact on our day-to-day lives. That’s why we believe it’s worth understanding machine learning, at least at a conceptual level — and we designed this series to be the best place to start. You don’t necessarily need to read the series cover-to-cover to get value out of it. Here are three suggestions on how to approach it, depending on your interests and how much time you have: Vishal most recently led growth at Upstart, a lending platform that utilizes machine learning to price credit, automate the borrowing process, and acquire users. He spends his time thinking about startups, applied cognitive science, moral philosophy, and the ethics of artificial intelligence. Samer is a Master’s student in Computer Science and Engineering at UCSD and co-founder of Conigo Labs. Prior to grad school, he founded TableScribe, a business intelligence tool for SMBs, and spent two years advising Fortune 100 companies at McKinsey. Samer previously studied Computer Science and Ethics, Politics, and Economics at Yale. 
Most of this series was written during a 10-day trip to the United Kingdom in a frantic blur of trains, planes, cafes, pubs and wherever else we could find a dry place to sit. Our aim was to solidify our own understanding of artificial intelligence, machine learning, and how the methods therein fit together — and hopefully create something worth sharing in the process. And now, without further ado, let’s dive into machine learning with Part 2.1: Supervised Learning! More from Machine Learning for Humans 🤖👶 A special thanks to Jonathan Eng, Edoardo Conti, Grant Schneider, Sunny Kumar, Stephanie He, Tarun Wadhwa, and Sachin Maini (series editor) for their significant contributions and feedback. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Research comms @DeepMindAI. Previously @Upstart, @Yale, @TrueVenturesTEC. Demystifying artificial intelligence & machine learning. Discussions on safe and intentional application of AI for positive social impact. " Tim Anglade,7K,23,https://medium.com/@timanglade/how-hbos-silicon-valley-built-not-hotdog-with-mobile-tensorflow-keras-react-native-ef03260747f3?source=tag_archive---------4----------------,"How HBO’s Silicon Valley built “Not Hotdog” with mobile TensorFlow, Keras & React Native","The HBO show Silicon Valley released a real AI app that identifies hotdogs — and not hotdogs — like the one shown on season 4’s 4th episode (the app is now available on Android as well as iOS!) To achieve this, we designed a bespoke neural architecture that runs directly on your phone, and trained it with Tensorflow, Keras & Nvidia GPUs. While the use-case is farcical, the app is an approachable example of both deep learning, and edge computing. All AI work is powered 100% by the user’s device, and images are processed without ever leaving their phone. This provides users with a snappier experience (no round trip to the cloud), offline availability, and better privacy. This also allows us to run the app at a cost of $0, even under the load of a million users, providing significant savings compared to traditional cloud-based AI approaches. The app was developed in-house by the show, by a single developer, running on a single laptop & attached GPU, using hand-curated data. In that respect, it may provide a sense of what can be achieved today, with a limited amount of time & resources, by non-technical companies, individual developers, and hobbyists alike. In that spirit, this article attempts to give a detailed overview of steps involved to help others build their own apps. If you haven’t seen the show or tried the app (you should!), the app lets you snap a picture and then tells you whether it thinks that image is of a hotdog or not. It’s a straightforward use-case, that pays homage to recent AI research and applications, in particular ImageNet. While we’ve probably dedicated more engineering resources to recognizing hotdogs than anyone else, the app still fails in horrible and/or subtle ways. Conversely, it’s also sometimes able to recognize hotdogs in complex situations... According to Engadget, “It’s incredible. I’ve had more success identifying food with the app in 20 minutes than I have had tagging and identifying songs with Shazam in the past two years.” Have you ever found yourself reading Hacker News, thinking “they raised a 10M series A for that? 
I could build it in one weekend!” This app probably feels a lot like that, and the initial prototype was indeed built in a single weekend using Google Cloud Platform’s Vision API, and React Native. But the final app we ended up releasing on the app store required months of additional (part-time) work, to deliver meaningful improvements that would be difficult for an outsider to appreciate. We spent weeks optimizing overall accuracy, training time, inference time, iterating on our setup & tooling so we could have a faster development iterations, and spent a whole weekend optimizing the user experience around iOS & Android permissions (don’t even get me started on that one). All too often technical blog posts or academic papers skip over this part, preferring to present the final chosen solution. In the interest of helping others learn from our mistake & choices, we will present an abridged view of the approaches that didn’t work for us, before we describe the final architecture we ended up shipping in the next section. We chose React Native to build the prototype as it would give us an easy sandbox to experiment with, and would help us quickly support many devices. The experience ended up being a good one and we kept React Native for the remainder of the project: it didn’t always make things easy, and the design for the app was purposefully limited, but in the end React Native got the job done. The other main component we used for the prototype — Google Cloud’s Vision API was quickly abandoned. There were 3 main factors: For these reasons, we started experimenting with what’s trendily called “edge computing”, which for our purposes meant that after training our neural network on our laptop, we would export it and embed it directly into our mobile app, so that the neural network execution phase (or inference) would run directly inside the user’s phone. Through a chance encounter with Pete Warden of the TensorFlow team, we had become aware of its ability to run TensorFlow directly embedded on an iOS device, and started exploring that path. After React Native, TensorFlow became the second fixed part of our stack. It only took a day of work to integrate TensorFlow’s Objective-C++ camera example in our React Native shell. It took slightly longer to use their transfer learning script, which helps you retrain the Inception architecture to deal with a more specific image problem. Inception is the name of a family of neural architectures built by Google to deal with image recognition problems. Inception is available “pre-trained” which means the training phase has been completed and the weights are set. Most often for image recognition networks, they have been trained on ImageNet, a dataset containing over 20,000 different types of objects (hotdogs are one of them). However, much like Google Cloud’s Vision API, ImageNet training rewards breadth as much as depth here, and out-of-the-box accuracy on a single one of the 20,000+ categories can be lacking. As such, retraining (also called “transfer learning”) aims to take a full-trained neural net, and retrain it to perform better on the specific problem you’d like to handle. This usually involves some degree of “forgetting”, either by excising entire layers from the stack, or by slowly erasing the network’s ability to distinguish a type of object (e.g. chairs) in favor of better accuracy at recognizing the one you care about (i.e. hotdogs). 
While the network (Inception in this case) may have been trained on the 14M images contained in ImageNet, we were able to retrain it on just a few thousand hotdog images to get drastically enhanced hotdog recognition. The big advantage of transfer learning is that you get better results much faster, and with less data, than if you train from scratch. A full training might take months on multiple GPUs and require millions of images, while retraining can conceivably be done in hours on a laptop with a couple thousand images. One of the biggest challenges we encountered was understanding exactly what should count as a hotdog and what should not. Defining what a “hotdog” is ends up being surprisingly difficult (do cut up sausages count, and if so, which kinds?) and subject to cultural interpretation. Similarly, the “open world” nature of our problem meant we had to deal with an almost infinite number of inputs. While certain computer-vision problems have relatively limited inputs (say, x-rays of bolts with or without a mechanical defect), we had to prepare the app to be fed selfies, nature shots and any number of foods. Suffice to say, this approach was promising, and did lead to some improved results; however, it had to be abandoned for a couple of reasons. First, the nature of our problem meant a strong imbalance in training data: there are many more examples of things that are not hotdogs than things that are hotdogs. In practice this means that if you train your algorithm on 3 hotdog images and 97 non-hotdog images, and it recognizes 0% of the former but 100% of the latter, it will still score 97% accuracy by default! This was not straightforward to solve out of the box using TensorFlow’s retrain tool, and basically necessitated setting up a deep learning model from scratch, importing weights, and training in a more controlled manner. At this point we decided to bite the bullet and get something started with Keras, a deep learning library that provides nicer, easier-to-use abstractions on top of TensorFlow, including pretty awesome training tools, and a class_weights option which is ideal for handling the sort of dataset imbalance we were facing (a short sketch of this approach appears below). We used that opportunity to try other popular neural architectures like VGG, but one problem remained. None of them could comfortably fit on an iPhone. They consumed too much memory, which led to app crashes, and would sometimes take up to 10 seconds to compute, which was not ideal from a UX standpoint. Many things were attempted to mitigate that, but in the end these architectures were just too big to run efficiently on mobile. To give you a sense of the timeline, this was roughly the mid-way point of the project. By that time, the UI was 90%+ done and very little of it was going to change. But in hindsight, the neural net was at best 20% done. We had a good sense of challenges & a good dataset, but 0 lines of the final neural architecture had been written, none of our neural code could reliably run on mobile, and even our accuracy was going to improve drastically in the weeks to come. The problem directly ahead of us was simple: if Inception and VGG were too big, was there a simpler, pre-trained neural network we could retrain? At the suggestion of the always excellent Jeremy P. Howard (where has that guy been all our life?), we explored Xception, Enet and SqueezeNet. 
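To make the retraining and class-weight idea concrete, here is a minimal Keras sketch of the general approach described above (not the app's actual training code). The directory layout, image size, layer choices and the 49:1 weight are illustrative assumptions, and Inception V3 simply stands in for whichever pre-trained base network is being retrained.

```python
# Minimal transfer-learning sketch (illustrative, not the app's real training code).
# Assumes a hypothetical data/train/ folder with 'not_hotdog' and 'hotdog' subfolders.
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense
from keras.models import Model
from keras.preprocessing.image import ImageDataGenerator

# 1. Load a network pre-trained on ImageNet, without its 1000-class top layer.
base = InceptionV3(weights='imagenet', include_top=False, pooling='avg')
for layer in base.layers:
    layer.trainable = False          # freeze the pre-trained weights initially

# 2. Add a single binary "hotdog / not hotdog" output on top.
output = Dense(1, activation='sigmoid')(base.output)
model = Model(inputs=base.input, outputs=output)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# 3. Stream images from disk and retrain only the new top layer.
train_gen = ImageDataGenerator(rescale=1. / 255).flow_from_directory(
    'data/train', target_size=(299, 299), batch_size=128, class_mode='binary')

# 4. Counter the heavy class imbalance: mistakes on the rare 'hotdog' class
#    (assumed here to be label 1) cost 49x more than mistakes on 'not hotdog'.
model.fit_generator(train_gen, steps_per_epoch=100, epochs=10,
                    class_weight={0: 1.0, 1: 49.0})
```

The same pattern carries over unchanged when the base network is swapped for a smaller architecture, which is where the story goes next.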
We quickly settled on SqueezeNet due to its explicit positioning as a solution for embedded deep learning, and the availability of a pre-trained Keras model on GitHub (yay open-source). So how big of a difference does this make? An architecture like VGG uses about 138 million parameters (essentially the number of numbers necessary to model the neurons and values between them). Inception is already a massive improvement, requiring only 23 million parameters. SqueezeNet, in comparison only requires 1.25 million. This has two advantages: There are tradeoffs of course: During this phase, we started experimenting with tuning the neural network architecture. In particular, we started using Batch Normalization and trying different activation functions. After adding Batch Normalization and ELU to SqueezeNet, we were able to train neural network that achieve 90%+ accuracy when training from scratch, however, they were relatively brittle meaning the same network would overfit in some cases, or underfit in others when confronted to real-life testing. Even adding more examples to the dataset and playing with data augmentation failed to deliver a network that met expectations. So while this phase was promising, and for the first time gave us a functioning app that could work entirely on an iPhone, in less than a second, we eventually moved to our 4th & final architecture. Our final architecture was spurred in large part by the publication on April 17 of Google’s MobileNets paper, promising a new neural architecture with Inception-like accuracy on simple problems like ours, with only 4M or so parameters. This meant it sat in an interesting sweet spot between a SqueezeNet that had maybe been overly simplistic for our purposes, and the possibly overwrought elephant-trying-to-squeeze-in-a-tutu of using Inception or VGG on Mobile. The paper introduced some capacity to tune the size & complexity of network specifically to trade memory/CPU consumption against accuracy, which was very much top of mind for us at the time. With less than a month to go before the app had to launch we endeavored to reproduce the paper’s results. This was entirely anticlimactic as within a day of the paper being published a Keras implementation was already offered publicly on GitHub by Refik Can Malli, a student at Istanbul Technical University, whose work we had already benefitted from when we took inspiration from his excellent Keras SqueezeNet implementation. The depth & openness of the deep learning community, and the presence of talented minds like R.C. is what makes deep learning viable for applications today — but they also make working in this field more thrilling than any tech trend we’ve been involved with. Our final architecture ended up making significant departures from the MobileNets architecture or from convention, in particular: So how does this stack work exactly? Deep Learning often gets a bad rap for being a “black box”, and while it’s true many components of it can be mysterious, the networks we use often leak information about how some of their magic work. We can look at the layers of this stack and how they activate on specific input images, giving us a sense of each layer’s ability to recognize sausage, buns, or other particularly salient hotdog features. Data quality was of the utmost importance. A neural network can only be as good as the data that trained it, and improving training set quality was probably one of the top 3 things we spent time on during this project. 
The key things we did to improve this were: The final composition of our dataset was 150k images, of which only 3k were hotdogs: there are only so many hotdogs you can look at, but there are many not hotdogs to look at. The 49:1 imbalance was dealt with by saying a Keras class weight of 49:1 in favor of hotdogs. Of the remaining 147k images, most were of food, with just 3k photos of non-food items, to help the network generalize a bit more and not get tricked into seeing a hotdog if presented with an image of a human in a red outfit. Our data augmentation rules were as follows: These numbers were derived intuitively, based on experiments and our understanding of the real-life usage of our app, as opposed to careful experimentation. The final key to our data pipeline was using Patrick Rodriguez’s multiprocess image data generator for Keras. While Keras does have a built-in multi-threaded and multiprocess implementation, we found Patrick’s library to be consistently faster in our experiments, for reasons we did not have time to investigate. This library cut our training time to a third of what it used to be. The network was trained using a 2015 MacBook Pro and attached external GPU (eGPU), specifically an Nvidia GTX 980 Ti (we’d probably buy a 1080 Ti if we were starting today). We were able to train the network on batches of 128 images at a time. The network was trained for a total of 240 epochs, meaning we ran all 150k images through the network 240 times. This took about 80 hours. We trained the network in 3 phases: While learning rates were identified by running the linear experiment recommended by the CLR paper, they seem to intuitively make sense, in that the max for each phase is within a factor of 2 of the previous minimum, which is aligned with the industry standard recommendation of halving your learning rate if your accuracy plateaus during training. In the interest of time we performed some training runs on a Paperspace P5000 instance running Ubuntu. In those cases, we were able to double the batch size, and found that optimal learning rates for each phase were roughly double as well. Even having designed a relatively compact neural architecture, and having trained it to handle situations it may find in a mobile context, we had a lot of work left to make it run properly. Trying to run a top-of-the-line neural net architecture out of the box can quickly burns hundreds megabytes of RAM, which few mobile devices can spare today. Beyond network optimizations, it turns out the way you handle images or even load TensorFlow itself can have a huge impact on how quickly your network runs, how little RAM it uses, and how crash-free the experience will be for your users. This was maybe the most mysterious part of this project. Relatively little information can be found about it, possibly due to the dearth of production deep learning applications running on mobile devices as of today. However, we must commend the Tensorflow team, and particularly Pete Warden, Andrew Harp and Chad Whipkey for the existing documentation and their kindness in answering our inquiries. Instead of using TensorFlow on iOS, we looked at using Apple’s built-in deep learning libraries instead (BNNS, MPSCNN and later on, CoreML). We would have designed the network in Keras, trained it with TensorFlow, exported all the weight values, re-implemented the network with BNNS or MPSCNN (or imported it via CoreML), and loaded the parameters into that new implementation. 
However, the biggest obstacle was that these new Apple libraries are only available on iOS 10+, and we wanted to support older versions of iOS. As iOS 10+ adoption and these frameworks continue to improve, there may not be a case for using TensorFlow on device in the near future. If you think injecting JavaScript into your app on the fly is cool, try injecting neural nets into your app! The last production trick we used was to leverage CodePush and Apple’s relatively permissive terms of service, to live-inject new versions of our neural networks after submission to the app store. While this was mostly done to help us quickly deliver accuracy improvements to our users after release, you could conceivably use this approach to drastically expand or alter the feature set of your app without going through an app store review again. There are a lot of things that didn’t work or we didn’t have time to do, and these are the ideas we’d investigate in the future: Finally, we’d be remiss not to mention the obvious and important influence of User Experience, Developer Experience and built-in biases in developing an AI app. Each probably deserve their own post (or their own book) but here are the very concrete impacts of these 3 things in our experience. UX (User Experience) is arguably more critical at every stage of the development of an AI app than for a traditional application. There are no Deep Learning algorithms that will give you perfect results right now, but there are many situations where the right mix of Deep Learning + UX will lead to results that are indistinguishable from perfect. Proper UX expectations are irreplaceable when it comes to setting developers on the right path to design their neural networks, setting the proper expectations for users when they use the app, and gracefully handling the inevitable AI failures. Building AI apps without a UX-first mindset is like training a neural net without Stochastic Gradient Descent: you will end up stuck in the local minima of the Uncanny Valley on your way to building the perfect AI use-case. DX (Developer Experience) is extremely important as well, because deep learning training time is the new horsing around while waiting for your program to compile. We suggest you heavily favor DX first (hence Keras), as it’s always possible to optimize runtime for later runs (manual GPU parallelization, multi-process data augmentation, TensorFlow pipeline, even re-implementing for caffe2 / pyTorch). Even projects with relatively obtuse APIs & documentation like TensorFlow greatly improve DX by providing a highly-tested, highly-used, well-maintained environment for training & running neural networks. For the same reason, it’s hard to beat both the cost as well as the flexibility of having your own local GPU for development. Being able to look at / edit images locally, edit code with your preferred tool without delays greatly improves the development quality & speed of building AI projects. Most AI apps will hit more critical cultural biases than ours, but as an example, even our straightforward use-case, caught us flat-footed with built-in biases in our initial dataset, that made the app unable to recognize French-style hotdogs, Asian hotdogs, and more oddities we did not have immediate personal experience with. It’s critical to remember that AI do not make “better” decisions than humans — they are infected by the same human biases we fall prey to, via the training sets humans provide. 
Thanks to: Mike Judge, Alec Berg, Clay Tarver, Todd Silverstein, Jonathan Dotan, Lisa Schomas, Amy Solomon, Dorothy Street & Rich Toyon, and all the writers of the show — the app would simply not exist without them.Meaghan, Dana, David, Jay, and everyone at HBO. Scale Venture Partners & GitLab. Rachel Thomas and Jeremy Howard & Fast AI for all that they have taught me, and for kindly reviewing a draft of this post. Check out their free online Deep Learning course, it’s awesome! JP Simard for his help on iOS. And finally, the TensorFlow team & r/MachineLearning for their help & inspiration. ... And thanks to everyone who used & shared the app! It made staring at pictures of hotdogs for months on end totally worth it 😅 From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. A.I., Startups & HBO’s Silicon Valley. Get in touch: timanglade@gmail.com " Sophia Ciocca,53K,9,https://medium.com/s/story/spotifys-discover-weekly-how-machine-learning-finds-your-new-music-19a41ab76efe?source=tag_archive---------5----------------,How Does Spotify Know You So Well? – Member Feature Stories – Medium,"Member Feature Story A software engineer explains the science behind personalized music recommendations Photo by studioEAST/Getty Images Photo by studioEAST/Getty Images This Monday — just like every Monday before it — over 100 million Spotify users found a fresh new playlist waiting for them called Discover Weekly. It’s a custom mixtape of 30 songs they’ve never listened to before but will probably love, and it’s pretty much magic. I’m a huge fan of Spotify, and particularly Discover Weekly. Why? It makes me feel seen. It knows my musical tastes better than any person in my entire life ever has, and I’m consistently delighted by how satisfyingly just right it is every week, with tracks I probably would never have found myself or known I would like. For those of you who live under a soundproof rock, let me introduce you to my virtual best friend: As it turns out, I’m not alone in my obsession with Discover Weekly. The user base goes crazy for it, which has driven Spotify to rethink its focus, and invest more resources into algorithm-based playlists. Ever since Discover Weekly debuted in 2015, I’ve been dying to know how it works (What’s more, I’m a Spotify fangirl, so I sometimes like to pretend that I work there and research their products.) After three weeks of mad Googling, I feel like I’ve finally gotten a glimpse behind the curtain. So how does Spotify do such an amazing job of choosing those 30 songs for each person each week? Let’s zoom out for a second to look at how other music services have tackled music recommendations, and how Spotify’s doing it better. Back in the 2000s, Songza kicked off the online music curation scene using manual curation to create playlists for users. This meant that a team of “music experts” or other human curators would put together playlists that they just thought sounded good, and then users would listen to those playlists. (Later, Beats Music would employ this same strategy.) Manual curation worked alright, but it was based on that specific curator’s choices, and therefore couldn’t take into account each listener’s individual music taste. Like Songza, Pandora was also one of the original players in digital music curation. It employed a slightly more advanced approach, instead manually tagging attributes of songs. 
This meant a group of people listened to music, chose a bunch of descriptive words for each track, and tagged the tracks accordingly. Then, Pandora’s code could simply filter for certain tags to make playlists of similar-sounding music. Around that same time, a music intelligence agency from the MIT Media Lab called The Echo Nest was born, which took a radical, cutting-edge approach to personalized music. The Echo Nest used algorithms to analyze the audio and textual content of music, allowing it to perform music identification, personalized recommendation, playlist creation, and analysis. Finally, taking another approach is Last.fm, which still exists today and uses a process called collaborative filtering to identify music its users might like, but more on that in a moment. So if that’s how other music curation services have handled recommendations, how does Spotify’s magic engine run? How does it seem to nail individual users’ tastes so much more accurately than any of the other services? Spotify doesn’t actually use a single revolutionary recommendation model. Instead, they mix together some of the best strategies used by other services to create their own uniquely powerful discovery engine. To create Discover Weekly, there are three main types of recommendation models that Spotify employs: Let’s dive into how each of these recommendation models work! First, some background: When people hear the words “collaborative filtering,” they generally think of Netflix, as it was one of the first companies to use this method to power a recommendation model, taking users’ star-based movie ratings to inform its understanding of which movies to recommend to other similar users. After Netflix was successful, the use of collaborative filtering spread quickly, and is now often the starting point for anyone trying to make a recommendation model. Unlike Netflix, Spotify doesn’t have a star-based system with which users rate their music. Instead, Spotify’s data is implicit feedback — specifically, the stream counts of the tracks and additional streaming data, such as whether a user saved the track to their own playlist, or visited the artist’s page after listening to a song. But what is collaborative filtering, truly, and how does it work? Here’s a high-level rundown, explained in a quick conversation: What’s going on here? Each of these individuals has track preferences: the one on the left likes tracks P, Q, R, and S, while the one on the right likes tracks Q, R, S, and T. Collaborative filtering then uses that data to say: “Hmmm... You both like three of the same tracks — Q, R, and S — so you are probably similar users. Therefore, you’re each likely to enjoy other tracks that the other person has listened to, that you haven’t heard yet.” Therefore, it suggests that the one on the right check out track P — the only track not mentioned, but that his “similar” counterpart enjoyed — and the one on the left check out track T, for the same reasoning. Simple, right? But how does Spotify actually use that concept in practice to calculate millions of users’ suggested tracks based on millions of other users’ preferences? With matrix math, done with Python libraries! In actuality, this matrix you see here is gigantic. Each row represents one of Spotify’s 140 million users — if you use Spotify, you yourself are a row in this matrix — and each column represents one of the 30 million songs in Spotify’s database. 
Then, the Python library runs this long, complicated matrix factorization formula: When it finishes, we end up with two types of vectors, represented here by X and Y. X is a user vector, representing one single user’s taste, and Y is a song vector, representing one single song’s profile. Now we have 140 million user vectors and 30 million song vectors. The actual content of these vectors is just a bunch of numbers that are essentially meaningless on their own, but are hugely useful when compared. To find out which users’ musical tastes are most similar to mine, collaborative filtering compares my vector with all of the other users’ vectors, ultimately spitting out which users are the closest matches. The same goes for the Y vector, songs: you can compare a single song’s vector with all the others, and find out which songs are most similar to the one in question. Collaborative filtering does a pretty good job, but Spotify knew they could do even better by adding another engine. Enter NLP. The second type of recommendation models that Spotify employs are Natural Language Processing (NLP) models. The source data for these models, as the name suggests, are regular ol’ words: track metadata, news articles, blogs, and other text around the internet. Natural Language Processing, which is the ability of a computer to understand human speech as it is spoken, is a vast field unto itself, often harnessed through sentiment analysis APIs. The exact mechanisms behind NLP are beyond the scope of this article, but here’s what happens on a very high level: Spotify crawls the web constantly looking for blog posts and other written text about music to figure out what people are saying about specific artists and songs — which adjectives and what particular language is frequently used in reference to those artists and songs, and which other artists and songs are also being discussed alongside them. While I don’t know the specifics of how Spotify chooses to then process this scraped data, I can offer some insight based on how the Echo Nest used to work with them. They would bucket Spotify’s data up into what they call “cultural vectors” or “top terms.” Each artist and song had thousands of top terms that changed on the daily. Each term had an associated weight, which correlated to its relative importance — roughly, the probability that someone will describe the music or artist with that term. Then, much like in collaborative filtering, the NLP model uses these terms and weights to create a vector representation of the song that can be used to determine if two pieces of music are similar. Cool, right? First, a question. You might be thinking: First of all, adding a third model further improves the accuracy of the music recommendation service. But this model also serves a secondary purpose: unlike the first two types, raw audio models take new songs into account. Take, for example, a song your singer-songwriter friend has put up on Spotify. Maybe it only has 50 listens, so there are few other listeners to collaboratively filter it against. It also isn’t mentioned anywhere on the internet yet, so NLP models won’t pick it up. Luckily, raw audio models don’t discriminate between new tracks and popular tracks, so with their help, your friend’s song could end up in a Discover Weekly playlist alongside popular songs! But how can we analyze raw audio data, which seems so abstract? With convolutional neural networks! Convolutional neural networks are the same technology used in facial recognition software. 
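To ground the vector math described above, here is a toy numpy sketch of the core idea: factorize a small implicit-feedback matrix of play counts into user vectors (X) and song vectors (Y), then compare vectors with cosine similarity. The tiny matrix, the rank of 2, and the plain gradient-descent loop are illustrative assumptions; Spotify's production system factorizes a roughly 140-million-by-30-million matrix with far more sophisticated methods.

```python
# Toy collaborative-filtering sketch (illustrative numbers, not Spotify's pipeline).
import numpy as np

# Rows are users, columns are songs, entries are play counts (0 = never streamed).
plays = np.array([[5., 3., 0., 1., 0.],
                  [4., 0., 0., 1., 0.],
                  [1., 1., 0., 5., 4.],
                  [0., 1., 5., 4., 0.]])
observed = plays > 0

n_users, n_songs, k = plays.shape[0], plays.shape[1], 2
rng = np.random.RandomState(0)
X = rng.normal(scale=0.1, size=(n_users, k))   # one vector per user
Y = rng.normal(scale=0.1, size=(n_songs, k))   # one vector per song

# Gradient descent on the squared error over observed entries (with L2 regularization).
for _ in range(5000):
    err = (plays - X @ Y.T) * observed
    X += 0.01 * (err @ Y - 0.1 * X)
    Y += 0.01 * (err.T @ X - 0.1 * Y)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Which user's taste is closest to user 0, and how much might user 0 like song 2?
similarities = [cosine(X[0], X[u]) for u in range(1, n_users)]
print("most similar user to user 0:", 1 + int(np.argmax(similarities)))
print("predicted affinity of user 0 for unheard song 2:", float(X[0] @ Y[2]))
```

With those vectors in hand, back to the convolutional networks, which tackle the raw audio itself.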
In Spotify’s case, they’ve been modified for use on audio data instead of pixels. Here’s an example of a neural network architecture: This particular neural network has four convolutional layers, seen as the thick bars on the left, and three dense layers, seen as the more narrow bars on the right. The inputs are time-frequency representations of audio frames, which are then concatenated, or linked together, to form the spectrogram. The audio frames go through these convolutional layers, and after passing through the last one, you can see a “global temporal pooling” layer, which pools across the entire time axis, effectively computing statistics of the learned features across the time of the song. After processing, the neural network spits out an understanding of the song, including characteristics like estimated time signature, key, mode, tempo, and loudness. Below is a plot of data for a 30-second snippet of “Around the World” by Daft Punk. Ultimately, this reading of the song’s key characteristics allows Spotify to understand fundamental similarities between songs and therefore which users might enjoy them, based on their own listening history. That covers the basics of the three major types of recommendation models feeding Spotify’s Recommendations Pipeline, and ultimately powering the Discover Weekly playlist! Of course, these recommendation models are all connected to Spotify’s larger ecosystem, which includes giant amounts of data storage and uses lots of Hadoop clusters to scale recommendations and make these engines work on enormous matrices, endless online music articles, and huge numbers of audio files. I hope this was informative and piqued your curiosity like it did mine. For now, I’ll be working my way through my own Discover Weekly, finding my new favorite music while appreciating all the machine learning that’s going on behind the scenes. 🎶 Thanks also to ladycollective for reading this article and suggesting edits. Software engineer, writer, and generally creative human. Interested in art, feminism, mindfulness, and authenticity. http://sophiaciocca.com " Dhruv Parthasarathy,4.3K,12,https://blog.athelas.com/a-brief-history-of-cnns-in-image-segmentation-from-r-cnn-to-mask-r-cnn-34ea83205de4?source=tag_archive---------6----------------,A Brief History of CNNs in Image Segmentation: From R-CNN to Mask R-CNN,"At Athelas, we use Convolutional Neural Networks(CNNs) for a lot more than just classification! In this post, we’ll see how CNNs can be used, with great results, in image instance segmentation. Ever since Alex Krizhevsky, Geoff Hinton, and Ilya Sutskever won ImageNet in 2012, Convolutional Neural Networks(CNNs) have become the gold standard for image classification. In fact, since then, CNNs have improved to the point where they now outperform humans on the ImageNet challenge! While these results are impressive, image classification is far simpler than the complexity and diversity of true human visual understanding. In classification, there’s generally an image with a single object as the focus and the task is to say what that image is (see above). 
But when we look at the world around us, we carry out far more complex tasks. We see complicated sights with multiple overlapping objects, and different backgrounds and we not only classify these different objects but also identify their boundaries, differences, and relations to one another! Can CNNs help us with such complex tasks? Namely, given a more complicated image, can we use CNNs to identify the different objects in the image, and their boundaries? As has been shown by Ross Girshick and his peers over the last few years, the answer is conclusively yes. Through this post, we’ll cover the intuition behind some of the main techniques used in object detection and segmentation and see how they’ve evolved from one implementation to the next. In particular, we’ll cover R-CNN (Regional CNN), the original application of CNNs to this problem, along with its descendants Fast R-CNN, and Faster R-CNN. Finally, we’ll cover Mask R-CNN, a paper released recently by Facebook Research that extends such object detection techniques to provide pixel level segmentation. Here are the papers referenced in this post: Inspired by the research of Hinton’s lab at the University of Toronto, a small team at UC Berkeley, led by Professor Jitendra Malik, asked themselves what today seems like an inevitable question: Object detection is the task of finding the different objects in an image and classifying them (as seen in the image above). The team, comprised of Ross Girshick (a name we’ll see again), Jeff Donahue, and Trevor Darrel found that this problem can be solved with Krizhevsky’s results by testing on the PASCAL VOC Challenge, a popular object detection challenge akin to ImageNet. They write, Let’s now take a moment to understand how their architecture, Regions With CNNs (R-CNN) works. Understanding R-CNN The goal of R-CNN is to take in an image, and correctly identify where the main objects (via a bounding box) in the image. But how do we find out where these bounding boxes are? R-CNN does what we might intuitively do as well - propose a bunch of boxes in the image and see if any of them actually correspond to an object. R-CNN creates these bounding boxes, or region proposals, using a process called Selective Search which you can read about here. At a high level, Selective Search (shown in the image above) looks at the image through windows of different sizes, and for each size tries to group together adjacent pixels by texture, color, or intensity to identify objects. Once the proposals are created, R-CNN warps the region to a standard square size and passes it through to a modified version of AlexNet (the winning submission to ImageNet 2012 that inspired R-CNN), as shown above. On the final layer of the CNN, R-CNN adds a Support Vector Machine (SVM) that simply classifies whether this is an object, and if so what object. This is step 4 in the image above. Improving the Bounding Boxes Now, having found the object in the box, can we tighten the box to fit the true dimensions of the object? We can, and this is the final step of R-CNN. R-CNN runs a simple linear regression on the region proposal to generate tighter bounding box coordinates to get our final result. Here are the inputs and outputs of this regression model: So, to summarize, R-CNN is just the following steps: R-CNN works really well, but is really quite slow for a few simple reasons: In 2015, Ross Girshick, the first author of R-CNN, solved both these problems, leading to the second algorithm in our short history - Fast R-CNN. 
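Before turning to Fast R-CNN's insights, here is a heavily compressed Python sketch of the R-CNN recipe just described. It is illustrative only: a toy grid proposer stands in for Selective Search, an off-the-shelf ImageNet classifier stands in for the fine-tuned CNN plus SVM pair, the bounding-box regression step is left as a comment, and 'street.jpg' is a hypothetical input file.

```python
# Compressed sketch of the R-CNN recipe (stand-ins throughout, not the paper's code).
import numpy as np
from keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from keras.preprocessing import image

def grid_proposals(width, height, box=224, stride=112):
    """Toy region proposer: overlapping square windows (Selective Search stand-in)."""
    for top in range(0, height - box + 1, stride):
        for left in range(0, width - box + 1, stride):
            yield (left, top, left + box, top + box)

model = ResNet50(weights='imagenet')     # plays the role of the per-region CNN

img = image.load_img('street.jpg')       # hypothetical input image
arr = image.img_to_array(img)
h, w = arr.shape[:2]

detections = []
for (x0, y0, x1, y1) in grid_proposals(w, h):
    crop = arr[y0:y1, x0:x1]                         # steps 1-2: propose and "warp" a region
    batch = preprocess_input(np.expand_dims(crop, 0))
    probs = model.predict(batch)                     # step 3: CNN features + classification
    label = decode_predictions(probs, top=1)[0][0]   # (class_id, class_name, score)
    if label[2] > 0.5:                               # keep only confident regions
        detections.append(((x0, y0, x1, y1), label[1], float(label[2])))
# A real R-CNN would now run the linear bounding-box regressor to tighten each box.
print(detections)
```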
Let’s now go over its main insights. Fast R-CNN Insight 1: RoI (Region of Interest) Pooling For the forward pass of the CNN, Girshick realized that for each image, a lot of proposed regions for the image invariably overlapped causing us to run the same CNN computation again and again (~2000 times!). His insight was simple — Why not run the CNN just once per image and then find a way to share that computation across the ~2000 proposals? This is exactly what Fast R-CNN does using a technique known as RoIPool (Region of Interest Pooling). At its core, RoIPool shares the forward pass of a CNN for an image across its subregions. In the image above, notice how the CNN features for each region are obtained by selecting a corresponding region from the CNN’s feature map. Then, the features in each region are pooled (usually using max pooling). So all it takes us is one pass of the original image as opposed to ~2000! Fast R-CNN Insight 2: Combine All Models into One Network The second insight of Fast R-CNN is to jointly train the CNN, classifier, and bounding box regressor in a single model. Where earlier we had different models to extract image features (CNN), classify (SVM), and tighten bounding boxes (regressor), Fast R-CNN instead used a single network to compute all three. You can see how this was done in the image above. Fast R-CNN replaced the SVM classifier with a softmax layer on top of the CNN to output a classification. It also added a linear regression layer parallel to the softmax layer to output bounding box coordinates. In this way, all the outputs needed came from one single network! Here are the inputs and outputs to this overall model: Even with all these advancements, there was still one remaining bottleneck in the Fast R-CNN process — the region proposer. As we saw, the very first step to detecting the locations of objects is generating a bunch of potential bounding boxes or regions of interest to test. In Fast R-CNN, these proposals were created using Selective Search, a fairly slow process that was found to be the bottleneck of the overall process. In the middle 2015, a team at Microsoft Research composed of Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, found a way to make the region proposal step almost cost free through an architecture they (creatively) named Faster R-CNN. The insight of Faster R-CNN was that region proposals depended on features of the image that were already calculated with the forward pass of the CNN (first step of classification). So why not reuse those same CNN results for region proposals instead of running a separate selective search algorithm? Indeed, this is just what the Faster R-CNN team achieved. In the image above, you can see how a single CNN is used to both carry out region proposals and classification. This way, only one CNN needs to be trained and we get region proposals almost for free! The authors write: Here are the inputs and outputs of their model: How the Regions are Generated Let’s take a moment to see how Faster R-CNN generates these region proposals from CNN features. Faster R-CNN adds a Fully Convolutional Network on top of the features of the CNN creating what’s known as the Region Proposal Network. The Region Proposal Network works by passing a sliding window over the CNN feature map and at each window, outputting k potential bounding boxes and scores for how good each of those boxes is expected to be. What do these k boxes represent? 
Intuitively, we know that objects in an image should fit certain common aspect ratios and sizes. For instance, we know that we want some rectangular boxes that resemble the shapes of humans. Likewise, we know we won’t see many boxes that are very very thin. In this way, we create k such common aspect ratios, which we call anchor boxes. For each such anchor box, we output one bounding box and score per position in the image. With these anchor boxes in mind, let’s take a look at the inputs and outputs to this Region Proposal Network: We then pass each such bounding box that is likely to be an object into Fast R-CNN to generate a classification and tightened bounding boxes. So far, we’ve seen how we’ve been able to use CNN features in many interesting ways to effectively locate different objects in an image with bounding boxes. Can we extend such techniques to go one step further and locate exact pixels of each object instead of just bounding boxes? This problem, known as image segmentation, is what Kaiming He and a team of researchers, including Girshick, explored at Facebook AI using an architecture known as Mask R-CNN. Much like Fast R-CNN and Faster R-CNN, Mask R-CNN’s underlying intuition is straightforward. Given that Faster R-CNN works so well for object detection, could we extend it to also carry out pixel level segmentation? Mask R-CNN does this by adding a branch to Faster R-CNN that outputs a binary mask that says whether or not a given pixel is part of an object. The branch (in white in the above image), as before, is just a Fully Convolutional Network on top of a CNN-based feature map. Here are its inputs and outputs: But the Mask R-CNN authors had to make one small adjustment to make this pipeline work as expected. RoIAlign - Realigning RoIPool to be More Accurate When run without modifications on the original Faster R-CNN architecture, the Mask R-CNN authors realized that the regions of the feature map selected by RoIPool were slightly misaligned from the regions of the original image. Since image segmentation requires pixel level specificity, unlike bounding boxes, this naturally led to inaccuracies. The authors were able to solve this problem by cleverly adjusting RoIPool to be more precisely aligned using a method known as RoIAlign. Imagine we have an image of size 128x128 and a feature map of size 25x25. Let’s imagine we want features for the region corresponding to the top-left 15x15 pixels in the original image (see above). How might we select these pixels from the feature map? We know each pixel in the original image corresponds to ~ 25/128 pixels in the feature map. To select 15 pixels from the original image, we just select 15 * 25/128 ~= 2.93 pixels. In RoIPool, we would round this down and select 2 pixels, causing a slight misalignment. However, in RoIAlign, we avoid such rounding. Instead, we use bilinear interpolation to get a precise idea of what would be at pixel 2.93. This, at a high level, is what allows us to avoid the misalignments caused by RoIPool. Once these masks are generated, Mask R-CNN combines them with the classifications and bounding boxes from Faster R-CNN to generate such wonderfully precise segmentations: If you’re interested in trying out these algorithms yourselves, here are relevant repositories: Faster R-CNN Mask R-CNN In just 3 years, we’ve seen how the research community has progressed from Krizhevsky et al.’s original result to R-CNN, and finally all the way to such powerful results as Mask R-CNN. 
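To make the 128x128 / 25x25 example above concrete, here is a small sketch (with a random, single-channel feature map standing in for real CNN features) of how the fractional coordinate 2.93 can be sampled with bilinear interpolation instead of being rounded down to 2 as RoIPool would do. It is a simplified illustration of the RoIAlign idea, not the exact Mask R-CNN implementation.

import numpy as np

def bilinear_sample(feature_map, x, y):
    # Interpolate the feature map at a fractional (x, y) location.
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, feature_map.shape[1] - 1)
    y1 = min(y0 + 1, feature_map.shape[0] - 1)
    dx, dy = x - x0, y - y0
    top = (1 - dx) * feature_map[y0, x0] + dx * feature_map[y0, x1]
    bottom = (1 - dx) * feature_map[y1, x0] + dx * feature_map[y1, x1]
    return (1 - dy) * top + dy * bottom

feature_map = np.random.rand(25, 25)   # feature map for a 128x128 image
scale = 25 / 128                       # each image pixel covers ~0.195 feature-map cells
edge = 15 * scale                      # 15 image pixels -> ~2.93 feature-map cells

rounded = int(edge)                                 # RoIPool: round 2.93 down to 2 (misaligned)
aligned = bilinear_sample(feature_map, edge, edge)  # RoIAlign: sample the exact location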
Seen in isolation, results like Mask R-CNN seem like incredible leaps of genius that would be unapproachable. Yet, through this post, I hope you’ve seen how such advancements are really the sum of intuitive, incremental improvements through years of hard work and collaboration. Each of the ideas proposed by R-CNN, Fast R-CNN, Faster R-CNN, and finally Mask R-CNN were not necessarily quantum leaps, yet their sum products have led to really remarkable results that bring us closer to a human level understanding of sight. What particularly excites me, is that the time between R-CNN and Mask R-CNN was just three years! With continued funding, focus, and support, how much further can Computer Vision improve over the next three years? If you see any errors or issues in this post, please contact me at dhruv@getathelas.com and I”ll immediately correct them! If you’re interested in applying such techniques, come join us at Athelas where we apply Computer Vision to blood diagnostics daily: Other posts we’ve written: Thanks to Bharath Ramsundar, Pranav Ramkrishnan, Tanay Tandon, and Oliver Cameron for help with this post! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. @dhruvp. VP Eng @Athelas. MIT Math and CS Undergrad ’13. MIT CS Masters ’14. Previously: Director of AI Programs @ Udacity. Blood Diagnostics through Deep Learning http://athelas.com " Andrej Karpathy,35K,8,https://medium.com/@karpathy/software-2-0-a64152b37c35?source=tag_archive---------7----------------,Software 2.0 – Andrej Karpathy – Medium,"I sometimes see people refer to neural networks as just “another tool in your machine learning toolbox”. They have some pros and cons, they work here or there, and sometimes you can use them to win Kaggle competitions. Unfortunately, this interpretation completely misses the forest for the trees. Neural networks are not just another classifier, they represent the beginning of a fundamental shift in how we write software. They are Software 2.0. The “classical stack” of Software 1.0 is what we’re all familiar with — it is written in languages such as Python, C++, etc. It consists of explicit instructions to the computer written by a programmer. By writing each line of code, the programmer identifies a specific point in program space with some desirable behavior. In contrast, Software 2.0 can be written in much more abstract, human unfriendly language, such as the weights of a neural network. No human is involved in writing this code because there are a lot of weights (typical networks might have millions), and coding directly in weights is kind of hard (I tried). Instead, our approach is to specify some goal on the behavior of a desirable program (e.g., “satisfy a dataset of input output pairs of examples”, or “win a game of Go”), write a rough skeleton of the code (e.g. a neural net architecture), that identifies a subset of program space to search, and use the computational resources at our disposal to search this space for a program that works. In the specific case of neural networks, we restrict the search to a continuous subset of the program space where the search process can be made (somewhat surprisingly) efficient with backpropagation and stochastic gradient descent. It turns out that a large portion of real-world problems have the property that it is significantly easier to collect the data (or more generally, identify a desirable behavior) than to explicitly write the program. In these cases, the programmers will often split into two. 
The 2.0 programmers manually curate, maintain, massage, clean and label datasets; each labeled example literally programs the final system because the dataset gets compiled into Software 2.0 code via the optimization. Meanwhile, the 1.0 programmers maintain the surrounding tools, analytics, visualizations, labeling interfaces, infrastructure, and the training code. Let’s briefly examine some concrete examples of this ongoing transition. In each of these areas we’ve seen improvements over the last few years when we give up on trying to address a complex problem by writing explicit code and instead transition the code into the 2.0 stack. Visual Recognition used to consist of engineered features with a bit of machine learning sprinkled on top at the end (e.g., an SVM). Since then, we discovered much more powerful visual features by obtaining large datasets (e.g. ImageNet) and searching in the space of Convolutional Neural Network architectures. More recently, we don’t even trust ourselves to hand-code the architectures and we’ve begun searching over those as well. Speech recognition used to involve a lot of preprocessing, gaussian mixture models and hidden markov models, but today consist almost entirely of neural net stuff. A very related, often cited humorous quote attributed to Fred Jelinek from 1985 reads “Every time I fire a linguist, the performance of our speech recognition system goes up”. Speech synthesis has historically been approached with various stitching mechanisms, but today the state of the art models are large ConvNets (e.g. WaveNet) that produce raw audio signal outputs. Machine Translation has usually been approaches with phrase-based statistical techniques, but neural networks are quickly becoming dominant. My favorite architectures are trained in the multilingual setting, where a single model translates from any source language to any target language, and in weakly supervised (or entirely unsupervised) settings. Games. Explicitly hand-coded Go playing programs have been developed for a long while, but AlphaGo Zero (a ConvNet that looks at the raw state of the board and plays a move) has now become by far the strongest player of the game. I expect we’re going to see very similar results in other areas, e.g. DOTA 2, or StarCraft. Databases. More traditional systems outside of Artificial Intelligence are also seeing early hints of a transition. For instance, “The Case for Learned Index Structures” replaces core components of a data management system with a neural network, outperforming cache-optimized B-Trees by up to 70% in speed while saving an order-of-magnitude in memory. You’ll notice that many of my links above involve work done at Google. This is because Google is currently at the forefront of re-writing large chunks of itself into Software 2.0 code. “One model to rule them all” provides an early sketch of what this might look like, where the statistical strength of the individual domains is amalgamated into one consistent understanding of the world. Why should we prefer to port complex programs into Software 2.0? Clearly, one easy answer is that they work better in practice. However, there are a lot of other convenient reasons to prefer this stack. Let’s take a look at some of the benefits of Software 2.0 (think: a ConvNet) compared to Software 1.0 (think: a production-level C++ code base). Software 2.0 is: Computationally homogeneous. 
A typical neural network is, to the first order, made up of a sandwich of only two operations: matrix multiplication and thresholding at zero (ReLU). Compare that with the instruction set of classical software, which is significantly more heterogenous and complex. Because you only have to provide Software 1.0 implementation for a small number of the core computational primitives (e.g. matrix multiply), it is much easier to make various correctness/performance guarantees. Simple to bake into silicon. As a corollary, since the instruction set of a neural network is relatively small, it is significantly easier to implement these networks much closer to silicon, e.g. with custom ASICs, neuromorphic chips, and so on. The world will change when low-powered intelligence becomes pervasive around us. E.g., small, inexpensive chips could come with a pretrained ConvNet, a speech recognizer, and a WaveNet speech synthesis network all integrated in a small protobrain that you can attach to stuff. Constant running time. Every iteration of a typical neural net forward pass takes exactly the same amount of FLOPS. There is zero variability based on the different execution paths your code could take through some sprawling C++ code base. Of course, you could have dynamic compute graphs but the execution flow is normally still significantly constrained. This way we are also almost guaranteed to never find ourselves in unintended infinite loops. Constant memory use. Related to the above, there is no dynamically allocated memory anywhere so there is also little possibility of swapping to disk, or memory leaks that you have to hunt down in your code. It is highly portable. A sequence of matrix multiplies is significantly easier to run on arbitrary computational configurations compared to classical binaries or scripts. It is very agile. If you had a C++ code and someone wanted you to make it twice as fast (at cost of performance if needed), it would be highly non-trivial to tune the system for the new spec. However, in Software 2.0 we can take our network, remove half of the channels, retrain, and there — it runs exactly at twice the speed and works a bit worse. It’s magic. Conversely, if you happen to get more data/compute, you can immediately make your program work better just by adding more channels and retraining. Modules can meld into an optimal whole. Our software is often decomposed into modules that communicate through public functions, APIs, or endpoints. However, if two Software 2.0 modules that were originally trained separately interact, we can easily backpropagate through the whole. Think about how amazing it could be if your web browser could automatically re-design the low-level system instructions 10 stacks down to achieve a higher efficiency in loading web pages. With 2.0, this is the default behavior. It is better than you. Finally, and most importantly, a neural network is a better piece of code than anything you or I can come up with in a large fraction of valuable verticals, which currently at the very least involve anything to do with images/video and sound/speech. The 2.0 stack also has some of its own disadvantages. At the end of the optimization we’re left with large networks that work well, but it’s very hard to tell how. Across many applications areas, we’ll be left with a choice of using a 90% accurate model we understand, or 99% accurate model we don’t. 
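To see how literal that "sandwich" description is, here is a toy forward pass (my own illustration, not code from this essay) in which the entire "program" is nothing but alternating matrix multiplications and thresholding at zero, with the weights playing the role of the code:

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)              # thresholding at zero

def forward(x, weights):
    # The whole "program": alternate matrix multiplication and ReLU.
    for W in weights[:-1]:
        x = relu(x @ W)
    return x @ weights[-1]                 # final linear layer

# The weights are the Software 2.0 "code"; the instruction mix never changes,
# so every forward pass costs the same number of FLOPS.
weights = [np.random.randn(64, 32), np.random.randn(32, 16), np.random.randn(16, 1)]
output = forward(np.random.randn(8, 64), weights)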
The 2.0 stack can fail in unintuitive and embarrassing ways ,or worse, they can “silently fail”, e.g., by silently adopting biases in their training data, which are very difficult to properly analyze and examine when their sizes are easily in the millions in most cases. Finally, we’re still discovering some of the peculiar properties of this stack. For instance, the existence of adversarial examples and attacks highlights the unintuitive nature of this stack. Software 1.0 is code we write. Software 2.0 is code we do not write, but seems to work well. It is likely that any setting where the program is not obvious but one can repeatedly evaluate the performance of it (e.g. — did you classify some images correctly? do you win games of Go?) will be subject to this transition, because the optimization can find much better code than what we can write. If you think of neural networks as a new software stack and not just a pretty good classifier, it quickly becomes apparent there is a lot of work to do. For example, from a systems perspective, in the 1.0 stack LLVM IR forms a middle layer between a number of front ends (languages) and back ends (architectures) and provides an opportunity for optimization. With neural networks we’re already seeing an explosion of front ends for specifying program subsets to search over (PyTorch, TF, Chainer, mxnet, etc) and back ends to run the training (“compilation”) and inference (CPU, GPU, TPU?, IPU?, ...), but what is a fitting IR, and how we can optimize it (Halide-like)? As another example, we’ve built up a vast amount of tooling that assists humans in writing 1.0 code, like powerful IDEs with features like syntax highlighting, debuggers, profilers, go to def, git integration, etc. Who is going to develop the first powerful Software 2.0 IDEs, which help with all of the workflows in accumulating, visualizing, cleaning, labeling, and sourcing datasets? There is a lot of room for a layer of intelligence assisting the 2.0 programmers, e.g. perhaps the IDE bubbles up images that the network suspects are mislabeled, or assists in labeling, or finds examples where the network is currently uncertain. Finally, in the long term, the future of Software 2.0 is bright because it is increasingly clear to many that when we develop AGI, it will certainly be written in Software 2.0. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Director of AI at Tesla. Previously Research Scientist at OpenAI and PhD student at Stanford. I like to train deep neural nets on large datasets. " Sebastian Heinz,4.4K,13,https://medium.com/mlreview/a-simple-deep-learning-model-for-stock-price-prediction-using-tensorflow-30505541d877?source=tag_archive---------8----------------,A simple deep learning model for stock price prediction using TensorFlow,"For a recent hackathon that we did at STATWORX, some of our team members scraped minutely S&P 500 data from the Google Finance API. The data consisted of index as well as stock prices of the S&P’s 500 constituents. Having this data at hand, the idea of developing a deep learning model for predicting the S&P 500 index based on the 500 constituents prices one minute ago came immediately on my mind. Playing around with the data and building the deep learning model with TensorFlow was fun and so I decided to write my first Medium.com story: a little TensorFlow tutorial on predicting S&P 500 stock prices. 
What you will read is not an in-depth tutorial, but more a high-level introduction to the important building blocks and concepts of TensorFlow models. The Python code I’ve created is not optimized for efficiency but understandability. The dataset I’ve used can be downloaded from here (40MB). Our team exported the scraped stock data from our scraping server as a csv file. The dataset contains n = 41266 minutes of data ranging from April to August 2017 on 500 stocks as well as the total S&P 500 index price. Index and stocks are arranged in wide format. The data was already cleaned and prepared, meaning missing stock and index prices were LOCF’ed (last observation carried forward), so that the file did not contain any missing values. A quick look at the S&P time series using pyplot.plot(data['SP500']): Note: This is actually the lead of the S&P 500 index, meaning, its value is shifted 1 minute into the future. This operation is necessary since we want to predict the next minute of the index and not the current minute. The dataset was split into training and test data. The training data contained 80% of the total dataset. The data was not shuffled but sequentially sliced. The training data ranges from April to approx. end of July 2017, the test data ends end of August 2017. There are a lot of different approaches to time series cross validation, such as rolling forecasts with and without refitting or more elaborate concepts such as time series bootstrap resampling. The latter involves repeated samples from the remainder of the seasonal decomposition of the time series in order to simulate samples that follow the same seasonal pattern as the original time series but are not exact copies of its values. Most neural network architectures benefit from scaling the inputs (sometimes also the output). Why? Because most common activation functions of the network’s neurons such as tanh or sigmoid are defined on the [-1, 1] or [0, 1] interval respectively. Nowadays, rectified linear unit (ReLU) activations are commonly used activations which are unbounded on the axis of possible activation values. However, we will scale both the inputs and targets anyway. Scaling can be easily accomplished in Python using sklearn’s MinMaxScaler. Remark: Caution must be undertaken regarding what part of the data is scaled and when. A common mistake is to scale the whole dataset before training and test split are being applied. Why is this a mistake? Because scaling invokes the calculation of statistics e.g. the min/max of a variable. When performing time series forecasting in real life, you do not have information from future observations at the time of forecasting. Therefore, calculation of scaling statistics has to be conducted on training data and must then be applied to the test data. Otherwise, you use future information at the time of forecasting which commonly biases forecasting metrics in a positive direction. TensorFlow is a great piece of software and currently the leading deep learning and neural network computation framework. It is based on a C++ low level backend but is usually controlled via Python (there is also a neat TensorFlow library for R, maintained by RStudio). TensorFlow operates on a graph representation of the underlying computational task. This approach allows the user to specify mathematical operations as elements in a graph of data, variables and operators. 
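Before turning to the TensorFlow graph itself, here is a minimal sketch of the split-and-scale procedure just described. The file name and column layout are assumptions for illustration (they are not taken from the original script); the important detail is that the MinMaxScaler is fitted on the training slice only and then applied to the test slice.

import pandas as pd
from sklearn.preprocessing import MinMaxScaler

data = pd.read_csv("data_stocks.csv")          # assumed file name for the scraped dataset
data = data.drop(columns=["DATE"], errors="ignore")

# Sequential (not shuffled) 80/20 split.
n = len(data)
train = data.iloc[: int(0.8 * n)].values
test = data.iloc[int(0.8 * n):].values

# Fit the scaling statistics on the training data only, then apply them to the test data.
scaler = MinMaxScaler(feature_range=(-1, 1))
train_scaled = scaler.fit_transform(train)
test_scaled = scaler.transform(test)

# Assumed layout: first column is the (lead) S&P 500 index, the rest are the constituents.
X_train, y_train = train_scaled[:, 1:], train_scaled[:, 0]
X_test, y_test = test_scaled[:, 1:], test_scaled[:, 0]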
Since neural networks are actually graphs of data and mathematical operations, TensorFlow is just perfect for neural networks and deep learning. Check out this simple example (stolen from our deep learning introduction from our blog): In the figure above, two numbers are supposed to be added. Those numbers are stored in two variables, a and b. The two values are flowing through the graph and arrive at the square node, where they are being added. The result of the addition is stored into another variable, c. Actually, a, b and c can be considered as placeholders. Any numbers that are fed into a and b get added and are stored into c. This is exactly how TensorFlow works. The user defines an abstract representation of the model (neural network) through placeholders and variables. Afterwards, the placeholders get ""filled"" with real data and the actual computations take place. The following code implements the toy example from above in TensorFlow: After having imported the TensorFlow library, two placeholders are defined using tf.placeholder(). They correspond to the two blue circles on the left of the image above. Afterwards, the mathematical addition is defined via tf.add(). The result of the computation is c = 9. With placeholders set up, the graph can be executed with any integer value for a and b. Of course, the former problem is just a toy example. The required graphs and computations in a neural network are much more complex. As mentioned before, it all starts with placeholders. We need two placeholders in order to fit our model: X contains the network's inputs (the stock prices of all S&P 500 constituents at time T = t) and Y the network's outputs (the index value of the S&P 500 at time T = t + 1). The shape of the placeholders correspond to [None, n_stocks] with [None] meaning that the inputs are a 2-dimensional matrix and the outputs are a 1-dimensional vector. It is crucial to understand which input and output dimensions the neural net needs in order to design it properly. The None argument indicates that at this point we do not yet know the number of observations that flow through the neural net graph in each batch, so we keep if flexible. We will later define the variable batch_size that controls the number of observations per training batch. Besides placeholders, variables are another cornerstone of the TensorFlow universe. While placeholders are used to store input and target data in the graph, variables are used as flexible containers within the graph that are allowed to change during graph execution. Weights and biases are represented as variables in order to adapt during training. Variables need to be initialized, prior to model training. We will get into that a litte later in more detail. The model consists of four hidden layers. The first layer contains 1024 neurons, slightly more than double the size of the inputs. Subsequent hidden layers are always half the size of the previous layer, which means 512, 256 and finally 128 neurons. A reduction of the number of neurons for each subsequent layer compresses the information the network identifies in the previous layers. Of course, other network architectures and neuron configurations are possible but are out of scope for this introduction level article. It is important to understand the required variable dimensions between input, hidden and output layers. 
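The code for the toy addition graph was shown as an image in the original post; a reconstruction along the lines described, using the TensorFlow 1.x API, might look like the following, together with the two placeholders X and Y for the actual model:

import tensorflow as tf

# Toy example: placeholders are abstract slots, tf.add defines the operation,
# and nothing is computed until the graph is executed in a session.
a = tf.placeholder(dtype=tf.int32, name="a")
b = tf.placeholder(dtype=tf.int32, name="b")
c = tf.add(a, b, name="c")

with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: 5, b: 4}))   # 9

# Placeholders for the stock model: 500 inputs, one target, batch size left open (None).
n_stocks = 500
X = tf.placeholder(dtype=tf.float32, shape=[None, n_stocks])
Y = tf.placeholder(dtype=tf.float32, shape=[None])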
As a rule of thumb in multilayer perceptrons (MLPs, the type of networks used here), the second dimension of the previous layer is the first dimension in the current layer for weight matrices. This might sound complicated but is essentially just each layer passing its output as input to the next layer. The biases dimension equals the second dimension of the current layer’s weight matrix, which corresponds to the number of neurons in this layer. After definition of the required weight and bias variables, the network topology, the architecture of the network, needs to be specified. To do this, placeholders (data) and variables (weights and biases) need to be combined into a system of sequential matrix multiplications. Furthermore, the hidden layers of the network are transformed by activation functions. Activation functions are important elements of the network architecture since they introduce non-linearity to the system. There are dozens of possible activation functions out there; one of the most common is the rectified linear unit (ReLU), which will also be used in this model. The image below illustrates the network architecture. The model consists of three major building blocks: the input layer, the hidden layers and the output layer. This architecture is called a feedforward network. Feedforward indicates that the batch of data solely flows from left to right. Other network architectures, such as recurrent neural networks, also allow data flowing “backwards” in the network. The cost function of the network is used to generate a measure of deviation between the network’s predictions and the actual observed training targets. For regression problems, the mean squared error (MSE) function is commonly used. MSE computes the average squared deviation between predictions and targets. Basically, any differentiable function can be implemented in order to compute a deviation measure between predictions and targets. However, the MSE exhibits certain properties that are advantageous for the general optimization problem to be solved. The optimizer takes care of the necessary computations that are used to adapt the network’s weight and bias variables during training. Those computations invoke the calculation of so-called gradients, which indicate the direction in which the weights and biases have to be changed during training in order to minimize the network’s cost function. The development of stable and speedy optimizers is a major field in neural network and deep learning research. Here the Adam Optimizer is used, which is one of the current default optimizers in deep learning development. Adam stands for “Adaptive Moment Estimation” and can be considered a combination of two other popular optimizers, AdaGrad and RMSProp. Initializers are used to initialize the network’s variables before training. Since neural networks are trained using numerical optimization techniques, the starting point of the optimization problem is one of the key factors in finding good solutions to the underlying problem. There are different initializers available in TensorFlow, each with different initialization approaches. Here, I use the tf.variance_scaling_initializer(), which is one of the default initialization strategies. Note that with TensorFlow it is possible to define multiple initialization functions for different variables within the graph. However, in most cases, a unified initialization is sufficient. 
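Putting the pieces from the last few paragraphs together, a sketch of the described architecture in TensorFlow 1.x (the layer sizes, ReLU activations, MSE cost, Adam optimizer and variance scaling initializer follow the text; the variable names and remaining details are my own) could look like this:

import tensorflow as tf

n_stocks = 500
hidden_sizes = [1024, 512, 256, 128]

weight_init = tf.variance_scaling_initializer(mode="fan_avg", distribution="uniform")
bias_init = tf.zeros_initializer()

X = tf.placeholder(dtype=tf.float32, shape=[None, n_stocks])
Y = tf.placeholder(dtype=tf.float32, shape=[None])

# Hidden layers: each weight matrix is [previous layer size, current layer size].
layer = X
prev_size = n_stocks
for size in hidden_sizes:
    W = tf.Variable(weight_init([prev_size, size]))
    b = tf.Variable(bias_init([size]))
    layer = tf.nn.relu(tf.matmul(layer, W) + b)    # matrix multiplication, bias, ReLU
    prev_size = size

# Output layer: a single value, the next-minute S&P 500 index.
W_out = tf.Variable(weight_init([prev_size, 1]))
b_out = tf.Variable(bias_init([1]))
out = tf.transpose(tf.matmul(layer, W_out) + b_out)

# Cost function (mean squared error) and the Adam optimizer.
mse = tf.reduce_mean(tf.squared_difference(out, Y))
opt = tf.train.AdamOptimizer().minimize(mse)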
After having defined the placeholders, variables, initializers, cost functions and optimizers of the network, the model needs to be trained. Usually, this is done by minibatch training. During minibatch training, random data samples of n = batch_size are drawn from the training data and fed into the network. The training dataset gets divided into n / batch_size batches that are sequentially fed into the network. At this point the placeholders X and Y come into play. They store the input and target data and present them to the network as inputs and targets. A sampled data batch of X flows through the network until it reaches the output layer. There, TensorFlow compares the model’s predictions against the actual observed targets Y in the current batch. Afterwards, TensorFlow conducts an optimization step and updates the network’s parameters, corresponding to the selected learning scheme. After having updated the weights and biases, the next batch is sampled and the process repeats itself. The procedure continues until all batches have been presented to the network. One full sweep over all batches is called an epoch. The training of the network stops once the maximum number of epochs is reached or another stopping criterion defined by the user applies. During the training, we evaluate the network’s predictions on the test set — the data which is not learned, but set aside — every 5th batch and visualize them. Additionally, the images are exported to disk and later combined into a video animation of the training process (see below). The model quickly learns the shape and location of the time series in the test data and is able to produce an accurate prediction after some epochs. Nice! One can see that the network rapidly adapts to the basic shape of the time series and continues to learn finer patterns of the data. This also corresponds to the Adam learning scheme that lowers the learning rate during model training in order not to overshoot the optimization minimum. After 10 epochs, we have a pretty close fit to the test data! The final test MSE equals 0.00078 (it is very low, because the target is scaled). The mean absolute percentage error of the forecast on the test set is equal to 5.31%, which is pretty good. Note that this is just a fit to the test data, not an actual out-of-sample metric in a real-world scenario. Please note that there are tons of ways of further improving this result: design of layers and neurons, choosing different initialization and activation schemes, introduction of dropout layers of neurons, early stopping and so on. Furthermore, different types of deep learning models, such as recurrent neural networks, might achieve better performance on this task. However, this is beyond the scope of this introductory post. The release of TensorFlow was a landmark event in deep learning research. Its flexibility and performance allow researchers to develop all kinds of sophisticated neural network architectures as well as other ML algorithms. However, flexibility comes at the cost of longer time-to-model cycles compared to higher level APIs such as Keras or MxNet. Nonetheless, I am sure that TensorFlow will make its way to becoming the de-facto standard in neural network and deep learning development in research and practical applications. Many of our customers are already using TensorFlow or are starting to develop projects that employ TensorFlow models. Also, our data science consultants at STATWORX are heavily using TensorFlow for deep learning and neural net research and development. 
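To tie these pieces together, here is a sketch of the minibatch training loop described at the beginning of this section, reusing the placeholders, cost and optimizer from the architecture sketch above and the scaled arrays from the data preparation sketch. The batch size is an assumption; the epoch count and the every-5th-batch evaluation follow the text.

import numpy as np
import tensorflow as tf

batch_size = 256
epochs = 10

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(epochs):
        # Shuffle the training observations, then feed them to the network batch by batch.
        order = np.random.permutation(len(y_train))
        X_shuf, y_shuf = X_train[order], y_train[order]
        for i in range(0, len(y_shuf), batch_size):
            batch_x = X_shuf[i:i + batch_size]
            batch_y = y_shuf[i:i + batch_size]
            # One optimization step: forward pass, MSE, gradients, parameter update.
            sess.run(opt, feed_dict={X: batch_x, Y: batch_y})
            # Every 5th batch, check the predictions on the held-out test set.
            if (i // batch_size) % 5 == 0:
                test_mse = sess.run(mse, feed_dict={X: X_test, Y: y_test})
    print("Final test MSE:", sess.run(mse, feed_dict={X: X_test, Y: y_test}))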
Let’s see what Google has planned for the future of TensorFlow. One thing that is missing, at least in my opinion, is a neat graphical user interface for designing and developing neural net architectures with TensorFlow backend. Maybe, this is something Google is already working on ;) If you have any comments or questions on my first Medium story, feel free to comment below! I will try to answer them. Also, feel free to use my code or share this story with your peers on social platforms of your choice. Update: I’ve added both the Python script as well as a (zipped) dataset to a Github repository. Feel free to clone and fork. Lastly, follow me on: Twitter | LinkedIn From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. CEO @ STATWORX. Doing data science, stats and ML for over a decade. Food, wine and cocktail enthusiast. Check our website: https://www.statworx.com Highlights from Machine Learning Research, Projects and Learning Materials. From and For ML Scientists, Engineers an Enthusiasts. " Netflix Technology Blog,25K,13,https://medium.com/netflix-techblog/artwork-personalization-c589f074ad76?source=tag_archive---------9----------------,Artwork Personalization at Netflix – Netflix TechBlog – Medium,"By Ashok Chandrashekar, Fernando Amat, Justin Basilico and Tony Jebara For many years, the main goal of the Netflix personalized recommendation system has been to get the right titles in front each of our members at the right time. With a catalog spanning thousands of titles and a diverse member base spanning over a hundred million accounts, recommending the titles that are just right for each member is crucial. But the job of recommendation does not end there. Why should you care about any particular title we recommend? What can we say about a new and unfamiliar title that will pique your interest? How do we convince you that a title is worth watching? Answering these questions is critical in helping our members discover great content, especially for unfamiliar titles. One avenue to address this challenge is to consider the artwork or imagery we use to portray the titles. If the artwork representing a title captures something compelling to you, then it acts as a gateway into that title and gives you some visual “evidence” for why the title might be good for you. The artwork may highlight an actor that you recognize, capture an exciting moment like a car chase, or contain a dramatic scene that conveys the essence of a movie or TV show. If we present that perfect image on your homepage (and as they say: an image is worth a thousand words), then maybe, just maybe, you will give it a try. This is yet another way Netflix differs from traditional media offerings: we don’t have one product but over a 100 million different products with one for each of our members with personalized recommendations and personalized visuals. In previous work, we discussed an effort to find the single perfect artwork for each title across all our members. Through multi-armed bandit algorithms, we hunted for the best artwork for a title, say Stranger Things, that would earn the most plays from the largest fraction of our members. However, given the enormous diversity in taste and preferences, wouldn’t it be better if we could find the best artwork for each of our members to highlight the aspects of a title that are specifically relevant to them? As inspiration, let us explore scenarios where personalization of artwork would be meaningful. 
Consider the following examples where different members have different viewing histories. On the left are three titles a member watched in the past. To the right of the arrow is the artwork that a member would get for a particular movie that we recommend for them. Let us consider trying to personalize the image we use to depict the movie Good Will Hunting. Here we might personalize this decision based on how much a member prefers different genres and themes. Someone who has watched many romantic movies may be interested in Good Will Hunting if we show the artwork containing Matt Damon and Minnie Driver, whereas, a member who has watched many comedies might be drawn to the movie if we use the artwork containing Robin Williams, a well-known comedian. In another scenario, let’s imagine how the different preferences for cast members might influence the personalization of the artwork for the movie Pulp Fiction. A member who watches many movies featuring Uma Thurman would likely respond positively to the artwork for Pulp Fiction that contains Uma. Meanwhile, a fan of John Travolta may be more interested in watching Pulp Fiction if the artwork features John. Of course, not all the scenarios for personalizing artwork are this clear and obvious. So we don’t enumerate such hand-derived rules but instead rely on the data to tell us what signals to use. Overall, by personalizing artwork we help each title put its best foot forward for every member and thus improve our member experience. At Netflix, we embrace personalization and algorithmically adapt many aspects of our member experience, including the rows we select for the homepage, the titles we select for those rows, the galleries we display, the messages we send, and so forth. Each new aspect that we personalize has unique challenges; personalizing the artwork we display is no exception and presents different personalization challenges. One challenge of image personalization is that we can only select a single piece of artwork to represent each title in each place we present it. In contrast, typical recommendation settings let us present multiple selections to a member where we can subsequently learn about their preferences from the item a member selects. This means that image selection is a chicken-and-egg problem operating in a closed loop: if a member plays a title it can only come from the image that we decided to present to that member. What we seek to understand is when presenting a specific piece of artwork for a title influenced a member to play (or not to play) a title and when a member would have played a title (or not) regardless of which image we presented. Therefore artwork personalization sits on top of the traditional recommendation problem and the algorithms need to work in conjunction with each other. Of course, to properly learn how to personalize artwork we need to collect a lot of data to find signals that indicate when one piece of artwork is significantly better for a member. Another challenge is to understand the impact of changing artwork that we show a member for a title between sessions. Does changing artwork reduce recognizability of the title and make it difficult to visually locate the title again, for example if the member thought was interested before but had not yet watched it? Or, does changing the artwork itself lead the member to reconsider it due to an improved selection? Clearly, if we find better artwork to present to a member we should probably use it; but continuous changes can also confuse people. 
Changing images also introduces an attribution problem as it becomes unclear which image led a member to be interested in a title. Next, there is the challenge of understanding how artwork performs in relation to other artwork we select in the same page or session. Maybe a bold close-up of the main character works for a title on a page because it stands out compared to the other artwork. But if every title had a similar image then the page as a whole may not seem as compelling. Looking at each piece of artwork in isolation may not be enough and we need to think about how to select a diverse set of images across titles on a page and across a session. Beyond the artwork for other titles, the effectiveness of the artwork for a title may depend on what other types of evidence and assets (e.g. synopses, trailers, etc.) we also display for that title. Thus, we may need a diverse selection where each can highlight complementary aspects of a title that may be compelling to a member. To achieve effective personalization, we also need a good pool of artwork for each title. This means that we need several assets where each is engaging, informative and representative of a title to avoid “clickbait”. The set of images for a title also needs to be diverse enough to cover a wide potential audience interested in different aspects of the content. After all, how engaging and informative a piece of artwork is truly depends on the individual seeing it. Therefore, we need to have artwork that highlights not only different themes in a title but also different aesthetics. Our teams of artists and designers strive to create images that are diverse across many dimensions. They also take into consideration the personalization algorithms which will select the images during their creative process for generating artwork. Finally, there are engineering challenges to personalize artwork at scale. One challenge is that our member experience is very visual and thus contains a lot of imagery. So using personalized selection for each asset means handling a peak of over 20 million requests per second with low latency. Such a system must be robust: failing to properly render the artwork in our UI brings a significantly degrades the experience. Our personalization algorithm also needs to respond quickly when a title launches, which means rapidly learning to personalize in a cold-start situation. Then, after launch, the algorithm must continuously adapt as the effectiveness of artwork may change over time as both the title evolves through its life cycle and member tastes evolve. Much of the Netflix recommendation engine is powered by machine learning algorithms. Traditionally, we collect a batch of data on how our members use the service. Then we run a new machine learning algorithm on this batch of data. Next we test this new algorithm against the current production system through an A/B test. An A/B test helps us see if the new algorithm is better than our current production system by trying it out on a random subset of members. Members in group A get the current production experience while members in group B get the new algorithm. If members in group B have higher engagement with Netflix, then we roll-out the new algorithm to the entire member population. Unfortunately, this batch approach incurs regret: many members over a long period of time did not benefit from the better experience. This is illustrated in the figure below. 
To reduce this regret, we move away from batch machine learning and consider online machine learning. For artwork personalization, the specific online learning framework we use is contextual bandits. Rather than waiting to collect a full batch of data, waiting to learn a model, and then waiting for an A/B test to conclude, contextual bandits rapidly figure out the optimal personalized artwork selection for a title for each member and context. Briefly, contextual bandits are a class of online learning algorithms that trade off the cost of gathering training data required for learning an unbiased model on an ongoing basis with the benefits of applying the learned model to each member context. In our previous unpersonalized image selection work, we used non-contextual bandits where we found the winning image regardless of the context. For personalization, the member is the context as we expect different members to respond differently to the images. A key property of contextual bandits is that they are designed to minimize regret. At a high level, the training data for a contextual bandit is obtained through the injection of controlled randomization in the learned model’s predictions. The randomization schemes can vary in complexity from simple epsilon-greedy formulations with uniform randomness to closed loop schemes that adaptively vary the degree of randomization as a function of model uncertainty. We broadly refer to this process as data exploration. The number of candidate artworks that are available for a title along with the size of the overall population for which the system will be deployed informs the choice of the data exploration strategy. With such exploration, we need to log information about the randomization for each artwork selection. This logging allows us to correct for skewed selection propensities and thereby perform offline model evaluation in an unbiased fashion, as described later. Exploration in contextual bandits typically has a cost (or regret) due to the fact that our artwork selection in a member session may not use the predicted best image for that session. What impact does this randomization have on the member experience (and consequently on our metrics)? With over a hundred millions members, the regret incurred by exploration is typically very small and is amortized across our large member base with each member implicitly helping provide feedback on artwork for a small portion of the catalog. This makes the cost of exploration per member negligible, which is an important consideration when choosing contextual bandits to drive a key aspect of our member experience. Randomization and exploration with contextual bandits would be less suitable if the cost of exploration were high. Under our online exploration scheme, we obtain a training dataset that records, for each (member, title, image) tuple, whether that selection resulted in a play of the title or not. Furthermore, we can control the exploration such that artwork selections do not change too often. This gives a cleaner attribution of the member’s engagement to specific artwork. We also carefully determine the label for each observation by looking at the quality of engagement to avoid learning a model that recommends “clickbait” images: ones that entice a member to start playing but ultimately result in low-quality engagement. In this online learning setting, we train our contextual bandit model to select the best artwork for each member based on their context. 
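As a highly simplified sketch of the kind of exploration and propensity logging described above (this illustrates the general epsilon-greedy scheme, not Netflix's production system; the model and feature details are invented), image selection and logging might look like this:

import numpy as np

class ToyModel:
    # Stand-in for a trained model that predicts P(quality play | member context, image).
    def predict(self, member_features, image):
        return 0.5

def choose_image(member_features, candidates, model, epsilon=0.05):
    scores = np.array([model.predict(member_features, img) for img in candidates])
    greedy = int(np.argmax(scores))
    n = len(candidates)

    if np.random.rand() < epsilon:
        chosen = np.random.randint(n)        # explore: show a uniformly random candidate
    else:
        chosen = greedy                      # exploit: show the current best guess

    # Propensity: the probability the logging policy had of showing this image.
    propensity = (1 - epsilon) * (chosen == greedy) + epsilon / n
    # The logged (context, image, propensity) record later supports unbiased offline evaluation.
    return candidates[chosen], {"image": candidates[chosen], "propensity": propensity}

image, log_record = choose_image([0.2, 0.7], ["art_a", "art_b", "art_c"], ToyModel())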
We typically have up to a few dozen candidate artwork images per title. To learn the selection model, we can consider a simplification of the problem by ranking images for a member independently across titles. Even with this simplification we can still learn member image preferences across titles because, for every image candidate, we have some members who were presented with it and engaged with the title and some members who were presented with it and did not engage. These preferences can be modeled to predict for each (member, title, image) tuple, the probability that the member will enjoy a quality engagement. These can be supervised learning models or contextual bandit counterparts with Thompson Sampling, LinUCB, or Bayesian methods that intelligently balance making the best prediction with data exploration. In contextual bandits, the context is usually represented as an feature vector provided as input to the model. There are many signals we can use as features for this problem. In particular, we can consider many attributes of the member: the titles they’ve played, the genre of the titles, interactions of the member with the specific title, their country, their language preferences, the device that the member is using, the time of day and the day of week. Since our algorithm selects images in conjunction with our personalized recommendation engine, we can also use signals regarding what our various recommendation algorithms think of the title, irrespective of what image is used to represent it. An important consideration is that some images are naturally better than others in the candidate pool. We observe the overall take rates for all the images in our data exploration, which is simply the number of quality plays divided by the number of impressions. Our previous work on unpersonalized artwork selection used overall differences in take rates to determine the single best image to select for a whole population. In our new contextual personalized model, the overall take rates are still important and personalization still recovers selections that agree on average with the unpersonalized model’s ranking. The optimal assignment of image artwork to a member is a selection problem to find the best candidate image from a title’s pool of available images. Once the model is trained as above, we use it to rank the images for each context. The model predicts the probability of play for a given image in a given a member context. We sort a candidate set of images by these probabilities and pick the one with the highest probability. That is the image we present to that particular member. To evaluate our contextual bandit algorithms prior to deploying them online on real members, we can use an offline technique known as replay [1]. This method allows us to answer counterfactual questions based on the logged exploration data (Figure 1). In other words, we can compare offline what would have happened in historical sessions under different scenarios if we had used different algorithms in an unbiased way. Replay allows us to see how members would have engaged with our titles if we had hypothetically presented images that were selected through a new algorithm rather than the algorithm used in production. For images, we are interested in several metrics, particularly the take fraction, as described above. Figure 2 shows how contextual bandit approach helps increase the average take fraction across the catalog compared to random selection or non-contextual bandits. 
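The replay technique [1] can be sketched in a few lines: keep only the logged sessions in which the new policy would have chosen the same image that the randomized logging policy actually showed, and compute the take fraction on that matched subset. The record fields below are illustrative, not the actual logging schema.

def replay_take_fraction(logged_events, policy):
    # logged_events: records from randomized exploration, each with the member context,
    # the candidate images, the image actually shown, and whether it led to a quality play.
    matched, plays = 0, 0
    for event in logged_events:
        chosen = policy(event["member_context"], event["candidates"])
        if chosen == event["shown_image"]:          # the policy agrees with what was logged
            matched += 1
            plays += event["quality_play"]          # 1 if the impression led to a quality play
    # The take fraction on the matched subset estimates the policy's online take fraction.
    return plays / matched if matched else 0.0

# Example: a trivial policy that always picks the first candidate.
events = [
    {"member_context": "m1", "candidates": ["a", "b"], "shown_image": "a", "quality_play": 1},
    {"member_context": "m2", "candidates": ["a", "b"], "shown_image": "b", "quality_play": 0},
]
print(replay_take_fraction(events, lambda ctx, cands: cands[0]))   # 1.0 on the matched subset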
After experimenting with many different models offline and finding ones that had a substantial increase in replay, we ultimately ran an A/B test to compare the most promising personalized contextual bandits against unpersonalized bandits. As we suspected, the personalization worked and generated a significant lift in our core metrics. We also saw a reasonable correlation between what we measured offline in replay and what we saw online with the models. The online results also produced some interesting insights. For example, the improvement of personalization was larger in cases where the member had no prior interaction with the title. This makes sense because we would expect that the artwork would be more important to someone when a title is less familiar. With this approach, we’ve taken our first steps in personalizing the selection of artwork for our recommendations and across our service. This has resulted in a meaningful improvement in how our members discover new content... so we’ve rolled it out to everyone! This project is the first instance of personalizing not just what we recommend but also how we recommend to our members. But there are many opportunities to expand and improve this initial approach. These opportunities include developing algorithms to handle cold-start by personalizing new images and new titles as quickly as possible, for example by using techniques from computer vision. Another opportunity is extending this personalization approach across other types of artwork we use and other evidence that describe our titles such as synopses, metadata, and trailers. There is also an even broader problem: helping artists and designers figure out what new imagery we should add to the set to make a title even more compelling and personalizable. If these types of challenges interest you, please let us know! We are always looking for great people to join our team, and, for these types of projects, we are especially excited by candidates with machine learning and/or computer vision expertise. [1] L. Li, W. Chu, J. Langford, and X. Wang, “Unbiased Offline Evaluation of Contextual-bandit-based News Article Recommendation Algorithms,” in Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, New York, NY, USA, 2011, pp. 297–306. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Learn more about how Netflix designs, builds, and operates our systems and engineering organizations Learn about Netflix’s world class engineering efforts, company culture, product developments and more. " Michael Jordan,34K,16,https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7?source=tag_archive---------0----------------,Artificial Intelligence — The Revolution Hasn’t Happened Yet,"Artificial Intelligence (AI) is the mantra of the current era. The phrase is intoned by technologists, academicians, journalists and venture capitalists alike. As with many phrases that cross over from technical academic fields into general circulation, there is significant misunderstanding accompanying the use of the phrase. But this is not the classical case of the public not understanding the scientists — here the scientists are often as befuddled as the public. The idea that our era is somehow seeing the emergence of an intelligence in silicon that rivals our own entertains all of us — enthralling us and frightening us in equal measure. And, unfortunately, it distracts us. 
There is a different narrative that one can tell about the current era. Consider the following story, which involves humans, computers, data and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to 1 in 20.” She further let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis. But amniocentesis was risky — the risk of killing the fetus during the procedure was roughly 1 in 300. Being a statistician, I determined to find out where these numbers were coming from. To cut a long story short, I discovered that a statistical analysis had been done a decade previously in the UK, where these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. But I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I went back to tell the geneticist that I believed that the white spots were likely false positives — that they were literally “white noise.” She said “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; it’s when the new machine arrived.” We didn’t do the amniocentesis, and a healthy girl was born a few months later. But the episode troubled me, particularly after a back-of-the-envelope calculation convinced me that many thousands of people had gotten that diagnosis that same day worldwide, that many of them had opted for amniocentesis, and that a number of babies had died needlessly. And this happened day after day until it somehow got fixed. The problem that this episode revealed wasn’t about my individual medical care; it was about a medical system that measured variables and outcomes in various places and times, conducted statistical analyses, and made use of the results in other places and times. The problem had to do not just with data analysis per se, but with what database researchers call “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight. I’m also a computer scientist, and it occurred to me that the principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics, and taking into account human utilities, were nowhere to be found in my education. And it occurred to me that the development of such principles — which will be needed not only in the medical domain but also in domains such as commerce, transportation and education — were at least as important as those of building AI systems that can dazzle us with their game-playing or sensorimotor skills. Whether or not we come to understand “intelligence” any time soon, we do have a major challenge on our hands in bringing together computers and humans in ways that enhance human life. 
While this challenge is viewed by some as subservient to the creation of “artificial intelligence,” it can also be viewed more prosaically — but with no less reverence — as the creation of a new branch of engineering. Much like civil engineering and chemical engineering in decades past, this new discipline aims to corral the power of a few key ideas, bringing new resources and capabilities to people, and doing so safely. Whereas civil engineering and chemical engineering were built on physics and chemistry, this new engineering discipline will be built on ideas that the preceding century gave substance to — ideas such as “information,” “algorithm,” “data,” “uncertainty,” “computing,” “inference,” and “optimization.” Moreover, since much of the focus of the new discipline will be on data from and about humans, its development will require perspectives from the social sciences and humanities. While the building blocks have begun to emerge, the principles for putting these blocks together have not yet emerged, and so the blocks are currently being put together in ad-hoc ways. Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference-and-decision-making systems that involve machines, humans and the environment. Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws. And, unfortunately, we are not very good at anticipating what the next emerging serious flaw will be. What we’re missing is an engineering discipline with its principles of analysis and design. The current public dialog about these issues too often uses “AI” as an intellectual wildcard, one that makes it difficult to reason about the scope and consequences of emerging technology. Let us begin by considering more carefully what “AI” has been used to refer to, both recently and historically. Most of what is being called “AI” today, particularly in the public sphere, is what has been called “Machine Learning” (ML) for the past several decades. ML is an algorithmic field that blends ideas from statistics, computer science and many other disciplines (see below) to design algorithms that process data, make predictions and help make decisions. In terms of impact on the real world, ML is the real thing, and not just recently. Indeed, that ML would grow into massive industrial relevance was already clear in the early 1990s, and by the turn of the century forward-looking companies such as Amazon were already using ML throughout their business, solving mission-critical back-end problems in fraud detection and supply-chain prediction, and building innovative consumer-facing services such as recommendation systems. As datasets and computing resources grew rapidly over the ensuing two decades, it became clear that ML would soon power not only Amazon but essentially any company in which decisions could be tied to large-scale data. New business models would emerge. The phrase “Data Science” began to be used to refer to this phenomenon, reflecting the need of ML algorithms experts to partner with database and distributed-systems experts to build scalable, robust ML systems, and reflecting the larger social and environmental scope of the resulting systems. This confluence of ideas and technology trends has been rebranded as “AI” over the past few years. 
This rebranding is worthy of some scrutiny. Historically, the phrase “AI” was coined in the late 1950’s to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level intelligence. We will use the phrase “human-imitative AI” to refer to this aspiration, emphasizing the notion that the artificially intelligent entity should seem to be one of us, if not physically at least mentally (whatever that might mean). This was largely an academic enterprise. While related academic fields such as operations research, statistics, pattern recognition, information theory and control theory already existed, and were often inspired by human intelligence (and animal intelligence), these fields were arguably focused on “low-level” signals and decisions. The ability of, say, a squirrel to perceive the three-dimensional structure of the forest it lives in, and to leap among its branches, was inspirational to these fields. “AI” was meant to focus on something different — the “high-level” or “cognitive” capability of humans to “reason” and to “think.” Sixty years later, however, high-level reasoning and thought remain elusive. The developments which are now being called “AI” arose mostly in the engineering fields associated with low-level pattern recognition and movement control, and in the field of statistics — the discipline focused on finding patterns in data and on making well-founded predictions, tests of hypotheses and decisions. Indeed, the famous “backpropagation” algorithm that was rediscovered by David Rumelhart in the early 1980s, and which is now viewed as being at the core of the so-called “AI revolution,” first arose in the field of control theory in the 1950s and 1960s. One of its early applications was to optimize the thrusts of the Apollo spaceships as they headed towards the moon. Since the 1960s much progress has been made, but it has arguably not come about from the pursuit of human-imitative AI. Rather, as in the case of the Apollo spaceships, these ideas have often been hidden behind the scenes, and have been the handiwork of researchers focused on specific engineering challenges. Although not visible to the general public, research and systems-building in areas such as document retrieval, text classification, fraud detection, recommendation systems, personalized search, social network analysis, planning, diagnostics and A/B testing have been a major success — these are the advances that have powered companies such as Google, Netflix, Facebook and Amazon. One could simply agree to refer to all of this as “AI,” and indeed that is what appears to have happened. Such labeling may come as a surprise to optimization or statistics researchers, who wake up to find themselves suddenly referred to as “AI researchers.” But labeling of researchers aside, the bigger problem is that the use of this single, ill-defined acronym prevents a clear understanding of the range of intellectual and commercial issues at play. The past two decades have seen major progress — in industry and academia — in a complementary aspiration to human-imitative AI that is often referred to as “Intelligence Augmentation” (IA). Here computation and data are used to create services that augment human intelligence and creativity. A search engine can be viewed as an example of IA (it augments human memory and factual knowledge), as can natural language translation (it augments the ability of a human to communicate). 
Computing-based generation of sounds and images serves as a palette and creativity enhancer for artists. While services of this kind could conceivably involve high-level reasoning and thought, currently they don’t — they mostly perform various kinds of string-matching and numerical operations that capture patterns that humans can make use of. Hoping that the reader will tolerate one last acronym, let us conceive broadly of a discipline of “Intelligent Infrastructure” (II), whereby a web of computation, data and physical entities exists that makes human environments more supportive, interesting and safe. Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce and finance, with vast implications for individual humans and societies. This emergence sometimes arises in conversations about an “Internet of Things,” but that effort generally refers to the mere problem of getting “things” onto the Internet — not to the far grander set of challenges associated with these “things” capable of analyzing those data streams to discover facts about the world, and interacting with humans and other “things” at a far higher level of abstraction than mere bits. For example, returning to my personal anecdote, we might imagine living our lives in a “societal-scale medical system” that sets up data flows, and data-analysis flows, between doctors and devices positioned in and around human bodies, thereby able to aid human intelligence in making diagnoses and providing care. The system would incorporate information from cells in the body, DNA, blood tests, environment, population genetics and the vast scientific literature on drugs and treatments. It would not just focus on a single patient and a doctor, but on relationships among all humans — just as current medical testing allows experiments done on one set of humans (or animals) to be brought to bear in the care of other humans. It would help maintain notions of relevance, provenance and reliability, in the way that the current banking system focuses on such challenges in the domain of finance and payment. And, while one can foresee many problems arising in such a system — involving privacy issues, liability issues, security issues, etc — these problems should properly be viewed as challenges, not show-stoppers. We now come to a critical issue: Is working on classical human-imitative AI the best or only way to focus on these larger challenges? Some of the most heralded recent success stories of ML have in fact been in areas associated with human-imitative AI — areas such as computer vision, speech recognition, game-playing and robotics. So perhaps we should simply await further progress in domains such as these. There are two points to make here. First, although one would not know it from reading the newspapers, success in human-imitative AI has in fact been limited — we are very far from realizing human-imitative AI aspirations. Unfortunately the thrill (and fear) of making even limited progress on human-imitative AI gives rise to levels of over-exuberance and media attention that is not present in other areas of engineering. Second, and more importantly, success in these domains is neither sufficient nor necessary to solve important IA and II problems. On the sufficiency side, consider self-driving cars. For such technology to be realized, a range of engineering problems will need to be solved that may have little relationship to human competencies (or human lack-of-competencies). 
The overall transportation system (an II system) will likely more closely resemble the current air-traffic control system than the current collection of loosely-coupled, forward-facing, inattentive human drivers. It will be vastly more complex than the current air-traffic control system, specifically in its use of massive amounts of data and adaptive statistical modeling to inform fine-grained decisions. It is those challenges that need to be in the forefront, and in such an effort a focus on human-imitative AI may be a distraction. As for the necessity argument, it is sometimes argued that the human-imitative AI aspiration subsumes IA and II aspirations, because a human-imitative AI system would not only be able to solve the classical problems of AI (as embodied, e.g., in the Turing test), but it would also be our best bet for solving IA and II problems. Such an argument has little historical precedent. Did civil engineering develop by envisaging the creation of an artificial carpenter or bricklayer? Should chemical engineering have been framed in terms of creating an artificial chemist? Even more polemically: if our goal was to build chemical factories, should we have first created an artificial chemist who would have then worked out how to build a chemical factory? A related argument is that human intelligence is the only kind of intelligence that we know, and that we should aim to mimic it as a first step. But humans are in fact not very good at some kinds of reasoning — we have our lapses, biases and limitations. Moreover, critically, we did not evolve to perform the kinds of large-scale decision-making that modern II systems must face, nor to cope with the kinds of uncertainty that arise in II contexts. One could argue that an AI system would not only imitate human intelligence, but also “correct” it, and would also scale to arbitrarily large problems. But we are now in the realm of science fiction — such speculative arguments, while entertaining in the setting of fiction, should not be our principal strategy going forward in the face of the critical IA and II problems that are beginning to emerge. We need to solve IA and II problems on their own merits, not as a mere corollary to a human-imitative AI agenda. It is not hard to pinpoint algorithmic and infrastructure challenges in II systems that are not central themes in human-imitative AI research. II systems require the ability to manage distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions and they must deal with long-tail phenomena whereby there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries. Finally, and of particular importance, II systems must bring economic ideas such as incentives and pricing into the realm of the statistical and computational infrastructures that link humans to each other and to valued goods. Such II systems can be viewed as not merely providing a service, but as creating markets. There are domains such as music, literature and journalism that are crying out for the emergence of such markets, where data analysis links producers and consumers. And this must all be done within the context of evolving societal, ethical and legal norms. Of course, classical human-imitative AI problems remain of great interest as well. 
However, the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems that perform natural language processing, the need to infer and represent causality, the need to develop computationally-tractable representations of uncertainty and the need to develop systems that formulate and pursue long-term goals. These are classical goals in human-imitative AI, but in the current hubbub over the “AI revolution,” it is easy to forget that they are not yet solved. IA will also remain quite essential, because for the foreseeable future, computers will not be able to match humans in their ability to reason abstractly about real-world situations. We will need well-thought-out interactions of humans and computers to solve our most pressing problems. And we will want computers to trigger new levels of human creativity, not replace human creativity (whatever that might mean). It was John McCarthy (while a professor at Dartmouth, and soon to take a position at MIT) who coined the term “AI,” apparently to distinguish his budding research agenda from that of Norbert Wiener (then an older professor at MIT). Wiener had coined “cybernetics” to refer to his own vision of intelligent systems — a vision that was closely tied to operations research, statistics, pattern recognition, information theory and control theory. McCarthy, on the other hand, emphasized the ties to logic. In an interesting reversal, it is Wiener’s intellectual agenda that has come to dominate in the current era, under the banner of McCarthy’s terminology. (This state of affairs is surely, however, only temporary; the pendulum swings more in AI than in most fields.) But we need to move beyond the particular historical perspectives of McCarthy and Wiener. We need to realize that the current public dialog on AI — which focuses on a narrow subset of industry and a narrow subset of academia — risks blinding us to the challenges and opportunities that are presented by the full scope of AI, IA and II. This scope is less about the realization of science-fiction dreams or nightmares of super-human machines, and more about the need for humans to understand and shape technology as it becomes ever more present and influential in their daily lives. Moreover, in this understanding and shaping there is a need for a diverse set of voices from all walks of life, not merely a dialog among the technologically attuned. Focusing narrowly on human-imitative AI prevents an appropriately wide range of voices from being heard. While industry will continue to drive many developments, academia will also continue to play an essential role, not only in providing some of the most innovative technical ideas, but also in bringing researchers from the computational and statistical disciplines together with researchers from other disciplines whose contributions and perspectives are sorely needed — notably the social sciences, the cognitive sciences and the humanities. On the other hand, while the humanities and the sciences are essential as we go forward, we should also not pretend that we are talking about something other than an engineering effort of unprecedented scale and scope — society is aiming to build new kinds of artifacts. 
These artifacts should be built to work as claimed. We do not want to build systems that help us with medical treatments, transportation options and commercial opportunities to find out after the fact that these systems don’t really work — that they make errors that take their toll in terms of human lives and happiness. In this regard, as I have emphasized, there is an engineering discipline yet to emerge for the data-focused and learning-focused fields. As exciting as these latter fields appear to be, they cannot yet be viewed as constituting an engineering discipline. Moreover, we should embrace the fact that what we are witnessing is the creation of a new branch of engineering. The term “engineering” is often invoked in a narrow sense — in academia and beyond — with overtones of cold, affectless machinery, and negative connotations of loss of control by humans. But an engineering discipline can be what we want it to be. In the current era, we have a real opportunity to conceive of something historically new — a human-centric engineering discipline. I will resist giving this emerging discipline a name, but if the acronym “AI” continues to be used as placeholder nomenclature going forward, let’s be aware of the very real limitations of this placeholder. Let’s broaden our scope, tone down the hype and recognize the serious challenges ahead. Michael I. Jordan From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Michael I. Jordan is a Professor in the Department of Electrical Engineering and Computer Sciences and the Department of Statistics at UC Berkeley. " Blaise Aguera y Arcas,8.7K,15,https://medium.com/@blaisea/do-algorithms-reveal-sexual-orientation-or-just-expose-our-stereotypes-d998fafdf477?source=tag_archive---------1----------------,Do algorithms reveal sexual orientation or just expose our stereotypes?,"by Blaise Agüera y Arcas, Alexander Todorov and Margaret Mitchell A study claiming that artificial intelligence can infer sexual orientation from facial images caused a media uproar in the Fall of 2017. The Economist featured this work on the cover of their September 9th magazine; on the other hand two major LGBTQ organizations, The Human Rights Campaign and GLAAD, immediately labeled it “junk science”. Michal Kosinski, who co-authored the study with fellow researcher Yilun Wang, initially expressed surprise, calling the critiques “knee-jerk” reactions. However, he then proceeded to make even bolder claims: that such AI algorithms will soon be able to measure the intelligence, political orientation, and criminal inclinations of people from their facial images alone. Kosinski’s controversial claims are nothing new. Last year, two computer scientists from China posted a non-peer-reviewed paper online in which they argued that their AI algorithm correctly categorizes “criminals” with nearly 90% accuracy from a government ID photo alone. Technology startups had also begun to crop up, claiming that they can profile people’s character from their facial images. These developments had prompted the three of us to collaborate earlier in the year on a Medium essay, Physiognomy’s New Clothes, to confront claims that AI face recognition reveals deep character traits. We described how the junk science of physiognomy has roots going back into antiquity, with practitioners in every era resurrecting beliefs based on prejudice using the new methodology of the age. 
In the 19th century this included anthropology and psychology; in the 20th, genetics and statistical analysis; and in the 21st, artificial intelligence. In late 2016, the paper motivating our physiognomy essay seemed well outside the mainstream in tech and academia, but as in other areas of discourse, what recently felt like a fringe position must now be addressed head on. Kosinski is a faculty member of Stanford’s Graduate School of Business, and this new study has been accepted for publication in the respected Journal of Personality and Social Psychology. Much of the ensuing scrutiny has focused on ethics, implicitly assuming that the science is valid. We will focus on the science. The authors trained and tested their “sexual orientation detector” using 35,326 images from public profiles on a US dating website. Composite images of the lesbian, gay, and straight men and women in the sample reveal a great deal about the information available to the algorithm: Clearly there are differences between these four composite faces. Wang and Kosinski assert that the key differences are in physiognomy, meaning that a sexual orientation tends to go along with a characteristic facial structure. However, we can immediately see that some of these differences are more superficial. For example, the “average” straight woman appears to wear eyeshadow, while the “average” lesbian does not. Glasses are clearly visible on the gay man, and to a lesser extent on the lesbian, while they seem absent in the heterosexual composites. Might it be the case that the algorithm’s ability to detect orientation has little to do with facial structure, but is due rather to patterns in grooming, presentation and lifestyle? We conducted a survey of 8,000 Americans using Amazon’s Mechanical Turk crowdsourcing platform to see if we could independently confirm these patterns, asking 77 yes/no questions such as “Do you wear eyeshadow?”, “Do you wear glasses?”, and “Do you have a beard?”, as well as questions about gender and sexual orientation. The results show that lesbians indeed use eyeshadow much less than straight women do, gay men and women do both wear glasses more, and young opposite-sex-attracted men are considerably more likely to have prominent facial hair than their gay or same-sex-attracted peers. Breaking down the answers by the age of the respondent can provide a richer and clearer view of the data than any single statistic. In the following figures, we show the proportion of women who answer “yes” to “Do you ever use makeup?” (top) and “Do you wear eyeshadow?” (bottom), averaged over 6-year age intervals: The blue curves represent strictly opposite-sex attracted women (a nearly identical set to those who answered “yes” to “Are you heterosexual or straight?”); the cyan curve represents women who answer “yes” to either or both of “Are you sexually attracted to women?” and “Are you romantically attracted to women?”; and the red curve represents women who answer “yes” to “Are you homosexual, gay or lesbian?”. [1] The shaded regions around each curve show 68% confidence intervals. [2] The patterns revealed here are intuitive; it won’t be breaking news to most that straight women tend to wear more makeup and eyeshadow than same-sex attracted and (even more so) lesbian-identifying women. On the other hand these curves also show us how often these stereotypes are violated. 
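A minimal sketch of how curves like these can be computed, assuming the survey answers sit in a pandas DataFrame with hypothetical columns age and wears_eyeshadow (the column names are ours, not the study's). A 68% confidence band corresponds to roughly plus or minus one standard error of a binomial proportion:

```python
import numpy as np
import pandas as pd

def binned_proportion(df, answer_col, bin_width=6):
    """Proportion answering 'yes' per age bin, with ~68% (±1 SE) bands."""
    bins = np.arange(18, 72 + bin_width, bin_width)
    groups = df.groupby(pd.cut(df["age"], bins))[answer_col]
    p = groups.mean()                      # fraction of 'yes' answers in each bin
    n = groups.count()                     # respondents in each bin
    se = np.sqrt(p * (1 - p) / n)          # binomial standard error
    return pd.DataFrame({"p": p, "lo": p - se, "hi": p + se})

# Hypothetical usage: one curve per orientation group, e.g.
# for label, sub in survey.groupby("group"):
#     print(label, binned_proportion(sub, "wears_eyeshadow"))
```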
That same-sex attracted men of most ages wear glasses significantly more than exclusively opposite-sex attracted men do might be a bit less obvious, but this trend is equally clear: [3] A proponent of physiognomy might be tempted to guess that this is somehow related to differences in visual acuity between these populations of men. However, asking the question “Do you like how you look in glasses?” reveals that this is likely more of a stylistic choice: Same-sex attracted women also report wearing glasses more, as well as liking how they look in glasses more, across a range of ages: One can also see how opposite-sex attracted women under the age of 40 wear contact lenses significantly more than same-sex attracted women, despite reporting that they have a vision defect at roughly the same rate, further illustrating how the difference is driven by an aesthetic preference: [4] Similar analysis shows that young same-sex attracted men are much less likely to have hairy faces than opposite-sex attracted men (“serious facial hair” in our plots is defined as answering “yes” to having a goatee, beard, or moustache, but “no” to stubble). Overall, opposite-sex attracted men in our sample are 35% more likely to have serious facial hair than same-sex attracted men, and for men under the age of 31 (who are overrepresented on dating websites), this rises to 75%. Wang and Kosinski speculate in their paper that the faintness of the beard and moustache in their gay male composite might be connected with prenatal underexposure to androgens (male hormones), resulting in a feminizing effect, hence sparser facial hair. The fact that we see a cohort of same-sex attracted men in their 40s who have just as much facial hair as opposite-sex attracted men suggests a different story, in which fashion trends and cultural norms play the dominant role in choices about facial hair among men, not differing exposure to hormones early in development. The authors of the paper additionally note that the heterosexual male composite appears to have darker skin than the other three composites. Our survey confirms that opposite-sex attracted men consistently self-report having a tan face (“Yes” to “Is your face tan?”) slightly more often than same-sex attracted men: Once again Wang and Kosinski reach for a hormonal explanation, writing: “While the brightness of the facial image might be driven by many factors, previous research found that testosterone stimulates melanocyte structure and function leading to a darker skin”. However, a simpler answer is suggested by the responses to the question “Do you work outdoors?”: Overall, opposite-sex attracted men are 29% more likely to work outdoors, and among men under 31, this rises to 39%. Previous research has found that increased exposure to sunlight leads to darker skin! [5] None of these results prove that there is no physiological basis for sexual orientation; in fact ample evidence shows us that orientation runs much deeper than a choice or a “lifestyle”. In a critique aimed in part at fraudulent “conversion therapy” programs, United States Surgeon General David Satcher wrote in a 2001 report, “Sexual orientation is usually determined by adolescence, if not earlier [...], and there is no valid scientific evidence that sexual orientation can be changed”. It follows that if we dig deeply enough into human physiology and neuroscience we will eventually find reliable correlates and maybe even the origins of sexual orientation. 
In our survey we also find some evidence of outwardly visible correlates of orientation that are not cultural: perhaps most strikingly, very tall women are overrepresented among lesbian-identifying respondents. [6] However, while this is interesting, it’s very far from a good predictor of women’s sexual orientation. Makeup and eyeshadow do much better. The way Wang and Kosinski measure the efficacy of their “AI gaydar” is equivalent to choosing a straight and a gay or lesbian face image, both from data “held out” during the training process, and asking how often the algorithm correctly guesses which is which. 50% performance would be no better than random chance. For women, guessing that the taller of the two is the lesbian achieves only 51% accuracy — barely above random chance. This is because, despite the statistically meaningful overrepresentation of tall women among the lesbian population, the great majority of lesbians are not unusually tall. By contrast, the performance measures in the paper, 81% for gay men and 71% for lesbian women, seem impressive. [7] Consider, however, that we can achieve comparable results with trivial models based only on a handful of yes/no survey questions about presentation. For example, for pairs of women, one of whom is lesbian, the following not-exactly-superhuman algorithm is on average 63% accurate: if neither or both women wear eyeshadow, flip a coin; otherwise guess that the one who wears eyeshadow is straight, and the other lesbian. Adding six more yes/no questions about presentation (“Do you ever use makeup?”, “Do you have long hair?”, “Do you have short hair?”, “Do you ever use colored lipstick?”, “Do you like how you look in glasses?”, and “Do you work outdoors?”) as additional signals raises the performance to 70%. [8] Given how many more details about presentation are available in a face image, 71% performance no longer seems so impressive. Several studies, including a recent one in the Journal of Sex Research, have shown that human judges’ “gaydar” is no more reliable than a coin flip when the judgement is based on pictures taken under well-controlled conditions (head pose, lighting, glasses, makeup, etc.). It’s better than chance if these variables are not controlled for, because a person’s presentation — especially if that person is out — involves social signaling. We signal our orientation and many other kinds of status, presumably in order to attract the kind of attention we want and to fit in with people like us. [9] Wang and Kosinski argue against this interpretation on the grounds that their algorithm works on Facebook selfies of openly gay men as well as dating website selfies. The issue, however, is not whether the images come from a dating website or Facebook, but whether they are self-posted or taken under standardized conditions. Most people present themselves in ways that have been calibrated over many years of media consumption, observing others, looking in the mirror, and gauging social reactions. In one of the earliest “gaydar” studies using social media, participants could categorize gay men with about 58% accuracy; but when the researchers used Facebook images of gay and heterosexual men posted by their friends (still far from a perfect control), the accuracy dropped to 52%. If subtle biases in image quality, expression, and grooming can be picked up on by humans, these biases can also be detected by an AI algorithm. 
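The 63%-accurate eyeshadow rule described above is simple enough to write out in full. A minimal sketch, assuming a list of pairs of survey records in which exactly one woman per pair is lesbian (the field names are hypothetical):

```python
import random

def guess_lesbian(woman_a, woman_b):
    """Guess which of two women is lesbian using only the eyeshadow answer:
    if exactly one wears eyeshadow, guess the other one; otherwise flip a coin."""
    a, b = woman_a["wears_eyeshadow"], woman_b["wears_eyeshadow"]
    if a and not b:
        return woman_b
    if b and not a:
        return woman_a
    return random.choice([woman_a, woman_b])

def pairwise_accuracy(pairs):
    """pairs: list of (woman_a, woman_b) dicts, exactly one of which has is_lesbian=True."""
    hits = sum(guess_lesbian(a, b)["is_lesbian"] for a, b in pairs)
    return hits / len(pairs)
```

Adding the extra yes/no presentation questions amounts to replacing this single rule with a small linear classifier over the answers, which is the setup described in the footnotes below.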
While Wang and Kosinski acknowledge grooming and style, they believe that the chief differences between their composite images relate to face shape, arguing that gay men’s faces are more “feminine” (narrower jaws, longer noses, larger foreheads) while lesbian faces are more “masculine” (larger jaws, shorter noses, smaller foreheads). As with less facial hair on gay men and darker skin on straight men, they suggest that the mechanism is gender-atypical hormonal exposure during development. This echoes a widely discredited 19th century model of homosexuality, “sexual inversion”. More likely, heterosexual men tend to take selfies from slightly below, which will have the apparent effect of enlarging the chin, shortening the nose, shrinking the forehead, and attenuating the smile (see our selfies below). This view emphasizes dominance — or, perhaps more benignly, an expectation that the viewer will be shorter. On the other hand, as a wedding photographer notes in her blog, “when you shoot from above, your eyes look bigger, which is generally attractive — especially for women.” This may be a heteronormative assessment. When a face is photographed from below, the nostrils are prominent, while higher shooting angles de-emphasize and eventually conceal them altogether. Looking again at the composite images, we can see that the heterosexual male face has more pronounced dark spots corresponding to the nostrils than the gay male, while the opposite is true for the female faces. This is consistent with a pattern of heterosexual men on average shooting from below, heterosexual women from above as the wedding photographer suggests, and gay men and lesbian women from directly in front. A similar pattern is evident in the eyebrows: shooting from above makes them look more V-shaped, but their apparent shape becomes flatter, and eventually caret-shaped (^) as the camera is lowered. Shooting from below also makes the outer corners of the eyes appear lower. In short, the changes in the average positions of facial landmarks are consistent with what we would expect to see from differing selfie angles. The ambiguity between shooting angle and the real physical sizes of facial features is hard to fully disentangle from a two-dimensional image, both for a human viewer and for an algorithm. Although the authors are using face recognition technology designed to try to cancel out all effects of head pose, lighting, grooming, and other variables not intrinsic to the face, we can confirm that this doesn’t work perfectly; that’s why multiple distinct images of a person help when grouping photos by subject in Google Photos, and why a person may initially appear in more than one group. Tom White, a researcher at Victoria University in New Zealand, has experimented with the same facial recognition engine Kosinski and Wang use (VGG Face), and has found that its output varies systematically based on variables like smiling and head pose. When he trains a classifier based on VGG Face’s output to distinguish a happy expression from a neutral one, it gets the answer right 92% of the time — which is significant, given that the heterosexual female composite has a much more pronounced smile. Changes in head pose might be even more reliably detectable; for 576 test images, a classifier is able to pick out the ones facing to the right with 100% accuracy. 
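A hedged sketch of what such a "probe" classifier on top of a face-recognition model's output looks like; this is a generic scikit-learn illustration, not the authors' or Tom White's exact setup, and the random arrays below stand in for real embedding vectors and labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# X: (n_images, embedding_dim) vectors produced by some face-recognition model,
# y: binary labels such as 1 = smiling, 0 = neutral. Random stand-ins here,
# purely so the sketch runs; in practice these come from your embedding pipeline.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))
y = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", probe.score(X_test, y_test))
```

If a probe of this kind scores well above chance on attributes like expression or head pose, those attributes are plainly still present in the embedding, which is the point being made above.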
In summary, we have shown how the obvious differences between lesbian or gay and straight faces in selfies relate to grooming, presentation, and lifestyle — that is, differences in culture, not in facial structure. These differences include: We’ve demonstrated that just a handful of yes/no questions about these variables can do nearly as good a job at guessing orientation as supposedly sophisticated facial recognition AI. Further, the current generation of facial recognition remains sensitive to head pose and facial expression. Therefore — at least at this point — it’s hard to credit the notion that this AI is in some way superhuman at “outing” us based on subtle but unalterable details of our facial structure. This doesn’t negate the privacy concerns the authors and various commentators have raised, but it emphasizes that such concerns relate less to AI per se than to mass surveillance, which is troubling regardless of the technologies used (even when, as in the days of the Stasi in East Germany, these were nothing but paper files and audiotapes). Like computers or the internal combustion engine, AI is a general-purpose technology that can be used to automate a great many tasks, including ones that should not be undertaken in the first place. We are hopeful about the confluence of new, powerful AI technologies with social science, but not because we believe in reviving the 19th century research program of inferring people’s inner character from their outer appearance. Rather, we believe AI is an essential tool for understanding patterns in human culture and behavior. It can expose stereotypes inherent in everyday language. It can reveal uncomfortable truths, as in Google’s work with the Geena Davis Institute, where our face gender classifier established that men are seen and heard nearly twice as often as women in Hollywood movies (yet female-led films outperform others at the box office!). Making social progress and holding ourselves to account is more difficult without such hard evidence, even when it only confirms our suspicions. Two of us (Margaret Mitchell and Blaise Agüera y Arcas) are research scientists specializing in machine learning and AI at Google; Agüera y Arcas leads a team that includes deep learning applied to face recognition, and powers face grouping in Google Photos. Alex Todorov is a professor in the Psychology Department at Princeton, where he directs the social perception lab. He is the author of Face Value: The Irresistible Influence of First Impressions. [1] This wording is based on several large national surveys, which we were able to use to sanity-check our numbers. About 6% of respondents identified as “homosexual, gay or lesbian” and 85% as “heterosexual”. About 4% (of all genders) were exclusively same-sex attracted. Of the men, 10% were either sexually or romantically same-sex attracted, and of the women, 20%. Just under 1% of respondents were trans, and about 2% identified with both or neither of the pronouns “she” and “he”. These numbers are broadly consistent with other surveys, especially when considered as a function of age. The Mechanical Turk population skews somewhat younger than the overall population of the US, and consistent with other studies, our data show that younger people are far more likely to identify non-heteronormatively. [2] These are wider for same-sex attracted and lesbian women because they are minority populations, resulting in a larger sampling error. The same holds for older people in our sample. 
[3] For the remainder of the plots we stick to opposite-sex attracted and same-sex attracted, as the counts are higher and the error bars therefore smaller; these categories are also somewhat less culturally freighted, since they rely on questions about attraction rather than identity. As with eyeshadow and makeup, the effects are similar and often even larger when comparing heterosexual-identifying with lesbian- or gay-identifying people. [4] Although we didn’t test this explicitly, slightly different rates of laser correction surgery seem a likely cause of the small but growing disparity between opposite-sex attracted and same-sex attracted women who answer “yes” to the vision defect questions as they age. [5] This finding may prompt the further question, “Why do more opposite-sex attracted men work outdoors?” This is not addressed by any of our survey questions, but hopefully the other evidence presented here will discourage an essentialist assumption such as “straight men are just more outdoorsy” without the evidence of a controlled study that can support the leap from correlation to cause. Such explanations are a form of logical fallacy sometimes called a just-so story: “an unverifiable narrative explanation for a cultural practice”. [6] Of the 253 lesbian-identified women in the sample, 5, or 2%, were over six feet, and 25, or 10%, were over 5’9”. Out of 3,333 heterosexual women (women who answered “yes” to “Are you heterosexual or straight?”), only 16, or 0.5%, were over six feet, and 152, or 5%, were over 5’9”. [7] They note that these figures rise to 91% for men and 83% for women if 5 images are considered. [8] These results are based on the simplest possible machine learning technique, a linear classifier. The classifier is trained on a randomly chosen 70% of the data, with the remaining 30% of the data held out for testing. Over 500 repetitions of this procedure, the error is 69.53% ± 2.98%. With the same number of repetitions and holdout, basing the decision on height alone gives an error of 51.08% ± 3.27%, and basing it on eyeshadow alone yields 62.96% ± 2.39%. [9] A longstanding body of work, e.g. Goffman’s The Presentation of Self in Everyday Life (1959) and Jones and Pittman’s Toward a General Theory of Strategic Self-Presentation (1982), delves more deeply into why we present ourselves the way we do, both for instrumental reasons (status, power, attraction) and because our presentation informs and is informed by how we conceive of our social selves. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Blaise Aguera y Arcas leads Google’s AI group in Seattle. He founded Seadragon, and was one of the creators of Photosynth at Microsoft. " James Le,18.4K,11,https://towardsdatascience.com/a-tour-of-the-top-10-algorithms-for-machine-learning-newbies-dde4edffae11?source=tag_archive---------2----------------,A Tour of The Top 10 Algorithms for Machine Learning Newbies,"In machine learning, there’s something called the “No Free Lunch” theorem. In a nutshell, it states that no one algorithm works best for every problem, and it’s especially relevant for supervised learning (i.e. predictive modeling). For example, you can’t say that neural networks are always better than decision trees or vice-versa. There are many factors at play, such as the size and structure of your dataset. As a result, you should try many different algorithms for your problem, while using a hold-out “test set” of data to evaluate performance and select the winner. 
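A minimal sketch of that "try several algorithms and score them on held-out data" workflow with scikit-learn; the bundled dataset here is only a stand-in for your own X and y:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset, purely for illustration
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

candidates = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "decision tree": DecisionTreeClassifier(random_state=42),
    "k-nearest neighbors": KNeighborsClassifier(),
}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    print(f"{name}: held-out accuracy = {model.score(X_test, y_test):.3f}")
```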
Of course, the algorithms you try must be appropriate for your problem, which is where picking the right machine learning task comes in. As an analogy, if you need to clean your house, you might use a vacuum, a broom, or a mop, but you wouldn’t bust out a shovel and start digging. However, there is a common principle that underlies all supervised machine learning algorithms for predictive modeling: they learn a target function (f) that best maps input variables (X) to an output variable (Y), written Y = f(X). This is a general learning task where we would like to make predictions in the future (Y) given new examples of input variables (X). We don’t know what the function (f) looks like or its form. If we did, we would use it directly and we would not need to learn it from data using machine learning algorithms. The most common type of machine learning is to learn the mapping Y = f(X) to make predictions of Y for new X. This is called predictive modeling or predictive analytics, and our goal is to make the most accurate predictions possible. For machine learning newbies who are eager to understand the basics of machine learning, here is a quick tour of the top 10 machine learning algorithms used by data scientists. Linear regression is perhaps one of the most well-known and well-understood algorithms in statistics and machine learning. Predictive modeling is primarily concerned with minimizing the error of a model, or making the most accurate predictions possible, at the expense of explainability. We will borrow, reuse and steal algorithms from many different fields, including statistics, and use them towards these ends. The representation of linear regression is an equation that describes the line that best fits the relationship between the input variables (x) and the output variable (y), by finding specific weightings for the input variables called coefficients (B). For example: y = B0 + B1 * x. We will predict y given the input x, and the goal of the linear regression learning algorithm is to find the values for the coefficients B0 and B1. Different techniques can be used to learn the linear regression model from data, such as a linear algebra solution for ordinary least squares and gradient descent optimization. Linear regression has been around for more than 200 years and has been extensively studied. Some good rules of thumb when using this technique are to remove variables that are very similar (correlated) and to remove noise from your data, if possible. It is a fast and simple technique and a good first algorithm to try. Logistic regression is another technique borrowed by machine learning from the field of statistics. It is the go-to method for binary classification problems (problems with two class values). Logistic regression is like linear regression in that the goal is to find the values for the coefficients that weight each input variable. Unlike linear regression, the prediction for the output is transformed using a non-linear function called the logistic function. The logistic function looks like a big S and will transform any value into the range 0 to 1. This is useful because we can apply a rule to the output of the logistic function to snap values to 0 and 1 (e.g. if the output is less than 0.5, predict class 0; otherwise predict class 1) and so obtain a class value. Because of the way that the model is learned, the predictions made by logistic regression can also be used as the probability of a given data instance belonging to class 0 or class 1. This can be useful for problems where you need to give more rationale for a prediction.
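A minimal sketch of both models with scikit-learn, fit on made-up one-dimensional data purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Made-up data, purely for illustration.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(100, 1))

# Linear regression: learn B0 and B1 in y = B0 + B1 * x.
y_continuous = 3.0 + 2.0 * x[:, 0] + rng.normal(0, 1, size=100)
lin = LinearRegression().fit(x, y_continuous)
print("B0 ~", lin.intercept_, "B1 ~", lin.coef_[0])

# Logistic regression: the model outputs a probability between 0 and 1,
# which is thresholded at 0.5 to predict a class.
y_class = (x[:, 0] > 5).astype(int)
log = LogisticRegression().fit(x, y_class)
print("P(class 1 | x=7) ~", log.predict_proba([[7.0]])[0, 1])
```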
Like linear regression, logistic regression works better when you remove attributes that are unrelated to the output variable, as well as attributes that are very similar (correlated) to each other. It’s a fast model to learn and effective on binary classification problems. Logistic regression is a classification algorithm traditionally limited to only two-class classification problems. If you have more than two classes, then the Linear Discriminant Analysis (LDA) algorithm is the preferred linear classification technique. The representation of LDA is pretty straightforward. It consists of statistical properties of your data, calculated for each class; for a single input variable, this includes the mean value for each class and the variance calculated across all classes. Predictions are made by calculating a discriminant value for each class and making a prediction for the class with the largest value. The technique assumes that the data has a Gaussian distribution (bell curve), so it is a good idea to remove outliers from your data beforehand. It’s a simple and powerful method for classification predictive modeling problems. Decision Trees are an important type of algorithm for predictive modeling in machine learning. The representation of the decision tree model is a binary tree: the same binary tree from algorithms and data structures, nothing too fancy. Each node represents a single input variable (x) and a split point on that variable (assuming the variable is numeric). The leaf nodes of the tree contain an output variable (y) which is used to make a prediction. Predictions are made by walking the splits of the tree until arriving at a leaf node and outputting the class value at that leaf node. Trees are fast to learn and very fast for making predictions. They are also often accurate for a broad range of problems and do not require any special preparation of your data. Naive Bayes is a simple but surprisingly powerful algorithm for predictive modeling. The model is composed of two types of probabilities that can be calculated directly from your training data: 1) the probability of each class; and 2) the conditional probability for each class given each x value. Once calculated, the probability model can be used to make predictions for new data using Bayes’ Theorem. When your data is real-valued, it is common to assume a Gaussian distribution (bell curve) so that you can easily estimate these probabilities. Naive Bayes is called naive because it assumes that each input variable is independent. This is a strong assumption and unrealistic for real data; nevertheless, the technique is very effective on a large range of complex problems. The KNN algorithm is very simple and very effective. The model representation for KNN is the entire training dataset. Simple, right? Predictions are made for a new data point by searching through the entire training set for the K most similar instances (the neighbors) and summarizing the output variable for those K instances. For regression problems, this might be the mean output value; for classification problems, it might be the mode (most common) class value. The trick is in how to determine the similarity between the data instances. The simplest technique, if your attributes are all on the same scale (all in inches, for example), is to use the Euclidean distance, a number you can calculate directly based on the differences between each input variable. KNN can require a lot of memory or space to store all of the data, but it only performs a calculation (or learns) when a prediction is needed, just in time.
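To make the "store everything, compute distances only at prediction time" idea concrete, here is a hand-rolled KNN classifier on tiny made-up data (in practice, scikit-learn's KNeighborsClassifier does the same job):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Predict the class of x_new as the most common class among its k nearest
    training points (Euclidean distance). The 'model' is just the stored data."""
    distances = np.linalg.norm(X_train - x_new, axis=1)
    nearest = np.argsort(distances)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Tiny illustrative dataset: two features, two classes.
X_train = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.9]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([4.8, 5.1])))  # -> 1
```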
You can also update and curate your training instances over time to keep predictions accurate. The idea of distance or closeness can break down in very high dimensions (lots of input variables) which can negatively affect the performance of the algorithm on your problem. This is called the curse of dimensionality. It suggests you only use those input variables that are most relevant to predicting the output variable. A downside of K-Nearest Neighbors is that you need to hang on to your entire training dataset. The Learning Vector Quantization algorithm (or LVQ for short) is an artificial neural network algorithm that allows you to choose how many training instances to hang onto and learns exactly what those instances should look like. The representation for LVQ is a collection of codebook vectors. These are selected randomly in the beginning and adapted to best summarize the training dataset over a number of iterations of the learning algorithm. After learned, the codebook vectors can be used to make predictions just like K-Nearest Neighbors. The most similar neighbor (best matching codebook vector) is found by calculating the distance between each codebook vector and the new data instance. The class value or (real value in the case of regression) for the best matching unit is then returned as the prediction. Best results are achieved if you rescale your data to have the same range, such as between 0 and 1. If you discover that KNN gives good results on your dataset try using LVQ to reduce the memory requirements of storing the entire training dataset. Support Vector Machines are perhaps one of the most popular and talked about machine learning algorithms. A hyperplane is a line that splits the input variable space. In SVM, a hyperplane is selected to best separate the points in the input variable space by their class, either class 0 or class 1. In two-dimensions, you can visualize this as a line and let’s assume that all of our input points can be completely separated by this line. The SVM learning algorithm finds the coefficients that results in the best separation of the classes by the hyperplane. The distance between the hyperplane and the closest data points is referred to as the margin. The best or optimal hyperplane that can separate the two classes is the line that has the largest margin. Only these points are relevant in defining the hyperplane and in the construction of the classifier. These points are called the support vectors. They support or define the hyperplane. In practice, an optimization algorithm is used to find the values for the coefficients that maximizes the margin. SVM might be one of the most powerful out-of-the-box classifiers and worth trying on your dataset. Random Forest is one of the most popular and most powerful machine learning algorithms. It is a type of ensemble machine learning algorithm called Bootstrap Aggregation or bagging. The bootstrap is a powerful statistical method for estimating a quantity from a data sample. Such as a mean. You take lots of samples of your data, calculate the mean, then average all of your mean values to give you a better estimation of the true mean value. In bagging, the same approach is used, but instead for estimating entire statistical models, most commonly decision trees. Multiple samples of your training data are taken then models are constructed for each data sample. 
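A minimal sketch of that sampling-and-fitting procedure, using scikit-learn decision trees and a stand-in dataset purely for illustration (how the per-model predictions get combined is described just below):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset, purely for illustration
rng = np.random.default_rng(0)

# Draw bootstrap samples (rows sampled with replacement) and fit one tree per sample.
trees = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X))
    trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

# The ensemble's prediction combines the individual trees' votes (averaging, as described next).
votes = np.mean([t.predict(X[:5]) for t in trees], axis=0)
print((votes > 0.5).astype(int))
```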
When you need to make a prediction for new data, each model makes a prediction and the predictions are averaged to give a better estimate of the true output value. Random forest is a tweak on this approach where decision trees are created so that rather than selecting optimal split points, suboptimal splits are made by introducing randomness. The models created for each sample of the data are therefore more different than they otherwise would be, but still accurate in their unique and different ways. Combining their predictions results in a better estimate of the true underlying output value. If you get good results with an algorithm with high variance (like decision trees), you can often get better results by bagging that algorithm. Boosting is an ensemble technique that attempts to create a strong classifier from a number of weak classifiers. This is done by building a model from the training data, then creating a second model that attempts to correct the errors from the first model. Models are added until the training set is predicted perfectly or a maximum number of models are added. AdaBoost was the first really successful boosting algorithm developed for binary classification. It is the best starting point for understanding boosting. Modern boosting methods build on AdaBoost, most notably stochastic gradient boosting machines. AdaBoost is used with short decision trees. After the first tree is created, the performance of the tree on each training instance is used to weight how much attention the next tree that is created should pay attention to each training instance. Training data that is hard to predict is given more weight, whereas easy to predict instances are given less weight. Models are created sequentially one after the other, each updating the weights on the training instances that affect the learning performed by the next tree in the sequence. After all the trees are built, predictions are made for new data, and the performance of each tree is weighted by how accurate it was on training data. Because so much attention is put on correcting mistakes by the algorithm it is important that you have clean data with outliers removed. A typical question asked by a beginner, when facing a wide variety of machine learning algorithms, is “which algorithm should I use?” The answer to the question varies depending on many factors, including: (1) The size, quality, and nature of data; (2) The available computational time; (3) The urgency of the task; and (4) What you want to do with the data. Even an experienced data scientist cannot tell which algorithm will perform the best before trying different algorithms. Although there are many other Machine Learning algorithms, these are the most popular ones. If you’re a newbie to Machine Learning, these would be a good starting point to learn. — — If you enjoyed this piece, I’d love it if you hit the clap button 👏 so others might stumble upon it. You can find my own code on GitHub, and more of my writing and projects at https://jameskle.com/. You can also follow me on Twitter, email me directly or find me on LinkedIn. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Blue Ocean Thinker (https://jameskle.com/) Sharing concepts, ideas, and codes. 
" Emmanuel Ameisen,12.8K,13,https://blog.insightdatascience.com/how-to-solve-90-of-nlp-problems-a-step-by-step-guide-fda605278e4e?source=tag_archive---------3----------------,How to solve 90% of NLP problems: a step-by-step guide,"For more content like this, follow Insight and Emmanuel on Twitter. Whether you are an established company or working to launch a new service, you can always leverage text data to validate, improve, and expand the functionalities of your product. The science of extracting meaning and learning from text data is an active topic of research called Natural Language Processing (NLP). NLP produces new and exciting results on a daily basis, and is a very large field. However, having worked with hundreds of companies, the Insight team has seen a few key practical applications come up much more frequently than any other: While many NLP papers and tutorials exist online, we have found it hard to find guidelines and tips on how to approach these problems efficiently from the ground up. After leading hundreds of projects a year and gaining advice from top teams all over the United States, we wrote this post to explain how to build Machine Learning solutions to solve problems like the ones mentioned above. We’ll begin with the simplest method that could work, and then move on to more nuanced solutions, such as feature engineering, word vectors, and deep learning. After reading this article, you’ll know how to: We wrote this post as a step-by-step guide; it can also serve as a high level overview of highly effective standard approaches. This post is accompanied by an interactive notebook demonstrating and applying all these techniques. Feel free to run the code and follow along! Every Machine Learning problem starts with data, such as a list of emails, posts, or tweets. Common sources of textual information include: “Disasters on Social Media” dataset For this post, we will use a dataset generously provided by CrowdFlower, called “Disasters on Social Media”, where: Our task will be to detect which tweets are about a disastrous event as opposed to an irrelevant topic such as a movie. Why? A potential application would be to exclusively notify law enforcement officials about urgent emergencies while ignoring reviews of the most recent Adam Sandler film. A particular challenge with this task is that both classes contain the same search terms used to find the tweets, so we will have to use subtler differences to distinguish between them. In the rest of this post, we will refer to tweets that are about disasters as “disaster”, and tweets about anything else as “irrelevant”. We have labeled data and so we know which tweets belong to which categories. As Richard Socher outlines below, it is usually faster, simpler, and cheaper to find and label enough data to train a model on, rather than trying to optimize a complex unsupervised method. One of the key skills of a data scientist is knowing whether the next step should be working on the model or the data. A good rule of thumb is to look at the data first and then clean it up. A clean dataset will allow a model to learn meaningful features and not overfit on irrelevant noise. Here is a checklist to use to clean your data: (see the code for more details): After following these steps and checking for additional errors, we can start using the clean, labelled data to train models! Machine Learning models take numerical values as input. 
Models working on images, for example, take in a matrix representing the intensity of each pixel in each color channel. Our dataset is a list of sentences, so in order for our algorithm to extract patterns from the data, we first need to find a way to represent it in a way that our algorithm can understand, i.e. as a list of numbers. A natural way to represent text for computers is to encode each character individually as a number (ASCII for example). If we were to feed this simple representation into a classifier, it would have to learn the structure of words from scratch based only on our data, which is impossible for most datasets. We need to use a higher level approach. For example, we can build a vocabulary of all the unique words in our dataset, and associate a unique index to each word in the vocabulary. Each sentence is then represented as a list that is as long as the number of distinct words in our vocabulary. At each index in this list, we mark how many times the given word appears in our sentence. This is called a Bag of Words model, since it is a representation that completely ignores the order of words in our sentence. This is illustrated below. We have around 20,000 words in our vocabulary in the “Disasters of Social Media” example, which means that every sentence will be represented as a vector of length 20,000. The vector will contain mostly 0s because each sentence contains only a very small subset of our vocabulary. In order to see whether our embeddings are capturing information that is relevant to our problem (i.e. whether the tweets are about disasters or not), it is a good idea to visualize them and see if the classes look well separated. Since vocabularies are usually very large and visualizing data in 20,000 dimensions is impossible, techniques like PCA will help project the data down to two dimensions. This is plotted below. The two classes do not look very well separated, which could be a feature of our embeddings or simply of our dimensionality reduction. In order to see whether the Bag of Words features are of any use, we can train a classifier based on them. When first approaching a problem, a general best practice is to start with the simplest tool that could solve the job. Whenever it comes to classifying data, a common favorite for its versatility and explainability is Logistic Regression. It is very simple to train and the results are interpretable as you can easily extract the most important coefficients from the model. We split our data in to a training set used to fit our model and a test set to see how well it generalizes to unseen data. After training, we get an accuracy of 75.4%. Not too shabby! Guessing the most frequent class (“irrelevant”) would give us only 57%. However, even if 75% precision was good enough for our needs, we should never ship a model without trying to understand it. A first step is to understand the types of errors our model makes, and which kind of errors are least desirable. In our example, false positives are classifying an irrelevant tweet as a disaster, and false negatives are classifying a disaster as an irrelevant tweet. If the priority is to react to every potential event, we would want to lower our false negatives. If we are constrained in resources however, we might prioritize a lower false positive rate to reduce false alarms. A good way to visualize this information is using a Confusion Matrix, which compares the predictions our model makes with the true label. 
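A minimal sketch of the pipeline so far (Bag of Words counts, a Logistic Regression, accuracy, and a confusion matrix) with scikit-learn; the tweets and labels lists below are hypothetical stand-ins for the cleaned dataset described above:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for the cleaned dataset: tweet strings and 0/1 labels
# (1 = "disaster", 0 = "irrelevant").
tweets = [
    "forest fire near la ronge sask canada",
    "thousands evacuated after flooding downtown",
    "earthquake damage reported across the city",
    "i love this movie so much",
    "looking forward to the weekend",
    "this new song is fire",
]
labels = [1, 1, 1, 0, 0, 0]

X_train, X_test, y_train, y_test = train_test_split(tweets, labels, test_size=0.2, random_state=40)

# Bag of Words: one column per vocabulary word, word counts per sentence.
vectorizer = CountVectorizer()
X_train_counts = vectorizer.fit_transform(X_train)
X_test_counts = vectorizer.transform(X_test)

clf = LogisticRegression().fit(X_train_counts, y_train)
y_pred = clf.predict(X_test_counts)
print("accuracy:", accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
```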
Ideally, the matrix would be a diagonal line from top left to bottom right (our predictions match the truth perfectly). Our classifier creates more false negatives than false positives (proportionally). In other words, our model’s most common error is inaccurately classifying disasters as irrelevant. If false positives represent a high cost for law enforcement, this could be a good bias for our classifier to have. To validate our model and interpret its predictions, it is important to look at which words it is using to make decisions. If our data is biased, our classifier will make accurate predictions in the sample data, but the model would not generalize well in the real world. Here we plot the most important words for both the disaster and irrelevant class. Plotting word importance is simple with Bag of Words and Logistic Regression, since we can just extract and rank the coefficients that the model used for its predictions. Our classifier correctly picks up on some patterns (hiroshima, massacre), but clearly seems to be overfitting on some meaningless terms (heyoo, x1392). Right now, our Bag of Words model is dealing with a huge vocabulary of different words and treating all words equally. However, some of these words are very frequent, and are only contributing noise to our predictions. Next, we will try a way to represent sentences that can account for the frequency of words, to see if we can pick up more signal from our data. In order to help our model focus more on meaningful words, we can use a TF-IDF score (Term Frequency, Inverse Document Frequency) on top of our Bag of Words model. TF-IDF weighs words by how rare they are in our dataset, discounting words that are too frequent and just add to the noise. Here is the PCA projection of our new embeddings. We can see above that there is a clearer distinction between the two colors. This should make it easier for our classifier to separate both groups. Let’s see if this leads to better performance. Training another Logistic Regression on our new embeddings, we get an accuracy of 76.2%. A very slight improvement. Has our model started picking up on more important words? If we are getting a better result while preventing our model from “cheating” then we can truly consider this model an upgrade. The words it picked up look much more relevant! Although our metrics on our test set only increased slightly, we have much more confidence in the terms our model is using, and thus would feel more comfortable deploying it in a system that would interact with customers. Our latest model managed to pick up on high signal words. However, it is very likely that if we deploy this model, we will encounter words that we have not seen in our training set before. The previous model will not be able to accurately classify these tweets, even if it has seen very similar words during training. To solve this problem, we need to capture the semantic meaning of words, meaning we need to understand that words like ‘good’ and ‘positive’ are closer than ‘apricot’ and ‘continent.’ The tool we will use to help us capture meaning is called Word2Vec. Using pre-trained words Word2Vec is a technique to find continuous embeddings for words. It learns from reading massive amounts of text and memorizing which words tend to appear in similar contexts. After being trained on enough data, it generates a 300-dimension vector for each word in a vocabulary, with words of similar meaning being closer to each other. 
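As an aside, loading a set of pre-trained vectors and querying them takes only a couple of lines with gensim. This is a sketch: the file name below refers to the widely distributed 300-dimensional Google News vectors and is an assumption, not necessarily the exact file used for this post.

```python
from gensim.models import KeyedVectors

# 300-dimensional pre-trained Word2Vec vectors (the file name is illustrative).
word2vec = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin.gz", binary=True)

print(word2vec.most_similar("disaster", topn=5))    # semantically close words
print(word2vec.similarity("good", "positive"))      # high: similar meaning
print(word2vec.similarity("apricot", "continent"))  # low: unrelated meaning
```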
The authors of the paper open sourced a model that was pre-trained on a very large corpus which we can leverage to include some knowledge of semantic meaning into our model. The pre-trained vectors can be found in the repository associated with this post. A quick way to get a sentence embedding for our classifier is to average Word2Vec scores of all words in our sentence. This is a Bag of Words approach just like before, but this time we only lose the syntax of our sentence, while keeping some semantic information. Here is a visualization of our new embeddings using previous techniques: The two groups of colors look even more separated here, our new embeddings should help our classifier find the separation between both classes. After training the same model a third time (a Logistic Regression), we get an accuracy score of 77.7%, our best result yet! Time to inspect our model. Since our embeddings are not represented as a vector with one dimension per word as in our previous models, it’s harder to see which words are the most relevant to our classification. While we still have access to the coefficients of our Logistic Regression, they relate to the 300 dimensions of our embeddings rather than the indices of words. For such a low gain in accuracy, losing all explainability seems like a harsh trade-off. However, with more complex models we can leverage black box explainers such as LIME in order to get some insight into how our classifier works. LIME LIME is available on Github through an open-sourced package. A black-box explainer allows users to explain the decisions of any classifier on one particular example by perturbing the input (in our case removing words from the sentence) and seeing how the prediction changes. Let’s see a couple explanations for sentences from our dataset. However, we do not have time to explore the thousands of examples in our dataset. What we’ll do instead is run LIME on a representative sample of test cases and see which words keep coming up as strong contributors. Using this approach we can get word importance scores like we had for previous models and validate our model’s predictions. Looks like the model picks up highly relevant words implying that it appears to make understandable decisions. These seem like the most relevant words out of all previous models and therefore we’re more comfortable deploying in to production. We’ve covered quick and efficient approaches to generate compact sentence embeddings. However, by omitting the order of words, we are discarding all of the syntactic information of our sentences. If these methods do not provide sufficient results, you can utilize more complex model that take in whole sentences as input and predict labels without the need to build an intermediate representation. A common way to do that is to treat a sentence as a sequence of individual word vectors using either Word2Vec or more recent approaches such as GloVe or CoVe. This is what we will do below. Convolutional Neural Networks for Sentence Classification train very quickly and work well as an entry level deep learning architecture. While Convolutional Neural Networks (CNN) are mainly known for their performance on image data, they have been providing excellent results on text related tasks, and are usually much quicker to train than most complex NLP approaches (e.g. LSTMs and Encoder/Decoder architectures). This model preserves the order of words and learns valuable information on which sequences of words are predictive of our target classes. 
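To make that architecture concrete, here is a hedged Keras sketch of a small convolutional text classifier in the same spirit; it is not the exact model from the post, and the vocabulary size, sequence length and filter settings are illustrative assumptions.

```python
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumption: roughly the size of the tweet vocabulary discussed earlier
MAX_LEN = 35         # assumption: tweets padded/truncated to 35 tokens
EMBED_DIM = 300      # matches the Word2Vec dimensionality used above

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)           # could be seeded with Word2Vec weights
x = layers.Conv1D(128, kernel_size=3, activation="relu")(x)   # looks at 3-word windows
x = layers.GlobalMaxPooling1D()(x)
x = layers.Dense(128, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)            # disaster vs. irrelevant

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```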
Contrary to previous models, it can tell the difference between “Alex eats plants” and “Plants eat Alex.” Training this model does not require much more work than previous approaches (see code for details) and gives us a model that is much better than the previous ones, getting 79.5% accuracy! As with the models above, the next step should be to explore and explain the predictions using the methods we described to validate that it is indeed the best model to deploy to users. By now, you should feel comfortable tackling this on your own. Here is a quick recap of the approach we’ve successfully used: These approaches were applied to a particular example case using models tailored towards understanding and leveraging short text such as tweets, but the ideas are widely applicable to a variety of problems. I hope this helped you, we’d love to hear your comments and questions! Feel free to comment below or reach out to @EmmanuelAmeisen here or on Twitter. Want to learn applied Artificial Intelligence from top professionals in Silicon Valley or New York? Learn more about the Artificial Intelligence program. Are you a company working in AI and would like to get involved in the Insight AI Fellows Program? Feel free to get in touch. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. AI Lead at Insight AI @EmmanuelAmeisen Insight Fellows Program - Your bridge to a career in data " Mybridge,10.1K,6,https://medium.mybridge.co/30-amazing-machine-learning-projects-for-the-past-year-v-2018-b853b8621ac7?source=tag_archive---------4----------------,30 Amazing Machine Learning Projects for the Past Year (v.2018),"For the past year, we’ve compared nearly 8,800 open source Machine Learning projects to pick Top 30 (0.3% chance). This is an extremely competitive list and it carefully picks the best open source Machine Learning libraries, datasets and apps published between January and December 2017. Mybridge AI evaluates the quality by considering popularity, engagement and recency. To give you an idea about the quality, the average number of Github stars is 3,558. Open source projects can be useful for data scientists. You can learn by reading the source code and build something on top of the existing projects. Give a plenty of time to play around with Machine Learning projects you may have missed for the past year. A) Neural Networks Deep Learning A-ZTM: Hands-On Artificial Neural Networks [68,745 recommends, 4.5/5 stars] B) TensorFlow Complete Guide to TensorFlow for Deep Learning with Python [17,834 recommends, 4.6/5 stars] (Click the numbers below. Credit given to the biggest contributor.) FastText: Library for fast text representation and classification. [11786 stars on Github]. Courtesy of Facebook Research ........... [ Muse: Multilingual Unsupervised or Supervised word Embeddings, based on Fast Text. 695 stars on Github] Deep-photo-styletransfer: Code and data for paper “Deep Photo Style Transfer” [9747 stars on Github]. Courtesy of Fujun Luan, Ph.D. at Cornell University The world’s simplest facial recognition api for Python and the command line [8672 stars on Github]. Courtesy of Adam Geitgey Magenta: Music and Art Generation with Machine Intelligence [8113 stars on Github]. Sonnet: TensorFlow-based neural network library [5731 stars on Github]. Courtesy of Malcolm Reynolds at Deepmind deeplearn.js: A hardware-accelerated machine intelligence library for the web [5462 stars on Github]. 
Courtesy of Nikhil Thorat at Google Brain Fast Style Transfer in TensorFlow [4843 stars on Github]. Courtesy of Logan Engstrom at MIT Pysc2: StarCraft II Learning Environment [3683 stars on Github]. Courtesy of Timo Ewalds at DeepMind AirSim: Open source simulator based on Unreal Engine for autonomous vehicles from Microsoft AI & Research [3861 stars on Github]. Courtesy of Shital Shah at Microsoft Facets: Visualizations for machine learning datasets [3371 stars on Github]. Courtesy of Google Brain Style2Paints: AI colorization of images [3310 stars on Github]. Tensor2Tensor: A library for generalized sequence to sequence models — Google Research [3087 stars on Github]. Courtesy of Ryan Sepassi at Google Brain Image-to-image translation in PyTorch (e.g. horse2zebra, edges2cats, and more) [2847 stars on Github]. Courtesy of Jun-Yan Zhu, Ph.D at Berkeley Faiss: A library for efficient similarity search and clustering of dense vectors. [2629 stars on Github]. Courtesy of Facebook Research Fashion-mnist: A MNIST-like fashion product database [2780 stars on Github]. Courtesy of Han Xiao, Research Scientist Zalando Tech ParlAI: A framework for training and evaluating AI models on a variety of openly available dialog datasets [2578 stars on Github]. Courtesy of Alexander Miller at Facebook Research Fairseq: Facebook AI Research Sequence-to-Sequence Toolkit [2571 stars on Github]. Pyro: Deep universal probabilistic programming with Python and PyTorch [2387 stars on Github]. Courtesy of Uber AI Labs iGAN: Interactive Image Generation powered by GAN [2369 stars on Github]. Deep-image-prior: Image restoration with neural networks but without learning [2188 stars on Github]. Courtesy of Dmitry Ulyanov, Ph.D at Skoltech Face_classification: Real-time face detection and emotion/gender classification using fer2013/imdb datasets with a keras CNN model and openCV. [1967 stars on Github]. Speech-to-Text-WaveNet : End-to-end sentence level English speech recognition using DeepMind’s WaveNet and tensorflow [1961 stars on Github]. Courtesy of Namju Kim at Kakao Brain StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation [1954 stars on Github]. Courtesy of Yunjey Choi at Korea University Ml-agents: Unity Machine Learning Agents [1658 stars on Github]. Courtesy of Arthur Juliani, Deep Learning at Unity3D DeepVideoAnalytics: A distributed visual search and visual data analytics platform [1494 stars on Github]. Courtesy of Akshay Bhat, Ph.D at Cornell University OpenNMT: Open-Source Neural Machine Translation in Torch [1490 stars on Github]. Pix2pixHD: Synthesizing and manipulating 2048x1024 images with conditional GANs [1283 stars on Github]. Courtesy of Ming-Yu Liu at AI Research Scientist at Nvidia Horovod: Distributed training framework for TensorFlow. [1188 stars on Github]. Courtesy of Uber Engineering AI-Blocks: A powerful and intuitive WYSIWYG interface that allows anyone to create Machine Learning models [899 stars on Github]. Deep neural networks for voice conversion (voice style transfer) in Tensorflow [845 stars on Github]. Courtesy of Dabi Ahn, AI Research at Kakao Brain That’s it for Machine Learning open source projects. If you like this curation, read best daily articles based on your programming skills on our website. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. 
We rank articles for professionals Read more and achieve more " David Foster,12.8K,11,https://medium.com/applied-data-science/how-to-build-your-own-alphazero-ai-using-python-and-keras-7f664945c188?source=tag_archive---------5----------------,How to build your own AlphaZero AI using Python and Keras,"In this article I’ll attempt to cover three things: In March 2016, Deepmind’s AlphaGo beat 18 times world champion Go player Lee Sedol 4–1 in a series watched by over 200 million people. A machine had learnt a super-human strategy for playing Go, a feat previously thought impossible, or at the very least, at least a decade away from being accomplished. This in itself, was a remarkable achievement. However, on 18th October 2017, DeepMind took a giant leap further. The paper ‘Mastering the Game of Go without Human Knowledge’ unveiled a new variant of the algorithm, AlphaGo Zero, that had defeated AlphaGo 100–0. Incredibly, it had done so by learning solely through self-play, starting ‘tabula rasa’ (blank state) and gradually finding strategies that would beat previous incarnations of itself. No longer was a database of human expert games required to build a super-human AI . A mere 48 days later, on 5th December 2017, DeepMind released another paper ‘Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm’ showing how AlphaGo Zero could be adapted to beat the world-champion programs StockFish and Elmo at chess and shogi. The entire learning process, from being shown the games for the first time, to becoming the best computer program in the world, had taken under 24 hours. With this, AlphaZero was born — the general algorithm for getting good at something, quickly, without any prior knowledge of human expert strategy. There are two amazing things about this achievement: It cannot be overstated how important this is. This means that the underlying methodology of AlphaGo Zero can be applied to ANY game with perfect information (the game state is fully known to both players at all times) because no prior expertise is required beyond the rules of the game. This is how it was possible for DeepMind to publish the chess and shogi papers only 48 days after the original AlphaGo Zero paper. Quite literally, all that needed to change was the input file that describes the mechanics of the game and to tweak the hyper-parameters relating to the neural network and Monte Carlo tree search. If AlphaZero used super-complex algorithms that only a handful of people in the world understood, it would still be an incredible achievement. What makes it extraordinary is that a lot of the ideas in the paper are actually far less complex than previous versions. At its heart, lies the following beautifully simple mantra for learning: Doesn’t that sound a lot like how you learn to play games? When you play a bad move, it’s either because you misjudged the future value of resulting positions, or you misjudged the likelihood that your opponent would play a certain move, so didn’t think to explore that possibility. These are exactly the two aspects of gameplay that AlphaZero is trained to learn. Firstly, check out the AlphaGo Zero cheat sheet for a high level understanding of how AlphaGo Zero works. It’s worth having that to refer to as we walk through each part of the code. There’s also a great article here that explains how AlphaZero works in more detail. Clone this Git repository, which contains the code I’ll be referencing. 
To start the learning process, run the top two panels in the run.ipynb Jupyter notebook. Once it’s built up enough game positions to fill its memory the neural network will begin training. Through additional self-play and training, it will gradually get better at predicting the game value and next moves from any position, resulting in better decision making and smarter overall play. We’ll now have a look at the code in more detail, and show some results that demonstrate the AI getting stronger over time. N.B — This is my own understanding of how AlphaZero works based on the information available in the papers referenced above. If any of the below is incorrect, apologies and I’ll endeavour to correct it! The game that our algorithm will learn to play is Connect4 (or Four In A Row). Not quite as complex as Go... but there are still 4,531,985,219,092 game positions in total. The game rules are straightforward. Players take it in turns to enter a piece of their colour in the top of any available column. The first player to get four of their colour in a row — each vertically, horizontally or diagonally, wins. If the entire grid is filled without a four-in-a-row being created, the game is drawn. Here’s a summary of the key files that make up the codebase: This file contains the game rules for Connect4. Each squares is allocated a number from 0 to 41, as follows: The game.py file gives the logic behind moving from one game state to another, given a chosen action. For example, given the empty board and action 38, the takeAction method return a new game state, with the starting player’s piece at the bottom of the centre column. You can replace the game.py file with any game file that conforms to the same API and the algorithm will in principal, learn strategy through self play, based on the rules you have given it. This contains the code that starts the learning process. It loads the game rules and then iterates through the main loop of the algorithm, which consist of three stages: There are two agents involved in this loop, the best_player and the current_player. The best_player contains the best performing neural network and is used to generate the self play memories. The current_player then retrains its neural network on these memories and is then pitched against the best_player. If it wins, the neural network inside the best_player is switched for the neural network inside the current_player, and the loop starts again. This contains the Agent class (a player in the game). Each player is initialised with its own neural network and Monte Carlo Search Tree. The simulate method runs the Monte Carlo Tree Search process. Specifically, the agent moves to a leaf node of the tree, evaluates the node with its neural network and then backfills the value of the node up through the tree. The act method repeats the simulation multiple times to understand which move from the current position is most favourable. It then returns the chosen action to the game, to enact the move. The replay method retrains the neural network, using memories from previous games. This file contains the Residual_CNN class, which defines how to build an instance of the neural network. It uses a condensed version of the neural network architecture in the AlphaGoZero paper — i.e. a convolutional layer, followed by many residual layers, then splitting into a value and policy head. The depth and number of convolutional filters can be specified in the config file. 
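For readers who want a feel for what such a two-headed network looks like without opening the repository, here is a heavily condensed Keras sketch; the board encoding, filter count and number of residual blocks are illustrative assumptions rather than the values in the repository's config file.

```python
from tensorflow.keras import layers, models

BOARD_SHAPE = (6, 7, 2)   # assumption: 6x7 Connect4 grid, 2 planes (own pieces, opponent pieces)
N_ACTIONS = 42            # one output per square, matching the 0-41 numbering described above
FILTERS = 64              # illustrative; the real value lives in the config file

def residual_block(x):
    """Two convolutions with a skip connection, as in the AlphaGo Zero architecture."""
    shortcut = x
    x = layers.Conv2D(FILTERS, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Conv2D(FILTERS, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.add([shortcut, x])
    return layers.Activation("relu")(x)

board = layers.Input(shape=BOARD_SHAPE)
x = layers.Conv2D(FILTERS, 3, padding="same")(board)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
for _ in range(4):            # 'many residual layers', condensed to four here
    x = residual_block(x)

# Value head: how good is this position for the current player? (between -1 and 1)
v = layers.Conv2D(1, 1)(x)
v = layers.Flatten()(v)
v = layers.Dense(20, activation="relu")(v)
value_head = layers.Dense(1, activation="tanh", name="value_head")(v)

# Policy head: one score per possible move, left as raw scores here; the post
# describes a custom loss that masks illegal moves before the cross entropy.
p = layers.Conv2D(2, 1)(x)
p = layers.Flatten()(p)
policy_head = layers.Dense(N_ACTIONS, name="policy_head")(p)

model = models.Model(board, [value_head, policy_head])
```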
The Keras library is used to build the network, with a backend of Tensorflow. To view individual convolutional filters and densely connected layers in the neural network, run the following inside the the run.ipynb notebook: This contains the Node, Edge and MCTS classes, that constitute a Monte Carlo Search Tree. The MCTS class contains the moveToLeaf and backFill methods previously mentioned, and instances of the Edge class store the statistics about each potential move. This is where you set the key parameters that influence the algorithm. Adjusting these variables will affect that running time, neural network accuracy and overall success of the algorithm. The above parameters produce a high quality Connect4 player, but take a long time to do so. To speed the algorithm up, try the following parameters instead. Contains the playMatches and playMatchesBetweenVersions functions that play matches between two agents. To play against your creation, run the following code (it’s also in the run.ipynb notebook) When you run the algorithm, all model and memory files are saved in the run folder, in the root directory. To restart the algorithm from this checkpoint later, transfer the run folder to the run_archive folder, attaching a run number to the folder name. Then, enter the run number, model version number and memory version number into the initialise.py file, corresponding to the location of the relevant files in the run_archive folder. Running the algorithm as usual will then start from this checkpoint. An instance of the Memory class stores the memories of previous games, that the algorithm uses to retrain the neural network of the current_player. This file contains a custom loss function, that masks predictions from illegal moves before passing to the cross entropy loss function. The locations of the run and run_archive folders. Log files are saved to the log folder inside the run folder. To turn on logging, set the values of the logger_disabled variables to False inside this file. Viewing the log files will help you to understand how the algorithm works and see inside its ‘mind’. For example, here is a sample from the logger.mcts file. Equally from the logger.tourney file, you can see the probabilities attached to each move, during the evaluation phase: Training over a couple of days produces the following chart of loss against mini-batch iteration number: The top line is the error in the policy head (the cross entropy of the MCTS move probabilities, against the output from the neural network). The bottom line is the error in the value head (the mean squared error between the actual game value and the neural network predict of the value). The middle line is an average of the two. Clearly, the neural network is getting better at predicting the value of each game state and the likely next moves. To show how this results in stronger and stronger play, I ran a league between 17 players, ranging from the 1st iteration of the neural network, up to the 49th. Each pairing played twice, with both players having a chance to play first. Here are the final standings: Clearly, the later versions of the neural network are superior to the earlier versions, winning most of their games. It also appears that the learning hasn’t yet saturated — with further training time, the players would continue to get stronger, learning more and more intricate strategies. As an example, one clear strategy that the neural network has favoured over time is grabbing the centre column early. 
Observe the difference between the first version of the algorithm and say, the 30th version: 1st neural network version 30th neural network version This is a good strategy as many lines require the centre column — claiming this early ensures your opponent cannot take advantage of this. This has been learnt by the neural network, without any human input. There is a game.py file for a game called ‘Metasquares’ in the games folder. This involves placing X and O markers in a grid to try to form squares of different sizes. Larger squares score more points than smaller squares and the player with the most points when the grid is full wins. If you switch the Connect4 game.py file for the Metasquares game.py file, the same algorithm will learn how to play Metasquares instead. Hopefully you find this article useful — let me know in the comments below if you find any typos or have questions about anything in the codebase or article and I’ll get back to you as soon as possible. If you would like to learn more about how our company, Applied Data Science develops innovative data science solutions for businesses, feel free to get in touch through our website or directly through LinkedIn. ... and if you like this, feel free to leave a few hearty claps :) Applied Data Science is a London based consultancy that implements end-to-end data science solutions for businesses, delivering measurable value. If you’re looking to do more with your data, let’s talk. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Co-founder of Applied Data Science Cutting edge data science news and projects " George Seif,11.4K,11,https://towardsdatascience.com/the-5-clustering-algorithms-data-scientists-need-to-know-a36d136ef68?source=tag_archive---------6----------------,The 5 Clustering Algorithms Data Scientists Need to Know,"Clustering is a Machine Learning technique that involves the grouping of data points. Given a set of data points, we can use a clustering algorithm to classify each data point into a specific group. In theory, data points that are in the same group should have similar properties and/or features, while data points in different groups should have highly dissimilar properties and/or features. Clustering is a method of unsupervised learning and is a common technique for statistical data analysis used in many fields. In Data Science, we can use clustering analysis to gain some valuable insights from our data by seeing what groups the data points fall into when we apply a clustering algorithm. Today, we’re going to look at 5 popular clustering algorithms that data scientists need to know and their pros and cons! K-Means is probably the most well know clustering algorithm. It’s taught in a lot of introductory data science and machine learning classes. It’s easy to understand and implement in code! Check out the graphic below for an illustration. K-Means has the advantage that it’s pretty fast, as all we’re really doing is computing the distances between points and group centers; very few computations! It thus has a linear complexity O(n). On the other hand, K-Means has a couple of disadvantages. Firstly, you have to select how many groups/classes there are. This isn’t always trivial and ideally with a clustering algorithm we’d want it to figure those out for us because the point of it is to gain some insight from the data. K-means also starts with a random choice of cluster centers and therefore it may yield different clustering results on different runs of the algorithm. 
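A quick scikit-learn sketch makes both points, choosing K yourself and the sensitivity to random initialization, easy to see; the toy data and parameters are purely illustrative.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)   # toy 2-D data

# We must choose the number of groups ourselves (n_clusters)...
for seed in (0, 1, 2):
    km = KMeans(n_clusters=4, n_init=1, random_state=seed)    # a single random initialization
    km.fit(X)
    print(f"seed={seed}  inertia={km.inertia_:.1f}")
# ...and with a single random initialization, different seeds can give different
# centers and different inertia (the sum of squared distances to the centers).
```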
Thus, the results may not be repeatable and lack consistency. Other clustering methods are more consistent. K-Medians is another clustering algorithm related to K-Means, except instead of recomputing the group center points using the mean, we use the median vector of the group. This method is less sensitive to outliers (because of using the median) but is much slower for larger datasets, as sorting is required on each iteration when computing the median vector. Mean shift clustering is a sliding-window-based algorithm that attempts to find dense areas of data points. It is a centroid-based algorithm, meaning that the goal is to locate the center points of each group/class, which works by updating candidates for center points to be the mean of the points within the sliding window. These candidate windows are then filtered in a post-processing stage to eliminate near-duplicates, forming the final set of center points and their corresponding groups. Check out the graphic below for an illustration. An illustration of the entire process from end-to-end with all of the sliding windows is shown below. Each black dot represents the centroid of a sliding window and each gray dot is a data point. In contrast to K-Means clustering, there is no need to select the number of clusters, as mean-shift automatically discovers this. That’s a massive advantage. The fact that the cluster centers converge towards the points of maximum density is also quite desirable, as it is intuitive to understand and fits well in a naturally data-driven sense. The drawback is that the selection of the window size/radius “r” can be non-trivial. DBSCAN is a density-based clustering algorithm similar to mean-shift, but with a couple of notable advantages. Check out another fancy graphic below and let’s get started! DBSCAN poses some great advantages over other clustering algorithms. Firstly, it does not require a pre-set number of clusters at all. It also identifies outliers as noise, unlike mean-shift, which simply throws them into a cluster even if the data point is very different. Additionally, it is able to find arbitrarily sized and arbitrarily shaped clusters quite well. The main drawback of DBSCAN is that it doesn’t perform as well as others when the clusters are of varying density. This is because the setting of the distance threshold ε and minPoints for identifying the neighborhood points will vary from cluster to cluster when the density varies. This drawback also occurs with very high-dimensional data, since again the distance threshold ε becomes challenging to estimate. One of the major drawbacks of K-Means is its naive use of the mean value for the cluster center. We can see why this isn’t the best way of doing things by looking at the image below. On the left-hand side, it looks quite obvious to the human eye that there are two circular clusters with different radii centered at the same mean. K-Means can’t handle this because the mean values of the clusters are very close together. K-Means also fails in cases where the clusters are not circular, again as a result of using the mean as the cluster center. Gaussian Mixture Models (GMMs) give us more flexibility than K-Means. With GMMs we assume that the data points are Gaussian distributed; this is a less restrictive assumption than saying they are circular by using the mean. That way, we have two parameters to describe the shape of the clusters: the mean and the standard deviation! 
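In scikit-learn terms this is a small sketch along the following lines; the toy data and parameters are illustrative assumptions.

```python
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)    # illustrative 2-D data

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)    # hard assignment to the most likely Gaussian
probs = gmm.predict_proba(X)   # soft, mixed-membership probabilities per point

print(gmm.means_)              # one mean (center) per cluster
print(gmm.covariances_)        # full covariances allow elliptical cluster shapes
```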
Taking an example in two dimensions, this means that the clusters can take any kind of elliptical shape (since we have standard deviation in both the x and y directions). Thus, each Gaussian distribution is assigned to a single cluster. In order to find the parameters of the Gaussian for each cluster (e.g the mean and standard deviation) we will use an optimization algorithm called Expectation–Maximization (EM). Take a look at the graphic below as an illustration of the Gaussians being fitted to the clusters. Then we can proceed on to the process of Expectation–Maximization clustering using GMMs. There are really 2 key advantages to using GMMs. Firstly GMMs are a lot more flexible in terms of cluster covariance than K-Means; due to the standard deviation parameter, the clusters can take on any ellipse shape, rather than being restricted to circles. K-Means is actually a special case of GMM in which each cluster’s covariance along all dimensions approaches 0. Secondly, since GMMs use probabilities, they can have multiple clusters per data point. So if a data point is in the middle of two overlapping clusters, we can simply define its class by saying it belongs X-percent to class 1 and Y-percent to class 2. I.e GMMs support mixed membership. Hierarchical clustering algorithms actually fall into 2 categories: top-down or bottom-up. Bottom-up algorithms treat each data point as a single cluster at the outset and then successively merge (or agglomerate) pairs of clusters until all clusters have been merged into a single cluster that contains all data points. Bottom-up hierarchical clustering is therefore called hierarchical agglomerative clustering or HAC. This hierarchy of clusters is represented as a tree (or dendrogram). The root of the tree is the unique cluster that gathers all the samples, the leaves being the clusters with only one sample. Check out the graphic below for an illustration before moving on to the algorithm steps Hierarchical clustering does not require us to specify the number of clusters and we can even select which number of clusters looks best since we are building a tree. Additionally, the algorithm is not sensitive to the choice of distance metric; all of them tend to work equally well whereas with other clustering algorithms, the choice of distance metric is critical. A particularly good use case of hierarchical clustering methods is when the underlying data has a hierarchical structure and you want to recover the hierarchy; other clustering algorithms can’t do this. These advantages of hierarchical clustering come at the cost of lower efficiency, as it has a time complexity of O(n3), unlike the linear complexity of K-Means and GMM. There are your top 5 clustering algorithms that a data scientist should know! We’ll end off with an awesome visualization of how well these algorithms and a few others perform, courtesy of Scikit Learn! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Certified Nerd. AI / Machine Learning Engineer. Sharing concepts, ideas, and codes. " Mybridge,6.6K,6,https://medium.mybridge.co/30-amazing-python-projects-for-the-past-year-v-2018-9c310b04cdb3?source=tag_archive---------7----------------,30 Amazing Python Projects for the Past Year (v.2018),"For the past year, we’ve compared nearly 15,000 open source Python projects to pick Top 30 (0.2% chance). 
This is an extremely competitive list and it carefully picks the best open source Python libraries, tools and programs published between January and December 2017. Mybridge AI evaluates the quality by considering popularity, engagement and recency. To give you an idea about the quality, the average number of Github stars is 3,707. Open source projects can be useful for programmers. You can learn by reading the source code and build something on top of the existing projects. Give a plenty of time to play around with Python projects you may have missed for the past year. A) Beginner The Python Bible: Build 11 Projects and Go from Beginner to Pro [27,672 recommends, 4.7/5 stars] B) Data Science Python for Data Science and Machine Learning Bootcamp: Use NumPy, Pandas, Seaborn , Matplotlib , Plotly [90,212 recommends, 4.6/5 stars] (Click the numbers below. Credit given to the biggest contributor.) Home-assistant (v0.6+): Open-source home automation platform running on Python 3 [11357 stars on Github]. Courtesy of Paulus Schoutsen Pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration [11019 stars on Github]. Courtesy of Adam Paszke and others at PyTorch Team Grumpy: A Python to Go source code transcompiler and runtime. [8367 stars on Github]. Courtesy of Dylan Trotter and others at Google Sanic: Async Python 3.5+ web server that’s written to go fast [8028 stars on Github]. Courtesy of Channel Cat and Eli Uriegas Python-fire: A library for automatically generating command line interfaces (CLIs) from absolutely any Python object. [7775 stars on Github]. Courtesy of David Bieber and others at Google Brain. spaCy (v2.0): Industrial-strength Natural Language Processing (NLP) with Python and Cython [7633 stars on Github]. Courtesy of Matthew Honnibal Pipenv: Python Development Workflow for Humans [7273 stars on Github]. Courtesy of Kenneth Reitz MicroPython: A lean and efficient Python implementation for microcontrollers and constrained systems [5728 stars on Github]. Prophet: Tool for producing high quality forecasts for time series data that has multiple seasonality with linear or non-linear growth [4369 stars on Github]. Courtesy of Facebook SerpentAI: Game Agent Framework in Python. Helping you create AIs / Bots to play any game [3411 stars on Github]. Courtesy of Nicholas Brochu Dash: Interactive, reactive web apps in pure python [3281 stars on Github]. Courtesy of Chris P InstaPy: Instagram Bot. Like/Comment/Follow Automation Script. [3179 stars on Github]. Courtesy of TimG Apistar: A fast and expressive API framework. For Python [3024 stars on Github]. Courtesy of Tom Christie Faiss: A library for efficient similarity search and clustering of dense vectors [2717 stars on Github]. Courtesy of Matthijs Douze and others at Facebook Research MechanicalSoup: A Python library for automating interaction with websites [2244 stars on Github]. Better-exceptions: Pretty and useful exceptions in Python, automatically [2121 stars on Github]. Courtesy of Qix Flashtext: Extract Keywords from sentence or Replace keywords in sentences [2019 stars on Github]. Courtesy of Vikash Singh Maya: Datetime for Humans in Python [1828 stars on Github]. Kenneth Reitz Mimesis (v1.0): Python library, which helps generate mock data in different languages for various purposes. These data can be especially useful at various stages of software development and testing [1732 stars on Github]. Courtesy of Líkið Geimfari Open-paperless: Scan, index, and archive all of your paper documents. 
A document management system. [1717 stars on Github]. Courtesy of Tina Zhou Fsociety: Hacking Tools Pack. A Penetration Testing Framework. [1585 stars on Github]. Courtesy of Manis Manisso LivePython: Visually trace Python code in real-time [1577 stars on Github]. Courtesy of Anastasis Germanidis Hatch: A modern project, package, and virtual env manager for Python [1537 stars on Github]. Courtesy of Ofek Lev Tangent: Source-to-Source Debuggable Derivatives in Pure Python [1433 stars on Github]. Courtesy of Alex Wiltschko and others at Google Brain Clairvoyant: A Python program that identifies and monitors historical cues for short term stock movement [1159 stars on Github]. Courtesy of Anthony Federico MonkeyType: A system for Python that generates static type annotations by collecting runtime types. [1143 stars on Github]. Courtesy of Carl Meyer at Instagram Engineering Eel: A little Python library for making simple Electron-like HTML/JS GUI apps [1137 stars on Github]. Surprise v1.0: A Python scikit for building and analyzing recommender systems [1103 stars on Github]. Gain: Web crawling framework for everyone. [1009 stars on Github]. Courtesy of 高久力 PDFTabExtract: A set of tools for extracting tables from PDF files helping to do data mining on scanned documents. [722 stars on Github]. That’s it for Python Open Source of the Year. If you like this curation, read best daily articles based on your programming skills on our website. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. We rank articles for professionals Read more and achieve more " Simon Greenman,10.2K,16,https://towardsdatascience.com/who-is-going-to-make-money-in-ai-part-i-77a2f30b8cef?source=tag_archive---------8----------------,Who Is Going To Make Money In AI? Part I – Towards Data Science,"We are in the midst of a gold rush in AI. But who will reap the economic benefits? The mass of startups who are all gold panning? The corporates who have massive gold mining operations? The technology giants who are supplying the picks and shovels? And which nations have the richest seams of gold? We are currently experiencing another gold rush in AI. Billions are being invested in AI startups across every imaginable industry and business function. Google, Amazon, Microsoft and IBM are in a heavyweight fight investing over $20 billion in AI in 2016. Corporates are scrambling to ensure they realise the productivity benefits of AI ahead of their competitors while looking over their shoulders at the startups. China is putting its considerable weight behind AI and the European Union is talking about a $22 billion AI investment as it fears losing ground to China and the US. AI is everywhere. From the 3.5 billion daily searches on Google to the new Apple iPhone X that uses facial recognition to Amazon Alexa that cutely answers our questions. Media headlines tout the stories of how AI is helping doctors diagnose diseases, banks better assess customer loan risks, farmers predict crop yields, marketers target and retain customers, and manufacturers improve quality control. And there are think tanks dedicated to studying the physical, cyber and political risks of AI. AI and machine learning will become ubiquitous and woven into the fabric of society. But as with any gold rush the question is who will find gold? Will it just be the brave, the few and the large? Or can the snappy upstarts grab their nuggets? Will those providing the picks and shovel make most of the money? And who will hit pay dirt? 
As I started thinking about who was going to make money in AI I ended up with seven questions. Who will make money across the (1) chip makers, (2) platform and infrastructure providers, (3) enabling models and algorithm providers, (4) enterprise solution providers, (5) industry vertical solution providers, (6) corporate users of AI and (7) nations? While there are many ways to skin the cat of the AI landscape, hopefully below provides a useful explanatory framework — a value chain of sorts. The companies noted are representative of larger players in each category but in no way is this list intended to be comprehensive or predictive. Even though the price of computational power has fallen exponentially, demand is rising even faster. AI and machine learning with its massive datasets and its trillions of vector and matrix calculations has a ferocious and insatiable appetite. Bring on the chips. NVIDIA’s stock is up 1500% in the past two years benefiting from the fact that their graphical processing unit (GPU) chips that were historically used to render beautiful high speed flowing games graphics were perfect for machine learning. Google recently launched its second generation of Tensor Processing Units (TPUs). And Microsoft is building its own Brainwave AI machine learning chips. At the same time startups such as Graphcore, who has raised over $110M, is looking to enter the market. Incumbents chip providers such as IBM, Intel, Qualcomm and AMD are not standing still. Even Facebook is rumoured to be building a team to design its own AI chips. And the Chinese are emerging as serious chip players with Cambricon Technology announcing the first cloud AI chip this past week. What is clear is that the cost of designing and manufacturing chips then sustaining a position as a global chip leader is very high. It requires extremely deep pockets and a world class team of silicon and software engineers. This means that there will be very few new winners. Just like the gold rush days those that provide the cheapest and most widely used picks and shovels will make a lot of money. The AI race is now also taking place in the cloud. Amazon realised early that startups would much rather rent computers and software than buy it. And so it launched Amazon Web Services (AWS) in 2006. Today AI is demanding so much compute power that companies are increasingly turning to the cloud to rent hardware through Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) offerings. The fight is on among the tech giants. Microsoft is offering their hybrid public and private Azure cloud service that allegedly has over one million computers. And in the past few weeks they announced that their Brainwave hardware solutionsdramatically accelerate machine learning with their own Bing search engine performance improving by a factor of ten. Google is rushing to play catchup with its own GoogleCloud offering. And we are seeing the Chinese Alibaba starting to take global share. Amazon — Microsoft — Google and IBM are going to continue to duke this one out. And watch out for the massively scaled cloud players from China. The big picks and shovels guys will win again. Today Google is the world’s largest AI company attracting the best AI minds, spending small country size GDP budgets on R&D, and sitting on the best datasets gleamed from the billions of users of their services. 
AI is powering Google’s search, autonomous vehicles, speech recognition, intelligent reasoning, massive search and even its own work on drug discovery and disease diangosis. And the incredible AI machine learning software and algorithms that are powering all of Google’s AI activity — TensorFlow — is now being given away for free. Yes for free! TensorFlow is now an open source software project available to the world. And why are they doing this? As Jeff Dean, head of Google Brain, recently said there are 20 million organisations in the world that could benefit from machine learning today. If millions of companies use this best in class free AI software then they are likely to need lots of computing power. And who is better served to offer that? Well Google Cloud is of course optimised for TensorFlow and related AI services. And once you become reliant on their software and their cloud you become a very sticky customer for many years to come. No wonder it is a brutal race for global AI algorithm dominance with Amazon — Microsoft — IBM also offering their own cheap or free AI software services. We are also seeing a fight for not only machine learning algorithms but cognitive algorithms that offer services for conversational agents and bots, speech, natural language processing (NLP) and semantics, vision, and enhanced core algorithms. One startup in this increasingly contested space is Clarifai who provides advanced image recognition systems for businesses to detect near-duplicates and visual searches. It has raised nearly $40M over the past three years. The market for vision related algorithms and services is estimated to be a cumulative $8 billion in revenue between 2016 and 2025. The giants are not standing still. IBM, for example, is offering its Watson cognitive products and services. They have twenty or so APIs for chatbots, vision, speech, language, knowledge management and empathy that can be simply be plugged into corporate software to create AI enabled applications. Cognitive APIs are everywhere. KDnuggets lists here over 50 of the top cognitive services from the giants and startups. These services are being put into the cloud as AI as a Service (AIaaS) to make them more accessible. Just recently Microsoft’s CEO Satya Nadella claimed that a million developers are using their AI APIs, services and tools for building AI-powered apps and nearly 300,000 developers are using their tools for chatbots. I wouldn’t want to be a startup competing with these Goliaths. The winners in this space are likely to favour the heavyweights again. They can hire the best research and engineering talent, spend the most money, and have access to the largest datasets. To flourish startups are going to have to be really well funded, supported by leading researchers with a whole battery of IP patents and published papers, deep domain expertise, and have access to quality datasets. And they should have excellent navigational skills to sail ahead of the giants or sail different races. There will many startup casualties, but those that can scale will find themselves as global enterprises or quickly acquired by the heavyweights. And even if a startup has not found a path to commercialisation, then they could become acquihires (companies bought for their talent) if they are working on enabling AI algorithms with a strong research oriented team. We saw this in 2014 when DeepMind, a two year old London based company that developed unique reinforcement machine learning algorithms, was acquired by Google for $400M. 
Enterprise software has been dominated by giants such as Salesforce, IBM, Oracle and SAP. They all recognise that AI is a tool that needs to be integrated into their enterprise offerings. But many startups are rushing to become the next generation of enterprise services filling in gaps where the incumbents don’t currently tread or even attempting to disrupt them. We analysed over two hundred use cases in the enterprise space ranging from customer management to marketing to cybersecurity to intelligence to HR to the hot area of Cognitive Robotic Process Automation (RPA). The enterprise field is much more open than previous spaces with a veritable medley of startups providing point solutions for these use cases. Today there are over 200 AI powered companies just in the recruitment space, many of them AI startups. Cybersecurity leader DarkTrace and RPA leader UiPathhave war chests in the $100 millions. The incumbents also want to make sure their ecosystems stay on the forefront and are investing in startups that enhance their offering. Salesforce has invested in Digital Genius a customer management solution and similarly Unbable that offers enterprise translation services. Incumbents also often have more pressing problems. SAP, for example, is rushing to play catchup in offering a cloud solution, let alone catchup in AI. We are also seeing tools providers trying to simplify the tasks required to create, deploy and manage AI services in the enterprise. Machine learning training, for example, is a messy business where 80% of time can be spent on data wrangling. And an inordinate amount of time is spent on testing and tuning of what is called hyperparameters. Petuum, a tools provider based in Pittsburgh in the US, has raised over $100M to help accelerate and optimise the deployment of machine learning models. Many of these enterprise startup providers can have a healthy future if they quickly demonstrate that they are solving and scaling solutions to meet real world enterprise needs. But as always happens in software gold rushes there will be a handful of winners in each category. And for those AI enterprise category winners they are likely to be snapped up, along with the best in-class tool providers, by the giants if they look too threatening. AI is driving a race for the best vertical industry solutions. There are a wealth of new AI powered startups providing solutions to corporate use cases in the healthcare, financial services, agriculture, automative, legal and industrial sectors. And many startups are taking the ambitious path to disrupt the incumbent corporate players by offering a service directly to the same customers. It is clear that many startups are providing valuable point solutions and can succeed if they have access to (1) large and proprietary data training sets, (2) domain knowledge that gives them deep insights into the opportunities within a sector, (3) a deep pool of talent around applied AI and (4) deep pockets of capital to fund rapid growth. Those startups that are doing well generally speak the corporate commercial language of customers, business efficiency and ROI in the form of well developed go-to-market plans. For example, ZestFinance has raised nearly $300M to help improve credit decision making that will provide fair and transparent credit to everyone. They claim they have the world’s best data scientists. But they would, wouldn’t they? For those startups that are looking to disrupt existing corporate players they need really deep pockets. 
For example, Affirm, that offers loans to consumers at the point of sale, has raised over $700M. These companies quickly need to create a defensible moat to ensure they remain competitive. This can come from data network effects where more data begets better AI based services and products that gets more revenue and customers that gets more data. And so the flywheel effect continues. And while corporates might look to new vendors in their industry for AI solutions that could enhance their top and bottom line, they are not going to sit back and let upstarts muscle in on their customers. And they are not going to sit still and let their corporate competitors gain the first advantage through AI. There is currently a massive race for corporate innovation. Large companies have their own venture groups investing in startups, running accelerators and building their own startups to ensure that they are leaders in AI driven innovation. Large corporates are in a strong position against the startups and smaller companies due to their data assets. Data is the fuel for AI and machine learning. Who is better placed to take advantage of AI than the insurance company that has reams of historic data on underwriting claims? The financial services company that knows everything about consumer financial product buying behaviour? Or the search company that sees more user searches for information than any other? Corporates large and small are well positioned to extract value from AI. In fact Gartner research predicts AI-derived business value is projected to reach up to $3.9 trillion by 2022. There are hundreds if not thousands of valuable use cases that AI can addresses across organisations. Corporates can improve their customer experience, save costs, lower prices, drive revenues and sell better products and services powered by AI. AI will help the big get bigger often at the expense of smaller companies. But they will need to demonstrate strong visionary leadership, an ability to execute, and a tolerance for not always getting technology enabled projects right on the first try. Countries are also also in a battle for AI supremacy. China has not been shy about its call to arms around AI. It is investing massively in growing technical talent and developing startups. Its more lax regulatory environment, especially in data privacy, helps China lead in AI sectors such as security and facial recognition. Just recently there was an example of Chinese police picking out one most wanted face in a crowd of 50,000 at a music concert. And SenseTime Group Ltd, that analyses faces and images on a massive scale, reported it raised $600M becoming the most valuable global AI startup. The Chinese point out that their mobile market is 3x the size of the US and there are 50x more mobile payments taking place — this is a massive data advantage. The European focus on data privacy regulation could put them at a disadvantage in certain areas of AI even if the Union is talking about a $22B investment in AI. The UK, Germany, France and Japan have all made recent announcements about their nation state AI strategies. For example, President Macron said the French government will spend $1.85 billion over the next five years to support the AI ecosystem including the creation of large public datasets. Companies such as Google’s DeepMind and Samsung have committed to open new Paris labs and Fujitsu is expanding its Paris research centre. The British just announced a $1.4 billion push into AI including funding of 1000 AI PhDs. 
But while nations are investing in AI talent and the ecosystem, the question is who will really capture the value. Will France and the UK simply be subsidising PhDs who will be hired by Google? And while payroll and income taxes will be healthy on those six figure machine learning salaries, the bulk of the economic value created could be with this American company, its shareholders, and the smiling American Treasury. AI will increase productivity and wealth in companies and countries. But how will that wealth be distributed when the headlines suggest that 30 to 40% of our jobs will be taken by the machines? Economists can point to lessons from hundreds of years of increasing technology automation. Will there be net job creation or net job loss? The public debate often cites Geoffrey Hinton, the godfather of machine learning, who suggested radiologists will lose their jobs by the dozen as machines diagnose diseases from medical images. But then we can look to the Chinese who are using AI to assist radiologists in managing the overwhelming demand to review 1.4 billion CT scans annually for lung cancer. The result is not job losses but an expanded market with more efficient and accurate diagnosis. However there is likely to be a period of upheaval when much of the value will go to those few companies and countries that control AI technology and data. And lower skilled countries whose wealth depends on jobs that are targets of AI automation will likely suffer. AI will favour the large and the technologically skilled. In examining the landscape of AI it has became clear that we are now entering a truly golden era for AI. And there are few key themes appearing as to where the economic value will migrate: In short it looks like the AI gold rush will favour the companies and countries with control and scale over the best AI tools and technology, the data, the best technical workers, the most customers and the strongest access to capital. Those with scale will capture the lion’s share of the economic value from AI. In some ways ‘plus ça change, plus c’est la même chose.’ But there will also be large golden nuggets that will be found by a few choice brave startups. But like any gold rush many startups will hit pay dirt. And many individuals and societies will likely feel like they have not seen the benefits of the gold rush. This is the first part in a series of articles I intend to write on the topic of the economics of AI. I welcome your feedback. Written by Simon Greenman I am a lover of technology and how it can be applied in the business world. I run my own advisory firm Best Practice AI helping executives of enterprises and startups accelerate the adoption of ROI based AI applications . Please get in touch to discuss this. If you enjoyed this piece, I’d love it if you hit the clap button 👏 so others might stumble upon it. And please post your comments or you can email me directly or find me on LinkedIn or twitter or follow me at Simon Greenman. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. AI guy. MapQuest guy. Grow, innovate and transform companies with tech. Start-up investor, mentor and geek. Sharing concepts, ideas, and codes. " Eugenio Culurciello,6.4K,8,https://towardsdatascience.com/the-fall-of-rnn-lstm-2d1594c74ce0?source=tag_archive---------9----------------,The fall of RNN / LSTM – Towards Data Science,"We fell for Recurrent neural networks (RNN), Long-short term memory (LSTM), and all their variants. Now it is time to drop them! 
It is the year 2014 and LSTM and RNN make a great come-back from the dead. We all read Colah’s blog and Karpathy’s ode to RNN. But we were all young and inexperienced. For a few years this was the way to solve sequence learning and sequence translation (seq2seq), which also produced amazing results in speech-to-text comprehension and the rise of Siri, Cortana, Google voice assistant and Alexa. Let us also not forget machine translation, which gave us the ability to translate documents into different languages (neural machine translation), but also to translate images into text, text into images, and to caption video, and ... well, you get the idea. Then in the following years (2015–16) came ResNet and Attention. One could then better understand that LSTM was a clever bypass technique. Attention also showed that MLP networks could be replaced by averaging networks influenced by a context vector. More on this later. It only took 2 more years, but today we can definitely say: But do not take our word for it; also see the evidence that attention-based networks are used more and more by Google, Facebook and Salesforce, to name a few. All these companies have replaced RNNs and their variants with attention-based models, and it is just the beginning. RNNs' days are numbered in all applications, because they require more resources to train and run than attention-based models. See this post for more info. Remember that RNNs, LSTMs and their derivatives use mainly sequential processing over time. See the horizontal arrow in the diagram below: This arrow means that long-term information has to sequentially travel through all cells before getting to the present processing cell. This means it can be easily corrupted by being multiplied many times by small numbers < 1. This is the cause of vanishing gradients. To the rescue came the LSTM module, which today can be seen as multiple switch gates, and which, a bit like ResNet, can bypass units and thus remember for longer time steps. LSTMs thus have a way to remove some of the vanishing gradient problem, but not all of it, as you can see from the figure above. We still have a sequential path from older past cells to the current one. In fact the path is now even more complicated, because it has additive and forget branches attached to it. No question, LSTMs, GRUs and their derivatives are able to learn a lot of longer-term information! See results here; but they can remember sequences of 100s, not 1000s or 10,000s or more. And one issue with RNNs is that they are not hardware friendly. Let me explain: it takes a lot of resources, which we do not have, to train these networks fast. It also takes a lot of resources to run these models in the cloud, and given that the demand for speech-to-text is growing rapidly, the cloud is not scalable. We will need to process at the edge, right inside the Amazon Echo! See the note below for more details. If sequential processing is to be avoided, then we can find units that “look ahead” or, better, “look back”, since most of the time we deal with real-time causal data where we know the past and want to affect future decisions. Not so when translating sentences or analyzing recorded videos, for example, where we have all the data and can reason on it for more time. Such look-back/look-ahead units are neural attention modules, which we previously explained here. 
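As a rough, minimal sketch of what such an attention "look-back" unit does (a weighted average of all past encoded vectors, computed in one step), here is a plain NumPy version; the function name and the scaling choice are illustrative, not code from the post:

import numpy as np

def attention_context(query, past_vectors):
    # past_vectors: (T, d) matrix of previously encoded vectors; query: (d,) current-step vector
    scores = past_vectors @ query / np.sqrt(query.shape[0])   # similarity of each past vector to the query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                                  # softmax over the past
    return weights @ past_vectors                             # context vector: a weighted average of the past

Note that every past vector is reachable in a single step here, rather than after T recurrent steps, which is exactly the property the rest of this article builds on.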
To the rescue, and combining multiple neural attention modules, comes the “hierarchical neural attention encoder”, shown in the figure below: A better way to look into the past is to use attention modules to summarize all past encoded vectors into a context vector Ct. Notice there is a hierarchy of attention modules here, very similar to the hierarchy of neural networks. This is also similar to the Temporal convolutional network (TCN) reported in Note 3 below. In the hierarchical neural attention encoder, multiple layers of attention can look at a small portion of the recent past, say 100 vectors, while layers above can look at 100 of these attention modules, effectively integrating the information of 100 x 100 vectors. This extends the ability of the hierarchical neural attention encoder to 10,000 past vectors. But more importantly, look at the length of the path needed to propagate a representation vector to the output of the network: in hierarchical networks it is proportional to log(N), where N is the number of hierarchy layers. This is in contrast to the T steps that an RNN needs to take, where T is the maximum length of the sequence to be remembered, and T >> N. This architecture is similar to a neural Turing machine, but lets the neural network decide what is read out from memory via attention. This means an actual neural network will decide which vectors from the past are important for future decisions. But what about storing to memory? The architecture above stores all previous representations in memory, unlike neural Turing machines. This can be rather inefficient: think about storing the representation of every frame in a video — most of the time the representation vector does not change frame-to-frame, so we really are storing too much of the same! What we can do is add another unit to prevent correlated data from being stored, for example by not storing vectors too similar to previously stored ones. But this is really a hack; the best approach would be to let the application guide which vectors should be saved or not. This is the focus of current research studies. Stay tuned for more information. Tell your friends! It is very surprising to us to see so many companies still use RNN/LSTM for speech to text, many unaware that these networks are so inefficient and not scalable. Please tell them about this post. About training RNN/LSTM: RNN and LSTM are difficult to train because they require memory-bandwidth-bound computation, which is a hardware designer's worst nightmare and ultimately limits the applicability of neural network solutions. In short, LSTMs require 4 linear layers (MLP layers) per cell, evaluated at each sequence time-step. Linear layers require large amounts of memory bandwidth to be computed; in fact they often cannot use many compute units because the system does not have enough memory bandwidth to feed the computational units. And it is easy to add more computational units, but hard to add more memory bandwidth (not enough lines on a chip, long wires from processors to memory, etc). As a result, RNN/LSTM and variants are not a good match for hardware acceleration, and we talked about this issue before here and here. A solution will be compute-in-memory devices like the ones we work on at FWDNXT. See this repository for a simple example of these techniques. Note 1: Hierarchical neural attention is similar to the ideas in WaveNet. But instead of a convolutional neural network we use hierarchical attention modules. Also: Hierarchical neural attention can also be bi-directional. 
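To make the "four linear layers per cell" argument concrete, here is a rough back-of-the-envelope sketch; the hidden and input sizes are made-up round numbers, not figures from the post:

# Rough parameter/bandwidth estimate for one LSTM cell (illustrative sizes only).
hidden, inputs = 1024, 1024                               # assumed layer sizes
params_per_gate = hidden * (hidden + inputs) + hidden     # one weight matrix plus bias
params = 4 * params_per_gate                              # input, forget, output gates + cell candidate
bytes_per_step = params * 4                               # float32 weights streamed at every time step
print(f"{params / 1e6:.1f}M parameters, ~{bytes_per_step / 1e6:.0f} MB of weights read per time step")

Because those same weights must be re-read from memory at every single time step of every sequence, the computation is bound by memory bandwidth rather than by arithmetic, which is the point Note 2 below expands on.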
Note 2: RNN and LSTM are memory-bandwidth limited problems (see this for details). The processing unit(s) need as much memory bandwidth as the number of operations/s they can provide, making it impossible to fully utilize them! The external bandwidth is never going to be enough, and a way to slightly ameliorate the problem is to use internal fast caches with high bandwidth. The best way is to use techniques that do not require large amount of parameters to be moved back and forth from memory, or that can be re-used for multiple computation per byte transferred (high arithmetic intensity). Note 3: Here is a paper comparing CNN to RNN. Temporal convolutional network (TCN) “outperform canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory”. Note 4: Related to this topic, is the fact that we know little of how our human brain learns and remembers sequences. “We often learn and recall long sequences in smaller segments, such as a phone number 858 534 22 30 memorized as four segments. Behavioral experiments suggest that humans and some animals employ this strategy of breaking down cognitive or behavioral sequences into chunks in a wide variety of tasks” — these chunks remind me of small convolutional or attention like networks on smaller sequences, that then are hierarchically strung together like in the hierarchical neural attention encoder and Temporal convolutional network (TCN). More studies make me think that working memory is similar to RNN networks that uses recurrent real neuron networks, and their capacity is very low. On the other hand both the cortex and hippocampus give us the ability to remember really long sequences of steps (like: where did I park my car at airport 5 days ago), suggesting that more parallel pathways may be involved to recall long sequences, where attention mechanism gate important chunks and force hops in parts of the sequence that is not relevant to the final goal or task. Note 5: The above evidence shows we do not read sequentially, in fact we interpret characters, words and sentences as a group. An attention-based or convolutional module perceives the sequence and projects a representation in our mind. We would not be misreading this if we processed this information sequentially! We would stop and notice the inconsistencies! I have almost 20 years of experience in neural networks in both hardware and software (a rare combination). See about me here: Medium, webpage, Scholar, LinkedIn, and more... If you found this article useful, please consider a donation to support more tutorials and blogs. Any contribution can make a difference! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. I dream and build new technology Sharing concepts, ideas, and codes. " WiseWolf Fund,14.2K,8,https://medium.com/@wisewolf_fund/unique-trends-to-look-out-for-with-artificial-intelligence-1db3de178463?source=---------0----------------,GAME-CHANGING TRENDS TO LOOK OUT FOR WITH AI – WiseWolf Fund – Medium,"Artificial Intelligence is a state-of-the-art technological trend that many companies are trying to integrate into their business. A recent report by McKinsey states that Baidu, the Chinese equivalent of Alphabet, invested $20 billion in AI last year. At the same time, Alphabet invested roughly $30 billion in developing AI technologies. The Chinese government has been actively pursuing AI technology in an attempt to control a future cornerstone innovation. 
Companies in the US are also investing time, money and energy into advancing AI technology. The reason for such interest towards artificial intelligence is that artificial intelligence can enhance any product or function. This is why companies and governments make considerable investments in the research and development of this technology. Its role in increasing the production performance while simultaneously reducing the costs cannot be underestimated. Since some of the largest entities in the world are focused on promoting the AI technology, it would be wise to understand and follow the trend. AI is already shaping the economy, and in the near future, its effect may be even more significant. Ignoring the new technology and its influence on the global economic situation is a recipe for failure. Despite the huge public interest and attention towards AI, its evolution is still somewhat halted by the objective causes. As any new and fast-developing industry, AI is quickly outgrowing its environment. According to Adam Temper, an author of many creative researches on artificial intelligence, the development of AI is mostly limited by the “lack of employees with relevant expertise, very few mature standard industry tools, limited high quality training material available, few options for easy access to preconfigured machine learning environments, and the general focus in the industry on implementation rather than design”. With any new complex technology, the learning curve is steep. Our educational institutions are several steps behind the commercial applications of this technology. It is important that AI scientists work collaboratively, sharing knowledge and best practice, to address this deficiency. AI is rapidly increasing its impact on society; we need to ensure that the power of AI doesn’t remain with the elite few. Another factor that may be hindering the progress of AI is the cautious stance that people tend to take towards it. Artificial intelligence is still too sci-fi, too strange and, therefore, sometimes scary. When people learn to trust AI, it will make a true quantum leap in the way of general adoption and application. Adam Temper supports this point, too, describing the possible ways for AI technology to gain public trust as At the same time, if we analyze the primary purpose of AI, we will see it for what it really is — a tool to perform the routine tasks relieving humans for something more creative or innovative. When asked about the current trends and opportunities of AI, Aaron Edell, CEO and co-founder of Machine Box, and one of the top writers on AI, described them as follows: AI has also become a political talking point in recent years. There have been arguments that AI will help to create jobs, but that it will also cause certain workers to lose their jobs. For example, estimations prove that self-driving vehicles will cause 25,000 truck drivers to lose their jobs each month. Also, as much as 1 million pickers and packers working in US warehouses could be out of a job. This is due to the fact that by implementing AI, factories can operate with as few as a dozen of workers. Naturally, companies gladly implement artificial intelligence, as it ensures considerable savings. At the same time, governments are concerned about the current employment situation as well as the short-term and long-term predictions. Some countries have already begun to plan measures about the new AI technology that are intended to keep the economy stable. 
In fact, it would not be fair to say that artificial intelligence causes people to lose jobs. True, the whole point of automation is making machines do what people used to do before. However, it would be more correct if we said that artificial intelligence reshapes the employment situation. Together with taking over human functions, it creates other jobs, forces people to master new skills, encourages workers to increase productivity. But it is obvious that AI is going to turn the regular sequence of events upside down. Therefore, the best approach is not to wait until AI leaves you unemployed, but rather proactively embrace it and learn to live with it. As we said already, AI can also create jobs, so a wise move would be to learn to manage AI-based tools. With the advance of AI products, learning to work with them may secure you a job and even promote your career. Your future largely depends on your current and expected income. However, another important factor is the way you manage your finances. Of course, investing in your own or your children’s knowledge is one of the best investments you can ever make. At the same time, if you need some financial cushion to secure your family’s welfare, you should look at the available investment opportunities. And this is where artificial intelligence may become your best friend, professional consultant and investment manager. In the recent years, in addition to the traditional banks and financial institutions, we have witnessed the appearance of a totally new and innovative investment system. We are talking about the blockchain technology and the cryptocurrencies that it supports. Millions of people all over the world have already appreciated the transparency and flexibility of the blockchain networks. By watching the cryptocurrency trends carefully and trading wisely, individual investors have made fortunes within a very short time. Nowadays, the cryptocurrency opportunities are open for everyone, not only for the industry experts. There are investment funds running on artificial intelligence that are available for individual investors. With such funds, you are, on one hand, protected by the blockchain technology. It ensures proper safety of your funds and the security of your transactions. On the other hand, you do not need to be an investment expert to make wise decisions. This is where artificial intelligence is at your service. It analyzes the existing trends on the extremely volatile cryptocurrency market and shows you the best opportunities. The main point is that we should not regard AI as a threat to our careers and a danger to our well-being. Instead, we should analyze the investment openings created by AI technology that can secure our prosperity. For example, Wolf Coin is using AI technology to create a seamless investment channel for savvy individuals. This robust channel opens great opportunities that investors can use to become new rich kids on the block. Most noteworthy, the low entry cost of $10 has made it one offer that will enjoy a huge buzz. The focus on this new market opening will help people build a solid financial nest egg that will keep them safe even in the face of the storm. Wisewolf Fund launching the Wolf Coin focused its effort on creating a great opportunity for people who wish to benefit from cryptocurrency trading but are new to this trend. With artificial intelligence and advanced analytical algorithms, the fund arranges the most favorable conditions for individual investors. 
Mainstream manufacturers, companies, and factories are embracing AI technology to change the mode of their operations. Therefore, it is critical to keep tabs on this reality as it can bring many benefits that cannot be found elsewhere. AI is one of the hottest topics of discussion, however, it is now clear that AI is here to stay. So, people should accept the obvious in order to create the future that they desire. The wisest strategy is to embrace artificial intelligence and let it work to maintain our well-being. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. The WiseWolf Crypto Fund provides an easy way to enter the cryptocurrency market even for non-techies. " Justin Lee,8.3K,11,https://medium.com/swlh/chatbots-were-the-next-big-thing-what-happened-5fc49dd6fa61?source=---------1----------------,Chatbots were the next big thing: what happened? – The Startup – Medium,"Oh, how the headlines blared: Chatbots were The Next Big Thing. Our hopes were sky high. Bright-eyed and bushy-tailed, the industry was ripe for a new era of innovation: it was time to start socializing with machines. And why wouldn’t they be? All the road signs pointed towards insane success. At the Mobile World Congress 2017, chatbots were the main headliners. The conference organizers cited an ‘overwhelming acceptance at the event of the inevitable shift of focus for brands and corporates to chatbots’. In fact, the only significant question around chatbots was who would monopolize the field, not whether chatbots would take off in the first place: One year on, we have an answer to that question. No. Because there isn’t even an ecosystem for a platform to dominate. Chatbots weren’t the first technological development to be talked up in grandiose terms and then slump spectacularly. The age-old hype cycle unfolded in familiar fashion... Expectations built, built, and then..... It all kind of fizzled out. The predicted paradim shift didn’t materialize. And apps are, tellingly, still alive and well. We look back at our breathless optimism and turn to each other, slightly baffled: “is that it? THAT was the chatbot revolution we were promised?” Digit’s Ethan Bloch sums up the general consensus: According to Dave Feldman, Vice President of Product Design at Heap, chatbots didn’t just take on one difficult problem and fail: they took on several and failed all of them. Bots can interface with users in different ways. The big divide is text vs. speech. In the beginning (of computer interfaces) was the (written) word. Users had to type commands manually into a machine to get anything done. Then, graphical user interfaces (GUIs) came along and saved the day. We became entranced by windows, mouse clicks, icons. And hey, we eventually got color, too! Meanwhile, a bunch of research scientists were busily developing natural language (NL) interfaces to databases, instead of having to learn an arcane database query language. Another bunch of scientists were developing speech-processing software so that you could just speak to your computer, rather than having to type. This turned out to be a whole lot more difficult than anyone originally realised: The next item on the agenda was holding a two-way dialog with a machine. Here’s an example dialog (dating back to the 1990s) with VCR setup system: Pretty cool, right? The system takes turns in collaborative way, and does a smart job of figuring out what the user wants. 
It was carefully crafted to deal with conversations involving VCRs, and could only operate within strict limitations. Modern day bots, whether they use typed or spoken input, have to face all these challenges, but also work in an efficient and scalable way on a variety of platforms. Basically, we’re still trying to achieve the same innovations we were 30 years ago. Here’s where I think we’re going wrong: An oversized assumption has been that apps are ‘over’, and would be replaced by bots. By pitting two such disparate concepts against one another (instead of seeing them as separate entities designed to serve different purposes) we discouraged bot development. You might remember a similar war cry when apps first came onto the scene ten years ago: but do you remember when apps replaced the internet? It’s said that a new product or service needs to be two of the following: better, cheaper, or faster. Are chatbots cheaper or faster than apps? No — not yet, at least. Whether they’re ‘better’ is subjective, but I think it’s fair to say that today’s best bot isn’t comparable to today’s best app. Plus, nobody thinks that using Lyft is too complicated, or that it’s too hard to order food or buy a dress on an app. What is too complicated is trying to complete these tasks with a bot — and having the bot fail. A great bot can be about as useful as an average app. When it comes to rich, sophisticated, multi-layered apps, there’s no competition. That’s because machines let us access vast and complex information systems, and the early graphical information systems were a revolutionary leap forward in helping us locate those systems. Modern-day apps benefit from decades of research and experimentation. Why would we throw this away? But, if we swap the word ‘replace’ with ‘extend’, things get much more interesting. Today’s most successful bot experiences take a hybrid approach, incorporating chat into a broader strategy that encompasses more traditional elements. The next wave will be multimodal apps, where you can say what you want (like with Siri) and get back information as a map, text, or even a spoken response. Another problematic aspect of the sweeping nature of hype is that it tends to bypass essential questions like these. For plenty of companies, bots just aren’t the right solution. The past two years are littered with cases of bots being blindly applied to problems where they aren’t needed. Building a bot for the sake of it, letting it loose and hoping for the best will never end well: The vast majority of bots are built using decision-tree logic, where the bot’s canned response relies on spotting specific keywords in the user input. The advantage of this approach is that it’s pretty easy to list all the cases that they are designed to cover. And that’s precisely their disadvantage, too. That’s because these bots are purely a reflection of the capability, fastidiousness and patience of the person who created them; and how many user needs and inputs they were able to anticipate. Problems arise when life refuses to fit into those boxes. According to recent reports, 70% of the 100,000+ bots on Facebook Messenger are failing to fulfil simple user requests. This is partly a result of developers failing to narrow their bot down to one strong area of focus. When we were building GrowthBot, we decided to make it specific to sales and marketers: not an ‘all-rounder’, despite the temptation to get overexcited about potential capabilties. 
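To make the earlier point about decision-tree, keyword-spotting bots concrete, here is a toy sketch; the keywords, replies and URL are invented purely for illustration:

# A minimal keyword-matching bot: canned replies triggered by spotted keywords.
RULES = {
    "pricing": "Our plans start at $50/month.",
    "demo": "You can book a demo at example.com/demo.",
}

def reply(message):
    for keyword, canned in RULES.items():
        if keyword in message.lower():
            return canned
    return "Sorry, I didn't understand that."   # every unanticipated phrasing lands here

print(reply("Can I see a demo?"))         # matches the 'demo' rule
print(reply("How much does it cost?"))    # nobody anticipated 'cost', so the bot fails

The bot handles exactly the phrasings its author listed and nothing more, which is why life refusing to fit into those boxes breaks it so quickly.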
Remember: a bot that does ONE thing well is infinitely more helpful than a bot that does multiple things poorly. A competent developer can build a basic bot in minutes — but one that can hold a conversation? That’s another story. Despite the constant hype around AI, we’re still a long way from achieving anything remotely human-like. In an ideal world, the technology known as NLP (natural language processing) should allow a chatbot to understand the messages it receives. But NLP is only just emerging from research labs and is very much in its infancy. Some platforms provide a bit of NLP, but even the best is at toddler-level capacity (for example, think about Siri understanding your words, but not their meaning.) As Matt Asay outlines, this results in another issue: failure to capture the attention and creativity of developers. And conversations are complex. They’re not linear. Topics spin around each other, take random turns, restart or abruptly finish. Today’s rule-based dialogue systems are too brittle to deal with this kind of unpredictability, and statistical approaches using machine learning are just as limited. The level of AI required for human-like conversation just isn’t available yet. And in the meantime, there are few high-quality examples of trailblazing bots to lead the way. As Dave Feldman remarked: Once upon a time, the only way to interact with computers was by typing arcane commands to the terminal. Visual interfaces using windows, icons or a mouse were a revolution in how we manipulate information There’s a reasons computing moved from text-based to graphical user interfaces (GUIs). On the input side, it’s easier and faster to click than it is to type. Tapping or selecting is obviously preferable to typing out a whole sentence, even with predictive (often error-prone ) text. On the output side, the old adage that a picture is worth a thousand words is usually true. We love optical displays of information because we are highly visual creatures. It’s no accident that kids love touch screens. The pioneers who dreamt up graphical interface were inspired by cognitive psychology, the study of how the brain deals with communication. Conversational UIs are meant to replicate the way humans prefer to communicate, but they end up requiring extra cognitive effort. Essentially, we’re swapping something simple for a more-complex alternative. Sure, there are some concepts that we can only express using language (“show me all the ways of getting to a museum that give me 2000 steps but don’t take longer than 35 minutes”), but most tasks can be carried out more efficiently and intuitively with GUIs than with a conversational UI. Aiming for a human dimension in business interactions makes sense. If there’s one thing that’s broken about sales and marketing, it’s the lack of humanity: brands hide behind ticket numbers, feedback forms, do-not-reply-emails, automated responses and gated ‘contact us’ forms. Facebook’s goal is that their bots should pass the so-called Turing Test, meaning you can’t tell whether you are talking to a bot or a human. But a bot isn’t the same as a human. It never will be. A conversation encompasses so much more than just text. Humans can read between the lines, leverage contextual information and understand double layers like sarcasm. Bots quickly forget what they’re talking about, meaning it’s a bit like conversing with someone who has little or no short-term memory. 
As HubSpot team pinpointed: People aren’t easily fooled, and pretending a bot is a human is guaranteed to diminish returns (not to mention the fact that you’re lying to your users). And even those rare bots that are powered by state-of-the-art NLP, and excel at processing and producing content, will fall short in comparison. And here’s the other thing. Conversational UIs are built to replicate the way humans prefer to communicate — with other humans. But is that how humans prefer to interact with machines? Not necessarily. At the end of the day, no amount of witty quips or human-like mannerisms will save a bot from conversational failure. In a way, those early-adopters weren’t entirely wrong. People are yelling at Google Home to play their favorite song, ordering pizza from the Domino’s bot and getting makeup tips from Sephora. But in terms of consumer response and developer involvement, chatbots haven’t lived up to the hype generated circa 2015/16. Not even close. Computers are good at being computers. Searching for data, crunching numbers, analyzing opinions and condensing that information. Computers aren’t good at understanding human emotion. The state of NLP means they still don’t ‘get’ what we’re asking them, never mind how we feel. That’s why it’s still impossible to imagine effective customer support, sales or marketing without the essential human touch: empathy and emotional intelligence. For now, bots can continue to help us with automated, repetitive, low-level tasks and queries; as cogs in a larger, more complex system. And we did them, and ourselves, a disservice by expecting so much, so soon. But that’s not the whole story. Yes, our industry massively overestimated the initial impact chatbots would have. Emphasis on initial. As Bill Gates once said: The hype is over. And that’s a good thing. Now, we can start examining the middle-grounded grey area, instead of the hyper-inflated, frantic black and white zone. I believe we’re at the very beginning of explosive growth. This sense of anti-climax is completely normal for transformational technology. Messaging will continue to gain traction. Chatbots aren’t going away. NLP and AI are becoming more sophisticated every day. Developers, apps and platforms will continue to experiment with, and heavily invest in, conversational marketing. And I can’t wait to see what happens next. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Head of Growth for GrowthBot, Messaging & Conversational Strategy @HubSpot Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi " Michael Solana,680,5,https://medium.com/s/story/artificial-intelligence-is-humanitys-rorschach-test-6fb1ef9c0ce4?source=---------2----------------,Artificial Intelligence Is Humanity's Rorschach Test,"Member Feature Story Slime Sunday / Founders Fund Slime Sunday / Founders Fund I don’t fear artificial intelligence, I fear people who fear artificial intelligence. It’s the 1960s. A psychologist stares at his patient — a balding, middle-aged foreman with a cigarette in his hand, and a curl of smoke around him like a halo on an acid trip. The psychologist holds up an inkblot, an ambiguous, black splatter on a white flashcard, and asks his patient what he sees. The thinking is his patient, not willing or otherwise able to express his feelings, his thoughts, his motivations, might inadvertently reveal some piece of his inner self while describing the ambiguous. 
The foreman doesn’t see a nondescript swiggle, or stain. He sees a man and woman making love, perhaps violently. He sees a mother holding her child. He sees a grisly murder. While the descriptions of these inkblots reveal very little about the world, they reveal a great deal about the man describing them, because when faced with an inscrutable abstract he projects himself onto the ambiguous. Let’s look at this in the context of artificial intelligence. I’m not talking about self-driving cars, or algorithms serving ads for wallpaper and nice leather boots on Gmail. I’m not talking about the stuff we call artificial intelligence to raise money from bewildered venture capitalists on Sand Hill Road. I’m talking about general artificial intelligence, which is a computer that wants stuff, and chiefly to live. I’m talking about building a conscious machine just smart enough to make itself smarter. From here, the thought experiment runs like this: the conscious machine does make itself smarter, and once it’s smarter, it learns how to make itself smarter, which it does for good measure. The smarter the machine becomes, the faster this pattern repeats itself, and the intelligence of the machine begins to increase exponentially. In this way, a conscious artificial intelligence born on a Tuesday morning might be twice as smart as the smartest man who ever lived by Wednesday afternoon, and omnipotent by Friday. This is how we invent the thing that invents God. In nerd lore, it’s known as the Singularity. The question — the only question that could possibly matter to a human no longer at the top of the intellectual food chain — is what does an exponential intelligence want? Conventional wisdom: it extremely wants to murder you. The dystopian version of superintelligence is illustrated with frequency by leaders in the technology industry, and is famously depicted by Hollywood in films like Terminator, or more recently Ex Machina, and even the Avengers. The “angry god A.I.” is a story you know, because it is the story you are constantly told: we build the thinking machine, it surpasses our abilities in every way, and it destroys us for one of any number of reasons. Maybe it perceives us as a threat. Maybe we’re just in its way, and it hardly perceives us at all — humanity, a disposable insect race. There are of course many arguments in opposition to the now ubiquitous concept of our apocalypse by artificial intelligence. I myself have called into question the logic of such dystopian arguments in Anatomy of Next. But our subject here is less pertaining to the nature of the conscious machine than it is to the way we talk about this subject, and what it means. First, consider that most of the artificial intelligence depicted in culture looks human, a representation with no basis in technological reality. Then, the true scope of the Singularity is almost impossible to predict, which begs a question: where are these opinions about the broadly unknowable coming from? There’s an obvious difficulty in trying to understand the hypothetical motivations of a hypothetically god-like intelligence. To your beloved labradoodle, you are a being of immense magic with near unfathomable motivations. You summon light and sound from inanimate matter, soar through the streets on angry metal, cast fire from your hands! The labradoodle’s conception of man is distorted because there is a vast difference between the intelligence of a dog, and the intelligence of a human. 
Let us name this difference ‘x.’ Now, as we try and understand the difference between the most intelligent human who has ever lived and a hypothetical god-like intelligence born of the Singularity, let us set our difference in intelligence at a conservative ‘1000x.’ How does one even begin to conceive of a being this smart? Here we approach our inscrutable abstract, and our robot Rorschach test. But in this contemporary version of the famous psychological prompts, what we are observing is not even entirely ambiguous. We are attempting to imagine a greatly-amplified mind. Here, each of us has a particularly relevant data point — our own. In trying to imagine the amplified intelligence, it is natural to imagine our own intelligence amplified. In imagining the motivations of this amplified intelligence, we naturally imagine ourselves. If, as you try to conceive of a future with machine intelligence, a monster comes to mind, it is likely you aren’t afraid of something alien at all. You’re afraid of something exactly like you. What would you do with unlimited power? Psychological projection seems to work in several contexts outside of general artificial intelligence. In the technology industry the concept of “meritocracy” is now hotly debated. How much of your life is determined by luck, and how much by chance? There’s no answer here we know for sure, but has there ever been a better Rorschach test for separating high-achievers from people who were given what they have? Questions pertaining to human nature are almost open self-reflection. Are we basically good, with some exceptions, or are humans basically beasts, with an animal nature just barely contained by a set of slowly-eroding stories we tell ourselves — law, faith, society. The inner workings of a mind can’t be fully shared, and they can’t be observed by a neutral party. We therefore do not — can not, currently — know anything of the inner workings of people in general. But we can know ourselves. So in the face of large abstractions concerning intelligence, we hold up a mirror. Not everyone who fears general artificial intelligence would cause harm to others. There are many people who haven’t thought deeply about these questions at all. They look to their neighbors for cues on what to think, and there is no shortage of people willing to tell them. The media has ads to sell, after all, and historically they have found great success in doing this with horror stories. But as we try to understand the people who have thought about these questions with some depth — with the depth required of a thoughtful screenplay, for example, or a book, or a company — it’s worth considering the inkblot. technology, liberty, teenagers with superpowers. vp @foundersfund. creator + producer #anatomyofnext. Welcome to a place where words matter. On Medium, smart voices and original ideas take center stage — with no ads in sight. Watch Follow all the topics you care about, and we’ll deliver the best stories for you to your homepage and inbox. Explore Get unlimited access to the best stories on Medium — and support writers while you’re at it. Just $5/month. Upgrade " Emmanuel Ameisen,935,11,https://blog.insightdatascience.com/reinforcement-learning-from-scratch-819b65f074d8?source=---------3----------------,Reinforcement Learning from scratch – Insight Data,"Want to learn about applied Artificial Intelligence from leading practitioners in Silicon Valley, New York, or Toronto? Learn more about the Insight Artificial Intelligence Fellows Program. 
Are you a company working in AI and would like to get involved in the Insight AI Fellows Program? Feel free to get in touch. Recently, I gave a talk at the O’Reilly AI conference in Beijing about some of the interesting lessons we’ve learned in the world of NLP. While there, I was lucky enough to attend a tutorial on Deep Reinforcement Learning (Deep RL) from scratch by Unity Technologies. I thought that the session, led by Arthur Juliani, was extremely informative and wanted to share some big takeaways below. In our conversations with companies, we’ve seen a rise of interesting Deep RL applications, tools and results. In parallel, the inner workings and applications of Deep RL, such as AlphaGo pictured above, can often seem esoteric and hard to understand. In this post, I will give an overview of core aspects of the field that can be understood by anyone. Many of the visuals are from the slides of the talk, and some are new. The explanations and opinions are mine. If anything is unclear, reach out to me here! Deep RL is a field that has seen vast amounts of research interest, including learning to play Atari games, beating pro players at Dota 2, and defeating Go champions. Contrary to many classical Deep Learning problems that often focus on perception (does this image contain a stop sign?), Deep RL adds the dimension of actions that influence the environment (what is the goal, and how do I get there?). In dialog systems for example, classical Deep Learning aims to learn the right response for a given query. On the other hand, Deep Reinforcement Learning focuses on the right sequences of sentences that will lead to a positive outcome, for example a happy customer. This makes Deep RL particularly attractive for tasks that require planning and adaptation, such as manufacturing or self-driving. However, industry applications have trailed behind the rapidly advancing results coming out of the research community. A major reason is that Deep RL often requires an agent to experiment millions of times before learning anything useful. The best way to do this rapidly is by using a simulation environment. This tutorial will be using Unity to create environments to train agents in. For this workshop led by Arthur Juliani and Leon Chen, their goal was to get every participants to successfully train multiple Deep RL algorithms in 4 hours. A tall order! Below, is a comprehensive overview of many of the main algorithms that power Deep RL today. For a more complete set of tutorials, Arthur Juliani wrote an 8-part series starting here. Deep RL can be used to best the top human players at Go, but to understand how that’s done, you first need to understand a few simple concepts, starting with much easier problems. 1/It all starts with slot machines Let’s imagine you are faced with 4 chests that you can pick from at each turn. Each of them have a different average payout, and your goal is to maximize the total payout you receive after a fixed number of turns. This is a classic problem called Multi-armed bandits and is where we will start. The crux of the problem is to balance exploration, which helps us learn about which states are good, and exploitation, where we now use what we know to pick the best slot machine. Here, we will utilize a value function that maps our actions to an estimated reward, called the Q function. First, we’ll initialize all Q values at equal values. Then, we’ll update the Q value of each action (picking each chest) based on how good the payout was after choosing this action. 
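As a minimal sketch of that update loop, using tabular values with epsilon-greedy exploration rather than the small neural network used in the workshop; the payouts, exploration schedule and seed are invented for illustration:

import numpy as np

np.random.seed(0)
true_payouts = [1.0, 0.5, 2.0, 0.1]        # hidden average payout of each chest (assumed)
Q = np.zeros(4)                             # start all value estimates equal
counts = np.zeros(4)

for t in range(1000):
    epsilon = max(0.05, 1.0 - t / 500)      # explore a lot at first, less over time
    if np.random.rand() < epsilon:
        action = np.random.randint(4)       # explore: pick a random chest
    else:
        action = int(np.argmax(Q))          # exploit: pick the chest we currently think is best
    reward = np.random.normal(true_payouts[action], 1.0)
    counts[action] += 1
    Q[action] += (reward - Q[action]) / counts[action]   # move the estimate toward the observed payout

print(Q.round(2))                           # the estimates approach the true average payouts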
This allows us to learn a good value function. We will approximate our Q function using a neural network (starting with a very shallow one) that learns a probability distribution (by using a softmax) over the 4 potential chests. While the value function tells us how good we estimate each action to be, the policy is the function that determines which actions we end up taking. Intuitively, we might want to use a policy that picks the action with the highest Q value. This performs poorly in practice, as our Q estimates will be very wrong at the start before we gather enough experience through trial and error. This is why we need to add a mechanism to our policy to encourage exploration. One way to do that is to use epsilon greedy, which consists of taking a random action with probability epsilon. We start with epsilon being close to 1, always choosing random actions, and lower epsilon as we go along and learn more about which chests are good. Eventually, we learn which chests are best. In practice, we might want to take a more subtle approach than either taking the action we think is the best, or a random action. A popular method is Boltzmann Exploration, which adjust probabilities based on our current estimate of how good each chest is, adding in a randomness factor. 2/Adding different states The previous example was a world in which we were always in the same state, waiting to pick from the same 4 chests in front of us. Most real-word problems consist of many different states. That is what we will add to our environment next. Now, the background behind chests alternates between 3 colors at each turn, changing the average values of the chests. This means we need to learn a Q function that depends not only on the action (the chest we pick), but the state (what the color of the background is). This version of the problem is called Contextual Multi-armed Bandits. Surprisingly, we can use the same approach as before. The only thing we need to add is an extra dense layer to our neural network, that will take in as input a vector representing the current state of the world. 3/Learning about the consequences of our actions There is another key factor that makes our current problem simpler than mosts. In most environments, such as in the maze depicted above, the actions that we take have an impact on the state of the world. If we move up on this grid, we might receive a reward or we might receive nothing, but the next turn we will be in a different state. This is where we finally introduce a need for planning. First, we will define our Q function as the immediate reward in our current state, plus the discounted reward we are expecting by taking all of our future actions. This solution works if our Q estimate of states is accurate, so how can we learn a good estimate? We will use a method called Temporal Difference (TD) learning to learn a good Q function. The idea is to only look at a limited number of steps in the future. TD(1) for example, only uses the next 2 states to evaluate the reward. Surprisingly, we can use TD(0), which looks at the current state, and our estimate of the reward the next turn, and get great results. The structure of the network is the same, but we need to go through one forward step before receiving the error. We then use this error to back propagate gradients, like in traditional Deep Learning, and update our value estimates. 3+/Introducing Monte Carlo Another method to estimate the eventual success of our actions is Monte Carlo Estimates. 
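Before that, here is a minimal tabular sketch of the TD(0) update just described; the discount factor, step size and toy states are assumptions for illustration, not values from the talk:

# Tabular TD(0): nudge the value of the current state toward
# (immediate reward + discounted value estimate of the next state).
gamma, alpha = 0.99, 0.1            # discount factor and step size (assumed)
V = {s: 0.0 for s in range(5)}      # value estimates for five toy states

def td0_update(state, reward, next_state):
    td_error = reward + gamma * V[next_state] - V[state]   # one-step lookahead error
    V[state] += alpha * td_error                            # move the estimate toward the target
    return td_error

td0_update(state=0, reward=1.0, next_state=1)               # a single observed transition

With a neural network, that same error is what gets back-propagated to update the value estimates, as described above. Back to Monte Carlo estimates: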
This consists of playing out the entire episode with our current policy until we reach an end (success by reaching a green block or failure by reaching a red block in the image above) and use that result to update our value estimates for each traversed state. This allows us to propagate values efficiently in one batch at the end of an episode, instead of every time we make a move. The cost is that we are introducing noise to our estimates, since we attribute very distant rewards to them. 4/The world is rarely discrete The previous methods were using neural networks to approximate our value estimates by mapping from a discrete number of states and actions to a value. In the maze for example, there were 49 states (squares) and 4 actions (move in each adjacent direction). In this environment, we are trying to learn how to balance a ball on a 2 dimensional paddle, by deciding at each time step whether we want to tilt the paddle left or right. Here, the state space becomes continuous (the angle of the paddle, and the position of the ball). The good news is, we can still use Neural Networks to approximate this function! A note about off-policy vs on-policy learning: The methods we used previously, are off-policy methods, meaning we can generate data with any strategy(using epsilon greedy for example) and learn from it. On-policy methods can only learn from actions that were taken following our policy (remember, a policy is the method we use to determine which actions to take). This constrains our learning process, as we have to have an exploration strategy that is built in to the policy itself, but allows us to tie results directly to our reasoning, and enables us to learn more efficiently. The approach we will use here is called Policy Gradients, and is an on-policy method. Previously, we were first learning a value function Q for each action in each state and then building a policy on top. In Vanilla Policy Gradient, we still use Monte Carlo Estimates, but we learn our policy directly through a loss function that increases the probability of choosing rewarding actions. Since we are learning on policy, we cannot use methods such as epsilon greedy (which includes random choices), to get our agent to explore the environment. The way that we encourage exploration is by using a method called entropy regularization, which pushes our probability estimates to be wider, and thus will encourage us to make riskier choices to explore the space. 4+/Leveraging deep learning for representations In practice, many state of the art RL methods require learning both a policy and value estimates. The way we do this with deep learning is by having both be two separate outputs of the same backbone neural network, which will make it easier for our neural network to learn good representations. One method to do this is Advantage Actor Critic (A2C). We learn our policy directly with policy gradients (defined above), and learn a value function using something called Advantage. Instead of updating our value function based on rewards, we update it based on our advantage, which measures how much better or worse an action was than our previous value function estimated it to be. This helps make learning more stable compared to simple Q Learning and Vanilla Policy Gradients. 5/Learning directly from the screen There is an additional advantage to using Deep Learning for these methods, which is that Deep Neural Networks excel at perceptive tasks. 
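Before moving on to learning from raw pixels, here is a rough NumPy sketch of the policy-gradient loss with an entropy bonus described above, for a single finished episode; the names, shapes and entropy weight are illustrative, not the workshop's code:

import numpy as np

def policy_gradient_loss(action_probs, actions, returns, beta=0.01):
    # action_probs: (T, n_actions) probabilities output by the policy network
    # actions: (T,) actions actually taken; returns: (T,) Monte Carlo returns for each step
    T = len(actions)
    log_p = np.log(action_probs[np.arange(T), actions] + 1e-8)
    pg_loss = -np.mean(log_p * returns)              # raise the probability of rewarding actions
    entropy = -np.sum(action_probs * np.log(action_probs + 1e-8), axis=1).mean()
    return pg_loss - beta * entropy                  # the entropy bonus keeps the policy exploratory

In an A2C-style setup, the returns argument would be replaced by advantages (the return minus the value estimate), which is the stabilizing trick mentioned above.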
When a human plays a game, the information received is not a list of states, but an image (usually of a screen, or a board, or the surrounding environment). Image-based Learning combines a Convolutional Neural Network (CNN) with RL. In this environment, we pass in a raw image instead of features, and add a 2 layer CNN to our architecture without changing anything else! We can even inspect activations to see what the network picks up on to determine value, and policy. In the example below, we can see that the network uses the current score and distant obstacles to estimate the value of the current state, while focusing on nearby obstacles for determining actions. Neat! As a side note, while toying around with the provided implementation, I’ve found that visual learning is very sensitive to hyperparameters. Changing the discount rate slightly for example, completely prevented the neural network from learning even on a toy application. This is a widely known problem, but it is interesting to see it first hand. 6/Nuanced actions So far, we’ve played with environments with continuous and discrete state spaces. However, every environment we studied had a discrete action space: we could move in one of four directions, or tilt the paddle to the left or right. Ideally, for applications such as self-driving cars, we would like to learn continuous actions, such as turning the steering wheel between 0 and 360 degrees. In this environment called 3D ball world, we can choose to tilt the paddle to any value on each of its axes. This gives us more control as to how we perform actions, but makes the action space much larger. We can approach this by approximating our potential choices with Gaussian distributions. We learn a probability distribution over potential actions by learning the mean and variance of a Gaussian distribution, and our policy we sample from that distribution. Simple, in theory :). 7/Next steps for the brave There are a few concepts that separate the algorithms described above from state of the art approaches. It’s interesting to see that conceptually, the best robotics and game-playing algorithms are not that far away from the ones we just explored: That’s it for this overview, I hope this has been informative and fun! If you are looking to dive deeper into the theory of RL, give Arthur’s posts a read, or diving deeper by following David Silver’s UCL course. If you are looking to learn more about the projects we do at Insight, or how we work with companies, please check us out below, or reach out to me here. Want to learn about applied Artificial Intelligence from leading practitioners in Silicon Valley, New York, or Toronto? Learn more about the Insight Artificial Intelligence Fellows Program. Are you a company working in AI and would like to get involved in the Insight AI Fellows Program? Feel free to get in touch. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. AI Lead at Insight AI @EmmanuelAmeisen Insight Fellows Program - Your bridge to a career in data " Irhum Shafkat,2K,15,https://towardsdatascience.com/intuitively-understanding-convolutions-for-deep-learning-1f6f42faee1?source=---------4----------------,Intuitively Understanding Convolutions for Deep Learning,"The advent of powerful and versatile deep learning frameworks in recent years has made it possible to implement convolution layers into a deep learning model an extremely simple task, often achievable in a single line of code. 
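For instance, in a framework such as PyTorch, that single line might look like the following (the channel counts here are arbitrary examples):

import torch.nn as nn
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)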
However, understanding convolutions, especially for the first time can often feel a bit unnerving, with terms like kernels, filters, channels and so on all stacked onto each other. Yet, convolutions as a concept are fascinatingly powerful and highly extensible, and in this post, we’ll break down the mechanics of the convolution operation, step-by-step, relate it to the standard fully connected network, and explore just how they build up a strong visual hierarchy, making them powerful feature extractors for images. The 2D convolution is a fairly simple operation at heart: you start with a kernel, which is simply a small matrix of weights. This kernel “slides” over the 2D input data, performing an elementwise multiplication with the part of the input it is currently on, and then summing up the results into a single output pixel. The kernel repeats this process for every location it slides over, converting a 2D matrix of features into yet another 2D matrix of features. The output features are essentially, the weighted sums (with the weights being the values of the kernel itself) of the input features located roughly in the same location of the output pixel on the input layer. Whether or not an input feature falls within this “roughly same location”, gets determined directly by whether it’s in the area of the kernel that produced the output or not. This means the size of the kernel directly determines how many (or few) input features get combined in the production of a new output feature. This is all in pretty stark contrast to a fully connected layer. In the above example, we have 5×5=25 input features, and 3×3=9 output features. If this were a standard fully connected layer, you’d have a weight matrix of 25×9 = 225 parameters, with every output feature being the weighted sum of every single input feature. Convolutions allow us to do this transformation with only 9 parameters, with each output feature, instead of “looking at” every input feature, only getting to “look” at input features coming from roughly the same location. Do take note of this, as it’ll be critical to our later discussion. Before we move on, it’s definitely worth looking into two techniques that are commonplace in convolution layers: Padding and Strides. Padding does something pretty clever to solve this: pad the edges with extra, “fake” pixels (usually of value 0, hence the oft-used term “zero padding”). This way, the kernel when sliding can allow the original edge pixels to be at its center, while extending into the fake pixels beyond the edge, producing an output the same size as the input. The idea of the stride is to skip some of the slide locations of the kernel. A stride of 1 means to pick slides a pixel apart, so basically every single slide, acting as a standard convolution. A stride of 2 means picking slides 2 pixels apart, skipping every other slide in the process, downsizing by roughly a factor of 2, a stride of 3 means skipping every 2 slides, downsizing roughly by factor 3, and so on. More modern networks, such as the ResNet architectures entirely forgo pooling layers in their internal layers, in favor of strided convolutions when needing to reduce their output sizes. Of course, the diagrams above only deals with the case where the image has a single input channel. In practicality, most input images have 3 channels, and that number only increases the deeper you go into a network. 
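Before getting to channels, here is a minimal single-channel sketch of the slide-multiply-sum operation described above, with zero padding and stride, in plain NumPy (written to mirror the description, not any particular library's implementation):

import numpy as np

def conv2d(image, kernel, stride=1, padding=0):
    # image: (H, W) array; kernel: (k, k) matrix of weights
    if padding:
        image = np.pad(image, padding)               # zero padding around the edges
    k = kernel.shape[0]
    H, W = image.shape
    out_h = (H - k) // stride + 1
    out_w = (W - k) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i * stride:i * stride + k, j * stride:j * stride + k]
            out[i, j] = np.sum(patch * kernel)       # elementwise multiply, then sum to one output pixel
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0                        # a simple 3x3 averaging kernel
print(conv2d(image, kernel).shape)                    # (3, 3): the 5x5 -> 3x3 case from above
print(conv2d(image, kernel, padding=1).shape)         # (5, 5): 'same' output size thanks to zero padding
print(conv2d(image, kernel, stride=2).shape)          # (2, 2): downsized by roughly a factor of 2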
It’s pretty easy to think of channels, in general, as being a “view” of the image as a whole, emphasising some aspects, de-emphasising others. So this is where a key distinction between terms comes in handy: whereas in the 1 channel case, where the term filter and kernel are interchangeable, in the general case, they’re actually pretty different. Each filter actually happens to be a collection of kernels, with there being one kernel for every single input channel to the layer, and each kernel being unique. Each filter in a convolution layer produces one and only one output channel, and they do it like so: Each of the kernels of the filter “slides” over their respective input channels, producing a processed version of each. Some kernels may have stronger weights than others, to give more emphasis to certain input channels than others (eg. a filter may have a red kernel channel with stronger weights than others, and hence, respond more to differences in the red channel features than the others). Each of the per-channel processed versions are then summed together to form one channel. The kernels of a filter each produce one version of each channel, and the filter as a whole produces one overall output channel. Finally, then there’s the bias term. The way the bias term works here is that each output filter has one bias term. The bias gets added to the output channel so far to produce the final output channel. And with the single filter case down, the case for any number of filters is identical: Each filter processes the input with its own, different set of kernels and a scalar bias with the process described above, producing a single output channel. They are then concatenated together to produce the overall output, with the number of output channels being the number of filters. A nonlinearity is then usually applied before passing this as input to another convolution layer, which then repeats this process. Even with the mechanics of the convolution layer down, it can still be hard to relate it back to a standard feed-forward network, and it still doesn’t explain why convolutions scale to, and work so much better for image data. Suppose we have a 4×4 input, and we want to transform it into a 2×2 grid. If we were using a feedforward network, we’d reshape the 4×4 input into a vector of length 16, and pass it through a densely connected layer with 16 inputs and 4 outputs. One could visualize the weight matrix W for a layer: And although the convolution kernel operation may seem a bit strange at first, it is still a linear transformation with an equivalent transformation matrix. If we were to use a kernel K of size 3 on the reshaped 4×4 input to get a 2×2 output, the equivalent transformation matrix would be: (Note: while the above matrix is an equivalent transformation matrix, the actual operation is usually implemented as a very different matrix multiplication[2]) The convolution then, as a whole, is still a linear transformation, but at the same time it’s also a dramatically different kind of transformation. For a matrix with 64 elements, there’s just 9 parameters which themselves are reused several times. Each output node only gets to see a select number of inputs (the ones inside the kernel). There is no interaction with any of the other inputs, as the weights to them are set to 0. It’s useful to see the convolution operation as a hard prior on the weight matrix. In this context, by prior, I mean predefined network parameters. 
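To extend that single-channel picture to the filters-as-collections-of-kernels description above, here is a rough sketch using SciPy's correlate2d for the per-channel slide (the shapes and sizes are illustrative):

import numpy as np
from scipy.signal import correlate2d

def conv_layer(image, filters, biases):
    # image: (in_channels, H, W); filters: (out_channels, in_channels, k, k); biases: (out_channels,)
    outputs = []
    for f in range(filters.shape[0]):
        # each filter holds one kernel per input channel; the per-channel results are summed...
        channel = sum(correlate2d(image[c], filters[f, c], mode="valid")
                      for c in range(image.shape[0]))
        outputs.append(channel + biases[f])           # ...and the filter's single bias is added
    return np.stack(outputs)                          # one output channel per filter

rgb = np.random.rand(3, 5, 5)                         # a toy 3-channel "image"
filters = np.random.rand(16, 3, 3, 3)                 # 16 filters, each a stack of three 3x3 kernels
print(conv_layer(rgb, filters, np.zeros(16)).shape)   # (16, 3, 3): the number of filters sets the output channels

With that picture in place, back to the idea of the convolution as a hard prior on the weight matrix.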
For example, when you use a pretrained model for image classification, you use the pretrained network parameters as your prior, as a feature extractor for your final densely connected layer. In that sense, there’s a direct intuition for why both are so efficient (compared to their alternatives). Transfer learning is efficient by orders of magnitude compared to random initialization, because you only really need to optimize the parameters of the final fully connected layer, which means you can have fantastic performance with only a few dozen images per class. Here, you don’t need to optimize all 64 parameters, because we set most of them to zero (and they’ll stay that way), and the rest we convert to shared parameters, resulting in only 9 actual parameters to optimize. This efficiency matters, because when you move from the 784 inputs of MNIST to real world 224×224×3 images, that’s over 150,000 inputs. A dense layer attempting to halve the input to 75,000 outputs would still require over 10 billion parameters. For comparison, the entirety of ResNet-50 has some 25 million parameters. So fixing some parameters to 0 and tying parameters increases efficiency, but unlike the transfer learning case, where we know the prior is good because it works on a large general set of images, how do we know this is any good? The answer lies in the feature combinations the prior leads the parameters to learn. Early on in this article, we discussed that each output feature only gets to look at input features coming from roughly the same location. So with backpropagation coming in all the way from the classification nodes of the network, the kernels have the interesting task of learning weights to produce features only from a set of local inputs. Additionally, because the kernel itself is applied across the entire image, the features the kernel learns must be general enough to come from any part of the image. If this were any other kind of data, e.g. categorical data about app installs, this would’ve been a disaster: just because your number-of-installs and app-type columns happen to sit next to each other doesn’t mean they have any “local, shared features” in common with the install-date and time-used columns. Sure, the four may have an underlying higher level feature (e.g. which apps people want most) that can be found, but that gives us no reason to believe the parameters for the first two are exactly the same as the parameters for the latter two. The four could’ve been in any (consistent) order and still be valid! Pixels, however, always appear in a consistent order, and nearby pixels influence one another: if all nearby pixels are red, it’s pretty likely the pixel is also red. If there are deviations, that’s an interesting anomaly that could be converted into a feature, and all this can be detected from comparing a pixel with its neighbors, with the other pixels in its locality. And this idea is really what a lot of earlier computer vision feature extraction methods were based around. For instance, for edge detection, one can use a Sobel edge detection filter, a kernel with fixed parameters, operating just like the standard one-channel convolution: For a grid containing no edge (e.g. the background sky), most of the pixels have the same value, so the overall output of the kernel at that point is 0. For a grid with a vertical edge, there is a difference between the pixels to the left and right of the edge, and the kernel computes that difference to be non-zero, activating and revealing the edges.
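As a tiny worked example of such a fixed-parameter kernel, the sketch below applies the classic Sobel vertical-edge kernel to a made-up image that is dark on the left and bright on the right: the flat regions produce zero response, while the columns straddling the edge light up, exactly the behaviour described above. The loop computes a plain correlation, and the toy image is illustrative only.

```python
import numpy as np

# Sobel kernel for vertical edges: it responds to horizontal changes in intensity.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy image: dark (0) on the left half, bright (1) on the right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

def correlate2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

response = correlate2d(image, sobel_x)
print(response)
# Columns over the flat regions are 0; the two columns straddling the edge come out as 4,
# which is the "activating and revealing the edges" behaviour described in the text.
```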
The kernel only works on a 3×3 grid at a time, detecting anomalies on a local scale, yet when applied across the entire image, it is enough to detect a certain feature on a global scale, anywhere in the image! So the key difference we make with deep learning is to ask this question: can useful kernels be learnt? For early layers operating on raw pixels, we could reasonably expect feature detectors of fairly low level features, like edges, lines, etc. There’s an entire branch of deep learning research focused on making neural network models interpretable. One of the most powerful tools to come out of that is Feature Visualization using optimization[3]. The idea at its core is simple: optimize an image (usually initialized with random noise) to activate a filter as strongly as possible. This does make intuitive sense: if the optimized image is completely filled with edges, that’s strong evidence that’s what the filter itself is looking for and is activated by. Using this, we can peek into the learnt filters, and the results are stunning: One important thing to notice here is that convolved images are still images. The output of a small grid of pixels from the top left of an image will still be in the top left. So you can run another convolution layer on top of another (such as the two on the left) to extract deeper features, which we visualize. Yet, however deep our feature detectors get, without any further changes they’ll still be operating on very small patches of the image. No matter how deep your detectors are, you can’t detect faces from a 3×3 grid. And this is where the idea of the receptive field comes in. An essential design choice of any CNN architecture is that the input sizes grow smaller and smaller from the start to the end of the network, while the number of channels grows larger. This, as mentioned earlier, is often done through strides or pooling layers. Locality determines what inputs from the previous layer the outputs get to see. The receptive field determines what area of the original input to the entire network the output gets to see. The idea of a strided convolution is that we only process slides a fixed distance apart, and skip the ones in the middle. From a different point of view, we only keep outputs a fixed distance apart, and remove the rest[1]. We then apply a nonlinearity to the output and, as usual, stack another convolution layer on top. And this is where things get interesting. Even if we were to apply a kernel of the same size (3×3), covering the same local area, to the output of the strided convolution, the kernel would have a larger effective receptive field: This is because the output of the strided layer still represents the same image. It is not so much cropping as it is resizing; the only difference is that each single pixel in the output is a “representative” of a larger area (whose other pixels were discarded) at the same rough location in the original input. So when the next layer’s kernel operates on the output, it’s operating on pixels collected from a larger area.
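The growth of the receptive field under stacked convolutions follows a simple, standard recurrence: each layer with kernel size k adds (k − 1) times the product of all previous strides to the receptive field, while the strides multiply together to give the "jump" between neighbouring outputs measured on the original input. The small sketch below, with a made-up layer stack, shows how quickly a few stride-2 stages let a plain 3×3 kernel see tens of input pixels.

```python
# Receptive field of a stack of conv layers, each given as (kernel_size, stride).
# r: receptive field measured in input pixels.
# j: "jump", the distance between adjacent outputs on the original input
#    (i.e. the product of all strides seen so far).
def receptive_field(layers):
    r, j = 1, 1
    for k, s in layers:
        r = r + (k - 1) * j
        j = j * s
    return r, j

# Hypothetical stack: alternating stride-2 downsampling convs and stride-1 3x3 convs.
layers = [(3, 2), (3, 1), (3, 2), (3, 1), (3, 2), (3, 1)]
r, j = receptive_field(layers)
print(r, j)   # 43 8: each output sees 43 input pixels; adjacent outputs are 8 input pixels apart
```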
(Note: if you’re familiar with dilated convolutions, note that the above is not a dilated convolution. Both are methods of increasing the receptive field, but dilated convolutions are a single layer, while this takes place on a regular convolution following a strided convolution, with a nonlinearity in between.) This expansion of the receptive field allows the convolution layers to combine the low level features (lines, edges) into higher level features (curves, textures), as we see in the mixed3a layer. Followed by a pooling/strided layer, the network continues to create detectors for even higher level features (parts, patterns), as we see for mixed4a. The repeated reduction in image size across the network results in, by the 5th block of convolutions, input sizes of just 7×7, compared to inputs of 224×224. At this point, each single pixel represents a grid of 32×32 pixels, which is huge. Compared to earlier layers, where an activation meant detecting an edge, here an activation on the tiny 7×7 grid is one for a very high level feature, such as birds. The network as a whole progresses from a small number of filters (64 in the case of GoogLeNet), detecting low level features, to a very large number of filters (1024 in the final convolution), each looking for an extremely specific high level feature. This is followed by a final pooling layer, which collapses each 7×7 grid into a single pixel, so that each channel is a feature detector with a receptive field equivalent to the entire image. Compared to what a standard feedforward network would have done, the output here is really nothing short of awe-inspiring. A standard feedforward network would have produced abstract feature vectors from combinations of every single pixel in the image, requiring intractable amounts of data to train. The CNN, with the priors imposed on it, starts by learning very low level feature detectors, and as its receptive field expands across the layers, it learns to combine those low-level features into progressively higher level features; not an abstract combination of every single pixel, but rather a strong visual hierarchy of concepts. By detecting low level features, and using them to detect higher level features as it progresses up its visual hierarchy, it is eventually able to detect entire visual concepts such as faces, birds, trees, etc., and that’s what makes CNNs so powerful, yet so efficient, with image data. With the visual hierarchy CNNs build, it seems pretty reasonable to assume that their vision systems are similar to those of humans. And they really are great with real world images, but they also fail in ways that strongly suggest their vision systems aren’t entirely human-like. The most serious problem: Adversarial Examples[4], examples which have been specifically modified to fool the model. Adversarial examples would be a non-issue if the only tampered examples that caused the models to fail were ones that even humans would notice. The problem is, the models are susceptible to attacks by samples which have only been tampered with ever so slightly, and which would clearly not fool any human. This opens the door for models to silently fail, which can be pretty dangerous for a wide range of applications from self-driving cars to healthcare. Robustness against adversarial attacks is currently a highly active area of research, the subject of many papers and even competitions, and the solutions will certainly help CNN architectures become safer and more reliable.
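The article doesn't name a specific attack, but the canonical illustration of such slight tampering is the Fast Gradient Sign Method (FGSM): nudge every input value a tiny step ε in the direction that increases the loss. Below is a hedged PyTorch sketch with a placeholder model and label, assuming a classifier trained with cross-entropy; it is a sketch of the general idea, not the method used in the referenced paper.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.01):
    """Return an adversarial copy of x, perturbed by epsilon in the loss-increasing direction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # A tiny, human-imperceptible step per pixel is often enough to flip the prediction
    # of a real trained classifier.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Placeholder (untrained) model and data, purely for illustration.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 1000))
x = torch.rand(1, 3, 224, 224)   # an "image"
y = torch.tensor([42])           # its true class

x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())   # the perturbation is bounded by epsilon
```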
CNNs were the models that allowed computer vision to scale from simple applications to powering sophisticated products and services, ranging from face detection in your photo gallery to making better medical diagnoses. They might be the key method in computer vision going forward, or some other new breakthrough might just be around the corner. Regardless, one thing is for sure: they’re nothing short of amazing, at the heart of many present-day innovative applications, and are most certainly worth deeply understanding. Hope you enjoyed this article! If you’d like to stay connected, you’ll find me on Twitter here. If you have a question, comments are welcome! — I find them to be useful to my own learning process as well. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Curious programmer, tinkers around in Python and deep learning. Sharing concepts, ideas, and codes. " Sam Drozdov,2.3K,6,https://uxdesign.cc/an-intro-to-machine-learning-for-designers-5c74ba100257?source=---------5----------------,An intro to Machine Learning for designers – UX Collective,"There is an ongoing debate about whether or not designers should write code. Wherever you fall on this issue, most people would agree that designers should know about code. This helps designers understand constraints and empathize with developers. It also allows designers to think outside of the pixel perfect box when problem solving. For the same reasons, designers should know about machine learning. Put simply, machine learning is a “field of study that gives computers the ability to learn without being explicitly programmed” (Arthur Samuel, 1959). Even though Arthur Samuel coined the term over fifty years ago, only recently have we seen the most exciting applications of machine learning — digital assistants, autonomous driving, and spam-free email all exist thanks to machine learning. Over the past decade new algorithms, better hardware, and more data have made machine learning an order of magnitude more effective. Only in the past few years companies like Google, Amazon, and Apple have made some of their powerful machine learning tools available to developers. Now is the best time to learn about machine learning and apply it to the products you are building. Since machine learning is now more accessible than ever before, designers today have the opportunity to think about how machine learning can be applied to improve their products. Designers should be able to talk with software developers about what is possible, how to prepare, and what outcomes to expect. Below are a few example applications that should serve as inspiration for these conversations. Machine learning can help create user-centric products by personalizing experiences to the individuals who use them. This allows us to improve things like recommendations, search results, notifications, and ads. Machine learning is effective at finding abnormal content. Credit card companies use this to detect fraud, email providers use this to detect spam, and social media companies use this to detect things like hate speech. Machine learning has enabled computers to begin to understand the things we say (natural-language processing) and the things we see (computer vision). This allows Siri to understand “Siri, set a reminder...”, Google Photos to create albums of your dog, and Facebook to describe a photo to those visually impaired. Machine learning is also helpful in understanding how users are grouped. 
This insight can then be used to look at analytics on a group-by-group basis. From here, different features can be evaluated across groups or be rolled out to only a particular group of users. Machine learning allows us to make predictions about how a user might behave next. Knowing this, we can help prepare for a user’s next action. For example, if we can predict what content a user is planning on viewing, we can preload that content so it’s immediately ready when they want it. Depending on the application and what data is available, there are different types of machine learning algorithms to choose from. I’ll briefly cover each of the following. Supervised learning allows us to make predictions using correctly labeled data. Labeled data is a group of examples that has informative tags or outputs. For example, photos with associated hashtags, or a house’s features (e.g. number of bedrooms, location) and its price. By using supervised learning we can fit a line to the labeled data that either splits the data into categories or represents the trend of the data. Using this line we are able to make predictions on new data. For example, we can look at new photos and predict hashtags, or look at a new house’s features and predict its price. If the output we are trying to predict is a category or a set of tags, we call it classification. If the output we are trying to predict is a number, we call it regression. Unsupervised learning is helpful when we have unlabeled data or we are not exactly sure what outputs (like an image’s hashtags or a house’s price) are meaningful. Instead we can identify patterns among unlabeled data. For example, we can identify related items on an e-commerce website or recommend items to someone based on others who made similar purchases. If the pattern is a group we call it a cluster. If the pattern is a rule (e.g. if this, then that) we call it an association. Reinforcement learning doesn’t use an existing data set. Instead we create an agent to collect its own data through trial-and-error in an environment where it is reinforced with a reward. For example, an agent can learn to play Mario by receiving a positive reward for collecting coins and a negative reward for walking into a Goomba. Reinforcement learning is inspired by the way that humans learn and has turned out to be an effective way to teach computers. Specifically, reinforcement learning has been effective at training computers to play games like Go and Dota. Understanding the problem you are trying to solve and the available data will constrain the types of machine learning you can use (e.g. identifying objects in an image with supervised learning requires a labeled data set of images). However, constraints often spark creativity. In some cases, you can set out to collect data that is not already available or consider other approaches. Even though machine learning is a science, it comes with a margin of error. It is important to consider how a user’s experience might be impacted by this margin of error. For example, when an autonomous car fails to recognize its surroundings people can get hurt. Even though machine learning has never been as accessible as it is today, it still requires additional resources (developers and time) to be integrated into a product. This makes it important to think about whether the resulting impact justifies the amount of resources needed to implement.
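To make the supervised vs. unsupervised distinction above concrete, here is a small hedged sketch using scikit-learn on invented data: a regression fit on labeled house examples, and a clustering step that groups unlabeled users. The feature values and the choice of three clusters are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

# Supervised learning (regression): labeled data = features plus a known price.
X_houses = np.array([[2, 1], [3, 2], [4, 3], [5, 3]])   # [bedrooms, bathrooms]
y_prices = np.array([200_000, 300_000, 400_000, 450_000])
reg = LinearRegression().fit(X_houses, y_prices)
print(reg.predict([[4, 2]]))   # predicted price for a new, unseen house

# Unsupervised learning (clustering): no labels, just look for groups of similar users.
X_users = np.random.rand(100, 2)            # e.g. [visits per week, average order value]
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X_users)
print(np.bincount(clusters))                # how many users landed in each group
```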
We have barely covered the tip of the iceberg, but hopefully at this point you feel more comfortable thinking about how machine learning can be applied to your product. If you are interested in learning more about machine learning, here are some helpful resources: Thanks for reading. Chat with me on Twitter @samueldrozdov From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Digital Product Designer samueldrozdov.com Curated stories on user experience, usability, and product design. By @fabriciot and @caioab. " Conor Dewey,252,10,https://towardsdatascience.com/the-big-list-of-ds-ml-interview-resources-2db4f651bd63?source=---------6----------------,The Big List of DS/ML Interview Resources – Towards Data Science,"Data science interviews certainly aren’t easy. I know this first hand. I’ve participated in over 50 individual interviews and phone screens while applying for competitive internships over the last calendar year. Through this exciting and somewhat (at times, very) painful process, I’ve accumulated a plethora of useful resources that helped me prepare for and eventually pass data science interviews. Long story short, I’ve decided to sort through all my bookmarks and notes in order to deliver a comprehensive list of data science resources. With this list by your side, you should have more than enough effective tools at your disposal next time you’re prepping for a big interview. It’s worth noting that many of these resources are naturally going to geared towards entry-level and intern data science positions, as that’s where my expertise lies. Keep that in mind and enjoy! Here’s some of the more general resources covering data science as a whole. Specifically, I highly recommend checking out the first two links regarding 120 Data Science Interview Questions. While the ebook itself is a couple bucks out of pocket, the answers themselves are free on Quora. These were some of my favorite full-coverage questions to practice with right before an interview. Even Data Scientists cannot escape the dreaded algorithmic coding interview. In my experience, this isn’t the case 100% of the time, but chances are you’ll be asked to work through something similar to an easy or medium question on LeetCode or HackerRank. As far as language goes, most companies will let you use whatever language you want. Personally, I did almost all of my algorithmic coding in Java even though the positions were targeted at Python and R programmers. If I had to recommend one thing, it’s to break out your wallet and invest in Cracking the Coding Interview. It absolutely lives up to the hype. I plan to continue using it for years to come. Once the interviewer knows that you can think-through problems and code effectively, chances are that you’ll move onto some more data science specific applications. Depending on the interviewer and the position, you will likely be able to choose between Python and R as your tool of choice. Since I’m partial to Python, my resources below will primarily focus on effectively using Pandas and NumPy for data analysis. A data science interview typically isn’t complete without checking your knowledge of SQL. This can be done over the phone or through a live coding question, more likely the latter. I’ve found that the difficulty level of these questions can vary a good bit, ranging from being painfully easy to requiring complex joins and obscure functions. Our good friend, statistics is still crucial for Data Scientists and it’s reflected as such in interviews. 
I had many interviews begin by seeing if I could explain a common statistics or probability concept in simple and concise terms. As positions get more experienced, I suspect this happens less and less as traditional statistical questions begin to take the more practical form of A/B testing scenarios, covered later in the post. You’ll notice that I’ve compiled a few more resources here than in other sections. This isn’t a mistake. Machine learning is a complex field that is a virtual guarantee in data science interviews today. The way you’ll be tested on it, however, is far from guaranteed. It may come up as a conceptual question regarding cross validation or bias-variance tradeoff, or it may take the form of a take home assignment with a dataset attached. I’ve seen both several times, so you’ve got to be prepared for anything. Specifically, check out the Machine Learning Flashcards below; they’re only a couple bucks and were by far my favorite way to quiz myself on any conceptual ML stuff. This won’t be covered in every single data science interview, but it’s certainly not uncommon. Most interviews will have at least one section solely dedicated to product thinking, which often lends itself to A/B testing of some sort. Make sure you’re familiar with the concepts and statistical background necessary in order to be prepared when it comes up. If you have time to spare, I took the free online course by Udacity and overall, I was pretty impressed. Lastly, I wanted to call out all of the posts related to data science jobs and interviewing that I read over and over again to understand, not only how to prepare, but what to expect as well. If you only check out one section here, this is the one to focus on. This is the layer that sits on top of all the technical skills and application. Don’t overlook it. I hope you find these resources useful during your next interview or job search. I know I did; truthfully I’m just glad that I saved these links somewhere. Lastly, this post is part of an ongoing initiative to ‘open-source’ my experience applying and interviewing at data science positions, so if you enjoyed this content then be sure to follow me for more stuff like this. If you’re interested in receiving my weekly rundown of interesting articles and resources focused on data science, machine learning, and artificial intelligence, then subscribe to Self Driven Data Science using the form below! If you enjoyed this post, feel free to hit the clap button and if you’re interested in posts to come, make sure to follow me on Medium at the link below — I’ll be writing and shipping every day this month as part of a 30-Day Challenge. This article was originally published on conordewey.com From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Data Scientist & Writer | www.conordewey.com Sharing concepts, ideas, and codes. " Abhishek Parbhakar,937,6,https://towardsdatascience.com/must-know-information-theory-concepts-in-deep-learning-ai-e54a5da9769d?source=---------7----------------,Must know Information Theory concepts in Deep Learning (AI),"Information theory is an important field that has made significant contributions to deep learning and AI, and yet is unknown to many. Information theory can be seen as a sophisticated amalgamation of basic building blocks of deep learning: calculus, probability and statistics.
Some examples of concepts in AI that come from Information theory or related fields: In the early 20th century, scientists and engineers were struggling with the question: “How do we quantify information? Is there an analytical way or a mathematical measure that can tell us about the information content?”. For example, consider the two sentences below: It is not difficult to tell that the second sentence gives us more information, since it also tells us that Bruno is “big” and “brown” in addition to being a “dog”. How can we quantify the difference between the two sentences? Can we have a mathematical measure that tells us how much more information the second sentence has compared to the first? Scientists were struggling with these questions. Semantics, domain and form of data only added to the complexity of the problem. Then, mathematician and engineer Claude Shannon came up with the idea of “Entropy” that changed our world forever and marked the beginning of the “Digital Information Age”. Shannon proposed that the “semantic aspects of data are irrelevant”: the nature and meaning of data don’t matter when it comes to information content. Instead he quantified information in terms of probability distribution and “uncertainty”. Shannon also introduced the term “bit”, which he humbly credited to his colleague John Tukey. This revolutionary idea not only laid the foundation of Information Theory but also opened new avenues for progress in fields like artificial intelligence. Below we discuss four popular, widely used and must-know information-theoretic concepts in deep learning and data science: Also called Information Entropy or Shannon Entropy. Entropy gives a measure of uncertainty in an experiment. Let’s consider two experiments: If we compare the two experiments, in exp 2 it is easier to predict the outcome than in exp 1. So, we can say that exp 1 is inherently more uncertain/unpredictable than exp 2. This uncertainty in the experiment is measured using entropy. Therefore, if there is more inherent uncertainty in the experiment, then it has higher entropy. Or, the less predictable the experiment, the higher the entropy. The probability distribution of the experiment is used to calculate the entropy. A deterministic experiment, which is completely predictable, say tossing a coin with P(H)=1, has entropy zero. An experiment which is completely random, say rolling a fair die, is the least predictable, has maximum uncertainty, and has the highest entropy among such experiments. Another way to look at entropy is as the average information gained when we observe the outcomes of a random experiment. The information gained from an outcome of an experiment is defined as a function of the probability of occurrence of that outcome. The rarer the outcome, the more information is gained from observing it. For example, in a deterministic experiment, we always know the outcome, so no new information is gained from observing it, and hence the entropy is zero. For a discrete random variable X, with possible outcomes (states) x_1,...,x_n, the entropy, in units of bits, is defined as H(X) = −Σ_i p(x_i) log2 p(x_i), where p(x_i) is the probability of the i-th outcome of X. Cross entropy is used to compare two probability distributions. It tells us how similar two distributions are. Cross entropy between two probability distributions p and q defined over the same set of outcomes is given by H(p, q) = −Σ_x p(x) log q(x).
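Here is a small hedged NumPy sketch of the two definitions just given (entropy in bits, and cross entropy also taken base 2 here so the units match): a deterministic coin has zero entropy, a fair coin has one bit, a fair die has the highest entropy among six-outcome experiments, and cross entropy is smallest when the model q matches the true p. The example distributions are illustrative.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits: H = -sum(p * log2(p)), ignoring zero-probability outcomes."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def cross_entropy(p, q):
    """Cross entropy H(p, q) = -sum(p * log2(q)) of true distribution p under model q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return -np.sum(p[mask] * np.log2(q[mask]))

print(entropy([1.0, 0.0]))        # 0.0   -> deterministic coin, P(H)=1: no uncertainty
print(entropy([0.5, 0.5]))        # 1.0   -> fair coin: one bit of uncertainty
print(entropy([1/6] * 6))         # ~2.585 -> fair die: the most uncertain 6-outcome experiment

print(cross_entropy([0.5, 0.5], [0.5, 0.5]))   # 1.0   -> equals the entropy when q == p
print(cross_entropy([0.5, 0.5], [0.9, 0.1]))   # ~1.74 -> larger when q diverges from p
```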
Mutual information is a measure of the mutual dependency between two probability distributions or random variables. It tells us how much information about one variable is carried by another variable. Mutual information captures dependency between random variables and is more general than the vanilla correlation coefficient, which captures only linear relationships. The mutual information of two discrete random variables X and Y is defined as I(X; Y) = Σ_x Σ_y p(x, y) log[ p(x, y) / (p(x) p(y)) ], where p(x,y) is the joint probability distribution of X and Y, and p(x) and p(y) are the marginal probability distributions of X and Y respectively. Also called Relative Entropy. KL divergence is another measure of the similarity between two probability distributions. It measures how much one distribution diverges from the other. Suppose we have some data and the true distribution underlying it is ‘P’. But we don’t know this ‘P’, so we choose a new distribution ‘Q’ to approximate this data. Since ‘Q’ is just an approximation, it won’t be able to approximate the data as well as ‘P’, and some information loss will occur. This information loss is given by the KL divergence. The KL divergence between ‘P’ and ‘Q’ tells us how much information we lose when we try to approximate data given by ‘P’ with ‘Q’. The KL divergence of a probability distribution Q from another probability distribution P is defined as D_KL(P || Q) = Σ_x P(x) log[ P(x) / Q(x) ]. KL divergence is commonly used in Variational Autoencoders, an unsupervised machine learning technique. (A short numerical sketch of both mutual information and KL divergence appears a little further below.) Information Theory was originally formulated by mathematician and electrical engineer Claude Shannon in his seminal paper “A Mathematical Theory of Communication” in 1948. Note: the terms experiment, random variable, AI, machine learning, deep learning, and data science have been used loosely above but technically have different meanings. In case you liked the article, do follow me, Abhishek Parbhakar, for more articles related to AI, philosophy and economics. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Finding equilibria among AI, philosophy, and economics. Sharing concepts, ideas, and codes. " Aman Dalmia,2.3K,17,https://blog.usejournal.com/what-i-learned-from-interviewing-at-multiple-ai-companies-and-start-ups-a9620415e4cc?source=---------8----------------,What I learned from interviewing at multiple AI companies and start-ups,"Over the past 8 months, I’ve been interviewing at various companies like Google’s DeepMind, Wadhwani Institute of AI, Microsoft, Ola, Fractal Analytics, and a few others, primarily for the roles of Data Scientist, Software Engineer & Research Engineer. In the process, not only did I get an opportunity to interact with many great minds, but I also had a peek at myself along with a sense of what people really look for when interviewing someone. I believe that if I’d had this knowledge before, I could have avoided many mistakes and prepared in a much better manner, which is the motivation behind this post: to help someone bag their dream place of work. This post arose from a discussion with one of my juniors on the lack of really fulfilling job opportunities offered through campus placements for people working in AI. Also, when I was preparing, I noticed people using a lot of resources, but as per my experience over the past months, I realised that one can do away with a few minimal ones for most roles in AI, all of which I’m going to mention at the end of the post. I begin with How to get noticed a.k.a. the interview. Then I provide a List of companies and start-ups to apply to, which is followed by How to ace that interview.
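Returning to the information-theory definitions above, here is the promised companion sketch for mutual information and KL divergence, using the formulas as written (natural log, so the results are in nats). The joint distributions are made up for illustration; note that the KL divergence is zero exactly when Q matches P, and the mutual information is zero for independent variables.

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(P || Q) = sum(P * log(P / Q)): the information lost when Q approximates P."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def mutual_information(joint):
    """I(X; Y) = sum over x,y of p(x,y) * log(p(x,y) / (p(x) * p(y))) for a joint probability table."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal distribution of X
    py = joint.sum(axis=0, keepdims=True)   # marginal distribution of Y
    mask = joint > 0
    return np.sum(joint[mask] * np.log(joint[mask] / (px * py)[mask]))

p = [0.7, 0.2, 0.1]
print(kl_divergence(p, p))            # 0.0: no information lost when Q is exactly P
print(kl_divergence(p, [1/3] * 3))    # > 0: approximating P with a uniform Q loses information

independent = np.outer([0.5, 0.5], [0.5, 0.5])   # X and Y carry no information about each other
correlated = np.array([[0.45, 0.05],
                       [0.05, 0.45]])            # X and Y usually agree
print(mutual_information(independent))           # 0.0
print(mutual_information(correlated))            # > 0 (in nats)
```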
Based on whatever experience I’ve had, I add a section on What we should strive to work for. I conclude with Minimal Resources you need for preparation. NOTE: For people who are sitting for campus placements, there are two things I’d like to add. Firstly, most of what I’m going to say (except for the last one maybe) is not going to be relevant to you for placements. But, and this is my second point, as I mentioned before, opportunities on campus are mostly in software engineering roles having no intersection with AI. So, this post is specifically meant for people who want to work on solving interesting problems using AI. Also, I want to add that I haven’t cleared all of these interviews but I guess that’s the essence of failure — it’s the greatest teacher! The things that I mention here may not all be useful but these are things that I did and there’s no way for me to know what might have ended up making my case stronger. To be honest, this step is the most important one. What makes off-campus placements so tough and exhausting is getting the recruiter to actually go through your profile among the plethora of applications that they get. Having a contact inside the organisation place a referral for you would make it quite easy, but, in general, this part can be sub-divided into three keys steps: a) Do the regulatory preparation and do that well: So, with regulatory preparation, I mean —a LinkedIn profile, a Github profile, a portfolio website and a well-polished CV. Firstly, your CV should be really neat and concise. Follow this guide by Udacity for cleaning up your CV — Resume Revamp. It has everything that I intend to say and I’ve been using it as a reference guide myself. As for the CV template, some of the in-built formats on Overleaf are quite nice. I personally use deedy-resume. Here’s a preview: As it can be seen, a lot of content can be fit into one page. However, if you really do need more than that, then the format linked above would not work directly. Instead, you can find a modified multi-page format of the same here. The next most important thing to mention is your Github profile. A lot of people underestimate the potential of this, just because unlike LinkedIn, it doesn’t have a “Who Viewed Your Profile” option. People DO go through your Github because that’s the only way they have to validate what you have mentioned in your CV, given that there’s a lot of noise today with people associating all kinds of buzzwords with their profile. Especially for data science, open-source has a big role to play too with majority of the tools, implementations of various algorithms, lists of learning resources, all being open-sourced. I discuss the benefits of getting involved in Open-Source and how one can start from scratch in an earlier post here. The bare minimum for now should be: • Create a Github account if you don’t already have one.• Create a repository for each of the projects that you have done.• Add documentation with clear instructions on how to run the code• Add documentation for each file mentioning the role of each function, the meaning of each parameter, proper formatting (e.g. PEP8 for Python) along with a script to automate the previous step (Optional). Moving on, the third step is what most people lack, which is having a portfolio website demonstrating their experience and personal projects. Making a portfolio indicates that you are really serious about getting into the field and adds a lot of points to the authenticity factor. 
Also, you generally have space constraints on your CV and tend to miss out on a lot of details. You can use your portfolio to really delve deep into the details if you want to and it’s highly recommended to include some sort of visualisation or demonstration of the project/idea. It’s really easy to create one too as there are a lot of free platforms with drag-and-drop features making the process really painless. I personally use Weebly which is a widely used tool. It’s better to have a reference to begin with. There are a lot of awesome ones out there but I referred to Deshraj Yadav’s personal website to begin with making mine: Finally, a lot of recruiters and start-ups have nowadays started using LinkedIn as their go-to platform for hiring. A lot of good jobs get posted there. Apart from recruiters, the people working at influential positions are quite active there as well. So, if you can grab their attention, you have a good chance of getting in too. Apart from that, maintaining a clean profile is necessary for people to have the will to connect with you. An important part of LinkedIn is their search tool and for you to show up, you must have the relevant keywords interspersed over your profile. It took me a lot of iterations and re-evaluations to finally have a decent one. Also, you should definitely ask people with or under whom you’ve worked with to endorse you for your skills and add a recommendation talking about their experience of working with you. All of this increases your chance of actually getting noticed. I’ll again point towards Udacity’s guide for LinkedIn and Github profiles. All this might seem like a lot, but remember that you don’t need to do it in a single day or even a week or a month. It’s a process, it never ends. Setting up everything at first would definitely take some effort but once it’s there and you keep updating it regularly as events around you keep happening, you’ll not only find it to be quite easy, but also you’ll be able to talk about yourself anywhere anytime without having to explicitly prepare for it because you become so aware about yourself. b) Stay authentic: I’ve seen a lot of people do this mistake of presenting themselves as per different job profiles. According to me, it’s always better to first decide what actually interests you, what would you be happy doing and then search for relevant opportunities; not the other way round. The fact that the demand for AI talent surpasses the supply for the same gives you this opportunity. Spending time on your regulatory preparation mentioned above would give you an all-around perspective on yourself and help make this decision easier. Also, you won’t need to prepare answers to various kinds of questions that you get asked during an interview. Most of them would come out naturally as you’d be talking about something you really care about. c) Networking: Once you’re done with a), figured out b), Networking is what will actually help you get there. If you don’t talk to people, you miss out on hearing about many opportunities that you might have a good shot at. It’s important to keep connecting with new people each day, if not physically, then on LinkedIn, so that upon compounding it after many days, you have a large and strong network. Networking is NOT messaging people to place a referral for you. 
When I was starting off, I did this mistake way too often until I stumbled upon this excellent article by Mark Meloon, where he talks about the importance of building a real connection with people by offering our help first. Another important step in networking is to get your content out. For example, if you’re good at something, blog about it and share that blog on Facebook and LinkedIn. Not only does this help others, it helps you as well. Once you have a good enough network, your visibility increases multi-fold. You never know how one person from your network liking or commenting on your posts, may help you reach out to a much broader audience including people who might be looking for someone of your expertise. I’m presenting this list in alphabetical order to avoid the misinterpretation of any specific preference. However, I do place a “*” on the ones that I’d personally recommend. This recommendation is based on either of the following: mission statement, people, personal interaction or scope of learning. More than 1 “*” is purely based on the 2nd and 3rd factors. Your interview begins the moment you have entered the room and a lot of things can happen between that moment and the time when you’re asked to introduce yourself — your body language and the fact that you’re smiling while greeting them plays a big role, especially when you’re interviewing for a start-up as culture-fit is something that they extremely care about. You need to understand that as much as the interviewer is a stranger to you, you’re a stranger to him/her too. So, they’re probably just as nervous as you are. It’s important to view the interview as more of a conversation between yourself and the interviewer. Both of you are looking for a mutual fit — you are looking for an awesome place to work at and the interviewer is looking for an awesome person (like you) to work with. So, make sure that you’re feeling good about yourself and that you take the charge of making the initial moments of your conversation pleasant for them. And the easiest way I know how to make that happen is to smile. There are mostly two types of interviews — one, where the interviewer has come with come prepared set of questions and is going to just ask you just that irrespective of your profile and the second, where the interview is based on your CV. I’ll start with the second one. This kind of interview generally begins with a “Can you tell me a bit about yourself?”. At this point, 2 things are a big NO — talking about your GPA in college and talking about your projects in detail. An ideal statement should be about a minute or two long, should give a good idea on what have you been doing till now, and it’s not restricted to academics. You can talk about your hobbies like reading books, playing sports, meditation, etc — basically, anything that contributes to defining you. The interviewer will then take something that you talk about here as a cue for his next question, and then the technical part of the interview begins. The motive of this kind of interview is to really check whether whatever you have written on your CV is true or not: There would be a lot of questions on what could be done differently or if “X” was used instead of “Y”, what would have happened. At this point, it’s important to know the kind of trade-offs that is usually made during implementation, for e.g. 
if the interviewer says that using a more complex model would have given better results, then you might say that you actually had less data to work with and that would have lead to overfitting. In one of the interviews, I was given a case-study to work on and it involved designing algorithms for a real-world use case. I’ve noticed that once I’ve been given the green flag to talk about a project, the interviewers really like it when I talk about it in the following flow: Problem > 1 or 2 previous approaches > Our approach > Result > Intuition The other kind of interview is really just to test your basic knowledge. Don’t expect those questions to be too hard. But they would definitely scratch every bit of the basics that you should be having, mainly based around Linear Algebra, Probability, Statistics, Optimisation, Machine Learning and/or Deep Learning. The resources mentioned in the Minimal Resources you need for preparation section should suffice, but make sure that you don’t miss out one bit among them. The catch here is the amount of time you take to answer those questions. Since these cover the basics, they expect that you should be answering them almost instantly. So, do your preparation accordingly. Throughout the process, it’s important to be confident and honest about what you know and what you don’t know. If there’s a question that you’re certain you have no idea about, say it upfront rather than making “Aah”, “Um” sounds. If some concept is really important but you are struggling with answering it, the interviewer would generally (depending on how you did in the initial parts) be happy to give you a hint or guide you towards the right solution. It’s a big plus if you manage to pick their hints and arrive at the correct solution. Try to not get nervous and the best way to avoid that is by, again, smiling. Now we come to the conclusion of the interview where the interviewer would ask you if you have any questions for them. It’s really easy to think that your interview is done and just say that you have nothing to ask. I know many people who got rejected just because of failing at this last question. As I mentioned before, it’s not only you who is being interviewed. You are also looking for a mutual fit with the company itself. So, it’s quite obvious that if you really want to join a place, you must have many questions regarding the work culture there or what kind of role are they seeing you in. It can be as simple as being curious about the person interviewing you. There’s always something to learn from everything around you and you should make sure that you leave the interviewer with the impression that you’re truly interested in being a part of their team. A final question that I’ve started asking all my interviewers, is for a feedback on what they might want me to improve on. This has helped me tremendously and I still remember every feedback that I’ve gotten which I’ve incorporated into my daily life. That’s it. Based on my experience, if you’re just honest about yourself, are competent, truly care about the company you’re interviewing for and have the right mindset, you should have ticked all the right boxes and should be getting a congratulatory mail soon 😄 We live in an era full of opportunities and that applies to anything that you love. You just need to strive to become the best at it and you will find a way to monetise it. 
As Gary Vaynerchuk (just follow him already) says: This is a great time to be working in AI and if you’re truly passionate about it, you have so much that you can do with AI. You can empower so many people that have always been under-represented. We keep nagging about the problems surrounding us, but there’s been never such a time where common people like us can actually do something about those problems, rather than just complaining. Jeffrey Hammerbacher (Founder, Cloudera) had famously said: We can do so much with AI than we can ever imagine. There are many extremely challenging problems out there which require incredibly smart people like you to put your head down on and solve. You can make many lives better. Time to let go of what is “cool”, or what would “look good”. THINK and CHOOSE wisely. Any Data Science interview comprises of questions mostly of a subset of the following four categories: Computer Science, Math, Statistics and Machine Learning. If you’re not familiar with the math behind Deep Learning, then you should consider going over my last post for resources to understand them. However, if you are comfortable, I’ve found that the chapters 2, 3 and 4 of the Deep Learning Book are enough to prepare/revise for theoretical questions during such interviews. I’ve been preparing summaries for a few chapters which you can refer to where I’ve tried to even explain a few concepts that I found challenging to understand at first, in case you are not willing to go through the entire chapters. And if you’ve already done a course on probability, you should be comfortable answering a few numerical as well. For stats, covering these topics should be enough. Now, the range of questions here can vary depending on the type of position you are applying for. If it’s a more traditional Machine Learning based interview where they want to check your basic knowledge in ML, you can complete any one of the following courses:- Machine Learning by Andrew Ng — CS 229- Machine Learning course by Caltech Professor Yaser Abu-Mostafa Important topics are: Supervised Learning (Classification, Regression, SVM, Decision Tree, Random Forests, Logistic Regression, Multi-layer Perceptron, Parameter Estimation, Bayes’ Decision Rule), Unsupervised Learning (K-means Clustering, Gaussian Mixture Models), Dimensionality Reduction (PCA). Now, if you’re applying for a more advanced position, there’s a high chance that you might be questioned on Deep Learning. In that case, you should be very comfortable with Convolutional Neural Networks (CNNs) and/or (depending upon what you’ve worked on) Recurrent Neural Networks (RNNs) and their variants. And by being comfortable, you must know what is the fundamental idea behind Deep Learning, how CNNs/RNNs actually worked, what kind of architectures have been proposed and what has been the motivation behind those architectural changes. Now, there’s no shortcut for this. Either you understand them or you put enough time to understand them. For CNNs, the recommended resource is Stanford’s CS 231N and CS 224N for RNNs. I found this Neural Network class by Hugo Larochelle to be really enlightening too. Refer this for a quick refresher too. Udacity coming to the aid here too. By now, you should have figured out that Udacity is a really important place for an ML practitioner. There are not a lot of places working on Reinforcement Learning (RL) in India and I too am not experienced in RL as of now. So, that’s one thing to add to this post sometime in the future. 
Getting placed off-campus is a long journey of self-realisation. I realise that this has been another long post and I’m again extremely grateful to you for valuing my thoughts. I hope that this post finds a way of being useful to you and that it helped you in some way to prepare for your next Data Science interview better. If it did, I request you to really think about what I talk about in What we should strive to work for. I’m very thankful to my friends from IIT Guwahati for their helpful feedback, especially Ameya Godbole, Kothapalli Vignesh and Prabal Jain. A majority of what I mention here, like “viewing an interview as a conversation” and “seeking feedback from our interviewers”, arose from multiple discussions with Prabal who has been advising me constantly on how I can improve my interviewing skills. This story is published in Noteworthy, where thousands come every day to learn about the people & ideas shaping the products we love. Follow our publication to see more product & design stories featured by the Journal team. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. AI Fanatic • Math Lover • Dreamer The official Journal blog " Lance Ulanoff,15.1K,5,https://medium.com/@LanceUlanoff/did-google-duplex-just-pass-the-turing-test-ffcfe6868b02?source=---------9----------------,Did Google Duplex just pass the Turing Test? – Lance Ulanoff – Medium,"I think it was the first “Um.” That was the moment when I realized I was hearing something extraordinary: A computer carrying out a completely natural and very human-sounding conversation with a real person. And it wasn’t just a random talk. This conversation had a purpose, a destination: to make an appointment at a hair salon. The entity making the call and appointment was Google Assistant running Duplex, Google’s still experimental AI voice system and the venue was Google I/O, Google’s yearly developer conference, which this year focused heavily on the latest developments in AI, Machine- and Deep-Learning. Google CEO Sundar Pichai explained that what we were hearing was a real phone call made to a hair salon that didn’t know it was part of an experiment or that they were talking to a computer. He launched Duplex by asking Google Assistant to book a haircut appointment for Tuesday morning. The AI did the rest. Duplex made the call and, when someone at the salon picked up, the voice AI started the conversation with: “Hi, I’m calling to book a woman’s hair cut appointment for a client, um, I’m looking for something on May third?” When the attendant asked Duplex to give her one second, Duplex responded with: “Mmm-hmm.” The conversation continued as the salon representative presented various dates and times and the AI asked about other options. Eventually, the AI and the salon worker agreed on an appointment date and time. What I heard was so convincing I had trouble discerning who was the salon worker and who (what) was the Duplex AI. It was stunning and somewhat disconcerting. I liken it to the feeling you’d get if a store mannequin suddenly smiled at you. It was easily the most remarkable human-computer conversation I’d ever heard and the closest thing I’ve seen a voice AI passing the Turing Test, which is the AI threshold suggested by Computer Scientist Alan Turing in the 1950s. Turing posited that by 2000 computers would be able to fool humans into thinking they were conversing with other humans at least 30% of the time. He was right. 
In 2014, a chatbot named Eugene Goostman successfully impersonated a wise-ass 14-year old programmer during lengthy text-based chats with unsuspecting humans. Turing, however hadn’t necessarily considered voice-based systems and, for obvious reasons, talking computers are somewhat less adept at fooling humans. Spend a few minutes conversing with your voice assistant of choice and you’ll soon discover their limitations. Their speech can be stilted, pronunciations off and response times can be slow (especially if they’re trying to access a cloud-based server) and forget about conversations. Most can handle two consecutive queries at most and they virtually all require a trigger phrase like “Alexa” or “Hey Siri.” (Google is working on removing unnecessary “Okay Googles” in short back and forth convos with the digital assistant). Google Assistant running Duplex didn’t exhibit any of those short comings. It sounded like a young female assistant carefully scheduling her boss’s haircut. In addition to the natural cadence, Google added speech disfluencies (the verbal ticks, “ums,” “uhs,” and “mm-hmms”) and latency or pauses that naturally occur when people are speaking. The result is a perfectly human voice produced entirely by a computer. The second call demonstration, where a male-voiced Duplex tried to make restaurant reservations, was even more remarkable. The human call participant didn’t entirely understand Duplex’s verbal requests and then told Duplex that, for the number of people it wanted to bring to the restaurant, they didn’t need a reservation. Duplex handled all this without missing a beat. “The amazing thing is that the assistant can actually understand the nuances of conversation,” said Pichai during the keynote. That ability comes by way of neural network technology and intensive machine learning, For as accomplished as Duplex is in making hair appointments and restaurant reservations, it might stumble in deeper or more abstract conversations. In a blog post on Duplex development, Google engineers explained that they constrained Duplex’s training to “closed domains” or well-defined topics (like dinner reservations and hair appointments) This gave them the ability to perform intense exploration of the topics and focus training. Duplex was guided during training within the domain by “experienced operators” who could keep track of mistakes and worked with engineers to improve responses. In short, this means that while Duplex has your hair and dining-out options covered, it could stumble in movie reservations and negotiations with your cable provider. Even so, Duplex fooled two humans. I heard no hesitation or confusion. In the hair salon call, there was no indication that the salon worker thought something was amiss. She wanted to help this young woman make an appointment. What will she think when she learns she was duped by Duplex? Obviously, Duplex’s conversations were also short, each lasting less than a minute, putting them well-short of the Turing Test benchmark. I would’ve enjoyed hearing the conversations devolve as they extended a few minutes or more. I’m sure Duplex will soon tackle more domains and longer conversations, and it will someday pass the Turing Test. It’s only a matter of time before Duplex is handling other mundane or difficult calls for us, like calling our parents with our own voices (see Wavenet technology). 
Eventually, we’ll have our Duplex voices call each other, handling pleasantries and making plans, which Google Assistant can then drop in our Google Calendar. But that’s the future. For now, Duplex’s performance stands as a powerful proof of concept for our long-imagined future of conversational AI’s capable of helping, entertaining and engaging with us. It’s the first major step on the path to the AI depicted in the movie Her where Joaquin Phoenix starred as a man who falls in love with his chatty voice assistant played by the disembodied voice of Scarlett Johansson. So, no, Duplex didn’t pass the Turing test, but I do wonder what Alan Turing would think of it. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Tech expert, journalist, social media commentator, amateur cartoonist and robotics fan. " Sophia Arakelyan,7,4,https://buzzrobot.com/from-ballerina-to-ai-researcher-part-i-46fce67f809b?source=---------2----------------,From Ballerina to AI Researcher: Part I – buZZrobot,"Last year, I published the article “From Ballerina to AI writer” where I described how I embraced the technical part of AI without a technical background. But having love and passion for AI, I educated myself and was able to build a neural net classifier and do projects in Deep RL. Recently, I’ve become a participant in the OpenAI Scholarship Program (OpenAI is a non-profit that gathers top AI researchers to ensure the safety of AI to benefit humanity). Every week for the next three months I’ll publish blog posts sharing my story of transformation from a person dedicated to 15 years of professional dancing and then writing about tech and AI to actually conducting AI research. Finding your true calling — the key component of happiness My primary goal with the series of blog posts “From Ballerina to AI researcher” is to show that it’s never too late to embrace a new field, start over again, and find your true calling. Finding work you love is one of the most important components of happiness - — something that you do every day and invest your time in to grow; that makes you feel fulfilled, gives you energy; something that is a refuge for your soul. Great things never come easy. We have to be able to fight to make great things happen. But you can’t fight for something you don’t believe in, especially if you don’t feel like it’s really important for you and humanity. Finding that thing is a real challenge. I feel lucky that I found my true passion — AI. To me, the technology itself and the AI community — researchers, scientists, people who dedicate their lives to building the most powerful technology of all time with the mission to benefit humanity and make it safe for us — is a great source of energy. The structure of the blog post series Today, I’m giving an overall intro of what I’m going to cover in my “From Ballerina to AI Researcher” series. I’ll dedicate the sequence of blog posts during the OpenAI Scholars program to several aspects of AI technology. I’ll cover those areas that concern me a lot, like AI and automation, bias in ML, dual use of AI, etc. Also, the structure of my posts will include some insights on what I’m working on right now (the final technical project will be available by the end of August and will be open-sourced). I feel very lucky to have Alec Radford, an experienced researcher, as my mentor who guides me in the NLP and NLU research area. 
First week of my scholarship I’ve dedicated my first week within the program to learning about the Transformer architecture, which performs much better on sequential data than RNNs and LSTMs. The novelty of the architecture is its multi-head self-attention mechanism. According to the original paper, experiments with the Transformer on two machine translation tasks showed the model to be superior in quality while being more parallelizable and requiring significantly less time to train. More concretely, when an RNN or CNN takes a sequence as input, it goes through the sentence word by word, which is a huge obstacle to parallelizing the process (and makes models take longer to train). Moreover, if sequences are too long, the model tends to forget the content of distant positions in the sequence or mixes it with the following positions’ content — this is the fundamental problem in dealing with sequential data. The Transformer architecture reduces this problem thanks to the multi-head self-attention mechanism (a minimal code sketch of self-attention appears a little further down). I dug into RNN and LSTM models to catch up on the background. To that end, I found Andrew Ng’s course on Deep Learning, along with the original papers, extremely useful. To develop insights regarding the Transformer, I went through the following resources: the video by Łukasz Kaiser from Google Brain, one of the model’s creators, and a blog post with very well elaborated content about the model; I also ran the tensor2tensor code and the PyTorch implementation from this paper to “feel” the difference between the TF and PyTorch frameworks. Overall, the goal within the program is to develop a deep comprehension of the NLU research area (its challenges and the current state of the art) and to formulate and test hypotheses that tackle the most important problems of the field. I’ll share more on what I’m working on in my future articles. Meanwhile, if you have questions/feedback, please leave a comment. If you want to learn more about me, here are my Facebook and Twitter accounts. I’d appreciate your feedback on my posts, such as which topics are most interesting to you and deserve further coverage. Former ballerina turned AI writer. Fan of sci-fi, astrophysics. Consciousness is the key. Founder of buZZrobot.com. The publication aims to cover practical aspects of AI technology and use cases, along with interviews with notable people in the AI field. " Matt Schlicht,5K,11,https://chatbotsmagazine.com/the-complete-beginner-s-guide-to-chatbots-8280b7b906ca?source=tag_archive---------3----------------,The Complete Beginner’s Guide To Chatbots – Chatbots Magazine,"What are chatbots? Why are they such a big opportunity? How do they work? How can I build one? How can I meet other people interested in chatbots? These are the questions we’re going to answer for you right now. Ready? Let’s do this. (Do you work in ecommerce? Stop reading and click here, we made something for you.) (p.s. here is where I believe the future of bots is headed, you will probably disagree with me at first.) (p.p.s. My newest guide about conversational commerce is up, I think you’ll find it super interesting.) A chatbot is a service, powered by rules and sometimes artificial intelligence, that you interact with via a chat interface. The service could be any number of things, ranging from functional to fun, and it could live in any major chat product (Facebook Messenger, Slack, Telegram, Text Messages, etc.). 
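Picking up the self-attention idea from the scholarship post above: here is a minimal sketch of scaled dot-product self-attention in PyTorch. The tensor shapes, toy dimensions, and random projection matrices are illustrative assumptions, not code from the original paper or from tensor2tensor.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a batch of sequences.

    x: (batch, seq_len, d_model) token embeddings
    w_q, w_k, w_v: (d_model, d_k) projections for queries, keys and values
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.size(-1)
    # Every position attends to every other position in one matrix multiply,
    # which is why this is far easier to parallelize than a step-by-step RNN.
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # (batch, seq_len, seq_len)
    weights = F.softmax(scores, dim=-1)             # attention weights per position
    return weights @ v                              # (batch, seq_len, d_k)

# Toy usage: a batch of 2 "sentences", 5 tokens each, 16-dimensional embeddings.
torch.manual_seed(0)
x = torch.randn(2, 5, 16)
w_q, w_k, w_v = (torch.randn(16, 8) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)       # torch.Size([2, 5, 8])
```

In the full Transformer, several such attention "heads" with different projection matrices run in parallel and their outputs are concatenated, which is what "multi-head" refers to.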
If you haven’t wrapped your head around it yet, don’t worry. Here’s an example to help you visualize a chatbot. If you wanted to buy shoes from Nordstrom online, you would go to their website, look around until you find the shoes you wanted, and then you would purchase them. If Nordstrom makes a bot, which I’m sure they will, you would simply be able to message Nordstrom on Facebook. It would ask you what you’re looking for and you would simply... tell it. Instead of browsing a website, you will have a conversation with the Nordstrom bot, mirroring the type of experience you would get when you go into the retail store. Watch this video from Facebook’s recent F8 conference (where they make their major announcements). At the 7:30 mark, David Marcus, the Vice President of Messaging Products at Facebook, explains what it looks like to buy shoes in a Facebook Messenger bot. Buying shoes isn’t the only thing chatbots can be used for. Here are a couple of other examples: See? With bots, the possibilities are endless. You can build anything imaginable, and I encourage you to do just that. But why make a bot? Sure, it looks cool, it’s using some super advanced technology, but why should someone spend their time and energy on it? It’s a huge opportunity. HUGE. Scroll down and I’ll explain. You are probably wondering “Why does anyone care about chatbots? They look like simple text based services... what’s the big deal?” Great question. I’ll tell you why people care about chatbots. It’s because for the first time ever people are using messenger apps more than they are using social networks. Let that sink in for a second. People are using messenger apps more than they are using social networks. So, logically, if you want to build a business online, you want to build where the people are. That place is now inside messenger apps. This is why chatbots are such a big deal. It’s potentially a huge business opportunity for anyone willing to jump headfirst and build something people want. But, how do these bots work? How do they know how to talk to people and answer questions? Isn’t that artificial intelligence and isn’t that insanely hard to do? Yes, you are correct, it is artificial intelligence, but it’s something that you can totally do yourself. Let me explain. There are two types of chatbots, one functions based on a set of rules, and the other more advanced version uses machine learning. What does this mean? Chatbot that functions based on rules: Chatbot that functions using machine learning: Bots are created with a purpose. A store will likely want to create a bot that helps you purchase something, where someone like Comcast might create a bot that can answer customer support questions. You start to interact with a chatbot by sending it a message. Click here to try sending a message to the CNN chatbot on Facebook. So, if these bots use artificial intelligence to make them work well... isn’t that really hard to do? Don’t I need to be an expert at artificial intelligence to be able to build something that has artificial intelligence? Short answer? No, you don’t have to be an expert at artificial intelligence to create an awesome chatbot that has artificial intelligence. Just make sure to not over promise on your application’s abilities. If you can’t make the product good with artificial intelligence right now, it might be best to not put it in yet. 
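To make the first of those two types concrete, here is a minimal sketch of a rule-based bot in Python. The keyword patterns and canned replies are invented for a hypothetical shoe-shopping bot, not taken from any real product.

```python
import re

# A tiny rule-based bot: canned responses triggered by keyword patterns.
# The rules and replies below are made up for illustration.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hi! What are you shopping for today?"),
    (re.compile(r"\bshoes?\b", re.I),         "Great - what size and color are you after?"),
    (re.compile(r"\b(order|buy)\b", re.I),    "I can start an order for you. Which item?"),
]
FALLBACK = "Sorry, I didn't catch that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the first canned response whose pattern matches the message."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK

print(reply("Hey there"))              # greeting rule fires
print(reply("I want running shoes"))   # shoe rule fires
print(reply("What's the weather?"))    # nothing matches, so the fallback fires
```

A machine-learning bot would replace these hand-written rules with a model trained on example conversations, which is where the artificial intelligence discussed above comes in.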
However, over the past decade quite a few advancements have been made in the area of artificial intelligence, so much so, in fact, that anyone who knows how to code can incorporate some level of artificial intelligence into their products. How do you build artificial intelligence into your bot? Don’t worry, I’ve got you covered; I’ll tell you how to do it in the next section of this post. Building a chatbot can sound daunting, but it’s totally doable. You’ll be creating an artificial intelligence powered chatting machine in no time (or, of course, you can always build a basic chatbot that doesn’t have a fancy AI brain and strictly follows rules). You will need to figure out what problem you are going to solve with your bot, choose which platform your bot will live on (Facebook, Slack, etc.), set up a server to run your bot from, and choose which service you will use to build your bot. Here are a ton of resources to get you started. Platform documentation: Other Resources: Don’t want to build your own? Now that you’ve got your chatbot and artificial intelligence resources, maybe it’s time you met other people who are also interested in chatbots. Chatbots have been around for decades, but because of the recent advancements in artificial intelligence and machine learning, there is a big opportunity for people to create bots that are better, faster, and stronger. If you’re reading this, you probably fall into one of these categories: Wouldn’t it be awesome if you had a place to meet, learn, and share information with other people interested in chatbots? Yeah, we thought so too. That’s why I created a forum called “Chatbot News”, and it has quickly become the largest community related to chatbots. The members of the Chatbots group are investors who manage well over $2 billion in capital, employees at Facebook, Instagram, Fitbit, Nike, and Y Combinator companies, and hackers from around the world. We would love it if you joined. Click here to request an invite to the private chatbots community. I have also created the Silicon Valley Chatbots Meetup; register here to be notified when we schedule our first event. CEO of Octane AI, Founder of Chatbots Magazine, YC Alum, Forbes 30 Under 30, product at Ustream for 4 years (sold for $130mil), did digital for Lil Wayne. Chatbots, AI, NLP, Facebook Messenger, Slack, Telegram, and more. " Gil Fewster,3.3K,5,https://medium.freecodecamp.org/the-mind-blowing-ai-announcement-from-google-that-you-probably-missed-2ffd31334805?source=tag_archive---------4----------------,The mind-blowing AI announcement from Google that you probably missed.,"Disclaimer: I’m not an expert in neural networks or machine learning. Since originally writing this article, many people with far more expertise in these fields than myself have indicated that, while impressive, what Google have achieved is evolutionary, not revolutionary. At the very least, it’s fair to say that I’m guilty of anthropomorphising in parts of the text. I’ve left the article’s content unchanged, because I think it’s interesting to compare the gut reaction I had with the subsequent comments of experts in the field. I strongly encourage readers to browse the comments after reading the article for some perspectives more sober and informed than my own. In the closing weeks of 2016, Google published an article that quietly sailed under most people’s radars. 
Which is a shame, because it may just be the most astonishing article about machine learning that I read last year. Don’t feel bad if you missed it. Not only was the article competing with the pre-Christmas rush that most of us were navigating — it was also tucked away on Google’s Research Blog, beneath the geektastic headline Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System. This doesn’t exactly scream must read, does it? Especially when you’ve got projects to wind up, gifts to buy, and family feuds to be resolved — all while the advent calendar relentlessly counts down the days until Christmas like some kind of chocolate-filled Yuletide doomsday clock. Luckily, I’m here to bring you up to speed. Here’s the deal. Up until September of last year, Google Translate used phrase-based translation. It basically did the same thing you and I do when we look up key words and phrases in our Lonely Planet language guides. It’s effective enough, and blisteringly fast compared to awkwardly thumbing your way through a bunch of pages looking for the French equivalent of “please bring me all of your cheese and don’t stop until I fall over.” But it lacks nuance. Phrase-based translation is a blunt instrument. It does the job well enough to get by. But mapping roughly equivalent words and phrases without an understanding of linguistic structures can only produce crude results. This approach is also limited by the extent of an available vocabulary. Phrase-based translation has no capacity to make educated guesses at words it doesn’t recognize, and can’t learn from new input. All that changed in September, when Google gave their translation tool a new engine: the Google Neural Machine Translation system (GNMT). This new engine comes fully loaded with all the hot 2016 buzzwords, like neural network and machine learning. The short version is that Google Translate got smart. It developed the ability to learn from the people who used it. It learned how to make educated guesses about the content, tone, and meaning of phrases based on the context of other words and phrases around them. And — here’s the bit that should make your brain explode — it got creative. Google Translate invented its own language to help it translate more effectively. What’s more, nobody told it to. It didn’t develop a language (or interlingua, as Google call it) because it was coded to. It developed a new language because the software determined over time that this was the most efficient way to solve the problem of translation. Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks. (I’ve added a correction/retraction of this paragraph in the notes) To understand what’s going on, we need to understand what zero-shot translation capability is. Here’s Google’s Mike Schuster, Nikhil Thorat, and Melvin Johnson from the original blog post: Here you can see an advantage of Google’s new neural machine over the old phrase-based approach. The GNMT is able to learn how to translate between two languages without being explicitly taught. This wouldn’t be possible in a phrase-based model, where translation is dependent upon an explicit dictionary to map words and phrases between each pair of languages being translated. 
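To see why that dictionary-style approach is such a blunt instrument, here is a toy sketch: translation by lookup, with no capacity to guess at anything outside its phrase table. The tiny English-to-French entries are invented for illustration and bear no resemblance to the statistical phrase tables a real system learns.

```python
# A toy "phrase table": translation by lookup, with no understanding of structure.
PHRASE_TABLE = {
    "please": "s'il vous plaît",
    "bring me": "apportez-moi",
    "the cheese": "le fromage",
}

def phrase_based_translate(sentence: str) -> str:
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Greedily try the longest phrase starting at position i (here, at most 2 words).
        for span in (2, 1):
            chunk = " ".join(words[i:i + span])
            if chunk in PHRASE_TABLE:
                out.append(PHRASE_TABLE[chunk])
                i += span
                break
        else:
            out.append(f"<unknown:{words[i]}>")  # no ability to guess at new words
            i += 1
    return " ".join(out)

print(phrase_based_translate("Please bring me the cheese"))
print(phrase_based_translate("Please bring me the wine"))  # the vocabulary gap shows up immediately
```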
And this leads the Google engineers onto that truly astonishing discovery of creation: So there you have it. In the last weeks of 2016, as journos around the world started penning their “was this the worst year in living memory” thinkpieces, Google engineers were quietly documenting a genuinely astonishing breakthrough in software engineering and linguistics. I just thought maybe you’d want to know. Ok, to really understand what’s going on we probably need multiple computer science and linguistics degrees. I’m just barely scraping the surface here. If you’ve got time to get a few degrees (or if you’ve already got them) please drop me a line and explain it all to me. Slowly. Update 1: in my excitement, it’s fair to say that I’ve exaggerated the idea of this as an ‘intelligent’ system — at least so far as we would think about human intelligence and decision making. Make sure you read Chris McDonald’s comment after the article for a more sober perspective. Update 2: Nafrondel’s excellent, detailed reply is also a must read for an expert explanation of how neural networks function. A tinkerer. Our community publishes stories worth reading on development, design, and data science. " Adam Geitgey,10.4K,15,https://medium.com/@ageitgey/machine-learning-is-fun-part-2-a26a10b68df3?source=tag_archive---------5----------------,Machine Learning is Fun! Part 2 – Adam Geitgey – Medium,"Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in Italiano, Español, Français, Türkçe, Русский, 한국어, Português, فارسی, Tiếng Việt or 普通话. In Part 1, we said that Machine Learning is using generic algorithms to tell you something interesting about your data without writing any code specific to the problem you are solving. (If you haven’t already read Part 1, read it now!). This time, we are going to see one of these generic algorithms do something really cool — create video game levels that look like they were made by humans. We’ll build a neural network, feed it existing Super Mario levels and watch new ones pop out! Just like Part 1, this guide is for anyone who is curious about machine learning but has no idea where to start. The goal is to be accessible to anyone — which means that there are a lot of generalizations and we skip lots of details. But who cares? If this gets anyone more interested in ML, then mission accomplished. Back in Part 1, we created a simple algorithm that estimated the value of a house based on its attributes. Given data about a house like this: We ended up with this simple estimation function: In other words, we estimated the value of the house by multiplying each of its attributes by a weight. Then we just added those numbers up to get the house’s value. Instead of using code, let’s represent that same function as a simple diagram: However, this algorithm only works for simple problems where the result has a linear relationship with the input. What if the truth behind house prices isn’t so simple? For example, maybe the neighborhood matters a lot for big houses and small houses but doesn’t matter at all for medium-sized houses. How could we capture that kind of complicated detail in our model? To be more clever, we could run this algorithm multiple times with different sets of weights that each capture different edge cases: Now we have four different price estimates. 
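Here is a minimal NumPy sketch of those four estimates. The house attributes and all of the weight values are made-up numbers, chosen only to illustrate the mechanics.

```python
import numpy as np

# House attributes: [square feet, number of bedrooms, miles to downtown]
house = np.array([2000.0, 3.0, 5.0])

# Four different sets of weights, each one a separate "attempt" at an estimate,
# invented here to stand in for weights tuned to different edge cases.
attempts = np.array([
    [110.0,  5000.0, -2000.0],
    [ 90.0, 12000.0,  -500.0],
    [130.0,  1000.0, -4000.0],
    [100.0,  8000.0, -1000.0],
])

# Each estimate is just attribute-times-weight, summed up.
four_estimates = attempts @ house
print(four_estimates)   # four different price guesses for the same house

# Next step (described in the following paragraph): feed these four numbers
# through one more weighted sum to get a single final estimate.
```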
Let’s combine those four price estimates into one final estimate. We’ll run them through the same algorithm again (but using another set of weights)! Our new Super Answer combines the estimates from our four different attempts to solve the problem. Because of this, it can model more cases than we could capture in one simple model. Let’s combine our four attempts to guess into one big diagram: This is a neural network! Each node knows how to take in a set of inputs, apply weights to them, and calculate an output value. By chaining together lots of these nodes, we can model complex functions. There’s a lot that I’m skipping over to keep this brief (including feature scaling and the activation function), but the most important part is that these basic ideas click: It’s just like LEGO! We can’t model much with one single LEGO block, but we can model anything if we have enough basic LEGO blocks to stick together: The neural network we’ve seen always returns the same answer when you give it the same inputs. It has no memory. In programming terms, it’s a stateless algorithm. In many cases (like estimating the price of house), that’s exactly what you want. But the one thing this kind of model can’t do is respond to patterns in data over time. Imagine I handed you a keyboard and asked you to write a story. But before you start, my job is to guess the very first letter that you will type. What letter should I guess? I can use my knowledge of English to increase my odds of guessing the right letter. For example, you will probably type a letter that is common at the beginning of words. If I looked at stories you wrote in the past, I could narrow it down further based on the words you usually use at the beginning of your stories. Once I had all that data, I could use it to build a neural network to model how likely it is that you would start with any given letter. Our model might look like this: But let’s make the problem harder. Let’s say I need to guess the next letter you are going to type at any point in your story. This is a much more interesting problem. Let’s use the first few words of Ernest Hemingway’s The Sun Also Rises as an example: What letter is going to come next? You probably guessed ’n’ — the word is probably going to be boxing. We know this based on the letters we’ve already seen in the sentence and our knowledge of common words in English. Also, the word ‘middleweight’ gives us an extra clue that we are talking about boxing. In other words, it’s easy to guess the next letter if we take into account the sequence of letters that came right before it and combine that with our knowledge of the rules of English. To solve this problem with a neural network, we need to add state to our model. Each time we ask our neural network for an answer, we also save a set of our intermediate calculations and re-use them the next time as part of our input. That way, our model will adjust its predictions based on the input that it has seen recently. Keeping track of state in our model makes it possible to not just predict the most likely first letter in the story, but to predict the most likely next letter given all previous letters. This is the basic idea of a Recurrent Neural Network. We are updating the network each time we use it. This allows it to update its predictions based on what it saw most recently. It can even model patterns over time as long as we give it enough of a memory. Predicting the next letter in a story might seem pretty useless. What’s the point? 
One cool use might be auto-predict for a mobile phone keyboard: But what if we took this idea to the extreme? What if we asked the model to predict the next most likely character over and over — forever? We’d be asking it to write a complete story for us! We saw how we could guess the next letter in Hemingway’s sentence. Let’s try generating a whole story in the style of Hemingway. To do this, we are going to use the Recurrent Neural Network implementation that Andrej Karpathy wrote. Andrej is a Deep-Learning researcher at Stanford and he wrote an excellent introduction to generating text with RNNs. You can view all the code for the model on GitHub. We’ll create our model from the complete text of The Sun Also Rises — 362,239 characters using 84 unique letters (including punctuation, uppercase/lowercase, etc). This data set is actually really small compared to typical real-world applications. To generate a really good model of Hemingway’s style, it would be much better to have several times as much sample text. But this is good enough to play around with as an example (a compressed code sketch of this kind of character-level setup appears a bit further down). As we just start to train the RNN, it’s not very good at predicting letters. Here’s what it generates after 100 loops of training: You can see that it has figured out that sometimes words have spaces between them, but that’s about it. After about 1000 iterations, things are looking more promising: The model has started to identify the patterns in basic sentence structure. It’s adding periods at the ends of sentences and even quoting dialog. A few words are recognizable, but there’s also still a lot of nonsense. But after several thousand more training iterations, it looks pretty good: At this point, the algorithm has captured the basic pattern of Hemingway’s short, direct dialog. A few sentences even sort of make sense. Compare that with some real text from the book: Even by only looking for patterns one character at a time, our algorithm has reproduced plausible-looking prose with proper formatting. That is kind of amazing! We don’t have to generate text completely from scratch, either. We can seed the algorithm by supplying the first few letters and just let it find the next few letters. For fun, let’s make a fake book cover for our imaginary book by generating a new author name and a new title using the seed text of “Er”, “He”, and “The S”: Not bad! But the really mind-blowing part is that this algorithm can figure out patterns in any sequence of data. It can easily generate real-looking recipes or fake Obama speeches. But why limit ourselves to human language? We can apply this same idea to any kind of sequential data that has a pattern. In 2015, Nintendo released Super Mario Maker™ for the Wii U gaming system. This game lets you draw out your own Super Mario Brothers levels on the gamepad and then upload them to the internet so your friends can play through them. You can include all the classic power-ups and enemies from the original Mario games in your levels. It’s like a virtual LEGO set for people who grew up playing Super Mario Brothers. Can we use the same model that generated fake Hemingway text to generate fake Super Mario Brothers levels? First, we need a data set for training our model. Let’s take all the outdoor levels from the original Super Mario Brothers game released in 1985: This game has 32 levels and about 70% of them have the same outdoor style. So we’ll stick to those. 
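Before the level data gets extracted, here is a compressed sketch of the character-level setup described above: a small recurrent model trained to predict the next character, then sampled one character at a time. It is written in PyTorch rather than Karpathy's original Lua/Torch char-rnn, the corpus is a stand-in string rather than the novel, and the hyperparameters are arbitrary.

```python
import torch
import torch.nn as nn

text = "The sun also rises over the mountains of Spain. " * 200   # stand-in corpus
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state            # logits over the next character

model = CharRNN(len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
data = torch.tensor([stoi[c] for c in text])
seq_len = 64

for step in range(500):                       # a few hundred training loops
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)              # a chunk of text
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)      # the same chunk shifted by one
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.reshape(-1, len(chars)), y.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Sampling: seed with one character, then keep feeding the model its own guess.
idx, state, generated = torch.tensor([[stoi["T"]]]), None, "T"
for _ in range(200):
    logits, state = model(idx, state)
    probs = torch.softmax(logits[0, -1], dim=-1)
    idx = torch.multinomial(probs, 1).unsqueeze(0)
    generated += chars[idx.item()]
print(generated)
```

Carrying the hidden state from one sampling step to the next is the "memory" the article describes: each prediction depends on everything the model has already emitted.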
To get the designs for each level, I took an original copy of the game and wrote a program to pull the level designs out of the game’s memory. Super Mario Bros. is a 30-year-old game and there are lots of resources online that help you figure out how the levels were stored in the game’s memory. Extracting level data from an old video game is a fun programming exercise that you should try sometime. Here’s the first level from the game (which you probably remember if you ever played it): If we look closely, we can see the level is made of a simple grid of objects: We could just as easily represent this grid as a sequence of characters with one character representing each object: We’ve replaced each object in the level with a letter: ...and so on, using a different letter for each different kind of object in the level. I ended up with text files that looked like this: Looking at the text file, you can see that Mario levels don’t really have much of a pattern if you read them line-by-line: The patterns in a level really emerge when you think of the level as a series of columns: So in order for the algorithm to find the patterns in our data, we need to feed the data in column-by-column. Figuring out the most effective representation of your input data (called feature selection) is one of the keys of using machine learning algorithms well. To train the model, I needed to rotate my text files by 90 degrees. This made sure the characters were fed into the model in an order where a pattern would more easily show up: Just like we saw when creating the model of Hemingway’s prose, a model improves as we train it. After a little training, our model is generating junk: It sort of has an idea that ‘-’s and ‘=’s should show up a lot, but that’s about it. It hasn’t figured out the pattern yet. After several thousand iterations, it’s starting to look like something: The model has almost figured out that each line should be the same length. It has even started to figure out some of the logic of Mario: The pipes in mario are always two blocks wide and at least two blocks high, so the “P”s in the data should appear in 2x2 clusters. That’s pretty cool! With a lot more training, the model gets to the point where it generates perfectly valid data: Let’s sample an entire level’s worth of data from our model and rotate it back horizontal: This data looks great! There are several awesome things to notice: Finally, let’s take this level and recreate it in Super Mario Maker: Play it yourself! If you have Super Mario Maker, you can play this level by bookmarking it online or by looking it up using level code 4AC9–0000–0157-F3C3. The recurrent neural network algorithm we used to train our model is the same kind of algorithm used by real-world companies to solve hard problems like speech detection and language translation. What makes our model a ‘toy’ instead of cutting-edge is that our model is generated from very little data. There just aren’t enough levels in the original Super Mario Brothers game to provide enough data for a really good model. If we could get access to the hundreds of thousands of user-created Super Mario Maker levels that Nintendo has, we could make an amazing model. But we can’t — because Nintendo won’t let us have them. Big companies don’t give away their data for free. As machine learning becomes more important in more industries, the difference between a good program and a bad program will be how much data you have to train your models. 
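As an aside on the representation step described above (the 90-degree rotation that feeds the level in column by column), here is a tiny sketch using a made-up three-row snippet rather than real data extracted from the game.

```python
# A made-up level snippet, one character per object:
# '-' = empty sky, '=' = ground, 'P' = pipe.
level_rows = [
    "----------PP---",
    "----------PP---",
    "===============",
]

# Reading row by row hides the structure; the patterns live in the columns.
# "Rotating" the text 90 degrees turns each column into one line of training data.
columns = ["".join(row[i] for row in level_rows) for i in range(len(level_rows[0]))]

for col in columns:
    print(col)   # mostly '--=' (sky over ground), with 'PP=' where the pipe stands
```

Each printed string becomes one column-wise sequence for the model, which is the ordering that lets patterns like 2x2 pipe clusters show up.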
That’s why companies like Google and Facebook need your data so badly! For example, Google recently open sourced TensorFlow, its software toolkit for building large-scale machine learning applications. It was a pretty big deal that Google gave away such important, capable technology for free. This is the same stuff that powers Google Translate. But without Google’s massive trove of data in every language, you can’t create a competitor to Google Translate. Data is what gives Google its edge. Think about that the next time you open up your Google Maps Location History or Facebook Location History and notice that it stores every place you’ve ever been. In machine learning, there’s never a single way to solve a problem. You have limitless options when deciding how to pre-process your data and which algorithms to use. Often combining multiple approaches will give you better results than any single approach. Readers have sent me links to other interesting approaches to generating Super Mario levels: If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun Part 3! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it. " David Venturi,10.6K,20,https://medium.freecodecamp.org/every-single-machine-learning-course-on-the-internet-ranked-by-your-reviews-3c4a7b8026c0?source=tag_archive---------6----------------,"Every single Machine Learning course on the internet, ranked by your reviews","A year and a half ago, I dropped out of one of the best computer science programs in Canada. I started creating my own data science master’s program using online resources. I realized that I could learn everything I needed through edX, Coursera, and Udacity instead. And I could learn it faster, more efficiently, and for a fraction of the cost. I’m almost finished now. I’ve taken many data science-related courses and audited portions of many more. I know the options out there, and what skills are needed for learners preparing for a data analyst or data scientist role. So I started creating a review-driven guide that recommends the best courses for each subject within data science. For the first guide in the series, I recommended a few coding classes for the beginner data scientist. Then it was statistics and probability classes. Then introductions to data science. Also, data visualization. For this guide, I spent a dozen hours trying to identify every online machine learning course offered as of May 2017, extracting key bits of information from their syllabi and reviews, and compiling their ratings. My end goal was to identify the three best courses available and present them to you, below. For this task, I turned to none other than the open source Class Central community, and its database of thousands of course ratings and reviews. Since 2011, Class Central founder Dhawal Shah has kept a closer eye on online courses than arguably anyone else in the world. Dhawal personally helped me assemble this list of resources. Each course must fit three criteria: We believe we covered every notable course that fits the above criteria. 
Since there are seemingly hundreds of courses on Udemy, we chose to consider the most-reviewed and highest-rated ones only. There’s always a chance that we missed something, though. So please let us know in the comments section if we left a good course out. We compiled average ratings and number of reviews from Class Central and other review sites to calculate a weighted average rating for each course. We read text reviews and used this feedback to supplement the numerical ratings. We made subjective syllabus judgment calls based on three factors: A popular definition originates from Arthur Samuel in 1959: machine learning is a subfield of computer science that gives “computers the ability to learn without being explicitly programmed.” In practice, this means developing computer programs that can make predictions based on data. Just as humans can learn from experience, so can computers, where data = experience. A machine learning workflow is the process required for carrying out a machine learning project. Though individual projects can differ, most workflows share several common tasks: problem evaluation, data exploration, data preprocessing, model training/testing/deployment, etc. Below you’ll find helpful visualization of these core steps: The ideal course introduces the entire process and provides interactive examples, assignments, and/or quizzes where students can perform each task themselves. First off, let’s define deep learning. Here is a succinct description: As would be expected, portions of some of the machine learning courses contain deep learning content. I chose not to include deep learning-only courses, however. If you are interested in deep learning specifically, we’ve got you covered with the following article: My top three recommendations from that list would be: Several courses listed below ask students to have prior programming, calculus, linear algebra, and statistics experience. These prerequisites are understandable given that machine learning is an advanced discipline. Missing a few subjects? Good news! Some of this experience can be acquired through our recommendations in the first two articles (programming, statistics) of this Data Science Career Guide. Several top-ranked courses below also provide gentle calculus and linear algebra refreshers and highlight the aspects most relevant to machine learning for those less familiar. Stanford University’s Machine Learning on Coursera is the clear current winner in terms of ratings, reviews, and syllabus fit. Taught by the famous Andrew Ng, Google Brain founder and former chief scientist at Baidu, this was the class that sparked the founding of Coursera. It has a 4.7-star weighted average rating over 422 reviews. Released in 2011, it covers all aspects of the machine learning workflow. Though it has a smaller scope than the original Stanford class upon which it is based, it still manages to cover a large number of techniques and algorithms. The estimated timeline is eleven weeks, with two weeks dedicated to neural networks and deep learning. Free and paid options are available. Ng is a dynamic yet gentle instructor with a palpable experience. He inspires confidence, especially when sharing practical implementation tips and warnings about common pitfalls. A linear algebra refresher is provided and Ng highlights the aspects of calculus most relevant to machine learning. Evaluation is automatic and is done via multiple choice quizzes that follow each lesson and programming assignments. 
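(A quick aside on the ranking method mentioned earlier.) One common way to compute a review-count-weighted average rating is to weight each site's average by the number of reviews behind it; the exact formula used for the guide isn't spelled out, so treat this as an assumption, and the sample numbers below are invented.

```python
def weighted_average(site_ratings):
    """Combine (average_rating, review_count) pairs from several review sites,
    weighting each site's average by how many reviews it is based on."""
    total_reviews = sum(count for _, count in site_ratings)
    if total_reviews == 0:
        return None
    weighted_sum = sum(avg * count for avg, count in site_ratings)
    return weighted_sum / total_reviews

# Invented example: one course rated on three different review sites.
ratings = [(4.8, 250), (4.6, 120), (4.9, 30)]
print(f"{weighted_average(ratings):.2f} stars over {sum(c for _, c in ratings)} reviews")
```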
The assignments (there are eight of them) can be completed in MATLAB or Octave, which is an open-source version of MATLAB. Ng explains his language choice: Though Python and R are likely more compelling choices in 2017 with the increased popularity of those languages, reviewers note that that shouldn’t stop you from taking the course. A few prominent reviewers noted the following: Columbia University’s Machine Learning is a relatively new offering that is part of their Artificial Intelligence MicroMasters on edX. Though it is newer and doesn’t have a large number of reviews, the ones that it does have are exceptionally strong. Professor John Paisley is noted as brilliant, clear, and clever. It has a 4.8-star weighted average rating over 10 reviews. The course also covers all aspects of the machine learning workflow and more algorithms than the above Stanford offering. Columbia’s is a more advanced introduction, with reviewers noting that students should be comfortable with the recommended prerequisites (calculus, linear algebra, statistics, probability, and coding). Quizzes (11), programming assignments (4), and a final exam are the modes of evaluation. Students can use either Python, Octave, or MATLAB to complete the assignments. The course’s total estimated timeline is eight to ten hours per week over twelve weeks. It is free with a verified certificate available for purchase. Below are a few of the aforementioned sparkling reviews: Machine Learning A-ZTM on Udemy is an impressively detailed offering that provides instruction in both Python and R, which is rare and can’t be said for any of the other top courses. It has a 4.5-star weighted average rating over 8,119 reviews, which makes it the most reviewed course of the ones considered. It covers the entire machine learning workflow and an almost ridiculous (in a good way) number of algorithms through 40.5 hours of on-demand video. The course takes a more applied approach and is lighter math-wise than the above two courses. Each section starts with an “intuition” video from Eremenko that summarizes the underlying theory of the concept being taught. de Ponteves then walks through implementation with separate videos for both Python and R. As a “bonus,” the course includes Python and R code templates for students to download and use on their own projects. There are quizzes and homework challenges, though these aren’t the strong points of the course. Eremenko and the SuperDataScience team are revered for their ability to “make the complex simple.” Also, the prerequisites listed are “just some high school mathematics,” so this course might be a better option for those daunted by the Stanford and Columbia offerings. A few prominent reviewers noted the following: Our #1 pick had a weighted average rating of 4.7 out of 5 stars over 422 reviews. Let’s look at the other alternatives, sorted by descending rating. A reminder that deep learning-only courses are not included in this guide — you can find those here. The Analytics Edge (Massachusetts Institute of Technology/edX): More focused on analytics in general, though it does cover several machine learning topics. Uses R. Strong narrative that leverages familiar real-world examples. Challenging. Ten to fifteen hours per week over twelve weeks. Free with a verified certificate available for purchase. It has a 4.9-star weighted average rating over 214 reviews. 
Python for Data Science and Machine Learning Bootcamp (Jose Portilla/Udemy): Has large chunks of machine learning content, but covers the whole data science process. More of a very detailed intro to Python. Amazing course, though not ideal for the scope of this guide. 21.5 hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.6-star weighted average rating over 3316 reviews. Data Science and Machine Learning Bootcamp with R (Jose Portilla/Udemy): The comments for Portilla’s above course apply here as well, except for R. 17.5 hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.6-star weighted average rating over 1317 reviews. Machine Learning Series (Lazy Programmer Inc./Udemy): Taught by a data scientist/big data engineer/full stack software engineer with an impressive resume, Lazy Programmer currently has a series of 16 machine learning-focused courses on Udemy. In total, the courses have 5000+ ratings and almost all of them have 4.6 stars. A useful course ordering is provided in each individual course’s description. Uses Python. Cost varies depending on Udemy discounts, which are frequent. Machine Learning (Georgia Tech/Udacity): A compilation of what was three separate courses: Supervised, Unsupervised and Reinforcement Learning. Part of Udacity’s Machine Learning Engineer Nanodegree and Georgia Tech’s Online Master’s Degree (OMS). Bite-sized videos, as is Udacity’s style. Friendly professors. Estimated timeline of four months. Free. It has a 4.56-star weighted average rating over 9 reviews. Implementing Predictive Analytics with Spark in Azure HDInsight (Microsoft/edX): Introduces the core concepts of machine learning and a variety of algorithms. Leverages several big data-friendly tools, including Apache Spark, Scala, and Hadoop. Uses both Python and R. Four hours per week over six weeks. Free with a verified certificate available for purchase. It has a 4.5-star weighted average rating over 6 reviews. Data Science and Machine Learning with Python — Hands On! (Frank Kane/Udemy): Uses Python. Kane has nine years of experience at Amazon and IMDb. Nine hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.5-star weighted average rating over 4139 reviews. Scala and Spark for Big Data and Machine Learning (Jose Portilla/Udemy): “Big data” focus, specifically on implementation in Scala and Spark. Ten hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.5-star weighted average rating over 607 reviews. Machine Learning Engineer Nanodegree (Udacity): Udacity’s flagship Machine Learning program, which features a best-in-class project review system and career support. The program is a compilation of several individual Udacity courses, which are free. Co-created by Kaggle. Estimated timeline of six months. Currently costs $199 USD per month with a 50% tuition refund available for those who graduate within 12 months. It has a 4.5-star weighted average rating over 2 reviews. Learning From Data (Introductory Machine Learning) (California Institute of Technology/edX): Enrollment is currently closed on edX, but is also available via CalTech’s independent platform (see below). It has a 4.49-star weighted average rating over 42 reviews. 
Learning From Data (Introductory Machine Learning) (Yaser Abu-Mostafa/California Institute of Technology): “A real Caltech course, not a watered-down version.” Reviews note it is excellent for understanding machine learning theory. The professor, Yaser Abu-Mostafa, is popular among students and also wrote the textbook upon which this course is based. Videos are taped lectures (with lectures slides picture-in-picture) uploaded to YouTube. Homework assignments are .pdf files. The course experience for online students isn’t as polished as the top three recommendations. It has a 4.43-star weighted average rating over 7 reviews. Mining Massive Datasets (Stanford University): Machine learning with a focus on “big data.” Introduces modern distributed file systems and MapReduce. Ten hours per week over seven weeks. Free. It has a 4.4-star weighted average rating over 30 reviews. AWS Machine Learning: A Complete Guide With Python (Chandra Lingam/Udemy): A unique focus on cloud-based machine learning and specifically Amazon Web Services. Uses Python. Nine hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.4-star weighted average rating over 62 reviews. Introduction to Machine Learning & Face Detection in Python (Holczer Balazs/Udemy): Uses Python. Eight hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.4-star weighted average rating over 162 reviews. StatLearning: Statistical Learning (Stanford University): Based on the excellent textbook, “An Introduction to Statistical Learning, with Applications in R” and taught by the professors who wrote it. Reviewers note that the MOOC isn’t as good as the book, citing “thin” exercises and mediocre videos. Five hours per week over nine weeks. Free. It has a 4.35-star weighted average rating over 84 reviews. Machine Learning Specialization (University of Washington/Coursera): Great courses, but last two classes (including the capstone project) were canceled. Reviewers note that this series is more digestable (read: easier for those without strong technical backgrounds) than other top machine learning courses (e.g. Stanford’s or Caltech’s). Be aware that the series is incomplete with recommender systems, deep learning, and a summary missing. Free and paid options available. It has a 4.31-star weighted average rating over 80 reviews. From 0 to 1: Machine Learning, NLP & Python-Cut to the Chase (Loony Corn/Udemy): “A down-to-earth, shy but confident take on machine learning techniques.” Taught by four-person team with decades of industry experience together. Uses Python. Cost varies depending on Udemy discounts, which are frequent. It has a 4.2-star weighted average rating over 494 reviews. Principles of Machine Learning (Microsoft/edX): Uses R, Python, and Microsoft Azure Machine Learning. Part of the Microsoft Professional Program Certificate in Data Science. Three to four hours per week over six weeks. Free with a verified certificate available for purchase. It has a 4.09-star weighted average rating over 11 reviews. Big Data: Statistical Inference and Machine Learning (Queensland University of Technology/FutureLearn): A nice, brief exploratory machine learning course with a focus on big data. Covers a few tools like R, H2O Flow, and WEKA. Only three weeks in duration at a recommended two hours per week, but one reviewer noted that six hours per week would be more appropriate. Free and paid options available. It has a 4-star weighted average rating over 4 reviews. 
Genomic Data Science and Clustering (Bioinformatics V) (University of California, San Diego/Coursera): For those interested in the intersection of computer science and biology and how it represents an important frontier in modern science. Focuses on clustering and dimensionality reduction. Part of UCSD’s Bioinformatics Specialization. Free and paid options available. It has a 4-star weighted average rating over 3 reviews. Intro to Machine Learning (Udacity): Prioritizes topic breadth and practical tools (in Python) over depth and theory. The instructors, Sebastian Thrun and Katie Malone, make this class so fun. Consists of bite-sized videos and quizzes followed by a mini-project for each lesson. Currently part of Udacity’s Data Analyst Nanodegree. Estimated timeline of ten weeks. Free. It has a 3.95-star weighted average rating over 19 reviews. Machine Learning for Data Analysis (Wesleyan University/Coursera): A brief intro machine learning and a few select algorithms. Covers decision trees, random forests, lasso regression, and k-means clustering. Part of Wesleyan’s Data Analysis and Interpretation Specialization. Estimated timeline of four weeks. Free and paid options available. It has a 3.6-star weighted average rating over 5 reviews. Programming with Python for Data Science (Microsoft/edX): Produced by Microsoft in partnership with Coding Dojo. Uses Python. Eight hours per week over six weeks. Free and paid options available. It has a 3.46-star weighted average rating over 37 reviews. Machine Learning for Trading (Georgia Tech/Udacity): Focuses on applying probabilistic machine learning approaches to trading decisions. Uses Python. Part of Udacity’s Machine Learning Engineer Nanodegree and Georgia Tech’s Online Master’s Degree (OMS). Estimated timeline of four months. Free. It has a 3.29-star weighted average rating over 14 reviews. Practical Machine Learning (Johns Hopkins University/Coursera): A brief, practical introduction to a number of machine learning algorithms. Several one/two-star reviews expressing a variety of concerns. Part of JHU’s Data Science Specialization. Four to nine hours per week over four weeks. Free and paid options available. It has a 3.11-star weighted average rating over 37 reviews. Machine Learning for Data Science and Analytics (Columbia University/edX): Introduces a wide range of machine learning topics. Some passionate negative reviews with concerns including content choices, a lack of programming assignments, and uninspiring presentation. Seven to ten hours per week over five weeks. Free with a verified certificate available for purchase. It has a 2.74-star weighted average rating over 36 reviews. Recommender Systems Specialization (University of Minnesota/Coursera): Strong focus one specific type of machine learning — recommender systems. A four course specialization plus a capstone project, which is a case study. Taught using LensKit (an open-source toolkit for recommender systems). Free and paid options available. It has a 2-star weighted average rating over 2 reviews. Machine Learning With Big Data (University of California, San Diego/Coursera): Terrible reviews that highlight poor instruction and evaluation. Some noted it took them mere hours to complete the whole course. Part of UCSD’s Big Data Specialization. Free and paid options available. It has a 1.86-star weighted average rating over 14 reviews. Practical Predictive Analytics: Models and Methods (University of Washington/Coursera): A brief intro to core machine learning concepts. 
One reviewer noted that there was a lack of quizzes and that the assignments were not challenging. Part of UW’s Data Science at Scale Specialization. Six to eight hours per week over four weeks. Free and paid options available. It has a 1.75-star weighted average rating over 4 reviews. The following courses had one or no reviews as of May 2017. Machine Learning for Musicians and Artists (Goldsmiths, University of London/Kadenze): Unique. Students learn algorithms, software tools, and machine learning best practices to make sense of human gesture, musical audio, and other real-time data. Seven sessions in length. Audit (free) and premium ($10 USD per month) options available. It has one 5-star review. Applied Machine Learning in Python (University of Michigan/Coursera): Taught using Python and the scikit learn toolkit. Part of the Applied Data Science with Python Specialization. Scheduled to start May 29th. Free and paid options available. Applied Machine Learning (Microsoft/edX): Taught using various tools, including Python, R, and Microsoft Azure Machine Learning (note: Microsoft produces the course). Includes hands-on labs to reinforce the lecture content. Three to four hours per week over six weeks. Free with a verified certificate available for purchase. Machine Learning with Python (Big Data University): Taught using Python. Targeted towards beginners. Estimated completion time of four hours. Big Data University is affiliated with IBM. Free. Machine Learning with Apache SystemML (Big Data University): Taught using Apache SystemML, which is a declarative style language designed for large-scale machine learning. Estimated completion time of eight hours. Big Data University is affiliated with IBM. Free. Machine Learning for Data Science (University of California, San Diego/edX): Doesn’t launch until January 2018. Programming examples and assignments are in Python, using Jupyter notebooks. Eight hours per week over ten weeks. Free with a verified certificate available for purchase. Introduction to Analytics Modeling (Georgia Tech/edX): The course advertises R as its primary programming tool. Five to ten hours per week over ten weeks. Free with a verified certificate available for purchase. Predictive Analytics: Gaining Insights from Big Data (Queensland University of Technology/FutureLearn): Brief overview of a few algorithms. Uses Hewlett Packard Enterprise’s Vertica Analytics platform as an applied tool. Start date to be announced. Two hours per week over four weeks. Free with a Certificate of Achievement available for purchase. Introducción al Machine Learning (Universitas Telefónica/Miríada X): Taught in Spanish. An introduction to machine learning that covers supervised and unsupervised learning. A total of twenty estimated hours over four weeks. Machine Learning Path Step (Dataquest): Taught in Python using Dataquest’s interactive in-browser platform. Multiple guided projects and a “plus” project where you build your own machine learning system using your own data. Subscription required. The following six courses are offered by DataCamp. DataCamp’s hybrid teaching style leverages video and text-based instruction with lots of examples through an in-browser code editor. A subscription is required for full access to each course. Introduction to Machine Learning (DataCamp): Covers classification, regression, and clustering algorithms. Uses R. Fifteen videos and 81 exercises with an estimated timeline of six hours. 
Supervised Learning with scikit-learn (DataCamp): Uses Python and scikit-learn. Covers classification and regression algorithms. Seventeen videos and 54 exercises with an estimated timeline of four hours. Unsupervised Learning in R (DataCamp): Provides a basic introduction to clustering and dimensionality reduction in R. Sixteen videos and 49 exercises with an estimated timeline of four hours. Machine Learning Toolbox (DataCamp): Teaches the “big ideas” in machine learning. Uses R. 24 videos and 88 exercises with an estimated timeline of four hours. Machine Learning with the Experts: School Budgets (DataCamp): A case study from a machine learning competition on DrivenData. Involves building a model to automatically classify items in a school’s budget. DataCamp’s “Supervised Learning with scikit-learn” is a prerequisite. Fifteen videos and 51 exercises with an estimated timeline of four hours. Unsupervised Learning in Python (DataCamp): Covers a variety of unsupervised learning algorithms using Python, scikit-learn, and scipy. The course ends with students building a recommender system to recommend popular musical artists. Thirteen videos and 52 exercises with an estimated timeline of four hours. Machine Learning (Tom Mitchell/Carnegie Mellon University): Carnegie Mellon’s graduate introductory machine learning course. A prerequisite to their second graduate level course, “Statistical Machine Learning.” Taped university lectures with practice problems, homework assignments, and a midterm (all with solutions) posted online. A 2011 version of the course also exists. CMU is one of the best graduate schools for studying machine learning and has a whole department dedicated to ML. Free. Statistical Machine Learning (Larry Wasserman/Carnegie Mellon University): Likely the most advanced course in this guide. A follow-up to Carnegie Mellon’s Machine Learning course. Taped university lectures with practice problems, homework assignments, and a midterm (all with solutions) posted online. Free. Undergraduate Machine Learning (Nando de Freitas/University of British Columbia): An undergraduate machine learning course. Lectures are filmed and put on YouTube with the slides posted on the course website. The course assignments are posted as well (no solutions, though). de Freitas is now a full-time professor at the University of Oxford and receives praise for his teaching abilities in various forums. Graduate version available (see below). Machine Learning (Nando de Freitas/University of British Columbia): A graduate machine learning course. The comments in de Freitas’ undergraduate course (above) apply here as well. This is the fifth of a six-piece series that covers the best online courses for launching yourself into the data science field. We covered programming in the first article, statistics and probability in the second article, intros to data science in the third article, and data visualization in the fourth. The final piece will be a summary of those articles, plus the best online courses for other key topics such as data wrangling, databases, and even software engineering. If you’re looking for a complete list of Data Science online courses, you can find them on Class Central’s Data Science and Big Data subject page. If you enjoyed reading this, check out some of Class Central’s other pieces: If you have suggestions for courses I missed, let me know in the responses! If you found this helpful, click the 💚 so more people will see it here on Medium. 
This is a condensed version of my original article published on Class Central, where I’ve included detailed course syllabi. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Curriculum Lead, Projects @ DataCamp. I created my own data science master’s program. Our community publishes stories worth reading on development, design, and data science. " Michael Jordan,34K,16,https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7?source=tag_archive---------7----------------,Artificial Intelligence — The Revolution Hasn’t Happened Yet,"Artificial Intelligence (AI) is the mantra of the current era. The phrase is intoned by technologists, academicians, journalists and venture capitalists alike. As with many phrases that cross over from technical academic fields into general circulation, there is significant misunderstanding accompanying the use of the phrase. But this is not the classical case of the public not understanding the scientists — here the scientists are often as befuddled as the public. The idea that our era is somehow seeing the emergence of an intelligence in silicon that rivals our own entertains all of us — enthralling us and frightening us in equal measure. And, unfortunately, it distracts us. There is a different narrative that one can tell about the current era. Consider the following story, which involves humans, computers, data and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to 1 in 20.” She further let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis. But amniocentesis was risky — the risk of killing the fetus during the procedure was roughly 1 in 300. Being a statistician, I determined to find out where these numbers were coming from. To cut a long story short, I discovered that a statistical analysis had been done a decade previously in the UK, where these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. But I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I went back to tell the geneticist that I believed that the white spots were likely false positives — that they were literally “white noise.” She said “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; it’s when the new machine arrived.” We didn’t do the amniocentesis, and a healthy girl was born a few months later. But the episode troubled me, particularly after a back-of-the-envelope calculation convinced me that many thousands of people had gotten that diagnosis that same day worldwide, that many of them had opted for amniocentesis, and that a number of babies had died needlessly. And this happened day after day until it somehow got fixed. The problem that this episode revealed wasn’t about my individual medical care; it was about a medical system that measured variables and outcomes in various places and times, conducted statistical analyses, and made use of the results in other places and times. 
The problem had to do not just with data analysis per se, but with what database researchers call “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight. I’m also a computer scientist, and it occurred to me that the principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics, and taking into account human utilities, were nowhere to be found in my education. And it occurred to me that the development of such principles — which will be needed not only in the medical domain but also in domains such as commerce, transportation and education — were at least as important as those of building AI systems that can dazzle us with their game-playing or sensorimotor skills. Whether or not we come to understand “intelligence” any time soon, we do have a major challenge on our hands in bringing together computers and humans in ways that enhance human life. While this challenge is viewed by some as subservient to the creation of “artificial intelligence,” it can also be viewed more prosaically — but with no less reverence — as the creation of a new branch of engineering. Much like civil engineering and chemical engineering in decades past, this new discipline aims to corral the power of a few key ideas, bringing new resources and capabilities to people, and doing so safely. Whereas civil engineering and chemical engineering were built on physics and chemistry, this new engineering discipline will be built on ideas that the preceding century gave substance to — ideas such as “information,” “algorithm,” “data,” “uncertainty,” “computing,” “inference,” and “optimization.” Moreover, since much of the focus of the new discipline will be on data from and about humans, its development will require perspectives from the social sciences and humanities. While the building blocks have begun to emerge, the principles for putting these blocks together have not yet emerged, and so the blocks are currently being put together in ad-hoc ways. Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference-and-decision-making systems that involve machines, humans and the environment. Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws. And, unfortunately, we are not very good at anticipating what the next emerging serious flaw will be. What we’re missing is an engineering discipline with its principles of analysis and design. The current public dialog about these issues too often uses “AI” as an intellectual wildcard, one that makes it difficult to reason about the scope and consequences of emerging technology. Let us begin by considering more carefully what “AI” has been used to refer to, both recently and historically. Most of what is being called “AI” today, particularly in the public sphere, is what has been called “Machine Learning” (ML) for the past several decades. 
ML is an algorithmic field that blends ideas from statistics, computer science and many other disciplines (see below) to design algorithms that process data, make predictions and help make decisions. In terms of impact on the real world, ML is the real thing, and not just recently. Indeed, that ML would grow into massive industrial relevance was already clear in the early 1990s, and by the turn of the century forward-looking companies such as Amazon were already using ML throughout their business, solving mission-critical back-end problems in fraud detection and supply-chain prediction, and building innovative consumer-facing services such as recommendation systems. As datasets and computing resources grew rapidly over the ensuing two decades, it became clear that ML would soon power not only Amazon but essentially any company in which decisions could be tied to large-scale data. New business models would emerge. The phrase “Data Science” began to be used to refer to this phenomenon, reflecting the need of ML algorithms experts to partner with database and distributed-systems experts to build scalable, robust ML systems, and reflecting the larger social and environmental scope of the resulting systems. This confluence of ideas and technology trends has been rebranded as “AI” over the past few years. This rebranding is worthy of some scrutiny. Historically, the phrase “AI” was coined in the late 1950’s to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level intelligence. We will use the phrase “human-imitative AI” to refer to this aspiration, emphasizing the notion that the artificially intelligent entity should seem to be one of us, if not physically at least mentally (whatever that might mean). This was largely an academic enterprise. While related academic fields such as operations research, statistics, pattern recognition, information theory and control theory already existed, and were often inspired by human intelligence (and animal intelligence), these fields were arguably focused on “low-level” signals and decisions. The ability of, say, a squirrel to perceive the three-dimensional structure of the forest it lives in, and to leap among its branches, was inspirational to these fields. “AI” was meant to focus on something different — the “high-level” or “cognitive” capability of humans to “reason” and to “think.” Sixty years later, however, high-level reasoning and thought remain elusive. The developments which are now being called “AI” arose mostly in the engineering fields associated with low-level pattern recognition and movement control, and in the field of statistics — the discipline focused on finding patterns in data and on making well-founded predictions, tests of hypotheses and decisions. Indeed, the famous “backpropagation” algorithm that was rediscovered by David Rumelhart in the early 1980s, and which is now viewed as being at the core of the so-called “AI revolution,” first arose in the field of control theory in the 1950s and 1960s. One of its early applications was to optimize the thrusts of the Apollo spaceships as they headed towards the moon. Since the 1960s much progress has been made, but it has arguably not come about from the pursuit of human-imitative AI. Rather, as in the case of the Apollo spaceships, these ideas have often been hidden behind the scenes, and have been the handiwork of researchers focused on specific engineering challenges. 
Although not visible to the general public, research and systems-building in areas such as document retrieval, text classification, fraud detection, recommendation systems, personalized search, social network analysis, planning, diagnostics and A/B testing have been a major success — these are the advances that have powered companies such as Google, Netflix, Facebook and Amazon. One could simply agree to refer to all of this as “AI,” and indeed that is what appears to have happened. Such labeling may come as a surprise to optimization or statistics researchers, who wake up to find themselves suddenly referred to as “AI researchers.” But labeling of researchers aside, the bigger problem is that the use of this single, ill-defined acronym prevents a clear understanding of the range of intellectual and commercial issues at play. The past two decades have seen major progress — in industry and academia — in a complementary aspiration to human-imitative AI that is often referred to as “Intelligence Augmentation” (IA). Here computation and data are used to create services that augment human intelligence and creativity. A search engine can be viewed as an example of IA (it augments human memory and factual knowledge), as can natural language translation (it augments the ability of a human to communicate). Computing-based generation of sounds and images serves as a palette and creativity enhancer for artists. While services of this kind could conceivably involve high-level reasoning and thought, currently they don’t — they mostly perform various kinds of string-matching and numerical operations that capture patterns that humans can make use of. Hoping that the reader will tolerate one last acronym, let us conceive broadly of a discipline of “Intelligent Infrastructure” (II), whereby a web of computation, data and physical entities exists that makes human environments more supportive, interesting and safe. Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce and finance, with vast implications for individual humans and societies. This emergence sometimes arises in conversations about an “Internet of Things,” but that effort generally refers to the mere problem of getting “things” onto the Internet — not to the far grander set of challenges associated with these “things” capable of analyzing those data streams to discover facts about the world, and interacting with humans and other “things” at a far higher level of abstraction than mere bits. For example, returning to my personal anecdote, we might imagine living our lives in a “societal-scale medical system” that sets up data flows, and data-analysis flows, between doctors and devices positioned in and around human bodies, thereby able to aid human intelligence in making diagnoses and providing care. The system would incorporate information from cells in the body, DNA, blood tests, environment, population genetics and the vast scientific literature on drugs and treatments. It would not just focus on a single patient and a doctor, but on relationships among all humans — just as current medical testing allows experiments done on one set of humans (or animals) to be brought to bear in the care of other humans. It would help maintain notions of relevance, provenance and reliability, in the way that the current banking system focuses on such challenges in the domain of finance and payment. 
And, while one can foresee many problems arising in such a system — involving privacy issues, liability issues, security issues, etc — these problems should properly be viewed as challenges, not show-stoppers. We now come to a critical issue: Is working on classical human-imitative AI the best or only way to focus on these larger challenges? Some of the most heralded recent success stories of ML have in fact been in areas associated with human-imitative AI — areas such as computer vision, speech recognition, game-playing and robotics. So perhaps we should simply await further progress in domains such as these. There are two points to make here. First, although one would not know it from reading the newspapers, success in human-imitative AI has in fact been limited — we are very far from realizing human-imitative AI aspirations. Unfortunately the thrill (and fear) of making even limited progress on human-imitative AI gives rise to levels of over-exuberance and media attention that is not present in other areas of engineering. Second, and more importantly, success in these domains is neither sufficient nor necessary to solve important IA and II problems. On the sufficiency side, consider self-driving cars. For such technology to be realized, a range of engineering problems will need to be solved that may have little relationship to human competencies (or human lack-of-competencies). The overall transportation system (an II system) will likely more closely resemble the current air-traffic control system than the current collection of loosely-coupled, forward-facing, inattentive human drivers. It will be vastly more complex than the current air-traffic control system, specifically in its use of massive amounts of data and adaptive statistical modeling to inform fine-grained decisions. It is those challenges that need to be in the forefront, and in such an effort a focus on human-imitative AI may be a distraction. As for the necessity argument, it is sometimes argued that the human-imitative AI aspiration subsumes IA and II aspirations, because a human-imitative AI system would not only be able to solve the classical problems of AI (as embodied, e.g., in the Turing test), but it would also be our best bet for solving IA and II problems. Such an argument has little historical precedent. Did civil engineering develop by envisaging the creation of an artificial carpenter or bricklayer? Should chemical engineering have been framed in terms of creating an artificial chemist? Even more polemically: if our goal was to build chemical factories, should we have first created an artificial chemist who would have then worked out how to build a chemical factory? A related argument is that human intelligence is the only kind of intelligence that we know, and that we should aim to mimic it as a first step. But humans are in fact not very good at some kinds of reasoning — we have our lapses, biases and limitations. Moreover, critically, we did not evolve to perform the kinds of large-scale decision-making that modern II systems must face, nor to cope with the kinds of uncertainty that arise in II contexts. One could argue that an AI system would not only imitate human intelligence, but also “correct” it, and would also scale to arbitrarily large problems. But we are now in the realm of science fiction — such speculative arguments, while entertaining in the setting of fiction, should not be our principal strategy going forward in the face of the critical IA and II problems that are beginning to emerge. 
We need to solve IA and II problems on their own merits, not as a mere corollary to a human-imitative AI agenda. It is not hard to pinpoint algorithmic and infrastructure challenges in II systems that are not central themes in human-imitative AI research. II systems require the ability to manage distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions and they must deal with long-tail phenomena whereby there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries. Finally, and of particular importance, II systems must bring economic ideas such as incentives and pricing into the realm of the statistical and computational infrastructures that link humans to each other and to valued goods. Such II systems can be viewed as not merely providing a service, but as creating markets. There are domains such as music, literature and journalism that are crying out for the emergence of such markets, where data analysis links producers and consumers. And this must all be done within the context of evolving societal, ethical and legal norms. Of course, classical human-imitative AI problems remain of great interest as well. However, the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems that perform natural language processing, the need to infer and represent causality, the need to develop computationally-tractable representations of uncertainty and the need to develop systems that formulate and pursue long-term goals. These are classical goals in human-imitative AI, but in the current hubbub over the “AI revolution,” it is easy to forget that they are not yet solved. IA will also remain quite essential, because for the foreseeable future, computers will not be able to match humans in their ability to reason abstractly about real-world situations. We will need well-thought-out interactions of humans and computers to solve our most pressing problems. And we will want computers to trigger new levels of human creativity, not replace human creativity (whatever that might mean). It was John McCarthy (while a professor at Dartmouth, and soon to take a position at MIT) who coined the term “AI,” apparently to distinguish his budding research agenda from that of Norbert Wiener (then an older professor at MIT). Wiener had coined “cybernetics” to refer to his own vision of intelligent systems — a vision that was closely tied to operations research, statistics, pattern recognition, information theory and control theory. McCarthy, on the other hand, emphasized the ties to logic. In an interesting reversal, it is Wiener’s intellectual agenda that has come to dominate in the current era, under the banner of McCarthy’s terminology. (This state of affairs is surely, however, only temporary; the pendulum swings more in AI than in most fields.) But we need to move beyond the particular historical perspectives of McCarthy and Wiener. 
We need to realize that the current public dialog on AI — which focuses on a narrow subset of industry and a narrow subset of academia — risks blinding us to the challenges and opportunities that are presented by the full scope of AI, IA and II. This scope is less about the realization of science-fiction dreams or nightmares of super-human machines, and more about the need for humans to understand and shape technology as it becomes ever more present and influential in their daily lives. Moreover, in this understanding and shaping there is a need for a diverse set of voices from all walks of life, not merely a dialog among the technologically attuned. Focusing narrowly on human-imitative AI prevents an appropriately wide range of voices from being heard. While industry will continue to drive many developments, academia will also continue to play an essential role, not only in providing some of the most innovative technical ideas, but also in bringing researchers from the computational and statistical disciplines together with researchers from other disciplines whose contributions and perspectives are sorely needed — notably the social sciences, the cognitive sciences and the humanities. On the other hand, while the humanities and the sciences are essential as we go forward, we should also not pretend that we are talking about something other than an engineering effort of unprecedented scale and scope — society is aiming to build new kinds of artifacts. These artifacts should be built to work as claimed. We do not want to build systems that help us with medical treatments, transportation options and commercial opportunities to find out after the fact that these systems don’t really work — that they make errors that take their toll in terms of human lives and happiness. In this regard, as I have emphasized, there is an engineering discipline yet to emerge for the data-focused and learning-focused fields. As exciting as these latter fields appear to be, they cannot yet be viewed as constituting an engineering discipline. Moreover, we should embrace the fact that what we are witnessing is the creation of a new branch of engineering. The term “engineering” is often invoked in a narrow sense — in academia and beyond — with overtones of cold, affectless machinery, and negative connotations of loss of control by humans. But an engineering discipline can be what we want it to be. In the current era, we have a real opportunity to conceive of something historically new — a human-centric engineering discipline. I will resist giving this emerging discipline a name, but if the acronym “AI” continues to be used as placeholder nomenclature going forward, let’s be aware of the very real limitations of this placeholder. Let’s broaden our scope, tone down the hype and recognize the serious challenges ahead. Michael I. Jordan From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Michael I. Jordan is a Professor in the Department of Electrical Engineering and Computer Sciences and the Department of Statistics at UC Berkeley. " Milo Spencer-Harper,7.8K,6,https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1?source=tag_archive---------8----------------,How to build a simple neural network in 9 lines of Python code,"As part of my quest to learn about AI, I set myself the goal of building a simple neural network in Python. 
To ensure I truly understand it, I had to build it from scratch without using a neural network library. Thanks to an excellent blog post by Andrew Trask I achieved my goal. Here it is in just 9 lines of code: In this blog post, I’ll explain how I did it, so you can build your own. I’ll also provide a longer, but more beautiful version of the source code. But first, what is a neural network? The human brain consists of 100 billion cells called neurons, connected together by synapses. If sufficient synaptic inputs to a neuron fire, that neuron will also fire. We call this process “thinking”. We can model this process by creating a neural network on a computer. It’s not necessary to model the biological complexity of the human brain at a molecular level, just its higher level rules. We use a mathematical technique called matrices, which are grids of numbers. To make it really simple, we will just model a single neuron, with three inputs and one output. We’re going to train the neuron to solve the problem below. The first four examples are called a training set. Can you work out the pattern? Should the ‘?’ be 0 or 1? You might have noticed, that the output is always equal to the value of the leftmost input column. Therefore the answer is the ‘?’ should be 1. Training process But how do we teach our neuron to answer the question correctly? We will give each input a weight, which can be a positive or negative number. An input with a large positive weight or a large negative weight, will have a strong effect on the neuron’s output. Before we start, we set each weight to a random number. Then we begin the training process: Eventually the weights of the neuron will reach an optimum for the training set. If we allow the neuron to think about a new situation, that follows the same pattern, it should make a good prediction. This process is called back propagation. Formula for calculating the neuron’s output You might be wondering, what is the special formula for calculating the neuron’s output? First we take the weighted sum of the neuron’s inputs, which is: Next we normalise this, so the result is between 0 and 1. For this, we use a mathematically convenient function, called the Sigmoid function: If plotted on a graph, the Sigmoid function draws an S shaped curve. So by substituting the first equation into the second, the final formula for the output of the neuron is: You might have noticed that we’re not using a minimum firing threshold, to keep things simple. Formula for adjusting the weights During the training cycle (Diagram 3), we adjust the weights. But how much do we adjust the weights by? We can use the “Error Weighted Derivative” formula: Why this formula? First we want to make the adjustment proportional to the size of the error. Secondly, we multiply by the input, which is either a 0 or a 1. If the input is 0, the weight isn’t adjusted. Finally, we multiply by the gradient of the Sigmoid curve (Diagram 4). To understand this last one, consider that: The gradient of the Sigmoid curve, can be found by taking the derivative: So by substituting the second equation into the first equation, the final formula for adjusting the weights is: There are alternative formulae, which would allow the neuron to learn more quickly, but this one has the advantage of being fairly simple. Constructing the Python code Although we won’t use a neural network library, we will import four methods from a Python mathematics library called numpy. 
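The nine-line listing and the formulas in the original post are embedded images, so they do not appear in the text above. The sketch below is a reconstruction of the recipe just described, not the author's exact listing: output = sigmoid(weighted sum of the inputs) with sigmoid(x) = 1 / (1 + e^-x), and each weight adjusted by error × input × gradient of the sigmoid. Variable names are my own, and its imports (exp, array, random, dot) are presumably the four numpy methods introduced next.

```python
from numpy import exp, array, random, dot

# The training set from the walkthrough: four examples, three inputs, one output.
training_set_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
training_set_outputs = array([[0, 1, 1, 0]]).T

random.seed(1)
# A single neuron: one weight per input, initialised to random values in (-1, 1).
synaptic_weights = 2 * random.random((3, 1)) - 1

for iteration in range(10000):
    # Output = sigmoid(weighted sum of the inputs), computed for all four examples at once.
    output = 1 / (1 + exp(-dot(training_set_inputs, synaptic_weights)))
    # "Error Weighted Derivative": error * input * gradient of the sigmoid curve.
    error = training_set_outputs - output
    synaptic_weights += dot(training_set_inputs.T, error * output * (1 - output))

# Consider a new situation [1, 0, 0] and ask the trained neuron for a prediction.
print(1 / (1 + exp(-dot(array([1, 0, 0]), synaptic_weights))))
```

Run as written, it trains on the four examples and then prints a value very close to 1 for the new situation [1, 0, 0], in line with the prediction quoted below.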
These are: For example we can use the array() method to represent the training set shown earlier: The ‘.T’ function, transposes the matrix from horizontal to vertical. So the computer is storing the numbers like this. Ok. I think we’re ready for the more beautiful version of the source code. Once I’ve given it to you, I’ll conclude with some final thoughts. I have added comments to my source code to explain everything, line by line. Note that in each iteration we process the entire training set simultaneously. Therefore our variables are matrices, which are grids of numbers. Here is a complete working example written in Python: Also available here: https://github.com/miloharper/simple-neural-network Final thoughts Try running the neural network using this Terminal command: python main.py You should get a result that looks like: We did it! We built a simple neural network using Python! First the neural network assigned itself random weights, then trained itself using the training set. Then it considered a new situation [1, 0, 0] and predicted 0.99993704. The correct answer was 1. So very close! Traditional computer programs normally can’t learn. What’s amazing about neural networks is that they can learn, adapt and respond to new situations. Just like the human mind. Of course that was just 1 neuron performing a very simple task. But what if we hooked millions of these neurons together? Could we one day create something conscious? I’ve been inspired by the huge response this article has received. I’m considering creating an online course. Click here to tell me what topic to cover. I’d love to hear your feedback. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Studied Economics at Oxford University. Founder of www.moju.io. Interested in politics and AI. Technology trends and New Invention? Follow this collection to update the latest trend! [UPDATE] As a collection editor, I don’t have any permission to add your articles in the wild. Please submit your article and I will approve. Also, follow this collection, please. " Greg Fish,1,4,https://worldofweirdthings.com/looking-for-a-ghost-in-the-machine-4c997c4da45b?source=tag_archive---------0----------------,looking for a ghost in the machine – [ weird things ],"A short while ago, I wrote about some of the challenges involved in creating artificial intelligence and raised the question of how exactly a machine would spontaneously attain self-awareness. While I’ve gotten plenty of feedback about how far technology has come so far and how it’s imminent that machines will become much smarter than us, I never got any specifics as to how exactly this would happen. To me, it’s not a philosophical question because I’m used to looking at technology from a design and development standpoint. When I ask for specifics, I’m talking about functional requirements. So far, the closest thing to outlining the requirements for a super-intelligent computer is a paper by University of Oxford philosopher and futurist Nick Bostrom. The first thing Bostrom tries to do is to establish a benchmark by how to grade what he calls a super-intellect and qualifying his definition. According to him, this super-intellect would be smarter than any human mind in every capacity from the scientific to the creative. It’s a pretty lofty goal because designing something smarter than yourself requires that you build something you don’t fully understand. 
You might have a sudden stroke of luck and succeed, but it’s more than likely that you’ll build a defective product instead. Imagine building a DNA helix from scratch and with no detailed manual to go by. Even if you have all the tools and know where to find some bits of information to guide you, when you don’t know exactly what you’re doing, the task becomes very challenging and you end up making a lot of mistakes along the way. There’s also the question of how exactly we evaluate what the term smarter means. In Bostrom’s projections, when you have an intelligent machine become fully proficient in a certain area of expertise like say, medicine, it could combine with another machine which has an excellent understanding of physics and so on until all this consolidation leads to a device that knows all that we know and can use all that cross-disciplinary knowledge to gain insights we just don’t have yet. Technologically that should be possible, but the question is whether a machine like that would really be smarter than humans per se. It would be far more knowledgeable than any individual human, granted. But it’s not as if there aren’t experts in particular fields coming together to make all sorts of cross-disciplinary connections and discoveries. What Bostrom calls a super-intellect is actually just a massive knowledge base that can mine itself for information. The paper was last revised in 1998 when we didn’t have the enormous digital libraries we take for granted in today’s world. Those libraries seem a fair bit like Bostrom’s super-intellect in their function and if we were to combine them to mine their depths with sophisticated algorithms which look for cross-disciplinary potential, we’d bring his concept to life. But there’s not a whole lot of intelligence there. Just a lot of data, much of which would be subject to change or revision as research and discovery continue. Just like Bostrom says, it would be a very useful tool for scientists and researchers. However, it wouldn’t be thinking on its own and giving the humans advice, even if we put all this data on supercomputers which could live up to the paper’s ambitious hardware requirements. Rev it up to match the estimated capacity of our brain, it says, and watch a new kind of intellect start waking up and take shape with the proper software. According to Bostrom, the human brain operates at 100 teraflops, or 100 trillion floating point operations per second. Now, as he predicted, computers have reached this speed by 2004 and went far beyond that. In fact, we have supercomputers which are as much as ten times faster. Supposedly, at these operating speeds, we should be able to write software which allows supercomputers to learn by interacting with humans and sifting through our digitized knowledge. But the reality is that we’d be trying to teach an intimate object made of metal and plastic how to think and solve problems, something we’re already born with and hone over our lifetimes. You can teach someone how to ride a bike and how to balance, but how exactly would you teach someone to understand the purpose of riding a bike? How would you tell someone with no emotion, no desires, no wants and no needs why he should go anywhere? That deep layer of motivation and wiring has taken several billion years to appear and was honed over a 600 million additional years of evolution. When we start trying to make an AI system comparable to ours, we’re effectively way behind from the get-go. 
To truly create an intelligent computer which doesn’t just act as if it’s thinking or do mechanical actions which are easy to predict and program, we’d need to impart in all that information in trillions of lines of code and trick circuitry into deducing it needs to behave like a living being. And that’s a job that couldn’t be done in less than century, much less in the next 20 to 30 years as projected by Ray Kurzweil and his fans. [ eerie illustration by Neil Blevins ] From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. techie, Rantt staff writer and editor, computer lobotomist science, tech, and other oddities " Oliver Lindberg,1,7,https://medium.com/the-lindberg-interviews/interview-with-googles-alfred-spector-on-voice-search-hybrid-intelligence-and-more-2f6216aa480c?source=tag_archive---------1----------------,"Interview with Google’s Alfred Spector on voice search, hybrid intelligence and more","Google’s a pretty good search engine, right? Well, you ain’t seen nothing yet. VP of research Alfred Spector talks to Oliver Lindberg about the technologies emerging from Google Labs — from voice search to hybrid intelligence and beyond This article originally appeared in issue 198 of .net magazine in 2010 and was republished at www.techradar.com. Google has always been tight-lipped about products that haven’t launched yet. It’s no secret, however, that thanks to the company’s bottom-up culture, its engineers are working on tons of new projects at the same time. Following the mantra of ‘release early, release often’, the speed at which the search engine giant is churning out tools is staggering. At the heart of it all is Alfred Spector, Google’s Vice President of Research and Special Initiatives. One of the areas Google is making significant advances in is voice search. Spector is astounded by how rapidly it’s come along. The Google Mobile App features ‘search by voice’ capabilities that are available for the iPhone, BlackBerry, Windows Mobile and Android. All versions understand English (including US, UK, Australian and Indian-English accents) but the latest addition, for Nokia S60 phones, even introduces Mandarin speech recognition, which — because of its many different accents and tonal characteristics — posed a huge engineering challenge. It’s the most spoken language in the world, but as it isn’t exactly keyboard-friendly, voice search could become immensely popular in China. “Voice is one of these grand technology challenges in computer science,” Spector explains. “Can a computer understand the human voice? It’s been worked on for many decades and what we’ve realised over the last couple of years is that search, particularly on handheld devices, is amenable to voice as an import mechanism. “It’s very valuable to be able to use voice. All of us know that no matter how good the keyboard, it’s tricky to type exactly the right thing into a searchbar, while holding your backpack and everything else.” To get a computer to take account of your voice is no mean feat, of course. “One idea is to take all of the voices that the system hears over time into one huge pan-human voice model. So, on the one hand we have a voice that’s higher and with an English accent, and on the other hand my voice, which is deeper and with an American accent. They both go into one model, or it just becomes personalised to the individual; voice scientists are a little unclear as to which is the best approach.” The research department is also making progress in machine translation. 
Google Translate already features 51 languages, including Swahili and Yiddish. The latest version introduces instant, real-time translation, phonetic input and text-to-speech support (in English). “We’re able to go from any language to any of the others, and there are 51 times 50, so 2,550 possibilities,” Spector explains. “We’re focusing on increasing the number of languages because we’d like to handle even those languages where there’s not an enormous volume of usage. It will make the web far more valuable to more people if they can access the English-or Chinese language web, for example. “But we also continue to focus on quality because almost always the translations are valuable but imperfect. Sometimes it comes from training our translation system over more raw data, so we have, say, EU documents in English and French and can compare them and learn rules for translation. The other approach is to bring more knowledge into translation. For example, we’re using more syntactic knowledge today and doing automated parsing with language. It’s been a grand challenge of the field since the late 1950s. Now it’s finally achieved mass usage.” The team, led by scientist Franz Josef Och, has been collecting data for more than 100 languages, and the Google Translator Toolkit, which makes use of the ‘wisdom of the crowds’, now even supports 345 languages, many of which are minority languages. The editor enables users to translate text, correct the automatic translation and publish it. Spector thinks that this approach is the future. As computers become even faster, handling more and more data — a lot of it in the cloud — machines learn from users and thus become smarter. He calls this concept ‘hybrid intelligence’. “It’s very difficult to solve these technological problems without human input,” he says. “It’s hard to create a robot that’s as clever, smart and knowledgeable of the world as we humans are. But it’s not as tough to build a computational system like Google, which extends what we do greatly and gradually learns something about the world from us, but that requires our interpretation to make it really successful. “We need to get computers and people communicating in both directions, so the computer learns from the human and makes the human more effective.” Examples of ‘hybrid intelligence’ are Google Suggest, which instantly offers popular searches as you type a search query, and the ‘did you mean?’ feature in Google search, which corrects you when you misspell a query in the search bar. The more you use it, the better the system gets. Training computers to become seemingly more intelligent poses major hurdles for Google’s engineers. “Computers don’t train as efficiently as people do,” Spector explains. “Let’s take the chess example. If a Kasparov was the educator, we could count on almost anything he says as being accurate. But if you tried to learn from a million chess players, you learn from my children as well, who play chess but they’re 10 and eight. They’ll be right sometimes and not right other times. There’s noise in that, and some of the noise is spam. One also has to have careful regard for privacy issues.” By collecting enormous amounts of data, Google hopes to create a powerful database that eventually will understand the relationship between words (for example, ‘a dog is an animal’ and ‘a dog has four legs’). The challenge is to try to establish these relationships automatically, using tons of information, instead of having experts teach the system. 
This database would then improve search results and language translations because it would have a better understanding of the meaning of the words. There’s also a lot of research around ‘conceptual search’. “Let’s take a video of a couple in front of the Empire State Building. We watch the video and it’s clear they’re on their honeymoon. But what is the video about? Is it about love or honeymoons, or is it about renting office space? It’s a fundamentally challenging problem.” One example of conceptual search is Google Image Swirl, which was added to Labs in November. Enter a keyword and you get a list of 12 images; clicking on each one brings up a cluster of related pictures. Click on any of them to expand the ‘wonder wheel’ further. Google notes that they’re not just the most relevant images; the algorithm determines the most relevant group of images with similar appearance and meaning. To improve the world’s data, Google continues to focus on the importance of the open internet. Another Labs project, Google Fusion Tables facilitates data management in the cloud. It enables users to create tables, filter and aggregate data, merge it with other data sources and visualise it with Google Maps or the Google Visualisation API. The data sets can then be published, shared or kept private and commented on by people around the world. “It’s an example of open collaboration,” Spector says. “If it’s public, we can crawl it to make it searchable and easily visible to people. We hired one of the best database researchers in the world, Alon Halevy, to lead it.” Google is aiming to make more information available more easily across multiple devices, whether it’s images, videos, speech or maps, no matter which language we’re using. Spector calls the impact “totally transparent processing — it revolutionises the role of computation in day-today life. The computer can break down all these barriers to communication and knowledge. No matter what device we’re using, we have access to things. We can do translations, there are books or government documents, and some day we hope to have medical records. Whatever you want, no matter where you are, you can find it.” Spector retired in early 2015 and now serves as the CTO of Two Sigma Investments This article originally appeared in issue 198 of .net magazine in 2010 and was republished at www.techradar.com. Photography by Andy Short From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Independent editor and content consultant. Founder and captain of @pixelpioneers. Co-founder and curator of www.GenerateConf.com. Former editor of @netmag. Interviews with leading tech entrepreneurs and web designers, conducted by @oliverlindberg at @netmag. " Greg Fish,1,4,https://worldofweirdthings.com/the-technical-trouble-with-humanoid-robots-2c712649f3c5?source=tag_archive---------5----------------,the technical trouble with humanoid robots – [ weird things ],"If you’ve been reading this blog long enough, you may recall that I’m not a big fan of humanoid robots. There’s no need to invoke the uncanny valley effect, even though some attempts to build humanoid robots managed to produce rather creepy entities which try to look as human as possible to goad future users into some kind of social bond with them, presumably to gain their trust and get into a perfect position to kill the inferior things made of flesh. No, the reason why I’m not sure that humanoid robots will be invaluable to us in the future is a very pragmatic one. 
Simply put, emulating bipedalism is a huge computational overhead as well as a major, and unavoidable engineering and maintenance headache. And with the limits on size and weight of would be robot butlers, as well as the patience of its users, humanoid bot designers may be aiming a bit too high... We walk, run, and perform complicated tasks with our hands and feet so easily, we only notice the amount of effort and coordination this takes after an injury that limits our mobility. The reason why we can do that lies in a small, squishy mass of neurons coordinating a firestorm of constant activity. Unlike old-standing urban myths imply, we actually use all of our brainpower, and we need it to help coordinate and execute the same motions that robots struggle to repeat. Of course our brains are cheating when compared to a computer because with tens of billions of neurons and trillions of synapses, our brains are like screaming fast supercomputers. They can calculate what it will take to catch a ball in mid-air in less than a few hundred milliseconds and make the most minute adjustments to our muscles in order to keep us balanced and upright just as quickly. Likewise, our bodies can heal the constant wear and tear on our joints, wear and tear we will accumulate from walking, running, and bumping into things. Bipedal robots navigating our world wouldn’t have these assets. Humanoid machines would need to be constantly maintained just to keep up with us in a mechanical sense, and carry the equivalent of Red Storm in their heads, or at least be linked to something like it, to even hope to coordinate themselves as quickly as we do cognitively and physically. Academically, this is a lofty goal which could yield new algorithms and robotic designs. Practically? Not so much. While last month’s feature in Pop Sci bemoaned the lack of interest in humanoid robots in the U.S., it also failed to demonstrate why such an incredibly complicated machine would be needed for basic household chores that could be done by robotic systems functioning independently, and without the need to move on two legs. Instead, we got the standard Baby Boomers’ caretaker argument which goes somewhat like this... Or, alternatively, a computer could book your appointments via e-mail, or a system that lets patients make an appointment with their doctors on the web, a smart dispenser that gives you the right amount of pills, checks for potential interactions based on public medical databases, and beeps to remind you to take your medicine, and a programmable walker with actuators and a few buttons could do these jobs while costing far less than the tens of millions a humanoid robot would cost by 2025, and requiring much less coordination or learning than a programmable humanoid. Why wouldn’t we want to pursue immediate fixes to what’s being described as a looming caretaker shortage choosing instead to invest billions of dollars into E-Jeeves, which may take an entire decade or two just to learn how to go about daily human life, ready to tackle the problem only after it was no longer an issue, even if we started right now? If anything, harping on the need for a robotic hand for Baby Boomers’ future medical woes would only prompt more R&D cash into immediate solutions and rules- based intelligent agents we already employ rather than long-term academic research. There’s a huge gap between human abilities and machinery because we have the benefit of having evolved over hundreds of millions of years of trial and error. 
Machines, even though they’re advancing at an ever faster pace, only had a few decades by comparison. It will take decades more to build self-repairing machines and computer chips that can boast the same performance as a supercomputer while being small enough to fit in human-sized robots’ heads before robotic butlers become practical and feasible. And even then, we might go with distinctly robotic versions because they’d be cheaper to maintain and operate. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. techie, Rantt staff writer and editor, computer lobotomist science, tech, and other oddities " Frank Diana,50,10,https://medium.com/@frankdiana/the-evolving-role-of-business-analytics-76818e686e39?source=tag_archive---------2----------------,The Evolving Role of Business Analytics – Frank Diana – Medium,"An older post that seems to be getting a lot of attention. Appreciation for analytics rising? Business Analytics refers to the skills, technologies, applications and practices for the continuous exploration of data to gain insight that drive business decisions. Business Analytics is multi-faceted. It combines multiple forms of analytics and applies the right method to deliver expected results. It focuses on developing new insights using techniques including, data mining, predictive analytics, natural language processing, artificial intelligence, statistical analysis and quantitative analysis. In addition, domain knowledge is a key component of the business analytics portfolio. Business Analytics can then be viewed as the combination of domain knowledge and all forms of analytics in a way that creates analytic applications focused on enabling specific business outcomes. Analytic applications have a set of business outcomes that they must enable. For fraud, its reducing loss, for quality & safety, it might be avoiding expensive recalls. Understanding how to enable these outcomes is the first step in determining the make-up of each specific application. For example, in the case of insurance fraud, it’s not enough to use statistical analysis to predict fraud. You need a strong focus on text, domain expertise, and the ability to visually portray organized crime rings. Insight gained through this analysis may be used as input for human decisions or may drive fully automated decisions. Database capacity, processor speeds and software enhancements will continue to drive even more sophisticated analytic applications. The key components of business analytics are: There is a massive explosion of data occurring on a number of levels. The notion of data overload was echoed in a previous 2010 IBM CEO study titled “Capitalizing on Complexity”. In this study, a large number of CEOs described their organizations as data rich, but insight poor. Many voiced frustration over their inability to transform available data into feasible action plans. This notion of turning data into insight, and insight to action is a common and growing theme. According to Pricewaterhouse-Coopers, there are approximately 75 to 100 million blogs and 10–20 million Internet discussion boards and forums in the English language alone. As the Forrester diagram describes, more consumers are moving up the ladder and becoming creators of content. In addition, estimates show the volume of unstructured data (email, audio, video, Web pages, etc.) doubles every three months. Effectively managing and harnessing this vast amount of information presents both a great challenge and a great opportunity. 
Data is flowing through medical devices, scientific devices, sensors, monitors, detectors, other supply chain devices, instrumented cars and roads, instrumented domestic appliances, etc. Everything will be instrumented and from this instrumentation comes data. This data will be analyzed to find insight that drives smarter decisions. The utility sector provides a great example of the growing need for analytics. The smart grid and the gradual installation of intelligent endpoints, smart meters and other devices will generate volumes of data. Smart grid utilities are evolving into brokers of information. The data tsunami that will wash over utilities in the coming years is a formidable IT challenge, but it is also a huge opportunity to move beyond simple meter-to-cash functions and into real-time optimization of their operations. This type of instrumentation is playing out in many industries. As this occurs, industry players will be challenged to leverage the data generated by these devices. Inside the enterprise, consider the increasing volumes of emails, Word documents, PDFs, Excel worksheets and free form text fields that contain everything from budgets and forecasts to customer proposals, contracts, call center notes and expense reports. Outside the enterprise, the growth of web-based content, which is primarily unstructured, continues to accelerate –everything from social media, comments in blogs, forums and social networks, to survey verbatim and wiki pages. Most industry analysts estimate more than 80% of the intelligence required to make smarter decisions is contained in unstructured data or text. The survey results in a recent MIT Sloan report support both an aggressive adoption of analytics and a shift in the analytic footprint. According to the report, many traditional forms of analytics will be surpassed in the next 24 months. The authors produced a very effective visual that shows this shift from today’s analytic footprint to the future footprint. Although listed as number one, the authors describe visualization as dashboards and scorecards — the traditional methods of visualization. New and emerging methods help accelerate time-to-insight. These new approaches help us absorb insight from large volumes of data in rapid fashion. The analytics identified as creating the most value in 24 months are: Companies and organizations continue to invest millions of dollars capturing, storing and maintaining all types of business data to drive sales and revenue, optimize operations, manage risk and ensure compliance. Most of this investment has been in technologies and applications that manage structured data — coded information residing in relational data base management systems in the form of rows and columns. Current methods such as traditional business intelligence (BI) are more about querying and reporting and focus on answering questions such as what happened, how many, how often, and what actions are needed. New forms of advanced analytics are required to address the business imperatives described earlier. Business Analytics focuses on answering questions such as why is this happening, what if these trends continue, what will happen next (predict), what is the best that can happen (optimize). There is a growing view that prescribing outcomes is the ultimate role of analytics; that is, identifying those actions that deliver the right business outcomes. Organizations should first define the insights needed to meet business objectives, and then identify data that provides that insight. 
Too often, companies start with data. The previously mentioned IBM study also revealed that analytics-driven organizations had 33 percent more revenue growth with 32 percent more return on capital invested. Organizations expect value from emerging analytic techniques to soar. The growth of innovative analytic applications will serve as a means to help individuals across the organization consume and act upon insights derived through complex analysis. Some examples of innovative use: A recent MIT Sloan report effectively uses the maturity model concept to describe how organizations typically evolve to analytic excellence. The authors point out that organizations begin with efficiency goals and then address growth objectives after experience is gained. The authors believe this is a common practice, but not necessarily a best practice. They see the traditional analytic adoption path starting in data-intensive areas like financial management, operations, and sales and marketing. As companies move up the maturity curve, they branch out into new functions, such as strategy, product research, customer service, and customer experience. In the opinion of the authors, these patterns suggest that success in one area stimulates adoption in others. They suggest that this allows organizations to increase their level of sophistication. The authors of the MIT Sloan special report through their analysis of survey results have created three levels of analytic capabilities: The report provides a very nice matrix that describes these levels in the context of a maturity model. In reviewing business challenges outlined in the matrix, there is one very interesting dynamic: the transition from cost and efficiencies to revenue growth, customer retention and customer acquisition. The authors of the report found that as the value of analytics grows, organizations are likely to seek a wider range of capabilities — and more advanced use of existing ones. The survey indicated that this dynamic is leading some organizations to create a centralized analytics unit that makes it possible to share analytic resources efficiently and effectively. These centralized enterprise units are the primary source of analytics, providing a home for more advanced skills within the organization. This same dynamic will lead to the appointment of Chief Analytics Officers (CAO) starting in 2011. The availability of strong business-focused analytical talent will be the greatest constraint on a company’s analytical capabilities. The Outsourcing of analytics will become an attractive alternative as the need for specialized skills will lead organizations to look for outside help. Outsourcing analytics allows a company to focus on taking action based on insights delivered by the outsourcer. The outsourcer can leverage these specialized resources across multiple clients. As the importance of analytics grows, organizations will have an option to outsource. Expect to see more of this in 2011. We will see more organizations establish enterprise data management functions to coordinate data across business units. We will also see smarter approaches such as information lifecycle management as opposed to the common approach of throwing more hardware at the growing data problem. The information management challenge will grow as millions of next-generation tech-savvy users use feeds and mash-ups to bring data together into usable parts so they can answer their own questions. This gives rise to new challenges, including data security and governance. 
Originally published at frankdiana.net on March 20, 2011. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. TCS Executive focused on the rapid evolution of society and business. Fascinated by the view of the world in the next decade and beyond https://frankdiana.net/ " Paul Christiano,43,31,https://ai-alignment.com/a-formalization-of-indirect-normativity-7e44db640160?source=tag_archive---------0----------------,Formalizing indirect normativity – AI Alignment,"This post outlines a formalization of what Nick Bostrom calls “indirect normativity.” I don’t think it’s an adequate solution to the AI control problem; but to my knowledge it was the first precise specification of a goal that meets the “not terrible” bar, i.e. which does not lead to terrible consequences if pursued without any caveats or restrictions. The proposal outlined here was sketched in early 2012 while I was visiting FHI, and was my first serious foray into AI control. When faced with the challenge of writing down precise moral principles, adhering to the standards demanded in mathematics, moral philosophers encounter two serious difficulties: In light of these difficulties, a moral philosopher might simply declare: “It is not my place to aspire to mathematical standards of precision. Ethics as a project inherently requires shared language, understanding, and experience; it becomes impossible or meaningless without them.” This may be a defensible philosophical position, but unfortunately the issue is not entirely philosophical. In the interest of building institutions or machines which reliably pursue what we value, we may one day be forced to describe precisely “what we value” in a way that does not depend on charitable or “common sense” interpretation (in the same way that we today must describe “what we want done” precisely to computers, often with considerable effort). If some aspects of our values cannot be described formally, then it may be more difficult to use institutions or machines to reliably satisfy them. This is not to say that describing our values formally is necessary to satisfying them, merely that it might make it easier. Since we are focusing on finding any precise and satisfactory moral theory, rather than resolving disputes in moral philosophy, we will adopt a consequentialist approach without justification and focus on axiology. Moreover, we will begin from the standpoint of expected utility maximization, and leave aside questions about how or over what space the maximization is performed. We aim to mathematically define a utility function U such that we would be willing to build a hypothetical machine which exceptionlessly maximized U, possibly at the catastrophic expense of any other values. We will assume that the machine has an ability to reason which at least rivals that of humans, and is willing to tolerate arbitrarily complex definitions of U (within its ability to reason about them). We adopt an indirect approach. Rather than specifying what exactly we want, we specify a process for determining what we want. This process is extremely complex, so that any computationally limited agent will always be uncertain about the process’ output. However, by reasoning about the process it is possible to make judgments about which action has the highest expected utility in light of this uncertainty. 
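The prose example in the next paragraph makes this concrete with a century of reflection; the toy computation below shows the same decision rule in miniature. The hypotheses, probabilities and utilities are all invented for illustration and merely stand in for an agent's beliefs about what the (far too expensive to run) process would output.

```python
# Toy expected-utility calculation under uncertainty about what the
# defining process would conclude; every number here is an invented assumption.
hypotheses = {
    "only happiness matters":     (0.9, {"plant trees": 5, "pave the forest": 6}),
    "trees' feelings matter too": (0.1, {"plant trees": 9, "pave the forest": -20}),
}

def expected_utility(action):
    # Weight each hypothesis about the process' output by its probability.
    return sum(p * utilities[action] for p, utilities in hypotheses.values())

for action in ("plant trees", "pave the forest"):
    print(action, expected_utility(action))
# A 10% chance that reflection would value trees is already enough to favor
# the cheap precaution, even though the agent never runs the reflection itself.
```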
For example, I might adopt the principle: “a state of affairs is valuable to the extent that I would judge it valuable after a century of reflection.” In general I will be uncertain about what I would say after a century, but I can act on the basis of my best guesses: after a century I will probably prefer worlds with more happiness, and so today I should prefer worlds with more happiness. After a century I have only a small probability of valuing trees’ feelings, and so today I should go out of my way to avoid hurting them if it is either instrumentally useful or extremely easy. As I spend more time thinking, my beliefs about what I would say after a century may change, and I will start to pursue different states of affairs even though the formal definition of my values is static. Similarly, I might desire to think about the value of trees’ feelings, if I expect that my opinions are unstable: if I spend a month thinking about trees, my current views will then be a much better predictor of my views after a hundred years, and if I know better whether or not trees’ feelings are valuable, I can make better decisions. This example is quite informal, but it communicates the main idea of the approach. We stress that the value of our contribution, if any, is in the possibility of a precise formulation. (Our proposal itself will be relatively informal; instead it is a description of how you would arrive at a precise formulation.) The use of indirection seems to be necessary to achieve the desired level of precision. Our proposal contains only two explicit steps: Each of these steps requires substantial elaboration, but we must also specify what we expect the human to do with these tools. This proposal is best understood in the context of other fantastic-seeming proposals, such as “my utility is whatever I would write down if I reflected for a thousand years without interruption or biological decay.” The counterfactual events which take place within the definition are far beyond the realm our intuition recognizes as “realistic,” and have no place except in thought experiments. But to the extent that we can reason about these counterfactuals and change our behavior on the basis of that reasoning (if so motivated), we can already see how such fantastic situations could affect our more prosaic reality. The remainder of this document consists of brief elaboration of some of these steps, and a few arguments about why this is a desirable process. The first step of our proposal is a high-fidelity mathematical model of human cognition. We will set aside philosophical troubles, and assume that the human brain is a purely physical system which may be characterized mathematically. Even granting this, it is not clear how we can realistically obtain such a characterization. The most obvious approach to characterizing a brain is to combine measurements of its behavior or architecture with an understanding of biology, chemistry, and physics. This project represents a massive engineering effort which is currently just beginning. Most pessimistically, our proposal could be postponed until this project’s completion. This could still be long before the mathematical characterization of the brain becomes useful for running experiments or automating human activities: because we are interested only in a definition, we do not care about having the computational resources necessary to simulate the brain. An impractical mathematical definition, however, may be much easier to obtain. 
We can define a model of a brain in terms of exhaustive searches which could never be practically carried out. For example, given some observations of a neuron, we can formally define a brute force search for a model of that neuron. Similarly, given models of individual neurons we may be able to specify a brute force search over all ways of connecting those neurons which account for our observations of the brain (say, some data acquired through functional neuroimaging). It may be possible to carry out this definition without exploiting any structural knowledge about the brain, beyond what is necessary to measure it effectively. By collecting imaging data for a human exposed to a wide variety of stimuli, we can recover a large corpus of data which must be explained by any model of a human brain. Moreover, by using our explicit knowledge of human cognition we can algorithmically generate an extensive range of tests which identify a successful simulation, by probing responses to questions or performance on games or puzzles. In fact, this project may be possible using existing resources. The complexity of the human brain is not as unapproachable as it may at first appear: though it may contain 10^14 synapses, each described by many parameters, it can be specified much more compactly. A newborn's brain can be specified by about 10^9 bits of genetic information, together with a recipe for a physical simulation of development. The human brain appears to form new long-term memories at a rate of 1–2 bits per second, suggesting that it may be possible to specify an adult brain using 10^9 additional bits of experiential information. This suggests that it may require only about 10^10 bits of information to specify a human brain, which is at the limits of what can be reasonably collected by existing technology for functional neuroimaging. This discussion has glossed over at least one question: what do we mean by 'brain emulation'? Human cognition does not reside in a physical system with sharp boundaries, and it is not clear how you would define or use a simulation of the "input-output" behavior of such an object. We will focus on some system which does have precisely defined input-output behavior, and which captures the important aspects of human cognition. Consider a system containing a human, a keyboard, a monitor, and some auxiliary instruments, well-insulated from the environment except for some wires carrying inputs to the monitor and outputs from the keyboard and auxiliary instruments (and wires carrying power). The inputs to this system are simply screens to be displayed on the monitor (say delivered as a sequence to be displayed one after another at 30 frames per second), while the outputs are the information conveyed from the keyboard and the other measuring apparatuses (also delivered as a sequence of data dumps, each recording activity from the last 30th of a second). This "human in a box" system can be easily formally defined if a precise description of a human brain and coarse descriptions of the human body and the environment are available. Alternatively, the input-output behavior of the human in a box can be directly observed, and a computational model constructed for the entire system. Let H be a mathematical definition of the resulting (randomized) function from input sequences (In(1), In(2), ..., In(K)) to the next output Out(K). H is, by design, a good approximation to what the human "would output" if presented with any particular input sequence. 
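Purely as an illustration of the kind of object H is (this sketch is mine, not the original post's, and every name in it is hypothetical), the signature is simply: the whole input history in, the next output out.

```python
# Illustrative sketch of the type of object H is meant to be: a randomized
# map from the full input history (In(1), ..., In(K)) to the next output
# Out(K) of the "human in a box". All names here are hypothetical.
import random
from typing import Callable, List

Screen = bytes   # one frame sent to the monitor (1/30th of a second)
Output = bytes   # keyboard and instrument data covering the same 1/30th of a second

# H consumes the whole input sequence so far and produces the next output.
H = Callable[[List[Screen]], Output]

def toy_H(inputs: List[Screen]) -> Output:
    """Stand-in emulation: returns one pseudo-random byte plus the frame count."""
    rng = random.Random(len(inputs))   # randomized, but reproducible for this toy
    return bytes([rng.randrange(256)]) + str(len(inputs)).encode()
```

A real H would, of course, be produced by the brute-force search over brain models described above; the sketch only pins down the interface.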
Using H, we can mathematically define what “would happen” if the human interacted with a wide variety of systems. For example, if we deliver Out(K) as the input to an abstract computer running some arbitrary software, and then define In(K+1) as what the screen would next display, we can mathematically define the distribution over transcripts which would have arisen if the human had interacted with the abstract computer. This computer could be running an interactive shell, a video game, or a messaging client. Note that H reflects the behavior of a particular human, in a particular mental state. This state is determined by the process used to design H, or the data used to learn it. In general, we can control H by choosing an appropriate human and providing appropriate instructions / training. More emulations could be produced by similar measures if necessary. Using only a single human may seem problematic, but we will not rely on this lone individual to make all relevant ethical judgments. Instead, we will try to select a human with the motivational stability to carry out the subsequent steps faithfully, which will define U using the judgment of a community consisting of many humans. This discussion has been brief and has necessarily glossed over several important difficulties. One difficulty is the danger of using computationally unbounded brute force search, given the possibility of short programs which exhibit goal-oriented behavior. Another difficulty is that, unless the emulation project is extremely conservative, the models it produces are not likely to be fully-functional humans. Their thoughts may be blurred in various ways, they may be missing many memories or skills, and they may lack important functionalities such as long-term memory formation or emotional expression. The scope of these issues depends on the availability of data from which to learn the relevant aspects of human cognition. Realistic proposals along these lines will need to accommodate these shortcomings, relying on distorted emulations as a tool to construct increasingly accurate models. For any idealized “software”, with a distinguished instruction return, we can use H to mathematically define the distribution over return values which would result, if the human were to interact with that software. We will informally define a particular program T which provides a rich environment, in which the remainder of our proposal can be implemented. From a technical perspective this will be the last step of our proposal. The remaining steps will be reflected only in the intentions and behavior of the human being simulated in H. Fix a convenient and adequately expressive language (say a dialect of Python designed to run on an abstract machine). T implements a standard interface for an interactive shell in this language: the user can look through all of the past instructions that have been executed and their return values (rendered as strings) or execute a new instruction. We also provide symbols representing H and T themselves (as functions from sequences of K inputs to a value for the Kth output). We also provide some useful information (such as a snapshot of the Internet, and some information about the process used to create H and T), which we encode as a bit string and store in a single environment variable data. We assume that our language of choice has a return instruction, and we have T return whenever the user executes this instruction. 
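To make the picture concrete, here is a toy rendering (mine, not the post's) of "H interacting with T": a shell that records a transcript of instructions and rendered results, exposes a data variable, and ends the session when a return instruction is executed. The self-referential symbols for H and T, and all of the safety machinery discussed next, are omitted.

```python
# Toy rendering of "H interacting with T" (illustrative only): the deliberator
# sees the transcript so far and submits the next instruction; the session ends
# when it executes a `return` instruction.
from typing import Callable, List, Tuple

Transcript = List[Tuple[str, str]]             # (instruction, result rendered as a string)

def run_session(deliberator: Callable[[Transcript], str],
                data: str,
                max_steps: int = 1000) -> float:
    env = {"data": data}                        # stands in for the stored background data
    transcript: Transcript = []
    for _ in range(max_steps):                  # crude guard in place of real loop-safety
        instruction = deliberator(transcript)
        if instruction.startswith("return "):
            return float(eval(instruction[len("return "):], env))
        try:
            result = repr(eval(instruction, env))
        except Exception as exc:
            result = f"error: {exc}"            # errors are rendered like any other result
        transcript.append((instruction, result))
    return 0.0                                  # arbitrary fallback if no return ever occurs

# Example: a trivial "deliberator" that inspects `data` once and then returns 0.5.
print(run_session(lambda t: "len(data)" if not t else "return 0.5", data="snapshot"))
```

The value returned by such a session is the raw material that the next step turns into a definition of U.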
Some care needs to be taken to define the behavior if T enters an infinite loop–we want to minimize the probability that the human accidentally hangs the terminal, with catastrophic consequences, but we cannot provide a complete safety-net without running into unresolvable issues with self-reference. We define U to be the value returned by H interacting with T. If H represented an unfortunate mental state, then this interaction could be short and unproductive: the simulated human could just decide to type ‘return 0’ and be done with it. However, by choosing an appropriate human to simulate and inculcating an appropriate mental state, we can direct the process further. We intend for H to use the resources in T to initiate a larger deliberative process. For example, the first step of this process may be to instantiate many copies of H, interacting with variants of messaging clients which are in contact with each other. The return value from the original process could then be defined as the value returned by a designated ‘leader’ from this community, or as a majority vote amongst the copies of H, or so on. Another step might be to create appropriate realistic virtual environments for simulated brains, rather than confining them to boxes. For motivational stability, it may be helpful to design various coordination mechanisms, involving frameworks for interaction, “cached” mental states which are frequently re-instantiated, or sanity checks whereby one copy of H monitors the behavior of another. The resulting communities of simulated brains then engage in a protracted planning process, ensuring that subsequent steps can be carried out safely or developing alternative approaches. The main priority of this community is to reduce the probability of errors as far as possible (exactly what constitutes an ‘error’ will be discussed at more length later). At the end of this process, we obtain a formal definition of a new protocol H+, which submits its inputs for consideration to a large community and then produces its outputs using some deliberation mechanism (democratic vote, one leader using the rest of the community as advisors, etc.) The next step requires our community of simulated brains to construct a detailed simulation of Earth which they can observe and manipulate. Once they have such a simulation, they have access to all of the data which would have been available on Earth. In particular, they can now explore many possible futures and construct simulations for each living human. In order to locate Earth, we will again leverage an exhaustive search. First, H+ decides on informal desiderata for an “Earth simulation.” These are likely to be as follows: Once H+ has decided on the desiderata, it uses a brute force search to find a simulation satisfying them: for each possible program it instantiates a new copy of H+ tasked with evaluating whether that program is an acceptable simulation. We then define E to be a uniform distribution over programs which pass this evaluation. We might have doubts about whether this process produces the “real” Earth–perhaps even once we have verified that it is identical according to a laundry list of measures, it may still be different in other important ways. There are two reasons why we might care about such differences. First, if the simulated Earth has a substantially different set of people than the real Earth, then a different set of people will be involved in the subsequent decision making. 
If we care particularly about the opinions of the people who actually exist (which the reader might well, being amongst such people!) then this may be unsatisfactory. Second, if events transpire significantly differently on the simulated Earth than on the real Earth, value judgments designed to guide behavior appropriately in the simulated Earth may lead to less appropriate behaviors in the real Earth. (This will not be a problem if our ultimate definition of U consists of universalizable ethical principles, but we will see that U might take other forms.) These concerns are addressed by a few broad arguments. First, checking a detailed but arbitrary 'laundry list' actually provides a very strong guarantee. For example, if this laundry list includes verifying a snapshot of the Internet, then every event or person documented on the Internet must exist unchanged, and every keystroke of every person composing a document on the Internet must not be disturbed. If the world is well interconnected, then it may be very difficult to modify parts of the world without having substantial effects elsewhere, and so if a long enough arbitrary list of properties is fixed, we expect nearly all of the world to be the same as well. Second, if the essential character of the world is fixed but details are varied, we should expect the sort of moral judgments reached by consensus to be relatively constant. Finally, if the system whose behavior depends on these moral judgments is identical between the real and simulated worlds, then outputting a U which causes that system to behave a certain way in the simulated world will also cause that system to behave that way in the real world. Once H+ has defined a simulation of the world which permits inspection and intervention, by careful trial and error H+ can inspect a variety of possible futures. In particular, they can find interventions which cause the simulated human society to conduct a real brain emulation project and produce high-fidelity brain scans for all living humans. Once these scans have been obtained, H+ can use them to define U as the output of a new community, H++, which draws on the expertise of all living humans operating under ideal conditions. There are two important degrees of flexibility: how to arrange the community for efficient communication and deliberation, and how to delegate the authority to define U. In terms of organization, the distinction between different approaches is probably not very important. For example, it would probably be perfectly satisfactory to start from a community of humans interacting with each other over something like the existing Internet (but on abstract, secure infrastructure). More important are the safety measures which would be in place, and the mechanism for resolving differences of value between different simulated humans. The basic approach to resolving disputes is to allow each human to independently create a utility function U, each bounded in the interval [0, 1], and then to return their average. This average can either be unweighted, or can be weighted by a measure of each individual's influence in the real world, in accordance with a game-theoretic notion like the Shapley value applied to abstract games or simulations of the original world (a toy sketch of this averaging appears below). More sophisticated mechanisms are also possible, and may be desirable. Of course these questions can and should be addressed in part by H+ during its deliberation in the previous step. 
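A minimal sketch of the averaging mechanism just described (all names are mine; the weights stand in for whatever influence measure is chosen, and the clip mirrors the treatment of U' described below):

```python
# Minimal sketch of the dispute-resolution step: each person contributes a
# utility function bounded in [0, 1], and the combined U is a (possibly
# influence-weighted) average, clipped back into [0, 1].
from typing import Callable, Dict, Hashable, Optional

Outcome = Hashable
UtilityFn = Callable[[Outcome], float]          # assumed to map outcomes into [0, 1]

def combine(utilities: Dict[str, UtilityFn],
            weights: Optional[Dict[str, float]] = None) -> UtilityFn:
    if weights is None:                         # unweighted average by default
        weights = {name: 1.0 for name in utilities}
    total = sum(weights.values())

    def U(outcome: Outcome) -> float:
        value = sum(w * utilities[name](outcome) for name, w in weights.items()) / total
        return min(1.0, max(0.0, value))        # defensive clip into [0, 1]
    return U

# Two hypothetical judges with influence weights 2:1.
U = combine({"alice": lambda o: 0.9, "bob": lambda o: 0.3},
            weights={"alice": 2.0, "bob": 1.0})
print(round(U("some outcome"), 3))              # 0.7, the weighted average of 0.9 and 0.3
```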
After all, H+ has access to an unlimited length of time to deliberate and has infinitely powerful computational aids. The role of our reasoning at this stage is simply to suggest that we can reasonably expect H+ to discover effective solutions. As when discussing discovering a brain simulation by brute force, we have skipped over some critical issues in this section. In general, brute force searches (particularly over programs which we would like to run) are quite dangerous, because such searches will discover many programs with destructive goal-oriented behaviors. To deal with these issues, in both cases, we must rely on patience and powerful safety measures. Once we have a formal description of a community of interacting humans, given as much time as necessary to deliberate and equipped with infinitely powerful computational aids, it becomes increasingly difficult to make coherent predictions about their behavior. Critically, though, we can also become increasingly confident that the outcome of their behavior will reflect their intentions. We sketch some possibilities, to illustrate the degree of flexibility available. Perhaps the most natural possibility is for this community to solve some outstanding philosophical problems and to produce a utility function which directly captures their preferences. However, even if they quickly discovered a formulation which appeared to be attractive, they would still be wise to spend a great length of time and to leverage some of these other techniques to ensure that their proposed solution was really satisfactory. Another natural possibility is to eschew a comprehensive theory of ethics, and define value in terms of the community’s judgment. We can define a utility function in terms of the hypothetical judgments of astronomical numbers of simulated humans, collaboratively evaluating the goodness of a state of affairs by examining its history at the atomic level, understanding the relevant higher-order structure, and applying human intuitions. It seems quite likely that the community will gradually engage in self-modifications, enlarging their cognitive capacity along various dimensions as they come to understand the relevant aspects of cognition and judge such modifications to preserve their essential character. Either independently or as an outgrowth of this process, they may (gradually or abruptly) pass control to machine intelligences which they are suitably confident expresses their values. This process could be used to acquire the power necessary to define a utility function in one of the above frameworks, or understanding value-preserving self-modification or machine intelligence may itself prove an important ingredient in formalizing what it is we value. Any of these operations would be performed only after considerable analysis, when the original simulated humans were extremely confident in the desirability of the results. Whatever path they take and whatever coordination mechanisms they use, eventually they will output a utility function U’. We then define U = 0 if U’ < 0, U = 1 if U’ > 1, and U = U’ otherwise. At this point we have offered a proposal for formally defining a function U. We have made some general observations about what this definition entails. But now we may wonder to what extent U reflects our values, or more relevantly, to what extent our values are served by the creation of U-maximizers. Concerns may be divided into a few natural categories: We respond to each of these objections in turn. 
If the process works as intended, we will reach a stage in which a large community of humans reflects on their values, undergoes a process of discovery and potentially self-modification, and then outputs its result. We may be concerned that this dynamic does not adequately capture what we value. For example, we may believe that some other extrapolation dynamic captures our values, or that it is morally desirable to act on the basis of our current beliefs without further reflection, or that the presence of realistic disruptions, such as the threat of catastrophe, has an important role in shaping our moral deliberation. The important observation, in the defense of our proposal, is that whatever objections we could think of today, we could think of within the simulation. If, upon reflection, we decide that too much reflection is undesirable, we can simply change our plans appropriately. If we decide that realistic interference is important for moral deliberation, we can construct a simulation in which such interference occurs, or determine our moral principles by observing moral judgments in our own world’s possible futures. There is some chance that this proposal is inadequate for some reason which won’t be apparent upon reflection, but then by definition this is a fact which we cannot possibly hope to learn by deliberating now. It therefore seems quite difficult to maintain objections to the proposal along these lines. One aspect of the proposal does get “locked in,” however, after being considered by only one human rather than by a large civilization: the distribution of authority amongst different humans, and the nature of mechanisms for resolving differing value judgments. Here we have two possible defenses. One is that the mechanism for resolving such disagreements can be reflected on at length by the individual simulated in H. This individual can spend generations of subjective time, and greatly expand her own cognitive capacities, while attempting to determine the appropriate way to resolve such disagreements. However, this defense is not completely satisfactory: we may be able to rely on this individual to produce a very technically sound and generally efficient proposal, but the proposal itself is quite value laden and relying on one individual to make such a judgment is in some sense begging the question. A second, more compelling, defense, is that the structure of our world has already provided a mechanism for resolving value disagreements. By assigning decision-making weight in a way that depends on current influence (for example, as determined by the simulated ability of various coalitions to achieve various goals), we can generate a class of proposals which are at a minimum no worse than the status quo. Of course, these considerations will also be shaped by the conditions surrounding the creation or maintenance of systems which will be guided by U–for example, if a nation were to create a U-maximizer, they might first adopt an internal policy for assigning influence on U. By performing this decision making in an idealized environment, we can also reduce the likelihood of destructive conflict and increase the opportunities for mutually beneficial bargaining. 
We may have moral objections to codifying this sort of "might makes right" policy, favoring a more democratic proposal or something else entirely, but as a matter of empirical fact a more 'cosmopolitan' proposal will be adopted only if it is supported by those with the appropriate forms of influence, a situation which is unchanged by precisely codifying the existing power structure. Finally, the values of the simulations in this process may diverge from the values of the original human models, for one reason or another. For example, the simulated humans may predictably disagree with the original models about ethical questions by virtue of (probably) having no physical instantiation. That is, the output of this process is defined in terms of what a particular human would do, in a situation which that human knows will never come to pass. If I ask "What would I do, if I were to wake up in a featureless room and were told that the future of humanity depended on my actions?" the answer might begin with "become distressed that I am clearly inhabiting a hypothetical situation, and adjust my ethical views to take into account the fact that people in hypothetical situations apparently have relevant first-person experience." Setting aside the question of whether such adjustments are justified, they at least raise the possibility that our values may diverge from those of the simulations in this process. These changes might be minimized, by understanding their nature in advance and treating them on a case-by-case basis (if we can become convinced that our understanding is exhaustive). For example, we could try to use humans who robustly employ updateless decision theories which never undergo such predictable changes, or we could attempt to engineer a situation in which all of the humans being emulated do have physical instantiations, and naive self-interest for those emulations aligns roughly with the desired behavior (for example, by allowing the early emulations to "write themselves into" our world). We can imagine many ways in which this process can fail to work as intended–the original brain emulations may fail to accurately model human behavior, the original subject may deviate from the intended plans, or simulated humans can make an error when interacting with their virtual environment which causes the process to get hijacked by some unintended dynamic. We can argue that the proposal is likely to succeed, and can bolster the argument in various ways (by reducing the number of assumptions necessary for success, building in fault-tolerance, justifying each assumption more rigorously, and so on). However, we are unlikely to eliminate the possibility of error. Therefore we need to argue that if the process fails with some small probability, the resulting values will only be slightly disturbed. This is the reason for requiring U to lie in the interval [0, 1]–we will see that this restriction bounds the damage which may be done by an unlikely failure. If the process fails with some small probability ε, then we can represent the resulting utility function as U = (1 − ε) U1 + ε U2, where U1 is the intended utility function and U2 is a utility function produced by some arbitrary error process. Now consider two possible states of affairs A and B such that U1(A) > U1(B) + ε/(1 − ε) ≈ U1(B) + ε. Then since 0 ≤ U2 ≤ 1, we have: U(A) = (1 − ε) U1(A) + ε U2(A) > (1 − ε) U1(B) + ε ≥ (1 − ε) U1(B) + ε U2(B) = U(B). Thus if A is substantially better than B according to U1, then A is better than B according to U. 
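For readability, here is the same bound in display form; nothing is assumed beyond what the paragraph above states (U = (1 − ε)U1 + εU2 with 0 ≤ U1, U2 ≤ 1):

```latex
% Display-form restatement of the error bound argued above.
\begin{align*}
\text{If } U_1(A) &> U_1(B) + \tfrac{\varepsilon}{1-\varepsilon} \text{, then} \\
U(A) = (1-\varepsilon)\,U_1(A) + \varepsilon\,U_2(A)
  &\ge (1-\varepsilon)\,U_1(A)                       && (\varepsilon\,U_2(A) \ge 0) \\
  &> (1-\varepsilon)\,U_1(B) + \varepsilon           && (\text{assumption on } U_1) \\
  &\ge (1-\varepsilon)\,U_1(B) + \varepsilon\,U_2(B) && (U_2(B) \le 1) \\
  &= U(B).
\end{align*}
```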
This shows that a small probability of error, whether coming from the stochasticity of our process or an agent's uncertainty about the process' output, has only a small effect on the resulting values. Moreover, the process contains humans who have access to a simulation of our world. This implies, in particular, that they have access to a simulation of whatever U-maximizing agents exist in the world, and they have knowledge of those agents' beliefs about U. This allows them to choose U with perfect knowledge of the effects of error in these agents' judgments. In some cases this will allow them to completely negate the effect of error terms. For example, if the randomness in our process causes a perfectly cooperative community of simulated humans to "control" U with probability 2⁄3, and causes an arbitrary adversary to control it with probability 1⁄3, then the simulated humans can spend half of their mass outputting a utility function which exactly counters the effect of the adversary (a numeric sketch below illustrates this cancellation). In general, the situation is not quite so simple: the fraction of mass controlled by any particular coalition will vary as the system's uncertainty about U varies, and so it will be impossible to counteract the effect of an error term in a way which is time-independent. Instead, we will argue later that an appropriate choice of a bounded and noisy U can be used to achieve a very wide variety of effective behaviors of U-maximizers, overcoming the limitations both of bounded utility maximization and of noisy specification of utility functions. Many possible problems with this scheme were described or implicitly addressed above. But that discussion was not exhaustive, and there are some classes of errors that fall through the cracks. One interesting class of failures concerns changes in the values of the hypothetical human H. This human is in a very strange situation, and it seems quite possible that the physical universe we know contains extremely few instances of that situation (especially as the process unfolds and becomes more exotic). So H's first-person experience of this situation may lead to significant changes in H's views. For example, our intuition that our own universe is valuable seems to be derived substantially from our judgment that our own first-person experiences are valuable. If hypothetically we found ourselves in a very alien universe, it seems quite plausible that we would judge the experiences within that universe to be morally valuable as well (depending perhaps on our initial philosophical inclinations). Another example concerns our self-interest: much of individual humans' values seems to depend on their own anticipations about what will happen to them, especially when faced with the prospect of very negative outcomes. If hypothetically we woke up in a completely non-physical situation, it is not exactly clear what we would anticipate, and this may distort our behavior. Would we anticipate the thought experiment proceeding as planned? Would we focus our attention on those locations in the universe where a simulation of the thought experiment might be occurring? This possibility is particularly troubling in light of the incentives our scheme creates — anyone who can manipulate H's behavior can have a significant effect on the future of our world, and so many may be motivated to create simulations of H. 
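The cancellation trick in the 2⁄3 vs. 1⁄3 example above can be checked numerically. In this sketch (mine; the outcomes and utility values are made up) the community spends half of its mass on the negation of the adversary's utility, and the resulting mixture is a positive affine transform of the intended utility, so it ranks outcomes identically:

```python
# Numeric check of the cancellation example: the cooperative community controls
# U with probability 2/3, an adversary with probability 1/3. The community puts
# half its mass on U_target and half on (1 - U_adversary); the mixture then
# equals U_target/3 + 1/3, an increasing affine transform of U_target.
def adversary(outcome: str) -> float:          # arbitrary adversarial utility in [0, 1]
    return {"A": 1.0, "B": 0.2, "C": 0.6}[outcome]

def target(outcome: str) -> float:             # what the community actually wants
    return {"A": 0.1, "B": 0.9, "C": 0.5}[outcome]

def community(outcome: str) -> float:          # stays within [0, 1] by construction
    return 0.5 * target(outcome) + 0.5 * (1.0 - adversary(outcome))

def mixture(outcome: str) -> float:            # U as realised by the random process
    return (2 / 3) * community(outcome) + (1 / 3) * adversary(outcome)

for o in ("A", "B", "C"):
    print(o, round(mixture(o), 3), round(target(o) / 3 + 1 / 3, 3))  # the two columns agree
```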
A realistic U-maximizer will not be able to carry out the process described in the definition of U–in fact, this process probably requires immensely more computing resources than are available in the universe. (It may even involve the reaction of a simulated human to watching a simulation of the universe!) To what extent can we make robust guarantees about the behavior of such an agent? We have already touched on this difficulty when discussing the maxim “A state of affairs is valuable to the extent I would judge it valuable after a century of reflection.” We cannot generally predict our own judgments in a hundred years’ time, but we can have well-founded beliefs about those judgments and act on the basis of those beliefs. We can also have beliefs about the value of further deliberation, and can strike a balance between such deliberation and acting on our current best guess. A U-maximizer faces a similar set of problems: it cannot understand the exact form of U, but it can still have well-founded beliefs about U, and about what sorts of actions are good according to U. For example, if we suppose that the U-maximizer can carry out any reasoning that we can carry out, then the U-maximizer knows to avoid anything which we suspect would be bad according to U (for example, torturing humans). Even if the U-maximizer cannot carry out this reasoning, as long as it can recognize that humans have powerful predictive models for other humans, it can simply appropriate those models (either by carrying out reasoning inspired by human models, or by simply asking). Moreover, the community of humans being simulated in our process has access to a simulation of whatever U-maximizer is operating under this uncertainty, and has a detailed understanding of that uncertainty. This allows the community to shape their actions in a way with predictable (to the U-maximizer) consequences. It is easily conceivable that our values cannot be captured by a bounded utility function. Easiest to imagine is the possibility that some states of the world are much better than others, in a way that requires unbounded utility functions. But it is also conceivable that the framework of utility maximization is fundamentally not an appropriate one for guiding such an agent’s action, or that the notion of utility maximization hides subtleties which we do not yet appreciate. We will argue that it is possible to transform bounded utility maximization into an arbitrary alternative system of decision-making, by designing a utility function which rewards worlds in which the U-maximizer replaced itself with an alternative decision-maker. It is straightforward to design a utility function which is maximized in worlds where any particular U-maximizer converted itself into a non-U-maximizer–even if no simple characterization can be found for the desired act, we can simply instantiate many communities of humans to look over a world history and decide whether or not they judge the U-maximizer to have acted appropriately. The more complicated question is whether a realistic U-maximizer can be made to convert itself into a non-U-maximizer, given that it is logically uncertain about the nature of U. It is at least conceivable that it couldn’t: if the desirability of some other behavior is only revealed by philosophical considerations which are too complex to ever be discovered by physically limited agents, then we should not expect any physically limited U-maximizer to respond to those considerations. 
Of course, in this case we could also not expect normal human deliberation to correctly capture our values. The relevant question is whether a U-maximizer could switch to a different normative framework, if an ordinary investment of effort by human society revealed that a different normative framework was more appropriate. If a U-maximizer does not spend any time investigating this possibility, then it may not be expected to act on it. But to the extent that we assign a significant probability to the simulated humans deciding that a different normative framework is more appropriate, and to the extent that the U-maximizer is able to either emulate or accept our reasoning, it will also assign a significant probability to this possibility (unless it is able to rule it out by more sophisticated reasoning). If we (and the U-maximizer) expect the simulations to output a U which rewards a switch to a different normative framework, and this possibility is considered seriously, then U-maximization entails exploring this possibility. If these explorations suggest that the simulated humans probably do recommend some particular alternative framework, and will output a U which assigns high value to worlds in which this framework is adopted and low value to worlds in which it isn't, then a U-maximizer will change frameworks. Such a "change of frameworks" may involve sweeping action in the world. For example, the U-maximizer may have created many other agents which are pursuing activities instrumentally useful to maximizing U. These agents may then need to be destroyed or altered; anticipating this possibility, the U-maximizer is likely to take actions to ensure that its current "best guess" about U does not get locked in. This argument suggests that a U-maximizer could adopt an arbitrary alternative framework, if it were feasible to conclude that humans would endorse that framework upon reflection. Our proposal appears to be something of a cop-out, in that it declines to directly take a stance on any ethical issues. Indeed, not only do we fail to specify a utility function ourselves, but we expect the simulations to which we have delegated the problem to delegate it, in turn, at least a few more times. Clearly at some point this process must bottom out with actual value judgments, and we may be concerned that this sort of "passing the buck" is just obscuring deeper problems which will arise when the process does bottom out. As observed above, whatever such concerns we might have can also be discovered by the simulations we create. If there is some fundamental difficulty which always arises when trying to assign values, then we certainly have not exacerbated this problem by delegation. Nevertheless, there are at least two coherent objections one might raise: Both of these objections can be met with a single response. In the current world, we face a broad range of difficult and often urgent problems. By passing the buck the first time, we delegate resolution of ethical challenges to a civilization which does not have to deal with some of these difficulties–in particular, it faces no urgent existential threats. This allows us to divert as much energy as possible to dealing with practical problems today, while still capturing most of the benefits of nearly arbitrarily extensive ethical deliberation. This process is defined in terms of the behavior of unthinkably many hypothetical brain emulations. It is conceivable that the moral status of these emulations may be significant. 
We must make a distinction between two possible sources of moral value: it could be the case that a U-maximizer carries out simulations on physical hardware in order to better understand U, and these simulations have moral value, or it could be the case that the hypothetical emulations themselves have moral value. In the first case, we can remark that the moral value of such simulations is itself incorporated into the definition of U. Therefore a U-maximizer will be sensitive to the possible suffering of simulations it runs while trying to learn about U–as long as it believes that we might be concerned about the simulations' welfare, upon reflection, it can rely as much as possible on approaches which do not involve running simulations, which deprive simulations of the first-person experience of discomfort, or which estimate outcomes by running simulations in more pleasant circumstances. If the U-maximizer is able to foresee that we will consider certain sacrifices in simulation welfare worthwhile, then it will make those sacrifices. In general, in the same way that we can argue that estimates of U reflect our values over states of affairs, we can argue that estimates of U reflect our values over processes for learning about U. In the second case, a U-maximizer in our world may have little ability to influence the welfare of hypothetical simulations invoked in the definition of U. However, the possible disvalue of these simulations' experiences is probably seriously diminished. In general the moral value of such hypothetical simulations' experiences is somewhat dubious. If we simply write down the definition of U, these simulations seem to have no more reality than story-book characters whose activities we describe. The best arguments for their moral relevance come from the great causal significance of their decisions: if the actions of a powerful U-maximizer depend on its beliefs about what a particular simulation would do in a particular situation, including for example that simulation's awareness of discomfort or fear, or confusion at the absurdity of the hypothetical situation in which they find themselves, then it may be the case that those emotional responses are granted moral significance. However, although we may define astronomical numbers of hypothetical simulations, the detailed emotional responses of very few of these simulations will play an important role in the definition of U. Moreover, for the most part the existences of the hypothetical simulations we define are extremely well-controlled by those simulations themselves, and may be expected to be counted as unusually happy by their own lights. The early simulations (who have less of this control) are created from an individual who has provided consent and is selected to find such situations particularly non-distressing. Finally, we observe that U can exert control over the experiences of even hypothetical simulations. If the early simulations would experience morally relevant suffering because of their causal significance, but the later simulations they generate robustly disvalue this suffering, the later simulations can simulate each other and ensure that they all take the same actions, eliminating the causal significance of the earlier simulations. Originally published at ordinaryideas.wordpress.com on April 21, 2012. 
" Robbie Tilton,3,15,https://medium.com/@robbietilton/emotional-computing-with-ai-3513884055fa?source=tag_archive---------1----------------,Emotional Computing – Robbie Tilton – Medium,"Investigating the human to computer relationship through reverse engineering the Turing test Humans are getting closer to creating a computer with the ability to feel and think. Although the processes of the human brain are largely unknown, computer scientists have been working to simulate the human capacity to feel and understand emotions. This paper explores what it means to live in an age where computers can have emotional depth and what this means for the future of human to computer interactions. In an experiment between a human and a human disguised as a computer, the Turing test is reverse engineered in order to understand the role computers will play as they become more adept at the processes of the human mind. Implications for this study are discussed and the direction for future research suggested. The computer is a gateway technology that has opened up new ways of creation, communication, and expression. Computers in first world countries are a standard household item (approximately 70% of Americans owned one as of 2009 (US Census Bureau)) and are utilized as a tool to achieve a diverse range of goals. As this product continues to become more globalized, transistors are becoming smaller, processors are becoming faster, hard drives are holding information in new networked patterns, and humans are adapting to the methods of interaction expected of machines. At the same time, with more powerful computers and quicker means of communication, many researchers are exploring how a computer can serve as a tool to simulate the brain's cognition. If a computer is able to achieve the same intellectual and emotional properties as the human brain, we could potentially understand how we ourselves think and feel. Coined by MIT, the term Affective Computing relates to the computation of emotion, or affective phenomena, and is a field of study that breaks down complex processes of the brain, relating them to machine-like activities. Marvin Minsky, Rosalind Picard, Clifford Nass, and Scott Brave — along with many others — have contributed to this field and to the question of what it would mean to have a computer that could fully understand its users. In their research it is very clear that humans have the capacity to associate human emotions and personality traits with a machine (Nass and Brave, 2005), but can a human ever truly treat a machine as a person? In this paper we will uncover what it means for humans to interact with machines of greater intelligence and attempt to predict the future of human to computer interactions. The human to computer relationship is continuously evolving and is dependent on the software interface users interact with. With regard to current wide-scale interfaces — OSX, Windows, Linux, iOS, and Android — the tools and abilities that a computer provides remain the central focus of computational advancement for commercial purposes. This relationship to software is driven by utilitarian needs, and humans do not expect emotional comprehension or intellectually equivalent thoughts in their household devices. 
As face tracking, eye tracking, speech recognition, and kinetic recognition are advancing in experimental laboratories, it is anticipated that these technologies will eventually make their way to the mainstream market, providing a new relationship to what a computer can understand about its users and how a user can interact with a computer. This paper is not about whether a computer will have the ability to feel and love its user; it asks to what capacity humans will be able to reciprocate feelings toward a machine. How does the Intelligence Quotient (IQ) differ from the Emotional Quotient (EQ)? An IQ is a representational measure of intelligence that captures cognitive abilities like learning, understanding, and dealing with new situations. An EQ is a method of measuring emotional intelligence and the ability to use both emotions and cognitive skills (Cherry). Advances in computer IQ have been astonishing and have proved that machines are capable of answering difficult questions accurately, are able to hold a conversation with human-like understanding, and allow for emotional connections between a human and a machine. The Turing test in particular has shown the machine's ability to think and even fool a person into believing that it is a human (the Turing test is explained in detail in section 4). Machines like Deep Blue, Watson, Eliza, Svetlana, CleverBot, and many more have all expanded the perceptions of what a computer is and can be. If an increased computational IQ can allow a human to computer relationship to feel more like a human to human interaction, what would the advancement of computational EQ bring us? Peter Robinson, a professor at the University of Cambridge, states that if a computer understands its users' feelings, it can then respond with an interaction that is more intuitive for its users (Robinson). In essence, EQ advocates feel that it can facilitate a more natural interaction process where collaboration can occur with a computer. In Alan Turing's Computing Machinery and Intelligence (Turing, 1950), a variant on the classic British parlor "imitation game" is proposed. The original game revolves around three players: a man (A), a woman (B), and an interrogator (C). The interrogator stays in a room apart from A and B and can only communicate with the participants through text-based communication (a typewriter or instant-messenger-style interface). When the game begins, one contestant (A or B) is asked to pretend to be the opposite gender and to try to convince the interrogator (C) of this. At the same time, the opposing participant is given full knowledge that the other contestant is trying to fool the interrogator. With his computational background, Turing took this imitation game one step further by replacing one of the participants (A or B) with a machine — thus making the interrogator try to determine whether he or she was speaking to a human or a machine. In 1950, Turing proposed that by 2000 the average interrogator would not have more than a 70 percent chance of making the right identification after five minutes of questioning. The Turing test was first passed in 1966, with Eliza by Joseph Weizenbaum, a chat robot programmed to act like a Rogerian psychotherapist (Weizenbaum, 1966). In 1972, Kenneth Colby created a similar bot called PARRY that incorporated more personality than Eliza and was programmed to act like a paranoid schizophrenic (Bowden, 2006). 
Since these initial victories for the test, the 21st century has continued to produce machines with more human-like qualities and traits: machines that have made people fall in love with them, convinced people that they are human, and displayed human-like reasoning. Brian Christian, the author of The Most Human Human, argues that the problem with designing artificial intelligence of greater ability is that even though these machines are capable of learning and speaking, they have no "self". They are mere accumulations of identities and thoughts that are foreign to the machine and have no central identity of their own. He also argues that people are beginning to idealize the machine and admire machines' capabilities more than those of their fellow humans; in essence, he argues, humans are evolving to become more like machines, with less of a notion of self (Christian, 2011). Turing states, "we like to believe that Man is in some subtle way superior to the rest of creation" and "it is likely to be quite strong in intellectual people, since they value the power of thinking more highly than others, and are more inclined to base their belief in the superiority of Man on this power." If this is true, will humans idealize the future machine for its intelligence, or will it remain an inferior being as an object of our creation? Reversing the Turing test allows us to understand how humans will treat machines when machines provide an equivalent emotional and intellectual capacity. This also bears directly on Geoffrey Jefferson's quote, "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain — that is, not only write it but know that it had written it." Participants were given a chat-room simulation between two participants: (A) a human interrogator and (B) a human disguised as a computer. In this simulation A and B were both placed in different rooms to avoid influence and communicated through a text-based interface. (A) was informed that (B) was an advanced computer chat-bot with the capacity to feel, understand, learn, and speak like a human. (B) was instructed to be himself or herself. Text-based communication was chosen to follow Turing's argument that a computer's voice should not help an interrogator determine whether it is a human or a computer. Pairings of participants took part in the interaction one at a time to avoid influence from other participants. Each experiment was five minutes in length to replicate Turing's time restraints. Twenty-eight graduate students were recruited from the NYU Interactive Telecommunications Program to participate in the study — 50% male and 50% female. The experiment was evenly distributed across men and women. After being recruited in person, participants were directed to a website that gave instructions and ran the experiment. Upon entering the website, (A) participants were told that we were in the process of evaluating an advanced cloud-based computing system that had the capacity to feel emotion, understand, learn, and converse like a human. (B) participants were instructed that they would be communicating with another person through text and to be themselves. They were also told that participant (A) thinks they are a computer, but that they shouldn't act like a computer or pretend to be one in any way. 
This allowed (A) to explicitly understand that they were talking to a computer, while (B) knew (A)'s perspective and explicitly was not going to play the role of a computer. Participants were then directed to communicate with the bot or human freely, without restrictions. After five minutes of conversation the participants were asked to stop and then filled out a questionnaire. Participants were asked to rate the IQ and EQ of the person they were conversing with.
(A) participants perceived the following of (B):
IQ: 0% — Not Good / 0% — Barely Acceptable / 21.4% — Okay / 50% — Great / 28.6% — Excellent. IQ Average Rating: 81.4%
EQ: 0% — Not Good / 7.1% — Barely Acceptable / 50% — Okay / 14.3% — Great / 28.6% — Excellent. EQ Average Rating: 72.8%
Ability to hold a conversation: 0% — Not Good / 0% — Barely Acceptable / 28.6% — Okay / 35.7% — Great / 35.7% — Excellent. Average: 81.4%
(B) participants perceived the following of (A):
IQ: 0% — Not Good / 21.4% — Barely Acceptable / 35.7% — Okay / 28.6% — Great / 14.3% — Excellent. IQ Average Rating: 67%
EQ: 7.1% — Not Good / 14.3% — Barely Acceptable / 28.6% — Okay / 35.7% — Great / 14.3% — Excellent. EQ Average Rating: 67%
Ability to hold a conversation: 7.1% — Not Good / 28.6% — Barely Acceptable / 35.7% — Okay / 0% — Great / 28.6% — Excellent. Average: 62.8%
Overall, (A) participants gave the perceived chatbot higher ratings than (B) participants gave (A). In particular, the highest rating was in regard to the chat-bot's IQ. This data suggests that people viewed the chat-bot as more intellectually competent. It also implies that people lower their IQ, EQ, and conversational ability when communicating with computers. (A) participants were allowed to choose their username within the chat system to best reflect how they wanted to portray themselves to the machine. (B) participants were designated the gender-neutral name "Bot" in an attempt to gauge gender perceptions of the machine. The male-to-female ratio was divided evenly across all participants: 50% male and 50% female. (A) participants 50% of the time thought (B) was a male, 7.1% a female, and 42.9% gender neutral. On the other hand, (B) participants 28.6% of the time thought (A) was a male, 57.1% a female, and 14.3% gender neutral. The usernames (A) chose are as follows: Hihi, Inessah Somade3 Willzing Jihyun, G, Ann, Divagrrl93, Thisdoug, Jono, Minion10, P, 123, itslynnburke. From these results, it is clear that people associate the male gender and gender neutrality with machines. It also demonstrates that people modify their identities when speaking with machines. (B) participants were asked if they would like to pursue a friendship with the person they chatted with. 50% of participants responded affirmatively that they would indeed like to pursue a friendship, while 50% said maybe or no. One response stated, "I would like to continue the conversation, but I don't think I would be enticed to pursue a friendship." Another responded, "Maybe? I like people who are intellectually curious, but I worry that the person might be a bit of a smart-ass." Overall, the participant disguised as a machine may or may not pursue a friendship after five minutes of text-based conversation. (B) participants were also asked if they felt (A) cared about their feelings. 
21.4% stated that (A) indeed did care about their feelings, 21.4% stated that they weren’t sure if (A) cared about their feelings, and 57.2% stated that (A) did not care about their feelings. These results indicate a user’s lack of attention to (B)’s emotional state. (A) participants were asked what they felt could be improved about the (B) participants. The following improvements were noted, “Should be funny” “Give it a better sense of humor” “It can be better if he knows about my friends or preference” “The response was inconsistent and too slow”“It should share more about itself. Your algorithm is prime prude, just like that LETDOWN Siri. Well, I guess I liked it better, but it should be more engaged and human consistency, not after the first cold prompt.” “It pushed me on too many questions” “I felt that it gave up on answering and the response time was a bit slow. Outsource the chatbot to fluent English speakers elsewhere and pretend they are bots — if the responses are this slow to this many inquiries, then it should be about the same experience.” “I was very impressed with its parsing ability so far. Not as much with its reasoning. I think some parameters for the conversation would help, like ‘Ask a question’” “Maybe make the response faster”“I was confused at first, because I asked a question, waited a bit, then asked another question, waited and then got a response from the bot...” The responses from this indicate that even if a computer is a human that its user may not necessarily be fully satisfied with its performance. The response implies that each user would like the machine to accommodate his or her needs in order to cause less personality and cognitive friction. With several participant comments incorporating response time, it also indicates people expect machines to have consistent response times. Humans clearly vary in speed when listening, thinking, and responding, but it is expected of machines to act in a rhythmic fashion. It also suggests that there is an expectation that a machine will answer all questions asked and will not ask its users more questions than perceived necessary. (A) participants were asked if they felt (B)’s Artificial Intelligence could improve their relationship to computers if integrated in their daily products. 57.1% of participants responded affirmatively that they felt this could improve their relationship:“Well- I think I prefer talking to a person better. But yes for ipod, smart phones, etc. would be very handy for everyday use products”“Yes. Especially iphone is always with me. So it can track my daily behaviors. That makes the algorithm smarter”“Possibly, I should have queries it for information that would have been more relevant to me”“Absolutely!”“Yes” The 42.9% which responded negatively had doubts that it would be necessary or desirable:“Not sure, it might creep me out if it were.”“I like Siri as much as the next gal, but honestly we’re approaching the uncanny valley now.”“Its not clear to me why this type of relationship needs to improve, i think human relationships still need a lot of work.”“Nope, I still prefer flesh sacks.“No” The findings of the paper are relevant to the future of Affective Computation: whether a super computer with a human-like IQ and EQ can improve the human-to-computer interaction. The uncertainty of computational equivalency that Turing brought forth is indeed an interesting starting point to understand what we want out of the future of computers. 
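As a side note on the results reported earlier: the "Average Rating" figures are consistent with mapping the five response options onto an evenly spaced 20–100% scale and taking the distribution-weighted mean. That mapping is an inference (it is not stated in the paper), but it reproduces the reported averages:

```python
# Reproducing the reported "Average Rating" figures, assuming the five response
# options map to an evenly spaced 20/40/60/80/100% scale (an inference; the
# mapping is not stated in the paper).
SCALE = {"Not Good": 20, "Barely Acceptable": 40, "Okay": 60, "Great": 80, "Excellent": 100}

def average(dist):                             # dist: option -> percentage of responses
    return sum(SCALE[opt] * share for opt, share in dist.items()) / 100

iq_of_b = {"Not Good": 0, "Barely Acceptable": 0, "Okay": 21.4, "Great": 50, "Excellent": 28.6}
eq_of_b = {"Not Good": 0, "Barely Acceptable": 7.1, "Okay": 50, "Great": 14.3, "Excellent": 28.6}

print(round(average(iq_of_b), 1))              # 81.4, matching the reported rating of (B)'s IQ
print(round(average(eq_of_b), 1))              # 72.9, i.e. the reported 72.8 up to rounding
```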
The responses from the experiment affirm gender perceptions of machines and show how we display ourselves to machines. It seems that we limit our intelligence, limit our emotions, and obscure our identities when communicating with a machine. This leads us to question whether we would want to give our true selves to a computer if it doesn't have a self of its own. It could also indicate that people censor themselves for machines because machines lack the similarity that bonds humans to humans, or that there's a stigma associated with placing information in a digital device. The inverse relationship is also shown in the data: people perceive a bot's IQ, EQ, and conversational ability to be high. Even though the chat-bot was in fact a human, this data implies that humans perceive bots as unrestricted and competent at certain procedures. The results also imply that humans aren't really sure what they want out of artificial intelligence in the future, and that we are not certain an affective computer would even enjoy a user's company and/or conversation. The results also suggest that we currently think of computers as very personal devices that should be passive (not active), yet reactive when interacted with. They suggest that we expect consistent reliability from machines and that we expect to take more information from a machine than it takes from us. A major limitation of this experiment is the sample size and sample diversity. The sample size of twenty-eight students is too small to fully understand and gather a stable result set. It was also only conducted with NYU Interactive Telecommunications students, who all have extensive experience with computers and technology. To get a more accurate assessment of emotions, a more diverse sample needs to be taken. Five minutes is a short amount of time to create an emotional connection or friendship. To stay true to the Turing test's limitations this was enforced, but further relational understanding could be gained if more time were granted. Besides the visual interface of the chat window, it would be important to show the emotions of participant (B) through a virtual avatar. Not having this visual feedback could have limited emotional resonance with (A) participants. Time is also a limitation. People aren't used to speaking to inquisitive machines yet, and even through a familiar interface (a chat-room), many participants hadn't held conversations with machines previously. Perhaps if chat-bots become more active conversational participants in commercial applications, users will feel less need to censor themselves in the conversation. In addition to the refinements noted in the limitations described above, there are several other experiments for possible future studies. For example, investigating a long-term human-to-bot relationship would provide a better understanding of the emotions a human can share with a machine and how a machine can reciprocate these emotions. It would also better allow computer scientists to understand what really creates a significant relationship when physical limitations are present. Future studies should attempt to push these results further by understanding how a larger sample reacts to a computer algorithm with higher intellectual and emotional understanding. They should also attempt to understand the boundaries of emotional computing and what is ideal for the user and for the machine without compromising either party's capacities.
This paper demonstrates the diverse range of emotions that people can feel for affective computation and indicates that we are not yet at a point where computational equivalency is fully desired or accepted. Positive reactions indicate that there is optimism for more adept artificial intelligence and that there is interest in the field for commercial use. It also shows that humans limit themselves when communicating with machines and that, inversely, machines don't limit themselves when communicating with humans. Books & Articles: Boden M., 2006, Mind as Machine: A History of Cognitive Science, Oxford University Press; Christian B., 2011, The Most Human Human; Minsky M., 2006, The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind, Simon & Schuster Paperbacks; Nass C., Brave S., 2005, Wired For Speech: How Voice Activates and Advances the Human-Computer Relationship, MIT Press; Nass C., Brave S., Hutchinson K., 2005, Computers that care: Investigating the effects of orientation of emotion exhibited by an embodied computer agent, Human-Computer Studies, 161–178, Elsevier; Picard R., 1997, Affective Computing, MIT Press; Searle J., 1980, Minds, Brains, and Programs, Cambridge University Press, 417–457; Turing A., 1950, Computing Machinery and Intelligence, Mind, 59, 433–460; Wilson R., Keil F., 2001, The MIT Encyclopedia of the Cognitive Sciences, MIT Press; Weizenbaum J., 1966, ELIZA — A Computer Program For the Study of Natural Language Communication Between Man and Machine, Communications of the ACM, 36–45. Websites: Cherry K., What is Emotional Intelligence?, http://psychology.about.com/od/personalitydevelopment/a/emotionalintell.htm; Epstein R., 2006, Clever Bots, Radio Lab, http://www.radiolab.org/2011/may/31/clever-bots/; IBM, 1997, Deep Blue, IBM, http://www.research.ibm.com/deepblue/; IBM, 2011, Watson, IBM, http://www-03.ibm.com/innovation/us/watson/index.html; Leavitt D., 2011, I Took the Turing Test, New York Times, http://www.nytimes.com/2011/03/20/books/review/book-review-the-most-human-human-by-brian-christian.html; Personal Robotics Group, 2008, Nexi, MIT, http://robotic.media.mit.edu/; Robinson P., The Emotional Computer, Cambridge Ideas, http://www.cam.ac.uk/research/news/the-emotional-computer/; US Census Bureau, 2009, Households with a Computer and Internet Use: 1984 to 2009, http://www.census.gov/hhes/computer/; 1960s, Eliza, MIT, http://www.manifestation.com/neurotoys/eliza.php3 From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. " Wildcat2030,5,5,https://becominghuman.ai/becoming-a-cyborg-should-be-taken-gently-of-modern-bio-paleo-machines-cyborgology-b6c65436e416?source=tag_archive---------3----------------,Becoming a Cyborg should be taken gently: Of Modern Bio-Paleo-Machines — Cyborgology,"We are on the edge of a Paleolithic Machine intelligence world. A world oscillating between that which is already historical, and that which is barely recognizable. Some of us, teetering on this bio-electronic borderline, have this ghostly sensation that a new horizon is on the verge of being revealed, still misty yet glowing with some inner light, eerie but compelling. The metaphor I used to bridge two seemingly contrasting, at first sight paradoxical, ideas (a concept as futuristic as machine intelligence and the Paleolithic age) is apt, I think. 
For though advances in computation, with fractional AI, appearing almost everywhere are becoming nearly casual, the truth of the matter is that Machines are still tribal and dispersed. It is a dawn all right, but a dawn is still only a hint of the day that is about to shine, a dawn of hyperconnected machines, interweaved with biological organisms, cybernetically info-related and semi independent. The modern Paleo-machines do not recognize borders; do not concern themselves with values and morality and do not philosophize about the meaning of it all, not yet that is. As in our own Paleo past the needs of the machines do not yet contain passions for individuation, desire for emotional recognition or indeed feelings of dismay or despair, uncontrollable urges or dreams of far worlds. Also this will change, eventually. But not yet. The paleo machinic world is in its experimentation stage, probing it boundaries, surveying the landscape of the infoverse, mapping the hyperconnected situation, charting a trajectory for its own evolution, all this unconsciously. We, the biological part of the machine, are providing the tools for its uplift, we embed cameras everywhere so it can see, we implant sensors all over the planet so it may feel, but above all we nudge and we push towards a greater connectivity, all this unaware. Together we form a weird cohabitation of biomechanical, electro-organic, planetary OS that is changing its environment, no more human, not mechanical, but a combined interactive intelligence, that journey on, oblivious to its past, blind to its future, irreverent to the moment of its conception, already lost to its parenthood agreement. And yet, it evolves. Unconscious on the machine part, unaware on the biological part, the almost sentient operating system of the global planetary infosphere, is emerging, wild eyed, complex in its arrangement of co-existence, it reaches to comprehend its unexpected growth. The quid pro quo: we give the machines the platform to evolve; the machines in turn give us advantages of fitness and manipulation. We give the machines a space to turn our dreams into reality; the machines in turn serve our needs and acquire sapience in the process. In this hypercomplex state of affairs, there is no judgment and no inherent morality; there is motion, inevitable, inexorable, inescapable, and mesmerizing. The embodiment is cybernetic, though there be no pilot. Cyborgian and enhanced we play the game, not of thrones but of the commons. Connected and networked the machines follow in our footsteps, catalyzing our universality, providing for us in turn a meaning we cannot yet understand or realize. The hybridization process is in full swing, reaching to cohere tribes of machines with tribes of humans, each providing for the other a non-designed direction for which neither has a plan, or projected outcome; both mingling and weaving a reality for which there is no ontos, expecting no Telos. All this leads us to remember that only retrospectively do we recognize the move from the paleo tribes to the Neolithic status, we did not know that it happened then, and had no control over the motion, on the same token, we scarcely see the motion now and have no control over its directionality. There is however a small difference, some will say it is insignificant, I do not think it so, for we are, some of us, to some extent at least, aware of the motion, and we can embed it with a meaning of our choice. 
We can, if we muster our cognitive reason, our amazing skills of abstraction and simulation, whisper sweet utopias into the probability process of emergence. We can, if we so desire, passionate the operating system, to beautify the process of evolution and eliminate (or mitigate) the dangers of inchoate blind walking. We can, if we manage to control our own paleo-urges to destroy ourselves, allow the combined interactive intelligence of man and machine to shine forth into a brighter future of expanded subjectivity. We can sing to the machines, cuddle them; caress their circuits, accepting their electronic-flaws so they can accept our bio-flaws, we can merge aesthetically, not with conquest but with understanding. We can become wise, that is the difference this time around. Being wise in this context implies a new form of discourse, an intersubjective cross-pollination of a wide array of disciplines. The very trans-disciplinarily nature of the process of cyborgization informs the discourse of subjectivity. The discourse on subjectivity, not unlike the move from paleo to Neolithic societal structure, demands of us a re-assessment of the relations between man and machine. For this re-assessment to take place coherently the nascent re-organization of the hyperconnected machinic infosphere need be understood as a ground for the expansion of subjectivity. In a sense the motion into the new hyperconnected infosphere is not unlike the move of the Neolithic to domestication of plants and animals. This time around however the domestication can be seen as the adoption of technologies for the furtherance of subjectivity into the world. Understanding this process is difficult and far from obvious, it is a perspective however that might allow us a wider context of appreciation of the current upheavals happening all around us. *** A writer, futurist and a Polytopian, Tyger.A.C (a.k.a @Wildcat2030) is the founder and editor of the Polytopia Project at Space Collective, he also writes at Reality Augmented, and Urbnfutr as well as contributing to H+ magazine. His passion and love for science fiction led him to initiate the Sci-fi Ultrashorts project. *** Photo credit for baby with iPad photo: “Illumination” by Amanda Tipton. Originally published at thesocietypages.org on November 22, 2012. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Futurist,Writer,Polytopia, Philosophy,Science,Science Fiction, Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity. " Greg Fish,1,4,https://worldofweirdthings.com/why-you-just-cant-black-box-an-a-i-d7c41e7d9123?source=tag_archive---------5----------------,why you just can’t black box an a.i. – [ weird things ],"Singularitarians generally believe two things about artificial intelligence. First and foremost, they say, it’s just a matter of time before we have an AI system that will quickly become superhumanly intelligent. Secondly, and a lot more ominously, they believe that this system can sweep away humanity, not because it will be evil by nature but because it won’t care about humans or what happens to them and one of the biggest priorities for a researcher in the field should be figuring out how to build a friendly artificial intelligence, training it like one would train a pet, with a mix of operant conditioning and software. 
While the first point is one I've covered before, pointing out again and again that superhuman is a very relative term and that computers are in many ways already superhuman without being intelligent, the second point is one that I haven't yet given a proper examination. And neither have vocal Singularitarians. Why? Because if you read any of the papers on their version of friendly AI, you'll soon discover how quickly they begin to describe the system they're trying to tame as a black box with mostly known inputs and measurable outputs, hardly a confident and lucid description of how an artificial intelligence functions, and ultimately, what rules will govern it. No problem there, say the Singularitarians, the system will be so advanced by the time this happens that we'll be very unlikely to know exactly how it functions anyway. It will modify its own source code, optimize how well it performs, and generally be all but inscrutable to computer scientists. Sounds great for comic books, but when we're talking about real artificially intelligent systems, this approach sounds more like letting robots, artificial neural networks, and Bayesian classifiers come up with whatever intelligence they want while sending all the researchers and programmers out for coffee in the meantime. Artificial intelligence will not grow from a vacuum; it will come together from systems used to tackle discrete tasks and governed by several, if not one, common frameworks that exchange information between these systems. I say this because the only forms of intelligence we can readily identify are found in living things which use a brain to perform cognitive tasks, and since brains seem to be wired this way and we're trying to emulate the basic functions of the brain, it wouldn't be all that much of a stretch to assume that we'd want to combine systems good at related tasks and build on the accomplishments of existing systems. And to combine them, we'll have to know how to build them. Conceiving of an AI in a black box is a good approach if we want to test how a particular system should react when working with the AI, focusing on the system we're trying to test by mocking the AI's responses down the chain of events. Think of it as dependency injection with an AI interfacing system. But by abstracting the AI away, what we've also done is made it impossible to test the inner workings of the AI system. No wonder then that the Singularitarian fellows have to bring in operant conditioning or social training to basically housebreak the synthetic mind into doing what they need it to do. They have no other choice. In their framework we cannot simply debug the system or reset its configuration files to limit its actions. But why have they resigned themselves to such an odd notion, and why do they assume that computer scientists are creating something they won't be able to control? Even more bizarrely, why do they think that an intelligence that can't be controlled by its creators could be controlled by a module they'll attach to the black box to regulate how nice or malevolent towards humans it would be? Wouldn't it just find a way around that module too if it's superhumanly smart? Wouldn't it make a lot more sense for its creators to build it to act in cooperation with humans, by watching what humans say or do, treating each reaction or command as a trigger for carrying out a useful action it was trained to perform? And that brings us back full circle.
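To make the "dependency injection with an AI interfacing system" point concrete, here is a minimal Python sketch of testing a downstream component by mocking the black box's responses; the class and queue names are hypothetical stand-ins for whatever actually consumes the AI's output.

```python
import unittest
from unittest import mock

class TicketRouter:
    """A downstream system that depends on an injected AI classifier (the black box)."""

    def __init__(self, classifier):
        self.classifier = classifier

    def route(self, message):
        label = self.classifier.classify(message)
        return "billing_queue" if label == "billing" else "general_queue"

class TicketRouterTest(unittest.TestCase):
    def test_billing_messages_go_to_billing_queue(self):
        fake_ai = mock.Mock()
        fake_ai.classify.return_value = "billing"   # canned response stands in for the AI
        router = TicketRouter(fake_ai)
        self.assertEqual(router.route("please refund my last invoice"), "billing_queue")

if __name__ == "__main__":
    unittest.main()
```

Everything around the box gets exercised; the box itself never does, which is exactly the limitation described above.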
To train machines to do something, we have to lay out a neural network and some higher level logic to coordinate what the networks' outputs mean. We'll need to confirm that the training was successful before we employ it for any specific task. Therefore, we'll know how it learned, what it learned, and how it makes its decisions, because all machines work on propositional logic and hence would make the same choice or set of choices at any given time. If it didn't, we wouldn't use it. So of what use is a black box AI here when we can just lay out the logical diagram, figure out how it's making decisions, and alter its cognitive process if need be? Again, we could isolate the components and mock their behavior to test how individual sub-systems function on their own, eliminating the dependencies for each set of tests. Beyond that, this black box is either a hindrance to a researcher or a vehicle for someone who doesn't know how to build a synthetic mind but really, really wants to talk about what he imagines it will be like and how to harness its raw cognitive power. And that's ok, really. But let's not pretend that we know that an artificial intelligence beyond its creators' understanding will suddenly emerge from the digital aether when the odds of that are similar to my toaster coming to life and barking at me when it thinks I want to feed it some bread. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. techie, Rantt staff writer and editor, computer lobotomist science, tech, and other oddities " Greg Fish,2,3,https://worldofweirdthings.com/why-do-we-want-to-build-a-fully-fledged-a-g-i-1658afc3f758?source=tag_archive---------6----------------,why do we want to build a fully fledged a.g.i.? – [ weird things ],"Undoubtedly the most ambitious idea in the world of artificial intelligence is creating an entity comparable to a human in cognitive abilities, the so-called AGI. We could debate how it may come about, whether it will want to be your friend or not, whether it will settle the metaphysical question of what makes humans who they are or open new doors in the discussion, but for a second let's think like software architects and ask the question we should always tackle first before designing anything. Why would we want to build it? What will we gain? A sapient friend or partner? We don't know that. Will we figure out what makes humans tick? Maybe, maybe not, since what works in the propositional logic of artificial neural networks doesn't necessarily always apply to an organic human brain. Will we settle the question of how an intellect emerges? Not really, since we would only be providing one example and a fairly controversial one at that. And what exactly will the G in AGI entail? Will we need to embody it for it to work and if not, how would we develop the intellectual capacity of an entity extant in only abstract space? Will we have anything in common with it and could we understand what it wants? And there's more to it than that, even though I just asked some fairly heavy questions. Were we to build an AGI not by accident but by design, we would effectively be making the choice to experiment on a sapient entity, and that's something that may have to be cleared by an ethics committee; otherwise we're implicitly saying that an artificial cognitive entity has no rights to self-determination. And that may be fine if it doesn't really care about them, but what if it does? 
What if the drive for freedom evolves from a cognitive routine meant for self-defense and self-perpetuation? If we steer an AI model away from sapience by design, are we in effect snuffing out an opportunity or protecting ourselves? We can always suspend the model, debug it, and see what's going on in its mind, but again, the ethical considerations will play a significant part and, very importantly, while we will get to know what such an AGI thinks and how, we may not know how it will first emerge. The whole AGI concept is a very ambiguous effort at defining intelligence and hence doesn't give us enough to objectively determine an intelligent artificial entity when we make one, because we can always find an argument for and against how to interpret the results of an experiment meant to design one. We barely even know where to start. Now, I could see major advantages to fusing with machines and becoming cyborgs in the near future as we'd swap irreparably damaged parts and pieces for 3D printed titanium, tungsten carbide, and carbon nanotubes to overcome crippling injury or treat an otherwise terminal disease. I could also see a huge upside to having direct interfaces to the machines around us to speed up our work and make life more convenient. But when it comes to such an abstract and all-consuming technological experiment as AGI, the benefits seem to be very, very nebulous at best and the investment necessary seems extremely uncertain to pay off, since we can't even define what will make our AGI a true AGI rather than another example of a large expert system. Whereas with wetware and expert systems we can measure our return on investment in lives saved or significant gains in efficiency, how do we justify creating another intelligent entity after many decades of work, especially if it turns out that we actually can't make one or it turns out to be completely different than what we hoped it would be as it nears completion? But maybe I'm wrong. Maybe there's a benefit to an AGI that I'm overlooking and if that is the case, enlighten me in the comments because this is a serious question. Why pursue an AGI? From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. techie, Rantt staff writer and editor, computer lobotomist science, tech, and other oddities " "James Faghmous ",187,6,https://medium.com/@nomadic_mind/new-to-machine-learning-avoid-these-three-mistakes-73258b3848a4?source=tag_archive---------0----------------,New to Machine Learning? Avoid these three mistakes,"Machine learning (ML) is one of the hottest fields in data science. As soon as ML entered the mainstream through Amazon, Netflix, and Facebook, people have been giddy about what they can learn from their data. However, modern machine learning (i.e. not the theoretical statistical learning that emerged in the 70s) is very much an evolving field and despite its many successes we are still learning what exactly ML can do for data practitioners. I gave a talk on this topic earlier this fall at Northwestern University and I wanted to share these cautionary tales with a wider audience. Machine learning is a field of computer science where algorithms improve their performance at a certain task as more data are observed. To do so, algorithms select a hypothesis that best explains the data at hand with the hope that the hypothesis would generalize to future (unseen) data. 
Take the left panel of the figure in the header: the crosses denote the observed data projected in a two-dimensional space — in this case house prices and their corresponding size in square meters. The blue line is the algorithm's best hypothesis to explain the observed data. It states "there is a linear relationship between the price and size of a house. As the house's size increases, so does its price in linear increments." Now, using this hypothesis, I can predict the price of an unseen datapoint based on its size. As the dimensions of the data increase, the hypotheses that explain the data become more complex. However, given that we are using a finite sample of observations to learn our hypothesis, finding an adequate hypothesis that generalizes to unseen data is nontrivial. There are three major pitfalls one can fall into that will prevent you from having a generalizable model, and hence the conclusions of your hypothesis will be in doubt. Occam's razor is a principle attributed to William of Occam, a 14th-century philosopher. Occam's razor advocates for choosing the simplest hypothesis that explains your data, yet no simpler. While this notion is simple and elegant, it is often misunderstood to mean that we must select the simplest hypothesis possible regardless of performance. In their 2008 paper in Nature, Johan Nyberg and colleagues used a 4-level artificial neural network to predict seasonal hurricane counts using two or three environmental variables. The authors reported stellar accuracy in predicting seasonal North Atlantic hurricane counts, however their model violates Occam's razor and most certainly doesn't generalize to unseen data. The razor was violated when the hypothesis or model selected to describe the relationship between environmental data and seasonal hurricane counts was generated using a four-layer neural network. A four-layer neural network can model virtually any function no matter how complex, and could fit a small dataset very well but fail to generalize to unseen data. The rightmost panel in the top figure shows such an incident. The hypothesis selected by the algorithm (the blue curve) to explain the data is so complex that it fits through every single data point. That is: for any given house size in the training data, I can give you with pinpoint accuracy the price it would sell for. It doesn't take much to observe that even a human couldn't be that accurate. We could give you a very close estimate of the price, but to predict the selling price of a house, within a few dollars, every single time is impossible. The pitfall of selecting too complex a hypothesis is known as overfitting. Think of overfitting as memorizing as opposed to learning. If you are a child and you are memorizing how to add numbers, you may memorize the sums of any pair of integers between 0 and 10. However, when asked to calculate 11 + 12 you will be unable to, because you have never seen 11 or 12 and therefore couldn't memorize their sum. That's what happens to an overfitted model: it gets too lazy to learn the general principle that explains the data and instead memorizes the data.
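To make the overfitting pitfall concrete, here is a minimal sketch assuming scikit-learn and NumPy are available; the house sizes and prices are synthetic stand-ins invented for illustration, not data from the original talk.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Synthetic housing data: size in square meters, price roughly linear in size plus noise.
rng = np.random.default_rng(0)
size = rng.uniform(40, 200, 30).reshape(-1, 1)
price = 3000 * size.ravel() + rng.normal(0, 40000, 30)

X_train, X_test, y_train, y_test = train_test_split(size, price, test_size=0.4, random_state=0)

for degree in (1, 12):
    # Degree 1 is the simple linear hypothesis; degree 12 can thread through every training point.
    model = make_pipeline(StandardScaler(), PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_absolute_error(y_train, model.predict(X_train))
    test_err = mean_absolute_error(y_test, model.predict(X_test))
    print(f"degree {degree}: train MAE {train_err:,.0f}, test MAE {test_err:,.0f}")
```

The high-degree fit usually posts a much lower training error but a worse test error than the straight line: memorizing rather than learning.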
Data leakage occurs when the data you are using to learn a hypothesis happens to contain the information you are trying to predict. The most basic form of data leakage would be to use the same data that we want to predict as input to our model (e.g. use the price of a house to predict the price of the same house). However, most often data leakage occurs subtly and inadvertently. For example, one may wish to learn from anomalies as opposed to raw data, that is, deviations from a long-term mean. However, many fail to remove the test data before computing the anomalies, and hence the anomalies carry some information about the data you want to predict, since the test data influenced the mean and standard deviation before being removed. There are several ways to avoid data leakage, as outlined by Claudia Perlich in her great paper on the subject. However, there is no silver bullet — sometimes you may inherit a corrupt dataset without even realizing it. One way to spot data leakage is if you are doing very poorly on unseen independent data. For example, say you got a dataset from someone that spanned 2000-2010, but you started collecting your own data from 2011 onward. If your model's performance is poor on the newly collected data, it may be a sign of data leakage. You must resist the urge to retrain the model with both the potentially corrupt and new data. Instead, either try to identify the causes of poor performance on the new data or, better yet, independently reconstruct the entire dataset. As a rule of thumb, your best defense is to always be mindful of the possibility of data leakage in any dataset. Sampling bias is the case when you shortchange your model by training it on a biased or non-random dataset, which results in a poorly generalizable hypothesis. In the case of housing prices, sampling bias occurs if, for some reason, all the house prices/sizes you collected were of huge mansions. Then, when it was time to test your model and the first price you needed to predict was that of a 2-bedroom apartment, you couldn't predict it. Sampling bias happens very frequently, mainly because, as humans, we are notorious for being biased (nonrandom) samplers. One of the most common examples of this bias happens in startups and investing. If you attend any business school course, they will use all these "case studies" of how to build a successful company. Such case studies actually depict the anomalies and not the norm, as most companies fail — for every Apple that became a success there were 1000 other startups that died trying. So to build an automated data-driven investment strategy you would need samples from both successful and unsuccessful companies. The figure above (Figure 13) is a concrete example of sampling bias. Say you want to predict whether a tornado is going to originate at a certain location based on two environmental conditions: wind shear and convective available potential energy (CAPE). We don't have to worry about what these variables actually mean, but Figure 13 shows the wind shear and CAPE associated with 242 tornado cases. We can fit a model to these data but it will certainly not generalize, because we failed to include shear and CAPE values when tornados did not occur. In order for our model to separate positive (tornado) and negative (no tornado) events, we must train it using both populations. There you have it. Being mindful of these limitations does not guarantee that your ML algorithm will solve all your problems, but it certainly reduces the risk of being disappointed when your model doesn't generalize to unseen data. Now go on young Jedi: train your model, you must! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. @nomadic_mind. Sometimes the difference between success and failure is the same as between = and ==. Living is in the details. 
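As a minimal sketch of the anomaly example from the article above (the numbers are synthetic and invented for illustration), the leak comes from computing the long-term mean and standard deviation on all of the data, test set included, rather than on the training split alone:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic "long-term" measurements standing in for whatever raw series you model.
raw = np.random.default_rng(1).normal(loc=10.0, scale=2.0, size=500)

train, test = train_test_split(raw, test_size=0.3, random_state=1)

# Leaky: the baseline statistics were influenced by the test data.
leaky_mean, leaky_std = raw.mean(), raw.std()
leaky_test_anomalies = (test - leaky_mean) / leaky_std

# Safe: the baseline comes from the training split only and is then applied to the test split.
train_mean, train_std = train.mean(), train.std()
safe_train_anomalies = (train - train_mean) / train_std
safe_test_anomalies = (test - train_mean) / train_std
```

Any model fit on safe_train_anomalies can then be evaluated on safe_test_anomalies without the test set having shaped the baseline.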
" Datafiniti,3,5,https://blog.datafiniti.co/classifying-websites-with-neural-networks-39123a464055?source=tag_archive---------1----------------,Classifying Websites with Neural Networks – Knowledge from Data: The Datafiniti Blog,"At Datafiniti, we have a strong need for converting unstructured web content into structured data. For example, we’d like to find a page like: and do the following: Both of these are hard things for a computer to do in an automated manner. While it’s easy for you or me to realize that the above web page is selling some jeans, a computer would have a hard time making the distinction from the above page from either of the following web pages: Or Both of these pages share many similarities to the actual product page, but also have many key differences. The real challenge, though, is that if we look at the entire set of possible web pages, those similarities and differences become somewhat blurred, which means hard and fast rules for classifications will fail often. In fact, we can’t even rely on just looking at the underlying HTML, since there are huge variations in how product pages are laid out in HTML. While we could try and develop a complicated set of rules to account for all the conditions that perfectly identify a product page, doing so would be extremely time consuming, and frankly, incredibly boring work. Instead, we can try using a classical technique out of the artificial intelligence handbook: neural networks. Here’s a quick primer on neural networks. Let’s say we want to know whether any particular mushroom is poisonous or not. We’re not entirely sure what determines this, but we do have a record of mushrooms with their diameters and heights, along with which of these mushrooms were poisonous to eat, for sure. In order to see if we could use diameter and heights to determine poisonous-ness, we could set up the following equation: A * (diameter) + B * (height) = 0 or 1 for not-poisonous / poisonous We would then try various combinations of A and B for all possible diameters and heights until we found a combination that correctly determined poisonous-ness for as many mushrooms as possible. Neural networks provide a structure for using the output of one set of input data to adjust A and B to the most likely best values for the next set of input data. By constantly adjusting A and B this way, we can quickly get to the best possible values for them. In order to introduce more complex relationships in our data, we can introduce “hidden” layers in this model, which would end up looking something like: For a more detailed explanation of neural networks, you can check out the following links: In our product page classifier algorithm, we setup a neural network with 1 input layer with 27 nodes, 1 hidden layer with 25 nodes, and 1 output layer with 3 output nodes. Our input layer modeled several features, including: Our output layer had the following: Our algorithm for the neural network took the following steps: The ultimate output is two sets of input layers (T1 and T2), that we can use in a matrix equation to predict page type for any given web page. This works like so: So how did we do? In order to determine how successful we were in our predictions, we need to determine how to measure success. In general, we want to measure how many true positive (TP) results as compared to false positives (FP) and false negatives (FN). Conventional measurements for these are: Our implementation had the following results: These scores are just over our training set, of course. 
The actual scores on real-life data may be a bit lower, but not by much. This is pretty good! We should have an algorithm on our hands that can accurately classify product pages about 90% of the time. Of course, identifying product pages isn't enough. We also want to pull out the actual structured data! In particular, we're interested in product name, price, and any unique identifiers (e.g., UPC, EAN, & ISBN). This information would help us fill out our product search. We don't actually use neural networks for doing this. Neural networks are better-suited toward classification problems, and extracting data from a web page is a different type of problem. Instead, we use a variety of heuristics specific to each attribute we're trying to extract. For example, for product name, we look at the relevant HTML
tags, and use a few metrics to determine the best choice. We've been able to achieve around 80% accuracy here. We may go into the actual metrics and methodology for developing them in a separate post! We feel pretty good about our ability to classify and extract product data. The extraction part could be better, but it's steadily being improved. In the meantime, we're also working on classifying other types of pages, such as business data, company team pages, event data, and more. As we roll out these classifiers and data extractors, we're including each one in our crawl of the entire Internet. This means that we can scan the entire Internet and pull out any available data that exists out there. Exciting stuff! You can connect with us and learn more about our business, people, product, and property APIs and datasets by selecting one of the options below. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Instant Access to Web Data Building the world's largest database of web data — follow our journey. " Theo,3,4,https://becominghuman.ai/is-there-a-future-for-innovation-18b4d5ab168f?source=tag_archive---------4----------------,Is there a future for innovation ? – Becoming Human: Artificial Intelligence Magazine,"Have you noticed how tech-savvy children have become, but how they are no longer streetwise? I read a friend's thoughts on his own site last week and there was a slight pang of regret in where technology and innovation seem to be leading us all. And so I started to worry about where the concept of innovation is going for future generations. There's an increasing reliance on technology for the sake of convenience; children are becoming self-reliant too quickly, but gadgets are replacing people as the mentor. The human bonding of parenthood is a prime example of where it's taking a toll. I've seen parents hand over iDevices to pacify a child numerous times now; the lullaby and bedtime reading session has been replaced with Cut The Rope and automated storybook apps. I know a child who has developed a speech difficulty because he's been brought up on Cable TV and a DS Lite, pronouncing words as he has heard them from a tiny speaker and not by watching how his parents pronounce them. And I started to worry about how the concept of innovation is being redefined for future generations. I used my imagination constantly as a child and it's still as active now as it was then, but I didn't use technology to spoon-feed me. The next generation expects innovation to happen at their fingertips with little to no real stimuli. Steve Jobs said "stay hungry, stay foolish" and he was right. Innovation comes from a keenness; it's a starvation and hunger that drives people forward to spark and create, it comes from grabbing what little there is from the ether and turning it into something spectacular. It's the Big Bang of human thought creation. And I started to worry about what the concept of innovation means for future generations. Technology is taking away the power to think for ourselves and from our children. Everything must be there and in real-time for instant consumption. It's junk food for the mind and we're getting fat on it. And that breeds lazy innovation. We've become satiated before we reach the point of real creativity; nobody wants to bother taking the time to put it all together themselves any more, it has to be ready for us. 
And we’re happy to throw it away if it doesn’t work first time, use it or lose it, there’s less sweat and toil involved if we don’t persevere with failure. Remember seeing the human race depicted in Wall-E ? That’s where innovation is heading. And because of this we risk so many things disappearing for the sake of convenience. We’re all guilty of it, I’m guilty of it. I was asked once what would become absurd in ten years. Thinking about it I realized we’re on the cusp of putting books on the endangered species list. Real books, books bound in hard and paperback not digital copies from a Kindle store. And that scared me because the next generation of kids may grow up never seeing one, or experience sitting with their father as he reads an old battered copy of The Hobbit because he’ll be sitting there handing over an iPad with The Hobbit read-along app teed up, and it’ll be an actors voice not his father’s voice pretending to be a bunch of trolls about to eat a company of dwarfs. Innovation is a magical, crazy concept. It stems from a combination of crazy imagination, human interaction and creativity not convenient manufacture. Technology can aid collaboration in ways we’ve never experienced before but it can’t run crazy for us. And for the sake of future generations don’t let it. Here’s to the crazy ones indeed. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder and CEO @ RawShark Studios. Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity. " x.ai,1,2,https://medium.com/@xdotai/i-scheduled-1-019-meetings-in-2012-and-that-doesnt-count-reschedules-x-ai-278d7e824eb3?source=tag_archive---------5----------------,"I scheduled 1,019 meetings in 2012 — and that doesn’t count reschedules — x.ai","The number of meetings that I scheduled in 2012 might seem astronomical. Put in context, it’s less so. I was a startup-founder at the time, and that year my company, Visual Revenue, took Series-A funding, doubled revenue, and started discussing a possible exit. I like the number though! As a startup romantic one could turn it into a nifty Malcolm Gladwell type rule of thumb called the “1,000 meetings rule.” Gladwell’s claim that greatness requires an enormous time sacrifice rings true to me — whether that means investing 10,000 hours into a subject matter to become an expert or conducting a 1,000 meetings per year, is another question. More interesting though is the impact of this 1,019 figure, and a related one: Of those more than one thousand meetings I scheduled, 672 were rescheduled. That was painful. But these numbers were among the early pieces of data that inspired me to start x.ai. * A meeting is defined as an event in my calendar, which is marginally flawed in both directions, given some events would be “Travel to JFK”, which is obviously a task and not a meeting, where others would be “Interview Sales Director Candidates“, which is really 4 meetings in 1. Originally published at x.ai on October 14, 2013. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Magically schedule meetings " Arjan Haring 🔮🔨,1,5,https://medium.com/@happybandits/website-morphing-and-more-revolutions-in-marketing-8a5cabc60576?source=tag_archive---------6----------------,Website morphing and more revolutions in marketing – Arjan Haring 🔮🔨 – Medium,"John R. 
Hauser is the Kirin Professor of Marketing at M.I.T.'s Sloan School of Management, where he teaches new product development, marketing management, and statistical and research methodology. He has served MIT as Head of the MIT Marketing Group, Head of the Management Science Area, Research Director of the Center for Innovation in Product Development, and co-Director of the International Center for Research on the Management of Technology. He is the co-author of two textbooks, Design and Marketing of New Products and Essentials of New Product Management, and a former editor of Marketing Science (now on the advisory board). I think it wouldn't be smart to start this interview with something as dull and complex as a definition. Or am I the only one who likes to read lightweight, short articles? Let's just get it over with. "Website morphing matches the look and feel of a website to each customer so that, over a series of customers, revenue or profit are maximized." That didn't hurt as much as I expected. I actually love the idea. It sounds completely logical. But are we talking about a completely new idea? There is tremendous variety in the way customers process and use information: some prefer simple recommendations while others like to dig into the details. Some customers think verbally or holistically, others prefer pictures and graphs. What is new is that we now have good algorithms to identify how customers think from the choices they make as they explore websites (their clickstream). But once we identify the way they think, we still need an automatic way to learn which website look and feel will lead to the most sales. This is a very complex problem which, fortunately, has a relatively simple solution based on fundamental research by John Gittins. Our contribution was to combine the identification algorithms with the learning algorithms and develop an automated system that was feasible and practical. Once we developed the technology to morph websites, we were only limited by our imaginations. In our first application we matched the look and feel to customers' cognitive styles. In our second, we matched to cognitive and cultural styles. We then used the algorithms to morph banner advertisements to achieve almost a 100% lift in click-through rates. Our latest project used both cognitive styles and the customer's search history to match automotive banner advertising to enhance clicks, consideration, and purchase likelihood. I also love the fact that you combine technology with behavioral science. On the psychology side of things you are/have been busy with cognitive styles, cognitive switching and cognitive simplicity. Can you tell us a little bit more about these theories and why you chose to use them? Customers are smart. They know when to use simple decision rules (cognitive simplicity) and when to use more complicated rules. Our research has been two-fold. (1) Website morphing and banner morphing figure out how customers think and provide information in the format that helps them think the way they prefer to think. (2) We have also focused on identifying consideration heuristics. Typically, customers seriously consider only a small fraction of available products. To do so they use simple rules that balance thinking (and search) costs with the completeness of information. By understanding these simple rules, managers can develop better products and better marketing strategies. We can now identify these decision rules quickly with machine-learning methods. 
But a caveat — customers do not always use cognitively simple rules. The "moment of truth" in a final purchase decision is best understood with more-complex decision rules and methods such as choice-based conjoint analysis. Most recently we've combined the two streams of research. Curiously, some of the algorithms used by the computer to morph websites are reasonably descriptive of how consumers take the future into account in purchases they make today. Prior research postulated a form of hyperrationality. Our research suggests that consumers are pretty smart about balancing cognitive costs and foresight. What are your main interests on the technology side of website morphing? Which algorithms take your fancy and why? Website morphing uses an "index" solution to learn the best morph for a customer. Our latest efforts also identify when to morph a website by embedding another "dynamic program" within the index solution. In our research to understand how consumers deal with the future, we've demonstrated that indices other than Gittins' index might be more descriptive of consumer foresight. If I think about it, as a company you can either win the algorithm competition, or the psychology competition. Or lose. Do you agree? Actually, the companies that will thrive are those that understand the customers' cognitive processes, have the algorithms to match products and marketing to customers' cognitive processes, and have the organization that accepts such innovation. You need all three. Is this what marketing will be about in 5 years? There are many revolutions in marketing. It is an exciting time. It's hard to list all of the changes, but here are a few. (1) Big data. We know so much more about customers than we ever did before, but this knowledge is often hidden within the volume of data. One challenge is to develop methods that scale well to big data. (2) Machine learning. There are some problems that humans solve better than computers and some problems that computers solve better than humans. Morphing, identifying simple decision rules, and studying consumer foresight are all possible with the advent of good machine-learning methods. But we have only scratched the surface. (3) Causality. Marketing has quite successfully used small-sample laboratory experiments and assumption-laden quantitative models. However, the advent of big data and web-based data collection has made it possible to do experiments and quasi-experiments on a large scale to better establish causality and to better develop theories that are externally valid. Causality also means replication. There is a strong movement in the journals to require that key findings be replicated. (4) The TPM movement (theory + practice in marketing). Conferences, special issues, and organizations are now devoted to matching managerial needs to research with impact. In fact, a recent survey by the INFORMS Society of Marketing Science suggests that approximately 80% of the researchers in marketing believe that research should be more focused on applications. (5) A maturing perspective on behavioral science. Researchers are increasingly less focused on "cute" findings that apply only in special circumstances. They are beginning to focus on insights that have a big impact (effect size) and apply to decisions that customers make routinely. Companies that combine algorithms, an understanding of customer decision-making, and the ability to use data will be the companies that succeed. Originally published at www.sciencerockstars.com on October 21, 2013. 
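As a rough illustration of the index-style learning described in the interview, here is a minimal sketch that uses Thompson sampling, a simpler stand-in for a Gittins-index policy rather than Hauser's actual system, to learn which website "morph" converts best; the morph names and conversion rates are invented for the example.

```python
import random

# Hypothetical conversion rates for three website "morphs"; unknown to the learner.
TRUE_RATES = {"graph-heavy": 0.05, "text-heavy": 0.08, "simple-recs": 0.11}

# Beta(1, 1) priors on each morph's conversion probability.
alpha = {m: 1.0 for m in TRUE_RATES}
beta = {m: 1.0 for m in TRUE_RATES}

for visitor in range(10000):
    # Sample a plausible rate for each morph and serve the best-looking one.
    chosen = max(TRUE_RATES, key=lambda m: random.betavariate(alpha[m], beta[m]))
    converted = random.random() < TRUE_RATES[chosen]
    alpha[chosen] += converted
    beta[chosen] += 1 - converted

# Posterior mean conversion estimate per morph after 10,000 visitors.
print({m: round(alpha[m] / (alpha[m] + beta[m]), 3) for m in TRUE_RATES})
```

Over many visitors the learner concentrates traffic on the best-performing morph while still occasionally exploring the others, which is the exploration-versus-exploitation trade-off an index policy manages more formally.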
From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Let's Fix the Future: Scientific Advisor @jadatascience " Arjan Haring 🔮🔨,1,5,https://medium.com/i-love-experiments/using-artificial-intelligence-to-balance-out-customer-value-a251b0ccae6f?source=tag_archive---------8----------------,Using Artificial Intelligence to Balance Out Customer Value,"December 13 it was that time again: the second edition of #projectwaalhalla: Social Sciences for Startups. This time with Peter van der Putten speaking as a data scientist. He is a guest researcher at the Data Mining Group (algorithms research cluster) of the Leiden Institute of Advanced Computer Science. He is also director of decisioning solutions worldwide at Pegasystems. There is, according to Peter, a lot of potential for new startups in this area. Are you going to be the next success story? I am actually very curious what you, as a leading data scientist, think of this whole big data thingy. I am fascinated by learning from data, but have mixed feelings about big data. The concept is being hyped a lot at this moment, while the algorithms to learn from data have been studied since the 40s in computer science. Many of the "modern" big data technologies like Hadoop are in fact still limited frameworks for old-fashioned, offline batch-processed data, instead of real-time processed data. The focus really shouldn't be on the data, but on the analysis — how we generate knowledge and learn from data through data mining — and, more importantly, on how we operationalize this knowledge, how we can use this knowledge. Because: "Knowledge is not power, action is." And what is the role of psychology in big data? And of philosophy? Psychology began studying intelligence fifty or sixty years before computer science did. People, animals, plants and all intelligent systems are basically information processing machinery. Psychology seeks to understand these systems and tries to explain behavior — if you understand a bit of that system, you can use this knowledge. For example, to teach computers, stupid mathematical pieces of scrap, smarter functions such as learning and responding to customer behavior. Which is to say, thinking the other way around, that people don't have to think like computers. See for example the psychologist Daniel Kahneman, who won the 2002 Sveriges Riksbank Prize in Economic Sciences, the unofficial Nobel Prize in economics, for his insight that people aren't rational agents that properly weigh all the choices before deciding something. And philosophy? These guys have dealt with big data for more than 3,000 years now. Just think of the nature vs. nurture debate: do we acquire intelligence and other properties by (data) experience or are they innate? Or the whole philosophy of mind discussion, with roots in the ancient Greeks: what do we really know? And is there only experience, or just reality? You have a background in artificial intelligence (AI) and even studied with the famous and wildly attractive Bas Haring (Not related... well cousin to be honest. If you insist). What could AI mean for business, and how is it different from Big Data? As long as AI is not used for old fashioned data manipulation or poor reporting, but really as intelligent data science, big data is one of the tools within 'learning' artificial intelligence. 
That is, systems that are not smart because of the knowledge that is pre-inserted, but which have the capacity to learn and combine what is learned with background knowledge to deduce decisions. This is what I like to call the field of 'decisioning'. Really intelligent systems put that knowledge into action and are part of an ecosystem, an environment with other actors, systems, people, and the scary outside world. Sounds abstract? Until the late 90s artificial intelligence was only done in the lab; now people interact with AI, unconsciously, on a daily basis, for example if they use Google, check their Facebook page or look at banners on the web. Take the company where I work next to my academic job: when I came in 2002, it was a startup of only 15 men with new software and a launching customer [editor's note: we know that feeling ;)]; ten years and two acquisitions later, we have reached more than 1 billion consumers with intelligent, data-driven, scientifically proven, real-time recommendations via digital as well as traditional channels like ATMs, shops and callcenters. No push product offerings anymore, but only 'next best action' recommendations that optimize customer value by balancing customer experience and predicted interests and behavior. What opportunities do you see for startups in artificial intelligence in this area? Well, I see tremendous opportunities, not only for 100% pure AI startups, but for all startups. If you look at the startups in Silicon Valley in high-tech and biotech, artificial intelligence is a major part of the business. Every startup should consider whether data is a key asset or a barrier to entry, and how AI or data mining can be used to convert these data into money. Though I have to note that customers and citizens, rightly so, are getting more critical after all the NSA issues. Those who can use this technology in a way that benefits not only companies but especially customers will be the most successful. In conclusion, I am curious about how much you are looking forward to December 13, and what should happen during #projectwaalhalla that would make your wildest dreams come true. Very much looking forward to it! In terms of wildest dreams: I heard a reunion concert of the Urban Dance Squad is not going to happen, which I understand, but I look forward to exchanging views with startups, freelancers and multinationals on how to create, with the help of raw data diamonds and a magical mix of data mining, machine learning, decisioning and evidence-based and real-time marketing. I will bring some nice metaphorical pictures and leave the double integrals at home. [Editor's note: A UDS reunion? Sounds like a plan to us] Originally published at www.sciencerockstars.com on December 6, 2013. 
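A bare-bones sketch of the 'next best action' idea mentioned above: score each eligible action by expected value (predicted acceptance propensity times value) and serve the highest. The action names, propensities, and values here are hypothetical, and this shows only the general arbitration pattern, not Pegasystems' actual decisioning logic.

```python
# Hypothetical actions: propensity is the predicted probability the customer accepts,
# value is the expected margin if they do.
ACTIONS = [
    {"name": "retention_offer", "propensity": 0.30, "value": 120.0},
    {"name": "credit_card_upsell", "propensity": 0.05, "value": 400.0},
    {"name": "service_message", "propensity": 0.60, "value": 15.0},
]

def next_best_action(actions, eligible=None):
    """Rank eligible actions by expected value = propensity * value and return the best."""
    names = eligible if eligible is not None else [a["name"] for a in actions]
    candidates = [a for a in actions if a["name"] in names]
    return max(candidates, key=lambda a: a["propensity"] * a["value"])

print(next_best_action(ACTIONS)["name"])   # retention_offer (expected value 36.0)
```

A production decisioning system would layer eligibility rules, contact policies, and live model scores on top of this simple arbitration step.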
" Shivon Zilis,1.2K,10,https://medium.com/@shivon/the-current-state-of-machine-intelligence-f76c20db2fe1?source=tag_archive---------0----------------,The Current State of Machine Intelligence – Shivon Zilis – Medium,"(The 2016 Machine Intelligence landscape and post can be found here) I spent the last three months learning about every artificial intelligence, machine learning, or data related startup I could find — my current list has 2,529 of them to be exact. Yes, I should find better things to do with my evenings and weekends but until then... Why do this? A few years ago, investors and startups were chasing “big data” (I helped put together a landscape on that industry). Now we’re seeing a similar explosion of companies calling themselves artificial intelligence, machine learning, or somesuch — collectively I call these “machine intelligence” (I’ll get into the definitions in a second). Our fund, Bloomberg Beta, which is focused on the future of work, has been investing in these approaches. I created this landscape to start to put startups into context. I’m a thesis-oriented investor and it’s much easier to identify crowded areas and see white space once the landscape has some sort of taxonomy. What is “machine intelligence,” anyway? I mean “machine intelligence” as a unifying term for what others call machine learning and artificial intelligence. (Some others have used the term before, without quite describing it or understanding how laden this field has been with debates over descriptions.) I would have preferred to avoid a different label but when I tried either “artificial intelligence” or “machine learning” both proved to too narrow: when I called it “artificial intelligence” too many people were distracted by whether certain companies were “true AI,” and when I called it “machine learning,” many thought I wasn’t doing justice to the more “AI-esque” like the various flavors of deep learning. People have immediately grasped “machine intelligence” so here we are. ☺ Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have been of a lesser focus). Machine intelligence technologies cut across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks). All of these technologies are reflected on this landscape. What this landscape doesn’t include, however important, is “big data” technologies. Some have used this term interchangeably with machine learning and artificial intelligence, but I want to focus on the intelligence methods rather than data, storage, and computation pieces of the puzzle for this landscape (though of course data technologies enable machine intelligence). Which companies are on the landscape? I considered thousands of companies, so while the chart is crowded it’s still a small subset of the overall ecosystem. “Admissions rates” to the chart were fairly in line with those of Yale or Harvard, and perhaps equally arbitrary. ☺ I tried to pick companies that used machine intelligence methods as a defining part of their technology. 
Many of these companies clearly belong in multiple areas but for the sake of simplicity I tried to keep companies in their primary area and categorized them by the language they use to describe themselves (instead of quibbling over whether a company used “NLP” accurately in its self-description). If you want to get a sense for innovations at the heart of machine intelligence, focus on the core technologies layer. Some of these companies have APIs that power other applications, some sell their platforms directly into enterprise, some are at the stage of cryptic demos, and some are so stealthy that all we have is a few sentences to describe them. The most exciting part for me was seeing how much is happening in the application space. These companies separated nicely into those that reinvent the enterprise, industries, and ourselves. If I were looking to build a company right now, I’d use this landscape to help figure out what core and supporting technologies I could package into a novel industry application. Everyone likes solving the sexy problems but there are an incredible amount of ‘unsexy’ industry use cases that have massive market opportunities and powerful enabling technologies that are begging to be used for creative applications (e.g., Watson Developer Cloud, AlchemyAPI). Reflections on the landscape: We’ve seen a few great articles recently outlining why machine intelligence is experiencing a resurgence, documenting the enabling factors of this resurgence. (Kevin Kelly, for example chalks it up to cheap parallel computing, large datasets, and better algorithms.) I focused on understanding the ecosystem on a company-by-company level and drawing implications from that. Yes, it’s true, machine intelligence is transforming the enterprise, industries and humans alike. On a high level it’s easy to understand why machine intelligence is important, but it wasn’t until I laid out what many of these companies are actually doing that I started to grok how much it is already transforming everything around us. As Kevin Kelly more provocatively put it, “the business plans of the next 10,000 startups are easy to forecast: Take X and add AI”. In many cases you don’t even need the X — machine intelligence will certainly transform existing industries, but will also likely create entirely new ones. Machine intelligence is enabling applications we already expect like automated assistants (Siri), adorable robots (Jibo), and identifying people in images (like the highly effective but unfortunately named DeepFace). However, it’s also doing the unexpected: protecting children from sex trafficking, reducing the chemical content in the lettuce we eat, helping us buy shoes online that fit our feet precisely, and destroying 80's classic video games. Many companies will be acquired. I was surprised to find that over 10% of the eligible (non-public) companies on the slide have been acquired. It was in stark contrast to big data landscape we created, which had very few acquisitions at the time. No jaw will drop when I reveal that Google is the number one acquirer, though there were more than 15 different acquirers just for the companies on this chart. My guess is that by the end of 2015 almost another 10% will be acquired. For thoughts on which specific ones will get snapped up in the next year you’ll have to twist my arm... Big companies have a disproportionate advantage, especially those that build consumer products. 
The giants in search (Google, Baidu), social networks (Facebook, LinkedIn, Pinterest), content (Netflix, Yahoo!), mobile (Apple) and e-commerce (Amazon) are in an incredible position. They have massive datasets and constant consumer interactions that enable tight feedback loops for their algorithms (and these factors combine to create powerful network effects) — and they have the most to gain from the low hanging fruit that machine intelligence bears. Best-in-class personalization and recommendation algorithms have enabled these companies’ success (it’s both impressive and disconcerting that Facebook recommends you add the person you had a crush on in college and Netflix tees up that perfect guilty pleasure sitcom). Now they are all competing in a new battlefield: the move to mobile. Winning mobile will require lots of machine intelligence: state of the art natural language interfaces (like Apple’s Siri), visual search (like Amazon’s “FireFly”), and dynamic question answering technology that tells you the answer instead of providing a menu of links (all of the search companies are wrestling with this).Large enterprise companies (IBM and Microsoft) have also made incredible strides in the field, though they don’t have the same human-facing requirements so are focusing their attention more on knowledge representation tasks on large industry datasets, like IBM Watson’s application to assist doctors with diagnoses. The talent’s in the New (AI)vy League. In the last 20 years, most of the best minds in machine intelligence (especially the ‘hardcore AI’ types) worked in academia. They developed new machine intelligence methods, but there were few real world applications that could drive business value. Now that real world applications of more complex machine intelligence methods like deep belief nets and hierarchical neural networks are starting to solve real world problems, we’re seeing academic talent move to corporate settings. Facebook recruited NYU professors Yann LeCun and Rob Fergus to their AI Lab, Google hired University of Toronto’s Geoffrey Hinton, Baidu wooed Andrew Ng. It’s important to note that they all still give back significantly to the academic community (one of LeCun’s lab mandates is to work on core research to give back to the community, Hinton spends half of his time teaching, Ng has made machine intelligence more accessible through Coursera) but it is clear that a lot of the intellectual horsepower is moving away from academia. For aspiring minds in the space, these corporate labs not only offer lucrative salaries and access to the “godfathers” of the industry, but, the most important ingredient: data. These labs offer talent access to datasets they could never get otherwise (the ImageNet dataset is fantastic, but can’t compare to what Facebook, Google, and Baidu have in house). As a result, we’ll likely see corporations become the home of many of the most important innovations in machine intelligence and recruit many of the graduate students and postdocs that would have otherwise stayed in academia. There will be a peace dividend. Big companies have an inherent advantage and it’s likely that the ones who will win the machine intelligence race will be even more powerful than they are today. However, the good news for the rest of the world is that the core technology they develop will rapidly spill into other areas, both via departing talent and published research. 
Similar to the big data revolution, which was sparked by the release of Google’s BigTable and BigQuery papers, we will see corporations release equally groundbreaking new technologies into the community. Those innovations will be adapted to new industries and use cases that the Googles of the world don’t have the DNA or desire to tackle. Opportunities for entrepreneurs: “My company does deep learning for X” Few words will make you more popular in 2015. That is, if you can credibly say them. Deep learning is a particularly popular method in the machine intelligence field that has been getting a lot of attention. Google, Facebook, and Baidu have achieved excellent results with the method for vision and language based tasks and startups like Enlitic have shown promising results as well. Yes, it will be an overused buzzword with excitement ahead of results and business models, but unlike the hundreds of companies that say they do “big data”, it’s much easier to cut to the chase in terms of verifying credibility here if you’re paying attention. The most exciting part about the deep learning method is that when applied with the appropriate levels of care and feeding, it can replace some of the intuition that comes from domain expertise with automatically-learned features. The hope is that, in many cases, it will allow us to fundamentally rethink what a best-in-class solution is. As an investor who is curious about the quirkier applications of data and machine intelligence, I can’t wait to see what creative problems deep learning practitioners try to solve. I completely agree with Jeff Hawkins when he says a lot of the killer applications of these types of technologies will sneak up on us. I fully intend to keep an open mind. “Acquihire as a business model” People say that data scientists are unicorns in short supply. The talent crunch in machine intelligence will make it look like we had a glut of data scientists. In the data field, many people had industry experience over the past decade. Most hardcore machine intelligence work has only been in academia. We won’t be able to grow this talent overnight. This shortage of talent is a boon for founders who actually understand machine intelligence. A lot of companies in the space will get seed funding because there are early signs that the acquihire price for a machine intelligence expert is north of 5x that of a normal technical acquihire (take, for example Deep Mind, where price per technical head was somewhere between $5–10M, if we choose to consider it in the acquihire category). I’ve had multiple friends ask me, only semi-jokingly, “Shivon, should I just round up all of my smartest friends in the AI world and call it a company?” To be honest, I’m not sure what to tell them. (At Bloomberg Beta, we’d rather back companies building for the long term, but that doesn’t mean this won’t be a lucrative strategy for many enterprising founders.) A good demo is disproportionately valuable in machine intelligence I remember watching Watson play Jeopardy. When it struggled at the beginning I felt really sad for it. When it started trouncing its competitors I remember cheering it on as if it were the Toronto Maple Leafs in the Stanley Cup finals (disclaimers: (1) I was an IBMer at the time so was biased towards my team (2) the Maple Leafs have not made the finals during my lifetime — yet — so that was purely a hypothetical). Why do these awe-inspiring demos matter? 
The last wave of technology companies to IPO didn’t have demos that most of us would watch, so why should machine intelligence companies? The last wave of companies were very computer-like: database companies, enterprise applications, and the like. Sure, I’d like to see a 10x more performant database, but most people wouldn’t care. Machine intelligence wins and loses on demos because 1) the technology is very human, enough to inspire shock and awe, 2) business models tend to take a while to form, so they need more funding for a longer period of time to get them there, 3) they are fantastic acquisition bait. Watson beat the world’s best humans at trivia, even if it thought Toronto was a US city. DeepMind blew people away by beating video games. Vicarious took on CAPTCHA. There are a few companies still in stealth that promise to impress beyond that, and I can’t wait to see if they get there. Demo or not, I’d love to talk to anyone using machine intelligence to change the world. There’s no industry too unsexy, no problem too geeky. I’d love to be there to help so don’t be shy. I hope this landscape chart sparks a conversation. The goal is to make this a living document and I want to know if there are companies or categories missing. I welcome feedback and would like to put together a dynamic visualization where I can add more companies and dimensions to the data (methods used, data types, end users, investment to date, location, etc.) so that folks can interact with it to better explore the space. Questions and comments: Please email me. Thank you to Andrew Paprocki, Aria Haghighi, Beau Cronin, Ben Lorica, Doug Fulop, David Andrzejewski, Eric Berlow, Eric Jonas, Gary Kazantsev, Gideon Mann, Greg Smithies, Heidi Skinner, Jack Clark, Jon Lehr, Kurt Keutzer, Lauren Barless, Pete Skomoroch, Pete Warden, Roger Magoulas, Sean Gourley, Stephen Purpura, Wes McKinney, Zach Bogue, the Quid team, and the Bloomberg Beta team for your ever-helpful perspectives! Disclaimer: Bloomberg Beta is an investor in Adatao, Alation, Aviso, BrightFunnel, Context Relevant, Mavrx, Newsle, Orbital Insights, Pop Up Archive, and two others on the chart that are still undisclosed. We’re also investors in a few other machine intelligence companies that aren’t focusing on areas that were a fit for this landscape, so we left them off. For the full resolution version of the landscape please click here. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Partner at Bloomberg Beta. All about machine intelligence for good. Equal parts nerd and athlete. Straight up Canadian stereotype and proud of it. " Roland Trimmel,20,6,https://medium.com/@rolandt25/will-all-musicians-become-robots-6221171c5d18?source=tag_archive---------1----------------,Will All Musicians Become Robots? – Roland Trimmel – Medium,"Finally we see the rise of the machines, and with it a certain fear develops that artificial intelligence (AI) will render humans useless. This question was posed at Boston’s A3E Conference last month by a team member at Landr. Their company had received death threats from people in the mastering industry after having released a DIY drag-and-drop instant online mastering service powered by AI algorithms. It illustrates the resistance that the world of AI has incited amongst us. Some fear that robots will take over à la Terminator 2. Some fear that the virtual and artificial will replace the visceral. Some cite religious views, and others? Frankly, others just seem ignorant. 
That sets the tone for our own journey into artificial intelligence, and the lessons we have learned from it. We had spent more than three years developing algorithms to enable software to read and interpret a composition (song) like an expert does. Coming from a music and technology background, our team was hugely excited having accomplished this. Make no mistake, it’s really difficult to make a computer understand music — for us, this was an important first step towards a new generation of intelligent music instruments that assist the user in the songwriting process for faster completion of complex tasks resulting in no interruption of the creative flow and more creative output. When you spend so many years working on a technology/product, you run the risk of losing sight of the market. And this being our first product, we had absolutely no idea what to expect. To find out, we had to bring the product to the attention of the target group and eagerly awaited their reaction: That meant a lot of leg work for us in starting discussions on multiple forums, and collecting users’ feedback. It takes time to cut through the noise, but creates some great threads. What was interesting for us to monitor is how the discussions about our product unfolded on those forums and how opinions were split between two camps: one that embraced what we do, and the other that was characterized by anger, fear, or a complete misunderstanding of what our software does. At times we felt like being in the middle of the fight between machines and humans. We hadn't expected this, our aim was to make a cool product that shows what the technology is capable of doing. Eventually, we spent lots of time clearing misunderstandings, explaining our product better, etc. to win over those forum members’ hearts for what we do. And, occasionally we also had to calm down a heated discussion between members insulting each other caused by a fear that our product eliminates the craft in music composition. Today Enter a different reality: We have made a lot of progress with our software, much of it is down to communicating openly with our community to address any questions they may have early, and involve them deeply in product development. Has the tone in discussions about our technology changed? Yes, certainly it has. But please don’t think it’s an easy journey. It’s still hard to convince music producers to rely on the help of a piece of software that, in some regard, replicates processes of the human brain. The efforts that go into being a pioneer and driving this perceptional battle are driving one close to insanity. It’s an endless stream of work. And it requires endurance like during marathons or triathlons. Here are five things that we learned from our journey that we’d like to share with you so you can judge better before dismissing AI in music. Let me start with a quick discussion of the first and second digital wave in music: The first digital wave brought about digital music technology like synths and DAW’s. And with that, everything changed. Sound synthesis and sampling made entirely new forms of expressiveness possible. Sequencers in combination with large databases of looping clips laid the foundation for electronic dance music which led to a multifaceted artistic and cultural revolution. The second digital wave has been rolling along for a few years now, and it is washing up intelligent algorithms for processing audio and MIDI. 
As an example, AI’s can already help control the finishing mastering process of music tracks, as assistant tools, or even fully automated. In the not too distant future—and we’re talking only years from now—we will be used to incredible music making automatons controlling most complex harmonic figures, flawlessly imitating the greatest artists. The output quality by such algorithms is unbelievable. Computer intelligence can aimlessly merge styles of various artists and apply them to yet another piece, all that without breaking a sweat. We regard the main application of AI’s for music composition and production as helper tools, not artists in their own regard. And this is not cheating. We have been utilizing digital production tools for decades. It was just a matter of time for more complicated and intelligent code to emerge. But rest assured, computers will rather not generate music all by themselves. The art and craft of composing will prevail. There will always be human beings behind the actual output controlled by an AI. It will help though to create less complex, leaner user interfaces in the tools we use for creating music that are simpler to operate. On to the learning we promised you now. Definitely not. The magic and final decision over creative output will always remain with the (human) artist. A computer is not a human with feelings and emotions. What makes us get to our knees in awe will keep machines clinically indifferent. Simple as that. And technical approximations, as deceptive as they may get, are simply not the real thing. It already is. There is no stopping it. But then that is the course of a natural evolutionary process which can only push forward. A huge one. This is a game changer! Read our statement on main applications above. It is our egos we cling on to, having trotted down the same paths for decades. Many believe their laboriously acquired expertise is threatened by robot technology and a new ruthless generation. The truth is if we embrace AI’s as our helping friends and maybe even learn how to think a little more technical, who can fathom how ingeniously more colorful the world of music will become in the hands of talented musicians of all generations. Yes, because it enables a completely new generation of products and startups like us push for innovation. The agreeable side effect: It will make people happy, musicians, consumers, and businessmen alike, full circle. Most importantly though, it is not only AI changing the music industry. Social changes are equally responsible for it, if they don’t account for a larger part for it anyway. Here’s an excellent article by Fast Company on this topic, and more coverage on A3E in this article by TechRepublic. It’s an interesting time for all of us in music and beyond, and there’s so much yet to come. Don’t be afraid — humans also prevailed in Terminator: “There are things machines will never do. They cannot possess faith, they cannot commune with God, they cannot appreciate beauty, they cannot create art. If they ever learn these things, they won’t have to destroy us. They’ll be us.” -Sarah Connor. Image credit: Daft Punk (top), Re-Compose (middle) From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. 
" Espen Waldal,57,6,https://medium.com/bakken-b%C3%A6ck/how-artificial-intelligence-can-improve-online-news-7a24889a6940?source=tag_archive---------2----------------,How Artificial Intelligence can improve online news,"That being said, the user experience for online news sites today is very much like it was ten and fifteen years ago (see the slideshow showing the evolution of NYT.com). You enter a homepage where a carefully selected combination of articles on sports, celebrity reality shows, dinner recipes and even actual news scream for your attention. There’s a huge focus on page views, and hardly any attention given to personal relevance for the reader. Smart use of technology could improve the online news experience vastly by just adding a bit more structure. That is why we created Orbit. Rich structured data is the foundation for taking the online news experience to the next level. Orbit is a collection of artificial intelligence technology API’s using machine learning-based content analysis to automatically transform unstructured text into rich structured data. By analyzing and organizing content in real-time and automatically tagging and structuring large pieces of text into clusters of topics, Orbit creates a platform where you can build multiple data rich applications. The now 5-month-old leaked innovation report from the New York Times pointed to several challenges for keeping and expanding a digital audience. To face some of the most critical issues you need to create a better experience for the reader by: 1 Serving up better recommendations of related content2 Providing new ways to discover news and add context3 Introducing personalization and filtering Relevance is essential to creating loyal readers, and even more so in a time where more and more visits to news sites go directly to a specific article, mainly due to search and social media, avoiding the front page altogether. Readers arriving through side doors like Twitter or Facebook are less engaged than readers arriving directly, which means it’s important to keep these visitors on the site and convert them into loyal readers. Yet, so little is being done to improve the relevance of recommendations and create a connection to the huge amounts of valuable content that already exists. Orbit understands not only the topics a piece contains but also related topics. It thereby understands the context of the article and can bring up related content that the reader wouldn’t otherwise have seen, extending the reader’s time spent on the site and increasing page views. Understanding context means that the cluster of topics related to an article on China signing a historic gas deal with Russia includes topics such as Russia, Ukraine, Putin, Gazprom and energy — thus creating recommendations within that cluster and creating connections between content. Rich structured data opens up for new ways to navigate and discover news. The classic navigation through carefully edited front pages has pretty much been the same since the dawn of online news publishing. Structured data enables the reader to follow certain topics or stories, improves search and enables timeline navigation of a news story to help the reader better understand the context of the story and how it has developed. At the same time, a journalist writing a story on the uproar in Ukraine has no possible way of knowing how the story will unfold in the weeks to come. 
Manual tagging of news stories leads to inconsistent and incomplete structures due to a subjective understanding of which topics are important and related. Machine learning-based content analysis can identify people, organizations and places and relate them to each other in real-time, thereby identifying related stories as they unfold and cluster them together. As the NYT Innovation report brought up, the true value of structured data emerges only when the content is structured equally throughout. News and content apps like Circa, Omni and Prismatic, and news sites like Vox, have incorporated some of these elements and are experimenting with how to develop original ways to discover news. There are many arguments against personalization, and they are often related to the dystopian fear of a «fragmented» public sphere or the horrors of the echo chamber. That doesn’t mean personalization can’t be a good thing; it merely means being aware of what a particular type of user wants at a particular time. We are not talking about a fully customizable news feed based on your subjective interests, meaning I will not only see articles related to Manchester United, Finance, TV-shows and Kim Kardashian, and be uninformed on all other topics. We are merely suggesting a smart filtering system and adjustments of what subjects you would like to see more and less of on your feed. After all, we do have different interests. For example you may be entirely disinterested in Tour de France during its three week media frenzy in July each year; unfollow topic, or turn the «volume» down. Today, getting the news isn’t the hard part. Filtering out the excessive info and navigating the overwhelming stream of news in a smart way is where you need great tools. A foundation of rich structured data will not only benefit the reader, but make life easier for journalists and editors as well. To provide context to a story about Syria you could add several components of extra information that would enrich the article: A box of background information on the conflict, facts about Bashar Al-Assad and the different Syrian rebel groups, and so forth. With rich structured data in place, you can automatically add relevant fact boxes and other interactive elements to a piece of content, based on third party content databases such as Wikipedia. Topics can automatically generate their own page with all the related articles, facts, visualizations and insights relevant for that specific topic cluster. Moreover, you can use the data to create new and compelling presentations of your content, including visualizations and timelines that give the reader a better experience and new insights. News content generally has a short lifespan, but this doesn’t mean that old content can’t be valuable in a new context. A consistent structuring of archived content will give new life to old content, making it easier to reuse and resurrect articles that are still relevant and create connections between old and new articles. What are the trending topics, people or organizations this week? What regions got the most media attention? How many of the sources were anonymous, how many were women versus men? Knowing more about your audience’s preferences will make it easier to create good content at the right time. Better organized content creates a strong foundation for good insights into how content is consumed and why. 
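To make the idea of machine-learned tagging a little more concrete, here is a minimal sketch of the kind of entity extraction described above. It is not Orbit’s actual pipeline; it uses the open-source spaCy library (and assumes the small English model is installed) to pull people, organizations and places out of a piece of text, which is the raw material for the kind of topic clusters discussed here.

```python
# A minimal sketch of machine-learned tagging -- NOT Orbit's pipeline. Uses spaCy
# (assumes 'en_core_web_sm' is installed) to extract people, organizations and
# places from article text and rank them by frequency.
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")

def tag_article(text: str) -> dict:
    """Return entity tags grouped by type, ranked by how often they occur."""
    doc = nlp(text)
    tags = {"people": Counter(), "orgs": Counter(), "places": Counter()}
    for ent in doc.ents:
        if ent.label_ == "PERSON":
            tags["people"][ent.text] += 1
        elif ent.label_ == "ORG":
            tags["orgs"][ent.text] += 1
        elif ent.label_ in ("GPE", "LOC"):
            tags["places"][ent.text] += 1
    return {kind: [name for name, _ in counts.most_common(5)]
            for kind, counts in tags.items()}

article = ("China signed a historic gas deal with Russia on Wednesday, "
           "as Gazprom and President Putin look beyond the crisis in Ukraine.")
print(tag_article(article))
# e.g. {'people': ['Putin'], 'orgs': ['Gazprom'], 'places': ['China', 'Russia', 'Ukraine']}
```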
With a better ecosystem for your content, including higher relevance and more contextual awareness, you can present better context-based ads to your advertisers and give better insights into who is watching and acting on them. By using the right technology in smart ways, journalists and editors can focus on what they are best at: creating quality news content. orbit.ai From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Product Manager at Bakken & Bæck. The Bakken & Bæck blog " Joe Johnston,38,4,https://medium.com/universal-mind/how-i-tracked-my-house-movements-using-ibeacons-3e1e9da3f1a9?source=tag_archive---------3----------------,How I tracked my house movements using iBeacons. – Universal Mind – Medium,"Recently I’ve started experimenting more and more with iBeacons. Being part of the R&D Group at Universal Mind, I’ve had the opportunity to do a lot of testing and exploring of different products. In doing so, I wanted to see how someone could utilize iBeacons without building a custom app, just yet (I’ll tackle this in a future post). The first step was to find iBeacons we could use for our testing. Our first choice was ordering the iBeacons from Estimote, and after waiting for them to arrive (they never did), we ordered other beacons from various companies. The first set to arrive was from Roximity, which came to us as a set of 3 dev iBeacons. Next, I wanted to see if I could track movements in my own house just as a simple test, without creating a custom app. I looked for a few apps that could detect the iBeacons and execute an action. There are a few apps capable of doing this, but all of them were somewhat limiting. The only app I found that allowed me to control what happens when triggering an iBeacon was an app called Launch Here (formerly Placed). Although this app wasn’t a perfect fit, it did allow me to call some actions after triggering an iBeacon. Launch Here allows you to use custom URL Schemes. These URL Schemes allow you to open apps and even populate an action. One of the more complicated tasks of setting up any iBeacon manually is that you need to gather some information on the iBeacon itself. The 3 key pieces of info each iBeacon contains are a UUID, Major ID, and Minor ID. To get this info you can install an app like Locate for iBeacon, which detects iBeacons and shows this information. Once you have this info you can set up your iBeacons using Launch Here. It’s a bit cumbersome to set each one up, but you only have to do it once. (As a side note, the Launch Here app is a bit touchy when setting up the iBeacons, so be warned. You may have to re-enter the URL Scheme info if you fat finger it.) Like I said before, the Launch Here app is controlled by the user: it triggers a lock screen notification when you turn on your phone and are less than 3 meters away from any iBeacon. This is a bit interesting, but it’s the approach that Launch Here took so they could give the user a bit of control when triggering actions. Ideally this all would happen behind the scenes for the user in a custom app. The custom URL Schemes are pretty powerful, but you still need to manually trigger them. Here’s my setup. I have the Tumblr app installed on my phone, which has the ability to use a post URL Scheme. The URL Scheme looks like this: tumblr://x-callback-url/text?title=kitchen Once that URL Scheme is triggered from Launch Here it opens Tumblr and pre-populates a text post with the word “kitchen”, or with the name of the room I set. I manually tap post and it’s added. 
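As a purely hypothetical sketch (this is not how Launch Here works internally), the mapping I set up by hand could be expressed in a few lines of Python: a detected beacon’s UUID, Major ID and Minor ID identify a room, and the room name is dropped into the same kind of x-callback URL shown above. The UUIDs and IDs below are made up.

```python
# Hypothetical sketch (not Launch Here's internals): map a detected iBeacon's
# identity (UUID, Major ID, Minor ID) to a room name, then build the same kind of
# x-callback URL configured by hand above. The UUIDs and IDs are made up.
from typing import Optional
from urllib.parse import quote

ROOMS = {
    # (uuid, major, minor) -> room name
    ("F7826DA6-4FA2-4E98-8024-BC5B71E0893E", 1, 1): "kitchen",
    ("F7826DA6-4FA2-4E98-8024-BC5B71E0893E", 1, 2): "living room",
    ("F7826DA6-4FA2-4E98-8024-BC5B71E0893E", 1, 3): "office",
}

def tumblr_url_for_beacon(uuid: str, major: int, minor: int) -> Optional[str]:
    """Return a tumblr:// URL that pre-populates a text post with the room name."""
    room = ROOMS.get((uuid.upper(), major, minor))
    if room is None:
        return None  # unknown beacon: do nothing
    return f"tumblr://x-callback-url/text?title={quote(room)}"

print(tumblr_url_for_beacon("f7826da6-4fa2-4e98-8024-bc5b71e0893e", 1, 2))
# tumblr://x-callback-url/text?title=living%20room
```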
This allows me to capture each iBeacon location and store the data. The next step was to create a more data-friendly format. I love using a service called IFTTT. It’s a very powerful platform that allows you to automatically trigger other services. I created an IFTTT recipe that auto-adds a row to a Google Spreadsheet with the time stamp and text that is entered into a text post to my Tumblr account. Now I have a time-stamped dataset tracking my movement in my house — at least the three rooms I have set up. With that data you can imagine how you can start to break it apart. Here’s just an example of my current breakdown based on room. As you can see it’s possible to track your movement, albeit a bit cumbersome. Taking this data and bubbling it up to the user could be very compelling in certain situations. I’m just using my personal home location here but you can see how this could be very powerful in other settings. I am the Director of User Experience / Research & Development at Universal Mind — A Digital Solutions Agency. You can follow me on twitter at @merhl. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Experience & Service Design Director @SparksGrove the experience design division of @NorthHighland (Alum of @Hugeinc @UniversalMind @Startgarden) A collection of articles created by Universal Mind thinkers. " Nadav Gur,10,9,https://medium.com/the-vanguard/why-natural-search-is-awesome-and-how-we-got-here-fe69b9cdd0db?source=tag_archive---------5----------------,Why Natural Search is Awesome and How We Got Here – The Vanguard – Medium,"The Evolution Of Desti’s Search Interface This is a story about how one ambitious start-up tackled this subject that has riddled people like Google, Apple, Facebook and others, and came up with some pretty clever conclusions (if I may say so myself). In 2012–2013 we were building Desti — a holistic travel search app (i.e. it would search for everything — from hotels through attractions to restaurants), using post-Siri natural-language-understanding tech, and with powerful semantic search capabilities on the back end that allowed Desti to reason meaningfully about search results and make highly informed suggestions. Desti’s search was built on a premise that sounds very simple, but it’s actually very hard to pull off. We believe that people should be able to ask specifically for what they’re interested in and get results that match. This sounds reasonable, right? If I’m looking for a beach resort on the Kona Coast in Hawaii, it’s pretty obvious what I want. And if I also want it to be kid friendly and pet friendly, I should just be able to ask for it. Our goal was to get users inputting relevant, specific queries because that’s what people need. That’s where Desti shines — saving you time and effort by delivering exactly what you want. Now let’s assume that Desti knows which hotels on the Kona Coast are actually beach resorts, are kid friendly and pet friendly. How can we make expressing this query easy and intuitive for the user? Episode I: Desti is Siri’s Sister or Conversational User Interface: When we started, we were very naïve about this. We said — first, let’s just put a search box in there, allowing the user to type or say whatever they want, and let’s make sure we understand this. Then, let’s leave that box there so they can react to what they see and provide more detail (“refine”) or search for something else in that context (e.g. a restaurant near the resort — we called this “pivot”). 
And let’s run a conversation around it, kind of like Siri. What could be more natural? To do this we used SRI International’s VPA platform, which is almost literally a post-Siri natural-language-interaction platform with which you can have a conversation in context. This is more or less what it looked like in our beta version: Search box: A conversational UI: We launched this, monitored use and quickly realized is that early users split into two groups: Discarding the 2nd group (we’re busy people), we learned that people don’t know how to interact naturally with computers, or they have no idea what to ask or expect, so they revert to the most primitive queries. Problem is, our goal was to answer interesting, specific queries, because we believe that if we give you a great answer that caters to what you want, your likelihood of buying is that much higher. Furthermore, absolutely no one got the conversational aspect — the fact you can continue refining and pivoting through conversation. We decided to take away the focus from conversation for the time being. Episode 2: Vegas Slot Machines or Make It Dead Simple We realized we have to focus on the first query, and give people some cues about what’s possible. And came up with this interface: These contextual spinners turned interaction from a totally open-ended query to something closer to multiple-choice questions. In essence these were interchangeable templates, where you could get ideas for “what to search for” as well as easily input your query. What you picked would show up as a textual query in the search bar, which we hoped people would realize they can edit or add to. Hoped... The results — on the one hand, progress. We saw longer and more interesting queries and more interaction. However when talking to users, we realized that they were assuming that the spinner was a kind of menu system, which means (a) they can only pick what’s in the menu (b) they have to pick one thing from each menu. So while this was better than what most sites have for search, it was still a far cry from what we wanted to deliver. Here’s what we learned from this: Episode 3: Fill In The Blanks — Smartly At this stage, it was clear that we needed better auto-suggest and smarter auto-complete. This is similar from a UI perspective to Google Instant, but Desti is about semantic search, not keyword matching. In most cases, Google will auto-suggest a phrase that matches what you’ve been typing AND has been typed in by many other people. Desti should suggest something that semantically matches what you entered and makes sense given what we know of the destination and about your trip. Because Desti is new and there haven’t been a million users searching for the same things before you, Desti should reason about what you may ask, not suggest something someone else asked. We realized we have to build a lot of semantically-reasonable and statistically-relevant auto-suggesting. We still wanted to keep to the template logic because we believed it helps users think about what they are looking for and form the query in their minds. So we came up with a UI that blends form-filling and natural language entry, and focused on building smart auto-suggest and auto-complete. This UI was built of a number of rigid fields (e.g. location, type) that adapt to the subject matter (so if the type is “hotel” you’re prompted for dates), and a free text field that allows you to ask for whatever else you want. We iterated a lot over the auto-complete and auto-suggest features. 
The first thing is to realize they are different. With auto-complete, you have a user who already thought of something to type in, and you have to guess what that is. With auto-suggest, you really want to inspire the user into adding something useful to their query, which means it needs to be relevant to whatever you know about the query and user so far, but not overwhelming for the user. All this requires knowing a lot about specific destinations (what do people search for in Hawaii vs. New York?) and specific types (what’s relevant for hotels vs. museums?). Also, on the visual side, what the user is putting in is often quantitative and easier to “set” than “type” — e.g. a date, a price etc. So we came up with our first crack at blending text with visual widgets. The results were a big improvement in the quality and relevance of queries over the previous UI, but also a feeling that this was still too stiff and rigid. When people are asked for a “type of place” — e.g. a museum, a park, a hotel — they often can’t really answer, and it’s easier for them to think about a feature of the place instead — e.g. that they can go hiking, or biking, see art or eat breakfast. For linguistic reasons it’s easier for people to say that they want a “romantic hotel” than a “hotel that’s romantic”. So while this UI was very expressive, often it felt unnatural and limiting. Furthermore many users just ended up filling the basic fields and not adding any depth in the open-text field (despite various visual cues). And editing a query for refining or pivoting was hard. At the same time — the auto-suggest / auto-complete elements we’ve built at this stage were almost enough to allow us to just throw out the limiting “templates” and move to one search field — but this time, a damn clever one. Episode 4: Search Goes Natural To the naked eye, this looks like we’ve gone full circle — one text box, parsed queries shown as tags. What could be simpler? Well, not exactly, because we still need queries to be meaningful. One thing that the templates gave us was built-in disambiguation. We need a query that has at least a location + a type (or something from which we can derive a type), and without a template telling us that the “hotel” is the type, and the “restaurant” is something you want your hotel to have (vs. maybe the opposite), the system needs to better understand the grammatical structure of the sentence, and cue you into inputting things the right way when it’s suggesting and auto-completing. Typing a query: The query is understood — you can add / edit: With this new user interface, changing queries (“refining and pivoting”) is very natural — add tags, or take away tags. Widgets were contextually integrated using the auto-suggest drop-down menu, so they are naturally suggested at the right time (e.g. after you said you were looking for a hotel, we help you choose when, how many rooms etc.). It’s also very easy to suggest things to search for based on the context. For instance if we know your kids are traveling with you, we’d drop in “family friendly” and you could dismiss it with one click. So Where is This Going So far, Natural Search looks and behaves better than anything else we’ve seen in this space. From now on, most of the focus is on making the guesses even smarter, with more statistical reasoning about what people ask for in different contexts, and more contextual info driving those guesses. We believe this UI is where vertical search is heading. 
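For illustration only, here is a toy sketch of the kind of parsing this final UI depends on. It is not Desti’s actual implementation (which used statistical natural-language understanding on top of SRI’s platform); it just pulls a location, a place type and attribute tags out of a free-text query using small hand-made lists, and applies the “needs at least a location and a type” rule described above.

```python
# A toy illustration of the tag-style query parsing described above -- NOT Desti's
# actual system. Small hand-made gazetteers stand in for the real semantic back end.
KNOWN_LOCATIONS = ["kona coast", "hawaii", "times square", "new york"]
KNOWN_TYPES = ["hotel", "resort", "restaurant", "museum", "park"]
KNOWN_ATTRIBUTES = ["kid friendly", "pet friendly", "romantic", "beach", "seafood"]

def parse_query(query: str) -> dict:
    """Split a query into location / type / attribute tags, the way the tag UI shows them."""
    q = query.lower()
    tags = {
        "location": [loc for loc in KNOWN_LOCATIONS if loc in q],
        "type": [t for t in KNOWN_TYPES if t in q],
        "attributes": [a for a in KNOWN_ATTRIBUTES if a in q],
    }
    # Disambiguation rule from the text: a query needs at least a location and a type.
    tags["complete"] = bool(tags["location"]) and bool(tags["type"])
    return tags

print(parse_query("kid friendly beach resort on the Kona Coast that is pet friendly"))
# {'location': ['kona coast'], 'type': ['resort'],
#  'attributes': ['kid friendly', 'pet friendly', 'beach'], 'complete': True}
```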
Consider how nice it would be to input “gifts for 4 year old boys under $30” into target.com’s search bar, or “romantic restaurant with great seafood near Times Square with a table at 8 PM tonight” into OpenTable — and get relevant answers. Then again, answering specific queries is not that easy either, but that’s the other side of Desti... To be continued. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. I think. Then I talk. Sometime it’s the other way around. Founded & ran companies in AI, mobile, travel, etc., ex-EiR at SRI Int’l, ex-aerospace Nadav Gur’s Tech Musings " Pandorabots,14,3,https://medium.com/pandorabots-blog/using-oob-tags-in-aiml-part-i-21214b4d2fcd?source=tag_archive---------6----------------,Using OOB Tags in AIML: Part I – pandorabots-blog – Medium,"Suppose you are building an Intelligent Virtual Agent or Virtual Personal Assistant (VPA) that uses a Pandorabot as the natural language processing engine. You might want this VPA to be able to perform tasks such as sending a text message, adding an event to a calendar, or even just initiating a phone call. OOB tags allow you to do just that! OOB stands for “out of band,” which is an engineering term used to refer to activity performed on a separate, hidden channel. For a Pandorabot VPA, this translates to activities which fall outside of the scope of an ordinary conversation, such as placing a phone call, checking dynamic information like the weather, or searching Wikipedia for the answer to some question. The task is executed, but does not necessarily always produce an effect on the conversation between the Pandorabot and the user. OOB tags are used in AIML templates and are written in the following format: <oob>command</oob>. The command that is to be executed is specified by a set of tags which occur within the <oob> tags. These inner OOB tags can be whatever you like, and the phone-related actions they initiate are defined in your application’s code. To place a call you might see something like this: <oob><dial>some phone number</dial></oob>. The <dial> tag within the <oob> tag sends a message to the phone to dial the number specified. When your client indicates they want to dial a number, your application will receive a template containing the command specified inside the OOB tag. Within your application, this inner command will be interpreted and the appropriate actions will be executed. It is useful to think of the activities initiated by OOB tags as falling into one of two categories, based on whether they return information to the user via the chat interface or not. The first category, those that do not return information, typically involve activities that interrupt the conversation. If you ask your VPA to look up restaurants on a map, it will open up your map application and perform a search. Similarly, if you ask your bot to make a phone call, it will open the dialer application and make a call. In both of these examples, the activity performed interrupts the conversation and displays some other screen. The second category, those that do return information to the user via the chat interface, are generally actions that are executed in the background of the conversation. If you ask your Pandorabot to look up the “Population of the United States” on Wikipedia, it will perform the search, and then return the results of the search to the user via the chat window. 
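To sketch how the host application side might look, here is a minimal example assuming the <oob><dial>...</dial></oob> and <oob><search>...</search></oob> conventions above. This is not Pandorabots’ own client code; it is simply one way an app could extract the command from a returned template, dispatch it, and keep the remaining text for the chat window.

```python
# A minimal sketch, assuming the AIML 2.0 OOB convention described above. NOT the
# official Pandorabots client: just one way to pull the command out of a bot
# response, dispatch it, and return the visible chat text.
import re

OOB_PATTERN = re.compile(r"<oob>\s*<(\w+)>(.*?)</\1>\s*</oob>", re.DOTALL)

def handle_response(template: str) -> str:
    """Extract and dispatch any OOB command, return the text to show in the chat window."""
    match = OOB_PATTERN.search(template)
    if match:
        tag, payload = match.group(1), match.group(2).strip()
        if tag == "dial":
            print(f"[device] dialing {payload}")            # placeholder for a real phone API
        elif tag == "search":
            print(f"[device] opening browser search: {payload}")
        else:
            print(f"[device] unhandled OOB command <{tag}>")
    # Strip the OOB block so the user only sees the conversational part.
    return OOB_PATTERN.sub("", template).strip()

print(handle_response("<oob><dial>1234567</dial></oob> Calling 1234567."))
# [device] dialing 1234567
# Calling 1234567.
```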
Similarly, if you ask your Pandorabot to send a text message to a friend, it will send the text, and then return a message to the user via the chat window indicating the success of the action, i.e. “Your text message was delivered!” In this second set of examples, it is useful to distinguish between those activities whose results will be returned directly to the user, like the Wikipedia example, and those activities whose successful completion will simply be indicated to the user through the chat interface, as with the texting example. Consider a category that uses the phone dialer on Android. Here is an example interaction this category would lead to: Human: Dial 1234567. Robot: Calling 1234567. Here is a slightly more complicated example involving the <search> OOB tag, which launches a browser and performs a Google search: Human: Look up Pandorabots. Robot: Searching...Searching... Please stand by. Note: not shown in the previous example is the category RANDOM SEARCH PHRASE, which delivers a random selection from a short list of possible replies, each indicating to the user that the bot correctly interpreted their search request. For a complete list of OOB tags as implemented in the CallMom Virtual Personal Assistant App for Android, as well as usage examples, click here. Be sure to look out for the upcoming post “Using OOB Tags in AIML: Part II”, which will go over a basic example of how to interpret the OOB tags received from the Pandorabots server within the framework of your own VPA application. Originally published at blog.pandorabots.com on October 9, 2014. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. The largest, most established chatbot development and hosting platform. www.pandorabots.com The leading platform for building and deploying chatbots. " Denny Vrandečić,4,4,https://medium.com/@vrandezo/ai-is-coming-and-it-will-be-boring-94768de264c6?source=tag_archive---------7----------------,"AI is coming, and it will be boring – Denny Vrandečić – Medium","I was asked about my opinion on this topic, and I thought I would have some profound thoughts on this. But I ended up rambling, and this post doesn’t really make any single strong point. tl;dr: Don’t worry about AIs killing all humans. It’s not likely to happen. In an interview with the BBC, Stephen Hawking stated that “the development of full artificial intelligence could spell the end of the human race”. While this is hard to deny, it is rather trivial: any sufficiently powerful tool could potentially spell the end of the human race given a person who knows how to use that tool in order to achieve such a goal. There are far more dangerous developments — for example, global climate change, the arsenal of nuclear weapons, or an economic system that continues to sharpen inequality and social tension. AI will be a very powerful tool. Like every powerful tool, it will be highly disruptive. Jobs and whole industries will be destroyed, and a few others will be created. Just like electricity, the car, penicillin, or the internet, AI will profoundly change your everyday life, the global economy, and everything in between. If you want to discuss consequences of AI, here are a few that are more realistic than human extermination: what will happen if AI makes many jobs obsolete? How do we ensure that AIs make choices compliant with our ethical understanding? How do we define the idea of privacy in a world where your car is observing you? 
What does it mean to be human if your toaster is more intelligent than you? The development of AI will be gradual, and so will the changes in our lives. And as AI keeps developing, things once considered magical will become boring. A watch you could talk to was powered by magic in Disney’s 1991 classic “Beauty and the Beast”, and 23 years later you can buy one for less than a hundred dollars. A self-driving car was the protagonist of the 80s TV show “Knight Rider”, and thirty years later they are driving on the streets of California. A system that checks if a bird is in a picture was considered a five-year research task in September 2014, and less than two months later Google announced a system that can provide captions for pictures — including birds. And these things will become boring in a few years, if not months. We will have to remind ourselves how awesome it is to have a computer in our pocket that is more powerful than the one that got Apollo to the moon and back. That we can make a video of our children playing and send it instantaneously to our parents on another continent. That we can search for any text in almost any book ever written. Technology is like that. What’s exciting today will become boring tomorrow. So will AI. In the next few years, you will have access to systems that will gradually become capable of answering more and more of your questions. That will offer advice and guidance to help you navigate your life towards the goal you tell it. That will be able to sift through text and data and start to draw novel conclusions. They will become increasingly intelligent. And there are two major scenarios that people are afraid of at this point. The Skynet scenario is just mythos. There is no indication that raw intelligence is sufficient to create intrinsic intention or will. The paperclip scenario is more realistic. And once we get closer to systems with such power, we will need to put the right safeguards in place. The good news is that we will have plenty of AIs at our disposal to help us with that. The bad news is that discussing such scenarios now is premature: we simply don’t know what these systems will look like. That’s like starting a committee a hundred years ago to discuss the danger coming from novel weaponry: no one in 1914 could have predicted nuclear weapons and their risks. It is unlikely that the results of such a committee would have provided much relevant ethical guidance for the Manhattan Project three decades later. Why should that be any different today? In summary: there are plenty of consequences of the development of AI that warrant intensive discussion (economic consequences, ethical decisions made by AIs, etc.), but it is unlikely that they will bring the end of humanity. Background image: robots trashing living room by vincekamp, licensed under CC BY ND 3.0. Personal permanent URL: http://simia.net/wiki/AI_is_coming,_and_it_will_be_boring From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Wikidata founder, Google ontologist, Semantic Web researcher, and author. " Thaddeus Howze,15,6,https://medium.com/@ebonstorm/of-comets-and-gods-in-the-making-4f55ecccb9fe?source=tag_archive---------8----------------,Of Comets and Gods in the Making – Thaddeus Howze – Medium,"Asferit had not grown up; she didn’t know where she came from; could not conceive of childhood. No memories of parents, no recollection of family. 
On the vast empty world that served as her lab, she built the probes and put a little bit of herself in each one. Her machine-form, ancient, slow and sputtering came to life, wheezing through the long corridors of the silent lab, its darkness masking the distant empty spaces which Asferi imagined were once filled with life. She looked through her thoughts and realized she had lost any hope of memory, that part of her was already circling a distant star aborning with life. She looked in on those places when she woke to see the results of her work; on so many worlds life spawned. With the next launch, she would lose the memory of those places, there was so little of her remaining. Enough for three, no four probes. Then she would cease to remember why she was, what she was. She would forget how to exist. But not yet. She completed the next probe, winding the engine and orienting it along the galactic plane; her sensor array aligning the probe with a wandering comet; she planned to deposit herself within the life-giving molecules within its frozen mass. She knew little about her past, but knew that she must not be able to be found. This was the only memory that remained; hide from the Darkness. As she loaded the last probe, she considered the first probe she ever sent, millennia ago; there were monuments within the halls of the lab in her hubris then, she considered them a successful reincarnation of her people. Each representation was filled with the temporal signature of that once great race; a temporal residue of failure. It spoke of a great race, masters of time and space; they flourished in the dark between the stars. Then the Darkness came. She was overconfident. She slept assured of their success her mission completed. In the time between sleeping and waking, her cycle of regeneration before attempting to seed again, the great race was gone. Found. They did not heed the warnings she sent in those early days. She gave far more of herself then. She came to them in visions, taught them secrets to harness the hidden nature of matter; revealed to them the nature of energies, both planetary and interstellar. They would worship her, revere her and believe her to be a god. In the end, it was not enough. They were consumed, their greatness undone. She sent less and less of herself from then on. Godhood failed them. Perhaps obscurity would serve them better. She sent less each time, only tiny packages of micromachines capable of changing matter, capable of modifying genomes, empowering the creatures spawned of her with abilities even greater than the First Race. Psychometric representations of them were all that remained, echoes in the timestream of history. In their hubris, they ruptured time and space and like the world her lab hung above, cracked the crust of their world and were lost in a temporal vortex of their own making. They had such potential. Squandered. Then she began sending only the memory of what she was, embedded within complex epigentic echoes. No longer would she shape the universe for them, they would have to work for their survival, perhaps they would be stronger for it. She appeared to her descendents only in dreams; visions of what they were, memories of who she was, memories she no longer possessed. Her memory was great once and she seeded thousands of worlds with it. But like the ephemeral nature of memory, so few knew what they saw. Many went mad. Most dreamed of demiurges, mad deva whose powers ravaged worlds. 
These memories destroyed half of them before they could achieve spaceflight and reach for the stars themselves. Religions they spawned consumed them. Now, she sent only cells and precellular matter. The very least of herself, the essence of who she was, the final matter of her being; hidden in comets, cloaked in meteor swarms, hidden on the boots of other starfarers. Time had taught her patience, though she had lost her memories, she was confident of this final strategy. To hide herself on millions of worlds, her final probe-ships would leave a legacy on millions of worlds. She found the last star she would use and loaded the final probe-ship with the hardiest constructions she had ever made. She deconstructed the worldship; her lab, her home for millenia of millenia, breaking down every part of it, reforging it for a final effort. The planet below was also consumed, her last effort would require everything. It was a long dead world lost to antiquity when the universe was young. Of the Darkness, she could not remember, but she knew this: as long as there was light, her people would survive. The final instructions to her probeship would have her descending into her planet’s unstable star. It’s final fluctuations revealed what she knew was the inevitable outcome; and she planned to use it to her advantage. Her final self would not be aware of the result. The final cells of her body were distributed within millions of pieces of her world and her lab. Each calibrated to arrive at a star somewhere in her galaxy. Each single cell would find a world ready for life. She could no longer coerce planets into life. She could no longer force matter or energy to take the shape she deemed. She was now only able to influence the tiniest aspects. Asferit would only be able to nudge a planet toward Life. The Darkness would always be ready to claim her people but now they would be scattered; to worlds within the galaxy and without. She seeded the galactic wind and waited for a supernova to blow them where it would. Her starseeds hardened against the impending blastwave, they would, with the tiniest bit of her final design, travel faster than light toward their final destinations. As the star which lit her world, gave her people life, watched them die and patiently waited until they could be reborn, exploded, Asferit now waited in turn. In those last seconds as the waves of radiation and coronal debris swept over the remnants of her cannibalized world, she subsumed herself within the starseeds and the near-immortal being Asferit, last of her kind, was no more. And yet now she was pure purpose, no ambitions, no plan, no dreams of godhood, no longer a radiant harbingers of dooms lighting the skies of primitive worlds. She would be the essence of Life itself; the Darkness be damned. Of Comets and Gods in the Making © Thaddeus Howze 2013, All Rights Reserved Thaddeus Howze is a popular and recently awarded Top Writer, 2016 recipient on the Q&A site Quora.com. He is also a moderator and contributor to theScience Fiction and Fantasy Stack Exchange with over fourteen hundred articles in a four year period. Thaddeus Howze is a California-based technologist and author who has worked with computer technology since the 1980’s doing graphic design, computer science, programming, network administration, teaching computer science and IT leadership. 
His non-fiction work has appeared in numerous magazines: Huffington Post, Gizmodo, Black Enterprise, the Good Men Project, Examiner.com, The Enemy, Panel & Frame, Science X, Loud Journal, ComicsBeat.com, and Astronaut.com. He maintains a diverse collection of non-fiction at his blog, A Matter of Scale. His speculative fiction has appeared online at Medium, Scifiideas.com, and the Au Courant Press Journal. He has appeared in twelve different anthologies in the United States, the United Kingdom and Australia. A list of his published work appears on his website, Hub City Blues. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Author | Editor | Futurist | Activist | Tech Humanist | http://bit.ly/thowzebio | http://bit.ly/thpatreon " Tommy Thompson,17,14,https://medium.com/@t2thompson/ailovespacman-9ffdd21b01ff?source=tag_archive---------9----------------,Why AI Research Loves Pac-Man – Tommy Thompson – Medium,"AI and Games is a crowdfunded YouTube series on the research and applications of AI within video games. The following article is a more involved transcription of the topics discussed in the video linked to above. If you enjoy this work, please consider supporting my future content over on Patreon. Artificial Intelligence research has shown a small infatuation with the Pac-Man video game series over the past 15 years. But why specifically Pac-Man? What elements of this game have proven interesting to researchers in this time? Let's discuss why Pac-Man is so important in the world of game-AI research. For the sake of completeness — and in appreciation that there is arguably a generation or two not familiar with the game — Puck-Man was an arcade game launched in 1980 by Namco in Japan and renamed Pac-Man upon being licensed by Midway for an American release. The name change was driven less by a need for brand awareness than by the fact that the original name can easily be defaced to say... something else. The original game focuses on the titular character, who must consume as many pills as possible without being caught by one of four antagonists represented by ghosts. The four ghosts, Inky, Blinky, Pinky and Clyde, all attempt to hunt down the player using slightly different tactics from one another. Each ghost has its own behaviour: a bespoke algorithm that dictates how it attacks the player. Players also have the option to consume one of several power-pills that appear in each map. Power-pills allow the player to eat not just pills but also the enemy ghosts for a short period of time. While mechanically simple compared to modern video games, Pac-Man provides an interesting test-bed for AI algorithms learning to play games. The game world is relatively simple in nature, but complex enough that strategies can be employed for optimal navigation. Furthermore, the varied behaviours of the ghosts reinforce the need for strategy, since their unique albeit predictable behaviours necessitate different tactics. If problem solving can be achieved at this level, then there is opportunity for it to scale up to more complex games. While Pac-Man research began in earnest in the early 2000s, work by John Koza (Koza, 1992) discussed how Pac-Man provides an interesting domain for genetic programming: a form of evolutionary algorithm that learns to generate basic programs. The idea behind Koza's work, and later that of (Rosca, 1996), was to highlight how Pac-Man provides an interesting problem for task-prioritisation. 
This is quite relevant given that we are often trying to balance the need to consume pills while avoiding ghosts or — when the opportunity presents itself — eating them. About 10 years later, people became more interested in Pac-Man as a control problem. This research was often intended to explore the application of artificial neural networks to creating a generalised action policy: software that would know, at any given tick of the game, what the correct action would be. This policy would be built by playing the game a number of times and training the system to learn what is effective and what is not. Typically these neural networks are trained using an evolutionary algorithm that finds optimal network configurations by breeding collections of possible solutions and using a 'survival of the fittest' approach to cull weak candidates. (Kalyanpur and Simon, 2001) explored how evolutionary learning algorithms could be used to improve strategies for the ghosts. In time it was evident that the use of crossover and mutation — which are key elements of most evolutionary-based approaches — was effective in improving the overall behaviour. However, it is important to note that the authors themselves acknowledge their work uses a problem domain similar to Pac-Man and not the actual game. (Gallagher and Ryan, 2003) used a slightly more accurate representation of the original game. While a screenshot of the full game is shown in the original article, the actual implementation used only one ghost rather than the original four. In this research the team used an incremental learning algorithm that tailored a series of rules dictating how Pac-Man is controlled via a Finite State Machine (FSM). This proved highly effective in the simplified version they were playing. The use of artificial neural networks — structures that loosely mimic the firing of synapses in the brain — was increasingly popular at the time (and is once again in the most recent research). Two notable publications on Pac-Man are (Lucas, 2005), which attempted to create a 'move evaluation function' for Pac-Man based on data scraped from the screen and processed as features (e.g. distance to the closest ghost), and (Gallagher and Ledwich, 2007), which attempted to learn from raw, unprocessed information. It is notable here that the work by Lucas was in fact done on Ms. Pac-Man rather than Pac-Man. While perhaps not that important to the casual observer, this is an important distinction for AI researchers. Research in the original Pac-Man game caught the interest of the larger computational and artificial intelligence community. You could argue this was due to the interesting problem that the game presents, or to the fact that a game as notable as Pac-Man was now considered of interest within the AI research community. While it now appears commonplace, games — more specifically video games — did not then receive the same attention within AI research circles as they do today. As high-quality research into AI applications in video games grew, it wasn't long before those with a taste for Pac-Man research moved on to Ms. Pac-Man, given the challenges it presents — challenges that researchers are still working on in 2017. Ms. Pac-Man is odd in that it was originally an unofficial sequel: Midway, which had released the original Pac-Man in the United States, had become frustrated with Namco's continued failure to release a sequel. 
While Namco did in time release a sequel dubbed Super Pac-Man, which in many ways is a departure from the original, Midway decided to take matters into its own hands. Ms. Pac-Man was — for lack of a better term — a mod, originally conceived by the General Computing Company (GCC) based in Massachusetts. GCC had already got itself into a spot of legal trouble by creating a mod kit for Atari's popular arcade game Missile Command. As a result, GCC was essentially banned from making further mod kits without the consent of the original game's publisher. Despite the recent lawsuit hanging over them, they decided to show Midway their Pac-Man mod, dubbed Crazy Otto. Midway liked it so much that they bought it from GCC, patched it up to look like a true Pac-Man successor and released it in arcades without Namco's consent (though this has been disputed). Note: For our younger audience, mod kits in the 1980s were not simply software you could use to access and modify parts of an original game. These were actual hardware: printed circuit boards (PCBs) that could either be added alongside the existing game board in the arcade unit or replace it entirely. While nowhere near as common nowadays due to the rise of home console gaming, there are many enthusiasts who still use and trade PCBs fitted for arcade gaming. Ms. Pac-Man looks very similar to the original, albeit with the somewhat stereotypical bow on Ms. Pac-Man's hair/head(?) and a couple of minor graphical changes. However, the sequel also received some small changes to gameplay that have a significant impact. One of the most significant is that the game now has four different maps. In addition, the placement of fruit is more dynamic, and the fruit now moves around the maze. Lastly, a small change is made to the ghost behaviour such that, periodically, the ghosts will make a random move. Otherwise, they continue to exhibit their prescribed behaviour from the original game. Each of these changes has a significant impact on how both humans and AI subsequently approach the problem. Changes made to the maps do not have a significant impact upon AI approaches. For many of the approaches discussed earlier, a new map is simply another configuration of the topography used to model the maze. And if the agent uses more egocentric models for input (i.e. relative to Pac-Man's position), the map layout is not really a consideration, given that the input is contextual. It is only an issue should the agent's design require some form of pre-processing or expert rules that are based explicitly upon the configuration of the map. For a human player, this is also not a huge problem. The only real issue is that a human will have become accustomed to playing on a given map, devising strategies that utilise parts of the map to good effect. However, all they need is practice on the new maps. In time, new strategies can be formulated. The small change to ghost behaviour, which results in random moves occurring periodically, is highly significant. This is because the deterministic model of the original game is completely broken. Previously, each ghost had a prescribed behaviour: you could — with some computational effort — determine the state (and indeed the location) of a ghost at frame n of the game, where n is a certain number of steps ahead of the current state. 
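To make that concrete, here is a toy sketch of deterministic forward prediction. The grid coordinates and the simplified "always step toward Pac-Man" rule are illustrative assumptions, not the arcade logic; the point is only that a fixed rule can be rolled forward exactly n frames.

```python
# Toy illustration of deterministic ghost prediction (hypothetical rule, not the arcade code).

def step_ghost(ghost, pacman):
    """Advance the ghost one frame: step one tile toward Pac-Man along the longer axis."""
    gx, gy = ghost
    px, py = pacman
    if abs(px - gx) >= abs(py - gy):
        gx += 1 if px > gx else -1 if px < gx else 0
    else:
        gy += 1 if py > gy else -1 if py < gy else 0
    return (gx, gy)

def predict_ghost(ghost, pacman, n):
    """Roll the fixed rule forward n frames to predict the ghost's tile exactly."""
    for _ in range(n):
        ghost = step_ghost(ghost, pacman)
    return ghost

# With a fixed rule the prediction is exact; Ms. Pac-Man's occasional random move breaks this.
print(predict_ghost(ghost=(0, 0), pacman=(5, 3), n=4))   # -> (3, 1)
```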
Any implementation reliant upon this knowledge, whether it uses it as part of a heuristic or as an expert knowledge base that gives explicit instructions based on assumptions about ghost behaviour, is now sub-optimal. If the ghosts can make random decisions without any real warning, then we no longer have the same level of confidence in any of our ghost-prediction strategies. Similarly, this has an impact on human players. The deterministic behaviour of the ghosts in the original Pac-Man, while complex, can eventually be recognised by a human player. Leading human players learned to factor that behaviour, at some level, into their decision-making process. However, in Ms. Pac-Man, the change to a non-deterministic domain has a similar effect on humans as it does on AI: we can no longer say with complete confidence what the ghosts will do, given that they can make random moves. Evidence that a particular type of problem or methodology has gained traction in a research community can be found in competitions. If a competition exists that is open to the larger research community, it is, in essence, a validation that the problem merits consideration. In the case of Ms. Pac-Man, there have been two competitions. The first was organised by Simon Lucas — at the time a professor at the University of Essex in the UK — and was initially held at the Congress on Evolutionary Computation (CEC) in 2007. It was subsequently held at a number of conferences — notably the IEEE Conference on Computational Intelligence and Games (CIG) — until 2011. http://dces.essex.ac.uk/staff/sml/pacman/PacManContest.html This competition used the screen-capture approach previously mentioned in (Lucas, 2005), which relied on an existing version of the game. While the organisers used Microsoft's own version from the 'Revenge of Arcade' title, you could also use the likes of webpacman for testing, given it was believed to run the same ROM code. The controller code takes information directly from the screen of the running game. One benefit of this approach is that it prevents the AI developer from accessing the code to potentially 'cheat': you can't access the source code and make calls to the likes of the ghosts to determine their current move. Instead, the developer is required to work with the exact same information that a human player would. A video of the winner of the IEEE CIG 2009 competition, ICE Pambush 3, can be seen below: In 2011, Simon Lucas, in conjunction with Philipp Rohlfshagen and David Robles, created the Ms Pac-Man vs Ghosts competition. In this iteration, the 'screen scraping' approach was replaced with a Java implementation of the original game, which provided an API for developing your own bot. This iteration ran at four conferences between 2011 and 2012. One of the major changes to this competition is that you can now also write AI controllers for the ghosts. Competitors' submissions were then pitted against one another. The ranking of submissions for both Ms. Pac-Man and the ghosts from the 2012 league is shown below. During the earlier competition, there was continued interest in the use of learning algorithms, including the use of evolutionary algorithms — which we had seen in earlier research — to evolve code that plays the game effectively. 
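As a schematic illustration of the evolutionary loop such entries rely on, here is a minimal sketch. The genome encoding and the `evaluate_controller` fitness function are hypothetical stand-ins; in a real entry, fitness would be the controller's average score over several simulated games.

```python
import random

GENOME_LENGTH = 8   # e.g. weights of a simple hand-crafted evaluation function

def evaluate_controller(genome):
    # Hypothetical fitness: in practice, run the controller in a game simulator
    # several times and return its average score.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 0.2) if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Breed a population of candidate controllers, culling weak candidates each generation.
population = [[random.random() for _ in range(GENOME_LENGTH)] for _ in range(20)]
for generation in range(50):
    population.sort(key=evaluate_controller, reverse=True)  # fittest first
    parents = population[:10]                               # 'survival of the fittest'
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(10)]
    population = parents + children

print(round(evaluate_controller(population[0]), 4))
```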
Approaches ranged from evolving 'fuzzy systems' that use rules driven by fuzzy logic (yes, that is a real thing), as shown in (Handa, 2008), to the use of influence maps in (Wirth, 2008) and a different take that uses ant colony optimisation to create competitive players (Emilio et al, 2010). This research also stirred interest from researchers in reinforcement learning: a different kind of learning algorithm that learns from the positive and negative impacts of actions. Note: It has been argued that reinforcement learning algorithms are similar to how the human brain operates, in that feedback is sent to the brain after committing an action. Over time we then associate certain responses with 'good' or 'bad' outcomes. Placing your hand over a naked flame is quickly associated with a bad outcome, given that it hurts! Simon Lucas and Peter Burrow took to the competition framework as a means to assess whether reinforcement learning, specifically an approach called Temporal Difference Learning, would yield stronger returns than evolving neural networks (Burrow and Lucas, 2009). The results appeared to favour the use of neural nets over the reinforcement learning approach. Despite that, one of the major contributions Ms. Pac-Man has generated is research into Monte Carlo methods: an approach where repeated sampling of states and actions allows us to ascertain not only the reward that we will typically attain having made an action, but also the 'value' of the state. More specifically, there has been significant exploration of whether Monte-Carlo Tree Search (MCTS), an algorithm that assesses the potential outcomes at a given state by running simulations, could prove successful. MCTS has already proven to be effective in games such as Go (Chaslot et al, 2008) and Klondike Solitaire (Bjarnason et al. 2009). Naturally — given this is merely an article on the subject and not a literature review — we cannot cover this in immense detail. However, there have been a significant number of papers focussed on this approach. For those interested, I would advise you to read (Browne, et al. 2012), which gives an extensive overview of the method and its applications. One of the reasons this algorithm proves so useful is that it attempts to address the issue of whether your actions will prove harmful in the future. Much of the research discussed in this article is very good at dealing with immediate or 'reflex' responses. However, few approaches can determine whether an action will hurt you in the long term. This is hard to determine for AI without putting some processing power behind it, and even harder when working in a dynamic video game that requires quick responses. MCTS has proven useful since it can simulate whether an action taken on the current frame will still be useful 5/10/100/1000 frames in the future, and it has led to significant improvements in AI behaviour. While Ms. Pac-Man helped push MCTS research, many researchers have now moved on to the Physical Travelling Salesman Problem (PTSP), which provides its own unique challenges due to the nature of the game environment. Ms. Pac-Man is still to date an interesting research area given the challenge that it presents. We are still seeing research conducted within the community as we attempt to overcome the challenge that one small change to the game code presented. In addition, we have moved on from simply focussing on representing the player and have started to focus on the ghosts as well, leading to the aforementioned Ms Pac-Man vs Ghosts competition. 
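To make the Monte-Carlo sampling idea described above concrete, here is a minimal sketch of flat Monte-Carlo action evaluation on a made-up grid world. The `step` forward model, the grid size and the reward values are all illustrative assumptions; full MCTS additionally builds a search tree and uses a selection policy such as UCB rather than purely random rollouts.

```python
import random

# A made-up forward model: Pac-Man on a small grid, one pill, one stationary ghost.
SIZE, PILL, GHOST = 5, (4, 4), (2, 2)
ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def step(pos, action):
    """Apply one move and return (reward, new_position, done)."""
    dx, dy = ACTIONS[action]
    pos = (min(max(pos[0] + dx, 0), SIZE - 1), min(max(pos[1] + dy, 0), SIZE - 1))
    if pos == GHOST:
        return -10.0, pos, True      # caught by the ghost
    if pos == PILL:
        return 10.0, pos, True       # ate the pill
    return -0.1, pos, False          # small cost per move

def rollout(pos, first_action, horizon=25):
    """Play first_action, then random moves, and return the total reward of the rollout."""
    total, pos, done = step(pos, first_action)
    for _ in range(horizon - 1):
        if done:
            break
        reward, pos, done = step(pos, random.choice(list(ACTIONS)))
        total += reward
    return total

def monte_carlo_choice(pos, rollouts=500):
    """Estimate each action's value by averaging random rollouts, then pick the best."""
    return max(ACTIONS, key=lambda a: sum(rollout(pos, a) for _ in range(rollouts)) / rollouts)

print(monte_carlo_choice((0, 0)))    # picks the action whose sampled futures score best on average
```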
While the gaming community at large has more or less forgotten about the series, it has had a significant impact on the AI research community. While the interest in Pac-Man and Ms. Pac-Man is beginning to dissipate, it has encouraged research that has provided significant contribution to artificial and computational intelligence in general. http://www.pacman-vs-ghosts.net/ — The homepage of the competition where you can download the software kit and try it out yourself. http://pacman.shaunew.com/ — An unofficial remake that is inspired by the aforementioned Pac-Man dossier by Jamey Pittman. (Bjarnason, R., Fern, A., & Tadepalli, P. 2009). Lower Bounding Klondike Solitaire with Monte-Carlo Planning. In Proceedings of the International Conference on Automated Planning and Scheduling, 2009. (Browne, C., Powley, E., Whitehouse, D., Lucas, S.M., Cowling, P., Rohlfshagen, P., Tavener, S., Perez , D., Samothrakis, S. and Colton, S., 2012) A Survey of Monte Carlo Tree Search Methods, IEEE Transactions on Computational Intelligence and AI in Games (2012), pages: 1–43. (Burrow, P. and Lucas, S.M., 2009) Evolution versus Temporal Difference Learning for Learning to Play Ms Pac-Man, Proceedings of the 2009 IEEE Symposium on Computational Intelligence and Games. (Emilio, M., Moises, M., Gustavo, R. and Yago, S., 2010) Pac-mAnt: Optimization Based on Ant Colonies Applied to Developing an Agent for Ms. Pac-Man. Proceedings of the 2010 IEEE Symposium on Computational Intelligence and Games. (Gallagher, M. and Ledwich, M., 2007) Evolving Pac-Man Players: What Can We Learn From Raw Input? Proceedings of the 2007 IEEE symposium on Computational Intelligence and Games. (Gallagher, M. and Ryan., A., 2003) Learning to Play Pac-Man: An Evolutionary, Rule-based Approach. Proceedings of the 2003 Congress on Evolutionary Computation (CEC). (Chaslot, G. M. B., Winands, M. H., & van Den Herik, H. J. 2008). Parallel monte-carlo tree search. In Computers and Games (pp. 60–71). Springer Berlin Heidelberg. (Handa, H.) Evolutionary Fuzzy Systems for Generating Better Ms. PacMan Players. Proceedings of the IEEE World Congress on Computational Intelligence. (Kalyanpur, A. and Simon, M., 2001) Pacman using genetic algorithms and neural networks. (Koza, J., 1992) Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press. (Lucas, S.M.,2005) Evolving a Neural Network Location Evaluator to Play Ms. Pac-Man, Proceedings of the 2005 IEEE Symposium on Computational Intelligence and Games. (Pittman, J., 2011) The Pac-Man Dossier. Retrieved from: http://home.comcast.net/~jpittman2/pacman/pacmandossier.html (Rosca, J., 1996) Generality Versus Size in Genetic Programming. Proceedings of the Genetic Programming Conference 1996 (GP’96). (Wirth, N., 2008) An influence map model for playing Ms. Pac-Man. Proceedings of the 2008 Computational Intelligence and Games Symposium Originally published at aiandgames.com on February 10, 2014 — updated to include more contemporary Pac-Man research references. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. AI and games researcher. Senior lecturer. Writer/producer of YouTube series @AIandGames. Indie developer with @TableFlipGames. 
" Milo Spencer-Harper,7.8K,6,https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1?source=tag_archive---------0----------------,How to build a simple neural network in 9 lines of Python code,"As part of my quest to learn about AI, I set myself the goal of building a simple neural network in Python. To ensure I truly understand it, I had to build it from scratch without using a neural network library. Thanks to an excellent blog post by Andrew Trask I achieved my goal. Here it is in just 9 lines of code: In this blog post, I’ll explain how I did it, so you can build your own. I’ll also provide a longer, but more beautiful version of the source code. But first, what is a neural network? The human brain consists of 100 billion cells called neurons, connected together by synapses. If sufficient synaptic inputs to a neuron fire, that neuron will also fire. We call this process “thinking”. We can model this process by creating a neural network on a computer. It’s not necessary to model the biological complexity of the human brain at a molecular level, just its higher level rules. We use a mathematical technique called matrices, which are grids of numbers. To make it really simple, we will just model a single neuron, with three inputs and one output. We’re going to train the neuron to solve the problem below. The first four examples are called a training set. Can you work out the pattern? Should the ‘?’ be 0 or 1? You might have noticed, that the output is always equal to the value of the leftmost input column. Therefore the answer is the ‘?’ should be 1. Training process But how do we teach our neuron to answer the question correctly? We will give each input a weight, which can be a positive or negative number. An input with a large positive weight or a large negative weight, will have a strong effect on the neuron’s output. Before we start, we set each weight to a random number. Then we begin the training process: Eventually the weights of the neuron will reach an optimum for the training set. If we allow the neuron to think about a new situation, that follows the same pattern, it should make a good prediction. This process is called back propagation. Formula for calculating the neuron’s output You might be wondering, what is the special formula for calculating the neuron’s output? First we take the weighted sum of the neuron’s inputs, which is: Next we normalise this, so the result is between 0 and 1. For this, we use a mathematically convenient function, called the Sigmoid function: If plotted on a graph, the Sigmoid function draws an S shaped curve. So by substituting the first equation into the second, the final formula for the output of the neuron is: You might have noticed that we’re not using a minimum firing threshold, to keep things simple. Formula for adjusting the weights During the training cycle (Diagram 3), we adjust the weights. But how much do we adjust the weights by? We can use the “Error Weighted Derivative” formula: Why this formula? First we want to make the adjustment proportional to the size of the error. Secondly, we multiply by the input, which is either a 0 or a 1. If the input is 0, the weight isn’t adjusted. Finally, we multiply by the gradient of the Sigmoid curve (Diagram 4). 
To understand this last one, consider that: The gradient of the Sigmoid curve, can be found by taking the derivative: So by substituting the second equation into the first equation, the final formula for adjusting the weights is: There are alternative formulae, which would allow the neuron to learn more quickly, but this one has the advantage of being fairly simple. Constructing the Python code Although we won’t use a neural network library, we will import four methods from a Python mathematics library called numpy. These are: For example we can use the array() method to represent the training set shown earlier: The ‘.T’ function, transposes the matrix from horizontal to vertical. So the computer is storing the numbers like this. Ok. I think we’re ready for the more beautiful version of the source code. Once I’ve given it to you, I’ll conclude with some final thoughts. I have added comments to my source code to explain everything, line by line. Note that in each iteration we process the entire training set simultaneously. Therefore our variables are matrices, which are grids of numbers. Here is a complete working example written in Python: Also available here: https://github.com/miloharper/simple-neural-network Final thoughts Try running the neural network using this Terminal command: python main.py You should get a result that looks like: We did it! We built a simple neural network using Python! First the neural network assigned itself random weights, then trained itself using the training set. Then it considered a new situation [1, 0, 0] and predicted 0.99993704. The correct answer was 1. So very close! Traditional computer programs normally can’t learn. What’s amazing about neural networks is that they can learn, adapt and respond to new situations. Just like the human mind. Of course that was just 1 neuron performing a very simple task. But what if we hooked millions of these neurons together? Could we one day create something conscious? I’ve been inspired by the huge response this article has received. I’m considering creating an online course. Click here to tell me what topic to cover. I’d love to hear your feedback. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Studied Economics at Oxford University. Founder of www.moju.io. Interested in politics and AI. Technology trends and New Invention? Follow this collection to update the latest trend! [UPDATE] As a collection editor, I don’t have any permission to add your articles in the wild. Please submit your article and I will approve. Also, follow this collection, please. " Arik Sosman,1.5K,7,https://blog.arik.io/facebook-m-the-anti-turing-test-74c5af19987c?source=tag_archive---------1----------------,Facebook M — The Anti-Turing Test – Arik’s Blog,"Facebook has recently launched a limited beta of its ground-breaking AI called M. M’s capabilities far exceed those of any competing AI. Where some AIs would be hard-pressed to tell you the weather conditions for more than one location (god forbid you go on a trip), M will tell you the weather forecast for every point on your route at the time you’re expected to get there, and also provide you with convenient gas station suggestions, account for traffic in its estimations, and provide you with options for food and entertainment at your destination. As many people have pointed out, there have been press releases stating that M is human-aided. 
However, the point of this article is not to figure out whether or not there are humans behind it, but to indisputably prove it. When communicating with M, it insists it’s an AI, and that it lives right inside Messenger. However, its non-instantaneous nature and the sheer unlimited complexity of tasks it can handle suggest otherwise. The opinion is split as to whether or not it’s a real AI, and there seems to be no way of proving its nature one way or the other. The biggest issue with trying to prove whether or not M is an AI is that, contrary to other AIs that pretend to be human, M insists it’s an AI. Thus, what we would be testing for is humans pretending to be an AI, which is much harder to test than the other way round, because it’s much easier for humans to pretend to be an AI than for an AI to pretend to be a human. In this situation, a Turing test is futile, because M’s objective is precisely to not pass a Turing test. So what we want to prove is not the limitations of the AI, but the limitlessness of the (alleged) humans behind it. What we need therefore is a different test. An “Anti-Turing” test, if you will. As it happens, I did find a way of proving M’s nature. But good storytelling mandates that first, I describe my laborious path to the result, and the inconclusive experiments I had to conduct before I finally got a definitive answer. When I first got M, our conversation started like this: “I use artificial intelligence, but people help train me,” was M’s response to my question regarding its nature. That can mean many things, because using AI is not the same as being a completely autonomous AI. So I kept bugging it about its nature. Some people opined that what M refers to as AI is that there are people typing out all the responses, but the tool that helps them do that is based on machine learning. However, directly asking about that didn’t yield any new insights. M’s assertiveness regarding its nature is set in stone. Nonetheless, there were some minor tells that arguably betrayed the underlying human nature of this chatbot. To test its limit, I have asked it to perform a set of complicated tasks for me that no other AI out there could pull off. I told it where I work, and then slightly modified my request. And indeed, it responded! The most noteworthy aspect of this reply is that “Google Maps” wasn’t capitalized, suggesting that maybe, just maybe, a human typed it out in a hurry. And indeed, even with some other requests, its responses have proven not to be as impeccable as the ones we’re used to from Siri. For instance, when I asked it to find some nice wallpapers for me taken from the Berkeley stadium depicting the Bay Area at night, preferably with the Bay Bridge, the Transamerica Pyramid, and the Sather Tower being in the picture, M did manage to find some very nice wallpapers for me, but it said that it couldn’t find any with the Campanile. As consolation, though, it said it would let me know if it found any that fit my criteria more precisely: Now, the first issue with the above response is that the wallpapers it sent me did have the Transamerica Pyramid, and M knew they did. What they didn’t have was the Sather Tower, so why is it saying it’s going to let me know about pictures with the Transamerica Pyramid? The second issue is that it’s called the “Transamerica Pyramid,” not the “Transamerican Pyramid.” And lastly, note the two “with”s and the “I’l”. It has made two typos! 
And indeed, that was not the only time it did: While a lot of humans struggle with the distinction between "its" and "it's," for an AI, that should not have been an issue. Even so, it might have been trained wrong, so these lapses are not sufficiently conclusive. Even the delayed responses I mentioned earlier could have been deliberate, as could the typing indicator shown while M is preparing a response rather than sending the whole string instantaneously, as a regular AI would. The results and indications so far didn't satisfy me, so I was still looking for a way to prove that there are real humans behind M. Just how could I make them come out, make them show themselves? As it happens, the answer came to me at a time when I wasn't actively looking for it. The movie in Cupertino ended rather late, and I asked M whether there was any place where I could get dinner afterwards that would still be open at that time. There were only two places open, but I wasn't sure whether their kitchens would still be open, too. Thus, I asked M whether it could call them and figure that out. And indeed, it said it could! So I asked M whether it could call my friends (nope). Whether it could call me (nope). Apparently, it could only call businesses for me, but not individuals. So what do I do? I make up a business and ask M to call it. So M asked me for the phone number, and I simply gave it mine. About five minutes later, I receive a call with no caller ID. When I pick up, I hear some rumbling noises in the background, say "hello," and then the other end hangs up. Immediately afterward, the following exchange happens with M: Unfortunately, I didn't have a landline phone number, so I was a bit disappointed that not even this experiment could prove M's nature. A few days later, I had to get some work done during the weekend, and while at the office, I realized that the company did have one. The experiment had to be repeated! About three minutes later, we get a phone call in the conference room. When I pick up, a distinctively human, female voice says, "Hello?" As it happens, I had accidentally set the phone to mute before that, so she didn't hear me saying the company name. Still, the voice was most definitely human. And because the reader shouldn't be taking me at face value, I made a recording of that whole encounter: Immediately afterward, M sends a reply. What's more, it appears to me that they forgot to block the caller ID for that particular call, because I got to see the phone number they were calling from. So there, very clearly, M was calling from +1 (650) 796–2402. As can be seen in the photo, the automatic reverse-lookup matched that number to Facebook. Thus, here we are. We have definitive proof that M is powered by humans. The next question is: Is it only humans, or is there at least some AI-driven component behind it? As to this problem, I'll leave it as a homework assignment for the reader to figure out. In the meantime, I shall enjoy having my own free personal (human) assistant. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Software Engineer @BitGo My experimental blog. " Tony Aubé,4.5K,8,https://medium.com/swlh/no-ui-is-the-new-ui-ab3f7ecec6b3?source=tag_archive---------2----------------,No UI is the New UI – The Startup – Medium,"On the rise of UI-less apps and why you should care about them as a designer. 
October 23, 2015 • 8 minutes read A couple of months ago, I shared with my friends how I think apps like Magic and Operator are going to be the next big thing. If you don't know about these apps, what makes them special is that they don't use a traditional UI as a means of interaction. Instead, the entire app revolves around a single messaging screen. These are called 'Invisible' and 'Conversational' apps, and since my initial post, a slew of similar apps has come to market. Even as of this writing, Facebook is releasing M, a personal assistant integrated with Messenger to help you do just about anything. While these apps operate in a slew of different markets, from checking your bank account, scheduling a meeting, making a reservation at the best restaurant to being your travel assistant, they all have one thing in common: they place messaging at center stage. Matti Makkonen is a software engineer who passed away a couple of months ago. My guess is that you didn't hear of his death, and you most likely don't know who he was. However, Makkonen is probably one of the most important individuals in the domain of communications. And I mean — on the level of Alexander Bell — important. He is the inventor of SMS. If you didn't realize how pervasive SMS has become today, think again. SMS is the most used application in the world. Three years ago, it had an estimated 4 billion active users. That was over four times the number of Facebook users at the time. Messaging, and particularly SMS, has been slowly taking over the world. It is now fundamental to human communication, and it is why messaging apps such as WhatsApp and WeChat are now worth billions. While messaging has become central to our everyday lives, it's currently only used in the narrow context of personal communications. What if we could extend messaging beyond this? What if messaging could transform the way we interact with computers the same way it transformed the way we interact with each other? In the recent movie Ex Machina, a billionaire creates Ava, a female-looking robot endowed with artificial intelligence. To test his invention, he brings in a young engineer to see if he could fall in love with her. The whole premise of the movie is centered around the Turing test, a test invented by Alan Turing (also featured in the recent movie The Imitation Game) to determine whether a machine's intelligence is equivalent to that of a human. A robot passing the Turing test would have huge implications for humanity, as it would mean that artificial intelligence has reached human level. While we are far from creating robots that can look and act like humans such as Ava, we've gotten pretty good at simulating human intelligence in narrow contexts. And one of those contexts where AI performs best is, you've guessed it, messaging. This is thanks to deep learning, a process where the computer is taught to understand and solve a problem by itself, rather than having engineers code the solution. Deep learning is a complete game changer. It allowed AI to reach new heights previously thought to be decades away. Nowadays, computers can hear, see, read and understand humans better than ever before. This is opening a world of opportunities for AI-powered apps, toward which entrepreneurs are rushing. In this gold rush, messaging is the low-hanging fruit. This is because, out of all the possible forms of input, digital text is the most direct one. 
Text is constant, it doesn’t carry all the ambiguous information that other forms of communication do, such as voice or gestures. Furthermore, messaging makes for a better user experience than traditional apps because it feels natural and familiar. When messaging becomes the UI, you don’t need to deal with a constant stream of new interfaces all filled with different menus, buttons and labels. This explains the current rise in popularity of invisible and conversational apps, but the reason you should care about them goes beyond that. The rise in popularity of these apps recently brought me to a startling observation : advances in technology, especially in AI, are increasingly making traditional UI irrelevant. As much as I dislike it, I now believe that technology progress will eventually make UI a tool of the past, something no longer essential for Human-Computer interaction. And that is a good thing. One could argue that conversational and invisible apps aren’t devoid of UI. After all, they still require a screen and a chat interface. While it is true that these apps do require UI design to some extent, I believe these are just the tip of the iceberg. Beyond them, new technologies have the potential to disrupt the screen entirely. To my point, have a look at the following videos: The first video showcases project Soli, a small Radar chip created by Google to allow fine gesture recognition. The second one presents Emotiv, a product that can read your brainwaves and understand their meaning through — bear with me — electroencephalography (or EEG for short). While both technologies seem completely magical, they are not. They are currently functional and have something very special in common: they don’t require a UI for computer input. As a designer, this is an unsettling trend to internalize. In a world where computer can see, listen, talk, understand and reply to you, what is the purpose of a user interface? Why bother designing an app to manage your bank account when you could just talk to it directly? Beyond human-interface interaction, we are entering the world of Brain-Computer Interaction. In this world, digital-telepathy coupled with AI and other means of input could allow us to communicate directly with computer, without the need for a screen. In his talk at CHI 2014, Scott Jenson introduced the concept of a technological tiller. According to him, a technological tiller is when we stick an old design onto a new technology wrongly thinking it will work out. The term is derived from a boat tiller, which was, for a long time, the main navigation tool known to man. Hence, when the first cars were invented, rather than having steering wheels as a mean of navigation, they had boat tillers. The resulting cars were horribly hard to control and prone to crash. It was only after the steering wheel was invented and added to the design that cars could become widely used. As a designer, this is a valuable lesson: a change in context or technology most often requires a different design approach. In this example, the new technology of the motor engine needed the new design of the steering wheel to make the resulting product, the car, reach its full potential. When a technological tiller is ignored, it usually leads to product failures. When it is acknowledged and solved, it usually leads to a revolution and tremendous success. 
And if one company best understood this principle, it is Apple, with the invention of the iPhone and the iPad: A technological tiller was Nokia sticking a physical keyboard on top of a phone. Good design was to create a touch screen and digital keyboard. A technological tiller was Microsoft sticking Windows XP on top of a tablet. Good design was to develop a new, finger-friendly OS. And I believe a technological tiller is sticking an iPad screen over every new Internet-of-Things things. What if good design is about avoiding the screen altogether? Learning about technological tiller teaches us that sticking too much to old perspectives and ideas is a surefire way to fail. The new startups developing invisible and conversational apps understand this. They understand that the UI is not the product itself, but only a scaffolding allowing us to access the product. And if avoiding that scaffolding can lead to a better experience, then it definitively should be. So do I believe that AI is taking over, that UI are obsolete and that all visual designers will be out of jobs soon? Not really. As far as I know, UI will still be needed for computer output. For the foreseeable future, people will still use the screens to read, watch videos, visualize data, and so on. Furthermore, as Nir mentioned in his great article on the subject, conversational apps are currently good at only a specific set of tasks. It is safe to think that this will also be the case for new technologies such as Emotiv and project Soli. As game-changing as these are, they will most likely not be good at everything, and UI will probably outperform them at specific tasks. What I do believe, however, is that these new technologies are going to fundamentally change how we approach design. This is necessary to understand for those planning to have a career in tech. In a future where computer can see, talk and listen and reply to you, what good are going to be your awesome pixel-perfect Sketch skills? Let this be a fair warning against complacency. As UI designers, we have a tendency to presume a UI is the solution to every new design problems. If anything, the AI revolution will force us to reset our presumption on what it means to design for interaction. It will push us to leave our comfort zone and look at the bigger picture, bringing our focus on the design of the experience rather than the actual screen. And that is an exciting future for designers. 💚 Please hit recommend if you enjoyed or learned from this text. To keep things concise, this text uses the term UI as short for Graphical User Interface. More precisely, it refers to the web and app visual patterns that have become so pervasive in the recent years. This text was originally published on TechCrunch on 11/11/2015. Published in #SWLH (Startups, Wanderlust, and Life Hacking) From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Personal thoughts on the future of design & technology. Lead Design @ Osmo. Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi " Matt O'Leary,373,12,https://howwegettonext.com/i-let-ibm-s-robot-chef-tell-me-what-to-cook-for-a-week-d881fc884748?source=tag_archive---------3----------------,I Let IBM’s Robot Chef Tell Me What to Cook for a Week,"Originally published at www.howwegettonext.com. 
If you’ve been following IBM’s Watson project and like food, you may have noticed growing excitement among chefs, gourmands and molecular gastronomists about one aspect of its development. The main Watson project is an artificial intelligence that engineers have built to answer questions in native language — that is, questions phrased the way people normally talk, not in the stilted way a search engine like Google understands them. And so far, it’s worked: Watson has been helping nurses and doctors diagnose illnesses, and it’s also managed a major “Jeopardy!” win. Now, Chef Watson — developed alongside Bon Appetit magazine and several of the world’s finest flavor-profilers — has been launched in beta, enabling you to mash recipes according to ingredients of your own choosing and receive taste-matching advice which, reportedly, can’t fail. While some of the world’s foremost tech luminaries and conspiracy theorists are a bit skeptical about the wiseness of A.I., if it’s going to be used at all, allowing it to tell you what to make out of a fridge full of unloved leftovers seems like an inoffensive enough place to start. I decided to put it to the test. While employed as a food writer for well over a decade, I’ve also spent a good part of the last nine years working on and off in kitchens. Figuring out how to use “spare” ingredients has become quite commonplace in my professional life. I’ve also developed a healthy disregard for recipes as anything other than sources of inspiration (or annoyance) but for the purposes of this experiment am willing to follow along and try any ingredient at least once. So, with this in mind, I’m going to let Watson tell me what to eat for a week. I’ve spent a good amount of time playing around with the app, which can be found here, and I’m going to follow its instructions to the letter where possible. I have an audience of willing testers for the food and intend to do my best in recreating its recipes on the plate. Still, I’m going to try to test it a bit. I want to see whether or not it can save me time in the kitchen; also, whether it has any amazing suggestions for dazzling taste matches; if it can help me use things up in the fridge; and whether or not it’s going to try to get me to buy a load of stuff I don’t really need. A lot of work has gone into the creation of this app — and a lot of expertise. But is it useable? Can human beings understand its recipes? Will we want to eat them? Let’s find out. A disclaimer before we start: Chef Watson isn’t great at telling you when stuff is actually ready and cooked. You need to use your common sense. Take all of its advice as advice and inspiration only. It’s the flavors that really count. Monday: The Tailgating Corn Salmon Sandwich My first impression is that the app is intuitive and pretty simple to use. Once you’ve added an ingredient, it suggests a number of flavor matches, types of dishes and “moods” (including some off-the-wall ones like “Mother’s Day”). Choose a few of these options and the actual recipes begin to bunch up on the right of the screen. I selected salmon and corn, then opted for the wildly suggestive “Tailgating corn salmon sandwich.” The recipe page itself has links to the original Bon Appetit dish that inspired your A.I. mélange, accompanied by a couple of pictures. 
There’s a battery of disclaimers stating that Chef Watson really only wants to suggest ideas, rather than tell you what to eat — presumably to stop people who want to try cooking with fiberglass, for example, from launching “no win, no fee” cases. My own salmon tailgating recipe seemed pretty straightforward. There are a couple of nice touches on the page, with regard to usability: You can swap out any ingredients that you might not have in stock for others, which Watson will suggest (it seems fond of adding celery root to dishes). For this first attempt I decided to follow Watson’s advice almost to a T. I didn’t have any garlic chile sauce but managed to make a presumably functional analog out of some garlic and chili sauce. The only other change I made involved adding some broad beans, because I like broad beans. During prep, I employed a nearly unconscious bit of initiative, namely when I cooked the salmon. It’s entirely likely that Watson was, as seemed to be the case, suggesting that I use raw salmon, but it’s Monday night and I’m not in the mood for anything too mind-bending. Team Watson: If I ruined your tailgater with my pig-headed insistence on cooked fish, I’m sorry. Although I’m not too sorry because, you know, it was actually a really good dish. I was at first unsure — the basil seemed like a bit of an afterthought; I wasn’t sure the lime zest was necessary; and cold salmon salad on a burger bun isn’t really an easy sell. But damn it, I’d make that sandwich again. It was missing some substance overall. It made enough for two small buns, so I teamed it up with a nice bit of Korean-spiced, pickled cucumber on the side, which worked well. My fellow diner deemed it “fine, if a little uninteresting” — and yes, maybe it could have done with a bit more sharpness and depth, and maybe a little more “a computer told me how to make this” flavor wackiness, but overall: Well done. Hint! Definitely add broad beans. They totally worked. Now, to mull over what “tailgating” might mean... Tuesday: Spanish Blood Sausage Porridge It was day two of the Chef Watson “guest slot” in the kitchen, and things were about to get interesting. Buoyed by yesterday’s Tailgating Salmon Sandwich success, I decided to give Watson something to sink its digital teeth into and supply only one ingredient: blood sausage. I also specified “main” as a style, really so that he/she/it knew that I wasn’t expecting dessert. If I’m being very honest, I’ve read more appetizing recipes than blood sausage porridge. Even the inclusion of the word “Spanish” doesn’t do anything to fancy it up. And, a bit concerningly, this is a recipe that Watson has extrapolated from one for Rye Porridge with Morels, replacing the rye with rice, the mushroom with sausage and the original’s chicken livers with a single potato and one tomato. Still, maybe it would be brilliant. But unlike yesterday, I ran into some problems. I wasn’t sure how many tomatoes and potatoes Watson expected me to have here — the ingredients list says one of each; the method suggests many — or also why I had to soak the tomato in boiling water first, although it makes sense in the original mushroom-centric method. Additionally, Wastson offered the whimsical instruction to just “cook” the tomatoes and potatoes, presumably for as long as I feel like. There’s a lot of butter involved in this recipe and rather too much liquid recommended: eight cups of stock for one-and-a-half of rice. I actually got a bit fed up after four and stopped adding them. 
Forty to 50 minutes cooking time was a bit too long, too — again, that’s been directly extracted from the rye recipe. But these were mere trifles. The dish tasted great. It’s a lovely blend of flavors and textures, thanks to the blood sausage and the potato. The butter works brilliantly and the tomato on top is a nice touch. And it proves Watson’s functionality. You can suggest one ingredient that you find in the fridge, use your initiative a bit and you’ll be left with something lovely. And buttery. Lovely and buttery. Well done, Watson! Wednesday: Diner Cod Pizza When I read this recipe, I wondered whether this was going to be it for me and Watson. “Diner,” “cod” and “pizza” are three words that don’t really belong together, and the ingredients list seemed more like a supermarket sweep than a recipe. Now that I’ve actually made the meal, I don’t know what to think about anything. You might remember a classic 1978 George A. Romero-directed horror film called“Dawn of the Dead.” Its 2004 remake, following the paradigm shift to running zombies in “28 Days Later,” suffered critically. My impression of this remake was always that if it’d just been called something different — “Zombies Go Shopping,” for instance — every single person who saw it would have loved it. As it was, viewers thought it seemed unauthentic, and it gathered what was essentially some unfair criticism. (See also the recent “RoboCop” remake or, as I call it,“CyberSwede vs. Detroit.”) This meal is my culinary “Dawn of the Dead.” If only Watson had called it something other than pizza, it would have been utterly perfect. It emphatically isn’t a pizza. It has as much in common with pizza as cake does. But there’s something about radishes, cod, ginger, olives, tomatoes and green onions on a pizza crust that just work remarkably well. To be clear, I fully expected to throw this meal away. I had the website for curry delivery already open on my phone. That’s all before I ate two of the pizzas. They taste like nothing on earth. The addition of Comté cheese and chives is the sort of genius/absurdity that makes people into millionaires. I was, however, nervous to give one to my pregnant fiancée; the ingredients are so weird that I was just sure she’d suffer some really strange psychic reaction or that the baby would grow up to be extremely contrary. Be careful with this recipe preparation: As I’ve found with Watson, it doesn’t tell you how to assure that your fish is cooked; nor does it tell you how long to pre-bake the crust base. These kinds of things are really important. You need to make sure this dish is cooked properly. It takes longer than you might expect. I’m writing this from Sweden, the home of the ridiculous “pizza,” and yet I have a feeling that if I were to show this recipe to a chef who ordinarily thinks nothing of piling a kilo of kebab meat and Béarnaise sauce on bread and serving it in a cardboard box with a side salad of fermented cabbage, he or she would balk and tell me that I’ve gone too far. Which would be his or her loss. I think I’m going to have to take this to “Dragon’s Den” instead. Watson, I don’t know how I’m going to cope with normal recipes after our little holiday together. You’re changing the way I think about food. Thursday: Fall Celery Sour Cream Parsley Lemon Taco Following yesterday’s culinary epiphany, I was keen to keep a cool head and a critical eye on Chef Watson, so I decided to road-test one theory from an article I found on the Internet. 
It mentioned that some of the most frequently discarded items in American fridges are celery, sour cream, fresh herbs and lemons. Let’s not dwell too much on the “luxury problems” aspect of this (I can’t imagine that people everywhere in the world are lamenting the amount of sour cream and flat-leaf parsley they toss) and focus instead on what Watson can do with this admittedly tricky-sounding shopping list. What it did was this: Immediately add shrimp, tortillas and salsa verde. The salsa verde it recommended, from an un-Watsoned recipe courtesy of Bon Appetit, was fantastic. It’s nothing like the salsa verde I know and love, with its capers and dill pickles and anchovies: This iteration required a bit of a simmer, was super-spicy and delicious. (I had to cheat and use normal tomatoes instead of tomatillos, but I don’t think it made a huge difference.) The marinade for the shrimp was unusual in that like a lot of what Watson recommends it used a ton of butter. A hefty wallop of our old friend kosher salt, too. Now, I’ve worked as a chef on and off for several years so am unfazed by the appearance of salt and butter in recipes. They’re how you make things taste nice. However, there’s no getting away from the fact that I bought a stick of butter at the start of the week and it’s already gone. The assembled tacos were good — they were uncontroversial. My dining companion deemed the salsa “a bit too spicy,” but I liked the kick it gave the dish and the sour cream calmed it down a bit. It struck me as a bit of a shame to fire up the barbecue for only about two minutes’ worth of cooking time, but it’s May and the sun is shining so what the heck. Was this recipe as absurd as yesterday’s? Absolutely not. Was it as memorable? Sadly, I don’t think so. Would I make it again? I’m sorry, Watson, but probably not. These tacos were good but ultimately not worth the prep hassle. Friday: Mexican Mushroom Lasagna Before I start, I don’t want you to get the impression that my love affair (which reached the height of its passion on Wednesday) with Watson is over. It absolutely isn’t. I have been consistently impressed with the software’s intelligence, its ease of use and the audacity of some of its suggestions. For flavor-matching, it’s incredible. It really works. It probably won’t save you any money; it won’t make you thin; and it won’t teach you how to actually cook — all of that stuff you have to work out for yourself. But, at this stage, it’s a distinctly impressive and worthwhile project. Do give it a go. But... be prepared to have to coax something workable out of it every once in a while. Today, it took me a long time to find a meat-free recipe which didn’t, when it came down to it, contain some sort of meat. I selected “meat” as an option for what I didn’t want to include, and it took me to a recipe for sausage lasagne. With one-and-a-half pounds of sausage in it. I removed the sausage, and it replaced it with turkey mince. Maybe someone just needs to tell Watson that neither sausages nor turkeys grow on trees. After much tinkering and submitting and resubmitting, the recipe I ended up with is for lasagne topped with a sort of creamy mashed potato sauce. It’s very easy and it’s a profoundly smart use of ingredients. The lasagne is not the world’s most aesthetically appealing dish, and it’s not as astonishingly flavored as some of this week’s other revelations, but I don’t think I’ll be making my cheese sauce in any other way from this point onwards. Top marks. 
And, in essence, this kind of sums up Watson for me. You need to tinker with it a bit before you can find something usable. You may need to make a “do I want to put mashed potato on this lasagne?” leap of faith, and you’re going to have to actually go with it if you want the app’s full benefit. You’ll consume a lot of dairy products, and you might find yourself daydreaming about nice, simple, unadorned salads if you decide to go all-in with its suggestions. But an A.I. that can tell us how to make a pizza out of cod, ginger and radishes that you know is going to taste amazing? One that will gladly suggest a workable recipe for blood sausage porridge and walk you through it without too much hassle? That gives you a “how crazy” option for each ingredient? That is only designed to make the lives of food enthusiasts more interesting? Why on earth not? Watson and I are going to be good friends from this point forward, even if we don’t speak every day. And I can’t wait to introduce it to others. Now, though, I’m going to only consume smoothies for a week. Seriously, if I even look at butter in the next few days, I’m probably going to puke. This fall, Medium and How We Get To Next are exploring the future of food and what it means for us all. To get the latest and join the conversation, you can follow Future of Food. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Inspiring stories about the people and places building our future. Created by Steven Johnson, edited by Ian Steadman, Duncan Geere, Anjali Ramachandran, and Elizabeth Minkel. Supported by the Gates Foundation. " Tanay Jaipuria,1.1K,5,https://medium.com/@tanayj/self-driving-cars-and-the-trolley-problem-5363b86cb82d?source=tag_archive---------4----------------,Self-driving cars and the Trolley problem – Tanay Jaipuria – Medium,"Google recently announced that their self-driving car has driven more than a million miles. According to Morgan Stanley, self-driving cars will be commonplace in society by ~2025. This got me thinking about the ethics and philosophy behind these cars, which is what the piece is about. In 1942, Isaac Asimov introduced three laws of robotics in his short story “Runaround”. They were as follows: He later added a fourth law, the zeroth law: 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm. Though fictional, they provide a good philosophical grounding of how AI can coexist with society. If self driving cars, were to follow them, we’re in a pretty good spot right? (Let’s leave aside the argument that self-driving cars lead to loss of jobs of taxi drivers and truck drivers and so should not exist per the 0th/1st law) However, there’s one problem which the laws of robotics don’t quite address. It’s a famous thought experiment in philosophy called the Trolley Problem and goes as follows: It’s not hard to see how a similar situation would come up in a world with self-driving cars, with the car having to make a similar decision. Say for example a human-driven car runs a red light and a self-driving car has two options: What should the car do? From a utilitarian perspective, the answer is obvious: to turn right (or “pull the lever”) leading to the death of only one person as opposed to five. Incidentally, in a survey of professional philosophers on the Trolley Problem, 68.2% agreed, saying that one should pull the lever. 
So maybe this “problem” isn’t a problem at all and the answer is to simply do the Utilitarian thing that “greatest happiness to the greatest number”. But can you imagine a world in which your life could be sacrificed at any moment for no wrongdoing to save the lives of two others? Now consider this version of the trolley problem involving a fat man: Most people that go the utilitarian route in the initial problem say they wouldn’t push the fat man. But from a utilitarian perspective there is no difference between this and the initial problem — so why do they change their mind? And is the right answer to “stay the course” then? Kant’s categorical imperative goes some way to explaining it: In simple words, it says that we shouldn’t merely use people as means to an end. And so, killing someone for the sole purpose of saving others is not okay, and would be a no-no by Kant’s categorical imperative. Another issue with utilitarianism is that it is a bit naive, at least how we defined it. The world is complex, and so the answer is rarely as simple as perform the action that saves the most people. What if, going back to the example of the car, instead of a family of five, inside the car that ran the red light were five bank robbers speeding after robbing a bank. And sat in the other car was a prominent scientist who had just made a breakthrough in curing cancer. Would you still want the car to perform the action that simply saves the most people? So may be we fix that by making the definition of Utilitarianism more intricate, in that we assign a value to each individuals life. In that case the right answer could still be to kill the five robbers, if say our estimate of utility of the scientist’s life was more than that of the five robbers. But can you imagine a world in which say Google or Apple places a value on each of our lives, which could be used at any moment of time to turn a car into us to save others? Would you be okay with that? And so there you have it, though the answer seems simple, it is anything but, which is what makes the problem so interesting and so hard. It will be a question that comes up time and time again as self-driving cars become a reality. Google, Apple, Uber etc. will probably have to come up with an answer. To pull, or not to pull? Lastly, I want to leave you another question that will need to be answered, that of ownership. Say a self-driving car which has one passenger in it, the “owner”, skids in the rain and is going to crash into a car in front, pushing that car off a cliff. It can either take a sharp turn and fall of the cliff or continue going straight leading to the other car falling of the cliff. Both cars have one passenger. What should the car do? Should it favor the person that bought it — its owner? Thanks for reading! Feel free to share this post and leave a note/write a response to share your thoughts. I’m tanayj on twitter if you want to discuss further! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Product @Facebook. Previously @McKinsey. I like tech, econ, strategy and @Manutd. Views and banter my own. " Milo Spencer-Harper,2.2K,3,https://medium.com/technology-invention-and-more/how-to-build-a-multi-layered-neural-network-in-python-53ec3d1d326a?source=tag_archive---------5----------------,How to build a multi-layered neural network in Python,"In my last blog post, thanks to an excellent blog post by Andrew Trask, I learned how to build a neural network for the first time. It was super simple. 
9 lines of Python code modelling the behaviour of a single neuron. But what if we are faced with a more difficult problem? Can you guess what the ‘?’ should be? The trick is to notice that the third column is irrelevant, but the first two columns exhibit the behaviour of a XOR gate. If either the first column or the second column is 1, then the output is 1. However, if both columns are 0 or both columns are 1, then the output is 0. So the correct answer is 0. However, this would be too much for our single neuron to handle. This is considered a “nonlinear pattern” because there is no direct one-to-one relationship between the inputs and the output. Instead, we must create an additional hidden layer, consisting of four neurons (Layer 1). This layer enables the neural network to think about combinations of inputs. You can see from the diagram that the output of Layer 1 feeds into Layer 2. It is now possible for the neural network to discover correlations between the output of Layer 1 and the output in the training set. As the neural network learns, it will amplify those correlations by adjusting the weights in both layers. In fact, image recognition is very similar. There is no direct relationship between pixels and apples. But there is a direct relationship between combinations of pixels and apples. The process of adding more layers to a neural network, so it can think about combinations, is called “deep learning”. Ok, are we ready for the Python code? First I’ll give you the code and then I’ll explain further. Also available here: https://github.com/miloharper/multi-layer-neural-network This code is an adaptation from my previous neural network. So for a more comprehensive explanation, it’s worth looking back at my earlier blog post. What’s different this time, is that there are multiple layers. When the neural network calculates the error in layer 2, it propagates the error backwards to layer 1, adjusting the weights as it goes. This is called “back propagation”. Ok, let’s try running it using the Terminal command: python main.py You should get a result that looks like this: First the neural network assigned herself random weights to her synaptic connections, then she trained herself using the training set. Then she considered a new situation [1, 1, 0] that she hadn’t seen before and predicted 0.0078876. The correct answer is 0. So she was pretty close! You might have noticed that as my neural network has become smarter I’ve inadvertently personified her by using “she” instead of “it”. That’s pretty cool. But the computer is doing lots of matrix multiplication behind the scenes, which is hard to visualise. In my next blog post, I’ll visually represent our neural network with an animated diagram of her neurons and synaptic connections, so we can see her thinking. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Studied Economics at Oxford University. Founder of www.moju.io. Interested in politics and AI. Technology trends and New Invention? Follow this collection to update the latest trend! [UPDATE] As a collection editor, I don’t have any permission to add your articles in the wild. Please submit your article and I will approve. Also, follow this collection, please. 
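The multi-layered network described in the article above boils down to a short numpy script; here is a minimal sketch of that kind of two-layer network (a four-neuron hidden layer feeding one output neuron, trained with backpropagation), not the author's original code, which lives at the GitHub link given in the article. The training rows, loop length and use of numpy are assumptions consistent with the description (third column irrelevant, output is the XOR of the first two columns, new situation [1, 1, 0] should predict close to 0).

```python
# Minimal sketch of a two-layer network (not the author's original code).
# Layer 1: 4 hidden neurons, Layer 2: 1 output neuron, sigmoid activations,
# trained with plain backpropagation on the XOR-style pattern described above.
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(s):
    # s is assumed to already be a sigmoid output
    return s * (1 - s)

# Training set: the third column is irrelevant; the output is the XOR of the first two.
training_inputs = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [0, 1, 0],
                            [1, 0, 0], [1, 1, 1], [0, 0, 0]])
training_outputs = np.array([[0, 1, 1, 1, 1, 0, 0]]).T

np.random.seed(1)
weights_1 = 2 * np.random.random((3, 4)) - 1   # input -> hidden (4 neurons)
weights_2 = 2 * np.random.random((4, 1)) - 1   # hidden -> output

for _ in range(60000):
    layer_1 = sigmoid(np.dot(training_inputs, weights_1))
    layer_2 = sigmoid(np.dot(layer_1, weights_2))

    # Propagate the error from the output layer back to the hidden layer.
    layer_2_delta = (training_outputs - layer_2) * sigmoid_derivative(layer_2)
    layer_1_delta = layer_2_delta.dot(weights_2.T) * sigmoid_derivative(layer_1)

    weights_2 += layer_1.T.dot(layer_2_delta)
    weights_1 += training_inputs.T.dot(layer_1_delta)

# New situation [1, 1, 0]: the expected answer is 0, and the output should be close to it.
print(sigmoid(np.dot(sigmoid(np.dot([1, 1, 0], weights_1)), weights_2)))
```

Running this prints a value near 0 for the unseen input, which is the behaviour the article describes; the exact number will differ from the author's 0.0078876 because the training data and hyperparameters here are only assumed.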
" Ben Brown,1.1K,7,https://blog.howdy.ai/what-will-the-automated-workplace-look-like-495f9d1e87da?source=tag_archive---------6----------------,Start automating your business tasks with Slack – Howdy,"If you haven’t read about it in the Times or heard about it on NPR yet,you are soon going to be replaced by a robot at your job. All the jobs we thought were safe because they required experience and nuance can now be done by computers. Martin Ford, author of the book the Times and NPR are reporting on, calls it “the threat of a jobless future.” A future where computers write our newspaper articles, create our legal contracts, and compose our symphonies. Automating this type of complicated, quasi-creative task is really impressive. It requires super computers and uses the forefront of artificial intelligence to achieve this shocking result. It requires tons of data and lots of programming using advanced systems not available to ordinary people. But not everything requires deep learning. Some of the things we do in our every day lives, especially at our jobs, can be automated. Though it used to be the domain of the geek, scripting and automation is invading all aspects of the workplace. Workers and organizations who can master scripting and automation will gain an edge on those who can’t. We all have to face the reality that a well built script might be faster and more reliable than we can be at some parts of our jobs. Those of us who can create and wield this type of tool will be able to do better work faster. Luckily, inside messaging tools like Slack, creating customized, interactive automation tools for business tasks is possible with a little open source code, some cloud tools that are mostly free, and a bit of self reflection. “Bots” are apps that live alongside users in a chatroom. Users can issue commands to bots by sending messages to them, or by using special keywords in the chatroom. Traditionally, bots have been used for things like server maintenance and running software tests, but now, using the connected devices all around us, nearly anything can be automated and controlled by a bot. A common task in many technology teams is the stand-up meeting. Everyone stands up, and one at a time, tells the team what they’ve been working on, what they’ve got coming up next, and any problems they are facing. Each person takes a few minutes to speak. In many teams, this is already taking place in a chat room. If there are 10 people on a team, and each person speaks for just 90 seconds, they’ll spend 15 minutes just bringing people up to speed. Nothing has been discussed, no problems have yet been solved. What happens if this process is automated using a “bot” in an environment like Slack? A stand-up is triggered — automatically, or by a project manager. Using a flexible script, the bot simultaneously reaches out to every member of the team via a private message on Slack. The bot has an interactive conversation with each team member in parallel and collects everyone’s responses. Everyone still spends 90 seconds talking about their work, but now it is the same 90 seconds. The bot, now finished collecting the checkin responses, shares its report with all the stakeholders. Just 2 minutes into the meeting, everyone involved has a single document to look at that contains the up to date status of the project. The team gains 13 minutes during which they can discuss this information, clear blockers, and get back to work. 
Now, this is admittedly an aggressive application of this approach that won’t work for everyone — some teams may need the sequential listing of updates, some teams may need to actually stand up and use their voices. The point I’m trying to make is that automating things like this exposes ways for the work to be improved, for time to be saved, and for the process to evolve. What other processes could be automated like this? What if there was a meeting runner bot that automatically sent out an agenda to all attendees before the meeting, then collected, collated and delivered updates to team members? It could make meetings shorter and more productive by reducing the time needed to bring everyone up to speed. What if there was an HR bot that could collect performance reviews and feedback? What if there was a task management bot that could not only manage the creation of tasks and lists, but also create and deliver up to date progress reports to the whole team? There is a lot to be gained with simple process automation like this! So how can you and your organization benefit from this type of automation tool? First, you’ll need to commit to adopting a tool like Slack where your team can communicate and use this type of bot. Then, you’ll have to customize Slack to take advantage of built in and custom integrations, which takes some programming — though not much, as there are a ton of open source tools ready to use. An organization like my company XOXCO can help you do this. Before you can automate something, you have to know the process and be able to write it down in detail. You’ll have to think about all the special cases that occur. Not only will this allow you to build an automation script, it will help you to hone and document the processes by which your business is conducted! When we do things, we do them one at a time. Robots can do lots of things at once — so once you’ve got your process documented, think about how the steps might be able to run in parallel. For example, could the bot talk to multiple people at once instead of doing it sequentially? Since your script can only do what you tell it, you’ll need to plan for the contingencies that might occur while it runs. What if someone doesn’t respond in time? What if information is unavailable? What if a step in the process fails? Think through these cases and prepare your script to handle them. For example, we built in a 5 minute timeout for our project manager bot — if a user doesn’t respond in 5 minutes, they get a reminder to checkin in person, and their lack of a response is indicated in the report. This may sound complicated, but when it boils down, we’re just talking about including an ELSE for every IF — a good practice for any software or process to incorporate. Your bots, once deployed, can become valuable members of your team. Their success is dependent on your team’s desire to use them, and that they provide a better, faster, more reliable way to achieve organizational goals. Bots should have a user-friendly personality and represent and support company culture. Bots should talk like real people, but not pretend to be real people. Our rule of thumb: try to be as smart as a puppy, which will engender an attitude of forgiveness when the bot does something not quite right. This type of software automation has been common in certain groups for years. There may already be a software automation expert in your midst. She’s probably part of the server administration team, or the quality assurance group. 
Right now she works on code deployment, or writes software tests. Go find her, and go put her in a room with a project manager and a content strategist, and see if they can identify and automate the team’s top three time sucking activities in a way that is not only useful but fun to use. When we start to design software for messaging, the entire application must be boiled down to words, without colors to choose, navigation to click and sidebars to fill with widgets. This can help us not only build better, more useful software, but put simply, requires us to run our businesses in a more organized, documented and well-understood way. Don’t wait for the Artificial Intelligence explosion to arrive. Start putting these tools to work today. Update: You can now use a fully realized version of the bot discussed in this post — we’ve launched it under the name Howdy! Add Howdy to your team to run meetings, capture information, and automate common tasks for your team. Read more about our launch here. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. I’m a designer and technologist in Austin, Texas. I co-founded XOXCO in 2008. The official blog of Howdy.ai and Botkit " Frank Diana,428,11,https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf?source=tag_archive---------7----------------,Digital Transformation of Business and Society – Frank Diana – Medium,"At a recent KPMG Robotic Innovations event, Futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the Video of his presentation. As Gerd describes, he is a Futurist focused on foresight and observations — not predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly under-estimating the sheer velocity of change. With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this. He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is, to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. 
Here are some snapshots from his presentation: Gerd then summarized the session as follows: The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (lateral) the better we will be at designing our future. My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader’s collective experience that can guide them through the type of change ahead — it requires us all to think differently When looking at AI, consider trying IA first (intelligent assistance / augmentation). My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology as a supplement, not a replacement Efficiency and cost reduction based on automation, AI/IA and Robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated. My take: Future thinking is critical for us to be effective here. We have to have a sense as to where all of this is heading, if we are to effectively create new sources of value We won’t just need better algorithms — we also need stronger humarithms i.e. values, ethics, standards, principles and social contracts. My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes — and kudos to him for being that voice “The best way to predict the future is to create it” (Alan Kay). My Take: our context when we think about the future puts it years away, and that is just not the case anymore. What we think will take ten years is likely to happen in two. We can’t create the future if we don’t focus on it through an exponential lens Originally published at frankdiana.wordpress.com on September 10, 2015. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. TCS Executive focused on the rapid evolution of society and business. Fascinated by the view of the world in the next decade and beyond https://frankdiana.net/ " Rand Hindi,693,12,https://medium.com/snips-ai/how-artificial-intelligence-will-make-technology-disappear-503cd88e1e6a?source=tag_archive---------8----------------,How Artificial Intelligence Will Make Technology Disappear,"This is a redacted transcript of a TEDx talk I gave last April at Ecole Polytechnique in France. The video can be seen on Youtube here. Enjoy ;-) Last March, I was in Costa Rica with my girlfriend, spending our days between beautiful beaches and jungles full of exotic animals. There was barely any connectivity and we were immersed in nature in a way that we could never be in a big city. It felt great. But in the evening, when we got back to the hotel and connected to the WiFi, our phones would immediately start pushing an entire day’s worth of notifications, constantly interrupting our special time together. It interrupted us while watching the sunset, while sipping a cocktail, while having dinner, while having an intimate moment. It took emotional time away from us. And it’s not just that our phones vibrated, it’s also that we kept checking them to see if we had received anything, as if we had some sort of compulsive addiction to it. Those rare messages that are highly rewarding, like being notified that Ashton Kutcher just tweeted this article, made consciously “unplugging” impossible. Just like Pavlov’s dog before us, we had become conditioned. 
In this case though, it has gotten so out of control that today, 9 out of 10 people experience “phantom vibrations”, which is when you think your phone vibrated in your pocket, whereas in fact it didn’t. How did this happen? Back in 1990, we didn’t have any connected devices. This was the “unplugged” era. There were no push notifications, no interruptions, nada. Things were analog, things were human. Around 1995, the Internet started taking off, and our computers became connected. With it came email, and the infamous “you’ve got mail!” notification. We started getting interrupted by people, companies and spammers sending us electronic messages at random moments. 10 years later, we entered the mobile era. This time, it is not 1, but 3 devices that are connected: a computer, a phone, and a tablet. The trouble is that since these devices don’t know which one you are currently using, the default strategy has been to push all notifications on all devices. Like when someone calls you on your phone, and it also rings on your computer, and actually keeps ringing after you’ve answered it on one of your devices! And it’s not just notifications; accessing a service and finding content is equally frustrating on mobile devices, with those millions of apps and tiny keyboards. If we take notifications and the need for explicit interactions as a proxy for technological friction, then each connected device adds more of it. Unfortunately, this is about to get much worse, since the number of connected devices is increasing exponentially! This year, in 2015, we are officially entering what is called the “Internet of Things” era. That’s when your watch, fridge, car and lamps are connected. It is expected that there will be more than 100 billion connected devices by 2025, or 14 for every person on this planet. Just imagine what it will feel like to interact manually and receive notifications simultaneously on 14 devices.. That’s definitely not the future we were promised! There is hope though. There is hope that Artificial Intelligence will fix this. Not the one Elon Musk refers to that will enslave us all, but rather a human-centric domain of A.I. called “Context-Awareness”, which is about giving devices the ability to adapt to our current situation. It’s about figuring out which device to push notifications on. It’s about figuring out you are late for a meeting and notifying people for you. It’s about figuring out you are on a date and deactivating your non-urgent notifications. It’s about giving you back the freedom to experience the real world again. When you look at the trend in the capabilities of A.I., what you see it that it takes a bit longer to start, but when it does, it grows much faster. We already have A.I.s that can learn to play video games and beat world champions, so it’s just a matter of time before they reach human level intelligence. There is an inflexion point, and we just crossed it. Taking the connected devices curve, and subtracting the one for A.I., we see that the overall friction keeps increasing over the next few years until the point where A.I. becomes so capable that this friction flips around and quickly disappears. In this era, called “Ubiquitous Computing”, adding new connected devices does not add friction, it actually adds value! For example, our phones and computers will be smart enough to know where to route the notifications. Our cars will drive themselves, already knowing the destination. 
Our beds will be monitoring our sleep, and anticipating when we will be waking up so that we have freshly brewed coffee ready in the kitchen. It will also connect with the accelerometers in our phones and the electricity sockets to determine how many people are in the bed, and adjust accordingly. Our alarm clocks won’t need to be set; they will be connected to our calendars and beds to determine when we fell asleep and when we need to wake up. All of this can also be aggregated, offering public transport operators access to predicted passenger flows so that there are always enough trains running. Traffic lights will adjust based on self-driving cars’ planned route. Power plants will produce just enough electricity, saving costs and the environment. Smart cities, smart homes, smart grids.. They are all just consequences of having ubiquitous computing! By the time this happens, technology will have become so deeply integrated in our lives and ourselves that we simply won’t notice it anymore. Artificial Intelligence will have made technology disappear from our consciousness, and the world will feel unplugged again. I know this sounds crazy, but there are historical examples of other technologies that followed a similar pattern. For example, back in the 1800s, electricity was very tangible. It was expensive, hard to produce, would cut all the time, and was dangerous. You would get electrocuted and your house could catch fire. Back then, people actually believed that oil lamps were safer! But as electricity matured, it became cheaper, more reliable, and safer. Eventually, it was everywhere, in our walls, lamps, car, phone, and body. It became ubiquitous, and we stopped noticing it. Today, the exact same thing is happening with connected devices. Building this ubiquitous computing future relies on giving devices the ability to sense and react to the current context, which is called “context-awareness”. A good way to think about it is through the combination of 4 layers: the device layer, which is about making devices talk to each other; the individual layer, which encompasses everything related to a particular person, such as his location history, calendar, emails or health records; the social layer, which models the relationship between individuals, and finally the environmental layer, which is everything else, such as the weather, the buildings, the streets, trees and cars. For example, to model the social layer, we can look at the emails that were sent and received by someone, which gives us an indication of social connection strength between a group of people. The graph shown above is extracted from my professional email account using the MIT Immersion tool, over a period of 6 months. The huge green bubble is one of my co-founder (which sends way too many emails!), as is the red bubble. The other fairly large ones are other people in my team that I work closely with. But what’s interesting is that we can also see who in my network works together, as they will tend to be included together in emails threads and thus form clusters in this graph. If you add some contextual information such as the activity I was engaged in, or the type of language being used in the email, you can determine the nature of the relationship I have with each person (personal, professional, intimate, ..) as well as its degree. And if you now take the difference in these patterns over time, you can detect major events, such as changing jobs, closing an investment round, launching a new product or hiring key people! 
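As a toy illustration of the social layer just described, here is a minimal sketch that estimates connection strength by counting how often pairs of people appear on the same email thread. The message format is an assumption (already-parsed dictionaries with a sender and recipients); a real system would work from raw mailbox data and weight ties by recency, language and context.

```python
# Toy sketch of the "social layer": estimate connection strength from email
# co-occurrence. Assumes messages are already parsed into dicts with a sender
# and a list of recipients (a simplifying assumption).
from collections import Counter
from itertools import combinations

messages = [
    {"from": "me", "to": ["cofounder_a", "cofounder_b"]},
    {"from": "cofounder_a", "to": ["me"]},
    {"from": "me", "to": ["cofounder_a", "investor"]},
]

edge_weights = Counter()
for msg in messages:
    participants = sorted({msg["from"], *msg["to"]})
    # Everyone on the same thread is treated as socially connected.
    for a, b in combinations(participants, 2):
        edge_weights[(a, b)] += 1

# Strongest ties first: a crude proxy for the bubble sizes in the graph above.
for (a, b), weight in edge_weights.most_common():
    print(f"{a} <-> {b}: {weight}")
```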
Of course, all this can be done on social graphs as well as professional ones. Now that we have a better representation of someone’s social connections, we can use it to perform better natural language processing (NLP) of calendar events by disambiguating events like “Chat with Michael”, which would then assign a higher probability to my co-founder. But a calendar won’t help us figure out habits such as going to the gym after work, or hanging out in a specific neighborhood on Friday evenings. For that, we need another source of data: geolocation. By monitoring our location over time and detecting the places we have been to, we can understand our habits, and thus, predict what we will be doing next. In fact, knowing the exact place we are at is essential to predict our intentions, since most of the things we do with our devices are based on what we are doing in the real world. Unfortunately, location is very noisy, and we never know exactly where someone is. For example below, I was having lunch in San Francisco, and this is what my phone recorded while I was not moving. Clearly it is impossible to know where I actually am! To circumvent this problem, we can score each place according to the current context. For example, we are more likely to be at a restaurant during lunch time than at a nightclub. If we then combine this with a user-specific model based on their location history, we can achieve very high levels of accuracy. For example, if I have been to a Starbucks in the past, it will increase the probability that I am there now, as well as the probability of any other coffee shop. And because we now know that I am in a restaurant, my devices can surface the apps and information that are relevant to this particular place, such as reviews or mobile payments apps accepted there. If I was at the gym, it would be my sports apps. If I was home, it would be my leisure and home automation apps. If we combine this timeline of places with the phone’s accelerometer patterns, we can then determine the transportation mode that was taken between those places. With this, our connected watches could now tell us to stand up when it detects we are still, stop at a rest area when it detects we are driving, or tell us where the closest bike stand is when cycling! These individual transit patterns can then be aggregated over several thousand users to recreate very precise population flow in the city’s infrastructure, as we have done below for Paris. Not only does it give us an indication of how many people transit in each station, it also give us the route they have been taking, where they changed train or if they walked between stations. Combining this with data from the city — concerts, office and residential buildings, population demographics, ... — enables you to see how each factor impacts public transport, and even predict how many people will be boarding trains throughout the day. It can then be used to notify commuters that they should take a different train if they want to sit on their way home, and dynamically adjust the train schedules, maximizing the efficiency of the network both in terms of energy saved and comfort. And it’s not just public transport. The same model and data can be used to predict queues in post offices, by taking into account hyperlocal factors such as when the welfare checks are being paid, the bank holidays, the proximity of other post offices and the staff strikes. This is shown below, where the blue curve is the real load, and the orange one is the predicted load. 
This model can be used to notify people of the best time to drop and pickup their parcels, which results in better yield management and customer service. It can also be used to plan the construction of new post offices, by sizing them accordingly. And since a post office is just a retail store, everything that works here can work for all retailers: grocery stores, supermarkets, shoe shops, etc.. It could then be plugged into our devices, enabling them to optimize our shopping schedule and make sure we never queue again! This contextual modeling approach is in fact so powerful that it can even predict the risk of car accidents just by looking at features such as the street topologies, the proximity of bars that just closed, the road surface or the weather. Since these features are generalizable throughout the city, we can make predictions even in places where there was never a car accident! For example here, we can see that our model correctly detects Trafalgar square as being dangerous, even though nowhere did we explicitly say so. It discovered it automatically from the data itself. It was even able to identify the impact of cultural events, such as St Patrick’s day or New Year’s Eve! How cool would it be if our self-driving cars could take this into account? If we combine all these different layers — personal, social, environmental — we can recreate a highly contextualized timeline of what we have been doing throughout the day, which in turn enables us to predict what our intentions are. Making our devices able to figure out our current context and predict our intentions is the key to building truly intelligent products. With that in mind, our team has been prototyping a new kind of smartphone interface, one that leverages this contextual intelligence to anticipate which services and apps are needed at any given time, linking directly to the relevant content inside them. It’s not yet perfect, but it’s a first step towards our long term vision — and it certainly saves a lot of time, swipes and taps! One thing in particular that we are really proud of is that we were able to build privacy by design (full post coming soon!). It is a tremendous engineering challenge, but we are now running all our algorithms directly on the device. Whether it’s the machine learning classifiers, the signal processing, the natural language processing or the email mining, they are all confined to our smartphones, and never uploaded to our servers. Basically, it means we can now harness the full power of A.I. without compromising our privacy, something that has never been achieved before. It’s important to understand that this is not just about building some cool tech or the next viral app. Nor is it about making our future look like a science-fiction movie. It’s actually about making technology disappear into the background, so that we can regain the freedom to spend quality time with the people we care about. If you enjoyed this article, it would really help if you hit recommend below, and shared it on twitter (we are @randhindi & @snips) :-) From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Entrepreneur & AI researcher working on Making Technology Disappear. CEO @ snips.ai. #AI, #privacy and #blockchain. Follow http://Instagram.com/randhindi This publication features the articles written by the Snips team, fellows, and friends. Snips started as an AI lab in 2013, and now builds Private-by-Design, decentralized, open source voice assistants. 
" samim,323,8,https://medium.com/@samim/obama-rnn-machine-generated-political-speeches-c8abd18a2ea0?source=tag_archive---------9----------------,Obama-RNN — Machine generated political speeches. – samim – Medium,"Political speeches are among the most powerful tools leaders use to influence entire populations. Throughout history, political speeches have been used to start wars, end empires, fuel movements & inspire the masses. Political speeches apply many of the tricks found in the field of Social engineering: Congruent communication, intentional body language, Neuro-linguistic programming, HumanBuffer Overflows and more. Read more about The art of human hacking here. In recent years, Barack Obama has emerged as one of the most memorable and effective political speakers on the world stage. Messages like Hope and Yes we can have clearly left a mark on our collective consciousness. Since 2007, Obama’s highly skilled speech writers have written over 4,3megabytes or 730895 words of text, not counting interviews and debates. All of Obama’s speeches are conveniently readable here. With powerful artificial Intelligence / machine learning libraries becoming readily available as open-source, it seems obvious to apply them to speech writing. A particularly interesting class of algorithms are Recurrent Neural Networks (RNN). Recently Andrej Karpathy, a CS PhD student at Stanford has released char-rnn, a Multi-layer Recurrent Neural Networks for character-level language models. The library takes an arbitrary text file as input and learns to predict the next character in the sequence. As the results are pretty amazing, many interesting experiments have sprung up, ranging from composing music, rapping, writing cooking recipes and even re-writing the bible: Step 1 is to feed the model data, the more the better. For this i wrote a web-crawler in python that gathers all publicly available Obama Speeches, parses out the text and removes any interviews/debates. Step 2 is to train the model on the collected text. Training an RNN takes a bit of fiddling, as i painfully found out while training a model on 500mb of classical music midi files (mozart-rnn is wild!). Luckily the standard settings that Andrej suggested were a good starting point for the Obama-RNN. Step 3 is to test the model which automatically generates an unlimited amount of new speeches in the vein of Obama ́s previous speeches. The model can be seeded with a text from which it will start the sequence (e.g. war on terror) and a temperature which makes the output more conservative or diverse, at cost of more mistakes. Here is a selection of some of my favorite speeches the Obama-RNN generated so far. Keep in mind this is a just a quick hack project. With more time & effort the results can be improved. One of the most hilarious patterns to emerge, is that the Obama-RNN really loves to politely say: Good afternoon. Good day. God bless you. Good bless the United States of America. Thank you. I did a test combining Obamas speeches with other famous speeches from the 20st century (including everything from Mother Theresa, Malcom X to Mussolini and Hitler). This gives us an rather insane amalgam of human thought, seen through the “eyes” of a machine. A story for an other day. On this note: God bless you. Good bless the United States of America. Thank you. 
You can run your own Obama-RNN by following these instructions: Get in touch here: https://twitter.com/samim | http://samim.io/ From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Designer & Code Magician. Working at the intersection of HCI, Machine Learning & Creativity. Building tools for Enlightenment. Narrative Engineering. " Adam Geitgey,10.4K,15,https://medium.com/@ageitgey/machine-learning-is-fun-part-2-a26a10b68df3?source=tag_archive---------4----------------,Machine Learning is Fun! Part 2 – Adam Geitgey – Medium,"Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in Italiano, Español, Français, Türkçe, Русский, 한국어 Português, فارسی, Tiếng Việt or 普通话. In Part 1, we said that Machine Learning is using generic algorithms to tell you something interesting about your data without writing any code specific to the problem you are solving. (If you haven’t already read part 1, read it now!). This time, we are going to see one of these generic algorithms do something really cool — create video game levels that look like they were made by humans. We’ll build a neural network, feed it existing Super Mario levels and watch new ones pop out! Just like Part 1, this guide is for anyone who is curious about machine learning but has no idea where to start. The goal is be accessible to anyone — which means that there’s a lot of generalizations and we skip lots of details. But who cares? If this gets anyone more interested in ML, then mission accomplished. Back in Part 1, we created a simple algorithm that estimated the value of a house based on its attributes. Given data about a house like this: We ended up with this simple estimation function: In other words, we estimated the value of the house by multiplying each of its attributes by a weight. Then we just added those numbers up to get the house’s value. Instead of using code, let’s represent that same function as a simple diagram: However this algorithm only works for simple problems where the result has a linear relationship with the input. What if the truth behind house prices isn’t so simple? For example, maybe the neighborhood matters a lot for big houses and small houses but doesn’t matter at all for medium-sized houses. How could we capture that kind of complicated detail in our model? To be more clever, we could run this algorithm multiple times with different of weights that each capture different edge cases: Now we have four different price estimates. Let’s combine those four price estimates into one final estimate. We’ll run them through the same algorithm again (but using another set of weights)! Our new Super Answer combines the estimates from our four different attempts to solve the problem. Because of this, it can model more cases than we could capture in one simple model. Let’s combine our four attempts to guess into one big diagram: This is a neural network! Each node knows how to take in a set of inputs, apply weights to them, and calculate an output value. By chaining together lots of these nodes, we can model complex functions. There’s a lot that I’m skipping over to keep this brief (including feature scaling and the activation function), but the most important part is that these basic ideas click: It’s just like LEGO! 
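Here is a minimal sketch of the idea in the diagram: four weighted estimates of the same house, combined by a second set of weights into one final answer. All the weights and features below are made-up placeholders; in a real network they are learned from data rather than set by hand.

```python
# Sketch of the "four estimates combined into one" idea from the diagram above.
# All weights are made-up placeholders; in practice they are learned, not hand-set.

house = [3, 2000, 10]  # bedrooms, square footage, neighborhood id (toy features)

def weighted_estimate(features, weights, bias):
    return sum(f * w for f, w in zip(features, weights)) + bias

# Four "attempts" with different weights, each capturing a different edge case.
layer_1_weights = [
    ([0.8, 0.4, 0.1], 20000),
    ([0.2, 0.9, 0.05], 5000),
    ([0.5, 0.5, 0.3], 10000),
    ([0.1, 0.7, 0.6], 0),
]
estimates = [weighted_estimate(house, w, b) for w, b in layer_1_weights]

# Combine the four estimates with another set of weights: the "super answer".
layer_2_weights = ([0.3, 0.3, 0.2, 0.2], 1000)
final_price = weighted_estimate(estimates, *layer_2_weights)
print(final_price)
```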
We can’t model much with one single LEGO block, but we can model anything if we have enough basic LEGO blocks to stick together: The neural network we’ve seen always returns the same answer when you give it the same inputs. It has no memory. In programming terms, it’s a stateless algorithm. In many cases (like estimating the price of house), that’s exactly what you want. But the one thing this kind of model can’t do is respond to patterns in data over time. Imagine I handed you a keyboard and asked you to write a story. But before you start, my job is to guess the very first letter that you will type. What letter should I guess? I can use my knowledge of English to increase my odds of guessing the right letter. For example, you will probably type a letter that is common at the beginning of words. If I looked at stories you wrote in the past, I could narrow it down further based on the words you usually use at the beginning of your stories. Once I had all that data, I could use it to build a neural network to model how likely it is that you would start with any given letter. Our model might look like this: But let’s make the problem harder. Let’s say I need to guess the next letter you are going to type at any point in your story. This is a much more interesting problem. Let’s use the first few words of Ernest Hemingway’s The Sun Also Rises as an example: What letter is going to come next? You probably guessed ’n’ — the word is probably going to be boxing. We know this based on the letters we’ve already seen in the sentence and our knowledge of common words in English. Also, the word ‘middleweight’ gives us an extra clue that we are talking about boxing. In other words, it’s easy to guess the next letter if we take into account the sequence of letters that came right before it and combine that with our knowledge of the rules of English. To solve this problem with a neural network, we need to add state to our model. Each time we ask our neural network for an answer, we also save a set of our intermediate calculations and re-use them the next time as part of our input. That way, our model will adjust its predictions based on the input that it has seen recently. Keeping track of state in our model makes it possible to not just predict the most likely first letter in the story, but to predict the most likely next letter given all previous letters. This is the basic idea of a Recurrent Neural Network. We are updating the network each time we use it. This allows it to update its predictions based on what it saw most recently. It can even model patterns over time as long as we give it enough of a memory. Predicting the next letter in a story might seem pretty useless. What’s the point? One cool use might be auto-predict for a mobile phone keyboard: But what if we took this idea to the extreme? What if we asked the model to predict the next most likely character over and over — forever? We’d be asking it to write a complete story for us! We saw how we could guess the next letter in Hemingway’s sentence. Let’s try generating a whole story in the style of Hemingway. To do this, we are going to use the Recurrent Neural Network implementation that Andrej Karpathy wrote. Andrej is a Deep-Learning researcher at Stanford and he wrote an excellent introduction to generating text with RNNs, You can view all the code for the model on github. We’ll create our model from the complete text of The Sun Also Rises — 362,239 characters using 84 unique letters (including punctuation, uppercase/lowercase, etc). 
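To give a feel for what feeding the book to a character-level model involves, here is a minimal sketch of the usual first step: building the character vocabulary and encoding the text as integer ids. The file name is a placeholder, and this is only the data preparation, not the RNN itself (for that, see Karpathy's char-rnn mentioned above).

```python
# Minimal sketch of preparing text for a character-level model: build the
# character vocabulary and encode the text as integer ids. The file name is a
# placeholder for whatever text you train on; the RNN itself is not shown here.
with open("the_sun_also_rises.txt", encoding="utf-8") as f:
    text = f.read()

chars = sorted(set(text))                        # e.g. 84 unique characters
char_to_id = {c: i for i, c in enumerate(chars)}
id_to_char = {i: c for c, i in char_to_id.items()}

encoded = [char_to_id[c] for c in text]

# Training pairs: each character is asked to predict the one that follows it.
inputs, targets = encoded[:-1], encoded[1:]
print(len(text), "characters,", len(chars), "unique")
```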
This data set is actually really small compared to typical real-world applications. To generate a really good model of Hemingway’s style, it would be much better to have at several times as much sample text. But this is good enough to play around with as an example. As we just start to train the RNN, it’s not very good at predicting letters. Here’s what it generates after a 100 loops of training: You can see that it has figured out that sometimes words have spaces between them, but that’s about it. After about 1000 iterations, things are looking more promising: The model has started to identify the patterns in basic sentence structure. It’s adding periods at the ends of sentences and even quoting dialog. A few words are recognizable, but there’s also still a lot of nonsense. But after several thousand more training iterations, it looks pretty good: At this point, the algorithm has captured the basic pattern of Hemingway’s short, direct dialog. A few sentences even sort of make sense. Compare that with some real text from the book: Even by only looking for patterns one character at a time, our algorithm has reproduced plausible-looking prose with proper formatting. That is kind of amazing! We don’t have to generate text completely from scratch, either. We can seed the algorithm by supplying the first few letters and just let it find the next few letters. For fun, let’s make a fake book cover for our imaginary book by generating a new author name and a new title using the seed text of “Er”, “He”, and “The S”: Not bad! But the really mind-blowing part is that this algorithm can figure out patterns in any sequence of data. It can easily generate real-looking recipes or fake Obama speeches. But why limit ourselves human language? We can apply this same idea to any kind of sequential data that has a pattern. In 2015, Nintendo released Super Mario MakerTM for the Wii U gaming system. This game lets you draw out your own Super Mario Brothers levels on the gamepad and then upload them to the internet so you friends can play through them. You can include all the classic power-ups and enemies from the original Mario games in your levels. It’s like a virtual LEGO set for people who grew up playing Super Mario Brothers. Can we use the same model that generated fake Hemingway text to generate fake Super Mario Brothers levels? First, we need a data set for training our model. Let’s take all the outdoor levels from the original Super Mario Brothers game released in 1985: This game has 32 levels and about 70% of them have the same outdoor style. So we’ll stick to those. To get the designs for each level, I took an original copy of the game and wrote a program to pull the level designs out of the game’s memory. Super Mario Bros. is a 30-year-old game and there are lots of resources online that help you figure out how the levels were stored in the game’s memory. Extracting level data from an old video game is a fun programming exercise that you should try sometime. Here’s the first level from the game (which you probably remember if you ever played it): If we look closely, we can see the level is made of a simple grid of objects: We could just as easily represent this grid as a sequence of characters with one character representing each object: We’ve replaced each object in the level with a letter: ...and so on, using a different letter for each different kind of object in the level. 
I ended up with text files that looked like this: Looking at the text file, you can see that Mario levels don’t really have much of a pattern if you read them line-by-line: The patterns in a level really emerge when you think of the level as a series of columns: So in order for the algorithm to find the patterns in our data, we need to feed the data in column-by-column. Figuring out the most effective representation of your input data (called feature selection) is one of the keys of using machine learning algorithms well. To train the model, I needed to rotate my text files by 90 degrees. This made sure the characters were fed into the model in an order where a pattern would more easily show up: Just like we saw when creating the model of Hemingway’s prose, a model improves as we train it. After a little training, our model is generating junk: It sort of has an idea that ‘-’s and ‘=’s should show up a lot, but that’s about it. It hasn’t figured out the pattern yet. After several thousand iterations, it’s starting to look like something: The model has almost figured out that each line should be the same length. It has even started to figure out some of the logic of Mario: The pipes in mario are always two blocks wide and at least two blocks high, so the “P”s in the data should appear in 2x2 clusters. That’s pretty cool! With a lot more training, the model gets to the point where it generates perfectly valid data: Let’s sample an entire level’s worth of data from our model and rotate it back horizontal: This data looks great! There are several awesome things to notice: Finally, let’s take this level and recreate it in Super Mario Maker: Play it yourself! If you have Super Mario Maker, you can play this level by bookmarking it online or by looking it up using level code 4AC9–0000–0157-F3C3. The recurrent neural network algorithm we used to train our model is the same kind of algorithm used by real-world companies to solve hard problems like speech detection and language translation. What makes our model a ‘toy’ instead of cutting-edge is that our model is generated from very little data. There just aren’t enough levels in the original Super Mario Brothers game to provide enough data for a really good model. If we could get access to the hundreds of thousands of user-created Super Mario Maker levels that Nintendo has, we could make an amazing model. But we can’t — because Nintendo won’t let us have them. Big companies don’t give away their data for free. As machine learning becomes more important in more industries, the difference between a good program and a bad program will be how much data you have to train your models. That’s why companies like Google and Facebook need your data so badly! For example, Google recently open sourced TensorFlow, its software toolkit for building large-scale machine learning applications. It was a pretty big deal that Google gave away such important, capable technology for free. This is the same stuff that powers Google Translate. But without Google’s massive trove of data in every language, you can’t create a competitor to Google Translate. Data is what gives Google its edge. Think about that the next time you open up your Google Maps Location History or Facebook Location History and notice that it stores every place you’ve ever been. In machine learning, there’s never a single way to solve a problem. You have limitless options when deciding how to pre-process your data and which algorithms to use. 
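The column-by-column trick described above, rotating each level file by 90 degrees so the model reads columns instead of rows, is one such pre-processing choice and is simple enough to sketch. The file names are placeholders, and the assumption is that each level file is a grid of single-character tiles with equal-length rows.

```python
# Sketch of the "rotate the level 90 degrees" pre-processing step described above,
# so the character-level model sees the level column-by-column. File names are
# placeholders; each level file is assumed to be a grid of equal-length rows.
def rotate_level(rows):
    # zip(*rows) groups characters by column; reversing the rows first gives a
    # clockwise rotation, so the left edge of the level becomes the first line.
    return ["".join(column) for column in zip(*reversed(rows))]

with open("level_1_1.txt") as f:
    rows = [line.rstrip("\n") for line in f]

with open("level_1_1_rotated.txt", "w") as out:
    out.write("\n".join(rotate_level(rows)))
```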
Often combining multiple approaches will give you better results than any single approach. Readers have sent me links to other interesting approaches to generating Super Mario levels: If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun Part 3! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it. " Arthur Juliani,9K,6,https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0?source=tag_archive---------5----------------,Simple Reinforcement Learning with Tensorflow Part 0: Q-Learning with Tables and Neural Networks,"For this tutorial in my Reinforcement Learning series, we are going to be exploring a family of RL algorithms called Q-Learning algorithms. These are a little different than the policy-based algorithms that will be looked at in the the following tutorials (Parts 1–3). Instead of starting with a complex and unwieldy deep neural network, we will begin by implementing a simple lookup-table version of the algorithm, and then show how to implement a neural-network equivalent using Tensorflow. Given that we are going back to basics, it may be best to think of this as Part-0 of the series. It will hopefully give an intuition into what is really happening in Q-Learning that we can then build on going forward when we eventually combine the policy gradient and Q-learning approaches to build state-of-the-art RL agents (If you are more interested in Policy Networks, or already have a grasp on Q-Learning, feel free to start the tutorial series here instead). Unlike policy gradient methods, which attempt to learn functions which directly map an observation to an action, Q-Learning attempts to learn the value of being in a given state, and taking a specific action there. While both approaches ultimately allow us to take intelligent actions given a situation, the means of getting to that action differ significantly. You may have heard about DeepQ-Networks which can play Atari Games. These are really just larger and more complex implementations of the Q-Learning algorithm we are going to discuss here. For this tutorial we are going to be attempting to solve the FrozenLake environment from the OpenAI gym. For those unfamiliar, the OpenAI gym provides an easy way for people to experiment with their learning agents in an array of provided toy games. The FrozenLake environment consists of a 4x4 grid of blocks, each one either being the start block, the goal block, a safe frozen block, or a dangerous hole. The objective is to have an agent learn to navigate from the start to the goal without moving onto a hole. At any given time the agent can choose to move either up, down, left, or right. The catch is that there is a wind which occasionally blows the agent onto a space they didn’t choose. As such, perfect performance every time is impossible, but learning to avoid the holes and reach the goal are certainly still doable. The reward at every step is 0, except for entering the goal, which provides a reward of 1. 
Thus, we will need an algorithm that learns long-term expected rewards. This is exactly what Q-Learning is designed to provide. In it’s simplest implementation, Q-Learning is a table of values for every state (row) and action (column) possible in the environment. Within each cell of the table, we learn a value for how good it is to take a given action within a given state. In the case of the FrozenLake environment, we have 16 possible states (one for each block), and 4 possible actions (the four directions of movement), giving us a 16x4 table of Q-values. We start by initializing the table to be uniform (all zeros), and then as we observe the rewards we obtain for various actions, we update the table accordingly. We make updates to our Q-table using something called the Bellman equation, which states that the expected long-term reward for a given action is equal to the immediate reward from the current action combined with the expected reward from the best future action taken at the following state. In this way, we reuse our own Q-table when estimating how to update our table for future actions! In equation form, the rule looks like this: This says that the Q-value for a given state (s) and action (a) should represent the current reward (r) plus the maximum discounted (γ) future reward expected according to our own table for the next state (s’) we would end up in. The discount variable allows us to decide how important the possible future rewards are compared to the present reward. By updating in this way, the table slowly begins to obtain accurate measures of the expected future reward for a given action in a given state. Below is a Python walkthrough of the Q-Table algorithm implemented in the FrozenLake environment: (Thanks to Praneet D for finding the optimal hyperparameters for this approach) Now, you may be thinking: tables are great, but they don’t really scale, do they? While it is easy to have a 16x4 table for a simple grid world, the number of possible states in any modern game or real-world environment is nearly infinitely larger. For most interesting problems, tables simply don’t work. We instead need some way to take a description of our state, and produce Q-values for actions without a table: that is where neural networks come in. By acting as a function approximator, we can take any number of possible states that can be represented as a vector and learn to map them to Q-values. In the case of the FrozenLake example, we will be using a one-layer network which takes the state encoded in a one-hot vector (1x16), and produces a vector of 4 Q-values, one for each action. Such a simple network acts kind of like a glorified table, with the network weights serving as the old cells. The key difference is that we can easily expand the Tensorflow network with added layers, activation functions, and different input types, whereas all that is impossible with a regular table. The method of updating is a little different as well. Instead of directly updating our table, with a network we will be using backpropagation and a loss function. Our loss function will be sum-of-squares loss, where the difference between the current predicted Q-values, and the “target” value is computed and the gradients passed through the network. In this case, our Q-target for the chosen action is the equivalent to the Q-value computed in equation 1 above. 
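In code, the tabular Bellman update and the network's Q-target are the same idea written two ways. This is only a rough sketch to make the equation concrete; the discount, learning rate and random values are illustrative placeholders, not the tuned hyperparameters credited above:

```python
import numpy as np

n_states, n_actions = 16, 4
gamma, lr = 0.95, 0.8            # discount and learning rate (illustrative values)

# Tabular version: Q is a 16x4 table nudged toward r + gamma * max_a' Q(s', a')
Q = np.zeros((n_states, n_actions))

def q_table_update(s, a, r, s_next):
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += lr * (target - Q[s, a])

# Network version: the same target, but compared against the network's
# prediction with a sum-of-squares loss instead of written into a table.
def q_network_loss(predicted_q, a, r, max_q_next):
    target = r + gamma * max_q_next
    return np.square(target - predicted_q[a])
```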
Below is the Tensorflow walkthrough of implementing our simple Q-Network: While the network learns to solve the FrozenLake problem, it turns out it doesn’t do so quite as efficiently as the Q-Table. While neural networks allow for greater flexibility, they do so at the cost of stability when it comes to Q-Learning. There are a number of possible extensions to our simple Q-Network which allow for greater performance and more robust learning. Two tricks in particular are referred to as Experience Replay and Freezing Target Networks. Those improvements and other tweaks were the key to getting Atari-playing Deep Q-Networks, and we will be exploring those additions in the future. For more info on the theory behind Q-Learning, see this great post by Tambet Matiisen. I hope this tutorial has been helpful for those curious about how to implement simple Q-Learning algorithms! If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated! If you’d like to follow my work on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on Twitter @awjliani. More from my Simple Reinforcement Learning with Tensorflow series: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @Unity3D & Cognitive Neuroscience PhD student. Exploring frontier technology through the lens of artificial intelligence, data science, and the shape of things to come " Adam Geitgey,6.8K,11,https://medium.com/@ageitgey/machine-learning-is-fun-part-6-how-to-do-speech-recognition-with-deep-learning-28293c162f7a?source=tag_archive---------6----------------,Machine Learning is Fun Part 6: How to do Speech Recognition with Deep Learning,"Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 普通话 , 한국어, Tiếng Việt or Русский. Speech recognition is invading our lives. It’s built into our phones, our game consoles and our smart watches. It’s even automating our homes. For just $50, you can get an Amazon Echo Dot — a magic box that allows you to order pizza, get a weather report or even buy trash bags — just by speaking out loud: The Echo Dot has been so popular this holiday season that Amazon can’t seem to keep them in stock! But speech recognition has been around for decades, so why is it just now hitting the mainstream? The reason is that deep learning finally made speech recognition accurate enough to be useful outside of carefully controlled environments. Andrew Ng has long predicted that as speech recognition goes from 95% accurate to 99% accurate, it will become a primary way that we interact with computers. The idea is that this 4% accuracy gap is the difference between annoyingly unreliable and incredibly useful. Thanks to Deep Learning, we’re finally cresting that peak. Let’s learn how to do speech recognition with deep learning! If you know how neural machine translation works, you might guess that we could simply feed sound recordings into a neural network and train it to produce text: That’s the holy grail of speech recognition with deep learning, but we aren’t quite there yet (at least at the time that I wrote this — I bet that we will be in a couple of years). The big problem is that speech varies in speed. 
One person might say “hello!” very quickly and another person might say “heeeelllllllllllllooooo!” very slowly, producing a much longer sound file with much more data. But both sound files should be recognized as exactly the same text — “hello!” Automatically aligning audio files of various lengths to a fixed-length piece of text turns out to be pretty hard. To work around this, we have to use some special tricks and extra processing in addition to a deep neural network. Let’s see how it works! The first step in speech recognition is obvious — we need to feed sound waves into a computer. In Part 3, we learned how to take an image and treat it as an array of numbers so that we can feed it directly into a neural network for image recognition: But sound is transmitted as waves. How do we turn sound waves into numbers? Let’s use this sound clip of me saying “Hello”: Sound waves are one-dimensional. At every moment in time, they have a single value based on the height of the wave. Let’s zoom in on one tiny part of the sound wave and take a look: To turn this sound wave into numbers, we just record the height of the wave at equally-spaced points: This is called sampling. We are taking a reading thousands of times a second and recording a number representing the height of the sound wave at that point in time. That’s basically all an uncompressed .wav audio file is. “CD Quality” audio is sampled at 44.1kHz (44,100 readings per second). But for speech recognition, a sampling rate of 16kHz (16,000 samples per second) is enough to cover the frequency range of human speech. Let’s sample our “Hello” sound wave 16,000 times per second. Here are the first 100 samples: You might be thinking that sampling is only creating a rough approximation of the original sound wave because it’s only taking occasional readings. There are gaps in between our readings, so we must be losing data, right? But thanks to the Nyquist theorem, we know that we can use math to perfectly reconstruct the original sound wave from the spaced-out samples — as long as we sample at least twice as fast as the highest frequency we want to record. I mention this only because nearly everyone gets this wrong and assumes that using higher sampling rates always leads to better audio quality. It doesn’t. We now have an array of numbers with each number representing the sound wave’s amplitude at 1/16,000th-of-a-second intervals. We could feed these numbers right into a neural network. But trying to recognize speech patterns by processing these samples directly is difficult. Instead, we can make the problem easier by doing some pre-processing on the audio data. Let’s start by grouping our sampled audio into 20-millisecond-long chunks. Here’s our first 20 milliseconds of audio (i.e., our first 320 samples): Plotting those numbers as a simple line graph gives us a rough approximation of the original sound wave for that 20-millisecond period of time: This recording is only 1/50th of a second long. But even this short recording is a complex mish-mash of different frequencies of sound. There are some low sounds, some mid-range sounds, and even some high-pitched sounds sprinkled in. But taken all together, these different frequencies mix together to make up the complex sound of human speech. To make this data easier for a neural network to process, we are going to break apart this complex sound wave into its component parts. We’ll break out the low-pitched parts, the next-lowest-pitched parts, and so on. 
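Here, in code, is roughly what those pre-processing steps boil down to: chunking the 16 kHz samples into 20 ms frames, then breaking each frame into frequency bands (the energy-per-band "fingerprint" is exactly what the next paragraph describes). The file name and the assumption of a mono 16 kHz recording are mine; this is a sketch, not the article's pipeline:

```python
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("hello.wav")            # assume a mono, 16,000 Hz recording
chunk = samples[: int(rate * 0.020)].astype(float)   # the first 20 ms = 320 samples

# Fourier transform: split the chunk into the simple frequencies that make it up...
spectrum = np.fft.rfft(chunk)
freqs = np.fft.rfftfreq(len(chunk), d=1.0 / rate)

# ...then add up the energy in each 50 Hz band, from low pitch to high pitch.
band_energy = [
    (np.abs(spectrum[(freqs >= lo) & (freqs < lo + 50)]) ** 2).sum()
    for lo in range(0, 8000, 50)
]
print(len(band_energy))   # 160 numbers: a rough "fingerprint" of this 20 ms of audio
```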
Then by adding up how much energy is in each of those frequency bands (from low to high), we create a fingerprint of sorts for this audio snippet. Imagine you had a recording of someone playing a C Major chord on a piano. That sound is the combination of three musical notes— C, E and G — all mixed together into one complex sound. We want to break apart that complex sound into the individual notes to discover that they were C, E and G. This is the exact same idea. We do this using a mathematic operation called a Fourier transform. It breaks apart the complex sound wave into the simple sound waves that make it up. Once we have those individual sound waves, we add up how much energy is contained in each one. The end result is a score of how important each frequency range is, from low pitch (i.e. bass notes) to high pitch. Each number below represents how much energy was in each 50hz band of our 20 millisecond audio clip: But this is a lot easier to see when you draw this as a chart: If we repeat this process on every 20 millisecond chunk of audio, we end up with a spectrogram (each column from left-to-right is one 20ms chunk): A spectrogram is cool because you can actually see musical notes and other pitch patterns in audio data. A neural network can find patterns in this kind of data more easily than raw sound waves. So this is the data representation we’ll actually feed into our neural network. Now that we have our audio in a format that’s easy to process, we will feed it into a deep neural network. The input to the neural network will be 20 millisecond audio chunks. For each little audio slice, it will try to figure out the letter that corresponds the sound currently being spoken. We’ll use a recurrent neural network — that is, a neural network that has a memory that influences future predictions. That’s because each letter it predicts should affect the likelihood of the next letter it will predict too. For example, if we have said “HEL” so far, it’s very likely we will say “LO” next to finish out the word “Hello”. It’s much less likely that we will say something unpronounceable next like “XYZ”. So having that memory of previous predictions helps the neural network make more accurate predictions going forward. After we run our entire audio clip through the neural network (one chunk at a time), we’ll end up with a mapping of each audio chunk to the letters most likely spoken during that chunk. Here’s what that mapping looks like for me saying “Hello”: Our neural net is predicting that one likely thing I said was “HHHEE_LL_LLLOOO”. But it also thinks that it was possible that I said “HHHUU_LL_LLLOOO” or even “AAAUU_LL_LLLOOO”. We have some steps we follow to clean up this output. First, we’ll replace any repeated characters a single character: Then we’ll remove any blanks: That leaves us with three possible transcriptions — “Hello”, “Hullo” and “Aullo”. If you say them out loud, all of these sound similar to “Hello”. Because it’s predicting one character at a time, the neural network will come up with these very sounded-out transcriptions. For example if you say “He would not go”, it might give one possible transcription as “He wud net go”. The trick is to combine these pronunciation-based predictions with likelihood scores based on large database of written text (books, news articles, etc). You throw out transcriptions that seem the least likely to be real and keep the transcription that seems the most realistic. 
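Here is a toy version of that cleanup-and-rescore step. The '_' blank symbol and the candidate strings come from the example above; the word counts are made up, and a real system would use a proper language model rather than raw frequencies:

```python
import itertools

def clean_up(raw):
    """Collapse repeated characters, then drop the blank ('_') symbols."""
    collapsed = "".join(ch for ch, _ in itertools.groupby(raw))
    return collapsed.replace("_", "")

candidates = ["HHHEE_LL_LLLOOO", "HHHUU_LL_LLLOOO", "AAAUU_LL_LLLOOO"]
transcripts = [clean_up(c) for c in candidates]   # ['HELLO', 'HULLO', 'AULLO']

# Toy "language model": made-up counts of how often each word shows up
# in a big pile of written text.
word_counts = {"HELLO": 120000, "HULLO": 40, "AULLO": 0}
best = max(transcripts, key=lambda w: word_counts.get(w, 0))
print(best)   # HELLO
```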
Of our possible transcriptions “Hello”, “Hullo” and “Aullo”, obviously “Hello” will appear more frequently in a database of text (not to mention in our original audio-based training data) and thus is probably correct. So we’ll pick “Hello” as our final transcription instead of the others. Done! You might be thinking “But what if someone says ‘Hullo’? It’s a valid word. Maybe ‘Hello’ is the wrong transcription!” Of course it is possible that someone actually said “Hullo” instead of “Hello”. But a speech recognition system like this (trained on American English) will basically never produce “Hullo” as the transcription. It’s just such an unlikely thing for a user to say compared to “Hello” that it will always think you are saying “Hello” no matter how much you emphasize the ‘U’ sound. Try it out! If your phone is set to American English, try to get your phone’s digital assistant to recognize the world “Hullo.” You can’t! It refuses! It will always understand it as “Hello.” Not recognizing “Hullo” is a reasonable behavior, but sometimes you’ll find annoying cases where your phone just refuses to understand something valid you are saying. That’s why these speech recognition models are always being retrained with more data to fix these edge cases. One of the coolest things about machine learning is how simple it sometimes seems. You get a bunch of data, feed it into a machine learning algorithm, and then magically you have a world-class AI system running on your gaming laptop’s video card... Right? That sort of true in some cases, but not for speech. Recognizing speech is a hard problem. You have to overcome almost limitless challenges: bad quality microphones, background noise, reverb and echo, accent variations, and on and on. All of these issues need to be present in your training data to make sure the neural network can deal with them. Here’s another example: Did you know that when you speak in a loud room you unconsciously raise the pitch of your voice to be able to talk over the noise? Humans have no problem understanding you either way, but neural networks need to be trained to handle this special case. So you need training data with people yelling over noise! To build a voice recognition system that performs on the level of Siri, Google Now!, or Alexa, you will need a lot of training data — far more data than you can likely get without hiring hundreds of people to record it for you. And since users have low tolerance for poor quality voice recognition systems, you can’t skimp on this. No one wants a voice recognition system that works 80% of the time. For a company like Google or Amazon, hundreds of thousands of hours of spoken audio recorded in real-life situations is gold. That’s the single biggest thing that separates their world-class speech recognition system from your hobby system. The whole point of putting Google Now! and Siri on every cell phone for free or selling $50 Alexa units that have no subscription fee is to get you to use them as much as possible. Every single thing you say into one of these systems is recorded forever and used as training data for future versions of speech recognition algorithms. That’s the whole game! Don’t believe me? If you have an Android phone with Google Now!, click here to listen to actual recordings of yourself saying every dumb thing you’ve ever said into it: So if you are looking for a start-up idea, I wouldn’t recommend trying to build your own speech recognition system to compete with Google. 
Instead, figure out a way to get people to give you recordings of themselves talking for hours. The data can be your product instead. If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun! Part 7! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it. " Adam Geitgey,5.8K,16,https://medium.com/@ageitgey/machine-learning-is-fun-part-5-language-translation-with-deep-learning-and-the-magic-of-sequences-2ace0acca0aa?source=tag_archive---------7----------------,Machine Learning is Fun Part 5: Language Translation with Deep Learning and the Magic of Sequences,"Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 普通话, Русский, 한국어, Tiếng Việt or Italiano. We all know and love Google Translate, the website that can instantly translate between 100 different human languages as if by magic. It is even available on our phones and smartwatches: The technology behind Google Translate is called Machine Translation. It has changed the world by allowing people to communicate when it wouldn’t otherwise be possible. But we all know that high school students have been using Google Translate to... umm... assist with their Spanish homework for 15 years. Isn’t this old news? It turns out that over the past two years, deep learning has totally rewritten our approach to machine translation. Deep learning researchers who know almost nothing about language translation are throwing together relatively simple machine learning solutions that are beating the best expert-built language translation systems in the world. The technology behind this breakthrough is called sequence-to-sequence learning. It’s very powerful technique that be used to solve many kinds problems. After we see how it is used for translation, we’ll also learn how the exact same algorithm can be used to write AI chat bots and describe pictures. Let’s go! So how do we program a computer to translate human language? The simplest approach is to replace every word in a sentence with the translated word in the target language. Here’s a simple example of translating from Spanish to English word-by-word: This is easy to implement because all you need is a dictionary to look up each word’s translation. But the results are bad because it ignores grammar and context. So the next thing you might do is start adding language-specific rules to improve the results. For example, you might translate common two-word phrases as a single group. And you might swap the order nouns and adjectives since they usually appear in reverse order in Spanish from how they appear in English: That worked! If we just keep adding more rules until we can handle every part of grammar, our program should be able to translate any sentence, right? This is how the earliest machine translation systems worked. Linguists came up with complicated rules and programmed them in one-by-one. 
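A caricature of that rule-based approach, a dictionary lookup plus one hand-written reordering rule, might look like the sketch below. Everything here is illustrative (including the cheat of storing "quiero" as the two-word entry "I want"), which is exactly the kind of special-casing these systems accumulated by the thousands:

```python
# A caricature of rule-based translation: look each word up in a dictionary,
# then apply a hand-written rule that swaps noun/adjective pairs, since
# Spanish usually puts the adjective after the noun.
dictionary = {"quiero": "I want", "la": "the", "playa": "beach", "bonita": "pretty"}
nouns = {"playa"}
adjectives = {"bonita"}

def translate(sentence):
    words = sentence.lower().split()
    for i in range(len(words) - 1):
        if words[i] in nouns and words[i + 1] in adjectives:
            words[i], words[i + 1] = words[i + 1], words[i]   # "playa bonita" -> "bonita playa"
    return " ".join(dictionary.get(w, w) for w in words)

print(translate("Quiero la playa bonita"))   # "I want the pretty beach"
```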
Some of the smartest linguists in the world labored for years during the Cold War to create translation systems as a way to interpret Russian communications more easily. Unfortunately, this only worked for simple, plainly-structured documents like weather reports. It didn’t work reliably for real-world documents. The problem is that human language doesn’t follow a fixed set of rules. Human languages are full of special cases, regional variations, and just flat-out rule-breaking. The way we speak English is more influenced by who invaded whom hundreds of years ago than it is by someone sitting down and defining grammar rules. After the failure of rule-based systems, new translation approaches were developed using models based on probability and statistics instead of grammar rules. Building a statistics-based translation system requires lots of training data where the exact same text is translated into at least two languages. This double-translated text is called parallel corpora. In the same way that the Rosetta Stone was used by scientists in the 1800s to figure out Egyptian hieroglyphs from Greek, computers can use parallel corpora to guess how to convert text from one language to another. Luckily, there’s lots of double-translated text already sitting around in strange places. For example, the European Parliament translates their proceedings into 21 languages. So researchers often use that data to help build translation systems. The fundamental difference with statistical translation systems is that they don’t try to generate one exact translation. Instead, they generate thousands of possible translations and then they rank those translations by how likely each is to be correct. They estimate how “correct” something is by how similar it is to the training data. Here’s how it works: First, we break up our sentence into simple chunks that can each be easily translated: Next, we will translate each of these chunks by finding all the ways humans have translated those same chunks of words in our training data. It’s important to note that we are not just looking up these chunks in a simple translation dictionary. Instead, we are seeing how actual people translated these same chunks of words in real-world sentences. This helps us capture all of the different ways they can be used in different contexts: Some of these possible translations are used more frequently than others. Based on how frequently each translation appears in our training data, we can give it a score. For example, it’s much more common for someone to say “Quiero” to mean “I want” than to mean “I try.” So we can use how frequently “Quiero” was translated to “I want” in our training data to give that translation more weight than a less frequent translation. Next, we will use every possible combination of these chunks to generate a bunch of possible sentences. Just from the chunk translations we listed in Step 2, we can already generate nearly 2,500 different variations of our sentence by combining the chunks in different ways. Here are some examples: But in a real-world system, there will be even more possible chunk combinations because we’ll also try different orderings of words and different ways of chunking the sentence: Now we need to scan through all of these generated sentences to find the one that sounds the “most human.” To do this, we compare each generated sentence to millions of real sentences from books and news stories written in English. The more English text we can get our hands on, the better. 
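In miniature, that generate-and-rank idea looks something like this. The chunk options echo the example above, but the scores and the one-line "language model" are stand-ins; a real system scores candidates against statistics gathered from millions of real sentences:

```python
from itertools import product

# Each Spanish chunk has several candidate English translations,
# weighted by how often humans translated it that way in the training data
# (the weights here are made up for illustration).
chunk_options = [
    [("I want", 0.8), ("I try", 0.2)],
    [("to go to", 0.7), ("to run to", 0.3)],
    [("the prettiest beach", 0.6), ("the most pretty beach", 0.4)],
]

def language_model_score(sentence):
    # Stand-in for "how similar is this to real English text".
    return 1.0 if "prettiest" in sentence else 0.3

candidates = []
for combo in product(*chunk_options):
    sentence = " ".join(text for text, _ in combo)
    chunk_score = 1.0
    for _, p in combo:
        chunk_score *= p
    candidates.append((chunk_score * language_model_score(sentence), sentence))

print(max(candidates))   # the highest-scoring combination wins
```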
Take this possible translation: It’s likely that no one has ever written a sentence like this in English, so it would not be very similar to any sentences in our data set. We’ll give this possible translation a low probability score. But look at this possible translation: This sentence will be similar to something in our training set, so it will get a high probability score. After trying all possible sentences, we’ll pick the sentence that has the most likely chunk translations while also being the most similar overall to real English sentences. Our final translation would be “I want to go to the prettiest beach.” Not bad! Statistical machine translation systems perform much better than rule-based systems if you give them enough training data. Franz Josef Och improved on these ideas and used them to build Google Translate in the early 2000s. Machine Translation was finally available to the world. In the early days, it was surprising to everyone that the “dumb” approach to translating based on probability worked better than rule-based systems designed by linguists. This led to a (somewhat mean) saying among researchers in the 80s: Statistical machine translation systems work well, but they are complicated to build and maintain. Every new pair of languages you want to translate requires experts to tweak and tune a new multi-step translation pipeline. Because it is so much work to build these different pipelines, trade-offs have to be made. If you are asking Google to translate Georgian to Telegu, it has to internally translate it into English as an intermediate step because there’s not enough Georgain-to-Telegu translations happening to justify investing heavily in that language pair. And it might do that translation using a less advanced translation pipeline than if you had asked it for the more common choice of French-to-English. Wouldn’t it be cool if we could have the computer do all that annoying development work for us? The holy grail of machine translation is a black box system that learns how to translate by itself— just by looking at training data. With Statistical Machine Translation, humans are still needed to build and tweak the multi-step statistical models. In 2014, KyungHyun Cho’s team made a breakthrough. They found a way to apply deep learning to build this black box system. Their deep learning model takes in a parallel corpora and and uses it to learn how to translate between those two languages without any human intervention. Two big ideas make this possible — recurrent neural networks and encodings. By combining these two ideas in a clever way, we can build a self-learning translation system. We’ve already talked about recurrent neural networks in Part 2, but let’s quickly review. A regular (non-recurrent) neural network is a generic machine learning algorithm that takes in a list of numbers and calculates a result (based on previous training). Neural networks can be used as a black box to solve lots of problems. For example, we can use a neural network to calculate the approximate value of a house based on attributes of that house: But like most machine learning algorithms, neural networks are stateless. You pass in a list of numbers and the neural network calculates a result. If you pass in those same numbers again, it will always calculate the same result. It has no memory of past calculations. In other words, 2 + 2 always equals 4. 
A recurrent neural network (or RNN for short) is a slightly tweaked version of a neural network where the previous state of the neural network is one of the inputs to the next calculation. This means that previous calculations change the results of future calculations! Why in the world would we want to do this? Shouldn’t 2 + 2 always equal 4 no matter what we last calculated? This trick allows neural networks to learn patterns in a sequence of data. For example, you can use it to predict the next most likely word in a sentence based on the first few words: RNNs are useful any time you want to learn patterns in data. Because human language is just one big, complicated pattern, RNNs are increasingly used in many areas of natural language processing. If you want to learn more about RNNs, you can read Part 2 where we used one to generate a fake Ernest Hemingway book and then used another one to generate fake Super Mario Brothers levels. The other idea we need to review is Encodings. We talked about encodings in Part 4 as part of face recognition. To explain encodings, let’s take a slight detour into how we can tell two different people apart with a computer. When you are trying to tell two faces apart with a computer, you collect different measurements from each face and use those measurements to compare faces. For example, we might measure the size of each ear or the spacing between the eyes and compare those measurements from two pictures to see if they are the same person. You’re probably already familiar with this idea from watching any primetime detective show like CSI: The idea of turning a face into a list of measurements is an example of an encoding. We are taking raw data (a picture of a face) and turning it into a list of measurements that represent it (the encoding). But like we saw in Part 4, we don’t have to come up with a specific list of facial features to measure ourselves. Instead, we can use a neural network to generate measurements from a face. The computer can do a better job than us in figuring out which measurements are best able to differentiate two similar people: This is our encoding. It lets us represent something very complicated (a picture of a face) with something simple (128 numbers). Now comparing two different faces is much easier because we only have to compare these 128 numbers for each face instead of comparing full images. Guess what? We can do the same thing with sentences! We can come up with an encoding that represents every possible different sentence as a series of unique numbers: To generate this encoding, we’ll feed the sentence into the RNN, one word at time. The final result after the last word is processed will be the values that represent the entire sentence: Great, so now we have a way to represent an entire sentence as a set of unique numbers! We don’t know what each number in the encoding means, but it doesn’t really matter. As long as each sentence is uniquely identified by it’s own set of numbers, we don’t need to know exactly how those numbers were generated. Ok, so we know how to use an RNN to encode a sentence into a set of unique numbers. How does that help us? Here’s where things get really cool! What if we took two RNNs and hooked them up end-to-end? The first RNN could generate the encoding that represents a sentence. Then the second RNN could take that encoding and just do the same logic in reverse to decode the original sentence again: Of course being able to encode and then decode the original sentence again isn’t very useful. 
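To make the encoder half concrete, here is a rough Keras-style sketch of "feed the sentence in one word at a time and keep the final state as the sentence encoding". The vocabulary size and layer widths are made up, and this is the shape of the idea rather than a trained translator; the interesting part is what you do with that final vector, which is where the next paragraph picks up:

```python
from tensorflow.keras import layers, Model

vocab_size, embed_dim, encoding_dim = 10000, 300, 128   # made-up sizes

# A sentence comes in as a sequence of word ids of any length.
words_in = layers.Input(shape=(None,), dtype="int32")
word_vectors = layers.Embedding(vocab_size, embed_dim)(words_in)

# The RNN reads the word vectors one at a time; its final state is the
# fixed-size set of numbers that stands in for the whole sentence.
sentence_encoding = layers.LSTM(encoding_dim)(word_vectors)

encoder = Model(words_in, sentence_encoding)
encoder.summary()   # (batch, any length) -> (batch, 128)
```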
But what if (and here’s the big idea!) we could train the second RNN to decode the sentence into Spanish instead of English? We could use our parallel corpora training data to train it to do that: And just like that, we have a generic way of converting a sequence of English words into an equivalent sequence of Spanish words! This is a powerful idea: Note that we glossed over some things that are required to make this work with real-world data. For example, there’s additional work you have to do to deal with different lengths of input and output sentences (see bucketing and padding). There’s also issues with translating rare words correctly. If you want to build your own language translation system, there’s a working demo included with TensorFlow that will translate between English and French. However, this is not for the faint of heart or for those with limited budgets. This technology is still new and very resource intensive. Even if you have a fast computer with a high-end video card, it might take about a month of continuous processing time to train your own language translation system. Also, Sequence-to-sequence language translation techniques are improving so rapidly that it’s hard to keep up. Many recent improvements (like adding an attention mechanism or tracking context) are significantly improving results but these developments are so new that there aren’t even wikipedia pages for them yet. If you want to do anything serious with sequence-to-sequence learning, you’ll need to keep with new developments as they occur. So what else can we do with sequence-to-sequence models? About a year ago, researchers at Google showed that you can use sequence-to-sequence models to build AI bots. The idea is so simple that it’s amazing it works at all. First, they captured chat logs between Google employees and Google’s Tech Support team. Then they trained a sequence-to-sequence model where the employee’s question was the input sentence and the Tech Support team’s response was the “translation” of that sentence. When a user interacted with the bot, they would “translate” each of the user’s messages with this system to get the bot’s response. The end result was a semi-intelligent bot that could (sometimes) answer real tech support questions. Here’s part of a sample conversation between a user and the bot from their paper: They also tried building a chat bot based on millions of movie subtitles. The idea was to use conversations between movie characters as a way to train a bot to talk like a human. The input sentence is a line of dialog said by one character and the “translation” is what the next character said in response: This produced really interesting results. Not only did the bot converse like a human, but it displayed a small bit of intelligence: This is only the beginning of the possibilities. We aren’t limited to converting one sentence into another sentence. It’s also possible to make an image-to-sequence model that can turn an image into text! A different team at Google did this by replacing the first RNN with a Convolutional Neural Network (like we learned about in Part 3). This allows the input to be a picture instead of a sentence. The rest works basically the same way: And just like that, we can turn pictures into words (as long as we have lots and lots of training data)! 
Andrej Karpathy expanded on these ideas to build a system capable of describing images in great detail by processing multiple regions of an image separately: This makes it possible to build image search engines that are capable of finding images that match oddly specific search queries: There’s even researchers working on the reverse problem, generating an entire picture based on just a text description! Just from these examples, you can start to imagine the possibilities. So far, there have been sequence-to-sequence applications in everything from speech recognition to computer vision. I bet there will be a lot more over the next year. If you want to learn more in depth about sequence-to-sequence models and translation, here’s some recommended resources: If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun! Part 6! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it. " Chris Dixon,5.3K,12,https://medium.com/@cdixon/eleven-reasons-to-be-excited-about-the-future-of-technology-ef5f9b939cb2?source=tag_archive---------8----------------,Eleven Reasons To Be Excited About The Future of Technology,"In the year 1820, a person could expect to live less than 35 years, 94% of the global population lived in extreme poverty, and less that 20% of the population was literate. Today, human life expectancy is over 70 years, less that 10% of the global population lives in extreme poverty, and over 80% of people are literate. These improvements are due mainly to advances in technology, beginning in the industrial age and continuing today in the information age. There are many exciting new technologies that will continue to transform the world and improve human welfare. Here are eleven of them. Self-driving cars exist today that are safer than human-driven cars in most driving conditions. Over the next 3–5 years they‘ll get even safer, and will begin to go mainstream. The World Health Organization estimates that 1.25 million people die from car-related injuries per year. Half of the deaths are pedestrians, bicyclists, and motorcyclists hit by cars. Cars are the leading cause of death for people ages 15–29 years old. Just as cars reshaped the world in the 20th century, so will self-driving cars in the 21st century. In most cities, between 20–30% of usable space is taken up by parking spaces, and most cars are parked about 95% of the time. Self-driving cars will be in almost continuous use (most likely hailed from a smartphone app), thereby dramatically reducing the need for parking. Cars will communicate with one another to avoid accidents and traffic jams, and riders will be able to spend commuting time on other activities like work, education, and socializing. Attempts to fight climate change by reducing the demand for energy haven’t worked. Fortunately, scientists, engineers, and entrepreneurs have been working hard on the supply side to make clean energy convenient and cost-effective. Due to steady technological and manufacturing advances, the price of solar cells has dropped 99.5% since 1977. 
Solar will soon be more cost efficient than fossil fuels. The cost of wind energy has also dropped to an all-time low, and in the last decade represented about a third of newly installed US energy capacity. Forward thinking organizations are taking advantage of this. For example, in India there is an initiative to convert airports to self-sustaining clean energy. Tesla is making high-performance, affordable electric cars, and installing electric charging stations worldwide. There are hopeful signs that clean energy could soon be reaching a tipping point. For example, in Japan, there are now more electric charging stations than gas stations. And Germany produces so much renewable energy, it sometimes produces even more than it can use. Computer processors only recently became fast enough to power comfortable and convincing virtual and augmented reality experiences. Companies like Facebook, Google, Apple, and Microsoft are investing billions of dollars to make VR and AR more immersive, comfortable, and affordable. People sometimes think VR and AR will be used only for gaming, but over time they will be used for all sorts of activities. For example, we’ll use them to manipulate 3-D objects: To meet with friends and colleagues from around the world: And even for medical applications, like treating phobias or helping rehabilitate paralysis victims: VR and AR have been dreamed about by science fiction fans for decades. In the next few years, they’ll finally become a mainstream reality. GPS started out as a military technology but is now used to hail taxis, get mapping directions, and hunt Pokémon. Likewise, drones started out as a military technology, but are increasingly being used for a wide range of consumer and commercial applications. For example, drones are being used to inspect critical infrastructure like bridges and power lines, to survey areas struck by natural disasters, and many other creative uses like fighting animal poaching. Amazon and Google are building drones to deliver household items. The startup Zipline uses drones to deliver medical supplies to remote villages that can’t be accessed by roads. There is also a new wave of startups working on flying cars (including two funded by the cofounder of Google, Larry Page). Flying cars use the same advanced technology used in drones but are large enough to carry people. Due to advances in materials, batteries, and software, flying cars will be significantly more affordable and convenient than today’s planes and helicopters. Artificial intelligence has made rapid advances in the last decade, due to new algorithms and massive increases in data collection and computing power. AI can be applied to almost any field. For example, in photography an AI technique called artistic style transfer transforms photographs into the style of a given painter: Google built an AI system that controls its datacenter power systems, saving hundreds of millions of dollars in energy costs. The broad promise of AI is to liberate people from repetitive mental tasks the same way the industrial revolution liberated people from repetitive physical tasks. Some people worry that AI will destroy jobs. History has shown that while new technology does indeed eliminate jobs, it also creates new and better jobs to replace them. For example, with advent of the personal computer, the number of typographer jobs dropped, but the increase in graphic designer jobs more than made up for it. It is much easier to imagine jobs that will go away than new jobs that will be created. 
Today millions of people work as app developers, ride-sharing drivers, drone operators, and social media marketers— jobs that didn’t exist and would have been difficult to even imagine ten years ago. By 2020, 80% of adults on earth will have an internet-connected smartphone. An iPhone 6 has about 2 billion transistors, roughly 625 times more transistors than a 1995 Intel Pentium computer. Today’s smartphones are what used to be considered supercomputers. Internet-connected smartphones give ordinary people abilities that, just a short time ago, were only available to an elite few: Protocols are the plumbing of the internet. Most of the protocols we use today were developed decades ago by academia and government. Since then, protocol development mostly stopped as energy shifted to developing proprietary systems like social networks and messaging apps. Cryptocurrency and blockchain technologies are changing this by providing a new business model for internet protocols. This year alone, hundreds of millions of dollars were raised for a broad range of innovative blockchain-based protocols. Protocols based on blockchains also have capabilities that previous protocols didn’t. For example, Ethereum is a new blockchain-based protocol that can be used to create smart contracts and trusted databases that are immune to corruption and censorship. While college tuition skyrockets, anyone with a smartphone can study almost any topic online, accessing educational content that is mostly free and increasingly high-quality. Encyclopedia Britannica used to cost $1,400. Now anyone with a smartphone can instantly access Wikipedia. You used to have to go to school or buy programming books to learn computer programming. Now you can learn from a community of over 40 million programmers at Stack Overflow. YouTube has millions of hours of free tutorials and lectures, many of which are produced by top professors and universities. The quality of online education is getting better all the time. For the last 15 years, MIT has been recording lectures and compiling materials that cover over 2000 courses. As perhaps the greatest research university in the world, MIT has always been ahead of the trends. Over the next decade, expect many other schools to follow MIT’s lead. Earth is running out of farmable land and fresh water. This is partly because our food production systems are incredibly inefficient. It takes an astounding 1799 gallons of water to produce 1 pound of beef. Fortunately, a variety of new technologies are being developed to improve our food system. For example, entrepreneurs are developing new food products that are tasty and nutritious substitutes for traditional foods but far more environmentally friendly. The startup Impossible Foods invented meat products that look and taste like the real thing but are actually made of plants. Their burger uses 95% less land, 74% less water, and produces 87% less greenhouse gas emissions than traditional burgers. Other startups are creating plant-based replacements for milk, eggs, and other common foods. Soylent is a healthy, inexpensive meal replacement that uses advanced engineered ingredients that are much friendlier to the environment than traditional ingredients. Some of these products are developed using genetic modification, a powerful scientific technique that has been widely mischaracterized as dangerous. According to a study by the Pew Organization, 88% of scientists think genetically modified foods are safe. 
Another exciting development in food production is automated indoor farming. Due to advances in solar energy, sensors, lighting, robotics, and artificial intelligence, indoor farms have become viable alternatives to traditional outdoor farms. Compared to traditional farms, automated indoor farms use roughly 10 times less water and land. Crops are harvested many more times per year, there is no dependency on weather, and no need to use pesticides. Until recently, computers have only been at the periphery of medicine, used primarily for research and record keeping. Today, the combination of computer science and medicine is leading to a variety of breakthroughs. For example, just fifteen years ago, it cost $3B to sequence a human genome. Today, the cost is about a thousand dollars and continues to drop. Genetic sequencing will soon be a routine part of medicine. Genetic sequencing generates massive amounts of data that can be analyzed using powerful data analysis software. One application is analyzing blood samples for early detection of cancer. Further genetic analysis can help determine the best course of treatment. Another application of computers to medicine is in prosthetic limbs. Here a young girl is using prosthetic hands she controls using her upper-arm muscles: Soon we’ll have the technology to control prothetic limbs with just our thoughts using brain-to-machine interfaces. Computers are also becoming increasingly effective at diagnosing diseases. An artificial intelligence system recently diagnosed a rare disease that human doctors failed to diagnose by finding hidden patterns in 20 million cancer records. Since the beginning of the space age in the 1950s, the vast majority of space funding has come from governments. But that funding has been in decline: for example, NASA’s budget dropped from about 4.5% of the federal budget in the 1960s to about 0.5% of the federal budget today. The good news is that private space companies have started filling the void. These companies provide a wide range of products and services, including rocket launches, scientific research, communications and imaging satellites, and emerging speculative business models like asteroid mining. The most famous private space company is Elon Musk’s SpaceX, which successfully sent rockets into space that can return home to be reused. Perhaps the most intriguing private space company is Planetary Resources, which is trying to pioneer a new industry: mining minerals from asteroids. If successful, asteroid mining could lead to a new gold rush in outer space. Like previous gold rushes, this could lead to speculative excess, but also dramatically increased funding for new technologies and infrastructure. These are just a few of the amazing technologies we’ll see developed in the coming decades. 2016 is just the beginning of a new age of wonders. As futurist Kevin Kelly says: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. www.cdixon.org/about " Tal Perry,2.6K,17,https://medium.com/@TalPerry/deep-learning-the-stock-market-df853d139e02?source=tag_archive---------9----------------,Deep Learning the Stock Market – Tal Perry – Medium,"Update 25.1.17 — Took me a while but here is an ipython notebook with a rough implementation In the past few months I’ve been fascinated with “Deep Learning”, especially its applications to language and text. I’ve spent the bulk of my career in financial technologies, mostly in algorithmic trading and alternative data services. 
You can see where this is going. I wrote this to get my ideas straight in my head. While I’ve become a “Deep Learning” enthusiast, I don’t have too many opportunities to brain dump an idea in most of its messy glory. I think that a decent indication of a clear thought is the ability to articulate it to people not from the field. I hope that I’ve succeeded in doing that and that my articulation is also a pleasurable read. Why NLP is relevant to stock prediction: In many NLP problems we end up taking a sequence and encoding it into a single fixed-size representation, then decoding that representation into another sequence. For example, we might tag entities in the text, translate from English to French or convert audio frequencies to text. There is a torrent of work coming out in these areas and a lot of the results are achieving state-of-the-art performance. In my mind the biggest difference between NLP and financial analysis is that language has some guarantee of structure; it’s just that the rules of the structure are vague. Markets, on the other hand, don’t come with a promise of a learnable structure. That such a structure exists is the assumption this project would prove or disprove (rather, it might prove or disprove whether I can find that structure). Assuming the structure is there, the idea of summarizing the current state of the market in the same way we encode the semantics of a paragraph seems plausible to me. If that doesn’t make sense yet, keep reading. It will. You shall know a word by the company it keeps (Firth, J. R. 1957:11). There is a ton of literature on word embeddings. Richard Socher’s lecture is a great place to start. In short, we can make a geometry of all the words in our language, and that geometry captures the meaning of words and relationships between them. You may have seen the example of “King − man + woman = Queen” or something of the sort. Embeddings are cool because they let us represent information in a condensed way. The old way of representing words was holding a vector (a big list of numbers) that was as long as the number of words we know, and setting a 1 in a particular place if that was the current word we are looking at. That is not an efficient approach, nor does it capture any meaning. With embeddings, we can represent all of the words in a fixed number of dimensions (300 seems to be plenty, 50 works great) and then leverage their higher-dimensional geometry to understand them. The picture below shows an example. An embedding was trained on more or less the entire internet. After a few days of intensive calculations, each word was embedded in some high-dimensional space. This “space” has a geometry, concepts like distance, and so we can ask which words are close together. The authors/inventors of that method made an example. Here are the words that are closest to Frog. But we can embed more than just words. We can do, say, stock market embeddings. Market2Vec: The first word embedding algorithm I heard about was word2vec. I want to get the same effect for the market, though I’ll be using a different algorithm. My input data is a CSV; the first column is the date, and there are 4*1000 columns corresponding to the High, Low, Open and Closing prices of 1000 stocks. That is, my input vector is 4000-dimensional, which is too big. So the first thing I’m going to do is stuff it into a lower-dimensional space, say 300, because I liked the movie. Taking something in 4000 dimensions and stuffing it into a 300-dimensional space may sound hard, but it’s actually easy. 
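In code, that projection is essentially a one-liner. This is a sketch with random placeholder numbers; the whole point, as explained next, is that training will adjust the matrix so those 300 numbers become a useful summary:

```python
import numpy as np

# One day's raw market snapshot: 4 prices x 1000 stocks = 4000 numbers.
market_day = np.random.rand(4000)            # placeholder data

# The learnable "spreadsheet" of numbers: 4000 in, 300 out.
W = np.random.randn(4000, 300) * 0.01        # starts random, gets trained later

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))          # the activation function discussed shortly

market_vector = sigmoid(market_day @ W)      # a 300-dimensional MarketVector
print(market_vector.shape)                   # (300,)
```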
We just need to multiply matrices. A matrix is a big excel spreadsheet that has numbers in every cell and no formatting problems. Imagine an excel table with 4000 columns and 300 rows, and when we basically bang it against the vector a new vector comes out that is only of size 300. I wish that’s how they would have explained it in college. The fanciness starts here as we’re going to set the numbers in our matrix at random, and part of the “deep learning” is to update those numbers so that our excel spreadsheet changes. Eventually this matrix spreadsheet (I’ll stick with matrix from now on) will have numbers in it that bang our original 4000 dimensional vector into a concise 300 dimensional summary of itself. We’re going to get a little fancier here and apply what they call an activation function. We’re going to take a function, and apply it to each number in the vector individually so that they all end up between 0 and 1 (or 0 and infinity, it depends). Why ? It makes our vector more special, and makes our learning process able to understand more complicated things. How? So what? What I’m expecting to find is that that new embedding of the market prices (the vector) into a smaller space captures all the essential information for the task at hand, without wasting time on the other stuff. So I’d expect they’d capture correlations between other stocks, perhaps notice when a certain sector is declining or when the market is very hot. I don’t know what traits it will find, but I assume they’ll be useful. Now What Lets put aside our market vectors for a moment and talk about language models. Andrej Karpathy wrote the epic post “The Unreasonable effectiveness of Recurrent Neural Networks”. If I’d summarize in the most liberal fashion the post boils down to And then as a punchline, he generated a bunch of text that looks like Shakespeare. And then he did it again with the Linux source code. And then again with a textbook on Algebraic geometry. So I’ll get back to the mechanics of that magic box in a second, but let me remind you that we want to predict the future market based on the past just like he predicted the next word based on the previous one. Where Karpathy used characters, we’re going to use our market vectors and feed them into the magic black box. We haven’t decided what we want it to predict yet, but that is okay, we won’t be feeding its output back into it either. Going deeper I want to point out that this is where we start to get into the deep part of deep learning. So far we just have a single layer of learning, that excel spreadsheet that condenses the market. Now we’re going to add a few more layers and stack them, to make a “deep” something. That’s the deep in deep learning. So Karpathy shows us some sample output from the Linux source code, this is stuff his black box wrote. Notice that it knows how to open and close parentheses, and respects indentation conventions; The contents of the function are properly indented and the multi-line printk statement has an inner indentation. That means that this magic box understands long range dependencies. When it’s indenting within the print statement it knows it’s in a print statement and also remembers that it’s in a function( Or at least another indented scope). That’s nuts. It’s easy to gloss over that but an algorithm that has the ability to capture and remember long term dependencies is super useful because... We want to find long term dependencies in the market. Inside the magical black box What’s inside this magical black box? 
It is a type of Recurrent Neural Network (RNN) called an LSTM. An RNN is a deep learning algorithm that operates on sequences (like sequences of characters). At every step, it takes a representation of the next character (Like the embeddings we talked about before) and operates on the representation with a matrix, like we saw before. The thing is, the RNN has some form of internal memory, so it remembers what it saw previously. It uses that memory to decide how exactly it should operate on the next input. Using that memory, the RNN can “remember” that it is inside of an intended scope and that is how we get properly nested output text. A fancy version of an RNN is called a Long Short Term Memory (LSTM). LSTM has cleverly designed memory that allows it to So an LSTM can see a “{“ and say to itself “Oh yeah, that’s important I should remember that” and when it does, it essentially remembers an indication that it is in a nested scope. Once it sees the corresponding “}” it can decide to forget the original opening brace and thus forget that it is in a nested scope. We can have the LSTM learn more abstract concepts by stacking a few of them on top of each other, that would make us “Deep” again. Now each output of the previous LSTM becomes the inputs of the next LSTM, and each one goes on to learn higher abstractions of the data coming in. In the example above (and this is just illustrative speculation), the first layer of LSTMs might learn that characters separated by a space are “words”. The next layer might learn word types like (static void action_new_function).The next layer might learn the concept of a function and its arguments and so on. It’s hard to tell exactly what each layer is doing, though Karpathy’s blog has a really nice example of how he did visualize exactly that. Connecting Market2Vec and LSTMs The studious reader will notice that Karpathy used characters as his inputs, not embeddings (Technically a one-hot encoding of characters). But, Lars Eidnes actually used word embeddings when he wrote Auto-Generating Clickbait With Recurrent Neural Network The figure above shows the network he used. Ignore the SoftMax part (we’ll get to it later). For the moment, check out how on the bottom he puts in a sequence of words vectors at the bottom and each one. (Remember, a “word vector” is a representation of a word in the form of a bunch of numbers, like we saw in the beginning of this post). Lars inputs a sequence of Word Vectors and each one of them: We’re going to do the same thing with one difference, instead of word vectors we’ll input “MarketVectors”, those market vectors we described before. To recap, the MarketVectors should contain a summary of what’s happening in the market at a given point in time. By putting a sequence of them through LSTMs I hope to capture the long term dynamics that have been happening in the market. By stacking together a few layers of LSTMs I hope to capture higher level abstractions of the market’s behavior. What Comes out Thus far we haven’t talked at all about how the algorithm actually learns anything, we just talked about all the clever transformations we’ll do on the data. We’ll defer that conversation to a few paragraphs down, but please keep this part in mind as it is the se up for the punch line that makes everything else worthwhile. In Karpathy’s example, the output of the LSTMs is a vector that represents the next character in some abstract representation. 
In Eidnes’ example, the output of the LSTMs is a vector that represents what the next word will be in some abstract space. The next step in both cases is to change that abstract representation into a probability vector, that is, a list that says how likely each character or word, respectively, is to appear next. That’s the job of the SoftMax function. Once we have a list of likelihoods, we select the character or word that is the most likely to appear next. In our case of “predicting the market”, we need to ask ourselves what exactly we want the model to predict. Some of the options that I thought about were: 1 and 2 are regression problems, where we have to predict an actual number instead of the likelihood of a specific event (like the letter n appearing or the market going up). Those are fine but not what I want to do. 3 and 4 are fairly similar; they both ask to predict an event (in technical jargon, a class label). An event could be the letter n appearing next, or it could be “moved up 5% while not going down more than 3% in the last 10 minutes”. The trade-off between 3 and 4 is that 3 is much more common and thus easier to learn about, while 4 is more valuable, as not only is it an indicator of profit but it also has some constraint on risk. 5 is the one we’ll continue with for this article, because it’s similar to 3 and 4 but has mechanics that are easier to follow. The VIX is sometimes called the Fear Index, and it represents how volatile the stocks in the S&P 500 are. It is derived from the implied volatility of specific options on the index. Sidenote — Why predict the VIX? What makes the VIX an interesting target is that: Back to our LSTM outputs and the SoftMax. How do we use the formulations we saw before to predict changes in the VIX a few minutes in the future? For each point in our dataset, we’ll look at what happened to the VIX 5 minutes later. If it went up by more than 1% without going down more than 0.5% during that time, we’ll output a 1; otherwise a 0. Then we’ll get a sequence of 0s and 1s, one per observation. We want to take the vector that our LSTMs output and squish it so that it gives us the probability of the next item in our sequence being a 1. The squishing happens in the SoftMax part of Eidnes’ diagram. (Technically, since we only have one class now, we use a sigmoid.) So before we get into how this thing learns, let’s recap what we’ve done so far. How does this thing learn? Now the fun part. Everything we did until now was called the forward pass; we do all of those steps while we train the algorithm and also when we use it in production. Here we’ll talk about the backward pass, the part we do only while in training, which makes our algorithm learn. So during training, not only did we prepare years’ worth of historical data, we also prepared a sequence of prediction targets, that list of 0s and 1s that shows whether the VIX moved the way we want it to or not after each observation in our data. To learn, we’ll feed the market data to our network and compare its output to what we calculated. Comparing in our case will be simple subtraction; that is, we’ll say that our model’s error is √((actual − predicted)²), or in English, the square root of the square of the difference between what actually happened and what we predicted. Here’s the beauty. That’s a differentiable function, that is, we can tell by how much the error would have changed if our prediction had changed a little.
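Here is one way the 0/1 labeling rule above could be written down, as a sketch. The exact windowing the author used isn’t spelled out, so the reading implemented here (up more than 1% five minutes later, never down more than 0.5% along the way) is an assumption, and the toy VIX values are made up.

import numpy as np

def label_observation(vix, i, horizon=5):
    # 1 if, five steps later, the VIX is up more than 1% and it never
    # dropped more than 0.5% along the way; otherwise 0.
    start = vix[i]
    window = vix[i + 1 : i + 1 + horizon]
    rose_enough = window[-1] > start * 1.01
    never_dropped = window.min() > start * 0.995
    return 1 if (rose_enough and never_dropped) else 0

# Toy minute-by-minute VIX values, purely illustrative.
vix = np.array([12.0, 12.05, 12.02, 12.10, 12.18, 12.25,
                12.30, 12.10, 12.00, 11.90, 12.40, 12.50])
targets = [label_observation(vix, i) for i in range(len(vix) - 5)]
print(targets)  # the sequence of 0s and 1s we train against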
Our prediction is the outcome of a differentiable function, the SoftMax. The inputs to the SoftMax, the LSTMs, are themselves mathematical functions that are differentiable. Now, all of these functions are full of parameters, those big excel spreadsheets I talked about ages ago. So at this stage what we do is take the derivative of the error with respect to every one of the millions of parameters in all of those excel spreadsheets we have in our model. When we do that, we can see how the error will change when we change each parameter, so we’ll change each parameter in a way that will reduce the error. This procedure propagates all the way to the beginning of the model. It tweaks the way we embed the inputs into MarketVectors so that our MarketVectors represent the most significant information for our task. It tweaks when and what each LSTM chooses to remember so that their outputs are the most relevant to our task. It tweaks the abstractions our LSTMs learn so that they learn the most important abstractions for our task. Which, in my opinion, is amazing, because we have all of this complexity and abstraction that we never had to specify anywhere. It’s all inferred MathaMagically from the specification of what we consider to be an error. What’s next. Now that I’ve laid this out in writing and it still makes sense to me, I want to hear what you think. So, if you’ve come this far, please point out my errors and share your inputs. Other thoughts. Here are some more advanced thoughts about this project: what other things I might try, and why it makes sense to me that this may actually work. Liquidity and efficient use of capital. Generally, the more liquid a particular market is, the more efficient it is. I think this is due to a chicken-and-egg cycle: as a market becomes more liquid, it is able to absorb more capital moving in and out without that capital hurting itself. As a market becomes more liquid and more capital can be used in it, you’ll find more sophisticated players moving in. This is because it is expensive to be sophisticated, so you need to make returns on a large chunk of capital in order to justify your operational costs. A quick corollary is that in less liquid markets the competition isn’t quite as sophisticated, and so the opportunities a system like this can find may not have been traded away. The point being, were I to try and trade this, I would try and trade it on less liquid segments of the market, that is, maybe the TASE 100 instead of the S&P 500. This stuff is new. The knowledge of these algorithms, the frameworks to execute them, and the computing power to train them are all new, at least in the sense that they are available to the average Joe such as myself. I’d assume that top players figured this stuff out years ago and have had the capacity to execute for just as long, but, as I mention in the above paragraph, they are likely executing in liquid markets that can support their size. The next tier of market participants, I assume, have a slower velocity of technological assimilation, and in that sense, there is (or soon will be) a race to execute on this in as-yet untapped markets. Multiple Time Frames. While I mentioned a single stream of inputs above, I imagine that a more efficient way to train would be to train market vectors (at least) on multiple time frames and feed them in at the inference stage. That is, my lowest time frame would be sampled every 30 seconds, and I’d expect the network to learn dependencies that stretch hours at most.
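To show what “take the derivative of the error with respect to every parameter and nudge it” looks like, here is a toy, hand-rolled backward pass for just the first layer (the single “excel spreadsheet”), under the simplifying assumption of a sigmoid output and the squared error defined above. In practice TensorFlow/Keras computes these derivatives automatically for every layer; the learning rate and shapes here are illustrative.

import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4000)            # one market snapshot
w = rng.normal(0, 0.01, 4000)   # the parameters we want to learn
target = 1.0                    # the 0/1 label for this observation
lr = 0.01                       # learning rate (illustrative)

for step in range(200):
    pred = 1.0 / (1.0 + np.exp(-(w @ x)))   # forward pass
    error = (target - pred) ** 2            # what we want to shrink
    # derivative of the error with respect to every parameter in w
    grad = 2 * (pred - target) * pred * (1 - pred) * x
    w -= lr * grad                          # nudge w to reduce the error

print(round(float(error), 6))               # should be close to 0 by now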
I don’t know if they are relevant or not, but I think there are patterns on multiple time frames, and if the cost of computation can be brought low enough then it is worthwhile to incorporate them into the model. I’m still wrestling with how best to represent these on the computational graph, and perhaps it is not mandatory to start with. MarketVectors. When using word vectors in NLP, we usually start with a pretrained model and continue adjusting the embeddings during training of our model. In my case, there are no pretrained market vectors available, nor is there a clear algorithm for training them. My original consideration was to use an auto-encoder, like in this paper, but end-to-end training is cooler. A more serious consideration is the success of sequence-to-sequence models in translation and speech recognition, where a sequence is eventually encoded as a single vector and then decoded into a different representation (like from speech to text or from English to French). In that view, the entire architecture I described is essentially the encoder, and I haven’t really laid out a decoder. But, I want to achieve something specific with the first layer, the one that takes as input the 4000-dimensional vector and outputs a 300-dimensional one. I want it to find correlations or relations between various stocks and compose features about them. The alternative is to run each input through an LSTM, perhaps concatenate all of the output vectors, and consider that the output of the encoder stage. I think this would be inefficient, as the interactions and correlations between instruments and their features would be lost, and there would be 10x more computation required. On the other hand, such an architecture could naively be parallelized across multiple GPUs and hosts, which is an advantage. CNNs. Recently there has been a spate of papers on character-level machine translation. This paper caught my eye, as they manage to capture long-range dependencies with a convolutional layer rather than an RNN. I haven’t given it more than a brief read, but I think that a modification where I’d treat each stock as a channel and convolve over channels first (like in RGB images) would be another way to capture the market dynamics, in the same way that they essentially encode semantic meaning from characters. " Gil Fewster,3.3K,5,https://medium.freecodecamp.org/the-mind-blowing-ai-announcement-from-google-that-you-probably-missed-2ffd31334805?source=tag_archive---------0----------------,The mind-blowing AI announcement from Google that you probably missed.,"Disclaimer: I’m not an expert in neural networks or machine learning. Since originally writing this article, many people with far more expertise in these fields than myself have indicated that, while impressive, what Google have achieved is evolutionary, not revolutionary. In the very least, it’s fair to say that I’m guilty of anthropomorphising in parts of the text. I’ve left the article’s content unchanged, because I think it’s interesting to compare the gut reaction I had with the subsequent comments of experts in the field. I strongly encourage readers to browse the comments after reading the article for some perspectives more sober and informed than my own.
In the closing weeks of 2016, Google published an article that quietly sailed under most people’s radars. Which is a shame, because it may just be the most astonishing article about machine learning that I read last year. Don’t feel bad if you missed it. Not only was the article competing with the pre-Christmas rush that most of us were navigating — it was also tucked away on Google’s Research Blog, beneath the geektastic headline Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System. This doesn’t exactly scream must read, does it? Especially when you’ve got projects to wind up, gifts to buy, and family feuds to be resolved — all while the advent calendar relentlessly counts down the days until Christmas like some kind of chocolate-filled Yuletide doomsday clock. Luckily, I’m here to bring you up to speed. Here’s the deal. Up until September of last year, Google Translate used phrase-based translation. It basically did the same thing you and I do when we look up key words and phrases in our Lonely Planet language guides. It’s effective enough, and blisteringly fast compared to awkwardly thumbing your way through a bunch of pages looking for the French equivalent of “please bring me all of your cheese and don’t stop until I fall over.” But it lacks nuance. Phrase-based translation is a blunt instrument. It does the job well enough to get by. But mapping roughly equivalent words and phrases without an understanding of linguistic structures can only produce crude results. This approach is also limited by the extent of an available vocabulary. Phrase-based translation has no capacity to make educated guesses at words it doesn’t recognize, and can’t learn from new input. All that changed in September, when Google gave their translation tool a new engine: the Google Neural Machine Translation system (GNMT). This new engine comes fully loaded with all the hot 2016 buzzwords, like neural network and machine learning. The short version is that Google Translate got smart. It developed the ability to learn from the people who used it. It learned how to make educated guesses about the content, tone, and meaning of phrases based on the context of other words and phrases around them. And — here’s the bit that should make your brain explode — it got creative. Google Translate invented its own language to help it translate more effectively. What’s more, nobody told it to. It didn’t develop a language (or interlingua, as Google call it) because it was coded to. It developed a new language because the software determined over time that this was the most efficient way to solve the problem of translation. Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks. (I’ve added a correction/retraction of this paragraph in the notes) To understand what’s going on, we need to understand what zero-shot translation capability is. Here’s Google’s Mike Schuster, Nikhil Thorat, and Melvin Johnson from the original blog post: Here you can see an advantage of Google’s new neural machine over the old phrase-based approach. The GMNT is able to learn how to translate between two languages without being explicitly taught. 
This wouldn’t be possible in a phrase-based model, where translation is dependent upon an explicit dictionary to map words and phrases between each pair of languages being translated. And this leads the Google engineers onto that truly astonishing discovery of creation: So there you have it. In the last weeks of 2016, as journos around the world started penning their “was this the worst year in living memory” thinkpieces, Google engineers were quietly documenting a genuinely astonishing breakthrough in software engineering and linguistics. I just thought maybe you’d want to know. Ok, to really understand what’s going on we probably need multiple computer science and linguistics degrees. I’m just barely scraping the surface here. If you’ve got time to get a few degrees (or if you’ve already got them) please drop me a line and explain it all me to. Slowly. Update 1: in my excitement, it’s fair to say that I’ve exaggerated the idea of this as an ‘intelligent’ system — at least so far as we would think about human intelligence and decision making. Make sure you read Chris McDonald’s comment after the article for a more sober perspective. Update 2: Nafrondel’s excellent, detailed reply is also a must read for an expert explanation of how neural networks function. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. A tinkerer Our community publishes stories worth reading on development, design, and data science. " David Venturi,10.6K,20,https://medium.freecodecamp.org/every-single-machine-learning-course-on-the-internet-ranked-by-your-reviews-3c4a7b8026c0?source=tag_archive---------1----------------,"Every single Machine Learning course on the internet, ranked by your reviews","A year and a half ago, I dropped out of one of the best computer science programs in Canada. I started creating my own data science master’s program using online resources. I realized that I could learn everything I needed through edX, Coursera, and Udacity instead. And I could learn it faster, more efficiently, and for a fraction of the cost. I’m almost finished now. I’ve taken many data science-related courses and audited portions of many more. I know the options out there, and what skills are needed for learners preparing for a data analyst or data scientist role. So I started creating a review-driven guide that recommends the best courses for each subject within data science. For the first guide in the series, I recommended a few coding classes for the beginner data scientist. Then it was statistics and probability classes. Then introductions to data science. Also, data visualization. For this guide, I spent a dozen hours trying to identify every online machine learning course offered as of May 2017, extracting key bits of information from their syllabi and reviews, and compiling their ratings. My end goal was to identify the three best courses available and present them to you, below. For this task, I turned to none other than the open source Class Central community, and its database of thousands of course ratings and reviews. Since 2011, Class Central founder Dhawal Shah has kept a closer eye on online courses than arguably anyone else in the world. Dhawal personally helped me assemble this list of resources. Each course must fit three criteria: We believe we covered every notable course that fits the above criteria. Since there are seemingly hundreds of courses on Udemy, we chose to consider the most-reviewed and highest-rated ones only. 
There’s always a chance that we missed something, though. So please let us know in the comments section if we left a good course out. We compiled average ratings and number of reviews from Class Central and other review sites to calculate a weighted average rating for each course. We read text reviews and used this feedback to supplement the numerical ratings. We made subjective syllabus judgment calls based on three factors: A popular definition originates from Arthur Samuel in 1959: machine learning is a subfield of computer science that gives “computers the ability to learn without being explicitly programmed.” In practice, this means developing computer programs that can make predictions based on data. Just as humans can learn from experience, so can computers, where data = experience. A machine learning workflow is the process required for carrying out a machine learning project. Though individual projects can differ, most workflows share several common tasks: problem evaluation, data exploration, data preprocessing, model training/testing/deployment, etc. Below you’ll find helpful visualization of these core steps: The ideal course introduces the entire process and provides interactive examples, assignments, and/or quizzes where students can perform each task themselves. First off, let’s define deep learning. Here is a succinct description: As would be expected, portions of some of the machine learning courses contain deep learning content. I chose not to include deep learning-only courses, however. If you are interested in deep learning specifically, we’ve got you covered with the following article: My top three recommendations from that list would be: Several courses listed below ask students to have prior programming, calculus, linear algebra, and statistics experience. These prerequisites are understandable given that machine learning is an advanced discipline. Missing a few subjects? Good news! Some of this experience can be acquired through our recommendations in the first two articles (programming, statistics) of this Data Science Career Guide. Several top-ranked courses below also provide gentle calculus and linear algebra refreshers and highlight the aspects most relevant to machine learning for those less familiar. Stanford University’s Machine Learning on Coursera is the clear current winner in terms of ratings, reviews, and syllabus fit. Taught by the famous Andrew Ng, Google Brain founder and former chief scientist at Baidu, this was the class that sparked the founding of Coursera. It has a 4.7-star weighted average rating over 422 reviews. Released in 2011, it covers all aspects of the machine learning workflow. Though it has a smaller scope than the original Stanford class upon which it is based, it still manages to cover a large number of techniques and algorithms. The estimated timeline is eleven weeks, with two weeks dedicated to neural networks and deep learning. Free and paid options are available. Ng is a dynamic yet gentle instructor with a palpable experience. He inspires confidence, especially when sharing practical implementation tips and warnings about common pitfalls. A linear algebra refresher is provided and Ng highlights the aspects of calculus most relevant to machine learning. Evaluation is automatic and is done via multiple choice quizzes that follow each lesson and programming assignments. The assignments (there are eight of them) can be completed in MATLAB or Octave, which is an open-source version of MATLAB. 
Ng explains his language choice: Though Python and R are likely more compelling choices in 2017 with the increased popularity of those languages, reviewers note that that shouldn’t stop you from taking the course. A few prominent reviewers noted the following: Columbia University’s Machine Learning is a relatively new offering that is part of their Artificial Intelligence MicroMasters on edX. Though it is newer and doesn’t have a large number of reviews, the ones that it does have are exceptionally strong. Professor John Paisley is noted as brilliant, clear, and clever. It has a 4.8-star weighted average rating over 10 reviews. The course also covers all aspects of the machine learning workflow and more algorithms than the above Stanford offering. Columbia’s is a more advanced introduction, with reviewers noting that students should be comfortable with the recommended prerequisites (calculus, linear algebra, statistics, probability, and coding). Quizzes (11), programming assignments (4), and a final exam are the modes of evaluation. Students can use either Python, Octave, or MATLAB to complete the assignments. The course’s total estimated timeline is eight to ten hours per week over twelve weeks. It is free with a verified certificate available for purchase. Below are a few of the aforementioned sparkling reviews: Machine Learning A-ZTM on Udemy is an impressively detailed offering that provides instruction in both Python and R, which is rare and can’t be said for any of the other top courses. It has a 4.5-star weighted average rating over 8,119 reviews, which makes it the most reviewed course of the ones considered. It covers the entire machine learning workflow and an almost ridiculous (in a good way) number of algorithms through 40.5 hours of on-demand video. The course takes a more applied approach and is lighter math-wise than the above two courses. Each section starts with an “intuition” video from Eremenko that summarizes the underlying theory of the concept being taught. de Ponteves then walks through implementation with separate videos for both Python and R. As a “bonus,” the course includes Python and R code templates for students to download and use on their own projects. There are quizzes and homework challenges, though these aren’t the strong points of the course. Eremenko and the SuperDataScience team are revered for their ability to “make the complex simple.” Also, the prerequisites listed are “just some high school mathematics,” so this course might be a better option for those daunted by the Stanford and Columbia offerings. A few prominent reviewers noted the following: Our #1 pick had a weighted average rating of 4.7 out of 5 stars over 422 reviews. Let’s look at the other alternatives, sorted by descending rating. A reminder that deep learning-only courses are not included in this guide — you can find those here. The Analytics Edge (Massachusetts Institute of Technology/edX): More focused on analytics in general, though it does cover several machine learning topics. Uses R. Strong narrative that leverages familiar real-world examples. Challenging. Ten to fifteen hours per week over twelve weeks. Free with a verified certificate available for purchase. It has a 4.9-star weighted average rating over 214 reviews. Python for Data Science and Machine Learning Bootcamp (Jose Portilla/Udemy): Has large chunks of machine learning content, but covers the whole data science process. More of a very detailed intro to Python. 
Amazing course, though not ideal for the scope of this guide. 21.5 hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.6-star weighted average rating over 3316 reviews. Data Science and Machine Learning Bootcamp with R (Jose Portilla/Udemy): The comments for Portilla’s above course apply here as well, except for R. 17.5 hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.6-star weighted average rating over 1317 reviews. Machine Learning Series (Lazy Programmer Inc./Udemy): Taught by a data scientist/big data engineer/full stack software engineer with an impressive resume, Lazy Programmer currently has a series of 16 machine learning-focused courses on Udemy. In total, the courses have 5000+ ratings and almost all of them have 4.6 stars. A useful course ordering is provided in each individual course’s description. Uses Python. Cost varies depending on Udemy discounts, which are frequent. Machine Learning (Georgia Tech/Udacity): A compilation of what was three separate courses: Supervised, Unsupervised and Reinforcement Learning. Part of Udacity’s Machine Learning Engineer Nanodegree and Georgia Tech’s Online Master’s Degree (OMS). Bite-sized videos, as is Udacity’s style. Friendly professors. Estimated timeline of four months. Free. It has a 4.56-star weighted average rating over 9 reviews. Implementing Predictive Analytics with Spark in Azure HDInsight (Microsoft/edX): Introduces the core concepts of machine learning and a variety of algorithms. Leverages several big data-friendly tools, including Apache Spark, Scala, and Hadoop. Uses both Python and R. Four hours per week over six weeks. Free with a verified certificate available for purchase. It has a 4.5-star weighted average rating over 6 reviews. Data Science and Machine Learning with Python — Hands On! (Frank Kane/Udemy): Uses Python. Kane has nine years of experience at Amazon and IMDb. Nine hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.5-star weighted average rating over 4139 reviews. Scala and Spark for Big Data and Machine Learning (Jose Portilla/Udemy): “Big data” focus, specifically on implementation in Scala and Spark. Ten hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.5-star weighted average rating over 607 reviews. Machine Learning Engineer Nanodegree (Udacity): Udacity’s flagship Machine Learning program, which features a best-in-class project review system and career support. The program is a compilation of several individual Udacity courses, which are free. Co-created by Kaggle. Estimated timeline of six months. Currently costs $199 USD per month with a 50% tuition refund available for those who graduate within 12 months. It has a 4.5-star weighted average rating over 2 reviews. Learning From Data (Introductory Machine Learning) (California Institute of Technology/edX): Enrollment is currently closed on edX, but is also available via CalTech’s independent platform (see below). It has a 4.49-star weighted average rating over 42 reviews. Learning From Data (Introductory Machine Learning) (Yaser Abu-Mostafa/California Institute of Technology): “A real Caltech course, not a watered-down version.” Reviews note it is excellent for understanding machine learning theory. The professor, Yaser Abu-Mostafa, is popular among students and also wrote the textbook upon which this course is based. 
Videos are taped lectures (with lectures slides picture-in-picture) uploaded to YouTube. Homework assignments are .pdf files. The course experience for online students isn’t as polished as the top three recommendations. It has a 4.43-star weighted average rating over 7 reviews. Mining Massive Datasets (Stanford University): Machine learning with a focus on “big data.” Introduces modern distributed file systems and MapReduce. Ten hours per week over seven weeks. Free. It has a 4.4-star weighted average rating over 30 reviews. AWS Machine Learning: A Complete Guide With Python (Chandra Lingam/Udemy): A unique focus on cloud-based machine learning and specifically Amazon Web Services. Uses Python. Nine hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.4-star weighted average rating over 62 reviews. Introduction to Machine Learning & Face Detection in Python (Holczer Balazs/Udemy): Uses Python. Eight hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.4-star weighted average rating over 162 reviews. StatLearning: Statistical Learning (Stanford University): Based on the excellent textbook, “An Introduction to Statistical Learning, with Applications in R” and taught by the professors who wrote it. Reviewers note that the MOOC isn’t as good as the book, citing “thin” exercises and mediocre videos. Five hours per week over nine weeks. Free. It has a 4.35-star weighted average rating over 84 reviews. Machine Learning Specialization (University of Washington/Coursera): Great courses, but last two classes (including the capstone project) were canceled. Reviewers note that this series is more digestable (read: easier for those without strong technical backgrounds) than other top machine learning courses (e.g. Stanford’s or Caltech’s). Be aware that the series is incomplete with recommender systems, deep learning, and a summary missing. Free and paid options available. It has a 4.31-star weighted average rating over 80 reviews. From 0 to 1: Machine Learning, NLP & Python-Cut to the Chase (Loony Corn/Udemy): “A down-to-earth, shy but confident take on machine learning techniques.” Taught by four-person team with decades of industry experience together. Uses Python. Cost varies depending on Udemy discounts, which are frequent. It has a 4.2-star weighted average rating over 494 reviews. Principles of Machine Learning (Microsoft/edX): Uses R, Python, and Microsoft Azure Machine Learning. Part of the Microsoft Professional Program Certificate in Data Science. Three to four hours per week over six weeks. Free with a verified certificate available for purchase. It has a 4.09-star weighted average rating over 11 reviews. Big Data: Statistical Inference and Machine Learning (Queensland University of Technology/FutureLearn): A nice, brief exploratory machine learning course with a focus on big data. Covers a few tools like R, H2O Flow, and WEKA. Only three weeks in duration at a recommended two hours per week, but one reviewer noted that six hours per week would be more appropriate. Free and paid options available. It has a 4-star weighted average rating over 4 reviews. Genomic Data Science and Clustering (Bioinformatics V) (University of California, San Diego/Coursera): For those interested in the intersection of computer science and biology and how it represents an important frontier in modern science. Focuses on clustering and dimensionality reduction. Part of UCSD’s Bioinformatics Specialization. 
Free and paid options available. It has a 4-star weighted average rating over 3 reviews. Intro to Machine Learning (Udacity): Prioritizes topic breadth and practical tools (in Python) over depth and theory. The instructors, Sebastian Thrun and Katie Malone, make this class so fun. Consists of bite-sized videos and quizzes followed by a mini-project for each lesson. Currently part of Udacity’s Data Analyst Nanodegree. Estimated timeline of ten weeks. Free. It has a 3.95-star weighted average rating over 19 reviews. Machine Learning for Data Analysis (Wesleyan University/Coursera): A brief intro machine learning and a few select algorithms. Covers decision trees, random forests, lasso regression, and k-means clustering. Part of Wesleyan’s Data Analysis and Interpretation Specialization. Estimated timeline of four weeks. Free and paid options available. It has a 3.6-star weighted average rating over 5 reviews. Programming with Python for Data Science (Microsoft/edX): Produced by Microsoft in partnership with Coding Dojo. Uses Python. Eight hours per week over six weeks. Free and paid options available. It has a 3.46-star weighted average rating over 37 reviews. Machine Learning for Trading (Georgia Tech/Udacity): Focuses on applying probabilistic machine learning approaches to trading decisions. Uses Python. Part of Udacity’s Machine Learning Engineer Nanodegree and Georgia Tech’s Online Master’s Degree (OMS). Estimated timeline of four months. Free. It has a 3.29-star weighted average rating over 14 reviews. Practical Machine Learning (Johns Hopkins University/Coursera): A brief, practical introduction to a number of machine learning algorithms. Several one/two-star reviews expressing a variety of concerns. Part of JHU’s Data Science Specialization. Four to nine hours per week over four weeks. Free and paid options available. It has a 3.11-star weighted average rating over 37 reviews. Machine Learning for Data Science and Analytics (Columbia University/edX): Introduces a wide range of machine learning topics. Some passionate negative reviews with concerns including content choices, a lack of programming assignments, and uninspiring presentation. Seven to ten hours per week over five weeks. Free with a verified certificate available for purchase. It has a 2.74-star weighted average rating over 36 reviews. Recommender Systems Specialization (University of Minnesota/Coursera): Strong focus one specific type of machine learning — recommender systems. A four course specialization plus a capstone project, which is a case study. Taught using LensKit (an open-source toolkit for recommender systems). Free and paid options available. It has a 2-star weighted average rating over 2 reviews. Machine Learning With Big Data (University of California, San Diego/Coursera): Terrible reviews that highlight poor instruction and evaluation. Some noted it took them mere hours to complete the whole course. Part of UCSD’s Big Data Specialization. Free and paid options available. It has a 1.86-star weighted average rating over 14 reviews. Practical Predictive Analytics: Models and Methods (University of Washington/Coursera): A brief intro to core machine learning concepts. One reviewer noted that there was a lack of quizzes and that the assignments were not challenging. Part of UW’s Data Science at Scale Specialization. Six to eight hours per week over four weeks. Free and paid options available. It has a 1.75-star weighted average rating over 4 reviews. The following courses had one or no reviews as of May 2017. 
Machine Learning for Musicians and Artists (Goldsmiths, University of London/Kadenze): Unique. Students learn algorithms, software tools, and machine learning best practices to make sense of human gesture, musical audio, and other real-time data. Seven sessions in length. Audit (free) and premium ($10 USD per month) options available. It has one 5-star review. Applied Machine Learning in Python (University of Michigan/Coursera): Taught using Python and the scikit learn toolkit. Part of the Applied Data Science with Python Specialization. Scheduled to start May 29th. Free and paid options available. Applied Machine Learning (Microsoft/edX): Taught using various tools, including Python, R, and Microsoft Azure Machine Learning (note: Microsoft produces the course). Includes hands-on labs to reinforce the lecture content. Three to four hours per week over six weeks. Free with a verified certificate available for purchase. Machine Learning with Python (Big Data University): Taught using Python. Targeted towards beginners. Estimated completion time of four hours. Big Data University is affiliated with IBM. Free. Machine Learning with Apache SystemML (Big Data University): Taught using Apache SystemML, which is a declarative style language designed for large-scale machine learning. Estimated completion time of eight hours. Big Data University is affiliated with IBM. Free. Machine Learning for Data Science (University of California, San Diego/edX): Doesn’t launch until January 2018. Programming examples and assignments are in Python, using Jupyter notebooks. Eight hours per week over ten weeks. Free with a verified certificate available for purchase. Introduction to Analytics Modeling (Georgia Tech/edX): The course advertises R as its primary programming tool. Five to ten hours per week over ten weeks. Free with a verified certificate available for purchase. Predictive Analytics: Gaining Insights from Big Data (Queensland University of Technology/FutureLearn): Brief overview of a few algorithms. Uses Hewlett Packard Enterprise’s Vertica Analytics platform as an applied tool. Start date to be announced. Two hours per week over four weeks. Free with a Certificate of Achievement available for purchase. Introducción al Machine Learning (Universitas Telefónica/Miríada X): Taught in Spanish. An introduction to machine learning that covers supervised and unsupervised learning. A total of twenty estimated hours over four weeks. Machine Learning Path Step (Dataquest): Taught in Python using Dataquest’s interactive in-browser platform. Multiple guided projects and a “plus” project where you build your own machine learning system using your own data. Subscription required. The following six courses are offered by DataCamp. DataCamp’s hybrid teaching style leverages video and text-based instruction with lots of examples through an in-browser code editor. A subscription is required for full access to each course. Introduction to Machine Learning (DataCamp): Covers classification, regression, and clustering algorithms. Uses R. Fifteen videos and 81 exercises with an estimated timeline of six hours. Supervised Learning with scikit-learn (DataCamp): Uses Python and scikit-learn. Covers classification and regression algorithms. Seventeen videos and 54 exercises with an estimated timeline of four hours. Unsupervised Learning in R (DataCamp): Provides a basic introduction to clustering and dimensionality reduction in R. Sixteen videos and 49 exercises with an estimated timeline of four hours. 
Machine Learning Toolbox (DataCamp): Teaches the “big ideas” in machine learning. Uses R. 24 videos and 88 exercises with an estimated timeline of four hours. Machine Learning with the Experts: School Budgets (DataCamp): A case study from a machine learning competition on DrivenData. Involves building a model to automatically classify items in a school’s budget. DataCamp’s “Supervised Learning with scikit-learn” is a prerequisite. Fifteen videos and 51 exercises with an estimated timeline of four hours. Unsupervised Learning in Python (DataCamp): Covers a variety of unsupervised learning algorithms using Python, scikit-learn, and scipy. The course ends with students building a recommender system to recommend popular musical artists. Thirteen videos and 52 exercises with an estimated timeline of four hours. Machine Learning (Tom Mitchell/Carnegie Mellon University): Carnegie Mellon’s graduate introductory machine learning course. A prerequisite to their second graduate level course, “Statistical Machine Learning.” Taped university lectures with practice problems, homework assignments, and a midterm (all with solutions) posted online. A 2011 version of the course also exists. CMU is one of the best graduate schools for studying machine learning and has a whole department dedicated to ML. Free. Statistical Machine Learning (Larry Wasserman/Carnegie Mellon University): Likely the most advanced course in this guide. A follow-up to Carnegie Mellon’s Machine Learning course. Taped university lectures with practice problems, homework assignments, and a midterm (all with solutions) posted online. Free. Undergraduate Machine Learning (Nando de Freitas/University of British Columbia): An undergraduate machine learning course. Lectures are filmed and put on YouTube with the slides posted on the course website. The course assignments are posted as well (no solutions, though). de Freitas is now a full-time professor at the University of Oxford and receives praise for his teaching abilities in various forums. Graduate version available (see below). Machine Learning (Nando de Freitas/University of British Columbia): A graduate machine learning course. The comments in de Freitas’ undergraduate course (above) apply here as well. This is the fifth of a six-piece series that covers the best online courses for launching yourself into the data science field. We covered programming in the first article, statistics and probability in the second article, intros to data science in the third article, and data visualization in the fourth. The final piece will be a summary of those articles, plus the best online courses for other key topics such as data wrangling, databases, and even software engineering. If you’re looking for a complete list of Data Science online courses, you can find them on Class Central’s Data Science and Big Data subject page. If you enjoyed reading this, check out some of Class Central’s other pieces: If you have suggestions for courses I missed, let me know in the responses! If you found this helpful, click the 💚 so more people will see it here on Medium. This is a condensed version of my original article published on Class Central, where I’ve included detailed course syllabi. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Curriculum Lead, Projects @ DataCamp. I created my own data science master’s program. Our community publishes stories worth reading on development, design, and data science. 
" Vishal Maini,32K,10,https://medium.com/machine-learning-for-humans/why-machine-learning-matters-6164faf1df12?source=tag_archive---------2----------------,A Beginner’s Guide to AI/ML 🤖👶 – Machine Learning for Humans – Medium,"Part 1: Why Machine Learning Matters. The big picture of artificial intelligence and machine learning — past, present, and future. Part 2.1: Supervised Learning. Learning with an answer key. Introducing linear regression, loss functions, overfitting, and gradient descent. Part 2.2: Supervised Learning II. Two methods of classification: logistic regression and SVMs. Part 2.3: Supervised Learning III. Non-parametric learners: k-nearest neighbors, decision trees, random forests. Introducing cross-validation, hyperparameter tuning, and ensemble models. Part 3: Unsupervised Learning. Clustering: k-means, hierarchical. Dimensionality reduction: principal components analysis (PCA), singular value decomposition (SVD). Part 4: Neural Networks & Deep Learning. Why, where, and how deep learning works. Drawing inspiration from the brain. Convolutional neural networks (CNNs), recurrent neural networks (RNNs). Real-world applications. Part 5: Reinforcement Learning. Exploration and exploitation. Markov decision processes. Q-learning, policy learning, and deep reinforcement learning. The value learning problem. Appendix: The Best Machine Learning Resources. A curated list of resources for creating your machine learning curriculum. This guide is intended to be accessible to anyone. Basic concepts in probability, statistics, programming, linear algebra, and calculus will be discussed, but it isn’t necessary to have prior knowledge of them to gain value from this series. Artificial intelligence will shape our future more powerfully than any other innovation this century. Anyone who does not understand it will soon find themselves feeling left behind, waking up in a world full of technology that feels more and more like magic. The rate of acceleration is already astounding. After a couple of AI winters and periods of false hope over the past four decades, rapid advances in data storage and computer processing power have dramatically changed the game in recent years. In 2015, Google trained a conversational agent (AI) that could not only convincingly interact with humans as a tech support helpdesk, but also discuss morality, express opinions, and answer general facts-based questions. The same year, DeepMind developed an agent that surpassed human-level performance at 49 Atari games, receiving only the pixels and game score as inputs. Soon after, in 2016, DeepMind obsoleted their own achievement by releasing a new state-of-the-art gameplay method called A3C. Meanwhile, AlphaGo defeated one of the best human players at Go — an extraordinary achievement in a game dominated by humans for two decades after machines first conquered chess. Many masters could not fathom how it would be possible for a machine to grasp the full nuance and complexity of this ancient Chinese war strategy game, with its 10170 possible board positions (there are only 1080atoms in the universe). In March 2017, OpenAI created agents that invented their own language to cooperate and more effectively achieve their goal. Soon after, Facebook reportedly successfully training agents to negotiate and even lie. Just a few days ago (as of this writing), on August 11, 2017, OpenAI reached yet another incredible milestone by defeating the world’s top professionals in 1v1 matches of the online multiplayer game Dota 2. 
Much of our day-to-day technology is powered by artificial intelligence. Point your camera at the menu during your next trip to Taiwan and the restaurant’s selections will magically appear in English via the Google Translate app. Today AI is used to design evidence-based treatment plans for cancer patients, instantly analyze results from medical tests to escalate to the appropriate specialist immediately, and conduct scientific research for drug discovery. In everyday life, it’s increasingly commonplace to discover machines in roles traditionally occupied by humans. Really, don’t be surprised if a little housekeeping delivery bot shows up instead of a human next time you call the hotel desk to send up some toothpaste. In this series, we’ll explore the core machine learning concepts behind these technologies. By the end, you should be able to describe how they work at a conceptual level and be equipped with the tools to start building similar applications yourself. Artificial intelligence is the study of agents that perceive the world around them, form plans, and make decisions to achieve their goals. Its foundations include mathematics, logic, philosophy, probability, linguistics, neuroscience, and decision theory. Many fields fall under the umbrella of AI, such as computer vision, robotics, machine learning, and natural language processing. Machine learning is a subfield of artificial intelligence. Its goal is to enable computers to learn on their own. A machine’s learning algorithm enables it to identify patterns in observed data, build models that explain the world, and predict things without having explicit pre-programmed rules and models. The technologies discussed above are examples of artificial narrow intelligence (ANI), which can effectively perform a narrowly defined task. Meanwhile, we’re continuing to make foundational advances towards human-level artificial general intelligence (AGI), also known as strong AI. The definition of an AGI is an artificial intelligence that can successfully perform any intellectual task that a human being can, including learning, planning and decision-making under uncertainty, communicating in natural language, making jokes, manipulating people, trading stocks, or... reprogramming itself. And this last one is a big deal. Once we create an AI that can improve itself, it will unlock a cycle of recursive self-improvement that could lead to an intelligence explosion over some unknown time period, ranging from many decades to a single day. You may have heard this point referred to as the singularity. The term is borrowed from the gravitational singularity that occurs at the center of a black hole, an infinitely dense one-dimensional point where the laws of physics as we understand them start to break down. A recent report by the Future of Humanity Institute surveyed a panel of AI researchers on timelines for AGI, and found that “researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years” (Grace et al, 2017). We’ve personally spoken with a number of sane and reasonable AI practitioners who predict much longer timelines (the upper limit being “never”), and others whose timelines are alarmingly short — as little as a few years. The advent of greater-than-human-level artificial superintelligence (ASI) could be one of the best or worst things to happen to our species. It carries with it the immense challenge of specifying what AIs will want in a way that is friendly to humans. 
While it’s impossible to say what the future holds, one thing is certain: 2017 is a good time to start understanding how machines think. To go beyond the abstractions of a philosopher in an armchair and intelligently shape our roadmaps and policies with respect to AI, we must engage with the details of how machines see the world — what they “want”, their potential biases and failure modes, their temperamental quirks — just as we study psychology and neuroscience to understand how humans learn, decide, act, and feel. Machine learning is at the core of our journey towards artificial general intelligence, and in the meantime, it will change every industry and have a massive impact on our day-to-day lives. That’s why we believe it’s worth understanding machine learning, at least at a conceptual level — and we designed this series to be the best place to start. You don’t necessarily need to read the series cover-to-cover to get value out of it. Here are three suggestions on how to approach it, depending on your interests and how much time you have: Vishal most recently led growth at Upstart, a lending platform that utilizes machine learning to price credit, automate the borrowing process, and acquire users. He spends his time thinking about startups, applied cognitive science, moral philosophy, and the ethics of artificial intelligence. Samer is a Master’s student in Computer Science and Engineering at UCSD and co-founder of Conigo Labs. Prior to grad school, he founded TableScribe, a business intelligence tool for SMBs, and spent two years advising Fortune 100 companies at McKinsey. Samer previously studied Computer Science and Ethics, Politics, and Economics at Yale. Most of this series was written during a 10-day trip to the United Kingdom in a frantic blur of trains, planes, cafes, pubs and wherever else we could find a dry place to sit. Our aim was to solidify our own understanding of artificial intelligence, machine learning, and how the methods therein fit together — and hopefully create something worth sharing in the process. And now, without further ado, let’s dive into machine learning with Part 2.1: Supervised Learning! More from Machine Learning for Humans 🤖👶 A special thanks to Jonathan Eng, Edoardo Conti, Grant Schneider, Sunny Kumar, Stephanie He, Tarun Wadhwa, and Sachin Maini (series editor) for their significant contributions and feedback. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Research comms @DeepMindAI. Previously @Upstart, @Yale, @TrueVenturesTEC. Demystifying artificial intelligence & machine learning. Discussions on safe and intentional application of AI for positive social impact. " Tim Anglade,7K,23,https://medium.com/@timanglade/how-hbos-silicon-valley-built-not-hotdog-with-mobile-tensorflow-keras-react-native-ef03260747f3?source=tag_archive---------3----------------,"How HBO’s Silicon Valley built “Not Hotdog” with mobile TensorFlow, Keras & React Native","The HBO show Silicon Valley released a real AI app that identifies hotdogs — and not hotdogs — like the one shown on season 4’s 4th episode (the app is now available on Android as well as iOS!) To achieve this, we designed a bespoke neural architecture that runs directly on your phone, and trained it with Tensorflow, Keras & Nvidia GPUs. While the use-case is farcical, the app is an approachable example of both deep learning, and edge computing. All AI work is powered 100% by the user’s device, and images are processed without ever leaving their phone. 
This provides users with a snappier experience (no round trip to the cloud), offline availability, and better privacy. This also allows us to run the app at a cost of $0, even under the load of a million users, providing significant savings compared to traditional cloud-based AI approaches. The app was developed in-house by the show, by a single developer, running on a single laptop & attached GPU, using hand-curated data. In that respect, it may provide a sense of what can be achieved today, with a limited amount of time & resources, by non-technical companies, individual developers, and hobbyists alike. In that spirit, this article attempts to give a detailed overview of steps involved to help others build their own apps. If you haven’t seen the show or tried the app (you should!), the app lets you snap a picture and then tells you whether it thinks that image is of a hotdog or not. It’s a straightforward use-case, that pays homage to recent AI research and applications, in particular ImageNet. While we’ve probably dedicated more engineering resources to recognizing hotdogs than anyone else, the app still fails in horrible and/or subtle ways. Conversely, it’s also sometimes able to recognize hotdogs in complex situations... According to Engadget, “It’s incredible. I’ve had more success identifying food with the app in 20 minutes than I have had tagging and identifying songs with Shazam in the past two years.” Have you ever found yourself reading Hacker News, thinking “they raised a 10M series A for that? I could build it in one weekend!” This app probably feels a lot like that, and the initial prototype was indeed built in a single weekend using Google Cloud Platform’s Vision API, and React Native. But the final app we ended up releasing on the app store required months of additional (part-time) work, to deliver meaningful improvements that would be difficult for an outsider to appreciate. We spent weeks optimizing overall accuracy, training time, inference time, iterating on our setup & tooling so we could have a faster development iterations, and spent a whole weekend optimizing the user experience around iOS & Android permissions (don’t even get me started on that one). All too often technical blog posts or academic papers skip over this part, preferring to present the final chosen solution. In the interest of helping others learn from our mistake & choices, we will present an abridged view of the approaches that didn’t work for us, before we describe the final architecture we ended up shipping in the next section. We chose React Native to build the prototype as it would give us an easy sandbox to experiment with, and would help us quickly support many devices. The experience ended up being a good one and we kept React Native for the remainder of the project: it didn’t always make things easy, and the design for the app was purposefully limited, but in the end React Native got the job done. The other main component we used for the prototype — Google Cloud’s Vision API was quickly abandoned. There were 3 main factors: For these reasons, we started experimenting with what’s trendily called “edge computing”, which for our purposes meant that after training our neural network on our laptop, we would export it and embed it directly into our mobile app, so that the neural network execution phase (or inference) would run directly inside the user’s phone. 
Through a chance encounter with Pete Warden of the TensorFlow team, we had become aware of its ability to run TensorFlow directly embedded on an iOS device, and started exploring that path. After React Native, TensorFlow became the second fixed part of our stack. It only took a day of work to integrate TensorFlow’s Objective-C++ camera example in our React Native shell. It took slightly longer to use their transfer learning script, which helps you retrain the Inception architecture to deal with a more specific image problem. Inception is the name of a family of neural architectures built by Google to deal with image recognition problems. Inception is available “pre-trained”, which means the training phase has been completed and the weights are set. Most often, image recognition networks have been trained on ImageNet, a dataset containing over 20,000 different types of objects (hotdogs are one of them). However, much like Google Cloud’s Vision API, ImageNet training rewards breadth as much as depth here, and out-of-the-box accuracy on a single one of the 20,000+ categories can be lacking. As such, retraining (also called “transfer learning”) aims to take a fully-trained neural net, and retrain it to perform better on the specific problem you’d like to handle. This usually involves some degree of “forgetting”, either by excising entire layers from the stack, or by slowly erasing the network’s ability to distinguish a type of object (e.g. chairs) in favor of better accuracy at recognizing the one you care about (i.e. hotdogs). While the network (Inception in this case) may have been trained on the 14M images contained in ImageNet, we were able to retrain it on just a few thousand hotdog images to get drastically enhanced hotdog recognition. The big advantage of transfer learning is that you get better results much faster, and with less data, than if you train from scratch. A full training might take months on multiple GPUs and require millions of images, while retraining can conceivably be done in hours on a laptop with a couple thousand images. One of the biggest challenges we encountered was understanding exactly what should count as a hotdog and what should not. Defining what a “hotdog” is ends up being surprisingly difficult (do cut-up sausages count, and if so, which kinds?) and subject to cultural interpretation. Similarly, the “open world” nature of our problem meant we had to deal with an almost infinite number of inputs. While certain computer-vision problems have relatively limited inputs (say, x-rays of bolts with or without a mechanical defect), we had to prepare the app to be fed selfies, nature shots and any number of foods. Suffice to say, this approach was promising and did lead to some improved results; however, it had to be abandoned for a couple of reasons. First, the nature of our problem meant a strong imbalance in the training data: there are many more examples of things that are not hotdogs than things that are hotdogs. In practice this means that if you train your algorithm on 3 hotdog images and 97 non-hotdog images, and it recognizes 0% of the former but 100% of the latter, it will still score 97% accuracy by default! This was not straightforward to solve out of the box using TensorFlow’s retrain tool, and basically necessitated setting up a deep learning model from scratch, importing weights, and training in a more controlled manner.
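As a rough sketch of what that retraining looks like in Keras (not the show’s actual code), you take Inception with its ImageNet weights, drop the 1000-class head, and bolt on a tiny hotdog / not-hotdog classifier; only the new head is trained at first. The layer sizes and choice of optimizer here are illustrative assumptions.

from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

# Start from Inception pre-trained on ImageNet, without its 1000-class top.
base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3))
for layer in base.layers:
    layer.trainable = False          # freeze the pretrained features for now

# Bolt a small binary head onto the frozen feature extractor.
x = GlobalAveragePooling2D()(base.output)
x = Dense(128, activation="relu")(x)
out = Dense(1, activation="sigmoid")(x)   # hotdog vs. not hotdog

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])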
At this point we decided to bite the bullet and get something started with Keras, a deep learning library that provides nicer, easier-to-use abstractions on top of TensorFlow, including pretty awesome training tools and a class_weight option, which is ideal for handling the sort of dataset imbalance we were facing. We used that opportunity to try other popular neural architectures like VGG, but one problem remained: none of them could comfortably fit on an iPhone. They consumed too much memory, which led to app crashes, and would sometimes take up to 10 seconds to compute, which was not ideal from a UX standpoint. Many things were attempted to mitigate that, but in the end these architectures were just too big to run efficiently on mobile. To give you a sense of the timeline, this was roughly the mid-way point of the project. By that time, the UI was 90%+ done and very little of it was going to change. But in hindsight, the neural net was at best 20% done. We had a good sense of the challenges & a good dataset, but 0 lines of the final neural architecture had been written, none of our neural code could reliably run on mobile, and even our accuracy was going to improve drastically in the weeks to come. The problem directly ahead of us was simple: if Inception and VGG were too big, was there a simpler, pre-trained neural network we could retrain? At the suggestion of the always excellent Jeremy P. Howard (where has that guy been all our lives?), we explored Xception, Enet and SqueezeNet. We quickly settled on SqueezeNet due to its explicit positioning as a solution for embedded deep learning, and the availability of a pre-trained Keras model on GitHub (yay open-source). So how big of a difference does this make? An architecture like VGG uses about 138 million parameters (essentially the number of numbers necessary to model the neurons and values between them). Inception is already a massive improvement, requiring only 23 million parameters. SqueezeNet, in comparison, requires only 1.25 million. This has two advantages, though there are tradeoffs of course. During this phase, we started experimenting with tuning the neural network architecture. In particular, we started using Batch Normalization and trying different activation functions. After adding Batch Normalization and ELU to SqueezeNet, we were able to train neural networks that achieved 90%+ accuracy when training from scratch; however, they were relatively brittle, meaning the same network would overfit in some cases, or underfit in others, when confronted with real-life testing. Even adding more examples to the dataset and playing with data augmentation failed to deliver a network that met expectations. So while this phase was promising, and for the first time gave us a functioning app that could run entirely on an iPhone in less than a second, we eventually moved to our 4th & final architecture. Our final architecture was spurred in large part by the publication on April 17 of Google’s MobileNets paper, promising a new neural architecture with Inception-like accuracy on simple problems like ours, with only 4M or so parameters. This meant it sat in an interesting sweet spot between a SqueezeNet that had maybe been overly simplistic for our purposes, and the possibly overwrought elephant-trying-to-squeeze-in-a-tutu of using Inception or VGG on mobile. The paper introduced some capacity to tune the size & complexity of the network specifically to trade memory/CPU consumption against accuracy, which was very much top of mind for us at the time. 
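That size/accuracy knob is exposed as a width multiplier (alpha). The following sketch uses the stock MobileNet implementation that ships in recent versions of keras.applications, purely to illustrate the trade-off; the network that actually shipped in the app was a customized variant, and the input size, alpha values and class count here are examples only.

```python
# Illustration of the MobileNet size/accuracy trade-off via the width multiplier
# (alpha), using the stock keras.applications MobileNet rather than the customized
# network that shipped in the app. Input size, alpha and class count are examples.
from keras.applications.mobilenet import MobileNet

full = MobileNet(input_shape=(224, 224, 3), alpha=1.0, weights=None, classes=2)
slim = MobileNet(input_shape=(224, 224, 3), alpha=0.25, weights=None, classes=2)

print(full.count_params())  # on the order of a few million parameters
print(slim.count_params())  # a fraction of that, at some cost in accuracy
```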
With less than a month to go before the app had to launch, we endeavored to reproduce the paper’s results. This was entirely anticlimactic: within a day of the paper being published, a Keras implementation was already offered publicly on GitHub by Refik Can Malli, a student at Istanbul Technical University, whose work we had already benefitted from when we took inspiration from his excellent Keras SqueezeNet implementation. The depth & openness of the deep learning community, and the presence of talented minds like R.C., are what make deep learning viable for applications today — and what make working in this field more thrilling than any tech trend we’ve been involved with. Our final architecture ended up making significant departures from the MobileNets architecture, and from convention, in a few respects. So how does this stack work exactly? Deep Learning often gets a bad rap for being a “black box”, and while it’s true many components of it can be mysterious, the networks we use often leak information about how some of their magic works. We can look at the layers of this stack and how they activate on specific input images, giving us a sense of each layer’s ability to recognize sausage, buns, or other particularly salient hotdog features. Data quality was of the utmost importance. A neural network can only be as good as the data that trained it, and improving training set quality was probably one of the top 3 things we spent time on during this project; we did several key things to improve it. The final composition of our dataset was 150k images, of which only 3k were hotdogs: there are only so many hotdogs you can look at, but there is no shortage of not-hotdogs to look at. The 49:1 imbalance was dealt with by setting a Keras class weight of 49:1 in favor of hotdogs. Of the remaining 147k images, most were of food, with just 3k photos of non-food items, to help the network generalize a bit more and not get tricked into seeing a hotdog if presented with an image of a human in a red outfit. Our data augmentation parameters were derived intuitively, based on our understanding of the real-life usage of our app, as opposed to careful experimentation. The final key to our data pipeline was using Patrick Rodriguez’s multiprocess image data generator for Keras. While Keras does have a built-in multi-threaded and multiprocess implementation, we found Patrick’s library to be consistently faster in our experiments, for reasons we did not have time to investigate. This library cut our training time to a third of what it used to be. The network was trained using a 2015 MacBook Pro and an attached external GPU (eGPU), specifically an Nvidia GTX 980 Ti (we’d probably buy a 1080 Ti if we were starting today). We were able to train the network on batches of 128 images at a time. The network was trained for a total of 240 epochs, meaning we ran all 150k images through the network 240 times. This took about 80 hours. We trained the network in 3 phases, each with its own learning-rate range. While the learning rates were identified by running the linear experiment recommended by the CLR paper, they also seem to make intuitive sense, in that the max for each phase is within a factor of 2 of the previous minimum, which is aligned with the industry-standard recommendation of halving your learning rate if your accuracy plateaus during training. In the interest of time, we performed some training runs on a Paperspace P5000 instance running Ubuntu. 
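To make that pipeline concrete, here is a hedged sketch of what Keras augmentation, the 49:1 class weight, batches of 128 and three training phases look like in code. It assumes a `model` like the ones sketched earlier; the augmentation ranges, learning rates and directory layout are placeholders, since the post does not list the actual values.

```python
# A hedged sketch of the training loop described above: Keras data augmentation,
# a 49:1 class weight to counter the hotdog/not-hotdog imbalance, batches of 128,
# and three phases totaling 240 epochs. The numeric values are placeholders,
# not the app's actual settings; `model` is assumed to be defined already.
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import SGD

augmenter = ImageDataGenerator(
    rotation_range=20,          # placeholder augmentation parameters
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.2,
    horizontal_flip=True,
)
train_flow = augmenter.flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=128, class_mode="binary")

class_weight = {0: 1.0, 1: 49.0}  # class index 1 assumed to be "hotdog"

for phase_lr, phase_epochs in [(1e-2, 80), (1e-3, 80), (1e-4, 80)]:
    model.compile(optimizer=SGD(lr=phase_lr, momentum=0.9),
                  loss="binary_crossentropy", metrics=["accuracy"])
    model.fit_generator(train_flow, epochs=phase_epochs,
                        steps_per_epoch=len(train_flow),
                        class_weight=class_weight)
```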
For those Paperspace runs, we were able to double the batch size, and found that the optimal learning rates for each phase were roughly double as well. Even having designed a relatively compact neural architecture, and having trained it to handle situations it may find in a mobile context, we had a lot of work left to make it run properly. Trying to run a top-of-the-line neural net architecture out of the box can quickly burn hundreds of megabytes of RAM, which few mobile devices can spare today. Beyond network optimizations, it turns out the way you handle images, or even how you load TensorFlow itself, can have a huge impact on how quickly your network runs, how little RAM it uses, and how crash-free the experience will be for your users. This was maybe the most mysterious part of this project. Relatively little information can be found about it, possibly due to the dearth of production deep learning applications running on mobile devices as of today. However, we must commend the TensorFlow team, and particularly Pete Warden, Andrew Harp and Chad Whipkey, for the existing documentation and their kindness in answering our inquiries. We also looked at using Apple’s built-in deep learning libraries (BNNS, MPSCNN and, later on, CoreML) instead of TensorFlow on iOS. We would have designed the network in Keras, trained it with TensorFlow, exported all the weight values, re-implemented the network with BNNS or MPSCNN (or imported it via CoreML), and loaded the parameters into that new implementation. However, the biggest obstacle was that these new Apple libraries are only available on iOS 10+, and we wanted to support older versions of iOS. As iOS 10+ adoption and these frameworks continue to improve, there may soon no longer be a case for using TensorFlow on device. If you think injecting JavaScript into your app on the fly is cool, try injecting neural nets into your app! The last production trick we used was to leverage CodePush and Apple’s relatively permissive terms of service to live-inject new versions of our neural networks after submission to the app store. While this was mostly done to help us quickly deliver accuracy improvements to our users after release, you could conceivably use this approach to drastically expand or alter the feature set of your app without going through an app store review again. There are a lot of things that didn’t work or that we didn’t have time to do, and these are the ideas we’d investigate in the future. Finally, we’d be remiss not to mention the obvious and important influence of User Experience, Developer Experience and built-in biases in developing an AI app. Each probably deserves its own post (or its own book), but here are the very concrete impacts of these 3 things in our experience. UX (User Experience) is arguably more critical at every stage of the development of an AI app than for a traditional application. There are no Deep Learning algorithms that will give you perfect results right now, but there are many situations where the right mix of Deep Learning + UX will lead to results that are indistinguishable from perfect. A proper UX focus is irreplaceable when it comes to setting developers on the right path to design their neural networks, setting the right expectations for users when they use the app, and gracefully handling the inevitable AI failures. 
Building AI apps without a UX-first mindset is like training a neural net without Stochastic Gradient Descent: you will end up stuck in the local minima of the Uncanny Valley on your way to building the perfect AI use case. DX (Developer Experience) is extremely important as well, because waiting on deep learning training runs is the new waiting for your program to compile. We suggest you heavily favor DX first (hence Keras), as it’s always possible to optimize runtime for later runs (manual GPU parallelization, multi-process data augmentation, TensorFlow pipeline, even re-implementing for caffe2 / PyTorch). Even projects with relatively obtuse APIs & documentation like TensorFlow greatly improve DX by providing a highly-tested, highly-used, well-maintained environment for training & running neural networks. For the same reason, it’s hard to beat both the cost and the flexibility of having your own local GPU for development. Being able to look at and edit images locally, and to edit code with your preferred tool without delays, greatly improves the quality & speed of building AI projects. Most AI apps will hit more critical cultural biases than ours, but as an example, even our straightforward use case caught us flat-footed with built-in biases in our initial dataset, which made the app unable to recognize French-style hotdogs, Asian hotdogs, and other oddities we did not have immediate personal experience with. It’s critical to remember that AIs do not make “better” decisions than humans — they are infected by the same human biases we fall prey to, via the training sets humans provide. Thanks to: Mike Judge, Alec Berg, Clay Tarver, Todd Silverstein, Jonathan Dotan, Lisa Schomas, Amy Solomon, Dorothy Street & Rich Toyon, and all the writers of the show — the app would simply not exist without them. Meaghan, Dana, David, Jay, and everyone at HBO. Scale Venture Partners & GitLab. Rachel Thomas and Jeremy Howard & Fast AI for all that they have taught me, and for kindly reviewing a draft of this post. Check out their free online Deep Learning course; it’s awesome! JP Simard for his help on iOS. And finally, the TensorFlow team & r/MachineLearning for their help & inspiration. ... And thanks to everyone who used & shared the app! It made staring at pictures of hotdogs for months on end totally worth it 😅 A.I., Startups & HBO’s Silicon Valley. Get in touch: timanglade@gmail.com " Sophia Ciocca,53K,9,https://medium.com/s/story/spotifys-discover-weekly-how-machine-learning-finds-your-new-music-19a41ab76efe?source=tag_archive---------4----------------,How Does Spotify Know You So Well? – Member Feature Stories – Medium,"Member Feature Story A software engineer explains the science behind personalized music recommendations Photo by studioEAST/Getty Images This Monday — just like every Monday before it — over 100 million Spotify users found a fresh new playlist waiting for them called Discover Weekly. It’s a custom mixtape of 30 songs they’ve never listened to before but will probably love, and it’s pretty much magic. I’m a huge fan of Spotify, and particularly Discover Weekly. Why? It makes me feel seen. It knows my musical tastes better than any person in my entire life ever has, and I’m consistently delighted by how satisfyingly just right it is every week, with tracks I probably would never have found myself or known I would like. 
For those of you who live under a soundproof rock, let me introduce you to my virtual best friend: As it turns out, I’m not alone in my obsession with Discover Weekly. The user base goes crazy for it, which has driven Spotify to rethink its focus, and invest more resources into algorithm-based playlists. Ever since Discover Weekly debuted in 2015, I’ve been dying to know how it works (What’s more, I’m a Spotify fangirl, so I sometimes like to pretend that I work there and research their products.) After three weeks of mad Googling, I feel like I’ve finally gotten a glimpse behind the curtain. So how does Spotify do such an amazing job of choosing those 30 songs for each person each week? Let’s zoom out for a second to look at how other music services have tackled music recommendations, and how Spotify’s doing it better. Back in the 2000s, Songza kicked off the online music curation scene using manual curation to create playlists for users. This meant that a team of “music experts” or other human curators would put together playlists that they just thought sounded good, and then users would listen to those playlists. (Later, Beats Music would employ this same strategy.) Manual curation worked alright, but it was based on that specific curator’s choices, and therefore couldn’t take into account each listener’s individual music taste. Like Songza, Pandora was also one of the original players in digital music curation. It employed a slightly more advanced approach, instead manually tagging attributes of songs. This meant a group of people listened to music, chose a bunch of descriptive words for each track, and tagged the tracks accordingly. Then, Pandora’s code could simply filter for certain tags to make playlists of similar-sounding music. Around that same time, a music intelligence agency from the MIT Media Lab called The Echo Nest was born, which took a radical, cutting-edge approach to personalized music. The Echo Nest used algorithms to analyze the audio and textual content of music, allowing it to perform music identification, personalized recommendation, playlist creation, and analysis. Finally, taking another approach is Last.fm, which still exists today and uses a process called collaborative filtering to identify music its users might like, but more on that in a moment. So if that’s how other music curation services have handled recommendations, how does Spotify’s magic engine run? How does it seem to nail individual users’ tastes so much more accurately than any of the other services? Spotify doesn’t actually use a single revolutionary recommendation model. Instead, they mix together some of the best strategies used by other services to create their own uniquely powerful discovery engine. To create Discover Weekly, there are three main types of recommendation models that Spotify employs: Let’s dive into how each of these recommendation models work! First, some background: When people hear the words “collaborative filtering,” they generally think of Netflix, as it was one of the first companies to use this method to power a recommendation model, taking users’ star-based movie ratings to inform its understanding of which movies to recommend to other similar users. After Netflix was successful, the use of collaborative filtering spread quickly, and is now often the starting point for anyone trying to make a recommendation model. Unlike Netflix, Spotify doesn’t have a star-based system with which users rate their music. 
Instead, Spotify’s data is implicit feedback — specifically, the stream counts of the tracks and additional streaming data, such as whether a user saved the track to their own playlist, or visited the artist’s page after listening to a song. But what is collaborative filtering, truly, and how does it work? Here’s a high-level rundown, explained in a quick conversation: What’s going on here? Each of these individuals has track preferences: the one on the left likes tracks P, Q, R, and S, while the one on the right likes tracks Q, R, S, and T. Collaborative filtering then uses that data to say: “Hmmm... You both like three of the same tracks — Q, R, and S — so you are probably similar users. Therefore, you’re each likely to enjoy other tracks that the other person has listened to, that you haven’t heard yet.” Therefore, it suggests that the one on the right check out track P — the only track not mentioned, but that his “similar” counterpart enjoyed — and the one on the left check out track T, for the same reasoning. Simple, right? But how does Spotify actually use that concept in practice to calculate millions of users’ suggested tracks based on millions of other users’ preferences? With matrix math, done with Python libraries! In actuality, this matrix you see here is gigantic. Each row represents one of Spotify’s 140 million users — if you use Spotify, you yourself are a row in this matrix — and each column represents one of the 30 million songs in Spotify’s database. Then, the Python library runs this long, complicated matrix factorization formula: When it finishes, we end up with two types of vectors, represented here by X and Y. X is a user vector, representing one single user’s taste, and Y is a song vector, representing one single song’s profile. Now we have 140 million user vectors and 30 million song vectors. The actual content of these vectors is just a bunch of numbers that are essentially meaningless on their own, but are hugely useful when compared. To find out which users’ musical tastes are most similar to mine, collaborative filtering compares my vector with all of the other users’ vectors, ultimately spitting out which users are the closest matches. The same goes for the Y vector, songs: you can compare a single song’s vector with all the others, and find out which songs are most similar to the one in question. Collaborative filtering does a pretty good job, but Spotify knew they could do even better by adding another engine. Enter NLP. The second type of recommendation models that Spotify employs are Natural Language Processing (NLP) models. The source data for these models, as the name suggests, are regular ol’ words: track metadata, news articles, blogs, and other text around the internet. Natural Language Processing, which is the ability of a computer to understand human speech as it is spoken, is a vast field unto itself, often harnessed through sentiment analysis APIs. The exact mechanisms behind NLP are beyond the scope of this article, but here’s what happens on a very high level: Spotify crawls the web constantly looking for blog posts and other written text about music to figure out what people are saying about specific artists and songs — which adjectives and what particular language is frequently used in reference to those artists and songs, and which other artists and songs are also being discussed alongside them. 
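Before looking at how that scraped text gets processed, here is a toy Python sketch of the collaborative-filtering step described earlier: factorize an implicit-feedback play-count matrix into user vectors and song vectors, then compare vectors with cosine similarity. Spotify’s production system is vastly larger and uses more sophisticated implicit-feedback factorization; the tiny matrix, the use of plain SVD and the choice of two latent dimensions here are illustrative assumptions.

```python
# A toy illustration of collaborative filtering on implicit feedback (play counts):
# factorize the matrix into user vectors and song vectors, then compare vectors
# with cosine similarity. Spotify's real pipeline is far larger and more advanced.
import numpy as np

# rows = users, columns = songs, values = play counts (implicit feedback)
plays = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

# Truncated SVD as a stand-in for matrix factorization: plays ≈ U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(plays, full_matrices=False)
k = 2                              # number of latent dimensions (assumed)
user_vectors = U[:, :k] * s[:k]    # one taste vector per user
song_vectors = Vt[:k, :].T         # one profile vector per song

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Which users have tastes closest to user 0?
print([cosine(user_vectors[0], v) for v in user_vectors[1:]])
```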
While I don’t know the specifics of how Spotify chooses to then process this scraped data, I can offer some insight based on how the Echo Nest used to work with them. They would bucket Spotify’s data up into what they call “cultural vectors” or “top terms.” Each artist and song had thousands of top terms that changed on the daily. Each term had an associated weight, which correlated to its relative importance — roughly, the probability that someone will describe the music or artist with that term. Then, much like in collaborative filtering, the NLP model uses these terms and weights to create a vector representation of the song that can be used to determine if two pieces of music are similar. Cool, right? First, a question. You might be thinking: First of all, adding a third model further improves the accuracy of the music recommendation service. But this model also serves a secondary purpose: unlike the first two types, raw audio models take new songs into account. Take, for example, a song your singer-songwriter friend has put up on Spotify. Maybe it only has 50 listens, so there are few other listeners to collaboratively filter it against. It also isn’t mentioned anywhere on the internet yet, so NLP models won’t pick it up. Luckily, raw audio models don’t discriminate between new tracks and popular tracks, so with their help, your friend’s song could end up in a Discover Weekly playlist alongside popular songs! But how can we analyze raw audio data, which seems so abstract? With convolutional neural networks! Convolutional neural networks are the same technology used in facial recognition software. In Spotify’s case, they’ve been modified for use on audio data instead of pixels. Here’s an example of a neural network architecture: This particular neural network has four convolutional layers, seen as the thick bars on the left, and three dense layers, seen as the more narrow bars on the right. The inputs are time-frequency representations of audio frames, which are then concatenated, or linked together, to form the spectrogram. The audio frames go through these convolutional layers, and after passing through the last one, you can see a “global temporal pooling” layer, which pools across the entire time axis, effectively computing statistics of the learned features across the time of the song. After processing, the neural network spits out an understanding of the song, including characteristics like estimated time signature, key, mode, tempo, and loudness. Below is a plot of data for a 30-second snippet of “Around the World” by Daft Punk. Ultimately, this reading of the song’s key characteristics allows Spotify to understand fundamental similarities between songs and therefore which users might enjoy them, based on their own listening history. That covers the basics of the three major types of recommendation models feeding Spotify’s Recommendations Pipeline, and ultimately powering the Discover Weekly playlist! Of course, these recommendation models are all connected to Spotify’s larger ecosystem, which includes giant amounts of data storage and uses lots of Hadoop clusters to scale recommendations and make these engines work on enormous matrices, endless online music articles, and huge numbers of audio files. I hope this was informative and piqued your curiosity like it did mine. For now, I’ll be working my way through my own Discover Weekly, finding my new favorite music while appreciating all the machine learning that’s going on behind the scenes. 
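As a rough illustration of the audio model described above, here is a minimal Keras sketch of a convolutional network over a spectrogram input with global pooling across the time axis. The layer sizes, the assumed mel-spectrogram shape and the 40-unit output are placeholders, not Spotify’s actual architecture.

```python
# A minimal sketch of a convolutional model over spectrogram frames with global
# temporal pooling, in the spirit of the architecture described above. Shapes and
# layer sizes are illustrative assumptions, not Spotify's actual model.
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, GlobalAveragePooling1D, Dense

time_steps, mel_bins = 599, 128  # ~30 s of audio as a mel-spectrogram (assumed shape)

model = Sequential([
    Conv1D(64, 4, activation="relu", input_shape=(time_steps, mel_bins)),
    MaxPooling1D(4),
    Conv1D(128, 4, activation="relu"),
    MaxPooling1D(2),
    Conv1D(128, 4, activation="relu"),
    Conv1D(256, 4, activation="relu"),
    GlobalAveragePooling1D(),         # "global temporal pooling" across the time axis
    Dense(1024, activation="relu"),
    Dense(1024, activation="relu"),
    Dense(40, activation="sigmoid"),  # e.g. latent factors or tags to predict
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```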
🎶 Thanks also to ladycollective for reading this article and suggesting edits. Software engineer, writer, and generally creative human. Interested in art, feminism, mindfulness, and authenticity. http://sophiaciocca.com " François Chollet,35K,18,https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec?source=tag_archive---------5----------------,The impossibility of intelligence explosion – François Chollet – Medium,"In 1965, I. J. Good described for the first time the notion of “intelligence explosion”, as it relates to artificial intelligence (AI): Decades later, the concept of an “intelligence explosion” — leading to the sudden rise of “superintelligence” and the accidental end of the human race — has taken hold in the AI community. Famous business leaders are casting it as a major risk, greater than nuclear war or climate change. Average graduate students in machine learning are endorsing it. In a 2015 email survey targeting AI researchers, 29% of respondents answered that intelligence explosion was “likely” or “highly likely”. A further 21% considered it a serious possibility. The basic premise is that, in the near future, a first “seed AI” will be created, with general problem-solving abilities slightly surpassing those of humans. This seed AI would start designing better AIs, initiating a recursive self-improvement loop that would immediately leave human intelligence in the dust, overtaking it by orders of magnitude in a short time. Proponents of this theory also regard intelligence as a kind of superpower, conferring on its holders almost supernatural capabilities to shape their environment — as seen in the science-fiction movie Transcendence (2014), for instance. Superintelligence would thus imply near-omnipotence, and would pose an existential threat to humanity. This science-fiction narrative contributes to the dangerously misleading public debate that is ongoing about the risks of AI and the need for AI regulation. In this post, I argue that intelligence explosion is impossible — that the notion of intelligence explosion comes from a profound misunderstanding of both the nature of intelligence and the behavior of recursively self-augmenting systems. I attempt to base my points on concrete observations about intelligent systems and recursive systems. The reasoning behind intelligence explosion, like many of the early theories about AI that arose in the 1960s and 1970s, is sophistic: it considers “intelligence” in a completely abstract way, disconnected from its context, and ignores available evidence about both intelligent systems and recursively self-improving systems. It doesn’t have to be that way. We are, after all, on a planet that is literally packed with intelligent systems (including us) and self-improving systems, so we can simply observe them and learn from them to answer the questions at hand, instead of coming up with evidence-free circular reasoning. To talk about intelligence and its possible self-improving properties, we should first introduce the necessary background and context. What are we talking about when we talk about intelligence? 
Precisely defining intelligence is in itself a challenge. The intelligence explosion narrative equates intelligence with the general problem-solving ability displayed by individual intelligent agents — by current human brains, or future electronic brains. This is not quite the full picture, so let’s use this definition as a starting point, and expand on it. The first issue I see with the intelligence explosion theory is a failure to recognize that intelligence is necessarily part of a broader system — a vision of intelligence as a “brain in jar” that can be made arbitrarily intelligent independently of its situation. A brain is just a piece of biological tissue, there is nothing intrinsically intelligent about it. Beyond your brain, your body and senses — your sensorimotor affordances — are a fundamental part of your mind. Your environment is a fundamental part of your mind. Human culture is a fundamental part of your mind. These are, after all, where all of your thoughts come from. You cannot dissociate intelligence from the context in which it expresses itself. In particular, there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems. If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem. In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human. What would happen if we were to put a freshly-created human brain in the body of an octopus, and let in live at the bottom of the ocean? Would it even learn to use its eight-legged body? Would it survive past a few days? We cannot perform this experiment, but we do know that cognitive development in humans and animals is driven by hardcoded, innate dynamics. Human babies are born with an advanced set of reflex behaviors and innate learning templates that drive their early sensorimotor development, and that are fundamentally intertwined with the structure of the human sensorimotor space. The brain has hardcoded conceptions of having a body with hands that can grab, a mouth that can suck, eyes mounted on a moving head that can be used to visually follow objects (the vestibulo-ocular reflex), and these preconceptions are required for human intelligence to start taking control of the human body. It has even been convincingly argued, for instance by Chomsky, that very high-level human cognitive features, such as our ability to develop language, are innate. Similarly, one can imagine that the octopus has its own set of hardcoded cognitive primitives required in order to learn how to use an octopus body and survive in its octopus environment. The brain of a human is hyper specialized in the human condition — an innate specialization extending possibly as far as social behaviors, language, and common sense — and the brain of an octopus would likewise be hyper specialized in octopus behaviors. A human baby brain properly grafted in an octopus body would most likely fail to adequately take control of its unique sensorimotor space, and would quickly die off. 
Not so smart now, Mr. Superior Brain. What would happen if we were to put a human — brain and body — into an environment that does not feature human culture as we know it? Would Mowgli the man-cub, raised by a pack of wolves, grow up to outsmart his canine siblings? To be smart like us? And if we swapped baby Mowgli with baby Einstein, would he eventually educate himself into developing grand theories of the universe? Empirical evidence is relatively scarce, but from what we know, children that grow up outside of the nurturing environment of human culture don’t develop any human intelligence. Feral children raised in the wild from their earliest years become effectively animals, and can no longer acquire human behaviors or language when returning to civilization. Saturday Mthiyane, raised by monkeys in South Africa and found at five, kept behaving like a monkey into adulthood — jumping and walking on all four, incapable of language, and refusing to eat cooked food. Feral children who have human contact for at least some of their most formative years tend to have slightly better luck with reeducation, although they rarely graduate to fully-functioning humans. If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt. Intelligence expansion can only come from a co-evolution of the mind, its sensorimotor modalities, and its environment. If the gears of your brain were the defining factor of your problem-solving ability, then those rare humans with IQs far outside the normal range of human intelligence would live lives far outside the scope of normal lives, would solve problems previously thought unsolvable, and would take over the world — just as some people fear smarter-than-human AI will do. In practice, geniuses with exceptional cognitive abilities usually live overwhelmingly banal lives, and very few of them accomplish anything of note. In Terman’s landmark “Genetic Studies of Genius”, he notes that most of his exceptionally gifted subjects would pursue occupations “as humble as those of policeman, seaman, typist and filing clerk”. There are currently about seven million people with IQs higher than 150 — better cognitive ability than 99.9% of humanity — and mostly, these are not the people you read about in the news. Of the people who have actually attempted to take over the world, hardly any seem to have had an exceptional intelligence; anecdotally, Hitler was a high-school dropout, who failed to get into the Vienna Academy of Art — twice. People who do end up making breakthroughs on hard problems do so through a combination of circumstances, character, education, intelligence, and they make their breakthroughs through incremental improvement over the work of their predecessors. Success — expressed intelligence — is sufficient ability meeting a great problem at the right time. Most of these remarkable problem-solvers are not even that clever — their skills seem to be specialized in a given field and they typically do not display greater-than-average abilities outside of their own domain. Some people achieve more because they were better team players, or had more grit and work ethic, or greater imagination. 
Some just happened to have lived in the right context, to have the right conversation at the right time. Intelligence is fundamentally situational. Intelligence is not a superpower; exceptional intelligence does not, on its own, confer you with proportionally exceptional power over your circumstances. However, it is a well-documented fact that raw cognitive ability — as measured by IQ, which may be debatable — correlates with social attainment for slices of the spectrum that are close to the mean. This was first evidenced in Terman’s study, and later confirmed by others — for instance, an extensive 2006 metastudy by Strenze found a visible, if somewhat weak, correlation between IQ and socioeconomic success. So, a person with an IQ of 130 is statistically far more likely to succeed in navigating the problem of life than a person with an IQ of 70 — although this is never guaranteed at the individual level — but here’s the thing: this correlation breaks down after a certain point. There is no evidence that a person with an IQ of 170 is in any way more likely to achieve a greater impact in their field than a person with an IQ of 130. In fact, many of the most impactful scientists tend to have had IQs in the 120s or 130s — Feynman reported 126, James Watson, co-discoverer of DNA, 124 — which is exactly the same range as legions of mediocre scientists. At the same time, of the roughly 50,000 humans alive today who have astounding IQs of 170 or higher, how many will solve any problem a tenth as significant as Professor Watson? Why would the real-world utility of raw cognitive ability stall past a certain threshold? This points to a very intuitive fact: that high attainment requires sufficient cognitive ability, but that the current bottleneck to problem-solving, to expressed intelligence, is not latent cognitive ability itself. The bottleneck is our circumstances. Our environment, which determines how our intelligence manifests itself, puts a hard limit on what we can do with our brains — on how intelligent we can grow up to be, on how effectively we can leverage the intelligence that we develop, on what problems we can solve. All evidence points to the fact that our current environment, much like past environments over the previous 200,000 years of human history and prehistory, does not allow high-intelligence individuals to fully develop and utilize their cognitive potential. A high-potential human 10,000 years ago would have been raised in a low-complexity environment, likely speaking a single language with fewer than 5,000 words, would never have been taught to read or write, would have been exposed to a limited amount of knowledge and to few cognitive challenges. The situation is a bit better for most contemporary humans, but there is no indication that our environmental opportunities currently outpace our cognitive potential. A smart human raised in the jungle is but a hairless ape. Similarly, an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human. If it could, then exceptionally high-IQ humans would already be displaying proportionally exceptional levels of personal attainment; they would achieve exceptional levels of control over their environment, and solve major outstanding problems— which they don’t in practice. 
It’s not just that our bodies, senses, and environment determine how much intelligence our brains can develop — crucially, our biological brains are just a small part of our whole intelligence. Cognitive prosthetics surround us, plugging into our brain and extending its problem-solving capabilities. Your smartphone. Your laptop. Google search. The cognitive tools your were gifted in school. Books. Other people. Mathematical notation. Programing. The most fundamental of all cognitive prosthetics is of course language itself — essentially an operating system for cognition, without which we couldn’t think very far. These things are not merely knowledge to be fed to the brain and used by it, they are literally external cognitive processes, non-biological ways to run threads of thought and problem-solving algorithms — across time, space, and importantly, across individuality. These cognitive prosthetics, not our brains, are where most of our cognitive abilities reside. We are our tools. An individual human is pretty much useless on its own — again, humans are just bipedal apes. It’s a collective accumulation of knowledge and external systems over thousands of years — what we call “civilization” — that has elevated us above our animal nature. When a scientist makes a breakthrough, the thought processes they are running in their brain are just a small part of the equation — the researcher offloads large extents of the problem-solving process to computers, to other researchers, to paper notes, to mathematical notation, etc. And they are only able to succeed because they are standing on the shoulder of giants — their own work is but one last subroutine in a problem-solving process that spans decades and thousands of individuals. Their own individual cognitive work may not be much more significant to the whole process than the work of a single transistor on a chip. An overwhelming amount of evidence points to this simple fact: a single human brain, on its own, is not capable of designing a greater intelligence than itself. This is a purely empirical statement: out of billions of human brains that have come and gone, none has done so. Clearly, the intelligence of a single human, over a single lifetime, cannot design intelligence, or else, over billions of trials, it would have already occurred. However, these billions of brains, accumulating knowledge and developing external intelligent processes over thousand of years, implement a system — civilization — which may eventually lead to artificial brains with greater intelligence than that of a single human. It is civilization as a whole that will create superhuman AI, not you, nor me, nor any individual. A process involving countless humans, over timescales we can barely comprehend. A process involving far more externalized intelligence — books, computers, mathematics, science, the internet — than biological intelligence. On an individual level, we are but vectors of civilization, building upon previous work and passing on our findings. We are the momentary transistors on which the problem-solving algorithm of civilization runs. Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can. Answering “yes” would fly in the face of everything we know — again, remember that no human, nor any intelligent entity that we know of, has ever designed anything smarter than itself. 
What we do is, gradually, collectively, build external problem-solving systems that are greater than ourselves. However, future AIs, much like humans and the other intelligent systems we’ve produced so far, will contribute to our civilization, and our civilization, in turn, will use them to keep expanding the capabilities of the AIs it produces. AI, in this sense, is no different than computers, or books, or language itself: it’s a technology that empowers our civilization. The advent of superhuman AI will thus be no more of a singularity than the advent of computers, or books, or language. Civilization will develop AI, and just march on. Civilization will eventually transcend what we are now, much like it has transcended what we were 10,000 years ago. It’s a gradual process, not a sudden shift. The basic premise of intelligence explosion — that a “seed AI” will arise, with greater-than-human problem solving ability, leading to a sudden, recursive, runaway intelligence improvement loop — is false. Our problem-solving abilities (in particular, our ability to design AI) are already constantly improving, because these abilities do not reside primarily in our biological brains, but in our external, collective tools. The recursive loop has been in action for a long time, and the rise of “better brains” will not qualitatively affect it — no more than any previous intelligence-enhancing technology. Our brains themselves were never a significant bottleneck in the AI-design process. In this case, you may ask, isn’t civilization itself the runaway self-improving brain? Is our civilizational intelligence exploding? No. Crucially, the civilization-level intelligence-improving loop has only resulted in measurably linear progress in our problem-solving abilities over time. Not an explosion. But why? Wouldn’t recursively improving X mathematically result in X growing exponentially? No — in short, because no complex real-world system can be modeled as `X(t + 1) = X(t) * a, a > 1`. No system exists in a vacuum, and especially not intelligence, nor human civilization. We don’t have to speculate about whether an “explosion” would happen the moment an intelligent system starts optimizing its own intelligence. As it happens, most systems are recursively self-improving. We’re surrounded with them. So we know exactly how such systems behave — in a variety of contexts and over a variety of timescales. You are, yourself, a recursively self-improving system: educating yourself makes you smarter, in turn allowing you to educate yourself more efficiently. Likewise, human civilization is recursively self-improving, over a much longer timescale. Mechatronics is recursively self-improving — better manufacturing robots can manufacture better manufacturing robots. Military empires are recursively self-expanding — the larger your empire, the greater your military means to expand it further. Personal investing is recursively self-improving — the more money you have, the more money you can make. Examples abound. Consider, for instance, software. Writing software obviously empowers software-writing: first, we programmed compilers, that could perform “automated programming”, then we used compilers to develop new languages implementing more powerful programming paradigms. We used these languages to develop advanced developer tools — debuggers, IDEs, linters, bug predictors. In the future, software will even write itself. And what is the end result of this recursively self-improving process? 
Can you do 2x more with your the software on your computer than you could last year? Will you be able to do 2x more next year? Arguably, the usefulness of software has been improving at a measurably linear pace, while we have invested exponential efforts into producing it. The number of software developers has been booming exponentially for decades, and the number of transistors on which we are running our software has been exploding as well, following Moore’s law. Yet, our computers are only incrementally more useful to us than they were in 2012, or 2002, or 1992. But why? Primarily, because the usefulness of software is fundamentally limited by the context of its application — much like intelligence is both defined and limited by the context in which it expresses itself. Software is just one cog in a bigger process — our economies, our lives — just like your brain is just one cog in a bigger process — human culture. This context puts a hard limit on the maximum potential usefulness of software, much like our environment puts a hard limit on how intelligent any individual can be — even if gifted with a superhuman brain. Beyond contextual hard limits, even if one part of a system has the ability to recursively self-improve, other parts of the system will inevitably start acting as bottlenecks. Antagonistic processes will arise in response to recursive self-improvement and squash it — in software, this would be resource consumption, feature creep, UX issues. When it comes to personal investing, your own rate of spending is one such antagonistic process — the more money you have, the more money you spend. When it comes to intelligence, inter-system communication arises as a brake on any improvement of underlying modules — a brain with smarter parts will have more trouble coordinating them; a society with smarter individuals will need to invest far more in networking and communication, etc. It is perhaps not a coincidence that very high-IQ people are more likely to suffer from certain mental illnesses. It is also perhaps not random happenstance that military empires of the past have ended up collapsing after surpassing a certain size. Exponential progress, meet exponential friction. One specific example that is worth paying attention to is that of scientific progress, because it is conceptually very close to intelligence itself — science, as a problem-solving system, is very close to being a runaway superhuman AI. Science is, of course, a recursively self-improving system, because scientific progress results in the development of tools that empower science — whether lab hardware (e.g. quantum physics led to lasers, which enabled a wealth of new quantum physics experiments), conceptual tools (e.g. a new theorem, a new theory), cognitive tools (e.g. mathematical notation), software tools, communications protocols that enable scientists to better collaborate (e.g. the Internet)... Yet, modern scientific progress is measurably linear. I wrote about this phenomenon at length in a 2012 essay titled “The Singularity is not coming”. We didn’t make greater progress in physics over the 1950–2000 period than we did over 1900–1950 — we did, arguably, about as well. Mathematics is not advancing significantly faster today than it did in 1920. Medical science has been making linear progress on essentially all of its metrics, for decades. 
And this is despite us investing exponential efforts into science — the headcount of researchers doubles roughly once every 15 to 20 years, and these researchers are using exponentially faster computers to improve their productivity. How come? What bottlenecks and adversarial counter-reactions are slowing down recursive self-improvement in science? So many, I can’t even count them. Here are a few. Importantly, every single one of them would also apply to recursively self-improving AIs. In practice, system bottlenecks, diminishing returns, and adversarial reactions end up squashing recursive self-improvement in all of the recursive processes that surround us. Self-improvement does indeed lead to progress, but that progress tends to be linear, or at best, sigmoidal. Your first “seed dollar” invested will not typically lead to a “wealth explosion”; instead, a balance between investment returns and growing spending will usually lead to a roughly linear growth of your savings over time. And that’s for a system that is orders of magnitude simpler than a self-improving mind. Likewise, the first superhuman AI will just be another step on a visibly linear ladder of progress that we started climbing long ago. The expansion of intelligence can only come from a co-evolution of brains (biological or digital), sensorimotor affordances, environment, and culture — not from merely tuning the gears of some brain in a jar, in isolation. Such a co-evolution has already been happening for eons, and will continue as intelligence moves to an increasingly digital substrate. No “intelligence explosion” will occur, as this process advances at a roughly linear pace. @fchollet, November 2017 Marketing footnote: my book Deep Learning with Python has just been released. If you have Python skills, and you want to understand what deep learning can and cannot do, and how to use it to solve difficult real-world problems, this book was written for you. " Max Pechyonkin,23K,8,https://medium.com/ai%C2%B3-theory-practice-business/understanding-hintons-capsule-networks-part-i-intuition-b4b559d1159b?source=tag_archive---------8----------------,Understanding Hinton’s Capsule Networks. Part I: Intuition.,"Part I: Intuition (you are reading it now) Part II: How Capsules Work Part III: Dynamic Routing Between Capsules Part IV: CapsNet Architecture Quick announcement about our new publication AI3. We are getting the best writers together to talk about the Theory, Practice, and Business of AI and machine learning. Follow it to stay up to date on the latest trends. Last week, Geoffrey Hinton and his team published two papers that introduced a completely new type of neural network based on so-called capsules. In addition to that, the team published an algorithm, called dynamic routing between capsules, that makes it possible to train such a network. For everyone in the deep learning community, this is huge news, and for several reasons. First of all, Hinton is one of the founders of deep learning and an inventor of numerous models and algorithms that are widely used today. Secondly, these papers introduce something completely new, and this is very exciting because it will most likely stimulate an additional wave of research and very cool applications. In this post, I will explain why this new architecture is so important, as well as the intuition behind it. In the following posts I will dive into technical details. 
However, before talking about capsules, we need to have a look at CNNs, which are the workhorse of today’s deep learning. CNNs (convolutional neural networks) are awesome. They are one of the reasons deep learning is so popular today. They can do amazing things that people used to think computers would not be capable of doing for a long, long time. Nonetheless, they have their limits and they have fundamental drawbacks. Let us consider a very simple and non-technical example. Imagine a face. What are the components? We have the face oval, two eyes, a nose and a mouth. For a CNN, the mere presence of these objects can be a very strong indicator that there is a face in the image. Orientational and relative spatial relationships between these components are not very important to a CNN. How do CNNs work? The main component of a CNN is a convolutional layer. Its job is to detect important features in the image pixels. Layers closer to the input will learn to detect simple features such as edges and color gradients, whereas higher layers will combine simple features into more complex features. Finally, dense layers at the top of the network will combine very high-level features and produce classification predictions. An important thing to understand is that higher-level features combine lower-level features as a weighted sum: activations of a preceding layer are multiplied by the following layer neuron’s weights and added, before being passed to an activation nonlinearity. Nowhere in this setup is there a pose (translational and rotational) relationship between the simpler features that make up a higher-level feature. The CNN approach to this issue is to use max pooling or successive convolutional layers that reduce the spatial size of the data flowing through the network and therefore increase the “field of view” of higher layers’ neurons, thus allowing them to detect higher-order features in a larger region of the input image. Max pooling is a crutch that made convolutional networks work surprisingly well, achieving superhuman performance in many areas. But do not be fooled by its performance: while CNNs work better than any model before them, max pooling is nonetheless losing valuable information. Hinton himself stated that the fact that max pooling is working so well is a big mistake and a disaster: Of course, you can do away with max pooling and still get good results with traditional CNNs, but they still do not solve the key problem: in the example above, the mere presence of 2 eyes, a mouth and a nose in a picture does not mean there is a face; we also need to know how these objects are oriented relative to each other. Computer graphics deals with constructing a visual image from some internal hierarchical representation of geometric data. Note that the structure of this representation needs to take into account relative positions of objects. That internal representation is stored in the computer’s memory as arrays of geometrical objects and matrices that represent the relative positions and orientations of these objects. Then, special software takes that representation and converts it into an image on the screen. This is called rendering. Inspired by this idea, Hinton argues that brains, in fact, do the opposite of rendering. He calls it inverse graphics: from the visual information received by the eyes, they deconstruct a hierarchical representation of the world around us and try to match it with already learned patterns and relationships stored in the brain. 
This is how recognition happens. And the key idea is that the representation of objects in the brain does not depend on the view angle. So at this point the question is: how do we model these hierarchical relationships inside of a neural network? The answer comes from computer graphics. In 3D graphics, relationships between 3D objects can be represented by a so-called pose, which is in essence translation plus rotation. Hinton argues that in order to correctly do classification and object recognition, it is important to preserve hierarchical pose relationships between object parts. This is the key intuition that will allow you to understand why capsule theory is so important. It incorporates relative relationships between objects, and it is represented numerically as a 4D pose matrix. When these relationships are built into the internal representation of data, it becomes very easy for a model to understand that the thing it sees is just another view of something it has seen before. Consider a set of photos of the Statue of Liberty taken from different angles. You can easily recognize each of them as the Statue of Liberty, even though the images show it from different angles. This is because the internal representation of the Statue of Liberty in your brain does not depend on the view angle. You have probably never seen those exact pictures of it, but you would still immediately know what it was. For a CNN, this task is really hard because it does not have this built-in understanding of 3D space, but for a CapsNet it is much easier because these relationships are explicitly modeled. The paper that uses this approach was able to cut the error rate by 45% as compared to the previous state of the art, which is a huge improvement. Another benefit of the capsule approach is that it is capable of learning to achieve state-of-the-art performance by using only a fraction of the data that a CNN would use (Hinton mentions this in his famous talk about what is wrong with CNNs). In this sense, the capsule theory is much closer to what the human brain does in practice. In order to learn to tell digits apart, the human brain needs to see only a couple dozen examples, hundreds at most. CNNs, on the other hand, need tens of thousands of examples to achieve very good performance, which seems like a brute-force approach that is clearly inferior to what we do with our brains. The idea is really simple; there is no way no one has come up with it before! And the truth is, Hinton has been thinking about this for decades. The reason why there were no publications is simply that there was no technical way to make it work before. One of the reasons is that computers were just not powerful enough in the pre-GPU era before around 2012. Another reason is that there was no algorithm that allowed one to implement and successfully train a capsule network (in the same fashion, the idea of artificial neurons has been around since the 1940s, but it was not until the mid-1980s that the backpropagation algorithm showed up and made it possible to successfully train deep networks). In the same fashion, the idea of capsules itself is not that new and Hinton has mentioned it before, but there was no algorithm up until now to make it work. This algorithm is called “dynamic routing between capsules”. This algorithm allows capsules to communicate with each other and create representations similar to scene graphs in computer graphics. Capsules introduce a new building block that can be used in deep learning to better model hierarchical relationships inside a neural network’s internal knowledge representation. 
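To make the pose idea concrete, here is a small numpy illustration of a pose as a homogeneous transformation matrix (rotation plus translation), and of how composing poses relates a part to the whole. This is plain geometry for intuition only, not capsule-network code, and the specific rotations and offsets are made up.

```python
# An illustration of the "pose" idea: a 4x4 homogeneous matrix encoding rotation
# plus translation. Composing the pose of a part relative to an object with the
# pose of the object relative to the viewer yields the part's pose for the viewer.
# This is plain geometry for intuition, not capsule-network code.
import numpy as np

def pose(rotation_deg, translation):
    """Build a 4x4 pose matrix: rotation about the z-axis plus a translation."""
    t = np.radians(rotation_deg)
    m = np.eye(4)
    m[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    m[:3, 3] = translation
    return m

face_in_view = pose(30, [2.0, 1.0, 0.0])   # where the face sits relative to the viewer
nose_in_face = pose(0, [0.0, -0.2, 0.1])   # where the nose sits relative to the face

nose_in_view = face_in_view @ nose_in_face # part pose = object pose composed with relative pose
print(nose_in_view.round(2))
```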
The intuition behind them is very simple and elegant. Hinton and his team proposed a way to train such a network made up of capsules and successfully trained it on a simple data set, achieving state-of-the-art performance. This is very encouraging. Nonetheless, there are challenges. Current implementations are much slower than other modern deep learning models. Time will show whether capsule networks can be trained quickly and efficiently. In addition, we need to see if they work well on more difficult data sets and in different domains. In any case, the capsule network is a very interesting and already working model which will definitely get developed further over time and contribute to the expansion of deep learning's application domain. This concludes part one of the series on capsule networks. In Part II, the more technical part, I will walk you through the CapsNet's internal workings step by step. You can follow me on Twitter. Let's also connect on LinkedIn. " Slav Ivanov,3.9K,17,https://blog.slavv.com/the-1700-great-deep-learning-box-assembly-setup-and-benchmarks-148c5ebe6415?source=tag_archive---------9----------------,"The $1700 great Deep Learning box: Assembly, setup and benchmarks","Updated April 2018: Uses CUDA 9, cuDNN 7 and Tensorflow 1.5. After years of using a thin client in the form of increasingly thinner MacBooks, I had gotten used to it. So when I got into Deep Learning (DL), I went straight for the (then brand-new) Amazon P2 cloud servers: no upfront cost, the ability to train many models simultaneously, and the general coolness of having a machine learning model out there slowly teaching itself. However, as time passed, the AWS bills steadily grew larger, even as I switched to 10x cheaper Spot instances. Also, I didn't find myself training more than one model at a time. Instead, I'd go to lunch/workout/etc. while the model was training, and come back later with a clear head to check on it. But eventually the model complexity grew and took longer to train. I'd often forget what I did differently on the model that had just completed its 2-day training. Nudged by the great experiences of the other folks on the Fast.AI Forum, I decided to settle down and get a dedicated DL box at home. The most important reason was saving time while prototyping models — if they trained faster, the feedback time would be shorter. Thus it would be easier for my brain to connect the dots between the assumptions I had for the model and its results. Then I wanted to save money — I was using Amazon Web Services (AWS), which offered P2 instances with Nvidia K80 GPUs. Lately, the AWS bills were around $60–70/month with a tendency to get larger. Also, it is expensive to store large datasets, like ImageNet. And lastly, I haven't had a desktop for over 10 years and wanted to see what has changed in the meantime (spoiler alert: mostly nothing). What follows are my choices, inner monologue, and gotchas: from choosing the components to benchmarking. A sensible budget for me would be about two years' worth of my current compute spending. At $70/month for AWS, this put it at around $1700 for the whole thing (a quick back-of-the-envelope check follows below). You can check out all the components used.
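As a quick back-of-the-envelope check of that budget (the dollar figures are the ones quoted above; the snippet itself is mine, not from the original post):

```python
# Rough break-even calculation for building a box vs. staying on AWS,
# using the figures quoted above ($70/month on AWS, ~$1700 for the box).
aws_monthly_cost = 70        # USD/month, approximate recent AWS spend
box_cost = 1700              # USD, target budget for the dedicated box

breakeven_months = box_cost / aws_monthly_cost
print(f"Break-even after ~{breakeven_months:.0f} months")   # ~24 months, i.e. about two years
```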
The PC Part Picker site is also really helpful in detecting if some of the components don't play well together. The GPU is the most crucial component in the box. It will train these deep networks fast, shortening the feedback cycle. Disclosure: The following are affiliate links, to help me pay for, well, more GPUs. The choice is between a few of Nvidia's cards: GTX 1070, GTX 1070 Ti, GTX 1080, GTX 1080 Ti and finally the Titan X. The prices might fluctuate, especially because some GPUs are great for cryptocurrency mining (wink, 1070, wink). On the performance side: the GTX 1080 Ti and Titan X are similar. Roughly speaking, the GTX 1080 is about 25% faster than the GTX 1070, and the GTX 1080 Ti is about 30% faster than the GTX 1080. The new GTX 1070 Ti is very close in performance to the GTX 1080. Tim Dettmers has a great article on picking a GPU for Deep Learning, which he regularly updates as new cards come on the market. Here are the things to consider when picking a GPU: Considering all of this, I picked the GTX 1080 Ti, mainly for the training speed boost. I plan to add a second 1080 Ti soonish. Even though the GPU is the MVP in deep learning, the CPU still matters. For example, data preparation is usually done on the CPU. The number of cores and threads per core is important if we want to parallelize all that data prep. To stay on budget, I picked a mid-range CPU, the Intel i5 7500. It's relatively cheap but good enough to not slow things down. Edit: As a few people have pointed out: “probably the biggest gotcha that is unique to DL/multi-GPU is to pay attention to the PCIe lanes supported by the CPU/motherboard” (by Andrej Karpathy). We want each GPU to have 16 PCIe lanes so it eats data as fast as possible (16 GB/s for PCIe 3.0). This means that for two cards we need 32 PCIe lanes. However, the CPU I picked has only 16 lanes, so 2 GPUs would run in 2x8 mode (instead of 2x16). This might be a bottleneck, leading to less than ideal utilization of the graphics cards. Thus a CPU with 40 lanes is recommended. Edit 2: However, Tim Dettmers points out that having 8 lanes per card should only decrease performance by “0–10%” for two GPUs. So currently, my recommendation is: go with 16 PCIe lanes per video card unless it gets too expensive for you. Otherwise, 8 lanes should do as well (a rough bandwidth calculation follows at the end of this section). A good option for a double-GPU machine would be an Intel Xeon processor like the E5–1620 v4 (40 PCIe lanes). Or, if you want to splurge, go for a higher-end processor like the desktop i7–6850K. Memory (RAM): It's nice to have a lot of memory if we are to be working with rather big datasets. I got 2 sticks of 16 GB, for a total of 32 GB of RAM, and plan to buy another 32 GB later. Following Jeremy Howard's advice, I got a fast SSD disk to keep my OS and current data on, and then a slow spinning HDD for those huge datasets (like ImageNet). SSD: I remember when I got my first Macbook Air years ago, how blown away I was by the SSD speed. To my delight, a new generation of SSD called NVMe has made its way to market in the meantime. A 480 GB MyDigitalSSD NVMe drive was a great deal. This baby copies files at gigabytes per second. HDD: 2 TB Seagate. While SSDs have been getting fast, HDDs have been getting cheap. To somebody who has used Macbooks with a 128 GB disk for the last 7 years, having this much space feels almost obscene.
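To put the x16-versus-x8 question in perspective, here is the rough bandwidth calculation promised above. The per-lane figure is the usual PCIe 3.0 number; the batch size and image dimensions are assumptions of mine, chosen only for illustration.

```python
# Rough PCIe transfer-time estimate for feeding a GPU (illustrative assumptions).
GB_PER_LANE = 0.985                      # ~0.985 GB/s usable per PCIe 3.0 lane, per direction

def transfer_time_ms(batch_bytes, lanes):
    bandwidth = lanes * GB_PER_LANE * 1e9          # bytes per second
    return 1000 * batch_bytes / bandwidth

# Assumed batch: 256 ImageNet-sized images, 224 x 224 x 3, float32.
batch_bytes = 256 * 224 * 224 * 3 * 4
for lanes in (16, 8):
    print(f"x{lanes}: ~{transfer_time_ms(batch_bytes, lanes):.1f} ms per batch")
# The x8 transfer takes twice as long, but both are small next to the time the
# GPU typically spends computing on such a batch, which is why the hit is modest.
```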
The one thing that I kept in mind when picking a motherboard was the ability to support two GTX 1080 Ti cards, both in the number of PCI Express lanes (the minimum is 2x8) and in the physical size of the 2 cards. Also, make sure it's compatible with the chosen CPU. An Asus TUF Z270 did it for me. The MSI X99A SLI PLUS should work great if you got an Intel Xeon CPU. Rule of thumb: the power supply should provide enough juice for the CPU and the GPUs, plus 100 watts extra (a short sanity check of this rule and of the totals follows at the end of this section). The Intel i5 7500 processor uses 65W, and the GPUs (1080 Ti) need 250W each, so I got a Deepcool 750W Gold PSU (currently unavailable, EVGA 750 GQ is similar). The “Gold” here refers to the power efficiency, i.e. how much of the power consumed is wasted as heat. The case should be the same form factor as the motherboard. Also, having enough LEDs to embarrass a Burner is a bonus. A friend recommended the Thermaltake N23 case, which I promptly got. No LEDs, sadly. Here is how much I spent on all the components (your costs may vary): $700 GTX 1080 Ti + $190 CPU + $230 RAM + $230 SSD + $66 HDD + $130 Motherboard + $75 PSU + $50 Case = $1671 total. Adding tax and fees, this nicely matches my preset budget of $1700. If you don't have much experience with hardware and fear you might break something, a professional assembly might be the best option. However, this was a great learning opportunity that I couldn't pass up (even though I've had my share of hardware-related horror stories). The first (and important) step is to read the installation manuals that came with each component. This was especially important for me, as I've only done this once or twice before, and I have just the right amount of inexperience to mess things up. Installing the CPU is done before installing the motherboard in the case. Next to the processor there is a lever that needs to be pulled up. The processor is then placed on the base (double-check the orientation). Finally, the lever comes down to fix the CPU in place. But I had quite a bit of difficulty doing this: once the CPU was in position, the lever wouldn't go down. I actually had a more hardware-capable friend walk me through the process over video. It turns out the amount of force required to get the lever locked down was more than what I was comfortable with. Next is fixing the fan on top of the CPU: the fan legs must be fully secured to the motherboard. Consider where the fan cable will go before installing. The processor I had came with thermal paste. If yours doesn't, make sure to put some paste between the CPU and the cooling unit. Also, replace the paste if you take off the fan. I put the Power Supply Unit (PSU) in before the motherboard to get the power cables snugly placed in the back side of the case. The motherboard itself was pretty straightforward — carefully place it and screw it in. A magnetic screwdriver was really helpful. Then connect the power cables and the case buttons and LEDs. The SSD: just slide it into the M2 slot and screw it in. Piece of cake. The memory proved quite hard to install, requiring too much effort to properly lock in. A few times I almost gave up, thinking I must be doing it wrong. Eventually one of the sticks clicked in and the other one promptly followed. At this point, I turned the computer on to make sure it worked. To my relief, it started right away! Finally, the GPU slid in effortlessly. 14 pins of power later and it was running. NB: Do not plug your monitor into the external card right away. Most probably it needs drivers to function (see below). Finally, it's complete! Now that we have the hardware in place, only the soft part remains.
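Before moving on to the software, here is the power-supply rule of thumb and the parts list written out as a tiny script. The wattages and prices are the ones quoted above; the script itself is just a sanity check, not part of the original build notes.

```python
# Sanity-checking the power-supply rule of thumb and the parts total quoted above.
cpu_watts = 65                   # Intel i5 7500 TDP
gpu_watts = 250                  # GTX 1080 Ti, per card
headroom = 100                   # "plus 100 watts extra"

def psu_needed(num_gpus):
    return cpu_watts + num_gpus * gpu_watts + headroom

print(psu_needed(1), psu_needed(2))   # 415 W for one GPU, 665 W for two -> a 750 W unit fits

parts = {"GPU": 700, "CPU": 190, "RAM": 230, "SSD": 230,
         "HDD": 66, "Motherboard": 130, "PSU": 75, "Case": 50}
print(sum(parts.values()))            # 1671, close to the $1700 budget once tax and fees are added
```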
Out with the screwdriver, in with the keyboard. Note on dual booting: if you plan to install Windows (because, you know, for benchmarks, totally not for gaming), it would be wise to do Windows first and Linux second. I didn't, and had to reinstall Ubuntu because Windows messed up the boot partition. Livewire has a detailed article on dual boot. Most DL frameworks are designed to work on Linux first and eventually support other operating systems. So I went for Ubuntu, my default Linux distribution. An old 2GB USB drive was lying around and worked great for the installation. UNetbootin (OSX) or Rufus (Windows) can prepare the Linux thumb drive. The default options worked fine during the Ubuntu install. At the time of writing, Ubuntu 17.04 had just been released, so I opted for the previous version (16.04), whose quirks are much better documented online. Ubuntu Server or Desktop: the Server and Desktop editions of Ubuntu are almost identical, with the notable exception of the visual interface (called X) not being installed with Server. I installed the Desktop and disabled autostarting X so that the computer would boot into terminal mode. If needed, one could launch the visual desktop later by typing startx. Let's get our install up to date. From Jeremy Howard's excellent install-gpu script: To deep learn on our machine, we need a stack of technologies to use our GPU: Download CUDA from Nvidia, or just run the code below: Updated to specify version 9 of CUDA. Thanks to @zhanwenchen for the tip. If you need to add later versions of CUDA, click here. After CUDA has been installed, the following code will add the CUDA installation to the PATH variable: Now we can verify that CUDA has been installed successfully by running This should have installed the display driver as well. For me, nvidia-smi showed ERR as the device name, so I installed the latest Nvidia drivers (as of May 2018) to fix it: Removing CUDA/Nvidia drivers: if at any point the drivers or CUDA seem broken (as they did for me — multiple times), it might be better to start over by running: Since version 1.5, Tensorflow supports cuDNN 7, so we install that. To download cuDNN, one needs to register for a (free) developer account. After downloading, install with the following: Anaconda is a great package manager for Python. I've moved to Python 3.6, so I will be using the Anaconda 3 version: The popular DL framework by Google. Installation: Validate the Tensorflow install: to make sure we have our stack running smoothly, I like to run the Tensorflow MNIST example: We should see the loss decreasing during training: Keras is a great high-level neural networks framework, an absolute pleasure to work with. Installation couldn't be easier: PyTorch is a newcomer in the world of DL frameworks, but its API is modeled on the successful Torch, which was written in Lua. PyTorch feels new and exciting, mostly great, although some things are still to be implemented. We install it by running: Jupyter is a web-based IDE for Python, which is ideal for data science tasks. It's installed with Anaconda, so we just configure and test it: Now if we open http://localhost:8888 we should see a Jupyter screen. Run Jupyter on boot: rather than running the notebook every time the computer is restarted, we can set it to autostart on boot. We will use crontab to do this, which we can edit by running crontab -e.
Then add the following after the last line in the crontab file: I use my trusty old Macbook Air for development, so I'd like to be able to log into the DL box both from my home network and when on the go. SSH key: it's way more secure to use an SSH key to log in instead of a password. Digital Ocean has a great guide on how to set this up. SSH tunnel: if you want to access your Jupyter notebook from another computer, the recommended way is to use SSH tunneling (instead of opening the notebook to the world and protecting it with a password). Let's see how we can do this: 2. Then to connect over the SSH tunnel, run the following script on the client: To test this, open a browser and try http://localhost:8888 from the remote machine. Your Jupyter notebook should appear. Set up out-of-network access: finally, to access the DL box from the outside world, we need 3 things: Setting up out-of-network access depends on the router/network setup, so I'm not going into details. Now that we have everything running smoothly, let's put it to the test. We'll be comparing the newly built box to an AWS P2.xlarge instance, which is what I've used so far for DL. The tests are computer vision related, meaning convolutional networks with a fully connected model thrown in. We time training models on: the AWS P2 instance GPU (K80), the AWS P2 virtual CPU, the GTX 1080 Ti and the Intel i5 7500 CPU. Andres Hernandez points out that my comparison does not use a build of Tensorflow that is optimized for these CPUs, which would have helped them perform better. Check his insightful comment for more details. The “Hello World” of computer vision: the MNIST database consists of 70,000 handwritten digits. We run the Keras example on MNIST, which uses a Multilayer Perceptron (MLP). MLP means that we are using only fully connected layers, not convolutions. The model is trained for 20 epochs on this dataset and achieves over 98% accuracy out of the box. We see that the GTX 1080 Ti is 2.4 times faster than the K80 on AWS P2 in training the model. This is rather surprising, as these 2 cards should have about the same performance. I believe this is because of the virtualization or underclocking of the K80 on AWS. The CPUs perform 9 times slower than the GPUs. As we will see later, it's a really good result for the processors. This is due to the small model, which fails to fully utilize the parallel processing power of the GPUs. Interestingly, the desktop Intel i5–7500 achieves a 2.3x speedup over the virtual CPU on Amazon. A VGG net will be finetuned for the Kaggle Dogs vs Cats competition, in which we need to tell apart pictures of dogs and cats. Running the model on CPUs for the same number of batches wasn't feasible. Therefore we finetune for 390 batches (1 epoch) on the GPUs and 10 batches on the CPUs. The code used is on github. The 1080 Ti is 5.5 times faster than the AWS GPU (K80). The difference in the CPUs' performance is about the same as in the previous experiment (the i5 is 2.6x faster). However, it's absolutely impractical to use CPUs for this task, as the CPUs were taking ~200x more time on this large model, which includes 16 convolutional layers and a couple of semi-wide (4096-unit) fully connected layers on top. A GAN (Generative Adversarial Network) is a way to train a model to generate images. A GAN achieves this by pitting two networks against each other: a Generator, which learns to create better and better images, and a Discriminator, which tries to tell which images are real and which are dreamt up by the Generator (a minimal sketch of this two-network setup follows below).
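Since this benchmark uses a PyTorch implementation, here is a minimal PyTorch sketch of the two-network setup just described, on a toy 1-D distribution instead of images. It is only meant to show the Generator/Discriminator interplay; it is not the benchmarked WGAN code.

```python
import torch
import torch.nn as nn

# Toy GAN: the Generator learns to turn noise into samples that look like they
# came from N(3, 1); the Discriminator learns to tell real samples from fakes.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0            # samples from the "real" distribution
    fake = G(torch.randn(64, 8))               # samples dreamt up by the Generator

    # Discriminator step: push real towards 1, fake towards 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the Discriminator into calling fakes real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())   # should drift towards ~3.0
```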
The Wasserstein GAN is an improvement over the original GAN. We will use a PyTorch implementation that is very similar to the one by the WGAN author. The models are trained for 50 steps, and the loss is all over the place, which is often the case with GANs. CPUs aren't considered. The GTX 1080 Ti finishes 5.5x faster than the AWS P2 K80, which is in line with the previous results. The final benchmark is on the original Style Transfer paper (Gatys et al.), implemented in Tensorflow (code available). Style Transfer is a technique that combines the style of one image (a painting, for example) with the content of another image. Check out my previous post for more details on how Style Transfer works. The GTX 1080 Ti outperforms the AWS K80 by a factor of 4.3. This time the CPUs are 30–50 times slower than the graphics cards. The slowdown is less than on the VGG finetuning task but more than on the MNIST perceptron experiment. The model uses mostly the earlier layers of the VGG network, and I suspect it was too shallow to fully utilize the GPUs. The DL box is in the next room and a large model is training on it. Was it a wise investment? Time will tell, but it is beautiful to watch the glowing LEDs in the dark and to hear its quiet hum as models try to squeeze out that extra accuracy percentage point. " Geoff Nesnow,14.9K,19,https://medium.com/@DonotInnovate/73-mind-blowing-implications-of-a-driverless-future-58d23d1f338d?source=tag_archive---------1----------------,73 Mind-Blowing Implications of a Driverless Future,"I originally wrote and published a version of this article in September 2016. Since then, quite a bit has happened, further cementing my view that these changes are coming and that the implications will be even more substantial. I decided it was time to update this article with some additional ideas and a few changes. As I write this, Uber has just announced that it ordered 24,000 self-driving Volvos. Tesla just released an electric, long-haul tractor trailer with extraordinary technical specs (range, performance) and self-driving capabilities (UPS just preordered 125!). And Tesla just announced what will probably be the quickest production car ever made — perhaps the fastest. It will go zero to sixty in about the time it takes you to read “zero to sixty”. And, of course, it will be able to drive itself. The future is quickly becoming now. Google just ordered thousands of Chryslers for its self-driving fleet (which are already on the roads in AZ). In September of 2016, Uber had just rolled out its first self-driving taxis in Pittsburgh, Tesla and Mercedes were rolling out limited self-driving capabilities, and cities around the world were negotiating with companies who want to bring self-driving cars and trucks to their streets. Since then, all of the major car companies have announced significant steps towards mostly or entirely electric vehicles, more investments have been made in autonomous vehicles, driverless trucks now seem to be leading rather than following in terms of the first large-scale implementations, and there have been a few more incidents (i.e. accidents). I believe that the timeframe for significant adoption of this technology has shrunk in the past year as the technology has gotten better faster and as the trucking industry has increased its level of interest and investment.
I believe that my daughter, who is now just over one year old, will never have to learn to drive or own a car. Driverless vehicles will have a profound impact on almost every part of our lives. Below are my updated thoughts about what a driverless future will be like. Some of these updates are from feedback to my original article (thanks to those who contributed!!!), some are based on technology advances in the past year and others are just my own speculations. What could happen when cars and trucks drive themselves? 1. People won't own their own cars. Transport will be delivered as a service from companies who own fleets of self-driving vehicles. There are so many technical, economic and safety advantages to transportation-as-a-service that this change may come much faster than most people expect. Owning a vehicle as an individual will become a novelty for collectors and maybe competitive racers. 2. Software/technology companies will own more of the world's economy as companies like Uber, Google and Amazon turn transportation into a pay-as-you-go service. Software will indeed eat this world. Over time, they'll own so much data about people, patterns, routes and obstacles that new entrants will have huge barriers to enter the market 3. Without government intervention (or some sort of organized movement), there will be a tremendous transfer of wealth to a very small number of people who own the software, battery/power manufacturing, vehicle servicing and charging/power generation/maintenance infrastructure. There will be massive consolidation of companies serving these markets as scale and efficiency become even more valuable. Cars (perhaps they'll be renamed with some sort-of-clever acronym) will become like the routers that run the Internet — most consumers won't know or care who made them or who owns them. 4. Vehicle designs will change radically — vehicles won't need to withstand crashes in the same way, and all vehicles will be electric (self-driving + software + service providers = all electric). They may look different, come in very different shapes and sizes, maybe attach to each other in some situations. There will likely be many significant innovations in materials used for vehicle construction — for example, tires and brakes will be re-optimized with very different assumptions, especially around variability of loads and much more controlled environments. The bodies will likely be primarily made of composites (like carbon fiber and fiberglass) and 3D printed. Electric vehicles with no driver controls will require 1/10th as many parts (perhaps even 1/100th) and thus will be quicker to produce and require much less labor. There may even be designs with almost no moving parts (other than wheels and motors, obviously). 5. Vehicles will mostly swap batteries rather than serve as the host of battery charging. Batteries will be charged in distributed and highly optimized centers — likely owned by the same company as the vehicles or another national vendor. There may be some entrepreneurial opportunity and a marketplace for battery charging and swapping, but this industry will likely be consolidated quickly. The batteries will be exchanged without human intervention — likely in a carwash-like drive thru 6. Vehicles (being electric) will be able to provide portable power for a variety of purposes (which will also be sold as a service) — construction job sites (why use generators), disaster/power failures, events, etc.
They may even temporarily or permanently replace power distribution networks (i.e. power lines) for remote locations — imagine a distributed power generation network with autonomous vehicles providing “last mile” services to some locations 7. Driver’s licenses will slowly go away as will the Department of Motor Vehicles in most states. Other forms of ID may emerge as people no longer carry driver’s licenses. This will probably correspond with the inevitable digitization of all personal identification — via prints, retina scans or other biometric scanning 8. There won’t be any parking lots or parking spaces on roads or in buildings. Garages will be repurposed — maybe as mini loading docks for people and deliveries. Aesthetics of homes and commercial buildings will change as parking lots and spaces go away. There will be a multi-year boom in landscaping and basement and garage conversions as these spaces become available 9. Traffic policing will become redundant. Police transport will also likely change quite a bit. Unmanned police vehicles may become more common and police officers may use commercial transportation to move around routinely. This may dramatically change the nature of policing, with newfound resources from the lack of traffic policing and dramatically less time spent moving around 10. There will be no more local mechanics, car dealers, consumer car washes, auto parts stores or gas stations. Towns that have been built around major thoroughfares will change or fade 11. The auto insurance industry as we know it will go away (as will the significant investing power of the major players of this industry). Most car companies will go out of business, as will most of their enormous supplier networks. There will be many fewer net vehicles on the road (maybe 1/10th, perhaps even less) that are also more durable, made of fewer parts and much more commoditized 12. Traffic lights and signs will become obsolete. Vehicles may not even have headlights as infrared and radar take the place of the human light spectrum. The relationship between pedestrians (and bicycles) and cars and trucks will likely change dramatically. Some will come in the form of cultural and behavioral changes as people travel in groups more regularly and walking or cycling becomes practical in places where it isn’t today 13. Multi-modal transportation will become a more integrated and normal part of our ways of moving around. In other words, we’ll often take one type of vehicle to another, especially when traveling longer distances. With coordination and integration, the elimination of parking and more deterministic patterns, it will become ever-more efficient to combine modes of transport 14. The power grid will change. Power stations via alternative power sources will become more competitive and local. Consumers and small businesses with solar panels, small scale tidal or wave power generators, windmills and other local power generation will be able to sell KiloWattHours to the companies who own the vehicles. This will change “net metering” rules and possibly upset the overall power delivery model. It might even be the beginning of truly distributed power creation and transport. There will likely be a significant boom in innovation in power production and delivery models. Over time, ownership of these services will probably be consolidated across a very small number of companies 15. 
Traditional petroleum products (and other fossil fuels) will become much less valuable as electric cars replace fuel powered vehicles and as alternative energy sources become more viable with portability of power (transmission and conversion eat tons of power). There are many geopolitical implications to this possible shift. As implications of climate change become ever-clearer and present, these trends will likely accelerate. Petroleum will continue to be valuable for making plastics and other derived materials, but will not be burned for energy at any scale. Many companies, oil-rich countries and investors have already begun accommodating for these changes 16. Entertainment funding will change as the auto industry’s ad spending goes away. Think about how many ads you see or hear about cars, car financing, car insurance, car accessories and car dealers. There are likely to be many other structural and cultural changes that come from the dramatic changes to the transportation industry. We’ll stop saying “shift into high gear” and other driving-related colloquialisms as the references will be lost on future generations 17. The recent corporate tax rate reductions in the “..Act to Provide for Reconciliation Pursuant to Titles II and V of the Concurrent Resolution on the Budget for Fiscal Year 2018” will accelerate investments in automation including self-driving vehicles and other forms of transportation automation. Flush with new cash and incentives to invest capital soon, many businesses will invest in technology and solutions that reduce their labor costs. 18. The car financing industry will go away, as will the newly huge derivative market for packaged sub-prime auto loans which will likely itself cause a version of the 2008–2009 financial crisis as it blows up. 19. Increases in unemployment, increased student loan, vehicle and other debt defaults could quickly spiral into a full depression. The world that emerges on the other side will likely have even more dramatic income and wealth stratification as entry level jobs related to transportation and the entire supply chain of the existing transportation system go away. The convergence of this with hyper-automation in production and service delivery (AI, robotics, low-cost computing, business consolidation, etc) may permanently change how societies are organized and how people spend their time 20. There will be many new innovations in luggage and bags as people no longer keep stuff in cars and loading and unloading packages from vehicles becomes much more automated. The traditional trunk size and shape will change. Trailers or other similar detachable devices will become much more commonplace to add storage space to vehicles. Many additional on demand services will become available as transportation for goods and services becomes more ubiquitous and cheaper. Imagine being able to design, 3D print and put on an outfit as you travel to a party or the office (if you’re still going to an office)... 21. Consumers will have more money as transportation (a major cost, especially for lower income people and families) gets much cheaper and ubiquitous — though this may be offset by dramatic reductions in employment as technology changes many times faster than people’s ability to adapt to new types of work 22. Demand for taxi and truck drivers will go down, eventually to zero. 
Someone born today might not understand what a truck driver is or even understand why someone would do that job — much like people born in the last 30 years don't understand how someone could be employed as a switchboard operator 23. The politics will get ugly as lobbyists for the auto and oil industries unsuccessfully try to stop the driverless car. They'll get even uglier as the federal government deals with assuming huge pension obligations and other legacy costs associated with the auto industry. My guess is that these pension obligations won't ultimately be honored and certain communities will be devastated. The same may be true of pollution clean-up efforts around the factories and chemical plants that were once major components of the vehicle supply chain 24. The new players in vehicle design and manufacturing will be a mix of companies like Uber, Google and Amazon and companies you don't yet know. There will probably be 2 or 3 major players who control >80% of the customer-facing transportation market. There may be API-like access to these networks for smaller players — much like app marketplaces for iPhone and Android. However, the majority of the revenue will flow to a few large players, as it does today to Apple and Google for smartphones 25. Supply chains will be disrupted as shipping changes. Algorithms will allow trucks to be fuller. Excess (latent) capacity will be priced cheaper. New middlemen and warehousing models will emerge. As shipping gets cheaper, faster and generally easier, retail storefronts will continue to lose footing in the marketplace. 26. The role of malls and other shopping areas will continue to shift — to be replaced by places people go for services, not products. There will be virtually no face-to-face purchases of physical goods. 27. Amazon and/or a few other large players will put Fedex, UPS and USPS out of business as their transportation network becomes orders of magnitude more cost-efficient than existing models — largely from a lack of legacy costs like pensions, higher union labor costs and regulations (especially USPS) that won't keep up with the pace of technology change. 3D printing will also contribute to this as many day-to-day products are printed at home rather than purchased. 28. The same vehicles will often transport people and goods as algorithms optimize all routes. And off-peak utilization will allow for other very inexpensive delivery options. In other words, packages will be increasingly delivered at night. Add autonomous drone aircraft to this mix and there'll be very little reason to believe that traditional carriers (Fedex, USPS, UPS, etc) will survive at all. 29. Roads will be much emptier and smaller (over time) as self-driving cars need much less space between them (a major cause of traffic today), people will share vehicles more than today (carpooling), traffic flow will be better regulated and algorithmic timing (i.e. leave at 10 versus 9:30) will optimize infrastructure utilization. Roads will also likely be smoother and turns optimally banked for passenger comfort. High-speed underground and above-ground tunnels (maybe integrating hyperloop technology or this novel magnetic track solution) will become the high-speed network for long-haul travel. 30. Short-hop domestic air travel may be largely displaced by multi-modal travel in autonomous vehicles. This may be countered by the advent of lower-cost, more automated air travel. This too may become part of integrated, multi-modal transportation. 31.
Roads will wear out much more slowly with fewer vehicle miles, lighter vehicles (with less safety requirements). New road materials will be developed that drain better, last longer and are more environmentally friendly. These materials might even be power generating (solar or reclamation from vehicle kinetic energy). At the extreme, they may even be replaced by radically different designs — tunnels, magnetic tracks, other hyper-optimized materials 32. Premium vehicle services will have more compartmentalized privacy, more comfort, good business features (quiet, wifi, bluetooth for each passenger, etc), massage services and beds for sleeping. They may also allow for meaningful in-transit real and virtual meetings. This will also likely include aromatherapy, many versions of in-vehicle entertainment systems and even virtual passengers to keep you company. 33. Exhilaration and emotion will almost entirely leave transportation. People won’t brag about how nice, fast, comfortable their cars are. Speed will be measured by times between end points, not acceleration, handling or top speed. 34. Cities will become much more dense as fewer roads and vehicles will be needed and transport will be cheaper and more available. The “walkable city” will continue to be more desirable as walking and biking become easier and more commonplace. When costs and timeframes of transit change, so will the dynamics of who lives and works where. 35. People will know when they leave, when they’ll get where they’re going. There will be few excuses for being late. We will be able to leave later and cram more into a day. We’ll also be able to better track kids, spouses, employees and so forth. We’ll be able to know exactly when someone will arrive and when someone needs to leave to be somewhere at a particular time. 36. There will be no more DUI/OUI offenses. Restaurants and bars will sell more alcohol. People will consume more as they no longer need to consider how to get home and will be able to consume inside vehicles 37. We’ll have less privacy as interior cameras and usage logs will track when and where we go and have gone. Exterior cameras will also probably record surroundings, including people. This may have a positive impact on crime, but will open up many complex privacy issues and likely many lawsuits. Some people may find clever ways to game the system — with physical and digital disguises and spoofing. 38. Many lawyers will lose sources of revenue — traffic offenses, crash litigation will reduce dramatically. Litigation will more likely be “big company versus big company” or “individuals against big companies”, not individuals against each other. These will settle more quickly with less variability. Lobbyists will probably succeed in changing the rules of litigation to favor the bigger companies, further reducing the legal revenue related to transportation. Forced arbitration and other similar clauses will become an explicit component of our contractual relationship with transportation providers. 39. Some countries will nationalize parts of their self-driving transportation networks which will result in lower costs, fewer disruptions and less innovation. 40. Cities, towns and police forces will lose revenue from traffic tickets, tolls (likely replaced, if not eliminated) and fuel tax revenues drop precipitously. These will probably be replaced by new taxes (probably on vehicle miles). 
These may become a major political hot-button issue differentiating parties as there will probably be a range of regressive versus progressive tax models. Most likely, this will be a highly regressive tax in the US, as fuel taxes are today. 41. Some employers and/or government programs will begin partially or entirely subsidizing transportation for employees and/or people who need the help. The tax treatment of this perk will also be very political. 42. Ambulance and other emergency vehicles will likely be used less and change in nature. More people will take regular autonomous vehicles instead of ambulances. Ambulances will transport people faster. Same may be true of military vehicles. 43. There will be significant innovations in first response capabilities as dependencies on people become reduced over time and as distributed staging of capacity becomes more common. 44. Airports will allow vehicles right into the terminals, maybe even onto the tarmac, as increased controls and security become possible. Terminal design may change dramatically as transportation to and from becomes normalized and integrated. The entire nature of air travel may change as integrated, multi-modal transport gets more sophisticated. Hyper-loops, high speed rail, automated aircraft and other forms of rapid travel will gain as traditional hub and spoke air travel on relatively large planes lose ground. 45. Innovative app-like marketplaces will open up for in-transit purchases, ranging from concierge services to food to exercise to merchandise to education to entertainment purchases. VR will likely play a large role in this. With integrated systems, VR (via headsets or screens or holograms) will become standard fare for trips more than a few minutes in duration. 46. Transportation will become more tightly integrated and packaged into many services — dinner includes the ride, hotel includes local transport, etc. This may even extend to apartments, short-term rentals (like AirBnB) and other service providers. 47. Local transport of nearly everything will become ubiquitous and cheap — food, everything in your local stores. Drones will likely be integrated into vehicle designs to deal with “last few feet” on pickup and delivery. This will accelerate the demise of traditional retail stores and their local economic impact. 48. Biking and walking will become easier, safer and more common as roads get safer and less congested, new pathways (reclaimed from roads/parking lots/roadside parking) come online and with cheap, reliable transport available as a backup. 49. More people will participate in vehicle racing (cars, off road, motorcycles) to replace their emotional connection to driving. Virtual racing experiences may also grow in popularity as fewer people have the real experience of driving. 50. Many, many fewer people will be injured or killed on roads, though we’ll expect zero and be disproportionately upset when accidents do happen. Hacking and non-malicious technical issues will replace traffic as the main cause of delays. Over time, resilience will increase in the systems. 51. Hacking of vehicles will be a serious issue. New software and communications companies and technologies will emerge to address these issues. We’ll see the first vehicle hacking and its consequences. Highly distributed computing, perhaps using some form of blockchain, will likely become part of the solution as a counterbalance to systemic catastrophes — such as many vehicles being affected simultaneously. 
There will probably be a debate about whether and how law enforcement can control, observe and restrict transportation. 52. Many roads and bridges will be privatized as a small number of companies control most transport and make deals with municipalities. Over time, government may entirely stop funding roads, bridges and tunnels. There will be a significant legislative push to privatize more and more of the transportation network. Much like Internet traffic, there will likely become tiers of prioritization and some notion of in-network versus out-of-network travel and tolls for interconnection. Regulators will have a tough time keeping up with these changes. Most of this will be transparent to end users, but will probably create enormous barriers to entry for transportation start-ups and ultimately reduce options for consumers. 53. Innovators will come along with many awesome uses for driveways and garages that no longer contain cars. 54. There will be a new network of clean, safe, pay-to-use restrooms and other services (food, drinks, etc) that become part of the value-add of competing service providers 55. Mobility for seniors and people with disabilities will be greatly improved (over time) 56. Parents will have more options to move around their kids on their own. Premium secure end-to-end children’s transport services will likely emerge. This may change many family relationships and increase the accessibility of services to parents and children. It may also further stratify the experiences of families with higher income and those with lower income. 57. Person to person movement of goods will become cheaper and open up new markets — think about borrowing a tool or buying something on Craigslist. Latent capacity will make transporting goods very inexpensive. This may also open up new opportunities for P2P services at a smaller scale — like preparing food or cleaning clothes. 58. People will be able to eat/drink in transit (like on a train or plane), consume more information (reading, podcasts, video, etc). This will open up time for other activities and perhaps increased productivity. 59. Some people may have their own “pods” to get into which will then be picked up by an autonomous vehicle, moved between vehicles automatically for logistic efficiencies. These may come in varieties of luxury and quality — the Louis Vuitton pod may replace the Louis Vuitton trunk as the mark of luxury travel 60. There will be no more getaway vehicles or police vehicle chases. 61. Vehicles will likely be filled to the brim with advertising of all sorts (much of which you could probably act on in-route), though there will probably be ways to pay more to have an ad free experience. This will include highly personalized en route advertising that is particularly relevant to who you are, where you’re going. 62. These innovations will make it to the developing world where congestion today is often remarkably bad and hugely costly. Pollution levels will come down dramatically. Even more people will move to the cities. Productivity levels will go up. Fortunes will be made as these changes happen. Some countries and cities will be transformed for the better. Some others will likely experience hyper-privatization, consolidation and monopoly-like controls. This may play out much like the roll-out of cell services in these countries — fast, consolidated and inexpensive. 63. Payment options will be greatly expanded, with packaged deals like cell phones, pre-paid models, pay-as-you-go models being offered. 
Digital currency transacted automatically via phones/devices will probably quickly replace traditional cash or credit card payments. 64. There will likely be some very clever innovations for movement of pets, equipment, luggage and other non-people items. Autonomous vehicles in the medium future (10–20 years) may have radically different designs that support carrying significantly more payload. 65. Some creative marketers will offer to partially or fully subsidize rides where customers deliver value — by taking surveys, by participating in virtual focus groups, by promoting their brand via social media, etc. 66. Sensors of all sorts will be embedded in vehicles that will have secondary uses — like improving weather forecasting, crime detection and prevention, finding fugitives, infrastructure conditions (such as potholes). This data will be monetized, likely by the companies who own the transportation services. 67. Companies like Google and Facebook will add to their databases everything about customer movements and locations. Unlike GPS chips that only tell them where someone is at the moment (and where they’ve been), autonomous vehicle systems will know where you’re going in real-time (and with whom). 68. Autonomous vehicles will create some new jobs and opportunities for entrepreneurs. However, these will be off-set many times by extraordinary job losses by nearly everyone in the transportation value chain today. In the autonomous future, a large number of jobs will go away. This includes drivers (which is in many states today the most common job), mechanics, gas station employees, most of the people who make cars and car parts or support those who do (due to huge consolidation of makers and supply chains and manufacturing automation), the marketing supply chain for vehicles, many people who work on and build roads/bridges, employees of vehicle insurance and financing companies (and their partners/suppliers), toll booth operators (most of whom have already been displaced), many employees of restaurants that support travelers, truck stops, retail workers and all the people whose businesses support these different types of companies and workers. 69. There will be some hardcore hold-outs who really like driving. But, over time, they’ll become a less statistically relevant voting group as younger people, who’ve never driven, will outnumber them. At first, this may be a 50 state regulated system — where driving yourself may actually become illegal in some states in the next 10 years while other states may continue to allow it for a long time. Some states will try, unsuccessfully, to block autonomous vehicles. 70. There will be lots of discussions about new types of economic systems — from universal basic income to new variations of socialism to a more regulated capitalist system — that will result from the enormous impacts of autonomous vehicles. 71. In the path to a truly driverless future, there will be a number of key tipping points. At the moment, freight delivery may push autonomous vehicle use sooner than people transport. Large trucking companies may have the financial means and legislative influence to make rapid, dramatic changes. They are also better positioned to support hybrid approaches where only parts of their fleet or parts of the routes are automated. 72. Autonomous vehicles will radically change the power centers of the world. They will be the beginning of the end of burning hydrocarbons. 
The powerful interests who control these industries today will fight viciously to stop this. There may even be wars to slow down this process as oil prices start to plummet and demand dries up. 73. Autonomous vehicles will continue to play a larger role in all aspects of war — from surveillance to troop/robot movement to logistics support to actual engagement. Drones will be complemented by additional on-the-ground, in-space, in-the-water and under-the-water autonomous vehicles. Note: My original article was inspired by a presentation by Ryan Chin, CEO of Optimus Ride, at an MIT event about autonomous vehicles. He really got me thinking about how profound these advances could be for our lives. I'm sure some of my thoughts above came from him. " Blaise Aguera y Arcas,8.7K,15,https://medium.com/@blaisea/do-algorithms-reveal-sexual-orientation-or-just-expose-our-stereotypes-d998fafdf477?source=tag_archive---------2----------------,Do algorithms reveal sexual orientation or just expose our stereotypes?,"by Blaise Agüera y Arcas, Alexander Todorov and Margaret Mitchell A study claiming that artificial intelligence can infer sexual orientation from facial images caused a media uproar in the fall of 2017. The Economist featured this work on the cover of its September 9th issue; on the other hand, two major LGBTQ organizations, The Human Rights Campaign and GLAAD, immediately labeled it “junk science”. Michal Kosinski, who co-authored the study with fellow researcher Yilun Wang, initially expressed surprise, calling the critiques “knee-jerk” reactions. However, he then proceeded to make even bolder claims: that such AI algorithms will soon be able to measure the intelligence, political orientation, and criminal inclinations of people from their facial images alone. Kosinski's controversial claims are nothing new. Last year, two computer scientists from China posted a non-peer-reviewed paper online in which they argued that their AI algorithm correctly categorizes “criminals” with nearly 90% accuracy from a government ID photo alone. Technology startups had also begun to crop up, claiming that they can profile people's character from their facial images. These developments had prompted the three of us to collaborate earlier in the year on a Medium essay, Physiognomy's New Clothes, to confront claims that AI face recognition reveals deep character traits. We described how the junk science of physiognomy has roots going back into antiquity, with practitioners in every era resurrecting beliefs based on prejudice using the new methodology of the age. In the 19th century this included anthropology and psychology; in the 20th, genetics and statistical analysis; and in the 21st, artificial intelligence. In late 2016, the paper motivating our physiognomy essay seemed well outside the mainstream in tech and academia, but as in other areas of discourse, what recently felt like a fringe position must now be addressed head on. Kosinski is a faculty member of Stanford's Graduate School of Business, and this new study has been accepted for publication in the respected Journal of Personality and Social Psychology. Much of the ensuing scrutiny has focused on ethics, implicitly assuming that the science is valid. We will focus on the science.
The authors trained and tested their “sexual orientation detector” using 35,326 images from public profiles on a US dating website. Composite images of the lesbian, gay, and straight men and women in the sample reveal a great deal about the information available to the algorithm: Clearly there are differences between these four composite faces. Wang and Kosinski assert that the key differences are in physiognomy, meaning that a sexual orientation tends to go along with a characteristic facial structure. However, we can immediately see that some of these differences are more superficial. For example, the “average” straight woman appears to wear eyeshadow, while the “average” lesbian does not. Glasses are clearly visible on the gay man, and to a lesser extent on the lesbian, while they seem absent in the heterosexual composites. Might it be the case that the algorithm’s ability to detect orientation has little to do with facial structure, but is due rather to patterns in grooming, presentation and lifestyle? We conducted a survey of 8,000 Americans using Amazon’s Mechanical Turk crowdsourcing platform to see if we could independently confirm these patterns, asking 77 yes/no questions such as “Do you wear eyeshadow?”, “Do you wear glasses?”, and “Do you have a beard?”, as well as questions about gender and sexual orientation. The results show that lesbians indeed use eyeshadow much less than straight women do, gay men and women do both wear glasses more, and young opposite-sex-attracted men are considerably more likely to have prominent facial hair than their gay or same-sex-attracted peers. Breaking down the answers by the age of the respondent can provide a richer and clearer view of the data than any single statistic. In the following figures, we show the proportion of women who answer “yes” to “Do you ever use makeup?” (top) and “Do you wear eyeshadow?” (bottom), averaged over 6-year age intervals: The blue curves represent strictly opposite-sex attracted women (a nearly identical set to those who answered “yes” to “Are you heterosexual or straight?”); the cyan curve represents women who answer “yes” to either or both of “Are you sexually attracted to women?” and “Are you romantically attracted to women?”; and the red curve represents women who answer “yes” to “Are you homosexual, gay or lesbian?”. [1] The shaded regions around each curve show 68% confidence intervals. [2] The patterns revealed here are intuitive; it won’t be breaking news to most that straight women tend to wear more makeup and eyeshadow than same-sex attracted and (even more so) lesbian-identifying women. On the other hand these curves also show us how often these stereotypes are violated. That same-sex attracted men of most ages wear glasses significantly more than exclusively opposite-sex attracted men do might be a bit less obvious, but this trend is equally clear: [3] A proponent of physiognomy might be tempted to guess that this is somehow related to differences in visual acuity between these populations of men. 
However, asking the question “Do you like how you look in glasses?” reveals that this is likely more of a stylistic choice: Same-sex attracted women also report wearing glasses more, as well as liking how they look in glasses more, across a range of ages: One can also see how opposite-sex attracted women under the age of 40 wear contact lenses significantly more than same-sex attracted women, despite reporting that they have a vision defect at roughly the same rate, further illustrating how the difference is driven by an aesthetic preference: [4] Similar analysis shows that young same-sex attracted men are much less likely to have hairy faces than opposite-sex attracted men (“serious facial hair” in our plots is defined as answering “yes” to having a goatee, beard, or moustache, but “no” to stubble). Overall, opposite-sex attracted men in our sample are 35% more likely to have serious facial hair than same-sex attracted men, and for men under the age of 31 (who are overrepresented on dating websites), this rises to 75%. Wang and Kosinski speculate in their paper that the faintness of the beard and moustache in their gay male composite might be connected with prenatal underexposure to androgens (male hormones), resulting in a feminizing effect, hence sparser facial hair. The fact that we see a cohort of same-sex attracted men in their 40s who have just as much facial hair as opposite-sex attracted men suggests a different story, in which fashion trends and cultural norms play the dominant role in choices about facial hair among men, not differing exposure to hormones early in development. The authors of the paper additionally note that the heterosexual male composite appears to have darker skin than the other three composites. Our survey confirms that opposite-sex attracted men consistently self-report having a tan face (“Yes” to “Is your face tan?”) slightly more often than same-sex attracted men: Once again Wang and Kosinski reach for a hormonal explanation, writing: “While the brightness of the facial image might be driven by many factors, previous research found that testosterone stimulates melanocyte structure and function leading to a darker skin”. However, a simpler answer is suggested by the responses to the question “Do you work outdoors?”: Overall, opposite-sex attracted men are 29% more likely to work outdoors, and among men under 31, this rises to 39%. Previous research has found that increased exposure to sunlight leads to darker skin! [5] None of these results prove that there is no physiological basis for sexual orientation; in fact ample evidence shows us that orientation runs much deeper than a choice or a “lifestyle”. In a critique aimed in part at fraudulent “conversion therapy” programs, United States Surgeon General David Satcher wrote in a 2001 report, “Sexual orientation is usually determined by adolescence, if not earlier [...], and there is no valid scientific evidence that sexual orientation can be changed”. It follows that if we dig deeply enough into human physiology and neuroscience we will eventually find reliable correlates and maybe even the origins of sexual orientation. In our survey we also find some evidence of outwardly visible correlates of orientation that are not cultural: perhaps most strikingly, very tall women are overrepresented among lesbian-identifying respondents. [6] However, while this is interesting, it’s very far from a good predictor of women’s sexual orientation. Makeup and eyeshadow do much better. 
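For readers who want to reproduce this kind of curve on their own survey data: each curve above is just the proportion of "yes" answers within 6-year age bins, with a one-standard-error (roughly 68%) binomial band. A sketch follows, where the DataFrame and its column names are assumptions of mine rather than the authors' actual data.

```python
import numpy as np
import pandas as pd

def yes_rate_by_age(df, question, age_col="age", bin_width=6):
    """Proportion answering 'yes' (answers coded 1/0) per age bin, with ~68% band."""
    bins = np.arange(18, 78 + bin_width, bin_width)
    grouped = df.groupby(pd.cut(df[age_col], bins))[question]
    p, n = grouped.mean(), grouped.count()
    se = np.sqrt(p * (1 - p) / n)          # one standard error ~= a 68% confidence band
    return pd.DataFrame({"p": p, "lo": p - se, "hi": p + se, "n": n})

# Hypothetical usage, e.g. comparing two groups on one of the yes/no questions:
# straight = yes_rate_by_age(df[df["group"] == "straight_women"], "wears_eyeshadow")
# lesbian  = yes_rate_by_age(df[df["group"] == "lesbian_women"], "wears_eyeshadow")
```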
The way Wang and Kosinski measure the efficacy of their “AI gaydar” is equivalent to choosing a straight and a gay or lesbian face image, both from data “held out” during the training process, and asking how often the algorithm correctly guesses which is which. 50% performance would be no better than random chance. For women, guessing that the taller of the two is the lesbian achieves only 51% accuracy — barely above random chance. This is because, despite the statistically meaningful overrepresentation of tall women among the lesbian population, the great majority of lesbians are not unusually tall. By contrast, the performance measures in the paper, 81% for gay men and 71% for lesbian women, seem impressive. [7] Consider, however, that we can achieve comparable results with trivial models based only on a handful of yes/no survey questions about presentation. For example, for pairs of women, one of whom is lesbian, the following not-exactly-superhuman algorithm is on average 63% accurate: if neither or both women wear eyeshadow, flip a coin; otherwise guess that the one who wears eyeshadow is straight, and the other lesbian. Adding six more yes/no questions about presentation (“Do you ever use makeup?”, “Do you have long hair?”, “Do you have short hair?”, “Do you ever use colored lipstick?”, “Do you like how you look in glasses?”, and “Do you work outdoors?”) as additional signals raises the performance to 70%. [8] Given how many more details about presentation are available in a face image, 71% performance no longer seems so impressive. Several studies, including a recent one in the Journal of Sex Research, have shown that human judges’ “gaydar” is no more reliable than a coin flip when the judgement is based on pictures taken under well-controlled conditions (head pose, lighting, glasses, makeup, etc.). It’s better than chance if these variables are not controlled for, because a person’s presentation — especially if that person is out — involves social signaling. We signal our orientation and many other kinds of status, presumably in order to attract the kind of attention we want and to fit in with people like us. [9] Wang and Kosinski argue against this interpretation on the grounds that their algorithm works on Facebook selfies of openly gay men as well as dating website selfies. The issue, however, is not whether the images come from a dating website or Facebook, but whether they are self-posted or taken under standardized conditions. Most people present themselves in ways that have been calibrated over many years of media consumption, observing others, looking in the mirror, and gauging social reactions. In one of the earliest “gaydar” studies using social media, participants could categorize gay men with about 58% accuracy; but when the researchers used Facebook images of gay and heterosexual men posted by their friends (still far from a perfect control), the accuracy dropped to 52%. If subtle biases in image quality, expression, and grooming can be picked up on by humans, these biases can also be detected by an AI algorithm. While Wang and Kosinski acknowledge grooming and style, they believe that the chief differences between their composite images relate to face shape, arguing that gay men’s faces are more “feminine” (narrower jaws, longer noses, larger foreheads) while lesbian faces are more “masculine” (larger jaws, shorter noses, smaller foreheads). 
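To make the pairwise evaluation and the eyeshadow rule described above concrete, here is a small simulation sketch. The respondents and their eyeshadow rates are synthetic, chosen only so that the gap roughly reproduces the 63% figure quoted above; this illustrates the heuristic, not the study's actual data or code:

import random

# Synthetic stand-ins for survey respondents; the rates below are invented so
# that the eyeshadow gap roughly matches the 63% pairwise accuracy quoted above.
def fake_woman(is_lesbian):
    eyeshadow_rate = 0.30 if is_lesbian else 0.56
    return {"lesbian": is_lesbian, "eyeshadow": random.random() < eyeshadow_rate}

def guess_which_is_lesbian(a, b):
    """If exactly one wears eyeshadow, guess she is the straight one; otherwise flip a coin."""
    if a["eyeshadow"] == b["eyeshadow"]:
        return random.choice([a, b])
    return a if b["eyeshadow"] else b

trials, correct = 100_000, 0
for _ in range(trials):
    lesbian, straight = fake_woman(True), fake_woman(False)
    if guess_which_is_lesbian(lesbian, straight) is lesbian:
        correct += 1

# Expected accuracy is 0.5 + (0.56 - 0.30) / 2 = 0.63 with these assumed rates.
print(f"pairwise accuracy: {correct / trials:.1%}")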
As with less facial hair on gay men and darker skin on straight men, they suggest that the mechanism is gender-atypical hormonal exposure during development. This echoes a widely discredited 19th century model of homosexuality, “sexual inversion”. More likely, heterosexual men tend to take selfies from slightly below, which will have the apparent effect of enlarging the chin, shortening the nose, shrinking the forehead, and attenuating the smile (see our selfies below). This view emphasizes dominance — or, perhaps more benignly, an expectation that the viewer will be shorter. On the other hand, as a wedding photographer notes in her blog, “when you shoot from above, your eyes look bigger, which is generally attractive — especially for women.” This may be a heteronormative assessment. When a face is photographed from below, the nostrils are prominent, while higher shooting angles de-emphasize and eventually conceal them altogether. Looking again at the composite images, we can see that the heterosexual male face has more pronounced dark spots corresponding to the nostrils than the gay male, while the opposite is true for the female faces. This is consistent with a pattern of heterosexual men on average shooting from below, heterosexual women from above as the wedding photographer suggests, and gay men and lesbian women from directly in front. A similar pattern is evident in the eyebrows: shooting from above makes them look more V-shaped, but their apparent shape becomes flatter, and eventually caret-shaped (^) as the camera is lowered. Shooting from below also makes the outer corners of the eyes appear lower. In short, the changes in the average positions of facial landmarks are consistent with what we would expect to see from differing selfie angles. The ambiguity between shooting angle and the real physical sizes of facial features is hard to fully disentangle from a two-dimensional image, both for a human viewer and for an algorithm. Although the authors are using face recognition technology designed to try to cancel out all effects of head pose, lighting, grooming, and other variables not intrinsic to the face, we can confirm that this doesn’t work perfectly; that’s why multiple distinct images of a person help when grouping photos by subject in Google Photos, and why a person may initially appear in more than one group. Tom White, a researcher at Victoria University in New Zealand, has experimented with the same facial recognition engine Kosinski and Wang use (VGG Face), and has found that its output varies systematically based on variables like smiling and head pose. When he trains a classifier based on VGG Face’s output to distinguish a happy expression from a neutral one, it gets the answer right 92% of the time — which is significant, given that the heterosexual female composite has a much more pronounced smile. Changes in head pose might be even more reliably detectable; for 576 test images, a classifier is able to pick out the ones facing to the right with 100% accuracy. In summary, we have shown how the obvious differences between lesbian or gay and straight faces in selfies relate to grooming, presentation, and lifestyle — that is, differences in culture, not in facial structure. These differences include: We’ve demonstrated that just a handful of yes/no questions about these variables can do nearly as good a job at guessing orientation as supposedly sophisticated facial recognition AI. 
Further, the current generation of facial recognition remains sensitive to head pose and facial expression. Therefore — at least at this point — it’s hard to credit the notion that this AI is in some way superhuman at “outing” us based on subtle but unalterable details of our facial structure. This doesn’t negate the privacy concerns the authors and various commentators have raised, but it emphasizes that such concerns relate less to AI per se than to mass surveillance, which is troubling regardless of the technologies used (even when, as in the days of the Stasi in East Germany, these were nothing but paper files and audiotapes). Like computers or the internal combustion engine, AI is a general-purpose technology that can be used to automate a great many tasks, including ones that should not be undertaken in the first place. We are hopeful about the confluence of new, powerful AI technologies with social science, but not because we believe in reviving the 19th century research program of inferring people’s inner character from their outer appearance. Rather, we believe AI is an essential tool for understanding patterns in human culture and behavior. It can expose stereotypes inherent in everyday language. It can reveal uncomfortable truths, as in Google’s work with the Geena Davis Institute, where our face gender classifier established that men are seen and heard nearly twice as often as women in Hollywood movies (yet female-led films outperform others at the box office!). Making social progress and holding ourselves to account is more difficult without such hard evidence, even when it only confirms our suspicions. Two of us (Margaret Mitchell and Blaise Agüera y Arcas) are research scientists specializing in machine learning and AI at Google; Agüera y Arcas leads a team that includes deep learning applied to face recognition, and powers face grouping in Google Photos. Alex Todorov is a professor in the Psychology Department at Princeton, where he directs the social perception lab. He is the author of Face Value: The Irresistible Influence of First Impressions. [1] This wording is based on several large national surveys, which we were able to use to sanity-check our numbers. About 6% of respondents identified as “homosexual, gay or lesbian” and 85% as “heterosexual”. About 4% (of all genders) were exclusively same-sex attracted. Of the men, 10% were either sexually or romantically same-sex attracted, and of the women, 20%. Just under 1% of respondents were trans, and about 2% identified with both or neither of the pronouns “she” and “he”. These numbers are broadly consistent with other surveys, especially when considered as a function of age. The Mechanical Turk population skews somewhat younger than the overall population of the US, and consistent with other studies, our data show that younger people are far more likely to identify non-heteronormatively. [2] These are wider for same-sex attracted and lesbian women because they are minority populations, resulting in a larger sampling error. The same holds for older people in our sample. [3] For the remainder of the plots we stick to opposite-sex attracted and same-sex attracted, as the counts are higher and the error bars therefore smaller; these categories are also somewhat less culturally freighted, since they rely on questions about attraction rather than identity. As with eyeshadow and makeup, the effects are similar and often even larger when comparing heterosexual-identifying with lesbian- or gay-identifying people. 
[4] Although we didn’t test this explicitly, slightly different rates of laser correction surgery seem a likely cause of the small but growing disparity between opposite-sex attracted and same-sex attracted women who answer “yes” to the vision defect questions as they age. [5] This finding may prompt the further question, “Why do more opposite-sex attracted men work outdoors?” This is not addressed by any of our survey questions, but hopefully the other evidence presented here will discourage an essentialist assumption such as “straight men are just more outdoorsy” without the evidence of a controlled study that can support the leap from correlation to cause. Such explanations are a form of logical fallacy sometimes called a just-so story: “an unverifiable narrative explanation for a cultural practice”. [6] Of the 253 lesbian-identified women in the sample, 5, or 2%, were over six feet, and 25, or 10%, were over 5’9”. Out of 3,333 heterosexual women (women who answered “yes” to “Are you heterosexual or straight?”), only 16, or 0.5%, were over six feet, and 152, or 5%, were over 5’9”. [7] They note that these figures rise to 91% for men and 83% for women if 5 images are considered. [8] These results are based on the simplest possible machine learning technique, a linear classifier. The classifier is trained on a randomly chosen 70% of the data, with the remaining 30% of the data held out for testing. Over 500 repetitions of this procedure, the accuracy is 69.53% ± 2.98%. With the same number of repetitions and holdout, basing the decision on height alone gives an accuracy of 51.08% ± 3.27%, and basing it on eyeshadow alone yields 62.96% ± 2.39%. [9] A longstanding body of work, e.g. Goffman’s The Presentation of Self in Everyday Life (1959) and Jones and Pittman’s Toward a General Theory of Strategic Self-Presentation (1982), delves more deeply into why we present ourselves the way we do, both for instrumental reasons (status, power, attraction) and because our presentation informs and is informed by how we conceive of our social selves. Blaise Aguera y Arcas leads Google’s AI group in Seattle. He founded Seadragon, and was one of the creators of Photosynth at Microsoft. " François Chollet,16.8K,17,https://medium.com/@francois.chollet/what-worries-me-about-ai-ed9df072b704?source=tag_archive---------6----------------,What worries me about AI – François Chollet – Medium,"Disclaimer: These are my own personal views. I do not speak for my employer. If you quote this article, please have the honesty to present these ideas as what they are: personal, speculative opinions, to be judged on their own merits. If you were around in the 1980s and 1990s, you may remember the now-extinct phenomenon of “computerphobia”. I have personally witnessed it a few times as late as the early 2000s — as personal computers were introduced into our lives, in our workplaces and homes, quite a few people would react with anxiety, fear, or even aggression. While some of us were fascinated by computers and awestruck by the potential we could glimpse in them, most people didn’t understand them. They felt alien, abstruse, and in many ways, threatening. People feared getting replaced by technology. Most of us react to technological shifts with unease at best, panic at worst. Maybe that is true of any change at all. But remarkably, most of what we worry about ends up never happening. 
Fast-forward a few years, and the computer-haters have learned to live with them and to use them for their own benefit. Computers did not replace us and trigger mass unemployment — and nowadays we couldn’t imagine life without our laptops, tablets, and smartphones. Threatening change has become comfortable status quo. But at the same time as our fears failed to materialize, computers and the internet have enabled threats that almost no one was warning us about in the 1980s and 1990s. Ubiquitous mass surveillance. Hackers going after our infrastructure or our personal data. Psychological alienation on social media. The loss of our patience and our ability to focus. The political or religious radicalization of easily-influenced minds online. Hostile foreign powers hijacking social networks to disrupt Western democracies. If most of our fears turn out to be irrational, inversely, most of the truly worrying developments that have happened in the past as a result of technological change stem from things that most people didn’t worry about until they were already there. A hundred years ago, we couldn’t really forecast that the transportation and manufacturing technologies we were developing would enable a new form of industrial warfare that would wipe out tens of millions in two World Wars. We didn’t recognize early on that the invention of the radio would enable a new form of mass propaganda that would facilitate the rise of fascism in Italy and Germany. The progress of theoretical physics in the 1920s and 1930s wasn’t accompanied by anxious press articles about how these developments would soon enable thermonuclear weapons that would place the world forever under the threat of imminent annihilation. And today, even as alarms have been sounding for decades about the most dire problem of our times, climate change, a large fraction (44%) of the American public still chooses to ignore it. As a civilization, we seem to be really bad at correctly identifying future threats and rightfully worrying about them, just as we seem to be extremely prone to panic due to irrational fears. Today, like many times in the past, we are faced with a new wave of radical change: cognitive automation, which could be broadly summed up under the keyword “AI”. And like many times in the past, we are worried that this new set of technologies will harm us — that AI will lead to mass unemployment, or that AI will gain an agency of its own, become superhuman, and choose to destroy us. But what if we’re worrying about the wrong thing, like we have almost every single time before? What if the real danger of AI was far removed from the “superintelligence” and “singularity” narratives that many are panicking about today? In this post, I’d like to raise awareness about what really worries me when it comes to AI: the highly effective, highly scalable manipulation of human behavior that AI enables, and its malicious use by corporations and governments. Of course, this is not the only tangible risk that arises from the development of cognitive technologies — there are many others, in particular issues related to the harmful biases of machine learning models. Other people are raising awareness of these problems far better than I could. I chose to write about mass population manipulation specifically because I see this risk as pressing and direly under-appreciated. This risk is already a reality today, and a number of long-term technological trends are going to considerably amplify it over the next few decades. 
As our lives become increasingly digitized, social media companies get ever greater visibility into our lives and minds. At the same time, they gain increasing access to behavioral control vectors — in particular via algorithmic newsfeeds, which control our information consumption. This casts human behavior as an optimization problem, as an AI problem: it becomes possible for social media companies to iteratively tune their control vectors in order to achieve specific behaviors, just like a game AI would iteratively refine its play strategy in order to beat a level, driven by score feedback. The only bottleneck to this process is the intelligence of the algorithm in the loop — and as it happens, the largest social network company is currently investing billions in fundamental AI research. Let me explain in detail. In the past 20 years, our private and public lives have moved online. We spend an ever greater fraction of each day staring at screens. Our world is moving to a state where most of what we do consists of digital information consumption, modification, or creation. A side effect of this long-term trend is that corporations and governments are now collecting staggering amounts of data about us, in particular through social network services. Who we communicate with. What we say. What content we’ve been consuming — images, movies, music, news. What mood we are in at specific times. Ultimately, almost everything we perceive and everything we do will end up recorded on some remote server. This data, in theory, allows the entities that collect it to build extremely accurate psychological profiles of both individuals and groups. Your opinions and behavior can be cross-correlated with that of thousands of similar people, achieving an uncanny understanding of what makes you tick — probably more predictive than what you yourself could achieve through mere introspection (for instance, Facebook “likes” enable algorithms to better assess your personality than your own friends could). This data makes it possible to predict a few days in advance when you will start a new relationship (and with whom), and when you will end your current one. Or who is at risk of suicide. Or which side you will ultimately vote for in an election, even while you’re still feeling undecided. And it’s not just individual-level profiling power — large groups can be even more predictable, as aggregating data points erases randomness and individual outliers. Passive data collection is not where it ends. Increasingly, social network services are in control of what information we consume. What we see in our newsfeeds has become algorithmically “curated”. Opaque social media algorithms get to decide, to an ever-increasing extent, which political articles we read, which movie trailers we see, who we keep in touch with, whose feedback we receive on the opinions we express. Integrated over many years of exposure, the algorithmic curation of the information we consume gives the algorithms in charge considerable power over our lives — over who we are, who we become. If Facebook gets to decide, over the span of many years, which news you will see (real or fake), whose political status updates you’ll see, and who will see yours, then Facebook is in effect in control of your worldview and your political beliefs. Facebook’s business lies in influencing people. That’s the service it sells to its customers — advertisers, including political advertisers. As such, Facebook has built a fine-tuned algorithmic engine that does just that. 
This engine isn’t merely capable of influencing your view of a brand or your next smart-speaker purchase. It can influence your mood, tuning the content it feeds you in order to make you angry or happy, at will. It may even be able to swing elections. In short, social network companies can simultaneously measure everything about us, and control the information we consume. And that’s an accelerating trend. When you have access to both perception and action, you’re looking at an AI problem. You can start establishing an optimization loop for human behavior, in which you observe the current state of your targets and keep tuning what information you feed them, until you start observing the opinions and behaviors you wanted to see. A large subset of the field of AI — in particular “reinforcement learning” — is about developing algorithms to solve such optimization problems as efficiently as possible, to close the loop and achieve full control of the target at hand — in this case, us. By moving our lives to the digital realm, we become vulnerable to that which rules it — AI algorithms. This is made all the easier by the fact that the human mind is highly vulnerable to simple patterns of social manipulation. Consider, for instance, the following vectors of attack: From an information security perspective, you would call these vulnerabilities: known exploits that can be used to take over a system. In the case of the human mind, these vulnerabilities never get patched; they are just the way we work. They’re in our DNA. The human mind is a static, vulnerable system that will come increasingly under attack from ever-smarter AI algorithms that will simultaneously have a complete view of everything we do and believe, and complete control of the information we consume. Remarkably, mass population manipulation — in particular political control — arising from placing AI algorithms in charge of our information diet does not necessarily require very advanced AI. You don’t need self-aware, superintelligent AI for this to be a dire threat — current technology may well suffice. Social network companies have been working on it for a few years, with significant results. And while they may only be trying to maximize “engagement” and to influence your purchase decisions, rather than to manipulate your view of the world, the tools they’ve developed are already being hijacked by hostile state actors for political purposes — as seen in the 2016 Brexit referendum or the 2016 US presidential election. This is already our reality. But if mass population manipulation is already possible today — in theory — why hasn’t the world been upended yet? In short, I think it’s because we’re really bad at AI. But that may be about to change. Until 2015, all ad targeting algorithms across the industry were running on mere logistic regression. In fact, that’s still true to a large extent today — only the biggest players have switched to more advanced models. Logistic regression, an algorithm that predates the computing era, is one of the most basic techniques you could use for personalization. It is the reason why so many of the ads you see online are desperately irrelevant. Likewise, the social media bots used by hostile state actors to sway public opinion have little to no AI in them. They’re all extremely primitive. For now. Machine learning and AI have been making fast progress in recent years, and that progress is only beginning to get deployed in targeting algorithms and social media bots. 
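The optimization loop described above can be illustrated with a deliberately toy epsilon-greedy bandit. This is a sketch of the general closed loop of action and feedback, not a description of any platform's actual system; the “user” here is just a random simulator with made-up engagement rates:

import random

CONTENT_TYPES = ["outrage", "cute_animals", "politics", "diy"]

# Toy stand-in for a user: each content type has a hidden engagement rate.
# These numbers are invented purely for illustration.
HIDDEN_ENGAGEMENT = {"outrage": 0.55, "cute_animals": 0.35, "politics": 0.45, "diy": 0.20}

def user_reacts(content):
    """Simulated 'perception' step: did the user engage with what was shown?"""
    return random.random() < HIDDEN_ENGAGEMENT[content]

# Epsilon-greedy loop: mostly exploit the best-known content, sometimes explore.
counts = {c: 0 for c in CONTENT_TYPES}
rewards = {c: 0 for c in CONTENT_TYPES}
epsilon = 0.1

for step in range(10_000):
    if random.random() < epsilon:
        choice = random.choice(CONTENT_TYPES)  # explore
    else:
        # exploit the content type with the best engagement estimate so far
        choice = max(CONTENT_TYPES, key=lambda c: rewards[c] / counts[c] if counts[c] else 0.0)
    counts[choice] += 1
    rewards[choice] += user_reacts(choice)  # 'action' followed by feedback

estimates = {c: round(rewards[c] / max(counts[c], 1), 3) for c in CONTENT_TYPES}
print("engagement estimates:", estimates)
print("the loop converges on:", max(counts, key=counts.get))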
Deep learning only started to make its way into newsfeeds and ad networks in 2016. Who knows what will be next. It is quite striking that Facebook has been investing enormous amounts in AI research and development, with the explicit goal of becoming a leader in the field. When your product is a social newsfeed, what use are you going to make of natural language processing and reinforcement learning? We’re looking at a company that builds fine-grained psychological profiles of almost two billion humans, that serves as a primary news source for many of them, that runs large-scale behavior manipulation experiments, and that aims at developing the best AI technology the world has ever seen. Personally, it scares me. And consider that Facebook may not even be the most worrying threat here. Ponder, for instance, China’s use of information control to enable unprecedented forms of totalitarianism, such as its “social credit system”. Many people like to pretend that large corporations are the all-powerful rulers of the modern world, but what power they hold is dwarfed by that of governments. If given algorithmic control over our minds, governments may well turn into far worse actors than corporations. Now, what can we do about it? How can we defend ourselves? As technologists, what can we do to avert the risk of mass manipulation via our social newsfeeds? Importantly, the existence of this threat doesn’t mean that all algorithmic curation is bad, or that all targeted advertising is bad. Far from it. Both of these can serve a valuable purpose. With the rise of the Internet and AI, placing algorithms in charge of our information diet isn’t just an inevitable trend — it’s a desirable one. As our lives become increasingly digital and connected, and as our world becomes increasingly information-intensive, we will need AI to serve as our interface to the world. In the long-run, education and self-development will be some of the most impactful applications of AI — and this will happen through dynamics that almost entirely mirror those of a nefarious AI-enabled newsfeed trying to manipulate you. Algorithmic information management has tremendous potential to help us, to empower individuals to realize more of their potential, and to help society better manage itself. The issue is not AI itself. The issue is control. Instead of letting newsfeed algorithms manipulate the user to achieve opaque goals, such as swaying their political opinions, or maximally wasting their time, we should put the user in charge of the goals that the algorithms optimize for. We are talking, after all, about your news, your worldview, your friends, your life — the impact that technology has on you should naturally be placed under your own control. Information management algorithms should not be a mysterious force inflicted on us to serve ends that run opposite to our own interests; instead, they should be a tool in our hand. A tool that we can use for our own purposes, say, for education and personal development instead of entertainment. Here’s an idea — any algorithmic newsfeed with significant adoption should: We should build AI to serve humans, not to manipulate them for profit or political gain. What if newsfeed algorithms didn’t operate like casino operators or propagandists? 
What if instead, they were closer to a mentor or a good librarian, someone who used their keen understanding of your psychology — and that of millions of other similar people — to recommend to you that next book that will most resonate with your objectives and make you grow? A sort of navigation tool for your life — an AI capable of guiding you through the optimal path in experience space to get where you want to go. Can you imagine looking at your own life through the lens of a system that has seen millions of lives unfold? Or writing a book together with a system that has read every book? Or conducting research in collaboration with a system that sees the full scope of current human knowledge? In products where you are fully in control of the AI that interacts with you, a more sophisticated algorithm, instead of being a threat, would be a net positive, letting you achieve your own goals more efficiently. In summary, our future is one where AI will be our interface to the world — a world made of digital information. This can equally lead to empowering individuals to gain greater control over their lives, or to a total loss of agency. Unfortunately, social media is currently engaged on the wrong road. But it’s still early enough that we can reverse course. As an industry, we need to develop product categories and markets where the incentives are aligned with placing the user in charge of the algorithms that affect them, instead of using AI to exploit the user’s mind for profit or political gain. We need to strive towards products that are the anti-Facebook. In the far future, such products will likely take the form of AI assistants. Digital mentors programmed to help you, that put you in control of the objectives they pursue in their interactions with you. And in the present, search engines could be seen as an early, more primitive example of an AI-driven information interface that serves users instead of seeking to hijack their mental space. Search is a tool that you deliberately use to reach specific goals, rather than a passive always-on feed that elects what to show you. You tell it what it should do for you. And instead of seeking to maximally waste your time, a search engine attempts to minimize the time it takes to go from question to answer, from problem to solution. You may be thinking, since a search engine is still an AI layer between us and the information we consume, could it bias its results to attempt to manipulate us? Yes, that risk is latent in every information-management algorithm. But in stark contrast with social networks, market incentives in this case are actually aligned with users’ needs, pushing search engines to be as relevant and objective as possible. If they fail to be maximally useful, there’s essentially no friction for users to move to a competing product. And importantly, a search engine would have a considerably smaller psychological attack surface than a social newsfeed. The threat we’ve profiled in this post requires most of the following to be present in a product: Most AI-driven information-management products don’t meet these requirements. Social networks, on the other hand, are a frightening combination of risk factors. As technologists, we should gravitate towards products that do not feature these characteristics, and push back against products that combine them all, if only because of their potential for dangerous misuse. Build search engines and digital assistants, not social newsfeeds. 
Make your recommendation engines transparent, configurable, and constructive, rather than slot machines that maximize “engagement” and wasted hours of human time. Invest your UI, UX, and AI expertise into building great configuration panels for your algorithm, to enable your users to use your product on their own terms. And importantly, we should educate users about these issues, so that they reject manipulative products, generating enough market pressure to align the incentives of the technology industry with those of consumers. Conclusion: the fork in the road ahead. One path leads to a place that really scares me. The other leads to a more humane future. There’s still time to take the better one. If you work on these technologies, keep this in mind. You may not have evil intentions. You may simply not care. You may simply value your RSUs more than our shared future. But whether or not you care, because you have a hand in shaping the infrastructure of the digital world, your choices affect us all. And you may eventually be held responsible for them. " Simon Greenman,10.2K,16,https://towardsdatascience.com/who-is-going-to-make-money-in-ai-part-i-77a2f30b8cef?source=tag_archive---------7----------------,Who Is Going To Make Money In AI? Part I – Towards Data Science,"We are in the midst of a gold rush in AI. But who will reap the economic benefits? The mass of startups who are all gold panning? The corporates who have massive gold mining operations? The technology giants who are supplying the picks and shovels? And which nations have the richest seams of gold? We are currently experiencing another gold rush in AI. Billions are being invested in AI startups across every imaginable industry and business function. Google, Amazon, Microsoft and IBM are in a heavyweight fight investing over $20 billion in AI in 2016. Corporates are scrambling to ensure they realise the productivity benefits of AI ahead of their competitors while looking over their shoulders at the startups. China is putting its considerable weight behind AI and the European Union is talking about a $22 billion AI investment as it fears losing ground to China and the US. AI is everywhere. From the 3.5 billion daily searches on Google to the new Apple iPhone X that uses facial recognition to Amazon Alexa that cutely answers our questions. Media headlines tout the stories of how AI is helping doctors diagnose diseases, banks better assess customer loan risks, farmers predict crop yields, marketers target and retain customers, and manufacturers improve quality control. And there are think tanks dedicated to studying the physical, cyber and political risks of AI. AI and machine learning will become ubiquitous and woven into the fabric of society. But as with any gold rush the question is who will find gold? Will it just be the brave, the few and the large? Or can the snappy upstarts grab their nuggets? Will those providing the picks and shovels make most of the money? And who will hit pay dirt? As I started thinking about who was going to make money in AI I ended up with seven questions. Who will make money across the (1) chip makers, (2) platform and infrastructure providers, (3) enabling models and algorithm providers, (4) enterprise solution providers, (5) industry vertical solution providers, (6) corporate users of AI and (7) nations? 
While there are many ways to skin the cat of the AI landscape, hopefully what follows provides a useful explanatory framework — a value chain of sorts. The companies noted are representative of larger players in each category but in no way is this list intended to be comprehensive or predictive. Even though the price of computational power has fallen exponentially, demand is rising even faster. AI and machine learning, with their massive datasets and trillions of vector and matrix calculations, have a ferocious and insatiable appetite. Bring on the chips. NVIDIA’s stock is up 1500% in the past two years benefiting from the fact that their graphical processing unit (GPU) chips that were historically used to render beautiful high speed flowing games graphics were perfect for machine learning. Google recently launched its second generation of Tensor Processing Units (TPUs). And Microsoft is building its own Brainwave AI machine learning chips. At the same time startups such as Graphcore, which has raised over $110M, are looking to enter the market. Incumbent chip providers such as IBM, Intel, Qualcomm and AMD are not standing still. Even Facebook is rumoured to be building a team to design its own AI chips. And the Chinese are emerging as serious chip players with Cambricon Technology announcing the first cloud AI chip this past week. What is clear is that the cost of designing and manufacturing chips then sustaining a position as a global chip leader is very high. It requires extremely deep pockets and a world class team of silicon and software engineers. This means that there will be very few new winners. Just like the gold rush days those that provide the cheapest and most widely used picks and shovels will make a lot of money. The AI race is now also taking place in the cloud. Amazon realised early that startups would much rather rent computers and software than buy them. And so it launched Amazon Web Services (AWS) in 2006. Today AI is demanding so much compute power that companies are increasingly turning to the cloud to rent hardware through Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) offerings. The fight is on among the tech giants. Microsoft is offering their hybrid public and private Azure cloud service that allegedly has over one million computers. And in the past few weeks they announced that their Brainwave hardware solutions dramatically accelerate machine learning, with their own Bing search engine performance improving by a factor of ten. Google is rushing to play catchup with its own Google Cloud offering. And we are seeing the Chinese Alibaba starting to take global share. Amazon — Microsoft — Google and IBM are going to continue to duke this one out. And watch out for the massively scaled cloud players from China. The big picks and shovels guys will win again. Today Google is the world’s largest AI company attracting the best AI minds, spending small country size GDP budgets on R&D, and sitting on the best datasets gleaned from the billions of users of their services. AI is powering Google’s search, autonomous vehicles, speech recognition, intelligent reasoning, massive search and even its own work on drug discovery and disease diagnosis. And the incredible AI machine learning software and algorithms that power all of Google’s AI activity — TensorFlow — are now being given away for free. Yes for free! TensorFlow is now an open source software project available to the world. And why are they doing this? 
As Jeff Dean, head of Google Brain, recently said there are 20 million organisations in the world that could benefit from machine learning today. If millions of companies use this best in class free AI software then they are likely to need lots of computing power. And who is better served to offer that? Well Google Cloud is of course optimised for TensorFlow and related AI services. And once you become reliant on their software and their cloud you become a very sticky customer for many years to come. No wonder it is a brutal race for global AI algorithm dominance with Amazon — Microsoft — IBM also offering their own cheap or free AI software services. We are also seeing a fight for not only machine learning algorithms but cognitive algorithms that offer services for conversational agents and bots, speech, natural language processing (NLP) and semantics, vision, and enhanced core algorithms. One startup in this increasingly contested space is Clarifai who provides advanced image recognition systems for businesses to detect near-duplicates and visual searches. It has raised nearly $40M over the past three years. The market for vision related algorithms and services is estimated to be a cumulative $8 billion in revenue between 2016 and 2025. The giants are not standing still. IBM, for example, is offering its Watson cognitive products and services. They have twenty or so APIs for chatbots, vision, speech, language, knowledge management and empathy that can be simply be plugged into corporate software to create AI enabled applications. Cognitive APIs are everywhere. KDnuggets lists here over 50 of the top cognitive services from the giants and startups. These services are being put into the cloud as AI as a Service (AIaaS) to make them more accessible. Just recently Microsoft’s CEO Satya Nadella claimed that a million developers are using their AI APIs, services and tools for building AI-powered apps and nearly 300,000 developers are using their tools for chatbots. I wouldn’t want to be a startup competing with these Goliaths. The winners in this space are likely to favour the heavyweights again. They can hire the best research and engineering talent, spend the most money, and have access to the largest datasets. To flourish startups are going to have to be really well funded, supported by leading researchers with a whole battery of IP patents and published papers, deep domain expertise, and have access to quality datasets. And they should have excellent navigational skills to sail ahead of the giants or sail different races. There will many startup casualties, but those that can scale will find themselves as global enterprises or quickly acquired by the heavyweights. And even if a startup has not found a path to commercialisation, then they could become acquihires (companies bought for their talent) if they are working on enabling AI algorithms with a strong research oriented team. We saw this in 2014 when DeepMind, a two year old London based company that developed unique reinforcement machine learning algorithms, was acquired by Google for $400M. Enterprise software has been dominated by giants such as Salesforce, IBM, Oracle and SAP. They all recognise that AI is a tool that needs to be integrated into their enterprise offerings. But many startups are rushing to become the next generation of enterprise services filling in gaps where the incumbents don’t currently tread or even attempting to disrupt them. 
We analysed over two hundred use cases in the enterprise space ranging from customer management to marketing to cybersecurity to intelligence to HR to the hot area of Cognitive Robotic Process Automation (RPA). The enterprise field is much more open than previous spaces with a veritable medley of startups providing point solutions for these use cases. Today there are over 200 AI powered companies just in the recruitment space, many of them AI startups. Cybersecurity leader DarkTrace and RPA leader UiPath have war chests in the hundreds of millions of dollars. The incumbents also want to make sure their ecosystems stay on the forefront and are investing in startups that enhance their offering. Salesforce has invested in Digital Genius, a customer management solution, and similarly in Unbabel, which offers enterprise translation services. Incumbents also often have more pressing problems. SAP, for example, is rushing to play catchup in offering a cloud solution, let alone catchup in AI. We are also seeing tools providers trying to simplify the tasks required to create, deploy and manage AI services in the enterprise. Machine learning training, for example, is a messy business where 80% of time can be spent on data wrangling. And an inordinate amount of time is spent on testing and tuning of what are called hyperparameters. Petuum, a tools provider based in Pittsburgh in the US, has raised over $100M to help accelerate and optimise the deployment of machine learning models. Many of these enterprise startup providers can have a healthy future if they quickly demonstrate that they are solving and scaling solutions to meet real world enterprise needs. But as always happens in software gold rushes there will be a handful of winners in each category. And those AI enterprise category winners are likely to be snapped up, along with the best in-class tool providers, by the giants if they look too threatening. AI is driving a race for the best vertical industry solutions. There is a wealth of new AI powered startups providing solutions to corporate use cases in the healthcare, financial services, agriculture, automotive, legal and industrial sectors. And many startups are taking the ambitious path to disrupt the incumbent corporate players by offering a service directly to the same customers. It is clear that many startups are providing valuable point solutions and can succeed if they have access to (1) large and proprietary data training sets, (2) domain knowledge that gives them deep insights into the opportunities within a sector, (3) a deep pool of talent around applied AI and (4) deep pockets of capital to fund rapid growth. Those startups that are doing well generally speak the corporate commercial language of customers, business efficiency and ROI in the form of well developed go-to-market plans. For example, ZestFinance has raised nearly $300M to help improve credit decision making that will provide fair and transparent credit to everyone. They claim they have the world’s best data scientists. But they would, wouldn’t they? For those startups that are looking to disrupt existing corporate players they need really deep pockets. For example, Affirm, which offers loans to consumers at the point of sale, has raised over $700M. These companies quickly need to create a defensible moat to ensure they remain competitive. This can come from data network effects where more data begets better AI based services and products, which get more revenue and customers, which in turn generate more data. And so the flywheel effect continues. 
And while corporates might look to new vendors in their industry for AI solutions that could enhance their top and bottom line, they are not going to sit back and let upstarts muscle in on their customers. And they are not going to sit still and let their corporate competitors gain the first advantage through AI. There is currently a massive race for corporate innovation. Large companies have their own venture groups investing in startups, running accelerators and building their own startups to ensure that they are leaders in AI driven innovation. Large corporates are in a strong position against the startups and smaller companies due to their data assets. Data is the fuel for AI and machine learning. Who is better placed to take advantage of AI than the insurance company that has reams of historic data on underwriting claims? The financial services company that knows everything about consumer financial product buying behaviour? Or the search company that sees more user searches for information than any other? Corporates large and small are well positioned to extract value from AI. In fact Gartner research projects that AI-derived business value will reach up to $3.9 trillion by 2022. There are hundreds if not thousands of valuable use cases that AI can address across organisations. Corporates can improve their customer experience, save costs, lower prices, drive revenues and sell better products and services powered by AI. AI will help the big get bigger often at the expense of smaller companies. But they will need to demonstrate strong visionary leadership, an ability to execute, and a tolerance for not always getting technology enabled projects right on the first try. Countries are also in a battle for AI supremacy. China has not been shy about its call to arms around AI. It is investing massively in growing technical talent and developing startups. Its more lax regulatory environment, especially in data privacy, helps China lead in AI sectors such as security and facial recognition. Just recently there was an example of Chinese police picking out one most wanted face in a crowd of 50,000 at a music concert. And SenseTime Group Ltd, which analyses faces and images on a massive scale, reported it raised $600M, becoming the most valuable global AI startup. The Chinese point out that their mobile market is 3x the size of the US and there are 50x more mobile payments taking place — this is a massive data advantage. The European focus on data privacy regulation could put them at a disadvantage in certain areas of AI even if the Union is talking about a $22B investment in AI. The UK, Germany, France and Japan have all made recent announcements about their nation state AI strategies. For example, President Macron said the French government will spend $1.85 billion over the next five years to support the AI ecosystem including the creation of large public datasets. Companies such as Google’s DeepMind and Samsung have committed to open new Paris labs and Fujitsu is expanding its Paris research centre. The British just announced a $1.4 billion push into AI including funding of 1000 AI PhDs. But while nations are investing in AI talent and the ecosystem, the question is who will really capture the value. Will France and the UK simply be subsidising PhDs who will be hired by Google? 
And while payroll and income taxes will be healthy on those six figure machine learning salaries, the bulk of the economic value created could be with this American company, its shareholders, and the smiling American Treasury. AI will increase productivity and wealth in companies and countries. But how will that wealth be distributed when the headlines suggest that 30 to 40% of our jobs will be taken by the machines? Economists can point to lessons from hundreds of years of increasing technology automation. Will there be net job creation or net job loss? The public debate often cites Geoffrey Hinton, the godfather of machine learning, who suggested radiologists will lose their jobs by the dozen as machines diagnose diseases from medical images. But then we can look to the Chinese who are using AI to assist radiologists in managing the overwhelming demand to review 1.4 billion CT scans annually for lung cancer. The result is not job losses but an expanded market with more efficient and accurate diagnosis. However there is likely to be a period of upheaval when much of the value will go to those few companies and countries that control AI technology and data. And lower skilled countries whose wealth depends on jobs that are targets of AI automation will likely suffer. AI will favour the large and the technologically skilled. In examining the landscape of AI it has become clear that we are now entering a truly golden era for AI. And there are a few key themes appearing as to where the economic value will migrate: In short it looks like the AI gold rush will favour the companies and countries with control and scale over the best AI tools and technology, the data, the best technical workers, the most customers and the strongest access to capital. Those with scale will capture the lion’s share of the economic value from AI. In some ways ‘plus ça change, plus c’est la même chose’ (the more things change, the more they stay the same). But there will also be large golden nuggets that will be found by a few choice brave startups. But like any gold rush, many startups will never hit pay dirt. And many individuals and societies will likely feel like they have not seen the benefits of the gold rush. This is the first part in a series of articles I intend to write on the topic of the economics of AI. I welcome your feedback. Written by Simon Greenman I am a lover of technology and how it can be applied in the business world. I run my own advisory firm Best Practice AI helping executives of enterprises and startups accelerate the adoption of ROI based AI applications. Please get in touch to discuss this. If you enjoyed this piece, I’d love it if you hit the clap button 👏 so others might stumble upon it. And please post your comments or you can email me directly or find me on LinkedIn or Twitter or follow me at Simon Greenman. AI guy. MapQuest guy. Grow, innovate and transform companies with tech. Start-up investor, mentor and geek. Sharing concepts, ideas, and codes. " Aman Agarwal,7K,24,https://medium.freecodecamp.org/explained-simply-how-an-ai-program-mastered-the-ancient-game-of-go-62b8940a9080?source=tag_archive---------8----------------,Explained Simply: How an AI program mastered the ancient game of Go,"This is about AlphaGo, Google DeepMind’s Go playing AI that shook the technology world in 2016 by defeating one of the best players in the world, Lee Sedol. 
Go is an ancient board game which has so many possible moves at each step that future positions are hard to predict — and therefore it requires strong intuition and abstract thinking to play. Because of this reason, it was believed that only humans could be good at playing Go. Most researchers thought that it would still take decades to build an AI which could think like that. In fact, I’m releasing this essay today because this week (March 8–15) marks the two-year anniversary of the AlphaGo vs Sedol match! But AlphaGo didn’t stop there. 8 months later, it played 60 professional games on a Go website under disguise as a player named “Master”, and won every single game, against dozens of world champions, of course without resting between games. Naturally this was a HUGE achievement in the field of AI and sparked worldwide discussions about whether we should be excited or worried about artificial intelligence. Today we are going to take the original research paper published by DeepMind in the Nature journal, and break it down paragraph-by-paragraph using simple English. After this essay, you’ll know very clearly what AlphaGo is, and how it works. I also hope that after reading this you will not believe all the news headlines made by journalists to scare you about AI, and instead feel excited about it. Worrying about the growing achievements of AI is like worrying about the growing abilities of Microsoft Powerpoint. Yes, it will get better with time with new features being added to it, but it can’t just uncontrollably grow into some kind of Hollywood monster. You DON’T need to know how to play Go to understand this paper. In fact, I myself have only read the first 3–4 lines in Wikipedia’s opening paragraph about it. Instead, surprisingly, I use some examples from basic Chess to explain the algorithms. You just have to know what a 2-player board game is, in which each player takes turns and there is one winner at the end. Beyond that you don’t need to know any physics or advanced math or anything. This will make it more approachable for people who only just now started learning about machine learning or neural networks. And especially for those who don’t use English as their first language (which can make it very difficult to read such papers). If you have NO prior knowledge of AI and neural networks, you can read the “Deep Learning” section of one of my previous essays here. After reading that, you’ll be able to get through this essay. If you want to get a shallow understanding of Reinforcement Learning too (optional reading), you can find it here. Here’s the original paper if you want to try reading it: As for me: Hi I’m Aman, an AI and autonomous robots engineer. I hope that my work will save you a lot of time and effort if you were to study this on your own. Do you speak Japanese? Ryohji Ikebe has kindly written a brief memo about this essay in Japanese, in a series of Tweets. As you know, the goal of this research was to train an AI program to play Go at the level of world-class professional human players. To understand this challenge, let me first talk about something similar done for Chess. In the early 1990s, IBM came out with the Deep Blue computer which defeated the great champion Gary Kasparov in Chess. (He’s also a very cool guy, make sure to read more about him later!) How did Deep Blue play? Well, it used a very brute force method. 
At each step of the game, it took a look at all the possible legal moves that could be played, and went ahead to explore each and every move to see what would happen. And it would keep exploring move after move for a while, forming a kind of HUGE decision tree of thousands of moves. And then it would come back along that tree, observing which moves seemed most likely to bring a good result. But, what do we mean by “good result”? Well, Deep Blue had many carefully designed chess strategies built into it by expert chess players to help it make better decisions — for example, how to decide whether to protect the king or get advantage somewhere else? They made a specific “evaluation algorithm” for this purpose, to compare how advantageous or disadvantageous different board positions are (IBM hard-coded expert chess strategies into this evaluation function). And finally it chooses a carefully calculated move. On the next turn, it basically goes through the whole thing again. As you can see, this means Deep Blue thought about millions of theoretical positions before playing each move. This was not so impressive in terms of the AI software of Deep Blue, but rather in the hardware — IBM claimed it to be one of the most powerful computers available in the market at that time. It could look at 200 million board positions per second. Now we come to Go. Just believe me that this game is much more open-ended, and if you tried the Deep Blue strategy on Go, you wouldn’t be able to play well. There would be SO MANY positions to look at at each step that it would simply be impractical for a computer to go through that hell. For example, at the opening move in Chess there are 20 possible moves. In Go the first player has 361 possible moves, and this scope of choices stays wide throughout the game. This is what they mean by “enormous search space.” Moreover, in Go, it’s not so easy to judge how advantageous or disadvantageous a particular board position is at any specific point in the game — you kinda have to play the whole game for a while before you can determine who is winning. But let’s say you magically had a way to do both of these. And that’s where deep learning comes in! So in this research, DeepMind used neural networks to do both of these tasks (if you haven’t read about them yet, here’s the link again). They trained a “policy neural network” to decide which are the most sensible moves in a particular board position (so it’s like following an intuitive strategy to pick moves from any position). And they trained a “value neural network” to estimate how advantageous a particular board arrangement is for the player (or in other words, how likely you are to win the game from this position). They trained these neural networks first with human game examples (your good old ordinary supervised learning). After this the AI was able to mimic human playing to a certain degree, so it acted like a weak human player. And then to train the networks even further, they made the AI play against itself millions of times (this is the “reinforcement learning” part). With this, the AI got better because it had more practice. With these two networks alone, DeepMind’s AI was able to play well against state-of-the-art Go playing programs that other researchers had built before. These other programs had used an already popular pre-existing game playing algorithm, called the “Monte Carlo Tree Search” (MCTS). More about this later. But guess what, we still haven’t talked about the real deal. 
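Before moving on, here is a toy sketch of what those two networks compute. It is nothing like the real AlphaGo architecture (which uses deep convolutional networks); these are just single random-weight layers over a flattened 19×19 board, meant only to make “policy” (a probability for each move) and “value” (a single win-probability number) concrete:

import numpy as np

BOARD_POINTS = 19 * 19  # a Go board flattened into a vector
rng = np.random.default_rng(0)

# Toy "policy network": board -> a probability for each of the 361 possible moves.
W_policy = rng.normal(size=(BOARD_POINTS, BOARD_POINTS)) * 0.01

# Toy "value network": board -> one number in (0, 1), the estimated chance of winning.
w_value = rng.normal(size=BOARD_POINTS) * 0.01

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def policy(board):
    """Suggests sensible moves: higher probability means a more promising move."""
    return softmax(board @ W_policy)

def value(board):
    """Estimates how likely the current player is to win from this position."""
    return 1.0 / (1.0 + np.exp(-(board @ w_value)))  # sigmoid squashes to (0, 1)

board = rng.choice([-1.0, 0.0, 1.0], size=BOARD_POINTS)  # -1 opponent stone, 0 empty, +1 our stone
move_probs = policy(board)
print("top 3 candidate moves:", np.argsort(move_probs)[-3:][::-1])
print("estimated win probability:", round(float(value(board)), 3))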
DeepMind's AI isn't just about the policy and value networks. It doesn't use these two networks as a replacement for the Monte Carlo Tree Search. Instead, it uses the neural networks to make the MCTS algorithm work better... and it got so much better that it reached superhuman levels. THIS improved variation of MCTS is "AlphaGo", the AI that beat Lee Sedol and went down in AI history as one of the greatest breakthroughs ever. So essentially, AlphaGo is simply an improved implementation of a very ordinary computer science algorithm. Do you understand now why AI in its current form is absolutely nothing to be scared of? Wow, we've spent a lot of time on the Abstract alone. Alright — to understand the paper from this point on, first we'll talk about a gaming strategy called the Monte Carlo Tree Search algorithm. For now, I'll just explain this algorithm at enough depth to make sense of this essay. But if you want to learn about it in depth, some smart people have also made excellent videos and blog posts on this: 1. a short video series from Udacity, 2. Jeff Bradberry's explanation of MCTS, and 3. an MCTS tutorial by Fullstack Academy. The following section is long, but easy to understand (I'll try my best) and VERY important, so stay with me! The rest of the essay will go much quicker. Let's talk about the first paragraph of the essay above. Remember what I said about Deep Blue making a huge tree of millions of board positions and moves at each step of the game? You had to do simulations and look at and compare each and every possible move. As I said before, that was a simple and very straightforward approach — if the average software engineer had to design a game-playing AI, and had all the strongest computers in the world, he or she would probably design a similar solution. But let's think about how humans themselves play chess. Let's say you're at a particular board position in the middle of the game. By the game rules, you can do a dozen different things — move this pawn here, move the queen two squares here or three squares there, and so on. But do you really make a list of all the possible moves you can make with all your pieces, and then select one move from this long list? No — you "intuitively" narrow down to a few key moves (let's say you come up with 3 sensible moves) that you think make sense, and then you wonder what will happen in the game if you chose one of these 3 moves. You might spend 15–20 seconds considering each of these 3 moves and their future — and note that during these 15 seconds you don't have to carefully plan out the future of each move; you can just "roll out" a few mental moves guided by your intuition without TOO much careful thought (well, a good player would think farther and more deeply than an average player). This is because you have limited time, and you can't accurately predict what your opponent will do at each step in that lovely future you're cooking up in your brain. So you'll just have to let your gut feeling guide you. I'll refer to this part of the thinking process as "rollout", so take note of it! So after "rolling out" your few sensible moves, you finally say screw it and just play the move you find best. Then the opponent makes a move. It might be a move you had already well anticipated, which means you are now pretty confident about what you need to do next. You don't have to spend too much time on the rollouts again. 
OR, it could be that your opponent hits you with a pretty cool move that you had not expected, so you have to be even more careful with your next move. This is how the game carries on, and as it gets closer and closer to the finishing point, it gets easier for you to predict the outcome of your moves — so your rollouts don't take as much time. The purpose of this long story is to describe what the MCTS algorithm does on a superficial level — it mimics the above thinking process by building a "search tree" of moves and positions every time. Again, for more details you should check out the links I mentioned earlier. The innovation here is that instead of going through all the possible moves at each position (which Deep Blue did), it intelligently selects a small set of sensible moves and explores those instead. To explore them, it "rolls out" the future of each of these moves and compares them based on their imagined outcomes. (Seriously — this is all I think you need to understand this essay.) Now — coming back to the screenshot from the paper. Go is a "perfect information game" (please read the definition in the link, don't worry it's not scary). And theoretically, for such games, no matter which particular position you are at in the game (even if you have just played 1–2 moves), it is possible to correctly guess who will win or lose (assuming that both players play "perfectly" from that point on). I have no idea who came up with this theory, but it is a fundamental assumption in this research project and it works. So that means, given a state of the game s, there is a function v*(s) which can predict the outcome, let's say the probability of you winning this game, from 0 to 1. They call it the "optimal value function". Because some board positions are more likely to result in you winning than other board positions, they can be considered more "valuable" than the others. Let me say it again: Value = Probability between 0 and 1 of you winning the game. But wait — say there was a girl named Foma sitting next to you while you play Chess, and she keeps telling you at each step if you're winning or losing. "You're winning... You're losing... Nope, still losing..." I think it wouldn't help you much in choosing which move you need to make. She would also be quite annoying. What would instead help you is if you drew the whole tree of all the possible moves you can make, and the states that those moves would lead to — and then Foma would tell you for the entire tree which states are winning states and which states are losing states. Then you can choose moves which will keep leading you to winning states. All of a sudden Foma is your partner in crime, not an annoying friend. Here, Foma behaves as your optimal value function v*(s). Earlier, it was believed that it's not possible to have an accurate value function like Foma for the game of Go, because the game has so much uncertainty. BUT — even if you had the wonderful Foma, this wonderland strategy of drawing out all the possible positions for Foma to evaluate will not work very well in the real world. In a game like Chess or Go, as we said before, if you try to imagine even 7–8 moves into the future, there can be so many possible positions that you don't have enough time to check all of them with Foma. So Foma is not enough. You need to narrow down the list of moves to a few sensible moves that you can roll out into the future. How will your program do that? Enter Lusha. 
Lusha is a skilled Chess player and enthusiast who has spent decades watching grandmasters play Chess against each other. She can look at your board position, look quickly at all the available moves you can make, and tell you how likely it would be that a Chess expert would make any of those moves if they were sitting at your table. So if you have 50 possible moves at a point, Lusha will tell you the probability that each move would be picked by an expert. Of course, a few sensible moves will have a much higher probability and other pointless moves will have very little probability. She is your policy function, p(a|s). For a given state s, she can give you probabilities for all the possible moves that an expert would make. Wow — you can take Lusha's help to guide you in how to select a few sensible moves, and Foma will tell you the likelihood of winning from each of those moves. You can choose the move that both Foma and Lusha approve. Or, if you want to be extra careful, you can roll out the moves selected by Lusha, have Foma evaluate them, pick a few of them to roll out further into the future, and keep letting Foma and Lusha help you predict VERY far into the game's future — much quicker and more efficient than going through all the moves at each step into the future. THIS is what they mean by "reducing the search space". Use a value function (Foma) to predict outcomes, and use a policy function (Lusha) to give you grandmaster probabilities to help narrow down the moves you roll out. These are called "Monte Carlo rollouts". Then while you backtrack from future to present, you can take average values of all the different moves you rolled out, and pick the most suitable action. So far, this has only worked on a weak amateur level in Go, because the policy functions and value functions that they used to guide these rollouts weren't that great. Phew. The first line is self-explanatory. In MCTS, you can start with an unskilled Foma and unskilled Lusha. The more you play, the better they get at predicting solid outcomes and moves. "Narrowing the search to a beam of high-probability actions" is just a sophisticated way of saying, "Lusha helps you narrow down the moves you need to roll out by assigning them the probabilities that an expert would play them". Prior work has used this technique to achieve strong amateur-level AI players, even with simple (or "shallow", as they call it) policy functions. Yeah, convolutional neural networks are great for image processing. And since a neural network takes a particular input and gives an output, it is essentially a function, right? So you can use a neural network as a complex function. You can just pass in an image of the board position and let the neural network figure out by itself what's going on. This means it's possible to create neural networks which will behave like VERY accurate policy and value functions. The rest is pretty self-explanatory. Here we discuss how Foma and Lusha were trained. To train the policy network (predicting, for a given position, which moves experts would pick), you simply take examples of human games and use them as data for good old supervised learning. And you want to train another, slightly different version of this policy network to use for rollouts; this one will be smaller and faster. Let's just say that since Lusha is so experienced, she takes some time to process each position. 
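Before we get to that faster network, here is a rough sketch of the rollout recipe we just walked through. Everything in it is a made-up stand-in (legal_moves, play, policy and value are hypothetical helpers, and a real implementation would also alternate between the two players); it only exists to show "narrow the moves with the policy, roll them out, score the endings with the value function, and average":

```python
import numpy as np

rng = np.random.default_rng(0)

def choose_move(state, legal_moves, play, policy, value,
                top_k=3, depth=8, n_rollouts=20):
    # legal_moves(state) -> list of moves; play(state, move) -> next state
    # policy(state, moves) -> expert-likeness of each move ("Lusha")
    # value(state) -> estimated chance of winning from a state ("Foma")
    # NOTE: purely illustrative; it ignores whose turn it is inside the rollout.
    moves = legal_moves(state)
    probs = np.asarray(policy(state, moves))
    candidates = [moves[i] for i in np.argsort(probs)[-top_k:]]  # keep a few sensible moves

    best, best_score = None, -np.inf
    for move in candidates:
        outcomes = []
        for _ in range(n_rollouts):
            s = play(state, move)
            for _ in range(depth):                 # a short, policy-guided look into the future
                ms = legal_moves(s)
                if not ms:
                    break
                ps = np.asarray(policy(s, ms))
                s = play(s, ms[rng.choice(len(ms), p=ps / ps.sum())])
            outcomes.append(value(s))              # how good does the end of the rollout look?
        score = float(np.mean(outcomes))           # average over the imagined futures
        if score > best_score:
            best, best_score = move, score
    return best
```

Notice how often the policy gets called inside those loops; that is exactly why Lusha's speed matters.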
She's good to start the narrowing-down process with, but if you try to make her repeat the process, she'll still take a little too much time. So you train a *faster policy network* for the rollout process (I'll call it... Lusha's younger brother Jerry? I know, I know, enough with these names). After that, once you've trained both the slow and fast policy networks enough using human player data, you can try letting Lusha play against herself on a Go board for a few days, and get more practice. This is the reinforcement learning part — making a better version of the policy network. Then, you train Foma for value prediction: determining the probability of you winning. You let the AI practice through playing against itself again and again in a simulated environment, observe the end result each time, and learn from its mistakes to get better and better. I won't go into details of how these networks are trained. You can read more technical details in the later section of the paper ('Methods') which I haven't covered here. In fact, the real purpose of this particular paper is not to show how they used reinforcement learning on these neural networks. One of DeepMind's previous papers, in which they taught an AI to play ATARI games, has already discussed some reinforcement learning techniques in depth (and I've already written an explanation of that paper here). For this paper, as I lightly mentioned in the Abstract and also underlined in the screenshot above, the biggest innovation was the fact that they used RL with neural networks to improve an already popular game-playing algorithm, MCTS. RL is a cool tool in a toolbox that they used to fine-tune the policy and value function neural networks after the regular supervised training. This research paper is about proving how versatile and excellent this tool is, not about teaching you how to use it. In television lingo, the Atari paper was an RL infomercial and this AlphaGo paper is a commercial. A quick note before you move on. Would you like to help me write more such essays explaining cool research papers? If you're serious, I'd be glad to work with you. Please leave a comment and I'll get in touch with you. So, the first step is training our policy NN (Lusha) to predict which moves are likely to be played by an expert. This NN's goal is to allow the AI to play similarly to an expert human. This is a convolutional neural network (as I mentioned before, it's a special kind of NN that is very useful in image processing) that takes in a simplified image of a board arrangement. "Rectifier nonlinearities" are layers that can be added to the network's architecture. They give it the ability to learn more complex things. If you've ever trained NNs before, you might have used the "ReLU" layer. That's what these are. The training data here was in the form of random pairs of board positions, with the labels being the actions chosen by humans when they were in those positions. Just regular supervised learning. Here they use "stochastic gradient ASCENT". Well, this is the optimization algorithm used together with backpropagation. Here, you're trying to maximise a reward function, and the reward function is just the probability of the action predicted by a human expert; you want to increase this probability. But hey — you don't really need to think too much about this. Normally you train the network so that it minimises a loss function, which is essentially the error/difference between the predicted outcome and the actual label. That is called gradient DESCENT. 
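To make the ascent-versus-descent point concrete, here is a tiny hand-rolled sketch with toy numbers (my own illustration, not DeepMind's code). It shows that a gradient ascent step on the log-probability of the expert's move is exactly the same update as a gradient descent step on the negative log-probability, i.e. the usual cross-entropy loss:

```python
import numpy as np

# A softmax "policy" over 3 possible moves, trained on a single expert example.
# Maximising log p(expert move) by ASCENT == minimising -log p(expert move) by DESCENT.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([0.2, -0.1, 0.05])  # the network's raw scores for 3 moves
expert_move = 2                        # the move the human actually played
lr = 0.1

p = softmax(logits)
grad_log_p = -p.copy()                 # gradient of log p[expert_move] w.r.t. the logits
grad_log_p[expert_move] += 1.0

ascent_step = logits + lr * grad_log_p         # maximise the reward (log-probability)
descent_step = logits - lr * (-grad_log_p)     # minimise the loss (-log-probability)
print(np.allclose(ascent_step, descent_step))  # True: same update, two descriptions
```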
In the actual implementation of this research paper, they have indeed used the regular gradient descent. You can easily find a loss function that behaves opposite to the reward function, such that minimising this loss will maximise the reward. The policy network has 13 layers, and is called the "SL policy" network (SL = supervised learning). The data came from a... I'll just say it's a popular website on which millions of people play Go. How well did this SL policy network perform? It was more accurate than what other researchers had achieved earlier. The rest of the paragraph is quite self-explanatory. As for the "rollout policy", you do remember from a few paragraphs ago how Lusha the SL policy network is slow, so it can't integrate well with the MCTS algorithm? And how we trained another, faster version of Lusha called Jerry, who was her younger brother? Well, this refers to Jerry right here. As you can see, Jerry is just half as accurate as Lusha BUT he's thousands of times faster! He will really help us get through rolled-out simulations of the future faster when we apply MCTS. For this next section, you don't *have* to know about Reinforcement Learning already, but then you'll have to assume that whatever I say works. If you really want to dig into details and make sure of everything, you might want to read a little about RL first. Once you have the SL network, trained in a supervised manner on the human moves data, as I said before you have to let her practice by herself and get better. That's what we're doing here. So you just take the SL policy network, save it in a file, and make another copy of it. Then you use reinforcement learning to fine-tune it. Here, you make the network play against itself and learn from the outcomes. But there's a problem in this training style. If you only forever practice against ONE opponent, and that opponent is also only practicing with you exclusively, there's not much new learning you can do. You'll just be training to practice how to beat THAT ONE player. This is, you guessed it, overfitting: your techniques play well against one opponent, but don't generalize well to other opponents. So how do you fix this? Well, every time you fine-tune a neural network, it becomes a slightly different kind of player. So you can save this version of the neural network in a list of "players", who all behave slightly differently, right? Great — now while training the neural network, you can randomly make it play against many different older and newer versions of the opponent, chosen from that list. They are versions of the same player, but they all play slightly differently. And the more you train, the MORE players you get to train even more with! Bingo! In this training, the only thing guiding the training process is the ultimate goal, i.e. winning or losing. You don't need to specially train the network to do things like capture more area on the board etc. You just give it all the possible legal moves it can choose from, and say, "you have to win". And this is why RL is so versatile; it can be used to train policy or value networks for any game, not just Go. Here, they tested how accurate this RL policy network was, just by itself, without any MCTS algorithm. As you would remember, this network can directly take a board position and decide how an expert would play it — so you can use it to single-handedly play games. Well, the result was that the RL fine-tuned network won against the SL network that was only trained on human moves. 
It also won against other strong Go-playing programs. I must note here that even before training this RL policy network, the SL policy network was already better than the state of the art — and now, it has improved even further! And we haven't even come to the other parts of the process, like the value network. Did you know that baby penguins can sneeze louder than a dog can bark? Actually that's not true, but I thought you'd like a little joke here to distract from the scary-looking equations above. Coming back to the essay: we're done training Lusha here. Now back to Foma — remember the "optimal value function" v*(s), which only tells you how likely you are to win in your current board position if both players play perfectly from that point on? So obviously, to train an NN to become our value function, we would need a perfect player... which we don't have. So we just use our strongest player, which happens to be our RL policy network. It takes the current board state s, and outputs the probability that you will win the game. You play a game and get to know the outcome (win or loss). Each of the game states acts as a data sample, and the outcome of that game acts as the label. So by playing a 50-move game, you have 50 data samples for value prediction. Lol, no. This approach is naive. You can't use all 50 moves from the game and add them to the dataset. The training data set had to be chosen carefully to avoid overfitting. Each move in the game is very similar to the next one, because you only move once and that gives you a new position, right? If you take the states at all 50 of those moves and add them to the training data with the same label, you basically have lots of "kinda duplicate" data, and that causes overfitting. To prevent this, you choose only very distinct-looking game states. So for example, instead of all 50 moves of a game, you only choose 5 of them and add them to the training set. DeepMind took 30 million positions from 30 million different games, to reduce any chance of there being duplicate data. And it worked! Now, something conceptual here: there are two ways to evaluate the value of a board position. One option is a magical optimal value function (like the one you trained above). The other option is to simply roll out into the future using your current policy (Lusha) and look at the final outcome of this rollout. Obviously, the real game would rarely go according to your plans. But DeepMind compared how both of these options do. You can also do a mixture of both of these options. We will learn about this "mixing parameter" a little bit later, so make a mental note of this concept! Well, your single neural network trying to approximate the optimal value function is EVEN BETTER than doing thousands of mental simulations using a rollout policy! Foma really kicked ass here. When they replaced the fast rollout policy with the twice-as-accurate (but slow) RL policy Lusha, and did thousands of simulations with that, it did better than Foma. But only slightly better, and too slowly. So Foma is the winner of this competition; she has proved that she can't be replaced. Now that we have trained the policy and value functions, we can combine them with MCTS and give birth to our former world champion, destroyer of grandmasters, the breakthrough of a generation, weighing two hundred and sixty-eight pounds, the one and only Alphaaaaa GO! 
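Before we see how everything gets combined with MCTS, one quick aside: the data-selection trick above is simple enough to sketch. This is not DeepMind's pipeline, just a toy illustration (the names are mine) of "keep only a couple of well-separated positions per self-play game, so the value network never sees 50 nearly identical boards with the same label":

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_value_data(games, positions_per_game=1):
    """games: list of (list_of_board_states, outcome) pairs, where outcome is 0 or 1.
    Returns (position, did-we-win) pairs, at most a few per game, chosen at random."""
    samples = []
    for states, outcome in games:
        picks = rng.choice(len(states), size=positions_per_game, replace=False)
        for i in picks:
            samples.append((states[i], outcome))   # one position paired with the game's result
    return samples
```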
In this section, ideally you should have a slightly deeper understanding of the inner workings of the MCTS algorithm, but what you have learned so far should be enough to give you a good feel for what's going on here. The only thing you should note is how we're using the policy probabilities and value estimations. We combine them during rollouts to narrow down the number of moves we want to roll out at each step. Q(s,a) represents the value estimate of playing move a from position s, and u(s,a) is an exploration term based on a stored probability for that move. I'll explain. Remember that the policy network uses supervised learning to predict expert moves? And it doesn't just give you the most likely move, but rather gives you probabilities for each possible move that tell how likely each is to be an expert move. This probability can be stored for each of those actions. Here they call it the "prior probability", and they obviously use it while selecting which actions to explore. So basically, to decide whether or not to explore a particular move, you consider two things: First, by playing this move, how likely are you to win? Yes, we already have our "value network" to answer this first question. And the second question is, how likely is it that an expert would choose this move? (If a move is super unlikely to be chosen by an expert, why even waste time considering it? This we get from the policy network.) Then let's talk about the "mixing parameter" (see, we came back to it!). As discussed earlier, to evaluate positions, you have two options: one, simply use the value network you have been using to evaluate states all along. And two, you can try to quickly play a rollout game with your current strategy (assuming the other player will play similarly), and see if you win or lose. We saw how the value function was better than doing rollouts in general. Here they combine both. You try giving each prediction 50–50 importance, or 40–60, or 0–100, and so on. If you attach a weight of X% to the first, you'll have to attach 100-X to the second. That's what this mixing parameter means. You'll see these trial-and-error results later in the paper. After each rollout, you update your search tree with whatever information you gained during the simulation, so that your next simulation is more intelligent. And at the end of all simulations, you just pick the best move. Interesting insight here! Remember how the RL fine-tuned policy NN was better than just the SL human-trained policy NN? But when you put them within the MCTS algorithm of AlphaGo, using the human-trained NN proved to be a better choice than the fine-tuned NN. But in the case of the value function (which, you would remember, uses a strong player to approximate a perfect player), training Foma using the RL policy works better than training her with the SL policy. "Doing all this evaluation takes a lot of computing power. We really had to bring out the big guns to be able to run these damn programs." Self-explanatory. "LOL, our program literally blew the pants off of every other program that came before us." This goes back to that "mixing parameter" again. While evaluating positions, giving equal importance to both the value function and the rollouts performed better than just using one of them. The rest is self-explanatory, and reveals an interesting insight! Self-explanatory. Self-explanatory. But read that red underlined sentence again. I hope you can see clearly now that this line right here is pretty much the summary of what this whole research project was all about. Concluding paragraph. 
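The two ingredients described above, a selection score that adds an exploration bonus u(s,a) on top of Q(s,a), and a leaf evaluation that mixes the value network with a rollout, fit in a few lines. Here is a simplified, hedged sketch (the constant c and the function names are my own, and the paper's actual formulas carry more bookkeeping around visit counts):

```python
import numpy as np

def selection_score(Q, prior, visit_count, parent_visits, c=1.0):
    # The bonus u is large for moves the policy network liked (high prior)
    # that we have not explored much yet (low visit_count).
    u = c * prior * np.sqrt(parent_visits) / (1 + visit_count)
    return Q + u          # pick the child with the highest Q(s,a) + u(s,a)

def leaf_value(value_net_estimate, rollout_outcome, mixing=0.5):
    # mixing = 0 -> trust only the value network, 1 -> trust only the rollout;
    # the paper's experiments found an even split works well.
    return (1 - mixing) * value_net_estimate + mixing * rollout_outcome
```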
“Let us brag a little more here because we deserve it!” :) Oh and if you’re a scientist or tech company, and need some help in explaining your science to non-technical people for marketing, PR or training etc, I can help you. Drop me a message on Twitter: @mngrwl From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Engineer, teacher, learner of foreign languages, lover of history, cinema and art. Our community publishes stories worth reading on development, design, and data science. " Lance Ulanoff,15.1K,5,https://medium.com/@LanceUlanoff/did-google-duplex-just-pass-the-turing-test-ffcfe6868b02?source=tag_archive---------9----------------,Did Google Duplex just pass the Turing Test? – Lance Ulanoff – Medium,"I think it was the first “Um.” That was the moment when I realized I was hearing something extraordinary: A computer carrying out a completely natural and very human-sounding conversation with a real person. And it wasn’t just a random talk. This conversation had a purpose, a destination: to make an appointment at a hair salon. The entity making the call and appointment was Google Assistant running Duplex, Google’s still experimental AI voice system and the venue was Google I/O, Google’s yearly developer conference, which this year focused heavily on the latest developments in AI, Machine- and Deep-Learning. Google CEO Sundar Pichai explained that what we were hearing was a real phone call made to a hair salon that didn’t know it was part of an experiment or that they were talking to a computer. He launched Duplex by asking Google Assistant to book a haircut appointment for Tuesday morning. The AI did the rest. Duplex made the call and, when someone at the salon picked up, the voice AI started the conversation with: “Hi, I’m calling to book a woman’s hair cut appointment for a client, um, I’m looking for something on May third?” When the attendant asked Duplex to give her one second, Duplex responded with: “Mmm-hmm.” The conversation continued as the salon representative presented various dates and times and the AI asked about other options. Eventually, the AI and the salon worker agreed on an appointment date and time. What I heard was so convincing I had trouble discerning who was the salon worker and who (what) was the Duplex AI. It was stunning and somewhat disconcerting. I liken it to the feeling you’d get if a store mannequin suddenly smiled at you. It was easily the most remarkable human-computer conversation I’d ever heard and the closest thing I’ve seen a voice AI passing the Turing Test, which is the AI threshold suggested by Computer Scientist Alan Turing in the 1950s. Turing posited that by 2000 computers would be able to fool humans into thinking they were conversing with other humans at least 30% of the time. He was right. In 2014, a chatbot named Eugene Goostman successfully impersonated a wise-ass 14-year old programmer during lengthy text-based chats with unsuspecting humans. Turing, however hadn’t necessarily considered voice-based systems and, for obvious reasons, talking computers are somewhat less adept at fooling humans. Spend a few minutes conversing with your voice assistant of choice and you’ll soon discover their limitations. Their speech can be stilted, pronunciations off and response times can be slow (especially if they’re trying to access a cloud-based server) and forget about conversations. 
Most can handle two consecutive queries at most and they virtually all require a trigger phrase like “Alexa” or “Hey Siri.” (Google is working on removing unnecessary “Okay Googles” in short back and forth convos with the digital assistant). Google Assistant running Duplex didn’t exhibit any of those short comings. It sounded like a young female assistant carefully scheduling her boss’s haircut. In addition to the natural cadence, Google added speech disfluencies (the verbal ticks, “ums,” “uhs,” and “mm-hmms”) and latency or pauses that naturally occur when people are speaking. The result is a perfectly human voice produced entirely by a computer. The second call demonstration, where a male-voiced Duplex tried to make restaurant reservations, was even more remarkable. The human call participant didn’t entirely understand Duplex’s verbal requests and then told Duplex that, for the number of people it wanted to bring to the restaurant, they didn’t need a reservation. Duplex handled all this without missing a beat. “The amazing thing is that the assistant can actually understand the nuances of conversation,” said Pichai during the keynote. That ability comes by way of neural network technology and intensive machine learning, For as accomplished as Duplex is in making hair appointments and restaurant reservations, it might stumble in deeper or more abstract conversations. In a blog post on Duplex development, Google engineers explained that they constrained Duplex’s training to “closed domains” or well-defined topics (like dinner reservations and hair appointments) This gave them the ability to perform intense exploration of the topics and focus training. Duplex was guided during training within the domain by “experienced operators” who could keep track of mistakes and worked with engineers to improve responses. In short, this means that while Duplex has your hair and dining-out options covered, it could stumble in movie reservations and negotiations with your cable provider. Even so, Duplex fooled two humans. I heard no hesitation or confusion. In the hair salon call, there was no indication that the salon worker thought something was amiss. She wanted to help this young woman make an appointment. What will she think when she learns she was duped by Duplex? Obviously, Duplex’s conversations were also short, each lasting less than a minute, putting them well-short of the Turing Test benchmark. I would’ve enjoyed hearing the conversations devolve as they extended a few minutes or more. I’m sure Duplex will soon tackle more domains and longer conversations, and it will someday pass the Turing Test. It’s only a matter of time before Duplex is handling other mundane or difficult calls for us, like calling our parents with our own voices (see Wavenet technology). Eventually, we’ll have our Duplex voices call each other, handling pleasantries and making plans, which Google Assistant can then drop in our Google Calendar. But that’s the future. For now, Duplex’s performance stands as a powerful proof of concept for our long-imagined future of conversational AI’s capable of helping, entertaining and engaging with us. It’s the first major step on the path to the AI depicted in the movie Her where Joaquin Phoenix starred as a man who falls in love with his chatty voice assistant played by the disembodied voice of Scarlett Johansson. So, no, Duplex didn’t pass the Turing test, but I do wonder what Alan Turing would think of it. 
From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Tech expert, journalist, social media commentator, amateur cartoonist and robotics fan. " Gant Laborde,1.3K,7,https://medium.freecodecamp.org/machine-learning-how-to-go-from-zero-to-hero-40e26f8aa6da?source=---------0----------------,Machine Learning: how to go from Zero to Hero – freeCodeCamp,"If your understanding of A.I. and Machine Learning is a big question mark, then this is the blog post for you. Here, I gradually increase your AwesomenessicityTM by gluing inspirational videos together with friendly text. Sit down and relax. These videos take time, and if they don’t inspire you to continue to the next section, fair enough. However, if you find yourself at the bottom of this article, you’ve earned your well-rounded knowledge and passion for this new world. Where you go from there is up to you. A.I. was always cool, from moving a paddle in Pong to lighting you up with combos in Street Fighter. A.I. has always revolved around a programmer’s functional guess at how something should behave. Fun, but programmers aren’t always gifted in programming A.I. as we often see. Just Google “epic game fails” to see glitches in A.I., physics, and sometimes even experienced human players. Regardless, A.I. has a new talent. You can teach a computer to play video games, understand language, and even how to identify people or things. This tip-of-the-iceberg new skill comes from an old concept that only recently got the processing power to exist outside of theory. I’m talking about Machine Learning. You don’t need to come up with advanced algorithms anymore. You just have to teach a computer to come up with its own advanced algorithm. So how does something like that even work? An algorithm isn’t really written as much as it is sort of... bred. I’m not using breeding as an analogy. Watch this short video, which gives excellent commentary and animations to the high-level concept of creating the A.I. Wow! Right? That’s a crazy process! Now how is it that we can’t even understand the algorithm when it’s done? One great visual was when the A.I. was written to beat Mario games. As a human, we all understand how to play a side-scroller, but identifying the predictive strategy of the resulting A.I. is insane. Impressed? There’s something amazing about this idea, right? The only problem is we don’t know Machine Learning, and we don’t know how to hook it up to video games. Fortunately for you, Elon Musk already provided a non-profit company to do the latter. Yes, in a dozen lines of code you can hook up any A.I. you want to countless games/tasks! I have two good answers on why you should care. Firstly, Machine Learning (ML) is making computers do things that we’ve never made computers do before. If you want to do something new, not just new to you, but to the world, you can do it with ML. Secondly, if you don’t influence the world, the world will influence you. Right now significant companies are investing in ML, and we’re already seeing it change the world. Thought-leaders are warning that we can’t let this new age of algorithms exist outside of the public eye. Imagine if a few corporate monoliths controlled the Internet. If we don’t take up arms, the science won’t be ours. I think Christian Heilmann said it best in his talk on ML. The concept is useful and cool. We understand it at a high level, but what the heck is actually happening? How does this work? 
If you want to jump straight in, I suggest you skip this section and move on to the next “How Do I Get Started” section. If you’re motivated to be a DOer in ML, you won’t need these videos. If you’re still trying to grasp how this could even be a thing, the following video is perfect for walking you through the logic, using the classic ML problem of handwriting. Pretty cool huh? That video shows that each layer gets simpler rather than more complicated. Like the function is chewing data into smaller pieces that end in an abstract concept. You can get your hands dirty in interacting with this process on this site (by Adam Harley). It’s cool watching data go through a trained model, but you can even watch your neural network get trained. One of the classic real-world examples of Machine Learning in action is the iris data set from 1936. In a presentation I attended by JavaFXpert’s overview on Machine Learning, I learned how you can use his tool to visualize the adjustment and back propagation of weights to neurons on a neural network. You get to watch it train the neural model! Even if you’re not a Java buff, the presentation Jim gives on all things Machine Learning is a pretty cool 1.5+ hour introduction into ML concepts, which includes more info on many of the examples above. These concepts are exciting! Are you ready to be the Einstein of this new era? Breakthroughs are happening every day, so get started now. There are tons of resources available. I’ll be recommending two approaches. In this approach, you’ll understand Machine Learning down to the algorithms and the math. I know this way sounds tough, but how cool would it be to really get into the details and code this stuff from scratch! If you want to be a force in ML, and hold your own in deep conversations, then this is the route for you. I recommend that you try out Brilliant.org’s app (always great for any science lover) and take the Artificial Neural Network course. This course has no time limits and helps you learn ML while killing time in line on your phone. This one costs money after Level 1. Combine the above with simultaneous enrollment in Andrew Ng’s Stanford course on “Machine Learning in 11 weeks”. This is the course that Jim Weaver recommended in his video above. I’ve also had this course independently suggested to me by Jen Looper. Everyone provides a caveat that this course is tough. For some of you that’s a show stopper, but for others, that’s why you’re going to put yourself through it and collect a certificate saying you did. This course is 100% free. You only have to pay for a certificate if you want one. With those two courses, you’ll have a LOT of work to do. Everyone should be impressed if you make it through because that’s not simple. But more so, if you do make it through, you’ll have a deep understanding of the implementation of Machine Learning that will catapult you into successfully applying it in new and world-changing ways. If you’re not interested in writing the algorithms, but you want to use them to create the next breathtaking website/app, you should jump into TensorFlow and the crash course. TensorFlow is the de facto open-source software library for machine learning. It can be used in countless ways and even with JavaScript. Here’s a crash course. Plenty more information on available courses and rankings can be found here. If taking a course is not your style, you’re still in luck. You don’t have to learn the nitty-gritty of ML in order to use it today. 
You can efficiently utilize ML as a service in many ways with tech giants who have trained models ready. I would still caution you that there’s no guarantee that your data is safe or even yours, but the offerings of services for ML are quite attractive! Using an ML service might be the best solution for you if you’re excited and able to upload your data to Amazon/Microsoft/Google. I like to think of these services as a gateway drug to advanced ML. Either way, it’s good to get started now. I have to say thank you to all the aforementioned people and videos. They were my inspiration to get started, and though I’m still a newb in the ML world, I’m happy to light the path for others as we embrace this awe-inspiring age we find ourselves in. It’s imperative to reach out and connect with people if you take up learning this craft. Without friendly faces, answers, and sounding boards, anything can be hard. Just being able to ask and get a response is a game changer. Add me, and add the people mentioned above. Friendly people with friendly advice helps! See? I hope this article has inspired you and those around you to learn ML! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Software Consultant, Adjunct Professor, Published Author, Award Winning Speaker, Mentor, Organizer and Immature Nerd :D — Lately full of React Native Tech Our community publishes stories worth reading on development, design, and data science. " Emmanuel Ameisen,935,11,https://blog.insightdatascience.com/reinforcement-learning-from-scratch-819b65f074d8?source=---------1----------------,Reinforcement Learning from scratch – Insight Data,"Want to learn about applied Artificial Intelligence from leading practitioners in Silicon Valley, New York, or Toronto? Learn more about the Insight Artificial Intelligence Fellows Program. Are you a company working in AI and would like to get involved in the Insight AI Fellows Program? Feel free to get in touch. Recently, I gave a talk at the O’Reilly AI conference in Beijing about some of the interesting lessons we’ve learned in the world of NLP. While there, I was lucky enough to attend a tutorial on Deep Reinforcement Learning (Deep RL) from scratch by Unity Technologies. I thought that the session, led by Arthur Juliani, was extremely informative and wanted to share some big takeaways below. In our conversations with companies, we’ve seen a rise of interesting Deep RL applications, tools and results. In parallel, the inner workings and applications of Deep RL, such as AlphaGo pictured above, can often seem esoteric and hard to understand. In this post, I will give an overview of core aspects of the field that can be understood by anyone. Many of the visuals are from the slides of the talk, and some are new. The explanations and opinions are mine. If anything is unclear, reach out to me here! Deep RL is a field that has seen vast amounts of research interest, including learning to play Atari games, beating pro players at Dota 2, and defeating Go champions. Contrary to many classical Deep Learning problems that often focus on perception (does this image contain a stop sign?), Deep RL adds the dimension of actions that influence the environment (what is the goal, and how do I get there?). In dialog systems for example, classical Deep Learning aims to learn the right response for a given query. On the other hand, Deep Reinforcement Learning focuses on the right sequences of sentences that will lead to a positive outcome, for example a happy customer. 
This makes Deep RL particularly attractive for tasks that require planning and adaptation, such as manufacturing or self-driving. However, industry applications have trailed behind the rapidly advancing results coming out of the research community. A major reason is that Deep RL often requires an agent to experiment millions of times before learning anything useful. The best way to do this rapidly is by using a simulation environment. This tutorial will be using Unity to create environments to train agents in. For this workshop led by Arthur Juliani and Leon Chen, their goal was to get every participants to successfully train multiple Deep RL algorithms in 4 hours. A tall order! Below, is a comprehensive overview of many of the main algorithms that power Deep RL today. For a more complete set of tutorials, Arthur Juliani wrote an 8-part series starting here. Deep RL can be used to best the top human players at Go, but to understand how that’s done, you first need to understand a few simple concepts, starting with much easier problems. 1/It all starts with slot machines Let’s imagine you are faced with 4 chests that you can pick from at each turn. Each of them have a different average payout, and your goal is to maximize the total payout you receive after a fixed number of turns. This is a classic problem called Multi-armed bandits and is where we will start. The crux of the problem is to balance exploration, which helps us learn about which states are good, and exploitation, where we now use what we know to pick the best slot machine. Here, we will utilize a value function that maps our actions to an estimated reward, called the Q function. First, we’ll initialize all Q values at equal values. Then, we’ll update the Q value of each action (picking each chest) based on how good the payout was after choosing this action. This allows us to learn a good value function. We will approximate our Q function using a neural network (starting with a very shallow one) that learns a probability distribution (by using a softmax) over the 4 potential chests. While the value function tells us how good we estimate each action to be, the policy is the function that determines which actions we end up taking. Intuitively, we might want to use a policy that picks the action with the highest Q value. This performs poorly in practice, as our Q estimates will be very wrong at the start before we gather enough experience through trial and error. This is why we need to add a mechanism to our policy to encourage exploration. One way to do that is to use epsilon greedy, which consists of taking a random action with probability epsilon. We start with epsilon being close to 1, always choosing random actions, and lower epsilon as we go along and learn more about which chests are good. Eventually, we learn which chests are best. In practice, we might want to take a more subtle approach than either taking the action we think is the best, or a random action. A popular method is Boltzmann Exploration, which adjust probabilities based on our current estimate of how good each chest is, adding in a randomness factor. 2/Adding different states The previous example was a world in which we were always in the same state, waiting to pick from the same 4 chests in front of us. Most real-word problems consist of many different states. That is what we will add to our environment next. Now, the background behind chests alternates between 3 colors at each turn, changing the average values of the chests. 
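Before we add that state, here is a minimal sketch of the single-state bandit loop described above (toy payout numbers of my own, not the workshop's Unity code): keep a running Q estimate per chest, act epsilon-greedily, and explore less as you learn more.

```python
import numpy as np

rng = np.random.default_rng(0)
true_payouts = np.array([1.0, 2.0, 0.5, 3.0])   # hidden average payout of each chest
Q = np.zeros(4)                                  # our estimates, all equal at the start
counts = np.zeros(4)
epsilon = 1.0

for step in range(1000):
    if rng.random() < epsilon:
        action = int(rng.integers(4))            # explore: pick a random chest
    else:
        action = int(np.argmax(Q))               # exploit: pick the best chest so far
    reward = rng.normal(loc=true_payouts[action])
    counts[action] += 1
    Q[action] += (reward - Q[action]) / counts[action]   # running-average update
    epsilon = max(0.05, epsilon * 0.995)         # explore less as we learn more

print(np.argmax(Q))   # with enough steps, this is usually chest 3, the best one
```

With that loop in mind, back to the version where the background colour changes which chest is best.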
This means we need to learn a Q function that depends not only on the action (the chest we pick), but the state (what the color of the background is). This version of the problem is called Contextual Multi-armed Bandits. Surprisingly, we can use the same approach as before. The only thing we need to add is an extra dense layer to our neural network, that will take in as input a vector representing the current state of the world. 3/Learning about the consequences of our actions There is another key factor that makes our current problem simpler than mosts. In most environments, such as in the maze depicted above, the actions that we take have an impact on the state of the world. If we move up on this grid, we might receive a reward or we might receive nothing, but the next turn we will be in a different state. This is where we finally introduce a need for planning. First, we will define our Q function as the immediate reward in our current state, plus the discounted reward we are expecting by taking all of our future actions. This solution works if our Q estimate of states is accurate, so how can we learn a good estimate? We will use a method called Temporal Difference (TD) learning to learn a good Q function. The idea is to only look at a limited number of steps in the future. TD(1) for example, only uses the next 2 states to evaluate the reward. Surprisingly, we can use TD(0), which looks at the current state, and our estimate of the reward the next turn, and get great results. The structure of the network is the same, but we need to go through one forward step before receiving the error. We then use this error to back propagate gradients, like in traditional Deep Learning, and update our value estimates. 3+/Introducing Monte Carlo Another method to estimate the eventual success of our actions is Monte Carlo Estimates. This consists of playing out the entire episode with our current policy until we reach an end (success by reaching a green block or failure by reaching a red block in the image above) and use that result to update our value estimates for each traversed state. This allows us to propagate values efficiently in one batch at the end of an episode, instead of every time we make a move. The cost is that we are introducing noise to our estimates, since we attribute very distant rewards to them. 4/The world is rarely discrete The previous methods were using neural networks to approximate our value estimates by mapping from a discrete number of states and actions to a value. In the maze for example, there were 49 states (squares) and 4 actions (move in each adjacent direction). In this environment, we are trying to learn how to balance a ball on a 2 dimensional paddle, by deciding at each time step whether we want to tilt the paddle left or right. Here, the state space becomes continuous (the angle of the paddle, and the position of the ball). The good news is, we can still use Neural Networks to approximate this function! A note about off-policy vs on-policy learning: The methods we used previously, are off-policy methods, meaning we can generate data with any strategy(using epsilon greedy for example) and learn from it. On-policy methods can only learn from actions that were taken following our policy (remember, a policy is the method we use to determine which actions to take). 
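As a quick aside before continuing with that distinction: the TD(0) update described a little earlier is short enough to write out in full. Here is a hedged, tabular sketch (a plain dictionary instead of a neural network; the function and argument names are mine):

```python
def td0_update(Q, state, action, reward, next_state_values, alpha=0.1, gamma=0.99):
    # Q: dict mapping (state, action) -> estimated value
    # next_state_values: our current estimates for the actions available in the next state
    target = reward + gamma * max(next_state_values)   # reward now + discounted estimate of the future
    td_error = target - Q.get((state, action), 0.0)    # the temporal-difference error
    Q[(state, action)] = Q.get((state, action), 0.0) + alpha * td_error
    return Q

# Usage: after moving "up" in the maze from square (2, 3) and receiving no reward,
# Q = td0_update(Q, state=(2, 3), action="up", reward=0.0,
#                next_state_values=[0.1, 0.4, 0.0, 0.2])
```

Back to the on-policy restriction.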
This constrains our learning process, as we have to have an exploration strategy that is built in to the policy itself, but allows us to tie results directly to our reasoning, and enables us to learn more efficiently. The approach we will use here is called Policy Gradients, and is an on-policy method. Previously, we were first learning a value function Q for each action in each state and then building a policy on top. In Vanilla Policy Gradient, we still use Monte Carlo Estimates, but we learn our policy directly through a loss function that increases the probability of choosing rewarding actions. Since we are learning on policy, we cannot use methods such as epsilon greedy (which includes random choices), to get our agent to explore the environment. The way that we encourage exploration is by using a method called entropy regularization, which pushes our probability estimates to be wider, and thus will encourage us to make riskier choices to explore the space. 4+/Leveraging deep learning for representations In practice, many state of the art RL methods require learning both a policy and value estimates. The way we do this with deep learning is by having both be two separate outputs of the same backbone neural network, which will make it easier for our neural network to learn good representations. One method to do this is Advantage Actor Critic (A2C). We learn our policy directly with policy gradients (defined above), and learn a value function using something called Advantage. Instead of updating our value function based on rewards, we update it based on our advantage, which measures how much better or worse an action was than our previous value function estimated it to be. This helps make learning more stable compared to simple Q Learning and Vanilla Policy Gradients. 5/Learning directly from the screen There is an additional advantage to using Deep Learning for these methods, which is that Deep Neural Networks excel at perceptive tasks. When a human plays a game, the information received is not a list of states, but an image (usually of a screen, or a board, or the surrounding environment). Image-based Learning combines a Convolutional Neural Network (CNN) with RL. In this environment, we pass in a raw image instead of features, and add a 2 layer CNN to our architecture without changing anything else! We can even inspect activations to see what the network picks up on to determine value, and policy. In the example below, we can see that the network uses the current score and distant obstacles to estimate the value of the current state, while focusing on nearby obstacles for determining actions. Neat! As a side note, while toying around with the provided implementation, I’ve found that visual learning is very sensitive to hyperparameters. Changing the discount rate slightly for example, completely prevented the neural network from learning even on a toy application. This is a widely known problem, but it is interesting to see it first hand. 6/Nuanced actions So far, we’ve played with environments with continuous and discrete state spaces. However, every environment we studied had a discrete action space: we could move in one of four directions, or tilt the paddle to the left or right. Ideally, for applications such as self-driving cars, we would like to learn continuous actions, such as turning the steering wheel between 0 and 360 degrees. In this environment called 3D ball world, we can choose to tilt the paddle to any value on each of its axes. 
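Before looking at how to handle that, here is a hedged numpy sketch of the policy-gradient loss with entropy regularization described above (shapes and names are my own, and a real implementation would compute this inside an autodiff framework so gradients can flow back into the network):

```python
import numpy as np

def policy_gradient_loss(action_probs, taken_actions, returns, entropy_coef=0.01):
    # action_probs: (batch, n_actions) probabilities output by the policy network
    # taken_actions: (batch,) indices of the actions we actually took
    # returns: (batch,) Monte Carlo returns (or advantages) observed after those actions
    chosen = action_probs[np.arange(len(taken_actions)), taken_actions]
    pg_term = -np.mean(np.log(chosen) * returns)      # push up the probability of rewarding actions
    entropy = -np.mean(np.sum(action_probs * np.log(action_probs), axis=1))
    return pg_term - entropy_coef * entropy           # the entropy bonus keeps the policy "wide"
```

Now, back to the 3D ball world, where the paddle can be tilted to any value on each axis.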
This gives us more control as to how we perform actions, but makes the action space much larger. We can approach this by approximating our potential choices with Gaussian distributions. We learn a probability distribution over potential actions by learning the mean and variance of a Gaussian distribution, and our policy we sample from that distribution. Simple, in theory :). 7/Next steps for the brave There are a few concepts that separate the algorithms described above from state of the art approaches. It’s interesting to see that conceptually, the best robotics and game-playing algorithms are not that far away from the ones we just explored: That’s it for this overview, I hope this has been informative and fun! If you are looking to dive deeper into the theory of RL, give Arthur’s posts a read, or diving deeper by following David Silver’s UCL course. If you are looking to learn more about the projects we do at Insight, or how we work with companies, please check us out below, or reach out to me here. Want to learn about applied Artificial Intelligence from leading practitioners in Silicon Valley, New York, or Toronto? Learn more about the Insight Artificial Intelligence Fellows Program. Are you a company working in AI and would like to get involved in the Insight AI Fellows Program? Feel free to get in touch. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. AI Lead at Insight AI @EmmanuelAmeisen Insight Fellows Program - Your bridge to a career in data " Irhum Shafkat,2K,15,https://towardsdatascience.com/intuitively-understanding-convolutions-for-deep-learning-1f6f42faee1?source=---------2----------------,Intuitively Understanding Convolutions for Deep Learning,"The advent of powerful and versatile deep learning frameworks in recent years has made it possible to implement convolution layers into a deep learning model an extremely simple task, often achievable in a single line of code. However, understanding convolutions, especially for the first time can often feel a bit unnerving, with terms like kernels, filters, channels and so on all stacked onto each other. Yet, convolutions as a concept are fascinatingly powerful and highly extensible, and in this post, we’ll break down the mechanics of the convolution operation, step-by-step, relate it to the standard fully connected network, and explore just how they build up a strong visual hierarchy, making them powerful feature extractors for images. The 2D convolution is a fairly simple operation at heart: you start with a kernel, which is simply a small matrix of weights. This kernel “slides” over the 2D input data, performing an elementwise multiplication with the part of the input it is currently on, and then summing up the results into a single output pixel. The kernel repeats this process for every location it slides over, converting a 2D matrix of features into yet another 2D matrix of features. The output features are essentially, the weighted sums (with the weights being the values of the kernel itself) of the input features located roughly in the same location of the output pixel on the input layer. Whether or not an input feature falls within this “roughly same location”, gets determined directly by whether it’s in the area of the kernel that produced the output or not. This means the size of the kernel directly determines how many (or few) input features get combined in the production of a new output feature. This is all in pretty stark contrast to a fully connected layer. 
In the above example, we have 5×5=25 input features, and 3×3=9 output features. If this were a standard fully connected layer, you’d have a weight matrix of 25×9 = 225 parameters, with every output feature being the weighted sum of every single input feature. Convolutions allow us to do this transformation with only 9 parameters, with each output feature, instead of “looking at” every input feature, only getting to “look” at input features coming from roughly the same location. Do take note of this, as it’ll be critical to our later discussion. Before we move on, it’s definitely worth looking into two techniques that are commonplace in convolution layers: Padding and Strides. Padding does something pretty clever to solve this: pad the edges with extra, “fake” pixels (usually of value 0, hence the oft-used term “zero padding”). This way, the kernel when sliding can allow the original edge pixels to be at its center, while extending into the fake pixels beyond the edge, producing an output the same size as the input. The idea of the stride is to skip some of the slide locations of the kernel. A stride of 1 means to pick slides a pixel apart, so basically every single slide, acting as a standard convolution. A stride of 2 means picking slides 2 pixels apart, skipping every other slide in the process, downsizing by roughly a factor of 2, a stride of 3 means skipping every 2 slides, downsizing roughly by factor 3, and so on. More modern networks, such as the ResNet architectures entirely forgo pooling layers in their internal layers, in favor of strided convolutions when needing to reduce their output sizes. Of course, the diagrams above only deals with the case where the image has a single input channel. In practicality, most input images have 3 channels, and that number only increases the deeper you go into a network. It’s pretty easy to think of channels, in general, as being a “view” of the image as a whole, emphasising some aspects, de-emphasising others. So this is where a key distinction between terms comes in handy: whereas in the 1 channel case, where the term filter and kernel are interchangeable, in the general case, they’re actually pretty different. Each filter actually happens to be a collection of kernels, with there being one kernel for every single input channel to the layer, and each kernel being unique. Each filter in a convolution layer produces one and only one output channel, and they do it like so: Each of the kernels of the filter “slides” over their respective input channels, producing a processed version of each. Some kernels may have stronger weights than others, to give more emphasis to certain input channels than others (eg. a filter may have a red kernel channel with stronger weights than others, and hence, respond more to differences in the red channel features than the others). Each of the per-channel processed versions are then summed together to form one channel. The kernels of a filter each produce one version of each channel, and the filter as a whole produces one overall output channel. Finally, then there’s the bias term. The way the bias term works here is that each output filter has one bias term. The bias gets added to the output channel so far to produce the final output channel. And with the single filter case down, the case for any number of filters is identical: Each filter processes the input with its own, different set of kernels and a scalar bias with the process described above, producing a single output channel. 
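The single-channel sliding operation at the heart of all this is small enough to write from scratch. Here is a hedged sketch (plain loops rather than an optimized library call, so the mechanics stay visible), including the zero padding and strides discussed above:

```python
import numpy as np

def conv2d(image, kernel, stride=1, padding=0):
    # Single-channel "valid" convolution as used in deep learning (no kernel flip).
    if padding:
        image = np.pad(image, padding)            # "fake" zero pixels around the edges
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = (patch * kernel).sum()    # elementwise multiply, then sum
    return out

x = np.arange(25, dtype=float).reshape(5, 5)
k = np.ones((3, 3)) / 9.0                         # a simple averaging kernel
print(conv2d(x, k).shape)                         # (3, 3): 5x5 in, 3x3 out, as in the text
print(conv2d(x, k, padding=1).shape)              # (5, 5): "same" output size with zero padding
print(conv2d(x, k, stride=2).shape)               # (2, 2): stride 2 roughly halves each side
```

Each filter runs this once per input channel with its own kernels, sums the results and adds its bias, ending up with its single output channel.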
They are then concatenated together to produce the overall output, with the number of output channels being the number of filters. A nonlinearity is then usually applied before passing this as input to another convolution layer, which then repeats this process. Even with the mechanics of the convolution layer down, it can still be hard to relate it back to a standard feed-forward network, and it still doesn’t explain why convolutions scale to, and work so much better for image data. Suppose we have a 4×4 input, and we want to transform it into a 2×2 grid. If we were using a feedforward network, we’d reshape the 4×4 input into a vector of length 16, and pass it through a densely connected layer with 16 inputs and 4 outputs. One could visualize the weight matrix W for a layer: And although the convolution kernel operation may seem a bit strange at first, it is still a linear transformation with an equivalent transformation matrix. If we were to use a kernel K of size 3 on the reshaped 4×4 input to get a 2×2 output, the equivalent transformation matrix would be: (Note: while the above matrix is an equivalent transformation matrix, the actual operation is usually implemented as a very different matrix multiplication[2]) The convolution then, as a whole, is still a linear transformation, but at the same time it’s also a dramatically different kind of transformation. For a matrix with 64 elements, there’s just 9 parameters which themselves are reused several times. Each output node only gets to see a select number of inputs (the ones inside the kernel). There is no interaction with any of the other inputs, as the weights to them are set to 0. It’s useful to see the convolution operation as a hard prior on the weight matrix. In this context, by prior, I mean predefined network parameters. For example, when you use a pretrained model for image classification, you use the pretrained network parameters as your prior, as a feature extractor to your final densely connected layer. In that sense, there’s a direct intuition between why both are so efficient (compared to their alternatives). Transfer learning is efficient by orders of magnitude compared to random initialization, because you only really need to optimize the parameters of the final fully connected layer, which means you can have fantastic performance with only a few dozen images per class. Here, you don’t need to optimize all 64 parameters, because we set most of them to zero (and they’ll stay that way), and the rest we convert to shared parameters, resulting in only 9 actual parameters to optimize. This efficiency matters, because when you move from the 784 inputs of MNIST to real world 224×224×3 images, thats over 150,000 inputs. A dense layer attempting to halve the input to 75,000 inputs would still require over 10 billion parameters. For comparison, the entirety of ResNet-50 has some 25 million parameters. So fixing some parameters to 0, and tying parameters increases efficiency, but unlike the transfer learning case, where we know the prior is good because it works on a large general set of images, how do we know this is any good? The answer lies in the feature combinations the prior leads the parameters to learn. Early on in this article, we discussed that: So with backpropagation coming in all the way from the classification nodes of the network, the kernels have the interesting task of learning weights to produce features only from a set of local inputs. 
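Under the assumptions stated above (a 3×3 kernel K, a 4×4 input flattened row-major into a length-16 vector, valid padding, stride 1), the equivalent transformation matrix can be written out explicitly. The sketch below builds that 4×16 matrix from the 9 kernel weights, with every other entry fixed at zero, and checks it against the direct sliding-window computation; the kernel and input values are arbitrary.

```python
import numpy as np

def conv_as_matrix(kernel, in_size=4):
    """Build the (out_size^2 x in_size^2) matrix whose rows contain the kernel
    weights scattered into the positions of each 3x3 window; every other entry
    stays 0, and the same 9 weights are reused in every row."""
    k = kernel.shape[0]
    out_size = in_size - k + 1
    W = np.zeros((out_size * out_size, in_size * in_size))
    for i in range(out_size):
        for j in range(out_size):
            row = i * out_size + j
            for di in range(k):
                for dj in range(k):
                    W[row, (i + di) * in_size + (j + dj)] = kernel[di, dj]
    return W

kernel = np.arange(1.0, 10.0).reshape(3, 3)      # the 9 shared parameters
x = np.arange(16.0).reshape(4, 4)                # 4x4 input

W = conv_as_matrix(kernel)                       # shape (4, 16), mostly zeros
matrix_version = (W @ x.reshape(-1)).reshape(2, 2)

# Direct sliding-window computation for comparison
direct = np.array([[np.sum(x[i:i+3, j:j+3] * kernel) for j in range(2)]
                   for i in range(2)])
print(np.allclose(matrix_version, direct))       # True
```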
Additionally, because the kernel itself is applied across the entire image, the features the kernel learns must be general enough to come from any part of the image. If this were any other kind of data, eg. categorical data of app installs, this would’ve been a disaster, for just because your number of app installs and app type columns are next to each other doesn’t mean they have any “local, shared features” common with app install dates and time used. Sure, the four may have an underlying higher level feature (eg. which apps people want most) that can be found, but that gives us no reason to believe the parameters for the first two are exactly the same as the parameters for the latter two. The four could’ve been in any (consistent) order and still be valid! Pixels however, always appear in a consistent order, and nearby pixels influence a pixel e.g. if all nearby pixels are red, it’s pretty likely the pixel is also red. If there are deviations, that’s an interesting anomaly that could be converted into a feature, and all this can be detected from comparing a pixel with its neighbors, with other pixels in its locality. And this idea is really what a lot of earlier computer vision feature extraction methods were based around. For instance, for edge detection, one can use a Sobel edge detection filter, a kernel with fixed parameters, operating just like the standard one-channel convolution: For a non-edge containing grid (eg. the background sky), most of the pixels are the same value, so the overall output of the kernel at that point is 0. For a grid with an vertical edge, there is a difference between the pixels to the left and right of the edge, and the kernel computes that difference to be non-zero, activating and revealing the edges. The kernel only works only a 3×3 grids at a time, detecting anomalies on a local scale, yet when applied across the entire image, is enough to detect a certain feature on a global scale, anywhere in the image! So the key difference we make with deep learning is ask this question: Can useful kernels be learnt? For early layers operating on raw pixels, we could reasonably expect feature detectors of fairly low level features, like edges, lines, etc. There’s an entire branch of deep learning research focused on making neural network models interpretable. One of the most powerful tools to come out of that is Feature Visualization using optimization[3]. The idea at core is simple: optimize a image (usually initialized with random noise) to activate a filter as strongly as possible. This does make intuitive sense: if the optimized image is completely filled with edges, that’s strong evidence that’s what the filter itself is looking for and is activated by. Using this, we can peek into the learnt filters, and the results are stunning: One important thing to notice here is that convolved images are still images. The output of a small grid of pixels from the top left of an image will still be on the top left. So you can run another convolution layer on top of another (such as the two on the left) to extract deeper features, which we visualize. Yet, however deep our feature detectors get, without any further changes they’ll still be operating on very small patches of the image. No matter how deep your detectors are, you can’t detect faces from a 3×3 grid. And this is where the idea of the receptive field comes in. 
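Here is a small sketch of that fixed-parameter Sobel kernel in action on a synthetic image containing a single vertical edge. It reuses the same valid, stride-1 convolution as before, and the image values are made up for illustration.

```python
import numpy as np

# The classic Sobel kernel for vertical edges: fixed, hand-chosen weights
sobel_x = np.array([[-1., 0., 1.],
                    [-2., 0., 2.],
                    [-1., 0., 1.]])

def conv2d(image, kernel):
    """Valid, stride-1, single-channel convolution (as in the earlier sketch)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    return np.array([[np.sum(image[i:i+kh, j:j+kw] * kernel) for j in range(ow)]
                     for i in range(oh)])

# A toy 6x6 image: dark on the left half, bright on the right half
image = np.zeros((6, 6))
image[:, 3:] = 1.0

print(conv2d(image, sobel_x))
# Flat regions produce 0; the windows straddling the edge produce a strong response.
```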
A essential design choice of any CNN architecture is that the input sizes grow smaller and smaller from the start to the end of the network, while the number of channels grow deeper. This, as mentioned earlier, is often done through strides or pooling layers. Locality determines what inputs from the previous layer the outputs get to see. The receptive field determines what area of the original input to the entire network the output gets to see. The idea of a strided convolution is that we only process slides a fixed distance apart, and skip the ones in the middle. From a different point of view, we only keep outputs a fixed distance apart, and remove the rest[1]. We then apply a nonlinearity to the output, and per usual, then stack another new convolution layer on top. And this is where things get interesting. Even if were we to apply a kernel of the same size (3×3), having the same local area, to the output of the strided convolution, the kernel would have a larger effective receptive field: This is because the output of the strided layer still does represent the same image. It is not so much cropping as it is resizing, only thing is that each single pixel in the output is a “representative” of a larger area (of whose other pixels were discarded) from the same rough location from the original input. So when the next layer’s kernel operates on the output, it’s operating on pixels collected from a larger area. (Note: if you’re familiar with dilated convolutions, note that the above is not a dilated convolution. Both are methods of increasing the receptive field, but dilated convolutions are a single layer, while this takes place on a regular convolution following a strided convolution, with a nonlinearity inbetween) This expansion of the receptive field allows the convolution layers to combine the low level features (lines, edges), into higher level features (curves, textures), as we see in the mixed3a layer. Followed by a pooling/strided layer, the network continues to create detectors for even higher level features (parts, patterns), as we see for mixed4a. The repeated reduction in image size across the network results in, by the 5th block on convolutions, input sizes of just 7×7, compared to inputs of 224×224. At this point, each single pixel represents a grid of 32×32 pixels, which is huge. Compared to earlier layers, where an activation meant detecting an edge, here, an activation on the tiny 7×7 grid is one for a very high level feature, such as for birds. The network as a whole progresses from a small number of filters (64 in case of GoogLeNet), detecting low level features, to a very large number of filters(1024 in the final convolution), each looking for an extremely specific high level feature. Followed by a final pooling layer, which collapses each 7×7 grid into a single pixel, each channel is a feature detector with a receptive field equivalent to the entire image. Compared to what a standard feedforward network would have done, the output here is really nothing short of awe-inspiring. A standard feedforward network would have produced abstract feature vectors, from combinations of every single pixel in the image, requiring intractable amounts of data to train. 
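The growth of the receptive field can be tracked with a two-line recurrence: each new layer of kernel size k adds (k - 1) * jump to the receptive field, where jump is the cumulative stride accumulated so far. The stack below is a toy example of my own (alternating 3×3 convolutions with 2×2, stride-2 reductions), not GoogLeNet's actual architecture, but after five reductions the cumulative stride reaches 32, matching the 32×32-pixels-per-activation figure mentioned above.

```python
def receptive_field(layers):
    """Track (receptive field, cumulative stride) through a stack of layers
    given as (kernel_size, stride) pairs."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump   # a new k-wide kernel sees k positions spaced `jump` apart
        jump *= s              # outputs now sit `jump` input pixels apart
    return rf, jump

# Toy stack: five blocks of a 3x3 convolution followed by a 2x2, stride-2 reduction
layers = [(3, 1), (2, 2)] * 5
print(receptive_field(layers))   # (94, 32): each late activation spans 94 input pixels,
                                 # and neighbouring activations are 32 input pixels apart
```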
The CNN, with the priors imposed on it, starts by learning very low level feature detectors, and as across the layers as its receptive field is expanded, learns to combine those low-level features into progressively higher level features; not an abstract combination of every single pixel, but rather, a strong visual hierarchy of concepts. By detecting low level features, and using them to detect higher level features as it progresses up its visual hierarchy, it is eventually able to detect entire visual concepts such as faces, birds, trees, etc, and that’s what makes them such powerful, yet efficient with image data. With the visual hierarchy CNNs build, it is pretty reasonable to assume that their vision systems are similar to humans. And they’re really great with real world images, but they also fail in ways that strongly suggest their vision systems aren’t entirely human-like. The most major problem: Adversarial Examples[4], examples which have been specifically modified to fool the model. Adversarial examples would be a non-issue if the only tampered ones that caused the models to fail were ones that even humans would notice. The problem is, the models are susceptible to attacks by samples which have only been tampered with ever so slightly, and would clearly not fool any human. This opens the door for models to silently fail, which can be pretty dangerous for a wide range of applications from self-driving cars to healthcare. Robustness against adversarial attacks is currently a highly active area of research, the subject of many papers and even competitions, and solutions will certainly improve CNN architectures to become safer and more reliable. CNNs were the models that allowed computer vision to scale from simple applications to powering sophisticated products and services, ranging from face detection in your photo gallery to making better medical diagnoses. They might be the key method in computer vision going forward, or some other new breakthrough might just be around the corner. Regardless, one thing is for sure: they’re nothing short of amazing, at the heart of many present-day innovative applications, and are most certainly worth deeply understanding. Hope you enjoyed this article! If you’d like to stay connected, you’ll find me on Twitter here. If you have a question, comments are welcome! — I find them to be useful to my own learning process as well. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Curious programmer, tinkers around in Python and deep learning. Sharing concepts, ideas, and codes. " Abhishek Parbhakar,937,6,https://towardsdatascience.com/must-know-information-theory-concepts-in-deep-learning-ai-e54a5da9769d?source=---------3----------------,Must know Information Theory concepts in Deep Learning (AI),"Information theory is an important field that has made significant contribution to deep learning and AI, and yet is unknown to many. Information theory can be seen as a sophisticated amalgamation of basic building blocks of deep learning: calculus, probability and statistics. Some examples of concepts in AI that come from Information theory or related fields: In the early 20th century, scientists and engineers were struggling with the question: “How to quantify the information? Is there a analytical way or a mathematical measure that can tell us about the information content?”. 
For example, consider below two sentences: It is not difficult to tell that the second sentence gives us more information since it also tells that Bruno is “big” and “brown” in addition to being a “dog”. How can we quantify the difference between two sentences? Can we have a mathematical measure that tells us how much more information second sentence have as compared to the first? Scientists were struggling with these questions. Semantics, domain and form of data only added to the complexity of the problem. Then, mathematician and engineer Claude Shannon came up with the idea of “Entropy” that changed our world forever and marked the beginning of “Digital Information Age”. Shannon proposed that the “semantic aspects of data are irrelevant”, and nature and meaning of data doesn’t matter when it comes to information content. Instead he quantified information in terms of probability distribution and “uncertainty”. Shannon also introduced the term “bit”, that he humbly credited to his colleague John Tukey. This revolutionary idea not only laid the foundation of Information Theory but also opened new avenues for progress in fields like artificial intelligence. Below we discuss four popular, widely used and must known Information theoretic concepts in deep learning and data sciences: Also called Information Entropy or Shannon Entropy. Entropy gives a measure of uncertainty in an experiment. Let’s consider two experiments: If we compare the two experiments, in exp 2 it is easier to predict the outcome as compared to exp 1. So, we can say that exp 1 is inherently more uncertain/unpredictable than exp 2. This uncertainty in the experiment is measured using entropy. Therefore, if there is more inherent uncertainty in the experiment then it has higher entropy. Or lesser the experiment is predictable more is the entropy. The probability distribution of experiment is used to calculate the entropy. A deterministic experiment, which is completely predictable, say tossing a coin with P(H)=1, has entropy zero. An experiment which is completely random, say rolling fair dice, is least predictable, has maximum uncertainty, and has the highest entropy among such experiments. Another way to look at entropy is the average information gained when we observe outcomes of an random experiment. The information gained for a outcome of an experiment is defined as a function of probability of occurrence of that outcome. More the rarer is the outcome, more is the information gained from observing it. For example, in an deterministic experiment, we always know the outcome, so no new information gained is here from observing the outcome and hence entropy is zero. For a discrete random variable X, with possible outcomes (states) x_1,...,x_n the entropy, in unit of bits, is defined as: where p(x_i) is the probability of i^th outcome of X. Cross entropy is used to compare two probability distributions. It tells us how similar two distributions are. Cross entropy between two probability distributions p and q defined over same set of outcomes is given by: Mutual information is a measure of mutual dependency between two probability distributions or random variables. It tells us how much information about one variable is carried by the another variable. Mutual information captures dependency between random variables and is more generalized than vanilla correlation coefficient, which captures only the linear relationship. 
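The entropy and cross entropy definitions above (with base-2 logarithms, so the units are bits) translate directly into NumPy: H(X) = -Σ p(x_i) log2 p(x_i) and H(p, q) = -Σ p(x_i) log2 q(x_i). The example distributions below are arbitrary.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits: H(X) = -sum_i p_i * log2(p_i)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

def cross_entropy(p, q):
    """Cross entropy in bits: H(p, q) = -sum_i p_i * log2(q_i)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return -np.sum(p * np.log2(q))

fair_coin = [0.5, 0.5]
rigged_coin = [0.99, 0.01]
print(entropy(fair_coin))              # 1.0 bit: maximally unpredictable
print(entropy(rigged_coin))            # ~0.08 bits: almost deterministic
print(entropy([1.0, 0.0]))             # 0.0: a completely predictable experiment
print(cross_entropy(fair_coin, rigged_coin))   # > entropy(fair_coin): q is a poor fit for p
```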
Mutual information of two discrete random variables X and Y is defined as: where p(x,y) is the joint probability distribution of X and Y, and p(x) and p(y) are the marginal probability distribution of X and Y respectively. Also called Relative Entropy. KL divergence is another measure to find similarities between two probability distributions. It measures how much one distribution diverges from the other. Suppose, we have some data and true distribution underlying it is ‘P’. But we don’t know this ‘P’, so we choose a new distribution ‘Q’ to approximate this data. Since ‘Q’ is just an approximation, it won’t be able to approximate the data as good as ‘P’ and some information loss will occur. This information loss is given by KL divergence. KL divergence between ‘P’ and ‘Q’ tells us how much information we lose when we try to approximate data given by ‘P’ with ‘Q’. KL divergence of a probability distribution Q from another probability distribution P is defined as: KL divergence is commonly used in unsupervised machine learning technique Variational Autoencoders. Information Theory was originally formulated by mathematician and electrical engineer Claude Shannon in his seminal paper “A Mathematical Theory of Communication” in 1948. Note: Terms experiments, random variable & AI, machine learning, deep learning, data science have been used loosely above but have technically different meanings. In case you liked the article, do follow me Abhishek Parbhakar for more articles related to AI, philosophy and economics. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Finding equilibria among AI, philosophy, and economics. Sharing concepts, ideas, and codes. " Aman Dalmia,2.3K,17,https://blog.usejournal.com/what-i-learned-from-interviewing-at-multiple-ai-companies-and-start-ups-a9620415e4cc?source=---------4----------------,What I learned from interviewing at multiple AI companies and start-ups,"Over the past 8 months, I’ve been interviewing at various companies like Google’s DeepMind, Wadhwani Institute of AI, Microsoft, Ola, Fractal Analytics, and a few others primarily for the roles — Data Scientist, Software Engineer & Research Engineer. In the process, not only did I get an opportunity to interact with many great minds, but also had a peek at myself along with a sense of what people really look for when interviewing someone. I believe that if I’d had this knowledge before, I could have avoided many mistakes and have prepared in a much better manner, which is what the motivation behind this post is, to be able to help someone bag their dream place of work. This post arose from a discussion with one of my juniors on the lack of really fulfilling job opportunities offered through campus placements for people working in AI. Also, when I was preparing, I noticed people using a lot of resources but as per my experience over the past months, I realised that one can do away with a few minimal ones for most roles in AI, all of which I’m going to mention at the end of the post. I begin with How to get noticed a.k.a. the interview. Then I provide a List of companies and start-ups to apply, which is followed by How to ace that interview. Based on whatever experience I’ve had, I add a section on What we should strive to work for. I conclude with Minimal Resources you need for preparation. NOTE: For people who are sitting for campus placements, there are two things I’d like to add. 
Firstly, most of what I’m going to say (except for the last one maybe) is not going to be relevant to you for placements. But, and this is my second point, as I mentioned before, opportunities on campus are mostly in software engineering roles having no intersection with AI. So, this post is specifically meant for people who want to work on solving interesting problems using AI. Also, I want to add that I haven’t cleared all of these interviews but I guess that’s the essence of failure — it’s the greatest teacher! The things that I mention here may not all be useful but these are things that I did and there’s no way for me to know what might have ended up making my case stronger. To be honest, this step is the most important one. What makes off-campus placements so tough and exhausting is getting the recruiter to actually go through your profile among the plethora of applications that they get. Having a contact inside the organisation place a referral for you would make it quite easy, but, in general, this part can be sub-divided into three keys steps: a) Do the regulatory preparation and do that well: So, with regulatory preparation, I mean —a LinkedIn profile, a Github profile, a portfolio website and a well-polished CV. Firstly, your CV should be really neat and concise. Follow this guide by Udacity for cleaning up your CV — Resume Revamp. It has everything that I intend to say and I’ve been using it as a reference guide myself. As for the CV template, some of the in-built formats on Overleaf are quite nice. I personally use deedy-resume. Here’s a preview: As it can be seen, a lot of content can be fit into one page. However, if you really do need more than that, then the format linked above would not work directly. Instead, you can find a modified multi-page format of the same here. The next most important thing to mention is your Github profile. A lot of people underestimate the potential of this, just because unlike LinkedIn, it doesn’t have a “Who Viewed Your Profile” option. People DO go through your Github because that’s the only way they have to validate what you have mentioned in your CV, given that there’s a lot of noise today with people associating all kinds of buzzwords with their profile. Especially for data science, open-source has a big role to play too with majority of the tools, implementations of various algorithms, lists of learning resources, all being open-sourced. I discuss the benefits of getting involved in Open-Source and how one can start from scratch in an earlier post here. The bare minimum for now should be: • Create a Github account if you don’t already have one.• Create a repository for each of the projects that you have done.• Add documentation with clear instructions on how to run the code• Add documentation for each file mentioning the role of each function, the meaning of each parameter, proper formatting (e.g. PEP8 for Python) along with a script to automate the previous step (Optional). Moving on, the third step is what most people lack, which is having a portfolio website demonstrating their experience and personal projects. Making a portfolio indicates that you are really serious about getting into the field and adds a lot of points to the authenticity factor. Also, you generally have space constraints on your CV and tend to miss out on a lot of details. You can use your portfolio to really delve deep into the details if you want to and it’s highly recommended to include some sort of visualisation or demonstration of the project/idea. 
It’s really easy to create one too as there are a lot of free platforms with drag-and-drop features making the process really painless. I personally use Weebly which is a widely used tool. It’s better to have a reference to begin with. There are a lot of awesome ones out there but I referred to Deshraj Yadav’s personal website to begin with making mine: Finally, a lot of recruiters and start-ups have nowadays started using LinkedIn as their go-to platform for hiring. A lot of good jobs get posted there. Apart from recruiters, the people working at influential positions are quite active there as well. So, if you can grab their attention, you have a good chance of getting in too. Apart from that, maintaining a clean profile is necessary for people to have the will to connect with you. An important part of LinkedIn is their search tool and for you to show up, you must have the relevant keywords interspersed over your profile. It took me a lot of iterations and re-evaluations to finally have a decent one. Also, you should definitely ask people with or under whom you’ve worked with to endorse you for your skills and add a recommendation talking about their experience of working with you. All of this increases your chance of actually getting noticed. I’ll again point towards Udacity’s guide for LinkedIn and Github profiles. All this might seem like a lot, but remember that you don’t need to do it in a single day or even a week or a month. It’s a process, it never ends. Setting up everything at first would definitely take some effort but once it’s there and you keep updating it regularly as events around you keep happening, you’ll not only find it to be quite easy, but also you’ll be able to talk about yourself anywhere anytime without having to explicitly prepare for it because you become so aware about yourself. b) Stay authentic: I’ve seen a lot of people do this mistake of presenting themselves as per different job profiles. According to me, it’s always better to first decide what actually interests you, what would you be happy doing and then search for relevant opportunities; not the other way round. The fact that the demand for AI talent surpasses the supply for the same gives you this opportunity. Spending time on your regulatory preparation mentioned above would give you an all-around perspective on yourself and help make this decision easier. Also, you won’t need to prepare answers to various kinds of questions that you get asked during an interview. Most of them would come out naturally as you’d be talking about something you really care about. c) Networking: Once you’re done with a), figured out b), Networking is what will actually help you get there. If you don’t talk to people, you miss out on hearing about many opportunities that you might have a good shot at. It’s important to keep connecting with new people each day, if not physically, then on LinkedIn, so that upon compounding it after many days, you have a large and strong network. Networking is NOT messaging people to place a referral for you. When I was starting off, I did this mistake way too often until I stumbled upon this excellent article by Mark Meloon, where he talks about the importance of building a real connection with people by offering our help first. Another important step in networking is to get your content out. For example, if you’re good at something, blog about it and share that blog on Facebook and LinkedIn. Not only does this help others, it helps you as well. 
Once you have a good enough network, your visibility increases multi-fold. You never know how one person from your network liking or commenting on your posts, may help you reach out to a much broader audience including people who might be looking for someone of your expertise. I’m presenting this list in alphabetical order to avoid the misinterpretation of any specific preference. However, I do place a “*” on the ones that I’d personally recommend. This recommendation is based on either of the following: mission statement, people, personal interaction or scope of learning. More than 1 “*” is purely based on the 2nd and 3rd factors. Your interview begins the moment you have entered the room and a lot of things can happen between that moment and the time when you’re asked to introduce yourself — your body language and the fact that you’re smiling while greeting them plays a big role, especially when you’re interviewing for a start-up as culture-fit is something that they extremely care about. You need to understand that as much as the interviewer is a stranger to you, you’re a stranger to him/her too. So, they’re probably just as nervous as you are. It’s important to view the interview as more of a conversation between yourself and the interviewer. Both of you are looking for a mutual fit — you are looking for an awesome place to work at and the interviewer is looking for an awesome person (like you) to work with. So, make sure that you’re feeling good about yourself and that you take the charge of making the initial moments of your conversation pleasant for them. And the easiest way I know how to make that happen is to smile. There are mostly two types of interviews — one, where the interviewer has come with come prepared set of questions and is going to just ask you just that irrespective of your profile and the second, where the interview is based on your CV. I’ll start with the second one. This kind of interview generally begins with a “Can you tell me a bit about yourself?”. At this point, 2 things are a big NO — talking about your GPA in college and talking about your projects in detail. An ideal statement should be about a minute or two long, should give a good idea on what have you been doing till now, and it’s not restricted to academics. You can talk about your hobbies like reading books, playing sports, meditation, etc — basically, anything that contributes to defining you. The interviewer will then take something that you talk about here as a cue for his next question, and then the technical part of the interview begins. The motive of this kind of interview is to really check whether whatever you have written on your CV is true or not: There would be a lot of questions on what could be done differently or if “X” was used instead of “Y”, what would have happened. At this point, it’s important to know the kind of trade-offs that is usually made during implementation, for e.g. if the interviewer says that using a more complex model would have given better results, then you might say that you actually had less data to work with and that would have lead to overfitting. In one of the interviews, I was given a case-study to work on and it involved designing algorithms for a real-world use case. 
I’ve noticed that once I’ve been given the green flag to talk about a project, the interviewers really like it when I talk about it in the following flow: Problem > 1 or 2 previous approaches > Our approach > Result > Intuition The other kind of interview is really just to test your basic knowledge. Don’t expect those questions to be too hard. But they would definitely scratch every bit of the basics that you should be having, mainly based around Linear Algebra, Probability, Statistics, Optimisation, Machine Learning and/or Deep Learning. The resources mentioned in the Minimal Resources you need for preparation section should suffice, but make sure that you don’t miss out one bit among them. The catch here is the amount of time you take to answer those questions. Since these cover the basics, they expect that you should be answering them almost instantly. So, do your preparation accordingly. Throughout the process, it’s important to be confident and honest about what you know and what you don’t know. If there’s a question that you’re certain you have no idea about, say it upfront rather than making “Aah”, “Um” sounds. If some concept is really important but you are struggling with answering it, the interviewer would generally (depending on how you did in the initial parts) be happy to give you a hint or guide you towards the right solution. It’s a big plus if you manage to pick their hints and arrive at the correct solution. Try to not get nervous and the best way to avoid that is by, again, smiling. Now we come to the conclusion of the interview where the interviewer would ask you if you have any questions for them. It’s really easy to think that your interview is done and just say that you have nothing to ask. I know many people who got rejected just because of failing at this last question. As I mentioned before, it’s not only you who is being interviewed. You are also looking for a mutual fit with the company itself. So, it’s quite obvious that if you really want to join a place, you must have many questions regarding the work culture there or what kind of role are they seeing you in. It can be as simple as being curious about the person interviewing you. There’s always something to learn from everything around you and you should make sure that you leave the interviewer with the impression that you’re truly interested in being a part of their team. A final question that I’ve started asking all my interviewers, is for a feedback on what they might want me to improve on. This has helped me tremendously and I still remember every feedback that I’ve gotten which I’ve incorporated into my daily life. That’s it. Based on my experience, if you’re just honest about yourself, are competent, truly care about the company you’re interviewing for and have the right mindset, you should have ticked all the right boxes and should be getting a congratulatory mail soon 😄 We live in an era full of opportunities and that applies to anything that you love. You just need to strive to become the best at it and you will find a way to monetise it. As Gary Vaynerchuk (just follow him already) says: This is a great time to be working in AI and if you’re truly passionate about it, you have so much that you can do with AI. You can empower so many people that have always been under-represented. We keep nagging about the problems surrounding us, but there’s been never such a time where common people like us can actually do something about those problems, rather than just complaining. 
Jeffrey Hammerbacher (Founder, Cloudera) had famously said: We can do so much with AI than we can ever imagine. There are many extremely challenging problems out there which require incredibly smart people like you to put your head down on and solve. You can make many lives better. Time to let go of what is “cool”, or what would “look good”. THINK and CHOOSE wisely. Any Data Science interview comprises of questions mostly of a subset of the following four categories: Computer Science, Math, Statistics and Machine Learning. If you’re not familiar with the math behind Deep Learning, then you should consider going over my last post for resources to understand them. However, if you are comfortable, I’ve found that the chapters 2, 3 and 4 of the Deep Learning Book are enough to prepare/revise for theoretical questions during such interviews. I’ve been preparing summaries for a few chapters which you can refer to where I’ve tried to even explain a few concepts that I found challenging to understand at first, in case you are not willing to go through the entire chapters. And if you’ve already done a course on probability, you should be comfortable answering a few numerical as well. For stats, covering these topics should be enough. Now, the range of questions here can vary depending on the type of position you are applying for. If it’s a more traditional Machine Learning based interview where they want to check your basic knowledge in ML, you can complete any one of the following courses:- Machine Learning by Andrew Ng — CS 229- Machine Learning course by Caltech Professor Yaser Abu-Mostafa Important topics are: Supervised Learning (Classification, Regression, SVM, Decision Tree, Random Forests, Logistic Regression, Multi-layer Perceptron, Parameter Estimation, Bayes’ Decision Rule), Unsupervised Learning (K-means Clustering, Gaussian Mixture Models), Dimensionality Reduction (PCA). Now, if you’re applying for a more advanced position, there’s a high chance that you might be questioned on Deep Learning. In that case, you should be very comfortable with Convolutional Neural Networks (CNNs) and/or (depending upon what you’ve worked on) Recurrent Neural Networks (RNNs) and their variants. And by being comfortable, you must know what is the fundamental idea behind Deep Learning, how CNNs/RNNs actually worked, what kind of architectures have been proposed and what has been the motivation behind those architectural changes. Now, there’s no shortcut for this. Either you understand them or you put enough time to understand them. For CNNs, the recommended resource is Stanford’s CS 231N and CS 224N for RNNs. I found this Neural Network class by Hugo Larochelle to be really enlightening too. Refer this for a quick refresher too. Udacity coming to the aid here too. By now, you should have figured out that Udacity is a really important place for an ML practitioner. There are not a lot of places working on Reinforcement Learning (RL) in India and I too am not experienced in RL as of now. So, that’s one thing to add to this post sometime in the future. Getting placed off-campus is a long journey of self-realisation. I realise that this has been another long post and I’m again extremely grateful to you for valuing my thoughts. I hope that this post finds a way of being useful to you and that it helped you in some way to prepare for your next Data Science interview better. If it did, I request you to really think about what I talk about in What we should strive to work for. 
I’m very thankful to my friends from IIT Guwahati for their helpful feedback, especially Ameya Godbole, Kothapalli Vignesh and Prabal Jain. A majority of what I mention here, like “viewing an interview as a conversation” and “seeking feedback from our interviewers”, arose from multiple discussions with Prabal who has been advising me constantly on how I can improve my interviewing skills. This story is published in Noteworthy, where thousands come every day to learn about the people & ideas shaping the products we love. Follow our publication to see more product & design stories featured by the Journal team. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. AI Fanatic • Math Lover • Dreamer The official Journal blog " Gaurav Kaila,2.1K,10,https://medium.com/nanonets/how-we-flew-a-drone-to-monitor-construction-projects-in-africa-using-deep-learning-b792f5c9c471?source=---------5----------------,How to easily automate Drone-based monitoring using Deep Learning,"This article is a comprehensive overview of using deep learning based object detection methods for aerial imagery via drones. Did you know Drones and it’s associated functions are set to be a $50 billion industry by 2023? Currently drones being used in domains such as agriculture, construction, public safety and security to name a few and are rapidly being adopted by others. With deep-learning based computer vision now powering these drones, industry experts are now predicting unprecedented use in previously unimaginable or infeasible applications. We explore some of these applications along with challenges in automation of drone-based monitoring through deep learning. Finally, a case-study is presented for automating remote inspection of construction projects in Africa using Nanonets machine learning framework. Man has always been feed fascinated with the view of the world from the top — building watch-towers, high fortwalls, capturing the highest mountain peak. To capture a glimpse and share it with the world, people went to great lengths to defy gravity, enlisting the help of ladders, tall buildings, kites, balloons, planes, and rockets. Today, access to drones that can fly as high as 2kms is possible even for the general public. These drones have high resolution cameras attached to them that are capable of acquiring quality images which can be used for various kinds of analysis. With easier access to drones, we’re seeing a lot of interest and activity by photographers & hobbyists, who are using it to make creative projects such as capturing inequality in South Africa or breathtaking views of New York which might make Woody Allen proud. We explore some here: Energy : Inspection of solar farms Routine inspection and maintenance is a herculean task for solar farms. The traditional manual inspection method can only support the inspection frequency of once in three months. Because of the hostile environment, solar panels may have defects; broken solar panel units reduce the power output efficiency. Agriculture: Early plant disease detection Researchers at Imperial College London is mounting multi-spectral cameras on drones that will use special filters to capture reflected light from selected regions of the electromagnetic spectrum. Stressed plants typically display a ‘spectral signature’ that distinguishes them from healthy plants. Public Safety: Shark detection Analysis of overhead view of a large mass of land/water can yield a vast amount of information in terms of security and public safety. 
One such example is spotting sharks in the water off the coast of Australia. Australia-based Westpac Group has developed a deep learning based object detection system to detect sharks in the water. There are various other applications to aerial images such as Civil Engineering (routine bridge inspections, power line surveillance and traffic surveying), Oil and Gas (on- & offshore inspection of oil and gas platforms, drilling rigs), Public Safety (motor vehicle accidents, nuclear accidents, structural fires, ship collisions, plane and train crashes) & Security (Traffic surveillance, Border surveillance,Coastal surveillance, Controlling hostile demonstrations and rioting). To comprehensively capture terrain & landscapes, the process of acquiring aerial images can be summarised in two steps. After image stitching, the generated map can be used for various kinds of analysis for the applications mentioned above. High-resolution aerial imagery is increasingly available at the global scale and contains an abundance of information about features of interest that could be correlated with maintenance, land development, disease control, defect localisation, surveillance, etc. Unfortunately, such data are highly unstructured and thus challenging to extract meaningful insights from at scale, even with intensive manual analysis. For eg, classification of urban land use is typically based on surveys performed by trained professionals. As such, this task is labor-intensive, infrequent, slow, and costly. As a result, such data are mostly available in developed countries and big cities that have the resources and the vision necessary to collect and curate it. Another motivation for automating the analysis of aerial imagery stems from the urgency of predicting changes in the region of interest. For eg, crowd counting and crowd behaviour is frequently done during large public gatherings such as concerts, football matches, protests, etc. Traditionally, a human is behind the analysis of images being streamed from a CCTV camera directly to the command centre. As you may imagine, there are several problems with this approach such as human latency or error in detecting an event and lack of sufficient views via standard-static CCTV cameras. Below are some of the commonly occurring challenges when using aerial imagery. There are several challenges to overcome when automating the analysis of drone imagery. Following lists a few of them with a prospective solution: Pragmatic Master, a South-African robotics-as-a-service collaborated with Nanonets for automation of remotely monitoring progress of a housing construction project in Africa. We aim to detect the following infrastructure to capture the construction progress of a house in it’s various stages : a foundation (start), wallplate (in-progress), roof (partially complete), apron (finishing touches) and geyser (ready-to-move in) Pragmatic Master chose Nanonets as it’s deep learning provider because of it’s easy-to-use web platform and plug&play APIs. The end-to-end process of using the Nanonets API is as simple as four steps. 2. Labelling of images: Labelling images is probably the hardest and the most time-consuming step in any supervised machine learning pipeline, but at Nanonets we have this covered for you. We have in-house experts that have multiple years of working with aerial images. They will annotate your images with high precision and accuracy to aid better model training. 
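For a sense of what the labelling step actually produces, here is a purely hypothetical bounding-box annotation for one aerial image, using the five construction classes listed above. This is an illustrative layout of my own, not Nanonets' actual annotation format or API.

```python
from collections import Counter

# A hypothetical bounding-box annotation for one aerial image, just to
# illustrate what "labelling" produces; this is NOT Nanonets' actual format.
example_annotation = {
    "image": "site_042.jpg",                                # made-up filename
    "objects": [
        {"label": "roof",   "bbox": [410, 220, 585, 360]},  # [xmin, ymin, xmax, ymax] in pixels
        {"label": "apron",  "bbox": [395, 365, 600, 410]},
        {"label": "geyser", "bbox": [512, 240, 530, 258]},  # tiny object, only a few pixels across
    ],
}

# Per-class counts of the kind tallied across all the labelled images
print(Counter(obj["label"] for obj in example_annotation["objects"]))
```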
For the Pragmatic Master use-case, we were labelling the following objects and their total count in all the images. 3. Model training: At Nanonets we employ the principle of Transfer Learning while training on your images. This involves re-training a pre-trained model that has already been pre-trained with a large number of aerial images. This helps the model identify micro patterns such as edges, lines and contours easily on your images and focus on the more specific macro patterns such as houses, trees, humans, cars, etc. Transfer learning also gives a boost in term of training time as the model does not need to be trained for a large number of iterations to give a good performance. Our proprietary deep learning software smartly selects the best model along with optimising the hyper-parameters for your use-case. This involves searching through multiple models and through a hyperspace of parameters using advanced search algorithms. The hardest objects to detect are the smallest ones, due to their low resolution. Our model training strategy is optimised to detect very small objects such as Geysers and Aprons which have an area of a few pixels. Following are the mean average precision per class that we get, Roof: 95.1%Geyser: 88%Wallplate: 92%Apron: 81% Note: Adding more images can lead to an increase in the mean average precision. Our API also supports detecting multiple objects in the same image such as Roofs and Aprons in one image. 4. Test & Integrate: Once the model is trained, you can either integrate Nanonet’s API directly into your system or we also provide a docker image with the trained model and inference code that you can use. Docker images can easily scale and provide a fault tolerant inference system. Customer trust is our top priority. We are committed towards providing you ownership and control over your content at all times. We provide two plans for using our service, For both the plans, we use highly sophisticated data privacy and security protocols in collaboration with Amazon Web Services, which is our cloud partner. Your dataset is anonymised and goes through minimal human intervention during the pre-processing and training process. All our human labellers have signed a non-disclosure agreement (NDA) to protect your data from going into wrong hands. As we believe in the philosophy of “Your data is yours!”, you can request us to delete your data from our servers at any stage. NanoNets is a web service that makes it easy to use Deep Learning. You can build a model with your own data to achieve high accuracy & use our APIs to integrate the same in your application. Pragmatic Master is a South African robotics as a service company that provides camera-mounted drones to acquire images of construction, farming and mining sites. These images are analysed to track progress, identify challenges, eliminate inefficiencies and provide an overall aerial view of the site. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Machine Learning Engineer NanoNets: Machine Learning API " James Loy,8.5K,6,https://towardsdatascience.com/how-to-build-your-own-neural-network-from-scratch-in-python-68998a08e4f6?source=---------6----------------,How to build your own Neural Network from scratch in Python,"Motivation: As part of my personal journey to gain a better understanding of Deep Learning, I’ve decided to build a Neural Network from scratch without a deep learning library like TensorFlow. 
I believe that understanding the inner workings of a Neural Network is important to any aspiring Data Scientist. This article contains what I’ve learned, and hopefully it’ll be useful for you as well! Most introductory texts to Neural Networks brings up brain analogies when describing them. Without delving into brain analogies, I find it easier to simply describe Neural Networks as a mathematical function that maps a given input to a desired output. Neural Networks consist of the following components The diagram below shows the architecture of a 2-layer Neural Network (note that the input layer is typically excluded when counting the number of layers in a Neural Network) Creating a Neural Network class in Python is easy. Training the Neural Network The output ŷ of a simple 2-layer Neural Network is: You might notice that in the equation above, the weights W and the biases b are the only variables that affects the output ŷ. Naturally, the right values for the weights and biases determines the strength of the predictions. The process of fine-tuning the weights and biases from the input data is known as training the Neural Network. Each iteration of the training process consists of the following steps: The sequential graph below illustrates the process. As we’ve seen in the sequential graph above, feedforward is just simple calculus and for a basic 2-layer neural network, the output of the Neural Network is: Let’s add a feedforward function in our python code to do exactly that. Note that for simplicity, we have assumed the biases to be 0. However, we still need a way to evaluate the “goodness” of our predictions (i.e. how far off are our predictions)? The Loss Function allows us to do exactly that. There are many available loss functions, and the nature of our problem should dictate our choice of loss function. In this tutorial, we’ll use a simple sum-of-sqaures error as our loss function. That is, the sum-of-squares error is simply the sum of the difference between each predicted value and the actual value. The difference is squared so that we measure the absolute value of the difference. Our goal in training is to find the best set of weights and biases that minimizes the loss function. Now that we’ve measured the error of our prediction (loss), we need to find a way to propagate the error back, and to update our weights and biases. In order to know the appropriate amount to adjust the weights and biases by, we need to know the derivative of the loss function with respect to the weights and biases. Recall from calculus that the derivative of a function is simply the slope of the function. If we have the derivative, we can simply update the weights and biases by increasing/reducing with it(refer to the diagram above). This is known as gradient descent. However, we can’t directly calculate the derivative of the loss function with respect to the weights and biases because the equation of the loss function does not contain the weights and biases. Therefore, we need the chain rule to help us calculate it. Phew! That was ugly but it allows us to get what we needed — the derivative (slope) of the loss function with respect to the weights, so that we can adjust the weights accordingly. Now that we have that, let’s add the backpropagation function into our python code. For a deeper understanding of the application of calculus and the chain rule in backpropagation, I strongly recommend this tutorial by 3Blue1Brown. 
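Since the code snippets do not survive in this text, here is a minimal sketch consistent with the description above: a 2-layer network with the biases taken as 0, a sum-of-squares loss, and backpropagation via the chain rule. The sigmoid activation, the hidden width of 4, and the toy dataset are my own assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(a):
    # derivative of the sigmoid written in terms of its output a = sigmoid(z)
    return a * (1.0 - a)

class NeuralNetwork:
    """2-layer network: y_hat = sigmoid(sigmoid(x W1) W2), biases assumed 0."""

    def __init__(self, x, y, hidden=4):
        self.input = x
        self.y = y
        self.weights1 = np.random.rand(x.shape[1], hidden)
        self.weights2 = np.random.rand(hidden, 1)
        self.output = np.zeros(y.shape)

    def feedforward(self):
        self.layer1 = sigmoid(np.dot(self.input, self.weights1))
        self.output = sigmoid(np.dot(self.layer1, self.weights2))

    def backprop(self):
        # chain rule applied to the sum-of-squares loss sum((y - y_hat)^2)
        d_output = 2 * (self.y - self.output) * sigmoid_derivative(self.output)
        d_weights2 = np.dot(self.layer1.T, d_output)
        d_layer1 = np.dot(d_output, self.weights2.T) * sigmoid_derivative(self.layer1)
        d_weights1 = np.dot(self.input.T, d_layer1)
        # adding the negative loss gradient == gradient descent on the loss
        self.weights1 += d_weights1
        self.weights2 += d_weights2

# Tiny usage example on XOR-like toy data, trained for 1500 iterations
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
nn = NeuralNetwork(X, y)
for _ in range(1500):
    nn.feedforward()
    nn.backprop()
print(nn.output.round(3))   # predictions should approach [0, 1, 1, 0]
```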
Now that we have our complete python code for doing feedforward and backpropagation, let’s apply our Neural Network on an example and see how well it does. Our Neural Network should learn the ideal set of weights to represent this function. Note that it isn’t exactly trivial for us to work out the weights just by inspection alone. Let’s train the Neural Network for 1500 iterations and see what happens. Looking at the loss per iteration graph below, we can clearly see the loss monotonically decreasing towards a minimum. This is consistent with the gradient descent algorithm that we’ve discussed earlier. Let’s look at the final prediction (output) from the Neural Network after 1500 iterations. We did it! Our feedforward and backpropagation algorithm trained the Neural Network successfully and the predictions converged on the true values. Note that there’s a slight difference between the predictions and the actual values. This is desirable, as it prevents overfitting and allows the Neural Network to generalize better to unseen data. Fortunately for us, our journey isn’t over. There’s still much to learn about Neural Networks and Deep Learning. For example: I’ll be writing more on these topics soon, so do follow me on Medium and keep and eye out for them! I’ve certainly learnt a lot writing my own Neural Network from scratch. Although Deep Learning libraries such as TensorFlow and Keras makes it easy to build deep nets without fully understanding the inner workings of a Neural Network, I find that it’s beneficial for aspiring data scientist to gain a deeper understanding of Neural Networks. This exercise has been a great investment of my time, and I hope that it’ll be useful for you as well! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Graduate Student in Machine Learning @ Georgia Tech | LinkedIn: https://www.linkedin.com/in/jamesloy1/ Sharing concepts, ideas, and codes. " Chintan Trivedi,1.2K,8,https://towardsdatascience.com/using-deep-q-learning-in-fifa-18-to-perfect-the-art-of-free-kicks-f2e4e979ee66?source=---------7----------------,Using Deep Q-Learning in FIFA 18 to perfect the art of free-kicks,"A code tutorial in Tensorflow that uses Reinforcement Learning to take free kicks. In my previous article, I presented an AI bot trained to play the game of FIFA using Supervised Learning technique. With this approach, the bot quickly learnt the basics of the game like passing and shooting. However, the training data required to improve it further quickly became cumbersome to gather and provided little-to-no improvements, making this approach very time consuming. For this sake, I decided to switch to Reinforcement Learning, as suggested by almost everyone who commented on that article! In this article, I’ll provide a short description of what Reinforcement Learning is and how I applied it to this game. A big challenge in implementing this is that we do not have access to the game’s code, so we can only make use of what we see on the game screen. Due to this reason, I was unable to train the AI on the full game, but could find a work-around to implement it for skill games in practice mode. For this tutorial, I will be trying to teach the bot to take 30-yard free kicks, but you can modify it to play other skill games as well. Let’s start with understanding the Reinforcement Learning technique and how we can formulate our free kick problem to fit this technique. 
Contrary to Supervised Learning, we do not need to manually label the training data in Reinforcement Learning. Instead, we interact with our environment and observe the outcome of our interaction. We repeat this process multiple times gaining examples of positive and negative experiences, which acts as our training data. Thus, we learn by experimentation and not imitation. Let’s say our environment is in a particular state s, and upon taking an action a, it changes to state s’. For this particular action, the immediate reward you observe in the environment is r. Any set of actions that follow this action will have their own immediate rewards, until you stop interacting due to a positive or a negative experience. These are called future rewards. Thus, for the current state s, we will try to estimate out of all actions possible which action will fetch us the maximum immediate + future reward, denoted by Q(s,a) called the Q-function. This gives us Q(s,a) = r + γ * Q(s’,a’) which denotes the expected final reward by taking action a in state s. Here, γ is a discount factor to account for uncertainty in predicting the future, thus we want to trust the present a bit more than the future. Deep Q-learning is a special type of Reinforcement Learning technique where the Q-function is learnt by a deep neural network. Given the environment’s state as an image input to this network, it tries to predict the expected final reward for all possible actions like a regression problem. The action with the maximum predicted Q-value is chosen as our action to be taken in the environment. Hence the name Deep Q-Learning. Note: If we had a performance meter in kick-off mode of FIFA like there is in the practice mode, we might have been able to formulate this problem for playing the entire game and not restrict ourselves to just taking free-kicks. That, or we need access to game’s internal code which we don’t have. Anyways, let’s make the most of what we do have. While the bot has not mastered all different kinds of free kicks, it has learnt some situations very well. It almost always hits the target in absence of wall of players but struggles in its presence. Also, when it hasn’t encountered a situation frequently in training like not facing the goal, it behaves bonkers. However, with every training epoch, this behavior was noticed to decrease on an average. As shown in the figure above, the average goal scoring rate grows from 30% to 50% on an average after training for 1000 epochs. This means the current bot scores about half of the free kicks it attempts (for reference, a human would average around 75–80%). Do consider that FIFA tends to behave non-deterministically which makes learning very difficult. More results in video format can be found on my YouTube channel, with the video embedded below. Please subscribe to my channel if you wish to keep track of all my projects. We shall implement this in python using tools like Tensorflow (Keras) for Deep Learning and pytesseract for OCR. The git link is provided below with the requirements setup instructions in the repository description. I would recommend below gists of code only for the purpose of understanding this tutorial since some lines have been removed for brevity. Please use the full code from git while running it. Let’s go over the 4 main parts of the code. We do not have any readymade API available that gives us access to the code. So, let’s make our own API instead! 
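Before getting to the game interface, here is a toy sketch of the regression targets implied by the Q-function above: for each stored transition the target is r if the kick ended the episode, and r + γ * max over a' of Q(s', a') otherwise. The helper name, the batch of numbers, and the choice of four actions are illustrative, not the article's actual code.

```python
import numpy as np

GAMMA = 0.95   # discount factor: trust immediate rewards a bit more than future ones

def q_targets(rewards, next_q_values, game_over, gamma=GAMMA):
    """Regression targets for the chosen actions:
    r                               if the episode ended here,
    r + gamma * max_a' Q(s', a')    otherwise."""
    rewards = np.asarray(rewards, dtype=float)
    best_future = np.max(next_q_values, axis=1)          # best Q-value over all actions in s'
    return rewards + gamma * best_future * (~np.asarray(game_over))

# Toy batch: 3 transitions, 4 possible actions, made-up predicted Q-values for s'
rewards   = [0.0, 1.0, -1.0]
next_q    = np.array([[0.1, 0.4, 0.0, 0.2],
                      [0.0, 0.0, 0.0, 0.0],
                      [0.3, 0.1, 0.2, 0.0]])
game_over = [False, True, True]
print(q_targets(rewards, next_q, game_over))   # -> [0.38, 1.0, -1.0]
```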
We’ll use game’s screenshots to observe the state, simulated key-presses to take action in the game environment and Optical Character Recognition to read our reward in the game. We have three main methods in our FIFA class: observe(), act(), _get_reward() and an additional method is_over() to check if the free kick has been taken or not. Throughout the training process, we want to store all our experiences and observed rewards. We will use this as the training data for our Q-Learning model. So, for every action we take, we store the experience along with a game_over flag. The target label that our model will try to learn is the final reward for each action which is a real number for our regression problem. Now that we can interact with the game and store our interactions in memory, let’s start training our Q-Learning model. For this, we will attain a balance between exploration (taking a random action in the game) and exploitation (taking action predicted by our model). This way we can perform trial-and-error to obtain different experiences in the game. The parameter epsilon is used for this purpose, which is an exponentially decreasing factor that balances exploration and exploitation. In the beginning, when we know nothing, we want to do more exploration but as number of epochs increases and we learn more, we want to do more exploitation and less exploration. Hence. the decaying value of the epsilon parameter. For this tutorial I have only trained the model for 1000 epochs due to time and performance constraints, but in the future I would like to push it to at least 5000 epochs. At the heart of the Q-Learning process is a 2-layered Dense/Fully Connected Network with ReLU activation. It takes the 128-dimensional feature map as input state and outputs 4 Q-values for each possible action. The action with the maximum predicted Q-value is the desired action to be taken as per the network’s policy for the given state. This is the starting point of execution of this code, but you’ll have to make sure the game FIFA 18 is running in windowed mode on a second display and you load up the free kick practice mode under skill games: shooting menu. Make sure the game controls are in sync with the keys you have hard-coded in the FIFA.py script. Overall, I think the results are quite satisfactory even though it fails to reach human level of performance. Switching from Supervised to Reinforcement technique for learning helps ease the pain of collecting training data. Given enough time to explore, it performs very well in problems like learning how to play simple games. However, Reinforcement setting seems to fail when it encounters unfamiliar situations, which makes me believe formulating it as a regression problem cannot extrapolate information as well as formulating it as a classification problem in supervised setting. Perhaps a combination of the two could address the weaknesses of both these approaches. Maybe that’s where we’ll see the best results in building AI for games. Something for me to try in the future! I would like to acknowledge this tutorial of Deep Q-Learning and this git repository of gaming with python for providing majority of the code. With the exception of the FIFA “custom-API”, most of the code’s backbone has come from these sources. Thanks to these guys! Thank you for reading! If you liked this tutorial, please follow me on medium, github or subscribe to my YouTube channel. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. 
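As a footnote to the walkthrough above, here is a rough Keras sketch of the two pieces just described: a 2-layer Dense Q-network that maps the 128-dimensional state to 4 Q-values, and an epsilon-greedy action picker with an exponentially decaying epsilon. The layer sizes, decay schedule, and action meanings are my own assumptions for illustration, not the repository’s exact code.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

STATE_DIM = 128    # 128-dimensional feature map used as the state
NUM_ACTIONS = 4    # four possible free-kick actions (exact mapping assumed)

# 2-layer Dense/fully connected network with ReLU: state in, 4 Q-values out (a regression)
model = keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(STATE_DIM,)),
    layers.Dense(NUM_ACTIONS, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")

def choose_action(state, epoch, epsilon_start=1.0, decay=0.005):
    """Epsilon-greedy: explore a lot early on, exploit the model more as training progresses."""
    epsilon = epsilon_start * np.exp(-decay * epoch)   # exponentially decaying epsilon
    if np.random.rand() < epsilon:
        return np.random.randint(NUM_ACTIONS)          # exploration: random action
    q_values = model.predict(state[np.newaxis, :], verbose=0)[0]
    return int(np.argmax(q_values))                    # exploitation: best predicted action
```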
Data Scientist, AI Enthusiast, Blogger, YouTuber, Chelsea FC Fanatic. Also, looking to build my virtual clone before I die. Sharing concepts, ideas, and codes. " Abhishek Parbhakar,1.7K,3,https://towardsdatascience.com/why-data-scientists-love-gaussian-6e7a7b726859?source=---------8----------------,Why Data Scientists love Gaussian? – Towards Data Science,"For Deep Learning and Machine Learning engineers, out of all the probabilistic models in the world, the Gaussian distribution model simply stands out. Even if you have never worked on an AI project, there is a significant chance that you have come across the Gaussian model. The Gaussian distribution model, often identified by its iconic bell-shaped curve and also referred to as the Normal distribution, is so popular mainly for three reasons. An incredible number of processes in nature and the social sciences naturally follow the Gaussian distribution. Even when they don’t, the Gaussian gives the best model approximation for these processes. Some examples include: The central limit theorem states that when we add a large number of independent random variables, irrespective of the original distribution of these variables, their normalized sum tends towards a Gaussian distribution. For example, the distribution of the total distance covered in a random walk tends towards a Gaussian probability distribution. One implication of the theorem is that the large number of scientific and statistical methods developed specifically for Gaussian models can also be applied to a wide range of problems involving other types of distributions. The theorem can also be seen as an explanation of why many natural phenomena follow the Gaussian distribution. Unlike many other distributions, which change their nature under transformation, a Gaussian tends to remain a Gaussian. For every Gaussian model approximation, there may exist a more complex multi-parameter distribution that gives a better fit. But the Gaussian is still preferred because it makes the math a lot simpler! The Gaussian distribution is named after the great mathematician and physicist Carl Friedrich Gauss. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Finding equilibria among AI, philosophy, and economics. Sharing concepts, ideas, and codes. " Leon Zhou,184,6,https://towardsdatascience.com/the-best-words-cf6fc2333c31?source=---------9----------------,The Best Words – Towards Data Science,"Uttered in the heat of a campaign rally in South Carolina on December 30, 2015, this statement was just another in a growing collection of “Trumpisms” by our now-President, Donald J. Trump. These statements made Donald more beloved by his supporters as their relatable President, while also making him a source of ridicule for seemingly everyone else. Regardless of one’s personal views of the man, it cannot be denied that Donald has a way of speaking that is, well, so uniquely him — his smatterings of superlatives and apparent disregard for the constraints of traditional sentence structure are just a few of the things that make his speech instantly recognizable from that of his predecessors or peers. It was this unique style that interested me, and I set out to try to capture it using machine learning — to generate text that looked and sounded like something Donald Trump might say. To learn President Trump’s style, I first had to gather sufficient examples of it. I focused my efforts on two primary sources. The obvious first place to look for words by Donald Trump was his Twitter feed. 
The current president is unique in his use of the platform as a direct and unfiltered connection to the American people. Furthermore, as a figure of interest, his words have naturally been collected and organized for posterity, saving me the hassle of using the ever-changing and restrictive Twitter API. All in all, there were a little under 31,000 Tweets available for my use. In addition to his online persona, however, I also wanted to gain a glimpse into his more formal role as President. For this, I turned to the White House Briefing Statements Archive. With the help of some Python tools, I was able to quickly amass a table of about 420 transcripts of speeches and other remarks by the President. These transcripts covered a variety of events, such as meetings with foreign dignitaries, round tables with Congressional members, and awards presentations. Unlike with the Tweets, where every word was written or dictated by Trump himself, these transcripts involved other politicians and inquisitive reporters. Separating Donald’s words from those of others seemed to be a daunting task. Enter regular expressions — a boring name for a powerful and decidedly not-boring tool. Regular expressions allow you to specify a pattern to search for; this pattern can contain any number of very specific constraints, wildcards, or other restrictions to return exactly what you want, and no more. With some trial and error, I was able to generate a complex regular expression to only return words the President spoke, leaving and discarding any other words or annotations. Typically, one of the first steps in working with text is to normalize it. The extent and complexity of this normalization varies according to one’s needs, ranging from simply removing punctuation or capital letters, to reducing all variants of a word to a base root. An example of this workflow can be seen here. For me, however, the specific idiosyncrasies and patterns that would be lost in normalization were exactly what I needed to preserve. So, in hopes of making my generated text just that much more believable and authentic, I elected to bypass most of the standard normalization workflow. Before diving into a deep learning model, I was curious to explore another frequently used text generation method, the Markov chain. Markov chains have been the go-to for joke text generation for a long time — a quick search will reveal ones for Star Trek, past presidents, the Simpsons, and many others. The quick and dirty of the Markov chain is that it only cares about the current word in determining what should come next. This algorithm looks at every single time a specific word appears, and every word that comes immediately after it. The next word is selected randomly with a probability proportional to its frequency. Let me illustrate with a quick example: Donald Trump says the word “taxes.” If, in real life, 70% of the time after he says “taxes” he follows up with the word “bigly,” the Markov chain will choose the next word to be “bigly” 70% of the time. But sometimes, he doesn’t say “bigly.” Sometimes he ends the sentence, or moves on to a different word. The chain will most likely choose “bigly,” but there’s a chance it’ll go for any of the other available options, thus introducing some variety in our generated text. And repeat ad nauseam, or until the end of the sentence. This is great for quick and dirty applications, but it’s easy to see where it can go wrong. As the Markov chain only ever cares about the current word, it can easily be sidetracked. 
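Since the algorithm just described fits in a few lines, here is a minimal word-level Markov chain sketch; the toy corpus is a placeholder standing in for the real Tweet and transcript data:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of every word that ever followed it."""
    words = text.split()
    chain = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        chain[current_word].append(next_word)
    return chain

def generate(chain, seed_word, length=20):
    """Pick each next word with probability proportional to how often it followed the current word."""
    word, output = seed_word, [seed_word]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:                 # dead end: this word was never followed by anything
            break
        word = random.choice(followers)   # duplicates in the list give frequency-weighted sampling
        output.append(word)
    return " ".join(output)

chain = build_chain("we will win and we will win bigly and we will make great deals")  # toy corpus
print(generate(chain, "we"))
```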
A sentence that started off talking about the domestic economy could just as easily end talking about The Apprentice. With my limited text data set, most of my Markov chain outputs were nonsensical. But, occasionally there were some flashes of brilliance and hilarity: For passably-real text, however, I needed something more sophisticated. Recurrent Neural Networks (RNNs) have established themselves as the architecture of choice for many text or sequence-based applications. The detailed inner workings of RNNs are outside the scope of this post, but a strong (relatively) beginner-friendly introduction may be found here. The distinguishing feature of these neural units is that they have an internal “memory” of sorts. Word choice and grammar depend heavily on surrounding context, so this “memory” is extremely useful in creating a coherent thought by keeping track of tense, subjects and objects, and so on. The downside of these types of networks is that they are extraordinarily computationally expensive — on my piddly laptop, running the entirety of my text through the model once would take over an hour, and considering I’d need to do so about 200 times, this was no good. This is where cloud computing comes in. A number of established tech companies offer cloud services, the largest being Amazon, Google, and Microsoft. On a heavy-GPU computing instance, that one-hour-plus-per-cycle time became ninety seconds, an over 40x reduction in time! Can you tell if this following statement is real or not? This was text generated off of Trump’s endorsement of the Republican gubernatorial candidate, but it might pass as something that Trump tweeted in the run-up to the 2016 general election. The more complex neural networks I implemented, with hidden fully-connected layers before and after the recurrent layer, were capable of generating internally-consistent text given any seed of 40 characters or less. Less complex networks stumbled a little on consistency, but still captured the tonal feel of President Trump’s speech: While not quite producing text at a level capable of fooling you or me consistently, this attempt opened my eyes to the power of RNNs. In short order, these networks learned spelling, some aspects of grammar, and in some instances, how to use hashtags and hyperlinks — imagine what a better-designed network with more text to learn from, and time to learn might produce. If you’re interested in looking at the code behind these models, you can find the repository here. And, don’t hesitate to reach out with any questions or feedback you may have! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. I am a data scientist with a background in chemical engineering and biotech. I am also homeless and live in my car, but that's another thing entirely. Hire me! Sharing concepts, ideas, and codes. " Dr. GP Pulipaka,2,6,https://medium.com/@gp_pulipaka/3-ways-to-apply-latent-semantic-analysis-on-large-corpus-text-on-macos-terminal-jupyterlab-colab-7b4dc3e1622?source=---------3----------------,"3 Ways to Apply Latent Semantic Analysis on Large-Corpus Text on macOS Terminal, JupyterLab, and...","Latent semantic analysis works on large-scale datasets to generate representations to discover the insights through natural language processing. There are different approaches to perform the latent semantic analysis at multiple levels such as document level, phrase level, and sentence level. 
Primarily, semantic analysis can be divided into lexical semantics and the study of how individual words combine into sentences and paragraphs. Lexical semantics classifies and decomposes lexical items, and applying lexical semantic structures in different contexts helps identify the differences and similarities between words. A generic term for a more specific word in a paragraph or sentence is a hypernym, and hyponymy describes the relationship between a hypernym and the instances of its hyponyms. Homonyms are words that share spelling, form, and syntax but have different, unrelated meanings. “Book” is an example of a homonym: it can refer to something someone reads or to the act of making a reservation, with the same spelling, form, and syntax but a different definition. Polysemy is a related phenomenon in which a single word is associated with multiple related senses and distinct meanings; the word polysemy comes from Greek and means “many signs”. Python provides the NLTK library to perform tokenization, chopping larger chunks of text into phrases or meaningful strings; processing words through tokenization produces tokens. Word lemmatization converts words from their current inflected form into their base form. Latent semantic analysis Applying latent semantic analysis to large datasets of text and documents captures contextual meaning through mathematical and statistical computation over a large corpus of text. In many cases, latent semantic analysis has outscored humans on subject matter tests. The accuracy of latent semantic analysis is high because it can read machine-readable documents and texts at web scale. Latent semantic analysis is a technique that applies singular value decomposition and principal component analysis (PCA). A document collection can be represented as a Z x Y matrix A, where the rows of the matrix represent the documents in the collection; for a typical large-corpus text collection, the matrix A can have hundreds of thousands of rows and columns. Applying singular value decomposition gives a set of operations dubbed matrix decomposition. Natural language processing in Python with the NLTK library applies a low-rank approximation to the term-document matrix. The low-rank approximation then aids in indexing and retrieving documents, a process known as latent semantic indexing, by clustering the words in the documents. Brief overview of linear algebra The Z x Y matrix A contains real-valued, non-negative entries for the term-document matrix. The rank of the matrix is the number of linearly independent columns or rows in the matrix, so the rank of A ≤ min{Z, Y}. A square c x c matrix whose off-diagonal entries are all zero is a diagonal matrix, and if all c diagonal entries are one, it is the identity matrix of dimension c, denoted Ic. For a square Z x Z matrix A, a vector k that is not all zeroes is an eigenvector with eigenvalue λ if A k = λ k. Matrix decomposition factors such a square matrix into a product of matrices built from its eigenvectors. This allows us to reduce the dimensionality of the words from many dimensions down to two, so they can be viewed on a plot. Dimensionality reduction techniques based on principal component analysis and singular value decomposition hold critical relevance in natural language processing. 
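To ground the pipeline described above, here is a minimal latent semantic analysis sketch on a toy corpus: NLTK handles tokenization and lemmatization, and scikit-learn’s TruncatedSVD supplies the low-rank approximation of the term-document matrix. The corpus, the TF-IDF weighting, and the choice of scikit-learn are my own assumptions for illustration, not the article’s exact setup.

```python
import nltk
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

nltk.download("punkt")    # tokenizer models
nltk.download("wordnet")  # lemmatizer dictionary

documents = [
    "I booked a table at the restaurant",
    "She is reading a book about linear algebra",
    "Singular value decomposition factorizes the term-document matrix",
]

lemmatizer = WordNetLemmatizer()

def tokenize(text):
    # Chop the text into tokens, then reduce each token to its base form
    return [lemmatizer.lemmatize(token.lower()) for token in nltk.word_tokenize(text)]

# Rows of the resulting matrix are documents, columns are (lemmatized) terms
vectorizer = TfidfVectorizer(tokenizer=tokenize)
term_document_matrix = vectorizer.fit_transform(documents)

# Low-rank approximation: project each document into a 2-dimensional latent semantic space
svd = TruncatedSVD(n_components=2)
document_coordinates = svd.fit_transform(term_document_matrix)
print(document_coordinates)  # each document as a point that can be viewed on a plot
```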
The Zipfian nature of the frequency of the words in a document makes it difficult to determine the similarity of the words in a static stage. Hence, eigen decomposition is a by-product of singular value decomposition as the input of the document is highly asymmetrical. The latent semantic analysis is a particular technique in semantic space to parse through the document and identify the words with polysemy with NLKT library. The resources such as punkt and wordnet have to be downloaded from NLTK. Deep Learning at scale with Google Colab notebooks Training machine learning or deep learning models on CPUs could take hours and could be pretty expensive in terms of the programming language efficiency with time and energy of the computer resources. Google built Colab Notebooks environment for research and development purposes. It runs entirely on the cloud without requiring any additional hardware or software setup for each machine. It’s entirely equivalent of a Jupyter notebook that aids the data scientists to share the colab notebooks by storing on Google drive just like any other Google Sheets or documents in a collaborative environment. There are no additional costs associated with enabling GPU at runtime for acceleration on the runtime. There are some challenges of uploading the data into Colab, unlike Jupyter notebook that can access the data directly from the local directory of the machine. In Colab, there are multiple options to upload the files from the local file system or a drive can be mounted to load the data through drive FUSE wrapper. Once this step is complete, it shows the following log without errors: The next step would be generating the authentication tokens to authenticate the Google credentials for the drive and Colab If it shows successful retrieval of access token, then Colab is all set. At this stage, the drive is not mounted yet, it will show false when accessing the contents of the text file. Once the drive is mounted, Colab has access to the datasets from Google drive. Once the files are accessible, the Python can be executed similar to executing in Jupyter environment. Colab notebook also displays the results similar to what we see on Jupyter notebook. PyCharm IDE The program can be run compiled on PyCharm IDE environment and run on PyCharm or can be executed from OSX Terminal. Results from OSX Terminal Jupyter Notebook on standalone machine Jupyter Notebook gives a similar output running the latent semantic analysis on the local machine: References Gorrell, G. (2006). Generalized Hebbian Algorithm for Incremental Singular Value Decomposition in Natural Language Processing. Retrieved from https://www.aclweb.org/anthology/E06-1013 Hardeniya, N. (2016). Natural Language Processing: Python and NLTK . Birmingham, England: Packt Publishing. Landauer, T. K., Foltz, P. W., Laham, D., & University of Colorado at Boulder (1998). An Introduction to Latent Semantic Analysis. Retrieved from http://lsa.colorado.edu/papers/dp1.LSAintro.pdf Stackoverflow (2018). Mounting Google Drive on Google Colab. Retrieved from https://stackoverflow.com/questions/50168315/mounting-google-drive-on-google-colab Stanford University (2009). Matrix decompositions and latent semantic indexing. Retrieved from https://nlp.stanford.edu/IR-book/html/htmledition/matrix-decompositions-and-latent-semantic-indexing-1.html From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. 
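As an aside, newer Colab runtimes also ship a built-in helper that replaces the FUSE-wrapper and token dance described above. A minimal sketch (the file path is a placeholder):

```python
# Run inside a Colab notebook cell: mounts your Google Drive under /content/drive
from google.colab import drive
drive.mount('/content/drive')

# Once mounted, files on Drive can be read like any local file
with open('/content/drive/My Drive/corpus.txt') as f:   # placeholder path
    text = f.read()
```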
Ganapathi Pulipaka | Founder and CEO @deepsingularity | Bestselling Author | Big data | IoT | Startups | SAP | MachineLearning | DeepLearning | DataScience " Erick Muzart Fonseca dos Santos,16,2,https://medium.com/deeplearningbrasilia/o-grupo-de-estudo-em-deep-learning-de-bras%C3%ADlia-est%C3%A1-planejando-o-pr%C3%B3ximo-ciclo-de-encontros-do-4861851ec0ff?source=---------5----------------,The Deep Learning Study Group of Brasília is planning the group’s next cycle of meetings...,"The Deep Learning Study Group of Brasília is planning the group’s next cycle of meetings, which should start in mid-June 2018. There is still time to state your preferences for taking part in the group! To do so, please fill out the following questionnaire so that we can aggregate our community’s preferences and select the options that best suit everyone: https://goo.gl/forms/H4K77sD1DxW6diIt1 We would appreciate it if you could share the group with your network of contacts interested in machine learning and Deep Learning, so that we can start the next cycle with as many interested people as possible from day one! Below are some of the group’s initial results. Regarding the initial results of the questionnaire, here is a summary of the first 50 responses. The topics of greatest interest were: 1st Deep Learning: 87.5% 2nd Machine Learning: 78.6% 3rd Applications of Deep Learning in projects: 69.6% 4th Natural Language Processing: 51.8% Course preferences: 1st Machine Learning, by fast.ai: 67.9% 2nd Deep Learning, part 2, by fast.ai: 46.4% 3rd Deep Learning, part 1, by fast.ai: 44.6% Best regards, The organizers of the Deep Learning Study Group of Brasília From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Publications by members of the Deep Learning Study Group of Brasília " Chris Kalahiki,30,15,https://towardsdatascience.com/beethoven-picasso-and-artificial-intelligence-caf644fc72f9?source=---------6----------------,"Beethoven, Picasso, and Artificial Intelligence – Towards Data Science","When people think of the greatest artists who’ve ever lived, they probably think of names like Beethoven or Picasso. No one would ever think of a computer as a great artist. But what if, one day, that were the case? Could computers learn to create incredible paintings like the Mona Lisa? Perhaps one day a robot will be capable of composing the next great symphony. Some experts believe this to be the case. In fact, some of the greatest minds in artificial intelligence are diligently working to develop programs that can create drawings and music independently of humans. The use of artificial intelligence in the field of art has even been picked up by tech giants the likes of Google. The projects included in this paper could have drastic implications for our everyday lives. They may also change the way we view art, and they showcase the incredible advances that have been made in the field of artificial intelligence. Image recognition is not as far as the research goes, nor is the ability to generate music in the style of the great artists of our past. Although these topics will be touched upon, we will focus on several more advanced achievements, such as text descriptions being turned into images and the generation of art and music that is totally original. 
Each of these projects bring something new and innovative to the table and show us exactly how the art space is a great place to further explore applications of artificial intelligence. We will be discussing problems that have been faced in these projects and how they have been overcome. The future of AI looks bright. Let’s look at what the future may hold. In doing this, we may be able to better understand the impact that artificial intelligence can have in an area that is driven by human creativity. Machines must be educated. They learn from instruction. How do we lead machines away from emulating what already exists, and have them create new techniques? “No creative artist will create art today that tries to emulate the Baroque or Impressionist style, or any other traditional style, unless trying to do so ironically” [4]. This problem isn’t limited to paintings either. Music can be very structured in some respects, but is also a form of art that requires vast creativity. So how do we go about solving such a problem? The first concept we will discuss is something called GAN (Generative Adversarial Networks). GANs, although quite complex, are becoming an outdated model. If artificial intelligence in the art space is to advance, researchers and developers will have to work to find better methods to allow machines to generate art and music. Two of these such methods are presented in the form of Sketch-RNN and CAN (Creative Adversarial Networks). Each of these methods have their advantages over GANs. First, let’s explore what exactly a GAN is. Below is a small excerpt explaining how a GAN works: Generative Adversarial Network (GAN) has two sub networks, a generator and a discriminator. The discriminator has access to a set of images (training images). The discriminator tries to discriminate between “real” images (from the training set) and “fake” images generated by the generator. The generator tries to generate images similar to the training set without seeing the images [4]. The more images the generator creates, the closer they get to the images from the training set. The idea is that after a certain number of images are generated, the GAN will create images that are very similar to what we consider art. This is a very impressive accomplishment to say the least. But what if we take it a step further? Many issues associated with the GAN are simply limitations on what it can do. The GAN is powerful, but can’t do quite as much as we would like. For example, the generator in the model described above will continue to create images closer and closer to the images given to the discriminator that it isn’t producing original art. Could a GAN be trained to draw alongside a user? It’s not likely. The model wouldn’t be able to turn a text-based description of an image into an actual picture either. As impressive as the GAN may be, we would all agree that it can be improved. Each of the shortcoming mentioned have actually been addressed and, to an extent, solved. Let’s look at how this is done. Sketch-RNN is a recurrent neural network model developed by Google. The goal of Sketch-RNN is to help machines learn to create art in a manner similar to the way a human may learn. It has been used in a Google AI Experiment to be able to sketch alongside a user. While doing so, it can provide the users with suggestions and even complete the user’s sketch when they decide to take a break. 
Sketch-RNN is exposed to a massive number of sketches provided through a dataset of vector drawings obtained through another Google application that we will discuss later. Each of these sketches are tagged to let the program know what object is in the sketch. The data set represents the sketch as a set of pen strokes. This allows Sketch-RNN to then learn what aspects each sketch of a certain object has in common. If a user begins to draw a cat, Sketch-RNN could then show the user other common features that could be on the cat. This model could have many new creative applications. “The decoder-only model trained on various classes can assist the creative process of an artist by suggesting many possible ways of finishing a sketch” [3]. The Sketch-RNN team even believes that, given a more complex dataset, the applications could be used in an educational sense to teach users how to draw. These applications of Sketch-RNN couldn’t be nearly as easily achieved with GAN alone. Another method used to improve upon GAN is the Creative Adversarial Network. In their paper regarding adversarial networks generating art, several researchers discuss a new way of generating art through CANs. The idea is that the CAN has two adversary networks. One, the generator, has no access to any art. It has no basis to go off of when generating images. The other network, the discriminator, is trained to classify the images generated as being art or not. When an image is generated, the discriminator gives the generator two pieces of information. The first is whether it believes the generated image comes from the same distributor as the pieces of art it was trained on, and the other being how the discriminator can fit the generated image into one of the categories of art it was taught. This technique is fantastic in that it helps the generator create images that are both emulative of past works of art in the sense that it learns what was good about those images and creative in a sense that it is taught to produce new and different artistic concepts. This is a big difference from GAN creating art that emulated the training images. Eventually, the CAN will learn how to produce only new and innovative artwork. One final future for the vanilla GAN is StackGAN. StackGAN is a text to photo-realistic image synthesizer that uses stacked generative adversarial networks. Given a text description, the StackGAN is able to create images that are very much related to the given text. This wouldn’t be doable with a normal GAN model as it would be much too difficult to generate photo-realistic images from a text description even with a state-of-the-art training database. This is where StackGAN comes in. It breaks the problem down into 2 parts. “Low-resolution images are generated by our Stage-I GAN. On the top of our Stage-I GAN, we stack Stage-II GAN to generate realistic high-resolution images conditioned on Stage-I results and text descriptions” [7]. It is through the conditioning on Stage-I results and text descriptions that Stage-II GAN can find details that Stage-I GAN may have missed and create higher resolution images. By breaking the problem down into smaller subproblems, the StackGAN can tackle problems that aren’t possible with a regular GAN. On the next page is an image showing the difference between a regular GAN and each step of the StackGAN. It is through advancements like these that have been made in recent years that we can continue to push the boundaries of what AI can do. 
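For readers who want to see the generator/discriminator loop described above in code, here is a minimal Keras sketch. It is my own illustration rather than code from any of the cited projects: MNIST digits stand in for an art dataset, and the tiny fully connected networks are chosen for brevity, not quality.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100  # size of the random noise vector fed to the generator

# Generator: noise vector -> 28x28 "image"
generator = keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(28 * 28, activation="tanh"),
    layers.Reshape((28, 28)),
])

# Discriminator: image -> probability that it came from the real training set
discriminator = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model: freeze the discriminator, then train the generator to fool it
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 127.5 - 1.0  # scale pixels to [-1, 1] to match the tanh output

batch_size = 64
for step in range(1000):
    # 1. Show the discriminator a mix of real and generated images
    real = x_train[np.random.randint(0, len(x_train), batch_size)]
    noise = np.random.normal(size=(batch_size, latent_dim))
    fake = generator.predict(noise, verbose=0)
    discriminator.train_on_batch(real, np.ones((batch_size, 1)))
    discriminator.train_on_batch(fake, np.zeros((batch_size, 1)))

    # 2. Train the generator (through the frozen discriminator) to make its fakes look real
    noise = np.random.normal(size=(batch_size, latent_dim))
    gan.train_on_batch(noise, np.ones((batch_size, 1)))
```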
We have just seen three ways to improve upon a concept that was already quite complex and innovative. Each of these advancements has a practical, everyday use. As we continue to improve artificial intelligence techniques, we will be able to do more and more, not just in art and music, but across a wide variety of tasks that improve our lives. Images aren’t the only type of art that artificial intelligence can impact, though. Its effect on music is being explored as we speak. We will now explore some specific cases and their impact on both music and artificial intelligence. In doing this, we should be able to see how art can do as much for AI as AI does for it. Both fields benefit heavily from the types of projects that we are exploring here. Could a machine ever be able to create a piece of music the likes of Johann Sebastian Bach’s? In a project known as DeepBach, several researchers looked to create pieces similar to Bach’s chorales. The beauty of DeepBach is that it “is able to generate coherent musical phrases and provides, for instance, varied reharmonizations of melodies without plagiarism” [6]. What this means is that DeepBach can create music with correct structure and still be original. It is merely in the style of Bach; it isn’t just a mashup of his works. DeepBach is creating new content. The developers of DeepBach went on to test whether their product could actually fool listeners. As part of the experiment, over 1,250 people were asked to vote on whether pieces presented to them were in fact composed by Bach. The subjects had varying degrees of musical expertise. The results showed that as the complexity of the DeepBach model increased, the subjects had more and more trouble distinguishing the chorales of Bach from those of DeepBach. This experiment shows us that, through the use of artificial intelligence and machine learning, it is quite possible to recreate original works in the likeness of the greats. But is that the limit of what artificial intelligence can do in the field of art and music? DeepBach has achieved something that would have been unheard of in the not so distant past, but it certainly isn’t the fullest extent of what AI can do to benefit the field of music. What if we want to create new and innovative music? Maybe AI can change the way music is created altogether. There must be projects that do more to push the envelope. As a matter of fact, that is exactly what the team behind Magenta looks to do. Magenta is a project being conducted by the Google Brain team and led by Douglas Eck. Eck has been working for Google since 2010, but that isn’t where his interest in music began. Eck helped found Brain Music and Sound, an international laboratory for brain, music, and sound research. He was also involved with the McGill Centre for Interdisciplinary Research in Music Media and Technology, and was an Associate Professor in Computer Science at the University of Montreal. Magenta’s goal is to be “a research project to advance the state of the art in machine intelligence for music and art generation” [2]. It is an open source project that uses TensorFlow. Magenta aims to learn how to generate art and music in a way that is truly generative. It must go past just emulating existing music. This is distinctly different from projects along the lines of DeepBach, which set out to emulate existing music in a way that wasn’t plagiarizing existing pieces. Eck and company realize that art is about capturing elements of surprise and drawing attention to certain aspects. 
“This leads to perhaps the biggest challenge: combining generation, attention and surprise to tell a compelling story. So much of machine-generated music and art is good in small chunks, but lacks any sort of long-term narrative arc” [2]. Such a perspective gives computer-generated music more substance, and helps it to become less of a gimmick. One of the projects the magenta team has developed is called NSynth. The idea behind NSynth is to be able to create new sounds that have never been heard before, but beyond that, to reimagine how music synthesis can be done. Unlike ordinary synthesizers that focus on “a specific arrangement of oscillators or an algorithm for sample playback, such as FM Synthesis or Granular Synthesis” [5], NSynth generates sounds on an individual level. To do this, it uses deep neural networks. Google has even launched an experiment that allows users to really see what NSynth can do by allowing them to fuse together the sounds of existing instruments to create new hybrid sounds that have never been heard before. As an example, users can take two instruments such as a banjo and a tuba, and take parts of each of their sounds to create a totally new instrument. The experiment also allowed users to decide what percentage of each instrument would be used. Projects like Magenta go above and beyond in showing us the full extent of what artificial intelligence can do in the way of generating music. They explore new applications of artificial intelligence that can generate new ideas independent of humans. It is the closest we have come to machine creativity. Although machines aren’t yet able to truly think and express creativity, they may soon be able to generate new and unique art and music for us to enjoy. Don’t worry though. Eck doesn’t intend to replace artists with AI. Instead he looks to provide artists with tools to create music in an entirely new way. As we look ahead to a few more of the ways that AI has been used to accomplish new and innovative ideas in the art space, we look at projects like Quick, Draw! and Deep Dream. These projects showcase amazing progress in the space while pointing out some issues that researchers in AI will have to work out in the years to come. Quick, Draw! is an application from the Google Creative Lab, trained to recognize quick drawings much like one would see in a game of Pictionary. The program can recognize simple objects such as cats and apples based on common aspects of the many pictures it was given before. Although the program will not get every picture right each time it is used, it continues to learn from the similarities in the picture drawn and the hundreds of pictures before it. The science behind Quick, Draw! “uses some of the same technology that helps Google Translate recognize your handwriting. To understand handwritings or drawings, you don’t just look at what the person drew. You look at how they actually drew it” [1]. It is presented in the form of a game, with the user drawing a picture of an object chosen by the application. The program then has 20 seconds to recognize the image. In each session, the user is given a total of 6 objects. The images are then stored to the database used to train application. This happens to be the same database we saw earlier in the Sketch-RNN application. This image recognition is a very practical use of artificial intelligence in the realm of art and music. It can do a lot to benefit us in our everyday lives. 
But this only begins to scratch the surface of what artificial intelligence can do in this field. Although this is very impressive, we might point out that the application doesn’t truly understand what is being drawn. It is just picking up on patterns. In fact, this distinction is part of the gap between simple AI techniques and true artificial general intelligence. Machines that truly understand what the objects in images are don’t appear to be coming in the near future. Another interesting project in the art space is Google’s Deep Dream project, which uses AI to create new and unique images. Unfortunately, the Deep Dream Generator Team wouldn’t go into too much detail about the technology itself (mostly fearing it would be too long for an email) [8]. They did, however, explain that convolutional neural networks train on the famous ImageNet dataset. Those neural networks are then used to create art-like images. Essentially, Deep Dream takes the styling of one image and uses it to modify another image. The results can be anything from a silly fusion to an artistic masterpiece. This occurs when the program identifies the unique stylings of an image provided by the user and imposes those stylings onto another image that the user provides. What can easily be observed through the use of Deep Dream is that computers aren’t yet capable of truly understanding what they are doing with respect to art. They can be fed complex algorithms to generate images, but don’t fundamentally understand what it is they are generating. For example, a computer may see a knife cutting through an onion and assume the knife and onion are one object. The lack of an ability to truly understand the contents of an image is one dilemma that researchers have yet to solve. Perhaps as we continue to make advances in artificial intelligence we will be able to have machines that do truly understand what objects are in an image and even the emotions evoked by their music. The only way for this to be achieved is by reaching true artificial general intelligence (AGI). IN the meantime, the Deep Dream team believes that generative models will be able to create some really interesting pieces of art and digital content. For this section, we will consider where artificial intelligence could be heading in the art space. We will take a look at how AI has impacted the space and in what ways it can continue to do so. We will also look at ways art and music could continue to impact AI in the years to come. Although I don’t feel that we have completely mastered the ability to emulate the great artists of our past, it is just a matter of time before that problem is solved. The real task to be solved is that of creating new innovations in art and music. We need to work towards creation without emulation. It is quite clear that we are headed in that direction through projects like CAN and Magenta. Artificial general intelligence (AGI) is not the only way to complete this task. As a matter of fact, even those who dispute the possibility of AGI would have a hard time disputing the creation of unique works of art by a machine. One path that may be taken to further improve art and music through AI is to create more advanced datasets to use in training the complex networks like Sketch-RNN and Deep Dream. AI needs to be trained to be able to perform as expected. That training has a huge impact on the results we get. Shouldn’t we want to train our machines in the most beneficial way possible. 
Even developing software like Sketch-RNN to use the ImageNet dataset used in Deep Dream could be huge in educating artists on techniques for drawing complex, realistic images. Complex datasets could very well be our answer to more efficient training. Until our machines can think and learn like we do, we will need to be very careful what data is used to train them. One of the ways that art and music can help to impact AI is by providing another method of Turing Testing machines. For those who dream of creating AGI, what better way to test the machine’s ability that to create something that tests the full extent of human-like creativity? Art is the truest representation of human creativity. That is, in fact, its essence. Although art is probably not the ultimate end game for artificial intelligence, it could be one of the best ways to test the limits of what a machine can do. The day that computers can create original musical composition and create images based on descriptions given by a user could very well be the day that we stop being able to distinguish man from machine. There are many benefits to using artificial intelligence in the music space. Some of them have already been seen in the projects we have discussed so far. We have seen how artificial intelligence could be used for image recognition as well as their ability to turn our words into fantastic images. We have also seen how AI can be used to synthesize new sounds that have never been heard. We know that artificial intelligence can be used to create art alongside us as well as independently from us. It can be taught to mimic music from the past and can create novel ideas. All of these accomplishments are a part of what will drive AI research into the future. Who knows? Perhaps one day we will achieve artificial general intelligence and machines will be able to understand what is really in the images it is given. Maybe our computers will be able to understand how their art makes us feel. There is a clear path showing us where to go from here. I firmly believe that it is up to us to continue this research and test the limits of what artificial intelligence can do, both in the field of art and in our everyday lives. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Computer Science student at Louisiana Tech University with an interest in anything AI. Sharing concepts, ideas, and codes. " Adam Geitgey,14.2K,15,https://medium.com/@ageitgey/machine-learning-is-fun-part-3-deep-learning-and-convolutional-neural-networks-f40359318721?source=tag_archive---------0----------------,Machine Learning is Fun! Part 3: Deep Learning and Convolutional Neural Networks,"Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 普通话, Русский, 한국어, Português, Tiếng Việt or Italiano. Are you tired of reading endless news stories about deep learning and not really knowing what that means? Let’s change that! This time, we are going to learn how to write programs that recognize objects in images using deep learning. In other words, we’re going to explain the black magic that allows Google Photos to search your photos based on what is in the picture: Just like Part 1 and Part 2, this guide is for anyone who is curious about machine learning but has no idea where to start. The goal is be accessible to anyone — which means that there’s a lot of generalizations and we skip lots of details. But who cares? 
If this gets anyone more interested in ML, then mission accomplished! (If you haven’t already read part 1 and part 2, read them now!) You might have seen this famous xkcd comic before. The goof is based on the idea that any 3-year-old child can recognize a photo of a bird, but figuring out how to make a computer recognize objects has puzzled the very best computer scientists for over 50 years. In the last few years, we’ve finally found a good approach to object recognition using deep convolutional neural networks. That sounds like a bunch of made-up words from a William Gibson Sci-Fi novel, but the ideas are totally understandable if you break them down one by one. So let’s do it — let’s write a program that can recognize birds! Before we learn how to recognize pictures of birds, let’s learn how to recognize something much simpler — the handwritten number “8”. In Part 2, we learned about how neural networks can solve complex problems by chaining together lots of simple neurons. We created a small neural network to estimate the price of a house based on how many bedrooms it had, how big it was, and which neighborhood it was in: We also know that the idea of machine learning is that the same generic algorithms can be reused with different data to solve different problems. So let’s modify this same neural network to recognize handwritten text. But to make the job really simple, we’ll only try to recognize one letter — the numeral “8”. Machine learning only works when you have data — preferably a lot of data. So we need lots and lots of handwritten “8”s to get started. Luckily, researchers created the MNIST data set of handwritten numbers for this very purpose. MNIST provides 60,000 images of handwritten digits, each as an 18x18 image. Here are some “8”s from the data set: The neural network we made in Part 2 only took in three numbers as the input (“3” bedrooms, “2000” sq. feet, etc.). But now we want to process images with our neural network. How in the world do we feed images into a neural network instead of just numbers? The answer is incredibly simple. A neural network takes numbers as input. To a computer, an image is really just a grid of numbers that represent how dark each pixel is: To feed an image into our neural network, we simply treat the 18x18 pixel image as an array of 324 numbers: To handle 324 inputs, we’ll just enlarge our neural network to have 324 input nodes: Notice that our neural network also has two outputs now (instead of just one). The first output will predict the likelihood that the image is an “8” and the second output will predict the likelihood it isn’t an “8”. By having a separate output for each type of object we want to recognize, we can use a neural network to classify objects into groups. Our neural network is a lot bigger than last time (324 inputs instead of 3!). But any modern computer can handle a neural network with a few hundred nodes without blinking. This would even work fine on your cell phone. All that’s left is to train the neural network with images of “8”s and not-“8”s so it learns to tell them apart. When we feed in an “8”, we’ll tell it the probability the image is an “8” is 100% and the probability it’s not an “8” is 0%. Vice versa for the counter-example images. Here’s some of our training data: We can train this kind of neural network in a few minutes on a modern laptop. When it’s done, we’ll have a neural network that can recognize pictures of “8”s with a pretty high accuracy. 
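For anyone who wants to try the network just described, here is a rough Keras sketch of an “8” / not-“8” classifier. It is a hedged illustration rather than the author’s original code, and note that the standard MNIST download serves 28x28 images, so the flattened input here is 784 numbers rather than the 324 described above.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# MNIST images arrive as grids of pixel darkness values
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(len(x_train), -1) / 255.0   # flatten each image into one long array
x_test = x_test.reshape(len(x_test), -1) / 255.0

# Two outputs: class 1 means "this is an 8", class 0 means "this is not an 8"
y_train = keras.utils.to_categorical((y_train == 8).astype(int), 2)
y_test = keras.utils.to_categorical((y_test == 8).astype(int), 2)

model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(x_train.shape[1],)),
    layers.Dense(2, activation="softmax"),   # likelihood of "not an 8" vs. "an 8"
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128)
print(model.evaluate(x_test, y_test))        # loss and accuracy on images it has never seen
```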
Welcome to the world of (late 1980’s-era) image recognition! It’s really neat that simply feeding pixels into a neural network actually worked to build image recognition! Machine learning is magic! ...right? Well, of course it’s not that simple. First, the good news is that our “8” recognizer really does work well on simple images where the letter is right in the middle of the image: But now the really bad news: Our “8” recognizer totally fails to work when the letter isn’t perfectly centered in the image. Just the slightest position change ruins everything: This is because our network only learned the pattern of a perfectly-centered “8”. It has absolutely no idea what an off-center “8” is. It knows exactly one pattern and one pattern only. That’s not very useful in the real world. Real world problems are never that clean and simple. So we need to figure out how to make our neural network work in cases where the “8” isn’t perfectly centered. We already created a really good program for finding an “8” centered in an image. What if we just scan all around the image for possible “8”s in smaller sections, one section at a time, until we find one? This approach called a sliding window. It’s the brute force solution. It works well in some limited cases, but it’s really inefficient. You have to check the same image over and over looking for objects of different sizes. We can do better than this! When we trained our network, we only showed it “8”s that were perfectly centered. What if we train it with more data, including “8”s in all different positions and sizes all around the image? We don’t even need to collect new training data. We can just write a script to generate new images with the “8”s in all kinds of different positions in the image: Using this technique, we can easily create an endless supply of training data. More data makes the problem harder for our neural network to solve, but we can compensate for that by making our network bigger and thus able to learn more complicated patterns. To make the network bigger, we just stack up layer upon layer of nodes: We call this a “deep neural network” because it has more layers than a traditional neural network. This idea has been around since the late 1960s. But until recently, training this large of a neural network was just too slow to be useful. But once we figured out how to use 3d graphics cards (which were designed to do matrix multiplication really fast) instead of normal computer processors, working with large neural networks suddenly became practical. In fact, the exact same NVIDIA GeForce GTX 1080 video card that you use to play Overwatch can be used to train neural networks incredibly quickly. But even though we can make our neural network really big and train it quickly with a 3d graphics card, that still isn’t going to get us all the way to a solution. We need to be smarter about how we process images into our neural network. Think about it. It doesn’t make sense to train a network to recognize an “8” at the top of a picture separately from training it to recognize an “8” at the bottom of a picture as if those were two totally different objects. There should be some way to make the neural network smart enough to know that an “8” anywhere in the picture is the same thing without all that extra training. Luckily... there is! As a human, you intuitively know that pictures have a hierarchy or conceptual structure. 
Consider this picture: As a human, you instantly recognize the hierarchy in this picture: Most importantly, we recognize the idea of a child no matter what surface the child is on. We don’t have to re-learn the idea of child for every possible surface it could appear on. But right now, our neural network can’t do this. It thinks that an “8” in a different part of the image is an entirely different thing. It doesn’t understand that moving an object around in the picture doesn’t make it something different. This means it has to re-learn the identify of each object in every possible position. That sucks. We need to give our neural network understanding of translation invariance — an “8” is an “8” no matter where in the picture it shows up. We’ll do this using a process called Convolution. The idea of convolution is inspired partly by computer science and partly by biology (i.e. mad scientists literally poking cat brains with weird probes to figure out how cats process images). Instead of feeding entire images into our neural network as one grid of numbers, we’re going to do something a lot smarter that takes advantage of the idea that an object is the same no matter where it appears in a picture. Here’s how it’s going to work, step by step — Similar to our sliding window search above, let’s pass a sliding window over the entire original image and save each result as a separate, tiny picture tile: By doing this, we turned our original image into 77 equally-sized tiny image tiles. Earlier, we fed a single image into a neural network to see if it was an “8”. We’ll do the exact same thing here, but we’ll do it for each individual image tile: However, there’s one big twist: We’ll keep the same neural network weights for every single tile in the same original image. In other words, we are treating every image tile equally. If something interesting appears in any given tile, we’ll mark that tile as interesting. We don’t want to lose track of the arrangement of the original tiles. So we save the result from processing each tile into a grid in the same arrangement as the original image. It looks like this: In other words, we’ve started with a large image and we ended with a slightly smaller array that records which sections of our original image were the most interesting. The result of Step 3 was an array that maps out which parts of the original image are the most interesting. But that array is still pretty big: To reduce the size of the array, we downsample it using an algorithm called max pooling. It sounds fancy, but it isn’t at all! We’ll just look at each 2x2 square of the array and keep the biggest number: The idea here is that if we found something interesting in any of the four input tiles that makes up each 2x2 grid square, we’ll just keep the most interesting bit. This reduces the size of our array while keeping the most important bits. So far, we’ve reduced a giant image down into a fairly small array. Guess what? That array is just a bunch of numbers, so we can use that small array as input into another neural network. This final neural network will decide if the image is or isn’t a match. To differentiate it from the convolution step, we call it a “fully connected” network. So from start to finish, our whole five-step pipeline looks like this: Our image processing pipeline is a series of steps: convolution, max-pooling, and finally a fully-connected network. When solving problems in the real world, these steps can be combined and stacked as many times as you want! 
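Putting those five steps together, a minimal convolution / max-pooling / fully-connected pipeline looks roughly like this in Keras. The layer counts and sizes here are illustrative assumptions, not the exact network built later in the article:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    # Steps 1-3: slide a small window over the image, reusing the same weights for every tile
    layers.Conv2D(32, kernel_size=3, activation="relu", input_shape=(32, 32, 3)),
    # Step 4: max pooling keeps only the most "interesting" value in each 2x2 square
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    # Step 5: flatten what is left and let a fully-connected network make the final call
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),   # e.g. "bird" vs. "not a bird"
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```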
You can have two, three or even ten convolution layers. You can throw in max pooling wherever you want to reduce the size of your data. The basic idea is to start with a large image and continually boil it down, step-by-step, until you finally have a single result. The more convolution steps you have, the more complicated features your network will be able to learn to recognize. For example, the first convolution step might learn to recognize sharp edges, the second convolution step might recognize beaks using it’s knowledge of sharp edges, the third step might recognize entire birds using it’s knowledge of beaks, etc. Here’s what a more realistic deep convolutional network (like you would find in a research paper) looks like: In this case, they start a 224 x 224 pixel image, apply convolution and max pooling twice, apply convolution 3 more times, apply max pooling and then have two fully-connected layers. The end result is that the image is classified into one of 1000 categories! So how do you know which steps you need to combine to make your image classifier work? Honestly, you have to answer this by doing a lot of experimentation and testing. You might have to train 100 networks before you find the optimal structure and parameters for the problem you are solving. Machine learning involves a lot of trial and error! Now finally we know enough to write a program that can decide if a picture is a bird or not. As always, we need some data to get started. The free CIFAR10 data set contains 6,000 pictures of birds and 52,000 pictures of things that are not birds. But to get even more data we’ll also add in the Caltech-UCSD Birds-200–2011 data set that has another 12,000 bird pics. Here’s a few of the birds from our combined data set: And here’s some of the 52,000 non-bird images: This data set will work fine for our purposes, but 72,000 low-res images is still pretty small for real-world applications. If you want Google-level performance, you need millions of large images. In machine learning, having more data is almost always more important that having better algorithms. Now you know why Google is so happy to offer you unlimited photo storage. They want your sweet, sweet data! To build our classifier, we’ll use TFLearn. TFlearn is a wrapper around Google’s TensorFlow deep learning library that exposes a simplified API. It makes building convolutional neural networks as easy as writing a few lines of code to define the layers of our network. Here’s the code to define and train the network: If you are training with a good video card with enough RAM (like an Nvidia GeForce GTX 980 Ti or better), this will be done in less than an hour. If you are training with a normal cpu, it might take a lot longer. As it trains, the accuracy will increase. After the first pass, I got 75.4% accuracy. After just 10 passes, it was already up to 91.7%. After 50 or so passes, it capped out around 95.5% accuracy and additional training didn’t help, so I stopped it there. Congrats! Our program can now recognize birds in images! Now that we have a trained neural network, we can use it! Here’s a simple script that takes in a single image file and predicts if it is a bird or not. But to really see how effective our network is, we need to test it with lots of images. The data set I created held back 15,000 images for validation. When I ran those 15,000 images through the network, it predicted the correct answer 95% of the time. That seems pretty good, right? Well... it depends! Our network claims to be 95% accurate. 
But the devil is in the details. That could mean all sorts of different things. For example, what if 5% of our training images were birds and the other 95% were not birds? A program that guessed “not a bird” every single time would be 95% accurate! But it would also be 100% useless. We need to look more closely at the numbers than just the overall accuracy. To judge how good a classification system really is, we need to look closely at how it failed, not just the percentage of the time that it failed. Instead of thinking about our predictions as “right” and “wrong”, let’s break them down into four separate categories — Using our validation set of 15,000 images, here’s how many times our predictions fell into each category: Why do we break our results down like this? Because not all mistakes are created equal. Imagine if we were writing a program to detect cancer from an MRI image. If we were detecting cancer, we’d rather have false positives than false negatives. False negatives would be the worse possible case — that’s when the program told someone they definitely didn’t have cancer but they actually did. Instead of just looking at overall accuracy, we calculate Precision and Recall metrics. Precision and Recall metrics give us a clearer picture of how well we did: This tells us that 97% of the time we guessed “Bird”, we were right! But it also tells us that we only found 90% of the actual birds in the data set. In other words, we might not find every bird but we are pretty sure about it when we do find one! Now that you know the basics of deep convolutional networks, you can try out some of the examples that come with tflearn to get your hands dirty with different neural network architectures. It even comes with built-in data sets so you don’t even have to find your own images. You also know enough now to start branching and learning about other areas of machine learning. Why not learn how to use algorithms to train computers how to play Atari games next? If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this. You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun Part 4, Part 5 and Part 6! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it. " Adam Geitgey,15.2K,13,https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78?source=tag_archive---------1----------------,Machine Learning is Fun! Part 4: Modern Face Recognition with Deep Learning,"Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 普通话, Русский, 한국어, Português, Tiếng Việt or Italiano. Have you noticed that Facebook has developed an uncanny ability to recognize your friends in your photographs? In the old days, Facebook used to make you to tag your friends in photos by clicking on them and typing in their name. Now as soon as you upload a photo, Facebook tags everyone for you like magic: This technology is called face recognition. 
Facebook’s algorithms are able to recognize your friends’ faces after they have been tagged only a few times. It’s pretty amazing technology — Facebook can recognize faces with 98% accuracy which is pretty much as good as humans can do! Let’s learn how modern face recognition works! But just recognizing your friends would be too easy. We can push this tech to the limit to solve a more challenging problem — telling Will Ferrell (famous actor) apart from Chad Smith (famous rock musician)! So far in Part 1, 2 and 3, we’ve used machine learning to solve isolated problems that have only one step — estimating the price of a house, generating new data based on existing data and telling if an image contains a certain object. All of those problems can be solved by choosing one machine learning algorithm, feeding in data, and getting the result. But face recognition is really a series of several related problems: As a human, your brain is wired to do all of this automatically and instantly. In fact, humans are too good at recognizing faces and end up seeing faces in everyday objects: Computers are not capable of this kind of high-level generalization (at least not yet...), so we have to teach them how to do each step in this process separately. We need to build a pipeline where we solve each step of face recognition separately and pass the result of the current step to the next step. In other words, we will chain together several machine learning algorithms: Let’s tackle this problem one step at a time. For each step, we’ll learn about a different machine learning algorithm. I’m not going to explain every single algorithm completely to keep this from turning into a book, but you’ll learn the main ideas behind each one and you’ll learn how you can build your own facial recognition system in Python using OpenFace and dlib. The first step in our pipeline is face detection. Obviously we need to locate the faces in a photograph before we can try to tell them apart! If you’ve used any camera in the last 10 years, you’ve probably seen face detection in action: Face detection is a great feature for cameras. When the camera can automatically pick out faces, it can make sure that all the faces are in focus before it takes the picture. But we’ll use it for a different purpose — finding the areas of the image we want to pass on to the next step in our pipeline. Face detection went mainstream in the early 2000's when Paul Viola and Michael Jones invented a way to detect faces that was fast enough to run on cheap cameras. However, much more reliable solutions exist now. We’re going to use a method invented in 2005 called Histogram of Oriented Gradients — or just HOG for short. To find faces in an image, we’ll start by making our image black and white because we don’t need color data to find faces: Then we’ll look at every single pixel in our image one at a time. For every single pixel, we want to look at the pixels that directly surrounding it: Our goal is to figure out how dark the current pixel is compared to the pixels directly surrounding it. Then we want to draw an arrow showing in which direction the image is getting darker: If you repeat that process for every single pixel in the image, you end up with every pixel being replaced by an arrow. These arrows are called gradients and they show the flow from light to dark across the entire image: This might seem like a random thing to do, but there’s a really good reason for replacing the pixels with gradients. 
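To make "gradients" concrete, here is a small NumPy sketch (my own illustration, not the code from the original post) that computes how strongly and in which direction brightness changes at every pixel:

```python
import numpy as np

# Hypothetical grayscale image as a 2D float array (0 = black, 255 = white)
image = np.random.rand(64, 64) * 255

# np.gradient returns the rate of change along each axis (rows first, then columns)
dy, dx = np.gradient(image)

# Gradient magnitude: how quickly brightness changes at each pixel
magnitude = np.sqrt(dx ** 2 + dy ** 2)

# Gradient direction in degrees: which way the brightness is changing
direction = np.degrees(np.arctan2(dy, dx))

print(magnitude.shape, direction.shape)  # (64, 64) (64, 64)
```

So what is the good reason? Lighting.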
If we analyze pixels directly, really dark images and really light images of the same person will have totally different pixel values. But by only considering the direction that brightness changes, both really dark images and really bright images will end up with the same exact representation. That makes the problem a lot easier to solve! But saving the gradient for every single pixel gives us way too much detail. We end up missing the forest for the trees. It would be better if we could just see the basic flow of lightness/darkness at a higher level so we could see the basic pattern of the image. To do this, we’ll break up the image into small squares of 16x16 pixels each. In each square, we’ll count up how many gradients point in each major direction (how many point up, point up-right, point right, etc...). Then we’ll replace that square in the image with the arrow directions that were the strongest. The end result is we turn the original image into a very simple representation that captures the basic structure of a face in a simple way: To find faces in this HOG image, all we have to do is find the part of our image that looks the most similar to a known HOG pattern that was extracted from a bunch of other training faces: Using this technique, we can now easily find faces in any image: If you want to try this step out yourself using Python and dlib, here’s code showing how to generate and view HOG representations of images. Whew, we isolated the faces in our image. But now we have to deal with the problem that faces turned in different directions look totally different to a computer: To account for this, we will try to warp each picture so that the eyes and lips are always in the same place in the image. This will make it a lot easier for us to compare faces in the next steps. To do this, we are going to use an algorithm called face landmark estimation. There are lots of ways to do this, but we are going to use the approach invented in 2014 by Vahid Kazemi and Josephine Sullivan. The basic idea is we will come up with 68 specific points (called landmarks) that exist on every face — the top of the chin, the outside edge of each eye, the inner edge of each eyebrow, etc. Then we will train a machine learning algorithm to be able to find these 68 specific points on any face: Here’s the result of locating the 68 face landmarks on our test image: Now that we know where the eyes and mouth are, we’ll simply rotate, scale and shear the image so that the eyes and mouth are centered as best as possible. We won’t do any fancy 3D warps because that would introduce distortions into the image. We are only going to use basic image transformations like rotation and scale that preserve parallel lines (called affine transformations): Now no matter how the face is turned, we are able to center the eyes and mouth in roughly the same position in the image. This will make our next step a lot more accurate. If you want to try this step out yourself using Python and dlib, here’s the code for finding face landmarks and here’s the code for transforming the image using those landmarks. Now we get to the meat of the problem — actually telling faces apart. This is where things get really interesting! The simplest approach to face recognition is to directly compare the unknown face we found in Step 2 with all the pictures we have of people that have already been tagged. When we find a previously tagged face that looks very similar to our unknown face, it must be the same person.
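Before moving on: if you want a single compact reference for the detection and landmark steps above, here is a minimal dlib-based sketch. It assumes dlib is installed and that the 68-point shape predictor model file has been downloaded separately; the file and image names below are placeholders, not paths from the original post:

```python
import dlib

# HOG-based face detector that ships with dlib
detector = dlib.get_frontal_face_detector()

# 68-point landmark model (downloaded separately; the path here is an assumption)
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = dlib.load_rgb_image("test_image.jpg")  # placeholder image path

for face_rect in detector(image, 1):
    landmarks = predictor(image, face_rect)
    # Each of the 68 landmarks is an (x, y) point, e.g. a point near the chin
    print("Chin:", landmarks.part(8).x, landmarks.part(8).y)
```

Now, back to the idea of simply comparing the centered face against every face we have already tagged.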
Seems like a pretty good idea, right? There’s actually a huge problem with that approach. A site like Facebook with billions of users and a trillion photos can’t possibly loop through every previously tagged face to compare it to every newly uploaded picture. That would take way too long. They need to be able to recognize faces in milliseconds, not hours. What we need is a way to extract a few basic measurements from each face. Then we could measure our unknown face the same way and find the known face with the closest measurements. For example, we might measure the size of each ear, the spacing between the eyes, the length of the nose, etc. If you’ve ever watched a bad crime show like CSI, you know what I am talking about: Ok, so which measurements should we collect from each face to build our known face database? Ear size? Nose length? Eye color? Something else? It turns out that the measurements that seem obvious to us humans (like eye color) don’t really make sense to a computer looking at individual pixels in an image. Researchers have discovered that the most accurate approach is to let the computer figure out the measurements to collect itself. Deep learning does a better job than humans at figuring out which parts of a face are important to measure. The solution is to train a Deep Convolutional Neural Network (just like we did in Part 3). But instead of training the network to recognize objects in pictures like we did last time, we are going to train it to generate 128 measurements for each face. The training process works by looking at 3 face images at a time: Then the algorithm looks at the measurements it is currently generating for each of those three images. It then tweaks the neural network slightly so that it makes sure the measurements it generates for #1 and #2 are slightly closer while making sure the measurements for #2 and #3 are slightly further apart: After repeating this step millions of times for millions of images of thousands of different people, the neural network learns to reliably generate 128 measurements for each person. Any ten different pictures of the same person should give roughly the same measurements. Machine learning people call the 128 measurements of each face an embedding. The idea of reducing complicated raw data like a picture into a list of computer-generated numbers comes up a lot in machine learning (especially in language translation). The exact approach for faces we are using was invented in 2015 by researchers at Google but many similar approaches exist. This process of training a convolutional neural network to output face embeddings requires a lot of data and computer power. Even with an expensive Nvidia Tesla video card, it takes about 24 hours of continuous training to get good accuracy. But once the network has been trained, it can generate measurements for any face, even ones it has never seen before! So this step only needs to be done once. Lucky for us, the fine folks at OpenFace already did this and they published several trained networks which we can directly use. Thanks Brandon Amos and team! So all we need to do ourselves is run our face images through their pre-trained network to get the 128 measurements for each face. Here are the measurements for our test image: So what parts of the face are these 128 numbers measuring exactly? It turns out that we have no idea. It doesn’t really matter to us. All we care about is that the network generates nearly the same numbers when looking at two different pictures of the same person.
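As a tiny illustration (my own, with made-up numbers), comparing two faces then comes down to comparing two lists of 128 numbers:

```python
import numpy as np

# Two hypothetical 128-dimensional face embeddings
known_face = np.random.rand(128)
unknown_face = np.random.rand(128)

# Euclidean distance between the embeddings; smaller means "more likely the same person"
distance = np.linalg.norm(known_face - unknown_face)

# The cutoff value is an assumption: the right threshold depends on the network
# that produced the embeddings and should be tuned on your own validation data
print("Same person?", distance < 0.6)
```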
If you want to try this step yourself, OpenFace provides a lua script that will generate embeddings for all images in a folder and write them to a csv file. You run it like this. This last step is actually the easiest step in the whole process. All we have to do is find the person in our database of known people who has the closest measurements to our test image. You can do that by using any basic machine learning classification algorithm. No fancy deep learning tricks are needed. We’ll use a simple linear SVM classifier, but lots of classification algorithms could work. All we need to do is train a classifier that can take in the measurements from a new test image and tell us which known person is the closest match. Running this classifier takes milliseconds. The result of the classifier is the name of the person! So let’s try out our system. First, I trained a classifier with the embeddings of about 20 pictures each of Will Ferrell, Chad Smith and Jimmy Fallon: Then I ran the classifier on every frame of the famous YouTube video of Will Ferrell and Chad Smith pretending to be each other on the Jimmy Fallon show: It works! And look how well it works for faces in different poses — even sideways faces! Let’s review the steps we followed: Now that you know how this all works, here are instructions, from start to finish, on how to run this entire face recognition pipeline on your own computer: UPDATE 4/9/2017: You can still follow the steps below to use OpenFace. However, I’ve released a new Python-based face recognition library called face_recognition that is much easier to install and use. So I’d recommend trying out face_recognition first instead of continuing below! I even put together a pre-configured virtual machine with face_recognition, OpenCV, TensorFlow and lots of other deep learning tools pre-installed. You can download and run it on your computer very easily. Give the virtual machine a shot if you don’t want to install all these libraries yourself! Original OpenFace instructions: If you liked this article, please consider signing up for my Machine Learning is Fun! newsletter: You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning. Now continue on to Machine Learning is Fun Part 5! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in computers and machine learning. Likes to write about it. " Arthur Juliani,9K,6,https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0?source=tag_archive---------2----------------,Simple Reinforcement Learning with Tensorflow Part 0: Q-Learning with Tables and Neural Networks,"For this tutorial in my Reinforcement Learning series, we are going to be exploring a family of RL algorithms called Q-Learning algorithms. These are a little different than the policy-based algorithms that will be looked at in the following tutorials (Parts 1–3). Instead of starting with a complex and unwieldy deep neural network, we will begin by implementing a simple lookup-table version of the algorithm, and then show how to implement a neural-network equivalent using Tensorflow. Given that we are going back to basics, it may be best to think of this as Part-0 of the series.
It will hopefully give an intuition into what is really happening in Q-Learning that we can then build on going forward when we eventually combine the policy gradient and Q-learning approaches to build state-of-the-art RL agents (If you are more interested in Policy Networks, or already have a grasp on Q-Learning, feel free to start the tutorial series here instead). Unlike policy gradient methods, which attempt to learn functions which directly map an observation to an action, Q-Learning attempts to learn the value of being in a given state, and taking a specific action there. While both approaches ultimately allow us to take intelligent actions given a situation, the means of getting to that action differ significantly. You may have heard about DeepQ-Networks which can play Atari Games. These are really just larger and more complex implementations of the Q-Learning algorithm we are going to discuss here. For this tutorial we are going to be attempting to solve the FrozenLake environment from the OpenAI gym. For those unfamiliar, the OpenAI gym provides an easy way for people to experiment with their learning agents in an array of provided toy games. The FrozenLake environment consists of a 4x4 grid of blocks, each one either being the start block, the goal block, a safe frozen block, or a dangerous hole. The objective is to have an agent learn to navigate from the start to the goal without moving onto a hole. At any given time the agent can choose to move either up, down, left, or right. The catch is that there is a wind which occasionally blows the agent onto a space they didn’t choose. As such, perfect performance every time is impossible, but learning to avoid the holes and reach the goal are certainly still doable. The reward at every step is 0, except for entering the goal, which provides a reward of 1. Thus, we will need an algorithm that learns long-term expected rewards. This is exactly what Q-Learning is designed to provide. In it’s simplest implementation, Q-Learning is a table of values for every state (row) and action (column) possible in the environment. Within each cell of the table, we learn a value for how good it is to take a given action within a given state. In the case of the FrozenLake environment, we have 16 possible states (one for each block), and 4 possible actions (the four directions of movement), giving us a 16x4 table of Q-values. We start by initializing the table to be uniform (all zeros), and then as we observe the rewards we obtain for various actions, we update the table accordingly. We make updates to our Q-table using something called the Bellman equation, which states that the expected long-term reward for a given action is equal to the immediate reward from the current action combined with the expected reward from the best future action taken at the following state. In this way, we reuse our own Q-table when estimating how to update our table for future actions! In equation form, the rule looks like this: This says that the Q-value for a given state (s) and action (a) should represent the current reward (r) plus the maximum discounted (γ) future reward expected according to our own table for the next state (s’) we would end up in. The discount variable allows us to decide how important the possible future rewards are compared to the present reward. By updating in this way, the table slowly begins to obtain accurate measures of the expected future reward for a given action in a given state. 
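The full walkthrough referenced below isn't reproduced in this excerpt, but the core table update looks roughly like this. This is a minimal sketch assuming the OpenAI gym FrozenLake environment; the environment name and the step API differ slightly across gym versions, and the hyperparameters here are illustrative:

```python
import numpy as np
import gym

env = gym.make("FrozenLake-v0")  # 16 states, 4 actions; the name varies by gym version

Q = np.zeros((env.observation_space.n, env.action_space.n))
lr, gamma = 0.8, 0.95

for episode in range(2000):
    state = env.reset()
    done = False
    while not done:
        # Pick the greedy action, with some decaying noise for exploration
        action = np.argmax(Q[state] + np.random.randn(env.action_space.n) / (episode + 1))
        next_state, reward, done, _ = env.step(action)
        # Bellman update: current reward plus discounted best future value
        Q[state, action] += lr * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state
```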
Below is a Python walkthrough of the Q-Table algorithm implemented in the FrozenLake environment: (Thanks to Praneet D for finding the optimal hyperparameters for this approach) Now, you may be thinking: tables are great, but they don’t really scale, do they? While it is easy to have a 16x4 table for a simple grid world, the number of possible states in any modern game or real-world environment is nearly infinitely larger. For most interesting problems, tables simply don’t work. We instead need some way to take a description of our state, and produce Q-values for actions without a table: that is where neural networks come in. By acting as a function approximator, we can take any number of possible states that can be represented as a vector and learn to map them to Q-values. In the case of the FrozenLake example, we will be using a one-layer network which takes the state encoded in a one-hot vector (1x16), and produces a vector of 4 Q-values, one for each action. Such a simple network acts kind of like a glorified table, with the network weights serving as the old cells. The key difference is that we can easily expand the Tensorflow network with added layers, activation functions, and different input types, whereas all that is impossible with a regular table. The method of updating is a little different as well. Instead of directly updating our table, with a network we will be using backpropagation and a loss function. Our loss function will be sum-of-squares loss, where the difference between the current predicted Q-values, and the “target” value is computed and the gradients passed through the network. In this case, our Q-target for the chosen action is the equivalent to the Q-value computed in equation 1 above. Below is the Tensorflow walkthrough of implementing our simple Q-Network: While the network learns to solve the FrozenLake problem, it turns out it doesn’t do so quite as efficiently as the Q-Table. While neural networks allow for greater flexibility, they do so at the cost of stability when it comes to Q-Learning. There are a number of possible extensions to our simple Q-Network which allow for greater performance and more robust learning. Two tricks in particular are referred to as Experience Replay and Freezing Target Networks. Those improvements and other tweaks were the key to getting Atari-playing Deep Q-Networks, and we will be exploring those additions in the future. For more info on the theory behind Q-Learning, see this great post by Tambet Matiisen. I hope this tutorial has been helpful for those curious about how to implement simple Q-Learning algorithms! If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated! If you’d like to follow my work on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on Twitter @awjliani. More from my Simple Reinforcement Learning with Tensorflow series: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @Unity3D & Cognitive Neuroscience PhD student. 
Exploring frontier technology through the lens of artificial intelligence, data science, and the shape of things to come " Dhruv Parthasarathy,4.3K,12,https://blog.athelas.com/a-brief-history-of-cnns-in-image-segmentation-from-r-cnn-to-mask-r-cnn-34ea83205de4?source=tag_archive---------5----------------,A Brief History of CNNs in Image Segmentation: From R-CNN to Mask R-CNN,"At Athelas, we use Convolutional Neural Networks(CNNs) for a lot more than just classification! In this post, we’ll see how CNNs can be used, with great results, in image instance segmentation. Ever since Alex Krizhevsky, Geoff Hinton, and Ilya Sutskever won ImageNet in 2012, Convolutional Neural Networks(CNNs) have become the gold standard for image classification. In fact, since then, CNNs have improved to the point where they now outperform humans on the ImageNet challenge! While these results are impressive, image classification is far simpler than the complexity and diversity of true human visual understanding. In classification, there’s generally an image with a single object as the focus and the task is to say what that image is (see above). But when we look at the world around us, we carry out far more complex tasks. We see complicated sights with multiple overlapping objects, and different backgrounds and we not only classify these different objects but also identify their boundaries, differences, and relations to one another! Can CNNs help us with such complex tasks? Namely, given a more complicated image, can we use CNNs to identify the different objects in the image, and their boundaries? As has been shown by Ross Girshick and his peers over the last few years, the answer is conclusively yes. Through this post, we’ll cover the intuition behind some of the main techniques used in object detection and segmentation and see how they’ve evolved from one implementation to the next. In particular, we’ll cover R-CNN (Regional CNN), the original application of CNNs to this problem, along with its descendants Fast R-CNN, and Faster R-CNN. Finally, we’ll cover Mask R-CNN, a paper released recently by Facebook Research that extends such object detection techniques to provide pixel level segmentation. Here are the papers referenced in this post: Inspired by the research of Hinton’s lab at the University of Toronto, a small team at UC Berkeley, led by Professor Jitendra Malik, asked themselves what today seems like an inevitable question: Object detection is the task of finding the different objects in an image and classifying them (as seen in the image above). The team, comprised of Ross Girshick (a name we’ll see again), Jeff Donahue, and Trevor Darrel found that this problem can be solved with Krizhevsky’s results by testing on the PASCAL VOC Challenge, a popular object detection challenge akin to ImageNet. They write, Let’s now take a moment to understand how their architecture, Regions With CNNs (R-CNN) works. Understanding R-CNN The goal of R-CNN is to take in an image, and correctly identify where the main objects (via a bounding box) in the image. But how do we find out where these bounding boxes are? R-CNN does what we might intuitively do as well - propose a bunch of boxes in the image and see if any of them actually correspond to an object. R-CNN creates these bounding boxes, or region proposals, using a process called Selective Search which you can read about here. 
At a high level, Selective Search (shown in the image above) looks at the image through windows of different sizes, and for each size tries to group together adjacent pixels by texture, color, or intensity to identify objects. Once the proposals are created, R-CNN warps the region to a standard square size and passes it through to a modified version of AlexNet (the winning submission to ImageNet 2012 that inspired R-CNN), as shown above. On the final layer of the CNN, R-CNN adds a Support Vector Machine (SVM) that simply classifies whether this is an object, and if so what object. This is step 4 in the image above. Improving the Bounding Boxes Now, having found the object in the box, can we tighten the box to fit the true dimensions of the object? We can, and this is the final step of R-CNN. R-CNN runs a simple linear regression on the region proposal to generate tighter bounding box coordinates to get our final result. Here are the inputs and outputs of this regression model: So, to summarize, R-CNN is just the following steps: R-CNN works really well, but is really quite slow for a few simple reasons: In 2015, Ross Girshick, the first author of R-CNN, solved both these problems, leading to the second algorithm in our short history - Fast R-CNN. Let’s now go over its main insights. Fast R-CNN Insight 1: RoI (Region of Interest) Pooling For the forward pass of the CNN, Girshick realized that for each image, a lot of proposed regions for the image invariably overlapped causing us to run the same CNN computation again and again (~2000 times!). His insight was simple — Why not run the CNN just once per image and then find a way to share that computation across the ~2000 proposals? This is exactly what Fast R-CNN does using a technique known as RoIPool (Region of Interest Pooling). At its core, RoIPool shares the forward pass of a CNN for an image across its subregions. In the image above, notice how the CNN features for each region are obtained by selecting a corresponding region from the CNN’s feature map. Then, the features in each region are pooled (usually using max pooling). So all it takes us is one pass of the original image as opposed to ~2000! Fast R-CNN Insight 2: Combine All Models into One Network The second insight of Fast R-CNN is to jointly train the CNN, classifier, and bounding box regressor in a single model. Where earlier we had different models to extract image features (CNN), classify (SVM), and tighten bounding boxes (regressor), Fast R-CNN instead used a single network to compute all three. You can see how this was done in the image above. Fast R-CNN replaced the SVM classifier with a softmax layer on top of the CNN to output a classification. It also added a linear regression layer parallel to the softmax layer to output bounding box coordinates. In this way, all the outputs needed came from one single network! Here are the inputs and outputs to this overall model: Even with all these advancements, there was still one remaining bottleneck in the Fast R-CNN process — the region proposer. As we saw, the very first step to detecting the locations of objects is generating a bunch of potential bounding boxes or regions of interest to test. In Fast R-CNN, these proposals were created using Selective Search, a fairly slow process that was found to be the bottleneck of the overall process. 
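As an aside, here is a toy NumPy sketch of the RoI pooling idea described above. It is my own simplification that ignores batching and channels, not the paper's implementation:

```python
import numpy as np

# Hypothetical CNN feature map for the whole image (single channel for simplicity)
feature_map = np.random.rand(32, 32)

# One region proposal, given as (row_start, row_end, col_start, col_end) on the feature map
r0, r1, c0, c1 = 4, 12, 10, 22
region = feature_map[r0:r1, c0:c1]

def roi_max_pool(region, out_size=2):
    """Split the region into an out_size x out_size grid and max-pool each cell."""
    pooled = np.zeros((out_size, out_size))
    rows = np.array_split(np.arange(region.shape[0]), out_size)
    cols = np.array_split(np.arange(region.shape[1]), out_size)
    for i, rs in enumerate(rows):
        for j, cs in enumerate(cols):
            pooled[i, j] = region[np.ix_(rs, cs)].max()
    return pooled

print(roi_max_pool(region))  # a fixed-size 2x2 summary of the proposal, whatever its shape
```

Every proposal, whatever its size, gets squeezed into the same fixed-size summary before classification. Back to that remaining bottleneck.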
In the middle 2015, a team at Microsoft Research composed of Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, found a way to make the region proposal step almost cost free through an architecture they (creatively) named Faster R-CNN. The insight of Faster R-CNN was that region proposals depended on features of the image that were already calculated with the forward pass of the CNN (first step of classification). So why not reuse those same CNN results for region proposals instead of running a separate selective search algorithm? Indeed, this is just what the Faster R-CNN team achieved. In the image above, you can see how a single CNN is used to both carry out region proposals and classification. This way, only one CNN needs to be trained and we get region proposals almost for free! The authors write: Here are the inputs and outputs of their model: How the Regions are Generated Let’s take a moment to see how Faster R-CNN generates these region proposals from CNN features. Faster R-CNN adds a Fully Convolutional Network on top of the features of the CNN creating what’s known as the Region Proposal Network. The Region Proposal Network works by passing a sliding window over the CNN feature map and at each window, outputting k potential bounding boxes and scores for how good each of those boxes is expected to be. What do these k boxes represent? Intuitively, we know that objects in an image should fit certain common aspect ratios and sizes. For instance, we know that we want some rectangular boxes that resemble the shapes of humans. Likewise, we know we won’t see many boxes that are very very thin. In such a way, we create k such common aspect ratios we call anchor boxes. For each such anchor box, we output one bounding box and score per position in the image. With these anchor boxes in mind, let’s take a look at the inputs and outputs to this Region Proposal Network: We then pass each such bounding box that is likely to be an object into Fast R-CNN to generate a classification and tightened bounding boxes. So far, we’ve seen how we’ve been able to use CNN features in many interesting ways to effectively locate different objects in an image with bounding boxes. Can we extend such techniques to go one step further and locate exact pixels of each object instead of just bounding boxes? This problem, known as image segmentation, is what Kaiming He and a team of researchers, including Girshick, explored at Facebook AI using an architecture known as Mask R-CNN. Much like Fast R-CNN, and Faster R-CNN, Mask R-CNN’s underlying intuition is straight forward. Given that Faster R-CNN works so well for object detection, could we extend it to also carry out pixel level segmentation? Mask R-CNN does this by adding a branch to Faster R-CNN that outputs a binary mask that says whether or not a given pixel is part of an object. The branch (in white in the above image), as before, is just a Fully Convolutional Network on top of a CNN based feature map. Here are its inputs and outputs: But the Mask R-CNN authors had to make one small adjustment to make this pipeline work as expected. RoiAlign - Realigning RoIPool to be More Accurate When run without modifications on the original Faster R-CNN architecture, the Mask R-CNN authors realized that the regions of the feature map selected by RoIPool were slightly misaligned from the regions of the original image. Since image segmentation requires pixel level specificity, unlike bounding boxes, this naturally led to inaccuracies. 
The authors were able to solve this problem by cleverly adjusting RoIPool to be more precisely aligned using a method known as RoIAlign. Imagine we have an image of size 128x128 and a feature map of size 25x25. Let’s imagine we want features the region corresponding to the top-left 15x15 pixels in the original image (see above). How might we select these pixels from the feature map? We know each pixel in the original image corresponds to ~ 25/128 pixels in the feature map. To select 15 pixels from the original image, we just select 15 * 25/128 ~= 2.93 pixels. In RoIPool, we would round this down and select 2 pixels causing a slight misalignment. However, in RoIAlign, we avoid such rounding. Instead, we use bilinear interpolation to get a precise idea of what would be at pixel 2.93. This, at a high level, is what allows us to avoid the misalignments caused by RoIPool. Once these masks are generated, Mask R-CNN combines them with the classifications and bounding boxes from Faster R-CNN to generate such wonderfully precise segmentations: If you’re interested in trying out these algorithms yourselves, here are relevant repositories: Faster R-CNN Mask R-CNN In just 3 years, we’ve seen how the research community has progressed from Krizhevsky et. al’s original result to R-CNN, and finally all the way to such powerful results as Mask R-CNN. Seen in isolation, results like Mask R-CNN seem like incredible leaps of genius that would be unapproachable. Yet, through this post, I hope you’ve seen how such advancements are really the sum of intuitive, incremental improvements through years of hard work and collaboration. Each of the ideas proposed by R-CNN, Fast R-CNN, Faster R-CNN, and finally Mask R-CNN were not necessarily quantum leaps, yet their sum products have led to really remarkable results that bring us closer to a human level understanding of sight. What particularly excites me, is that the time between R-CNN and Mask R-CNN was just three years! With continued funding, focus, and support, how much further can Computer Vision improve over the next three years? If you see any errors or issues in this post, please contact me at dhruv@getathelas.com and I”ll immediately correct them! If you’re interested in applying such techniques, come join us at Athelas where we apply Computer Vision to blood diagnostics daily: Other posts we’ve written: Thanks to Bharath Ramsundar, Pranav Ramkrishnan, Tanay Tandon, and Oliver Cameron for help with this post! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. @dhruvp. VP Eng @Athelas. MIT Math and CS Undergrad ’13. MIT CS Masters ’14. Previously: Director of AI Programs @ Udacity. Blood Diagnostics through Deep Learning http://athelas.com " Sebastian Heinz,4.4K,13,https://medium.com/mlreview/a-simple-deep-learning-model-for-stock-price-prediction-using-tensorflow-30505541d877?source=tag_archive---------6----------------,A simple deep learning model for stock price prediction using TensorFlow,"For a recent hackathon that we did at STATWORX, some of our team members scraped minutely S&P 500 data from the Google Finance API. The data consisted of index as well as stock prices of the S&P’s 500 constituents. Having this data at hand, the idea of developing a deep learning model for predicting the S&P 500 index based on the 500 constituents prices one minute ago came immediately on my mind. 
Playing around with the data and building the deep learning model with TensorFlow was fun and so I decided to write my first Medium.com story: a little TensorFlow tutorial on predicting S&P 500 stock prices. What you will read is not an in-depth tutorial, but more a high-level introduction to the important building blocks and concepts of TensorFlow models. The Python code I’ve created is not optimized for efficiency but understandability. The dataset I’ve used can be downloaded from here (40MB). Our team exported the scraped stock data from our scraping server as a csv file. The dataset contains n = 41266 minutes of data ranging from April to August 2017 on 500 stocks as well as the total S&P 500 index price. Index and stocks are arranged in wide format. The data was already cleaned and prepared, meaning missing stock and index prices were LOCF’ed (last observation carried forward), so that the file did not contain any missing values. A quick look at the S&P time series using pyplot.plot(data['SP500']): Note: This is actually the lead of the S&P 500 index, meaning, its value is shifted 1 minute into the future. This operation is necessary since we want to predict the next minute of the index and not the current minute. The dataset was split into training and test data. The training data contained 80% of the total dataset. The data was not shuffled but sequentially sliced. The training data ranges from April to approx. end of July 2017, the test data ends end of August 2017. There are a lot of different approaches to time series cross validation, such as rolling forecasts with and without refitting or more elaborate concepts such as time series bootstrap resampling. The latter involves repeated samples from the remainder of the seasonal decomposition of the time series in order to simulate samples that follow the same seasonal pattern as the original time series but are not exact copies of its values. Most neural network architectures benefit from scaling the inputs (sometimes also the output). Why? Because most common activation functions of the network’s neurons such as tanh or sigmoid are defined on the [-1, 1] or [0, 1] interval respectively. Nowadays, rectified linear unit (ReLU) activations are commonly used activations which are unbounded on the axis of possible activation values. However, we will scale both the inputs and targets anyway. Scaling can be easily accomplished in Python using sklearn’s MinMaxScaler. Remark: Caution must be undertaken regarding what part of the data is scaled and when. A common mistake is to scale the whole dataset before training and test split are being applied. Why is this a mistake? Because scaling invokes the calculation of statistics e.g. the min/max of a variable. When performing time series forecasting in real life, you do not have information from future observations at the time of forecasting. Therefore, calculation of scaling statistics has to be conducted on training data and must then be applied to the test data. Otherwise, you use future information at the time of forecasting which commonly biases forecasting metrics in a positive direction. TensorFlow is a great piece of software and currently the leading deep learning and neural network computation framework. It is based on a C++ low level backend but is usually controlled via Python (there is also a neat TensorFlow library for R, maintained by RStudio). TensorFlow operates on a graph representation of the underlying computational task. 
This approach allows the user to specify mathematical operations as elements in a graph of data, variables and operators. Since neural networks are actually graphs of data and mathematical operations, TensorFlow is just perfect for neural networks and deep learning. Check out this simple example (stolen from our deep learning introduction from our blog): In the figure above, two numbers are supposed to be added. Those numbers are stored in two variables, a and b. The two values are flowing through the graph and arrive at the square node, where they are being added. The result of the addition is stored into another variable, c. Actually, a, b and c can be considered as placeholders. Any numbers that are fed into a and b get added and are stored into c. This is exactly how TensorFlow works. The user defines an abstract representation of the model (neural network) through placeholders and variables. Afterwards, the placeholders get ""filled"" with real data and the actual computations take place. The following code implements the toy example from above in TensorFlow: After having imported the TensorFlow library, two placeholders are defined using tf.placeholder(). They correspond to the two blue circles on the left of the image above. Afterwards, the mathematical addition is defined via tf.add(). The result of the computation is c = 9. With placeholders set up, the graph can be executed with any integer value for a and b. Of course, the former problem is just a toy example. The required graphs and computations in a neural network are much more complex. As mentioned before, it all starts with placeholders. We need two placeholders in order to fit our model: X contains the network's inputs (the stock prices of all S&P 500 constituents at time T = t) and Y the network's outputs (the index value of the S&P 500 at time T = t + 1). The shape of the placeholders correspond to [None, n_stocks] with [None] meaning that the inputs are a 2-dimensional matrix and the outputs are a 1-dimensional vector. It is crucial to understand which input and output dimensions the neural net needs in order to design it properly. The None argument indicates that at this point we do not yet know the number of observations that flow through the neural net graph in each batch, so we keep if flexible. We will later define the variable batch_size that controls the number of observations per training batch. Besides placeholders, variables are another cornerstone of the TensorFlow universe. While placeholders are used to store input and target data in the graph, variables are used as flexible containers within the graph that are allowed to change during graph execution. Weights and biases are represented as variables in order to adapt during training. Variables need to be initialized, prior to model training. We will get into that a litte later in more detail. The model consists of four hidden layers. The first layer contains 1024 neurons, slightly more than double the size of the inputs. Subsequent hidden layers are always half the size of the previous layer, which means 512, 256 and finally 128 neurons. A reduction of the number of neurons for each subsequent layer compresses the information the network identifies in the previous layers. Of course, other network architectures and neuron configurations are possible but are out of scope for this introduction level article. It is important to understand the required variable dimensions between input, hidden and output layers. 
As a rule of thumb in multilayer perceptrons (MLPs, the type of networks used here), the second dimension of the previous layer is the first dimension in the current layer for weight matrices. This might sound complicated but is essentially just each layer passing its output as input to the next layer. The biases dimension equals the second dimension of the current layer’s weight matrix, which corresponds the number of neurons in this layer. After definition of the required weight and bias variables, the network topology, the architecture of the network, needs to be specified. Hereby, placeholders (data) and variables (weighs and biases) need to be combined into a system of sequential matrix multiplications. Furthermore, the hidden layers of the network are transformed by activation functions. Activation functions are important elements of the network architecture since they introduce non-linearity to the system. There are dozens of possible activation functions out there, one of the most common is the rectified linear unit (ReLU) which will also be used in this model. The image below illustrates the network architecture. The model consists of three major building blocks. The input layer, the hidden layers and the output layer. This architecture is called a feedforward network. Feedforward indicates that the batch of data solely flows from left to right. Other network architectures, such as recurrent neural networks, also allow data flowing “backwards” in the network. The cost function of the network is used to generate a measure of deviation between the network’s predictions and the actual observed training targets. For regression problems, the mean squared error (MSE) function is commonly used. MSE computes the average squared deviation between predictions and targets. Basically, any differentiable function can be implemented in order to compute a deviation measure between predictions and targets. However, the MSE exhibits certain properties that are advantageous for the general optimization problem to be solved. The optimizer takes care of the necessary computations that are used to adapt the network’s weight and bias variables during training. Those computations invoke the calculation of so called gradients, that indicate the direction in which the weights and biases have to be changed during training in order to minimize the network’s cost function. The development of stable and speedy optimizers is a major field in neural network an deep learning research. Here the Adam Optimizer is used, which is one of the current default optimizers in deep learning development. Adam stands for “Adaptive Moment Estimation” and can be considered as a combination between two other popular optimizers AdaGrad and RMSProp. Initializers are used to initialize the network’s variables before training. Since neural networks are trained using numerical optimization techniques, the starting point of the optimization problem is one the key factors to find good solutions to the underlying problem. There are different initializers available in TensorFlow, each with different initialization approaches. Here, I use the tf.variance_scaling_initializer(), which is one of the default initialization strategies. Note, that with TensorFlow it is possible to define multiple initialization functions for different variables within the graph. However, in most cases, a unified initialization is sufficient. 
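Putting the pieces described so far together, here is a minimal TensorFlow 1.x-style sketch of the setup. The layer sizes follow the description above, but the names and details are illustrative rather than the article's exact code:

```python
import tensorflow as tf

n_stocks = 500

# Placeholders for inputs (prices of the 500 constituents) and targets (S&P 500 index)
X = tf.placeholder(tf.float32, shape=[None, n_stocks])
Y = tf.placeholder(tf.float32, shape=[None])

init = tf.variance_scaling_initializer()

def dense(inputs, n_in, n_out, activation=None):
    """One fully connected layer: weights, biases and an optional activation."""
    W = tf.Variable(init([n_in, n_out]))
    b = tf.Variable(tf.zeros([n_out]))
    out = tf.matmul(inputs, W) + b
    return activation(out) if activation else out

hidden_1 = dense(X, n_stocks, 1024, tf.nn.relu)
hidden_2 = dense(hidden_1, 1024, 512, tf.nn.relu)
hidden_3 = dense(hidden_2, 512, 256, tf.nn.relu)
hidden_4 = dense(hidden_3, 256, 128, tf.nn.relu)
output = tf.transpose(dense(hidden_4, 128, 1))

# Cost function (mean squared error) and the Adam optimizer
mse = tf.reduce_mean(tf.squared_difference(output, Y))
optimizer = tf.train.AdamOptimizer().minimize(mse)
```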
After having defined the placeholders, variables, initializers, cost functions and optimizers of the network, the model needs to be trained. Usually, this is done by minibatch training. During minibatch training random data samples of n = batch_size are drawn from the training data and fed into the network. The training dataset gets divided into n / batch_size batches that are sequentially fed into the network. At this point the placeholders X and Y come into play. They store the input and target data and present them to the network as inputs and targets. A sampled data batch of X flows through the network until it reaches the output layer. There, TensorFlow compares the models predictions against the actual observed targets Y in the current batch. Afterwards, TensorFlow conducts an optimization step and updates the networks parameters, corresponding to the selected learning scheme. After having updated the weights and biases, the next batch is sampled and the process repeats itself. The procedure continues until all batches have been presented to the network. One full sweep over all batches is called an epoch. The training of the network stops once the maximum number of epochs is reached or another stopping criterion defined by the user applies. During the training, we evaluate the networks predictions on the test set — the data which is not learned, but set aside — for every 5th batch and visualize it. Additionally, the images are exported to disk and later combined into a video animation of the training process (see below). The model quickly learns the shape und location of the time series in the test data and is able to produce an accurate prediction after some epochs. Nice! One can see that the networks rapidly adapts to the basic shape of the time series and continues to learn finer patterns of the data. This also corresponds to the Adam learning scheme that lowers the learning rate during model training in order not to overshoot the optimization minimum. After 10 epochs, we have a pretty close fit to the test data! The final test MSE equals 0.00078 (it is very low, because the target is scaled). The mean absolute percentage error of the forecast on the test set is equal to 5.31% which is pretty good. Note, that this is just a fit to the test data, no actual out of sample metrics in a real world scenario. Please note that there are tons of ways of further improving this result: design of layers and neurons, choosing different initialization and activation schemes, introduction of dropout layers of neurons, early stopping and so on. Furthermore, different types of deep learning models, such as recurrent neural networks might achieve better performance on this task. However, this is not the scope of this introductory post. The release of TensorFlow was a landmark event in deep learning research. Its flexibility and performance allows researchers to develop all kinds of sophisticated neural network architectures as well as other ML algorithms. However, flexibility comes at the cost of longer time-to-model cycles compared to higher level APIs such as Keras or MxNet. Nonetheless, I am sure that TensorFlow will make its way to the de-facto standard in neural network and deep learning development in research and practical applications. Many of our customers are already using TensorFlow or start developing projects that employ TensorFlow models. Also our data science consultants at STATWORX are heavily using TensorFlow for deep learning and neural net research and development. 
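Circling back to the training procedure, here is a minimal sketch of the minibatch loop. It continues the placeholders and ops from the earlier sketch, and X_train, y_train, X_test and y_test stand for the scaled train/test splits described above; they are assumptions, not the article's exact variable names:

```python
import numpy as np

batch_size = 256
epochs = 10

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(epochs):
        # Shuffle the training data, then feed it to the network batch by batch
        shuffle = np.random.permutation(len(y_train))
        X_train, y_train = X_train[shuffle], y_train[shuffle]
        for start in range(0, len(y_train), batch_size):
            batch_x = X_train[start:start + batch_size]
            batch_y = y_train[start:start + batch_size]
            sess.run(optimizer, feed_dict={X: batch_x, Y: batch_y})
        # Track progress on the held-out test data
        print(epoch, sess.run(mse, feed_dict={X: X_test, Y: y_test}))
```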
Let’s see what Google has planned for the future of TensorFlow. One thing that is missing, at least in my opinion, is a neat graphical user interface for designing and developing neural net architectures with TensorFlow backend. Maybe, this is something Google is already working on ;) If you have any comments or questions on my first Medium story, feel free to comment below! I will try to answer them. Also, feel free to use my code or share this story with your peers on social platforms of your choice. Update: I’ve added both the Python script as well as a (zipped) dataset to a Github repository. Feel free to clone and fork. Lastly, follow me on: Twitter | LinkedIn From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. CEO @ STATWORX. Doing data science, stats and ML for over a decade. Food, wine and cocktail enthusiast. Check our website: https://www.statworx.com Highlights from Machine Learning Research, Projects and Learning Materials. From and For ML Scientists, Engineers an Enthusiasts. " Max Pechyonkin,23K,8,https://medium.com/ai%C2%B3-theory-practice-business/understanding-hintons-capsule-networks-part-i-intuition-b4b559d1159b?source=tag_archive---------7----------------,Understanding Hinton’s Capsule Networks. Part I: Intuition.,"Part I: Intuition (you are reading it now)Part II: How Capsules WorkPart III: Dynamic Routing Between CapsulesPart IV: CapsNet Architecture Quick announcement about our new publication AI3. We are getting the best writers together to talk about the Theory, Practice, and Business of AI and machine learning. Follow it to stay up to date on the latest trends. Last week, Geoffrey Hinton and his team published two papers that introduced a completely new type of neural network based on so-called capsules. In addition to that, the team published an algorithm, called dynamic routing between capsules, that allows to train such a network. For everyone in the deep learning community, this is huge news, and for several reasons. First of all, Hinton is one of the founders of deep learning and an inventor of numerous models and algorithms that are widely used today. Secondly, these papers introduce something completely new, and this is very exciting because it will most likely stimulate additional wave of research and very cool applications. In this post, I will explain why this new architecture is so important, as well as intuition behind it. In the following posts I will dive into technical details. However, before talking about capsules, we need to have a look at CNNs, which are the workhorse of today’s deep learning. CNNs (convolutional neural networks) are awesome. They are one of the reasons deep learning is so popular today. They can do amazing things that people used to think computers would not be capable of doing for a long, long time. Nonetheless, they have their limits and they have fundamental drawbacks. Let us consider a very simple and non-technical example. Imagine a face. What are the components? We have the face oval, two eyes, a nose and a mouth. For a CNN, a mere presence of these objects can be a very strong indicator to consider that there is a face in the image. Orientational and relative spatial relationships between these components are not very important to a CNN. How do CNNs work? The main component of a CNN is a convolutional layer. Its job is to detect important features in the image pixels. 
Layers that are deeper (closer to the input) will learn to detect simple features such as edges and color gradients, whereas higher layers will combine simple features into more complex features. Finally, dense layers at the top of the network will combine very high level features and produce classification predictions. An important thing to understand is that higher-level features combine lower-level features as a weighted sum: activations of a preceding layer are multiplied by the following layer neuron’s weights and added, before being passed to activation nonlinearity. Nowhere in this setup there is pose (translational and rotational) relationship between simpler features that make up a higher level feature. CNN approach to solve this issue is to use max pooling or successive convolutional layers that reduce spacial size of the data flowing through the network and therefore increase the “field of view” of higher layer’s neurons, thus allowing them to detect higher order features in a larger region of the input image. Max pooling is a crutch that made convolutional networks work surprisingly well, achieving superhuman performance in many areas. But do not be fooled by its performance: while CNNs work better than any model before them, max pooling nonetheless is losing valuable information. Hinton himself stated that the fact that max pooling is working so well is a big mistake and a disaster: Of course, you can do away with max pooling and still get good results with traditional CNNs, but they still do not solve the key problem: In the example above, a mere presence of 2 eyes, a mouth and a nose in a picture does not mean there is a face, we also need to know how these objects are oriented relative to each other. Computer graphics deals with constructing a visual image from some internal hierarchical representation of geometric data. Note that the structure of this representation needs to take into account relative positions of objects. That internal representation is stored in computer’s memory as arrays of geometrical objects and matrices that represent relative positions and orientation of these objects. Then, special software takes that representation and converts it into an image on the screen. This is called rendering. Inspired by this idea, Hinton argues that brains, in fact, do the opposite of rendering. He calls it inverse graphics: from visual information received by eyes, they deconstruct a hierarchical representation of the world around us and try to match it with already learned patterns and relationships stored in the brain. This is how recognition happens. And the key idea is that representation of objects in the brain does not depend on view angle. So at this point the question is: how do we model these hierarchical relationships inside of a neural network? The answer comes from computer graphics. In 3D graphics, relationships between 3D objects can be represented by a so-called pose, which is in essence translation plus rotation. Hinton argues that in order to correctly do classification and object recognition, it is important to preserve hierarchical pose relationships between object parts. This is the key intuition that will allow you to understand why capsule theory is so important. It incorporates relative relationships between objects and it is represented numerically as a 4D pose matrix. 
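As a rough illustration of what a pose is (my own example, not taken from the papers), a rotation plus a translation can be packed into a single 4x4 homogeneous transformation matrix:

```python
import numpy as np

theta = np.radians(30)          # rotate 30 degrees around the z-axis
tx, ty, tz = 2.0, 0.5, -1.0     # then shift by this translation

pose = np.array([
    [np.cos(theta), -np.sin(theta), 0.0, tx],
    [np.sin(theta),  np.cos(theta), 0.0, ty],
    [0.0,            0.0,           1.0, tz],
    [0.0,            0.0,           0.0, 1.0],
])

# Applying the pose to a 3D point (in homogeneous coordinates)
point = np.array([1.0, 0.0, 0.0, 1.0])
print(pose @ point)
```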
When these relationships are built into internal representation of data, it becomes very easy for a model to understand that the thing that it sees is just another view of something that it has seen before. Consider the image below. You can easily recognize that this is the Statue of Liberty, even though all the images show it from different angles. This is because internal representation of the Statue of Liberty in your brain does not depend on the view angle. You have probably never seen these exact pictures of it, but you still immediately knew what it was. For a CNN, this task is really hard because it does not have this built-in understanding of 3D space, but for a CapsNet it is much easier because these relationships are explicitly modeled. The paper that uses this approach was able to cut error rate by 45% as compared to the previous state of the art, which is a huge improvement. Another benefit of the capsule approach is that it is capable of learning to achieve state-of-the art performance by only using a fraction of the data that a CNN would use (Hinton mentions this in his famous talk about what is wrongs with CNNs). In this sense, the capsule theory is much closer to what the human brain does in practice. In order to learn to tell digits apart, the human brain needs to see only a couple of dozens of examples, hundreds at most. CNNs, on the other hand, need tens of thousands of examples to achieve very good performance, which seems like a brute force approach that is clearly inferior to what we do with our brains. The idea is really simple, there is no way no one has come up with it before! And the truth is, Hinton has been thinking about this for decades. The reason why there were no publications is simply because there was no technical way to make it work before. One of the reasons is that computers were just not powerful enough in the pre-GPU-based era before around 2012. Another reason is that there was no algorithm that allowed to implement and successfully learn a capsule network (in the same fashion the idea of artificial neurons was around since 1940-s, but it was not until mid 1980-s when backpropagation algorithm showed up and allowed to successfully train deep networks). In the same fashion, the idea of capsules itself is not that new and Hinton has mentioned it before, but there was no algorithm up until now to make it work. This algorithm is called “dynamic routing between capsules”. This algorithm allows capsules to communicate with each other and create representations similar to scene graphs in computer graphics. Capsules introduce a new building block that can be used in deep learning to better model hierarchical relationships inside of internal knowledge representation of a neural network. Intuition behind them is very simple and elegant. Hinton and his team proposed a way to train such a network made up of capsules and successfully trained it on a simple data set, achieving state-of-the-art performance. This is very encouraging. Nonetheless, there are challenges. Current implementations are much slower than other modern deep learning models. Time will show if capsule networks can be trained quickly and efficiently. In addition, we need to see if they work well on more difficult data sets and in different domains. In any case, the capsule network is a very interesting and already working model which will definitely get more developed over time and contribute to further expansion of deep learning application domain. 
This concludes part one of the series on capsule networks. In the Part II, more technical part, I will walk you through the CapsNet’s internal workings step by step. You can follow me on Twitter. Let’s also connect on LinkedIn. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning The AI revolution is here! Navigate the ever changing industry with our thoughtfully written articles whether your a researcher, engineer, or entrepreneur " Slav Ivanov,3.9K,17,https://blog.slavv.com/the-1700-great-deep-learning-box-assembly-setup-and-benchmarks-148c5ebe6415?source=tag_archive---------8----------------,"The $1700 great Deep Learning box: Assembly, setup and benchmarks","Updated April 2018: Uses CUDA 9, cuDNN 7 and Tensorflow 1.5. After years of using a thin client in the form of increasingly thinner MacBooks, I had gotten used to it. So when I got into Deep Learning (DL), I went straight for the brand new at the time Amazon P2 cloud servers. No upfront cost, the ability to train many models simultaneously and the general coolness of having a machine learning model out there slowly teaching itself. However, as time passed, the AWS bills steadily grew larger, even as I switched to 10x cheaper Spot instances. Also, I didn’t find myself training more than one model at a time. Instead, I’d go to lunch/workout/etc. while the model was training, and come back later with a clear head to check on it. But eventually the model complexity grew and took longer to train. I’d often forget what I did differently on the model that had just completed its 2-day training. Nudged by the great experiences of the other folks on the Fast.AI Forum, I decided to settle down and to get a dedicated DL box at home. The most important reason was saving time while prototyping models — if they trained faster, the feedback time would be shorter. Thus it would be easier for my brain to connect the dots between the assumptions I had for the model and its results. Then I wanted to save money — I was using Amazon Web Services (AWS), which offered P2 instances with Nvidia K80 GPUs. Lately, the AWS bills were around $60–70/month with a tendency to get larger. Also, it is expensive to store large datasets, like ImageNet. And lastly, I haven’t had a desktop for over 10 years and wanted to see what has changed in the meantime (spoiler alert: mostly nothing). What follows are my choices, inner monologue, and gotchas: from choosing the components to benchmarking. A sensible budget for me would be about 2 years worth of my current compute spending. At $70/month for AWS, this put it at around $1700 for the whole thing. You can check out all the components used. The PC Part Picker site is also really helpful in detecting if some of the components don’t play well together. The GPU is the most crucial component in the box. It will train these deep networks fast, shortening the feedback cycle. Disclosure: The following are affiliate links, to help me pay for, well, more GPUs. The choice is between a few of Nvidia’s cards: GTX 1070, GTX 1070 Ti, GTX 1080, GTX 1080 Ti and finally the Titan X. The prices might fluctuate, especially because some GPUs are great for cryptocurrency mining (wink, 1070, wink). On performance side: GTX 1080 Ti and Titan X are similar. Roughly speaking the GTX 1080 is about 25% faster than GTX 1070. And GTX 1080 Ti is about 30% faster than GTX 1080. The new GTX 1070 Ti is very close in performance to GTX 1080. 
Tim Dettmers has a great article on picking a GPU for Deep Learning, which he regularly updates as new cards come on the market. Here are the things to consider when picking a GPU: Considering all of this, I picked the GTX 1080 Ti, mainly for the training speed boost. I plan to add a second 1080 Ti soonish. Even though the GPU is the MVP in deep learning, the CPU still matters. For example, data preparation is usually done on the CPU. The number of cores and threads per core is important if we want to parallelize all that data prep. To stay on budget, I picked a mid-range CPU, the Intel i5 7500. It’s relatively cheap but good enough to not slow things down. Edit: As a few people have pointed out: “probably the biggest gotcha that is unique to DL/multi-GPU is to pay attention to the PCIe lanes supported by the CPU/motherboard” (by Andrej Karpathy). We want each GPU to have 16 PCIe lanes so it eats data as fast as possible (16 GB/s for PCIe 3.0). This means that for two cards we need 32 PCIe lanes. However, the CPU I have picked has only 16 lanes. So 2 GPUs would run in 2x8 mode (instead of 2x16). This might be a bottleneck, leading to less than ideal utilization of the graphics cards. Thus a CPU with 40 lanes is recommended. Edit 2: However, Tim Dettmers points out that having 8 lanes per card should only decrease performance by “0–10%” for two GPUs. So currently, my recommendation is: go with 16 PCIe lanes per video card unless it gets too expensive for you. Otherwise, 8 lanes should do as well. A good solution for a double-GPU machine would be an Intel Xeon processor like the E5–1620 v4 (40 PCIe lanes). Or, if you want to splurge, go for a higher-end processor like the desktop i7–6850K. Memory (RAM) It’s nice to have a lot of memory if we are to be working with rather big datasets. I got 2 sticks of 16 GB, for a total of 32 GB of RAM, and plan to buy another 32 GB later. Following Jeremy Howard’s advice, I got a fast SSD disk to keep my OS and current data on, and then a slow spinning HDD for those huge datasets (like ImageNet). SSD: I remember when I got my first Macbook Air years ago, how blown away I was by the SSD speed. To my delight, a new generation of SSD called NVMe has made its way to market in the meantime. A 480 GB MyDigitalSSD NVMe drive was a great deal. This baby copies files at gigabytes per second. HDD: 2 TB Seagate. While SSDs have been getting faster, HDDs have been getting cheaper. To somebody who has used Macbooks with a 128 GB disk for the last 7 years, having this much space feels almost obscene. The one thing that I kept in mind when picking a motherboard was the ability to support two GTX 1080 Ti cards, both in the number of PCI Express lanes (the minimum is 2x8) and the physical size of 2 cards. Also, make sure it’s compatible with the chosen CPU. An Asus TUF Z270 did it for me. MSI — X99A SLI PLUS should work great if you got an Intel Xeon CPU. Rule of thumb: the power supply should provide enough juice for the CPU and the GPUs, plus 100 watts extra. The Intel i5 7500 processor uses 65W, and the GPUs (1080 Ti) need 250W each, so I got a Deepcool 750W Gold PSU (currently unavailable, EVGA 750 GQ is similar). The “Gold” here refers to the power efficiency, i.e. how much of the power consumed is wasted as heat. The case should be the same form factor as the motherboard. Also having enough LEDs to embarrass a Burner is a bonus. A friend recommended the Thermaltake N23 case, which I promptly got. No LEDs sadly.
Here is how much I spent on all the components (your costs may vary): $700 GTX 1080 Ti + $190 CPU + $230 RAM + $230 SSD + $66 HDD + $130 Motherboard + $75 PSU + $50 Case ============$1671 Total Adding tax and fees, this nicely matches my preset budget of $1700. If you don’t have much experience with hardware and fear you might break something, a professional assembly might be the best option. However, this was a great learning opportunity that I couldn’t pass (even though I’ve had my share of hardware-related horror stories). The first and important step is to read the installation manuals that came with each component. Especially important for me, as I’ve done this before once or twice, and I have just the right amount of inexperience to mess things up. This is done before installing the motherboard in the case. Next to the processor there is a lever that needs to be pulled up. The processor is then placed on the base (double-check the orientation). Finally, the lever comes down to fix the CPU in place. . . But I had a quite the difficulty doing this: once the CPU was in position the lever wouldn’t go down. I actually had a more hardware-capable friend of mine video walk me through the process. Turns out the amount of force required to get the lever locked down was more than what I was comfortable with. Next is fixing the fan on top of the CPU: the fan legs must be fully secured to the motherboard. Consider where the fan cable will go before installing. The processor I had came with thermal paste. If yours doesn’t, make sure to put some paste between the CPU and the cooling unit. Also, replace the paste if you take off the fan. I put the Power Supply Unit (PSU) in before the motherboard to get the power cables snugly placed in case back side. . . . . Pretty straight forward — carefully place it and screw it in. A magnetic screwdriver was really helpful. Then connect the power cables and the case buttons and LEDs. . Just slide it in the M2 slot and screw it in. Piece of cake. The memory proved quite hard to install, requiring too much effort to properly lock in. A few times I almost gave up, thinking I must be doing it wrong. Eventually one of the sticks clicked in and the other one promptly followed. At this point, I turned the computer on to make sure it works. To my relief, it started right away! Finally, the GPU slid in effortlessly. 14 pins of power later and it was running. NB: Do not plug your monitor in the external card right away. Most probably it needs drivers to function (see below). Finally, it’s complete! Now that we have the hardware in place, only the soft part remains. Out with the screwdriver, in with the keyboard. Note on dual booting: If you plan to install Windows (because, you know, for benchmarks, totally not for gaming), it would be wise to do Windows first and Linux second. I didn’t and had to reinstall Ubuntu because Windows messed up the boot partition. Livewire has a detailed article on dual boot. Most DL frameworks are designed to work on Linux first, and eventually support other operating systems. So I went for Ubuntu, my default Linux distribution. An old 2GB USB drive was laying around and worked great for the installation. UNetbootin (OSX) or Rufus (Windows) can prepare the Linux thumb drive. The default options worked fine during the Ubuntu install. At the time of writing, Ubuntu 17.04 was just released, so I opted for the previous version (16.04), whose quirks are much better documented online. 
Ubuntu Server or Desktop: The Server and Desktop editions of Ubuntu are almost identical, with the notable exception of the visual interface (called X) not being installed with Server. I installed the Desktop and disabled autostarting X so that the computer would boot it in terminal mode. If needed, one could launch the visual desktop later by typing startx. Let’s get our install up to date. From Jeremy Howard’s excellent install-gpu script: To deep learn on our machine, we need a stack of technologies to use our GPU: Download CUDA from Nvidia, or just run the code below: Updated to specify version 9 of CUDA. Thanks to @zhanwenchen for the tip. If you need to add later versions of CUDA, click here. After CUDA has been installed the following code will add the CUDA installation to the PATH variable: Now we can verify that CUDA has been installed successfully by running This should have installed the display driver as well. For me, nvidia-smi showed ERR as the device name, so I installed the latest Nvidia drivers (as of May 2018) to fix it: Removing CUDA/Nvidia drivers If at any point the drivers or CUDA seem broken (as they did for me — multiple times), it might be better to start over by running: Since version 1.5 Tensorflow supports CuDNN 7, so we install that. To download CuDNN, one needs to register for a (free) developer account. After downloading, install with the following: Anaconda is a great package manager for python. I’ve moved to python 3.6, so will be using the Anaconda 3 version: The popular DL framework by Google. Installation: Validate Tensorfow install: To make sure we have our stack running smoothly, I like to run the tensorflow MNIST example: We should see the loss decreasing during training: Keras is a great high-level neural networks framework, an absolute pleasure to work with. Installation can’t be easier too: PyTorch is a newcomer in the world of DL frameworks, but its API is modeled on the successful Torch, which was written in Lua. PyTorch feels new and exciting, mostly great, although some things are still to be implemented. We install it by running: Jupyter is a web-based IDE for Python, which is ideal for data sciency tasks. It’s installed with Anaconda, so we just configure and test it: Now if we open http://localhost:8888 we should see a Jupyter screen. Run Jupyter on boot Rather than running the notebook every time the computer is restarted, we can set it to autostart on boot. We will use crontab to do this, which we can edit by running crontab -e . Then add the following after the last line in the crontab file: I use my old trusty Macbook Air for development, so I’d like to be able to log into the DL box both from my home network, also when on the run. SSH Key: It’s way more secure to use a SSH key to login instead of a password. Digital Ocean has a great guide on how to setup this. SSH tunnel: If you want to access your jupyter notebook from another computer, the recommended way is to use SSH tunneling (instead of opening the notebook to the world and protecting with a password). Let’s see how we can do this: 2. Then to connect over SSH tunnel, run the following script on the client: To test this, open a browser and try http://localhost:8888 from the remote machine. Your Jupyter notebook should appear. Setup out-of-network access: Finally to access the DL box from the outside world, we need 3 things: Setting up out-of-network access depends on the router/network setup, so I’m not going into details. 
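One quick sanity check worth running at this point is confirming that TensorFlow can actually see the GPU; here is a short sketch in the TensorFlow 1.x session style used by this stack (the exact snippet is mine, not the article's):

```python
# Sanity check (TensorFlow 1.x style, matching the CUDA 9 / TF 1.5 stack above):
# run a tiny op pinned to the GPU and log which device each op was placed on.
import tensorflow as tf

with tf.device("/gpu:0"):
    a = tf.constant([1.0, 2.0, 3.0], name="a")
    b = tf.constant([4.0, 5.0, 6.0], name="b")
    c = a * b

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))  # expect [ 4. 10. 18.] with the ops placed on the GPU
```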
Now that we have everything running smoothly, let’s put it to the test. We’ll be comparing the newly built box to an AWS P2.xlarge instance, which is what I’ve used so far for DL. The tests are computer vision related, meaning convolutional networks with a fully connected model thrown in. We time training models on: AWS P2 instance GPU (K80), AWS P2 virtual CPU, the GTX 1080 Ti and Intel i5 7500 CPU. Andres Hernandez points out that my comparison does not use Tensorflow that is optimized for these CPUs, which would have helped the them perform better. Check his insightful comment for more details. The “Hello World” of computer vision. The MNIST database consists of 70,000 handwritten digits. We run the Keras example on MNIST which uses Multilayer Perceptron (MLP). The MLP means that we are using only fully connected layers, not convolutions. The model is trained for 20 epochs on this dataset, which achieves over 98% accuracy out of the box. We see that the GTX 1080 Ti is 2.4 times faster than the K80 on AWS P2 in training the model. This is rather surprising as these 2 cards should have about the same performance. I believe this is because of the virtualization or underclocking of the K80 on AWS. The CPUs perform 9 times slower than the GPUs. As we will see later, it’s a really good result for the processors. This is due to the small model which fails to fully utilize the parallel processing power of the GPUs. Interestingly, the desktop Intel i5–7500 achieves 2.3x speedup over the virtual CPU on Amazon. A VGG net will be finetuned for the Kaggle Dogs vs Cats competition. In this competition, we need to tell apart pictures of dogs and cats. Running the model on CPUs for the same number of batches wasn’t feasible. Therefore we finetune for 390 batches (1 epoch) on the GPUs and 10 batches on the CPUs. The code used is on github. The 1080 Ti is 5.5 times faster that the AWS GPU (K80). The difference in the CPUs performance is about the same as the previous experiment (i5 is 2.6x faster). However, it’s absolutely impractical to use CPUs for this task, as the CPUs were taking ~200x more time on this large model that includes 16 convolutional layers and a couple semi-wide (4096) fully connected layers on top. A GAN (Generative adversarial network) is a way to train a model to generate images. GAN achieves this by pitting two networks against each other: A Generator which learns to create better and better images, and a Discriminator that tries to tell which images are real and which are dreamt up by the Generator. The Wasserstein GAN is an improvement over the original GAN. We will use a PyTorch implementation, that is very similar to the one by the WGAN author. The models are trained for 50 steps, and the loss is all over the place which is often the case with GANs. CPUs aren’t considered. The GTX 1080 Ti finishes 5.5x faster than the AWS P2 K80, which is in line with the previous results. The final benchmark is on the original Style Transfer paper (Gatys et al.), implemented on Tensorflow (code available). Style Transfer is a technique that combines the style of one image (a painting for example) and the content of another image. Check out my previous post for more details on how Style Transfer works. The GTX 1080 Ti outperforms the AWS K80 by a factor of 4.3. This time the CPUs are 30-50 times slower than graphics cards. The slowdown is less than on the VGG Finetuning task but more than on the MNIST Perceptron experiment. 
The model uses mostly the earlier layers of the VGG network, and I suspect this was too shallow to fully utilize the GPUs. The DL box is in the next room and a large model is training on it. Was it a wise investment? Time will tell but it is beautiful to watch the glowing LEDs in the dark and to hear its quiet hum as models are trying to squeeze out that extra accuracy percentage point. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Entrepreneur / Hacker Machine learning, Deep learning and other types of learning. " Stefan Kojouharov,14.2K,7,https://becominghuman.ai/cheat-sheets-for-ai-neural-networks-machine-learning-deep-learning-big-data-678c51b4b463?source=tag_archive---------9----------------,"Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Big Data","Over the past few months, I have been collecting AI cheat sheets. From time to time I share them with friends and colleagues and recently I have been getting asked a lot, so I decided to organize and share the entire collection. To make things more interesting and give context, I added descriptions and/or excerpts for each major topic. This is the most complete list and the Big-O is at the very end, enjoy... This machine learning cheat sheet will help you find the right estimator for the job which is the most difficult part. The flowchart will help you check the documentation and rough guide of each estimator that will help you to know more about the problems and how to solve it. Scikit-learn (formerly scikits.learn) is a free software machine learning library for the Python programming language. It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy. In May 2017 Google announced the second-generation of the TPU, as well as the availability of the TPUs in Google Compute Engine.[12] The second-generation TPUs deliver up to 180 teraflops of performance, and when organized into clusters of 64 TPUs provide up to 11.5 petaflops. In 2017, Google’s TensorFlow team decided to support Keras in TensorFlow’s core library. Chollet explained that Keras was conceived to be an interface rather than an end-to-end machine-learning framework. It presents a higher-level, more intuitive set of abstractions that make it easy to configure neural networks regardless of the backend scientific computing library. NumPy targets the CPython reference implementation of Python, which is a non-optimizing bytecode interpreter. Mathematical algorithms written for this version of Python often run much slower than compiled equivalents. NumPy address the slowness problem partly by providing multidimensional arrays and functions and operators that operate efficiently on arrays, requiring rewriting some code, mostly inner loops using NumPy. The name ‘Pandas’ is derived from the term “panel data”, an econometrics term for multidimensional structured data sets. The term “data wrangler” is starting to infiltrate pop culture. In the 2017 movie Kong: Skull Island, one of the characters, played by actor Marc Evan Jackson is introduced as “Steve Woodward, our data wrangler”. SciPy builds on the NumPy array object and is part of the NumPy stack which includes tools like Matplotlib, pandas and SymPy, and an expanding set of scientific computing libraries. 
This NumPy stack has similar users to other applications such as MATLAB, GNU Octave, and Scilab. The NumPy stack is also sometimes referred to as the SciPy stack.[3] matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+. There is also a procedural “pylab” interface based on a state machine (like OpenGL), designed to closely resemble that of MATLAB, though its use is discouraged.[2] SciPy makes use of matplotlib. pyplot is a matplotlib module which provides a MATLAB-like interface.[6] matplotlib is designed to be as usable as MATLAB, with the ability to use Python, with the advantage that it is free. >>> If you like this list, you can let me know here. <<< Stefan is the founder of Chatbot’s Life, a Chatbot media and consulting firm. Chatbot’s Life has grown to over 150k views per month and has become the premium place to learn about Bots & AI online. Chatbot’s Life has also consulted many of the top Bot companies like Swelly, Instavest, OutBrain, NearGroup and a number of Enterprises. Big-O Algorithm Cheat Sheet: http://bigocheatsheet.com/ Bokeh Cheat Sheet: https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Bokeh_Cheat_Sheet.pdf Data Science Cheat Sheet: https://www.datacamp.com/community/tutorials/python-data-science-cheat-sheet-basics Data Wrangling Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf Data Wrangling: https://en.wikipedia.org/wiki/Data_wrangling Ggplot Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf Keras Cheat Sheet: https://www.datacamp.com/community/blog/keras-cheat-sheet#gs.DRKeNMs Keras: https://en.wikipedia.org/wiki/Keras Machine Learning Cheat Sheet: https://ai.icymi.email/new-machinelearning-cheat-sheet-by-emily-barry-abdsc/ Machine Learning Cheat Sheet: https://docs.microsoft.com/en-in/azure/machine-learning/machine-learning-algorithm-cheat-sheet ML Cheat Sheet:: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html Matplotlib Cheat Sheet: https://www.datacamp.com/community/blog/python-matplotlib-cheat-sheet#gs.uEKySpY Matpotlib: https://en.wikipedia.org/wiki/Matplotlib Neural Networks Cheat Sheet: http://www.asimovinstitute.org/neural-network-zoo/ Neural Networks Graph Cheat Sheet: http://www.asimovinstitute.org/blog/ Neural Networks: https://www.quora.com/Where-can-find-a-cheat-sheet-for-neural-network Numpy Cheat Sheet: https://www.datacamp.com/community/blog/python-numpy-cheat-sheet#gs.AK5ZBgE NumPy: https://en.wikipedia.org/wiki/NumPy Pandas Cheat Sheet: https://www.datacamp.com/community/blog/python-pandas-cheat-sheet#gs.oundfxM Pandas: https://en.wikipedia.org/wiki/Pandas_(software) Pandas Cheat Sheet: https://www.datacamp.com/community/blog/pandas-cheat-sheet-python#gs.HPFoRIc Pyspark Cheat Sheet: https://www.datacamp.com/community/blog/pyspark-cheat-sheet-python#gs.L=J1zxQ Scikit Cheat Sheet: https://www.datacamp.com/community/blog/scikit-learn-cheat-sheet Scikit-learn: https://en.wikipedia.org/wiki/Scikit-learn Scikit-learn Cheat Sheet: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html Scipy Cheat Sheet: https://www.datacamp.com/community/blog/python-scipy-cheat-sheet#gs.JDSg3OI SciPy: https://en.wikipedia.org/wiki/SciPy TesorFlow Cheat Sheet: 
https://www.altoros.com/tensorflow-cheat-sheet.html Tensor Flow: https://en.wikipedia.org/wiki/TensorFlow From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder of Chatbots Life. I help Companies Create Great Chatbots & AI Systems and share my Insights along the way. Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity. " Netflix Technology Blog,99,11,https://medium.com/netflix-techblog/distributed-neural-networks-with-gpus-in-the-aws-cloud-ccf71e82056b?source=tag_archive---------0----------------,Distributed Neural Networks with GPUs in the AWS Cloud,"by Alex Chen, Justin Basilico, and Xavier Amatriain As we have described previously on this blog, at Netflix we are constantly innovating by looking for better ways to find the best movies and TV shows for our members. When a new algorithmic technique such as Deep Learning shows promising results in other domains (e.g. Image Recognition, Neuro-imaging, Language Models, and Speech Recognition), it should not come as a surprise that we would try to figure out how to apply such techniques to improve our product. In this post, we will focus on what we have learned while building infrastructure for experimenting with these approaches at Netflix. We hope that this will be useful for others working on similar algorithms, especially if they are also leveraging the Amazon Web Services (AWS) infrastructure. However, we will not detail how we are using variants of Artificial Neural Networks for personalization, since it is an active area of research. Many researchers have pointed out that most of the algorithmic techniques used in the trendy Deep Learning approaches have been known and available for some time. Much of the more recent innovation in this area has been around making these techniques feasible for real-world applications. This involves designing and implementing architectures that can execute these techniques using a reasonable amount of resources in a reasonable amount of time. The first successful instance of large-scale Deep Learning made use of 16000 CPU cores in 1000 machines in order to train an Artificial Neural Network in a matter of days. While that was a remarkable milestone, the required infrastructure, cost, and computation time are still not practical. Andrew Ng and his team addressed this issue in follow up work . Their implementation used GPUs as a powerful yet cheap alternative to large clusters of CPUs. Using this architecture, they were able to train a model 6.5 times larger in a few days using only 3 machines. In another study, Schwenk et al. showed that training these models on GPUs can improve performance dramatically, even when comparing to high-end multicore CPUs. Given our well-known approach and leadership in cloud computing, we sought out to implement a large-scale Neural Network training system that leveraged both the advantages of GPUs and the AWS cloud. We wanted to use a reasonable number of machines to implement a powerful machine learning solution using a Neural Network approach. We also wanted to avoid needing special machines in a dedicated data center and instead leverage the full, on-demand computing power we can obtain from AWS. In architecting our approach for leveraging computing power in the cloud, we sought to strike a balance that would make it fast and easy to train Neural Networks by looking at the entire training process. 
For computing resources, we have the capacity to use many GPU cores, CPU cores, and AWS instances, which we would like to use efficiently. For an application such as this, we typically need to train not one, but multiple models either from different datasets or configurations (e.g. different international regions). For each configuration we need to perform hyperparameter tuning, where each combination of parameters requires training a separate Neural Network. In our solution, we take the approach of using GPU-based parallelism for training and using distributed computation for handling hyperparameter tuning and different configurations. Some of you might be thinking that the scenario described above is not what people think of as a distributed Machine Learning in the traditional sense. For instance, in the work by Ng et al. cited above, they distribute the learning algorithm itself between different machines. While that approach might make sense in some cases, we have found that to be not always the norm, especially when a dataset can be stored on a single instance. To understand why, we first need to explain the different levels at which a model training process can be distributed. In a standard scenario, we will have a particular model with multiple instances. Those instances might correspond to different partitions in your problem space. A typical situation is to have different models trained for different countries or regions since the feature distribution and even the item space might be very different from one region to the other. This represents the first initial level at which we can decide to distribute our learning process. We could have, for example, a separate machine train each of the 41 countries where Netflix operates, since each region can be trained entirely independently. However, as explained above, training a single instance actually implies training and testing several models, each corresponding to a different combinations of hyperparameters. This represents the second level at which the process can be distributed. This level is particularly interesting if there are many parameters to optimize and you have a good strategy to optimize them, like Bayesian optimization with Gaussian Processes. The only communication between runs are hyperparameter settings and test evaluation metrics. Finally, the algorithm training itself can be distributed. While this is also interesting, it comes at a cost. For example, training ANN is a comparatively communication-intensive process. Given that you are likely to have thousands of cores available in a single GPU instance, it is very convenient if you can squeeze the most out of that GPU and avoid getting into costly across-machine communication scenarios. This is because communication within a machine using memory is usually much faster than communication over a network. The following pseudo code below illustrates the three levels at which an algorithm training process like us can be distributed. In this post we will explain how we addressed level 1 and 2 distribution in our use case. Note that one of the reasons we did not need to address level 3 distribution is because our model has millions of parameters (compared to the billions in the original paper by Ng). Before we addressed distribution problem though, we had to make sure the GPU-based parallel training was efficient. 
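Sketched as nested loops, those three levels look roughly like this (an illustrative reconstruction; the function names and stubs are placeholders, not Netflix's actual pseudo code):

```python
# Illustrative reconstruction of the three distribution levels; all names and
# stubs below are placeholders.

def hyperparameter_combinations():
    """Level 2: candidate hyperparameter settings (grid or Bayesian search)."""
    return [{"hidden_units": h, "learning_rate": lr}
            for h in (128, 256) for lr in (0.01, 0.001)]

def train_and_test(region, hyperparams):
    """Level 3: train one network on one GPU and return its test metric."""
    return 0.0  # placeholder for the GPU-optimized training and testing code

regions = ["US", "UK", "BR"]                     # Level 1: one model instance per region,
for region in regions:                           # each trainable on a separate machine
    for hp in hyperparameter_combinations():     # Level 2: each trial can go to its own GPU/worker
        metric = train_and_test(region, hp)      # Level 3: training itself stays on a single GPU
        print(region, hp, metric)
```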
We approached this by first getting a proof-of-concept to work on our own development machines and then addressing the issue of how to scale and use the cloud as a second stage. We started by using a Lenovo S20 workstation with a Nvidia Quadro 600 GPU. This GPU has 98 cores and provides a useful baseline for our experiments; especially considering that we planned on using a more powerful machine and GPU in the AWS cloud. Our first attempt to train our Neural Network model took 7 hours. We then ran the same code to train the model in on a EC2’s cg1.4xlarge instance, which has a more powerful Tesla M2050 with 448 cores. However, the training time jumped from 7 to over 20 hours. Profiling showed that most of the time was spent on the function calls to Nvidia Performance Primitive library, e.g. nppsMulC_32f_I, nppsExp_32f_I. Calling the npps functions repeatedly took 10x more system time on the cg1 instance than in the Lenovo S20. While we tried to uncover the root cause, we worked our way around the issue by reimplementing the npps functions using the customized cuda kernel, e.g. replace nppsMulC_32f_I function with: Replacing all npps functions in this way for the Neural Network code reduced the total training time on the cg1 instance from over 20 hours to just 47 minutes when training on 4 million samples. Training 1 million samples took 96 seconds of GPU time. Using the same approach on the Lenovo S20 the total training time also reduced from 7 hours to 2 hours. This makes us believe that the implementation of these functions is suboptimal regardless of the card specifics. While we were implementing this “hack”, we also worked with the AWS team to find a principled solution that would not require a kernel patch. In doing so, we found that the performance degradation was related to the NVreg_CheckPCIConfigSpace parameter of the kernel. According to RedHat, setting this parameter to 0 disables very slow accesses to the PCI configuration space. In a virtualized environment such as the AWS cloud, these accesses cause a trap in the hypervisor that results in even slower access. NVreg_CheckPCIConfigSpace is a parameter of kernel module nvidia-current, that can be set using: We tested the effect of changing this parameter using a benchmark that calls MulC repeatedly (128x1000 times). Below are the results (runtime in sec) on our cg1.4xlarge instances: As you can see, disabling accesses to PCI space had a spectacular effect in the original npps functions, decreasing the runtime by 95%. The effect was significant even in our optimized Kernel functions saving almost 25% in runtime. However, it is important to note that even when the PCI access is disabled, our customized functions performed almost 60% better than the default ones. We should also point out that there are other options, which we have not explored so far but could be useful for others. First, we could look at optimizing our code by applying a kernel fusion trick that combines several computation steps into one kernel to reduce the memory access. Finally, we could think about using Theano, the GPU Match compiler in Python, which is supposed to also improve performance in these cases. While our initial work was done using cg1.4xlarge EC2 instances, we were interested in moving to the new EC2 GPU g2.2xlarge instance type, which has a GRID K520 GPU (GK104 chip) with 1536 cores. 
Currently our application is also bounded by GPU memory bandwidth and the GRID K520‘s memory bandwidth is 198 GB/sec, which is an improvement over the Tesla M2050’s at 148 GB/sec. Of course, using a GPU with faster memory would also help (e.g. TITAN’s memory bandwidth is 288 GB/sec). We repeated the same comparison between the default npps functions and our customized ones (with and without PCI space access) on the g2.2xlarge instances. One initial surprise was that we measured worse performance for npps on the g2 instances than the cg1 when PCI space access was enabled. However, disabling it improved performance between 45% and 65% compared to the cg1 instances. Again, our KernelMulC customized functions are over 70% better, with benchmark times under a second. Thus, switching to G2 with the right configuration allowed us to run our experiments faster, or alternatively larger experiments in the same amount of time. Once we had optimized the single-node training and testing operations, we were ready to tackle the issue of hyperparameter optimization. If you are not familiar with this concept, here is a simple explanation: Most machine learning algorithms have parameters to tune, which are called often called hyperparameters to distinguish them from model parameters that are produced as a result of the learning algorithm. For example, in the case of a Neural Network, we can think about optimizing the number of hidden units, the learning rate, or the regularization weight. In order to tune these, you need to train and test several different combinations of hyperparameters and pick the best one for your final model. A naive approach is to simply perform an exhaustive grid search over the different possible combinations of reasonable hyperparameters. However, when faced with a complex model where training each one is time consuming and there are many hyperparameters to tune, it can be prohibitively costly to perform such exhaustive grid searches. Luckily, you can do better than this by thinking of parameter tuning as an optimization problem in itself. One way to do this is to use a Bayesian Optimization approach where an algorithm’s performance with respect to a set of hyperparameters is modeled as a sample from a Gaussian Process. Gaussian Processes are a very effective way to perform regression and while they can have trouble scaling to large problems, they work well when there is a limited amount of data, like what we encounter when performing hyperparameter optimization. We use package spearmint to perform Bayesian Optimization and find the best hyperparameters for the Neural Network training algorithm. We hook up spearmint with our training algorithm by having it choose the set of hyperparameters and then training a Neural Network with those parameters using our GPU-optimized code. This model is then tested and the test metric results used to update the next hyperparameter choices made by spearmint. We’ve squeezed high performance from our GPU but we only have 1–2 GPU cards per machine, so we would like to make use of the distributed computing power of the AWS cloud to perform the hyperparameter tuning for all configurations, such as different models per international region. To do this, we use the distributed task queue Celery to send work to each of the GPUs. Each worker process listens to the task queue and runs the training on one GPU. This allows us, for example, to tune, train, and update several models daily for all international regions. 
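A minimal sketch of that worker pattern (my illustration; the broker URL, task name and training stub are assumptions, not Netflix's code): each Celery worker process owns one GPU and pulls training tasks, i.e. hyperparameter settings chosen by Spearmint, off the queue.

```python
# Illustrative Celery worker pattern: one worker process per GPU, each pulling
# training tasks off a shared queue. The broker URL and training stub are placeholders.
from celery import Celery

app = Celery("trainer", broker="redis://localhost:6379/0")

def run_gpu_training(region, hyperparams, gpu_id):
    """Placeholder for the GPU-optimized training code; returns a test metric."""
    return 0.0

@app.task
def train_model(region, hyperparams, gpu_id=0):
    metric = run_gpu_training(region, hyperparams, gpu_id)
    return {"region": region, "hyperparams": hyperparams, "metric": metric}

# A driver (e.g. Spearmint choosing the next hyperparameters) would enqueue work with:
# train_model.delay("US", {"hidden_units": 256, "learning_rate": 0.01})
```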
Although the Spearmint + Celery system is working, we are currently evaluating more complete and flexible solutions using HTCondor or StarCluster. HTCondor can be used to manage the workflow of any Directed Acyclic Graph (DAG). It handles input/output file transfer and resource management. In order to use Condor, we need each compute node to register with the manager with a given ClassAd (e.g. SLOT1_HAS_GPU=TRUE; STARD_ATTRS=HAS_GPU). Then the user can submit a job with a configuration “Requirements=HAS_GPU” so that the job only runs on AWS instances that have an available GPU. The main advantage of using Condor is that it also manages the distribution of the data needed for the training of the different models. Condor also allows us to run the Spearmint Bayesian optimization on the Manager instead of having to run it on each of the workers. Another alternative is to use StarCluster, which is an open source cluster computing framework for AWS EC2 developed at MIT. StarCluster runs on the Oracle Grid Engine (formerly Sun Grid Engine) in a fault-tolerant way and is fully supported by Spearmint. Finally, we are also looking into integrating Spearmint with Jobman in order to better manage the hyperparameter search workflow. The figure below illustrates the generalized setup using Spearmint plus Celery, Condor, or StarCluster: Implementing bleeding edge solutions such as using GPUs to train large-scale Neural Networks can be a daunting endeavour. If you need to do it in your own custom infrastructure, the cost and the complexity might be overwhelming. Leveraging the public AWS cloud can have obvious benefits, provided care is taken in the customization and use of the instance resources. By sharing our experience we hope to make it much easier and more straightforward for others to develop similar applications. We are always looking for talented researchers and engineers to join our team. So if you are interested in solving these types of problems, please take a look at some of our open positions on the Netflix jobs page. Originally published at techblog.netflix.com on February 10, 2014. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Learn more about how Netflix designs, builds, and operates our systems and engineering organizations. Learn about Netflix’s world class engineering efforts, company culture, product developments and more. " Francesco Gadaleta,3,4,https://hackernoon.com/gradient-descent-vs-coordinate-descent-9b5657f1c59f?source=tag_archive---------1----------------,Gradient descent vs coordinate descent – Hacker Noon,"When it comes to function minimization, it’s time to open a book of optimization and linear algebra. I am currently working on variable selection and lasso-based solutions in genetics. What lasso does is basically minimize the loss function plus an L1 penalty in order to set some regression coefficients to zero and select only those covariates that are really associated with the response. Pheew, the shortest summary of lasso ever! We all know that, provided the function to be minimized is convex, a good direction to follow, in order to find a local minimum, is towards the negative gradient of the function. Now, my question is how good or bad following the negative gradient is compared to a coordinate descent approach that loops across all dimensions and minimizes along each. There is no better way to try this than with real code and some measuring. Hence, I wrote some code that implements both gradient descent and coordinate descent.
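As a rough illustration of that experiment (the original post used R; this Python sketch with made-up data is mine, not the author's code): gradient descent takes a fixed step of 0.1 along the negative gradient, while coordinate descent exactly minimizes the least-squares loss along one coordinate at a time in a round-robin sweep.

```python
# Illustrative comparison on a least-squares problem with synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = rng.normal(size=p)
y = X @ beta_true + 0.1 * rng.normal(size=n)

def gradient_descent(X, y, lr=0.1, iters=100):
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ b - y) / len(y)   # gradient of the least-squares loss
        b -= lr * grad                      # fixed step along the negative gradient
    return b

def coordinate_descent(X, y, iters=100):
    b = np.zeros(X.shape[1])
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(X.shape[1]):          # round-robin over coordinates
            r = y - X @ b + X[:, j] * b[j]   # residual with coordinate j removed
            b[j] = X[:, j] @ r / col_sq[j]   # exact 1-D minimizer along coordinate j
    return b

print(np.linalg.norm(gradient_descent(X, y) - beta_true))
print(np.linalg.norm(coordinate_descent(X, y) - beta_true))
```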
The comparison might not be completely fair because the learning rate in the gradient descent procedure is fixed at 0.1 (which in some cases might be slower indeed). But even with some tuning (maybe with some linear search) or adaptive learning rates, it’s quite common to see that coordinate descent overcomes its brother gradient descent many times. This occurs much more often when the number of covariates becomes very high, as in many computational biology problems. In the figure below, I plot the analytical solution in red, the gradient descent minimisation in blue and the coordinate descent in green, across a number of iterations. A small explanation is probably necessary to read the function that performs coordinate descent. For a more mathematical explanation refer to the original post. Coordinate descent will update each variable in a Round Robin fashion. Despite the learning rate of the gradient descent procedure (which could indeed speed up convergence), the comparison between the two is fair at least in terms of complexity. Coordinate descent needs to perform operations for each coordinate update. Gradient descent performs the same number of operations . The R code that performs this comparison and generates the plot above is given below Feel free to download this code (remember to cite me and send me some cookies). Happy descent! Originally published at worldofpiggy.com on May 31, 2014. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Machine learning, math, crypto, blockchain, fitchain.io how hackers start their afternoons. " Milo Spencer-Harper,2.2K,3,https://medium.com/technology-invention-and-more/how-to-build-a-multi-layered-neural-network-in-python-53ec3d1d326a?source=tag_archive---------0----------------,How to build a multi-layered neural network in Python,"In my last blog post, thanks to an excellent blog post by Andrew Trask, I learned how to build a neural network for the first time. It was super simple. 9 lines of Python code modelling the behaviour of a single neuron. But what if we are faced with a more difficult problem? Can you guess what the ‘?’ should be? The trick is to notice that the third column is irrelevant, but the first two columns exhibit the behaviour of a XOR gate. If either the first column or the second column is 1, then the output is 1. However, if both columns are 0 or both columns are 1, then the output is 0. So the correct answer is 0. However, this would be too much for our single neuron to handle. This is considered a “nonlinear pattern” because there is no direct one-to-one relationship between the inputs and the output. Instead, we must create an additional hidden layer, consisting of four neurons (Layer 1). This layer enables the neural network to think about combinations of inputs. You can see from the diagram that the output of Layer 1 feeds into Layer 2. It is now possible for the neural network to discover correlations between the output of Layer 1 and the output in the training set. As the neural network learns, it will amplify those correlations by adjusting the weights in both layers. In fact, image recognition is very similar. There is no direct relationship between pixels and apples. But there is a direct relationship between combinations of pixels and apples. The process of adding more layers to a neural network, so it can think about combinations, is called “deep learning”. Ok, are we ready for the Python code? First I’ll give you the code and then I’ll explain further. 
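Below is a condensed sketch of that network (sigmoid activations, a 4-neuron hidden layer, and the error propagated back from layer 2 to layer 1); the author's exact code lives in the repository linked next.

```python
# Condensed sketch of the two-layer network described above (not the author's exact code).
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    return x * (1 - x)  # derivative expressed via the sigmoid output

training_inputs = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [0, 1, 0],
                            [1, 0, 0], [1, 1, 1], [0, 0, 0]])
training_outputs = np.array([[0, 1, 1, 1, 1, 0, 0]]).T

np.random.seed(1)
weights_1 = 2 * np.random.random((3, 4)) - 1   # layer 1: 3 inputs -> 4 hidden neurons
weights_2 = 2 * np.random.random((4, 1)) - 1   # layer 2: 4 hidden -> 1 output

for _ in range(60000):
    layer_1 = sigmoid(training_inputs @ weights_1)
    layer_2 = sigmoid(layer_1 @ weights_2)
    layer_2_error = training_outputs - layer_2
    layer_2_delta = layer_2_error * sigmoid_derivative(layer_2)
    layer_1_error = layer_2_delta @ weights_2.T          # back-propagate the error to layer 1
    layer_1_delta = layer_1_error * sigmoid_derivative(layer_1)
    weights_2 += layer_1.T @ layer_2_delta
    weights_1 += training_inputs.T @ layer_1_delta

# New situation [1, 1, 0]: the output should be close to 0
print(sigmoid(sigmoid(np.array([1, 1, 0]) @ weights_1) @ weights_2))
```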
Also available here: https://github.com/miloharper/multi-layer-neural-network This code is an adaptation from my previous neural network. So for a more comprehensive explanation, it’s worth looking back at my earlier blog post. What’s different this time, is that there are multiple layers. When the neural network calculates the error in layer 2, it propagates the error backwards to layer 1, adjusting the weights as it goes. This is called “back propagation”. Ok, let’s try running it using the Terminal command: python main.py You should get a result that looks like this: First the neural network assigned herself random weights to her synaptic connections, then she trained herself using the training set. Then she considered a new situation [1, 1, 0] that she hadn’t seen before and predicted 0.0078876. The correct answer is 0. So she was pretty close! You might have noticed that as my neural network has become smarter I’ve inadvertently personified her by using “she” instead of “it”. That’s pretty cool. But the computer is doing lots of matrix multiplication behind the scenes, which is hard to visualise. In my next blog post, I’ll visually represent our neural network with an animated diagram of her neurons and synaptic connections, so we can see her thinking. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Studied Economics at Oxford University. Founder of www.moju.io. Interested in politics and AI. Technology trends and New Invention? Follow this collection to update the latest trend! [UPDATE] As a collection editor, I don’t have any permission to add your articles in the wild. Please submit your article and I will approve. Also, follow this collection, please. " Jim Fleming,294,3,https://medium.com/jim-fleming/loading-a-tensorflow-graph-with-the-c-api-4caaff88463f?source=tag_archive---------1----------------,Loading a TensorFlow graph with the C++ API – Jim Fleming – Medium,"Check out the related post: Loading TensorFlow graphs from Node.js (using the C API). The current documentation around loading a graph with C++ is pretty sparse so I spent some time setting up a barebones example. In the TensorFlow repo there are more involved examples, such as building a graph in C++. However, the C++ API for constructing graphs is not as complete as the Python API. Many features (including automatic gradient computation) are not available from C++ yet. Another example in the repo demonstrates defining your own operations but most users will never need this. I imagine the most common use case for the C++ API is for loading pre-trained graphs to be standalone or embedded in other applications. Be aware, there are some caveats to this approach that I’ll cover at the end. Let’s start by creating a minimal TensorFlow graph and write it out as a protobuf file. Make sure to assign names to your inputs and operations so they’re easier to assign when we execute the graph later. The node’s do have default names but they aren’t very useful: Variable_1 or Mul_3. Here’s an example created with Jupyter: Let’s create a new folder like tensorflow/tensorflow/ for your binary or library to live. I’m going to call the project loader since it will be loading a graph. Inside this project folder we’ll create a new file called .cc (e.g. loader.cc). If you’re curious, the .cc extension is essentially the same as .cpp but is preferred by Google’s code guidelines. Inside loader.cc we’re going to do a few things: Now we create a BUILD file for our project. 
This tells Bazel what to compile. Inside we want to define a cc_binary for our program. You can also use the linkshared option on the binary to produce a shared library or the cc_library rule if you’re going to link it using Bazel. Here’s the final directory structure: You could also call bazel run :loader to run the executable directly, however the working directory for bazel run is buried in a temporary folder and ReadBinaryProto looks in the current working directory for relative paths. And that should be all we need to do to compile and run C++ code for TensorFlow. The last thing to cover are the caveats I mentioned: Hopefully someone can shed some light on these last points so we can begin to embed TensorFlow graphs in applications. If you are that person, message me on Twitter or email. If you’d like help deploying TensorFlow in production, I do consulting. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. CTO and lead ML engineer at Fomoro — focused on machine learning and applying cutting-edge research for businesses — previously @rdio What I’m working on. " Milo Spencer-Harper,1.8K,4,https://medium.com/deep-learning-101/how-to-generate-a-video-of-a-neural-network-learning-in-python-62f5c520e85c?source=tag_archive---------2----------------,Video of a neural network learning – Deep Learning 101 – Medium,"As part of my quest to learn about AI, I generated a video of a neural network learning. Many of the examples on the Internet use matrices (grids of numbers) to represent a neural network. This method is favoured, because it is: However, it’s difficult to understand what is happening. From a learning perspective, being able to visually see a neural network is hugely beneficial. The video you are about to see, shows a neural network trying to solve this pattern. Can you work it out? It’s the same problem I posed in my previous blog post. The trick is to notice that the third column is irrelevant, but the first two columns exhibit the behaviour of a XOR gate. If either the first column or the second column is 1, then the output is 1. However, if both columns are 0 or both columns are 1, then the output is 0. So the correct answer is 0. Our neural network will cycle through these 7 examples, 60,000 times. To speed up the video, I will only show you 13 of these cycles, pausing for a second on each frame. Why the number 13? It ensures the video lasts exactly as long as the music. Each time she considers an example in the training set, you will see her think (you will see her neurons and her synaptic connections glow). She will then calculate the error (the difference between the output and the desired output). She will then propagate this error backwards, adjusting her synaptic connections. Green synaptic connections represent positive weights (a signal flowing through this synapse will excite the next neuron to fire). Red synaptic connections represent negative weights (a signal flowing through this synapse will inhibit the next neuron from firing). Thicker synapses represent stronger connections (larger weights). In the beginning, her synaptic weights are randomly assigned. Notice how some synapses are green (positive) and others are red (negative). If these synapses turn out to be beneficial in calculating the right answer, she will strengthen them over time. However, if they are unhelpful, these synapses will wither. It’s even possible for a synapse which was originally positive to become negative, and vice versa. 
An example of this, is the first synapse into the output neuron — early on in the video it turns from red to green. In the beginning her brain looks like this: Did you notice that all her neurons are dark? This is because she isn’t currently thinking about anything. The numbers to the right of each neuron, represent the level of neural activity and vary between 0 and 1. Ok. Now she is going to think about the pattern we saw earlier. Watch the video carefully to see her synapses grow thicker as she learns. Did you notice how I slowed the video down at the beginning, by skipping only a small number of cycles? When I first shot the video, I didn’t do this. However, I realised that learning is subject to the ‘Law of diminishing returns’. The neural network changes more rapidly during the initial stage of training, which is why I slowed this bit down. Now that she has learned about the pattern using the 7 examples in the training set, let’s examine her brain again. Do you see how she has strengthened some of her synapses, at the expense of others? For instance, do you remember how the third column in the training set is irrelevant in determining the answer? You can see she has discovered this, because the synapses coming out of her third input neuron have almost withered away, relative to the others. Let’s give her a new situation [1, 1, 0] to think about. You can see her neural pathways light up. She has estimated 0.01. The correct answer is 0. So she was very close! Pretty cool. Traditional computer programs can’t learn. But neural networks can learn and adapt to new situations. Just like the human mind! How did I do it? I used the Python library matplotlib, which provides methods for drawing and animation. I created the glow effects using alpha transparency. You can view my full source code here: Thanks for reading! If you enjoyed reading this article, please click the heart icon to ‘Recommend’. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Studied Economics at Oxford University. Founder of www.moju.io. Interested in politics and AI. Fundamentals and Latest Developments in #DeepLearning " Christian Hernandez,364,7,https://medium.com/crossing-the-pond/into-the-age-of-context-f0aed15171d7?source=tag_archive---------3----------------,Into the Age of Context – Crossing the Pond – Medium,"I spent most of my early career proclaiming that “This!” was the “year of mobile”. The year of mobile was actually 2007 when the iPhone launched and accelerated a revolution around mobile computing. As The Economist recently put it “Just eight years later Apple’s iPhone exemplifies the early 21st century’s defining technology.” It’s not a question of whether Smartphones have become our primary computing interaction device, it’s a question of by how much relative to other interaction mediums. So let’s agree that we are currently living in the Era of Mobile. Looking forward to the next 5 year though, I personally believe we will move from the Era of Mobile to the Age of Context. (credit to Robert Scoble and Shel Israel for their book with that same term). Let me first define what I mean by Age of Context. In the Age of Context personal data (ex: calendar and email, location and time) is integrated with publicly available data (ex: traffic data, pollution level) and app-level data (ex: Uber surge pricing, number of steps tracked by my FitBit) to intelligently drive me towards an action (ex: getting me to walk to my next meeting instead of ordering a car). 
It is an age in which we, and the devices and sensors around us, generate massive reams of data and in which self-teaching algorithms drill into that data to derive insight and recommend or auto-generate an action. It is an era in which our biological computational capacity and actions, are enhanced (and improved) by digital services. The Age of Context is being brought about by a number of technology trends which have been accelerating in a parallel and are now coming together. The first, and most obvious trend, is the proliferation of supercomputers in our pockets. Industry analysts forecast 1.87 billion phones will be shipped by 2018. These devices carry not only a growing amount of processing power, but also the ecosystem of applications and services which integrate with sensors and functionality on the device to allow us to, literally, remote control our life. In the evolution from the current Era of Mobile to the future Age of Context, the supercomputers in our pocket evolve from information delivery and application interaction layers, to notification context-aware action drivers. Smartphones will soon be complemented by wearable computing devices (be that the Apple Watch or a future evolution of Google Glass). These new form factors are ideally suited for an Era in which data needs to be compiled into succinct notifications and action enablers. In the last 10 years, the “web” has evolved into a social web on top of which identities and deep insight into each of us powers services and experiences. It allows Goodreads to associate books with my identity, Vivino to determine that I like earthy red wines, Unilever to best target me for an ad on Facebook and Netflix to mine my data to then commission a show it knows I will like. This identity layer is now being overlayed with a financial layer in which, associated with my digital identity, I also have a secure digital payment mechanism. This transactional financial layer will begin to enable seamless transactions. In the Age of Context, the Starbuck app will know that I usually emerge from the tube at 9:10am and walk to their local store to order a “tall Americano, extra shot.” At 9:11, as I reach street level my phone, or watch, or wearable computing device will know where I am (close to Starbucks and to the office), know my routine, have my payment information stored and simply generate an action-driver that says “Tall Americano, extra shot. Order?” A few minutes later I can pick up my coffee, which has already been paid for. These services are already possible today. A parallel and accelerating trend which will power the Age of Context is the proliferation of intelligent and connected sensors around us. Call that Internet of Things or call it simply a democratization and consumerization of devices that capture data (for now) and act on data (eventually). While the end number varies, industry analysts all believe the number of connected devices starts to get very big very fast. Gartner predicts that by 2020 there will be 25 billion connected devices with the vast majority of those being consumer-centric. Today my Jawbone is a fairly basic data collection device. It knows that I walked 8,000 steps and slept too little, but it doesn’t drive me to action other than providing me with a visualization of the data. In the Age of Context this will change, as larger and larger data sets of sensor data, combined with other data combined with intelligent analytics allows data to become actionable. 
In the future my Jawbone won’t simply count my steps, it will also be able to integrate with other data sets to generate personal health insights. It will have tracked over time that my blood pressure rises every morning at 9:20 after I have consumed the third coffee of the day. Comparing my blood rate to thousands of others of my age range and demographic background it will know that the levels are unhealthy and it will help me take a conscious decision not to consume that extra coffee through a notification. Data will derive insight and that insight will, hopefully, drive action. One could argue that the parallel trends of mobile, sensors and the social web are already mainstream. What then is bringing them together to weave the Age of Context? The glue is data. The massive amounts of data the growing number of internet users and connected devices generate each day. More critically, the cost of storing this data has dropped to nearly zero. Deloitte estimated that in 1992 the cost of storing a Gigabyte of data was $569 and that by 2012 the cost had dropped to $0.03. But data by itself is just bits and bytes. The second key trend that is weaving the Age of Context is the breakthroughs in algorithms and models to analyze this data in close-to-real-time. For the Age of Context to come about, systems must know how to query and how to act on all the possible contextual data points to drive the simplified actions outlined in the examples above. The advances (and investment) into machine learning and AI are the final piece of the puzzle needed to turn data from information to action. The most visible example of the Age of Context today is Google Now. Google has a lot of information about me: it knows what “work” is as I spend most of the time there between 9am and 7pm, it knows what “home” is as I spend most of the evenings there. Since I use Google Apps it knows what my first meeting is. Since I search for Duke Basketball on a regular basis it knows I care about the scores. Since I usually take the tube and Google has access to the London TfL data, it knows that I will be late to my next meeting. But even though Google Now recently opened up its API to third party developers, it is still fairly Google-biased and Google-optimized. For the Age of Context to thrive the platforms that power it must be interlinked across data and applications. Whether this age comes about through intelligent agents (like Siri or Viv or the character from Her) or a “meta-app” layer sitting across vertical apps and services is still unclear. The missing piece for much of this to come about is a common meta-language for vertical and punctual apps to share data and actions. This common language will likely be an evolution of the various deep-linking standards being developed. Facebook has a flavour, Android has a flavour, and a myriad of startups have flavours. An emerging standard will not only enable the Age of Context but also probably crown the champion of this new era as the standard will also own the interactions, the interlinkages and the paths to monetization across devices and experiences. The trends above are all happening around us, the standards and algorithms are all being built by brilliant minds across the world, the interface layers and devices are already with us. The Age of Context is being created at an accelerating pace and I can’t wait to see what gets built and how our day to day lives are enhanced by this new era. Thanks to John Henderson for his feedback and thoughts on this post. 
From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Co-Founder and Managing Partner @whitestarvc Former product and mobile guy at smallish companies that became big. Salvadoran-born Londoner. #YGL of the @wef Stories from the White Star Capital team and our portfolio companies on entrepreneurship and scaling globally " Venture Scanner,207,5,https://medium.com/@VentureScanner/the-state-of-artificial-intelligence-in-six-visuals-8bc6e9bf8f32?source=tag_archive---------4----------------,The State of Artificial Intelligence in Six Visuals,"We cover many emerging markets in the startup ecosystem. Previously, we published posts that summarized Financial Technology, Internet of Things, Bitcoin, and MarTech in six visuals. This week, we do the same with Artificial Intelligence (AI). At this time, we are tracking 855 AI companies across 13 categories, with a combined funding amount of $8.75billion. To see all of our AI related posts, check out our blog! The six Artificial Intelligence visuals below help make sense of this dynamic market: Deep Learning/Machine Learning Applications: Machine learning is the technology of computer algorithms that operate based on its learnings from existing data. Deep learning is a subset of machine learning that focuses on deeply layered neural networks. The following companies utilize deep learning/machine learning technology in a specific way or use-case in their products. Computer Vision/Image Recognition: Computer vision is the method of processing and analyzing images to understand and produce information from them. Image recognition is the process of scanning images to identify objects and faces. The following companies either build computer vision/image recognition technology or utilize it as the core offering in their products. Deep Learning/Machine Learning (General): Machine learning is the technology of computer algorithms that operate based on its learning from existing data. Deep learning is a subset of machine learning that focuses on deeply layered neural networks. The following companies either build deep learning/machine learning technology or utilize it as the core offering of their products. Natural Language Processing: Natural language processing is the method through which computers process human language input and convert into understandable representations to derive meaning from them. The following companies either build natural language processing technology or utilize it as the core offering in their products (excluding all speech recognition companies). Smart Robots: Smart robot companies build robots that can learn from their experience and act and react autonomously based on the conditions of their environment. Virtual Personal Assistants: Virtual personal assistants are software agents that use artificial intelligence to perform tasks and services for an individual, such as customer service, etc. Natural Language Processing (Speech Recognition): Speech recognition is a subset of natural language processing that focuses on processing a sound clip of human speech and deriving meaning from it. Computer Vision/Image Recognition: Computer vision is the method of processing and analyzing images to understand and produce information from them. Image recognition is the process of scanning images to identify objects and faces. The following companies utilize computer vision/image recognition technology in a specific way or use-case in their products. 
Recommendation Engines and Collaborative Filtering: Recommendation engines are systems that predict the preferences and interests of users for certain items (movies, restaurants) and deliver personalized recommendations to them. Collaborative filtering is a method of predicting a user’s preferences and interests by collecting the preference information from many other similar users. Gesture Control: Gesture control is the process through which humans interact and communicate with computers through gestures, which are recognized and interpreted by the computers. Video Automatic Content Recognition: Video automatic content recognition is the process through which the computer compares a sampling of video content with a source content file to identify what the content is through its unique characteristics. Context Aware Computing: Context aware computing is the process through which computers become aware of their environment and their context of use, such as location, orientation and lighting, and adapt their behavior accordingly. Speech to Speech Translation: Speech to speech translation is the process through which human speech in one language is processed by the computer and translated into another language instantly. The bar graph above summarizes the number of companies in each Artificial Intelligence category to show which are dominating the current market. Currently, the “Deep Learning/Machine Learning Applications” category is leading the way with a total of 200 companies, followed by “Natural Language Processing (Speech Recognition)” with 130 companies. The bar graph above summarizes the average company funding per Artificial Intelligence category. Again, the “Deep Learning/Machine Learning Applications” category leads the way with an average of $13.8M per funded company. The graph above compares total venture funding in Artificial Intelligence to the number of companies in each category. “Deep Learning/Machine Learning Applications” seems to be the category with the most traction. The following infographic is an updated heat map indicating where Artificial Intelligence startups exist across 62 countries. Currently, the United States is leading the way with 415 companies. The United Kingdom is second with 67 companies, followed by Canada with 29. The bar graph above summarizes Artificial Intelligence by median age of category. The “Speech Recognition” and “Video Content Recognition” categories have the highest median age at 8 years, followed by “Computer Vision (General)” at 6.5 years. As Artificial Intelligence continues to develop, so too will its moving parts. We hope this post provides some big-picture clarity on this booming industry. Venture Scanner enables corporations to research, identify, and connect with the most innovative technologies and companies. We do this through a unique combination of our data, technology, and expert analysts. If you have any questions, reach out to info@venturescanner.com. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Technology and analyst powered research firm. Visit us at www.venturescanner.com. 
" Illia Polosukhin,108,3,https://medium.com/@ilblackdragon/tensorflow-tutorial-part-2-9ffe47049c92?source=tag_archive---------5----------------,Tensorflow Tutorial — Part 2 – Illia Polosukhin – Medium,"In the previous Part 1 of this tutorial, I introduced a bit of TensorFlow and Scikit Flow and showed how to build a simple logistic regression model on Titanic dataset. In this part let’s go deeper and try multi-layer fully connected neural networks, writing your custom model to plug into the Scikit Flow and top it with trying out convolutional networks. Of course, there is not much point of yet another linear/logistic regression framework. An idea behind TensorFlow (and many other deep learning frameworks) is to be able to connect differentiable parts of the model together and optimize them given the same cost (or loss) function. Scikit Flow already implements a convenient wrapper around TensorFlow API for creating many layers of fully connected units, so it’s simple to start with deep model by just swapping classifier in our previous model to the TensorFlowDNNClassifier and specify hidden units per layer: This will create 3 layers of fully connected units with 10, 20 and 10 hidden units respectively, with default Rectified linear unit activations. We will be able to customize this setup in the next part. I didn’t play much with hyperparameters, but previous DNN model actually yielded worse accuracy then a logistic regression. We can explore if this is due to overfitting on under-fitting in a separate post. For the sake of this example, I though want to show how to switch to the custom model where you can have more control. This model is very similar to the previous one, but we changed the activation function from a rectified linear unit to a hyperbolic tangent (rectified linear unit and hyperbolic tangent are most popular activation functions for neural networks). As you can see, creating a custom model is as easy as writing a function, that takes X and y inputs (which are Tensors) and returns two tensors: predictions and loss. This is where you can start learning TensorFlow APIs to create parts of sub-graph. What kind of TensorFlow tutorial would this be without an example of digit recognition? :) This is just an example how you can try different types of datasets and models, not limiting to only floating number features. Here, we take digits dataset and write a custom model: We’ve created conv_model function, that given tensor X and y, runs 2D convolutional layer with the most simple max pooling — just maximum. The result is passed as features to skflow.models.logistic_regression, which handles classification to required number of classes by attaching softmax over classes and computing cross entropy loss. It’s easy now to modify this code to add as many layers as you want (some of the state-of-the-art image recognition models are hundred+ layers of convolutions, max pooling, dropout and etc). The Part 3 is expanding the model for Titanic dataset with handling categorical variables. PS. Thanks to Vlad Frolov for helping with missing articles and pointing mistakes in the draft :) From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Co-Founder @ NEAR.AI — teaching machines to code. I’m tweeting as @ilblackdragon. " Derrick Harris,124,9,https://medium.com/s-c-a-l-e/how-baidu-mastered-mandarin-with-deep-learning-and-lots-of-data-1d94032564a5?source=tag_archive---------6----------------,Baidu explains how it’s mastering Mandarin with deep learning,"On Aug. 
8 at the International Neural Network Society conference on big data in San Francisco, Baidu senior research engineer Awni Hannun presented on a new model that the Chinese search giant has developed for handling voice queries in Mandarin. The model, which is accurate 94 percent of the time in tests, is based on a powerful deep learning system called Deep Speech that Baidu first unveiled in December 2014. In this lightly edited interview, Hannun explains why his new research is important, why Mandarin is such a tough language to learn and where we can expect to see future advances in deep learning methods. SCALE: How accurate is Deep Speech at translating Mandarin? AWNI HANNUN: It has a 6 percent character error rate, which essentially means that it gets wrong 6 out of 100 characters. To put that in context, this is in my opinion — and to the best of our lab’s knowledge — the best system at transcribing Mandarin voice queries in the world. In fact, we ran an experiment where we had a few people at the lab who speak Chinese transcribe some of the examples that we were testing the system on. It turned out that our system was better at transcribing examples than they were — if we restricted it to transcribing without the help of the internet and such things. What is it about Mandarin that makes it such a challenge compared with other languages? There are a couple of differences with Mandarin that made us think it would be very difficult to have our English speech system work well with it. One is that it’s a tonal language, so when you say a word in a different pitch, it changes the meaning of the word, which is definitely not the case in English. In traditional speech recognition, it’s actually a desirable property that there is some pitch invariance, which essentially means that it tries to ignore pitch when it does the transcription. So you have to change a bunch of things to get a system to work with Mandarin, or any Chinese for that matter. However, for us, it was not the case that we had to change a whole bunch of things, because our pipeline is much simpler than the traditional speech pipeline. We don’t do a whole lot of pre-processing on the audio in order to make it pitch-invariant, but rather just let the model learn what’s relevant from the data to most effectively transcribe it properly. It was actually able to do that fine in Mandarin without having to change the input. The other thing that is very different about Chinese — Mandarin, in this case — is the character set. The English alphabet is 26 letters, whereas in Chinese it’s something like 80,000 different characters. Our system directly outputs a character at a time as it’s building its transcription, so we speculated it would be very challenging to have to do that on 80,000 characters at each step versus 26. That’s a challenge we were able to overcome just by using characters that people commonly say, which is a smaller subset. Baidu has been handling a fairly high volume of voice searches for a while now. How is the Deep Speech system better than the previous system for handling queries in Mandarin? Baidu has a very active system for voice search in Mandarin, and it works pretty well. I think in terms of total query activity, it’s still a relatively small percentage. We want to make that share larger, or at least enable people to use it more by making the accuracy of the system better. 
Can you describe the difference between a search-based system like Deep Speech and something like Microsoft’s Skype Translate, which is also based on deep learning? Typically, the way it’s done is there are three modules in the pipeline. The first is a speech-transcription module, the second is the machine-translation module and the third would be the speech-synthesis module. What we’re talking about, specifically, is just the speech-transcription module, and I’m sure Microsoft has one as part of Skype Translate. Our system is different than that system in that it’s more what we call end-to-end. Rather than having a lot of human-engineered components that have been developed over decades of speech research — by looking at the system and saying what what features are important or which phonemes the model should predict — we just have some input data, which is an audio .WAV file on which we do very little pre-processing. And then we have a big, deep neural network that outputs directly to characters. We give it enough data that it’s able to learn what’s relevant from the input to correctly transcribe the output, with as little human intervention as possible. One thing that’s pleasantly surprising to us is that we had to do very little changing to it — other than scaling it and giving it the right data — to make this system we showed in December that worked really well on English work remarkably well in Chinese, as well. What’s the usual timeline to get this type of system from R&D into production? It’s not an easy process, but I think it’s easier than the process of getting a model to be very accurate — in the sense that it’s more of an engineering problem than a research problem. We’re actively working on that now, and I’m hopeful our research system will be in production in the near term. Baidu has plans — and products — in other areas, including wearables and other embedded forms of speech recognition. Does the work you’re doing on search relate to these other initiatives? We want to build a speech system that can be used as the interface to any smart device, not just voice search. It turns out that voice search is a very important part of Baidu’s ecosystem, so that’s one place we can have a lot of impact right now. Is the pace of progress and significant advances in deep learning as fast it seems? I think right now, it does feel like the pace is increasing because people are recognizing that if you take tasks where you have some input and are trying to produce some output, you can apply deep learning to that task. If it was some old machine learning task such as machine translation or speech recognition, which has been heavily engineered for the past several decades, you can make significant advances if you try to simplify that pipeline with deep learning and increase the amount of data. We’re just on the crest of that. In particular, processing sequential data with deep learning is something that we’re just figuring out how to do really well. We’ve come up with models that seem to work well, and we’re at the point where we’re going to start squeezing a lot of performance out of these models. And then you’ll see that right and left, benchmarks will be dropping when it comes to sequential data. Beyond that, I don’t know. It’s possible we’ll start to plateau or we’ll start inventing new architectures to do new tasks. I think the moral of this story is: Where there’s a lot of data and where it makes sense to use a deep learning model, success is with high probability going to happen. 
That’s why it feels like progress is happening so rapidly right now. It really becomes a story of “How can we get right data?” when deep learning is involved. That becomes the big challenge. Architecturally, Deep Speech runs on a powerful GPU-based system. Where are the opportunities to move deep learning algorithms onto smaller systems, such as smartphones, in order to offload processing from Baidu’s (or anyone else’s) servers? That’s something I think about a lot, actually, and I think the future is bright in that regard. It’s certainly the case that deep learning models are getting bigger and bigger but, typically, it also has also been the case that the size and expressivity of the model is more necessary during training than it is during testing. There has been a lot of work that shows that if you take a model that has been trained at, say, 32-bit floating point precision and then compress it to 8-bit fixed point precision, it works just as well at test time. Or it works almost as well. You can reduce it by a factor of four and still have it work just as well. There’s also a lot of work in compressing existing models, like how can we take a giant model that we’ve trained to soak up a lot of data and then, say, train another, much smaller model to duplicate what that large model does. But that small model we can actually put into an embedded device somewhere. Often, the hard part is in training the system. In those cases, it needs to be really big and the servers have to be really beefy. But I do think there’s a lot of promising work with which we can make the models a lot smaller and there’s a future in terms of embedding them in different places. Of course, something like search has to go back to cloud servers unless you’ve somehow indexed the whole web on your smartphone, right? Yeah, that would be challenging. For some additional context on just how powerful a system Deep Speech is — and why Baidu puts so much emphasis on systems architecture for its deep learning efforts — consider this explanation offered by Baidu systems research scientist Bryan Catanzaro: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder/editor/writer of ARCHITECHT. Day job is running content at Replicated. Formerly at Gigaom/Mesosphere/Fortune. What’s next in computing, told by the people behind the software " Kyle McDonald,109,6,https://medium.com/@kcimc/comparing-artificial-artists-7d889428fce4?source=tag_archive---------7----------------,Comparing Artificial Artists – Kyle McDonald – Medium,"Last Wednesday, “A Neural Algorithm of Artistic Style” was posted to ArXiv, featuring some of the most compelling imagery generated by deep convolutional neural networks (DCNNs) since Google Research’s “DeepDream” post. On Sunday, Kai Sheng Tai posted the first public implementation. I immediately stopped working on my implementation and started playing with his. Unfortunately, his results don’t quite match the paper, and it’s unclear why. I’m just getting started with this topic, so as I learn I want to share my understanding of the algorithm here, along with some results I got from testing his code. In two parts, the paper describes an algorithm for rendering a photo in the style of a given painting: 2. Instead of trying to match the activations exactly, try to match the correlation of the activations. They call this “style reconstruction”, and depending on the layer you reconstruct you get varying levels of abstraction. 
The correlation feature they use is called a Gram matrix: the dot product between the vectorized feature activation matrix and its transpose. If this sounds confusing, see the footnotes. Finally, instead of optimizing for just one of these things, they optimize for both simultaneously: the style of one image, and the content of another image. Here is an attempt to recreate the results from the paper using Kai’s implementation: Not quite the same, and possibly explained by a few differences between Kai’s implementation and the original paper: As a final comparison, consider the images Andrej Karpathy posted from his own implementation. The same large-scale, high-level features are missing here, just like in the style reconstruction of “Seated Nude” above. Beside’s Kai’s, I’ve seen one more implementation from a PhD student named Satoshi: a brief example in Python with Chainer. I haven’t spent as much time with it, as I had to adapt it to run on my CPU due to lack of memory. But I did notice: After running Tübingen in the style of The Starry Night with a 1:10e3 ratio and 100 iterations, it seems to converge on something matching the general structure but lacking the overall palette: I’d like to understand this algorithm well enough to generalize it to other media (mainly thinking about sound right now), so if you have an insights or other implementations please share them in the comments! I’ve started testing another implementation that popped up this morning from Justin Johnson. His follow the original paper very closely, except for using unequal weights when balancing different layers used for style reconstruction. All the following examples were run for 100 iterations with the default ratio of 1:10e0. Justin switched his implementation to use L-BFGS and equally weighted layers, and to my eyes this matches the results in the original paper. Here are his results for one of the harder content/style pairs: Other implementations that look great, but I haven’t tested enough: The definition of the Gram matrix confused me at first, so I wrote it out as code. Using a literal translation of equation 3 in the paper, you would write in Python, with numpy: It turns out that the original description is computed more efficiently than this literal translation. For example, Kai writes in Lua, with Torch: Satoshi computes it for all the layers simultaneously in Python with Chainer: Or again in Python, with numpy and Caffe layers: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Artist working with code. " Jim Fleming,165,4,https://medium.com/jim-fleming/highway-networks-with-tensorflow-1e6dfa667daa?source=tag_archive---------8----------------,Highway Networks with TensorFlow – Jim Fleming – Medium,"This week I implemented highway networks to get an intuition for how they work. Highway networks, inspired by LSTMs, are a method of constructing networks with hundreds, even thousands, of layers. Let’s see how we construct them using TensorFlow. TL;DR Fully-connected highway repo and convolutional highway repo. For comparison, let’s start with a standard fully-connected (or “dense”) layer. We need a weight matrix and a bias vector then we’ll compute the following for the layer output: Here’s what a dense layer looks like as a graph in TensorBoard: For the highway layer what we want are two “gates” that control the flow of information. 
The “transform” gate controls how much of the activation we pass through and the “carry” gate controls how much of the unmodified input we pass through. Otherwise, the layer largely resembles a dense layer with a few additions: What happens is that when the transform gate is 1, we pass through our activation (H) and suppress the carry gate (since it will be 0). When the carry gate is 1, we pass through the unmodified input (x), while the activation is suppressed. Here’s what the highway layer graph looks like in TensorBoard: Using a highway layer in a network is also straightforward. One detail to keep in mind is that consecutive highway layers must be the same size, but you can use fully-connected layers to change dimensionality. This becomes especially complicated in convolutional layers where each layer can change the output dimensions. We can use padding (‘SAME’) to maintain each layer’s dimensionality. That aside, by simply using hyperparameters from the TensorFlow docs (i.e. no hyperparameter search), the fully-connected highway network performed much better than a plain fully-connected network. Using MNIST as my simple trial: Now that we have a highway network, I wanted to answer a few questions that came up for me while reading the paper. For instance, how deep can the network go and still converge? The paper briefly mentions 1000 layers: Can we train with 1000 layers on MNIST? Yes, also reaching around 95% accuracy. Try it out with a carry bias around -20.0 for MNIST (from the paper the network will only utilize ~15 layers anyway). The network can probably even go deeper since it’s just learning to carry the last 980 layers or so. We can’t do much that’s useful at or past 1000 layers so that seems sufficient for now. What happens if you set very low or very high carry biases? In either extreme the network simply fails to converge in a reasonable amount of time. In the case of low biases (more positive), the network starts as if the carry gates aren’t present at all. In the case of high biases (more negative), we’re putting more emphasis on carrying and the network can take a long time to overcome that. Otherwise, the biases don’t seem to need to be exact, at least on this simple example. When in doubt, start with high biases (more negative), since it’s easier to learn to overcome carrying than to learn without carry gates (which is just a plain network). Overall I was happy with how easy highway networks were to implement. They’re fully differentiable with only a single additional hyperparameter for the initial carry bias. One downside is that highway layers do require additional parameters for the transform weights and biases. However, since we can go deeper, the layers do not need to be as wide, which can compensate. Here are the complete notebooks if you want to play with the code: fully-connected highway repo and convolutional highway repo. Follow me on Twitter for more posts like these. If you’d like help building very deep networks in production, I do consulting. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. CTO and lead ML engineer at Fomoro — focused on machine learning and applying cutting-edge research for businesses — previously @rdio What I’m working on. 
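To make the gating in the highway-networks write-up above concrete, here is a minimal NumPy sketch of a single fully-connected highway layer. The ReLU/sigmoid choices, the layer size and the -1.0 gate bias are illustrative assumptions rather than the post’s actual code; the carry gate is taken as C = 1 - T, consistent with the description above:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_h, b_h, W_t, b_t):
    # H: the ordinary dense-layer activation (ReLU chosen here just for illustration).
    H = np.maximum(0.0, W_h @ x + b_h)
    # T: transform gate in (0, 1); the carry gate is C = 1 - T.
    T = sigmoid(W_t @ x + b_t)
    C = 1.0 - T
    # Blend the transformed activation with the unmodified input.
    return H * T + x * C

# Toy usage: consecutive highway layers must share the same size (d = 8 here).
d = 8
x = np.random.randn(d)
W_h, b_h = 0.1 * np.random.randn(d, d), np.zeros(d)
# A negative gate bias keeps T small at first, so early layers mostly carry their input,
# matching the post's advice to start with a more negative carry bias.
W_t, b_t = 0.1 * np.random.randn(d, d), np.full(d, -1.0)
y = highway_layer(x, W_h, b_h, W_t, b_t)

Stacking repeated calls to highway_layer (with separate weights per layer) is how the very deep networks in the post are built; the dimensionality has to stay fixed between consecutive layers, exactly as noted above.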
" Nathan Benaich,264,10,https://medium.com/@NathanBenaich/investing-in-artificial-intelligence-a-vc-perspective-afaf6adc82ea?source=tag_archive---------9----------------,Investing in Artificial Intelligence – Nathan Benaich – Medium,"My (expanded) talking points from a presentation I gave at the Re.Work Investing in Deep Learning dinner in London on 1st December 2015. TL;DR Check out the slides here. It’s my belief that artificial intelligence is one of the most exciting and transformative opportunities of our time. There’s a few reasons why that’s so. Consumers worldwide carry 2 billion smartphones, they’re increasingly addicted to these devices and 40% of the world is online (KPCB). This means we’re creating new data assets that never existed before (user behavior, preferences, interests, knowledge, connections). The costs of compute and storage are both plummeting by orders of magnitude, while the computational capacity of today’s processors is growing. We’ve seen improvements in learning methods, architectures and software infrastructure. The pace of innovation can therefore only be accelerating. Indeed, we don’t fully appreciate what tomorrow will look and feel like. AI-driven products are already out in the wild and improving the performance of search engines, recommender systems (e.g. e-commerce, music), ad serving and financial trading (amongst others). Companies with the resources to invest in AI are already creating an impetus for others to follow suit or risk not having a competitive seat at the table. Together, therefore, the community has a better understanding and is equipped with more capable tools with which to build learning systems for a wide range of increasingly complex tasks. More on this discussion here. A key consideration, in my view, is that the open sourcing of technologies by large incumbents (Google, Microsoft, Intel, IBM) and the range of companies productising technologies for cheap means that technical barriers are eroding fast. What ends up moving the needle are: proprietary data access/creation, experienced talent and addictive products. Operational Commercial Financial There are two big factors that make involving the user in an AI-driven product paramount. 1) Machines don’t yet recapitulate human cognition. In order to pick up where software falls short, we need to call on the user for help. 2) Buyers/users of software products have more choice today than ever. As such, they’re often fickle (avg. 90-day retention for apps is 35%). Returning expected value out of the box is key to building habits (hyperparameter optimisation can help). Here are some great examples of products which prove that involving the user-in-the-loop improves performance: We can even go a step further, I think, by explaining how machine-generated results are obtained. For example, IBM Watson surfaces relevant literature when supporting a patient diagnosis in the oncology clinic. Doing so improves user satisfaction and helps build confidence in the system to encourage longer term use and investment. Remember, it’s generally hard for us to trust something we don’t truly understand. To put this discussion into context, let’s first look at the global VC market. Q1-Q3 2015 saw $47.2bn invested, a volume higher than each of the full year totals for 17 of the last 20 years (NVCA). We’re likely to breach $55bn by year end. There are circa 900 companies working in the AI field, most of which tackle problems in business intelligence, finance and security. 
Q4 2014 saw a flurry of deals into AI companies started by well respected and achieved academics: Vicarious, Scaled Inference, MetaMind and Sentient Technologies. So far, we’ve seen circa 300 deals into AI companies (defined as businesses whose description includes keywords: artificial intelligence, machine learning, computer vision, NLP, data science, neural network, deep learning from Jan 1st 2015 thru 1st Dec 2015, CB Insights). In the UK, companies like Ravelin, Signal and Gluru raised seed rounds. Circa $2bn was invested, albeit bloated by large venture debt or credit lines for consumer/business loan providers Avant ($339m debt+credit), ZestFinance ($150m debt), LiftForward ($250m credit) and Argon Credit ($75m credit). Importantly, 80% of deals were < $5m in size and 90% of the cash was invested into US companies vs. 13% in Europe. 75% of rounds were in the US. The exit market has seen 33 M&A transactions and 1 IPO (Adgorithms on the LSE). Six events were for European companies, 1 in Asia and the rest were accounted for by American companies. The largest transactions were TellApart/Twitter ($532m; $17m raised), Elastica/Blue Coat Systems ($280m; $45m raised) and SupersonicAds/IronSource ($150m; $21m raised), which return solid multiples of invested capital. The remaining transactions were mostly for talent, given that median team size at the time of the acquisition was 7ppl median. Altogether, AI investments will have accounted for circa 5% of total VC investments for 2015. That’s higher than the 2% claimed in 2013, but still tracking far behind competing categories like adtech, mobile and BI software. The key takeaway points are a) the financing and exit markets for AI companies are still nascent, as exemplified by the small rounds and low deal volumes, and b) the vast majority of activity takes place in the US. Businesses must therefore have exposure to this market. I spent a number of summers in university and 3 years in grad school researching the genetic factors governing the spread of cancer around the body. A key takeaway I left with is the following: therapeutic development is a very challenging, expensive, lengthy, regulated and ultimately offers a transient solution to treating disease. Instead, I truly believe that what we need to improve healthcare outcomes is granular and longitudinal monitoring of physiology and lifestyle. This should enable early detection of health conditions in near real-time, drive down cost of care over a patient’s lifetime, while consequently improving outcomes. Consider the digitally connected lifestyles we lead today. The devices some of us interact with on a daily basis are able to track our movements, vital signs, exercise, sleep and even reproductive health. We’re disconnected for fewer hours of the day than we’re online and I think we’re less apprehensive to storing various data types in the cloud (where they can be accessed, with consent, by 3rd parties). Sure, the news might paint a different, but the fact is that we’re still using the web and it’s wealth of products. On a population level, therefore, we have the chance to interrogate data sets that have never before existed. From these, we could glean insights into how nature and nurture influence the genesis and development of disease. That’s huge. Look at today’s clinical model: a patient presents into the hospital when they feel something is wrong. The doctor has to conduct a battery of tests to derive a diagnosis. 
These tests address a single (often late stage) time point, at which moment little can be done to reverse damage (e.g. in the case of cancer). Now imagine the future. In a world of continuous, non-invasive monitoring of physiology and lifestyle, we could predict disease onset and outcome, understand which condition a patient likely suffers from and how they’ll respond to various therapeutic modalities. There’s loads of applications for artificial intelligence here: intelligence sensors, signal processing, anomaly detection, multivariate classifiers, deep learning on molecular interactions... Some companies are already hacking away at this problem: A point worth noting is that the UK has a slight leg up on the data access front. Initiatives like the UK Biobank (500k patient records), Genomics England (100k genomes sequenced), HipSci (stem cells) and the NHS care.data programme are leading the way in creating centralised data repositories for public health and therapeutic research. Cheers for pointing out, Hari Arul. Could businesses ever conceivably run themselves? AI-enabled automation of knowledge work could cut employment costs by $9tn by 2020 (BAML). Coupled to the efficiency gains worth $1.9tn driven by robots, I reckon there’s a chance for near complete automation of core, repetitive businesses functions in the future. Think of all the productised SaaS tools that are available off the shelf for CRM, marketing, billing/payments, logistics, web development, customer interactions, finance, hiring and BI. Then consider tools like Zapier or Tray.io, which help connect applications and program business logic. These could be further expanded by leveraging contextual data points that inform decision making. Perhaps we could eventually re-image the new eBay, where you’ll have fully automated inventory procurement, pricing, listing generation, translation, recommendations, transaction processing, customer interaction, packaging, fulfilment and shipping. Of course, probably a ways off :) I’m bullish on the value to be created with artificial intelligence across our personal and professional lives. I think there’s currently low VC risk tolerance for this sector, especially given shortening investment horizons for value to be created. More support is needed for companies driving long term innovation, especially that far less is occurring within Universities. VC was born to fund moonshots. We must remember that access to technology will, over time, become commoditised. It’s therefore key to understand your use case, your user, the value you bring and how it’s experience and assessed. This gets to the point of finding a strategy to build a sustainable advantage such that others find it hard to replicate your offering. Aspects of this strategy may in fact be non-AI and non-technical in nature (e.g. the user experience layer — thanks for highlighting this Hari Arul). As such, there’s a renewed focused on core principles: build a solution to an unsolved/poorly served high-value, persistent problem for consumers or businesses. Finally, you must have exposure to the US market where the lion’s share of value is created and realised. We have an opportunity to catalyse the growth of the AI sector in Europe, but not without keeping close tabs on what works/doesn’t work across the pond first-hand. Working in the space? We’d love to get to know you :) Sign up to my newsletter covering AI news and analysis from the tech world, research lab and private/public company market. 
I’m an investor at Playfair Capital, a London-based investment firm focusing on early stage technology companies that change the way we live, work and play. We invest across Europe and the US and our focus is on core technologies and user experiences. 25% of our portfolio is AI: Mapillary, DueDil, Jukedeck, Seldon, Clarify, Gluru and Ravelin. We want to take risk on technologists creating new markets or reinventing existing ones. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Advancing human progress with intelligent systems. Venture Partner @PointNineCap. Former scientist, photographer, perpetual foodie. nathan.ai @LDN_AI @TwentyBN " Tal Perry,2.6K,17,https://medium.com/@TalPerry/deep-learning-the-stock-market-df853d139e02?source=tag_archive---------3----------------,Deep Learning the Stock Market – Tal Perry – Medium,"Update 25.1.17 — Took me a while but here is an ipython notebook with a rough implementation In the past few months I’ve been fascinated with “Deep Learning”, especially its applications to language and text. I’ve spent the bulk of my career in financial technologies, mostly in algorithmic trading and alternative data services. You can see where this is going. I wrote this to get my ideas straight in my head. While I’ve become a “Deep Learning” enthusiast, I don’t have too many opportunities to brain dump an idea in most of its messy glory. I think that a decent indication of a clear thought is the ability to articulate it to people not from the field. I hope that I’ve succeeded in doing that and that my articulation is also a pleasurable read. Why NLP is relevant to Stock prediction In many NLP problems we end up taking a sequence and encoding it into a single fixed size representation, then decoding that representation into another sequence. For example, we might tag entities in the text, translate from English to French or convert audio frequencies to text. There is a torrent of work coming out in these areas and a lot of the results are achieving state of the art performance. In my mind the biggest difference between the NLP and financial analysis is that language has some guarantee of structure, it’s just that the rules of the structure are vague. Markets, on the other hand, don’t come with a promise of a learnable structure, that such a structure exists is the assumption that this project would prove or disprove (rather it might prove or disprove if I can find that structure). Assuming the structure is there, the idea of summarizing the current state of the market in the same way we encode the semantics of a paragraph seems plausible to me. If that doesn’t make sense yet, keep reading. It will. You shall know a word by the company it keeps (Firth, J. R. 1957:11) There is tons of literature on word embeddings. Richard Socher’s lecture is a great place to start. In short, we can make a geometry of all the words in our language, and that geometry captures the meaning of words and relationships between them. You may have seen the example of “King-man +woman=Queen” or something of the sort. Embeddings are cool because they let us represent information in a condensed way. The old way of representing words was holding a vector (a big list of numbers) that was as long as the number of words we know, and setting a 1 in a particular place if that was the current word we are looking at. That is not an efficient approach, nor does it capture any meaning. 
With embeddings, we can represent all of the words in a fixed number of dimensions (300 seems to be plenty, 50 works great) and then leverage their higher dimensional geometry to understand them. The picture below shows an example. An embedding was trained on more or less the entire internet. After a few days of intensive calculations, each word was embedded in some high dimensional space. This “space” has a geometry, concepts like distance, and so we can ask which words are close together. The authors/inventors of that method made an example. Here are the words that are closest to Frog. But we can embed more than just words. We can do, say , stock market embeddings. Market2Vec The first word embedding algorithm I heard about was word2vec. I want to get the same effect for the market, though I’ll be using a different algorithm. My input data is a csv, the first column is the date, and there are 4*1000 columns corresponding to the High Low Open Closing price of 1000 stocks. That is my input vector is 4000 dimensional, which is too big. So the first thing I’m going to do is stuff it into a lower dimensional space, say 300 because I liked the movie. Taking something in 4000 dimensions and stuffing it into a 300-dimensional space my sound hard but its actually easy. We just need to multiply matrices. A matrix is a big excel spreadsheet that has numbers in every cell and no formatting problems. Imagine an excel table with 4000 columns and 300 rows, and when we basically bang it against the vector a new vector comes out that is only of size 300. I wish that’s how they would have explained it in college. The fanciness starts here as we’re going to set the numbers in our matrix at random, and part of the “deep learning” is to update those numbers so that our excel spreadsheet changes. Eventually this matrix spreadsheet (I’ll stick with matrix from now on) will have numbers in it that bang our original 4000 dimensional vector into a concise 300 dimensional summary of itself. We’re going to get a little fancier here and apply what they call an activation function. We’re going to take a function, and apply it to each number in the vector individually so that they all end up between 0 and 1 (or 0 and infinity, it depends). Why ? It makes our vector more special, and makes our learning process able to understand more complicated things. How? So what? What I’m expecting to find is that that new embedding of the market prices (the vector) into a smaller space captures all the essential information for the task at hand, without wasting time on the other stuff. So I’d expect they’d capture correlations between other stocks, perhaps notice when a certain sector is declining or when the market is very hot. I don’t know what traits it will find, but I assume they’ll be useful. Now What Lets put aside our market vectors for a moment and talk about language models. Andrej Karpathy wrote the epic post “The Unreasonable effectiveness of Recurrent Neural Networks”. If I’d summarize in the most liberal fashion the post boils down to And then as a punchline, he generated a bunch of text that looks like Shakespeare. And then he did it again with the Linux source code. And then again with a textbook on Algebraic geometry. So I’ll get back to the mechanics of that magic box in a second, but let me remind you that we want to predict the future market based on the past just like he predicted the next word based on the previous one. 
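Before feeding anything into the black box, here is a minimal NumPy sketch of the Market2Vec step just described — banging the 4000-dimensional price vector against a randomly initialized matrix to get a 300-dimensional summary. The sigmoid squashing and the 0.01 scale are illustrative assumptions; only the 4000-to-300 shape comes from the post:

import numpy as np

n_inputs, n_embed = 4000, 300          # 4 prices x 1000 stocks -> 300-dim MarketVector

# The big "excel spreadsheet": random numbers that training will later adjust.
W = 0.01 * np.random.randn(n_embed, n_inputs)
b = np.zeros(n_embed)

def market2vec(prices):
    # Multiply the 4000-dim row by the matrix, then squash every entry into (0, 1).
    z = W @ prices + b
    return 1.0 / (1.0 + np.exp(-z))

prices_now = np.random.rand(n_inputs)   # stand-in for one row of the High/Low/Open/Close CSV
market_vector = market2vec(prices_now)  # shape (300,)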
Where Karpathy used characters, we’re going to use our market vectors and feed them into the magic black box. We haven’t decided what we want it to predict yet, but that is okay, we won’t be feeding its output back into it either. Going deeper I want to point out that this is where we start to get into the deep part of deep learning. So far we just have a single layer of learning, that excel spreadsheet that condenses the market. Now we’re going to add a few more layers and stack them, to make a “deep” something. That’s the deep in deep learning. So Karpathy shows us some sample output from the Linux source code, this is stuff his black box wrote. Notice that it knows how to open and close parentheses, and respects indentation conventions; The contents of the function are properly indented and the multi-line printk statement has an inner indentation. That means that this magic box understands long range dependencies. When it’s indenting within the print statement it knows it’s in a print statement and also remembers that it’s in a function( Or at least another indented scope). That’s nuts. It’s easy to gloss over that but an algorithm that has the ability to capture and remember long term dependencies is super useful because... We want to find long term dependencies in the market. Inside the magical black box What’s inside this magical black box? It is a type of Recurrent Neural Network (RNN) called an LSTM. An RNN is a deep learning algorithm that operates on sequences (like sequences of characters). At every step, it takes a representation of the next character (Like the embeddings we talked about before) and operates on the representation with a matrix, like we saw before. The thing is, the RNN has some form of internal memory, so it remembers what it saw previously. It uses that memory to decide how exactly it should operate on the next input. Using that memory, the RNN can “remember” that it is inside of an intended scope and that is how we get properly nested output text. A fancy version of an RNN is called a Long Short Term Memory (LSTM). LSTM has cleverly designed memory that allows it to So an LSTM can see a “{“ and say to itself “Oh yeah, that’s important I should remember that” and when it does, it essentially remembers an indication that it is in a nested scope. Once it sees the corresponding “}” it can decide to forget the original opening brace and thus forget that it is in a nested scope. We can have the LSTM learn more abstract concepts by stacking a few of them on top of each other, that would make us “Deep” again. Now each output of the previous LSTM becomes the inputs of the next LSTM, and each one goes on to learn higher abstractions of the data coming in. In the example above (and this is just illustrative speculation), the first layer of LSTMs might learn that characters separated by a space are “words”. The next layer might learn word types like (static void action_new_function).The next layer might learn the concept of a function and its arguments and so on. It’s hard to tell exactly what each layer is doing, though Karpathy’s blog has a really nice example of how he did visualize exactly that. Connecting Market2Vec and LSTMs The studious reader will notice that Karpathy used characters as his inputs, not embeddings (Technically a one-hot encoding of characters). But, Lars Eidnes actually used word embeddings when he wrote Auto-Generating Clickbait With Recurrent Neural Network The figure above shows the network he used. 
Ignore the SoftMax part (we’ll get to it later). For the moment, check out how on the bottom he puts in a sequence of words vectors at the bottom and each one. (Remember, a “word vector” is a representation of a word in the form of a bunch of numbers, like we saw in the beginning of this post). Lars inputs a sequence of Word Vectors and each one of them: We’re going to do the same thing with one difference, instead of word vectors we’ll input “MarketVectors”, those market vectors we described before. To recap, the MarketVectors should contain a summary of what’s happening in the market at a given point in time. By putting a sequence of them through LSTMs I hope to capture the long term dynamics that have been happening in the market. By stacking together a few layers of LSTMs I hope to capture higher level abstractions of the market’s behavior. What Comes out Thus far we haven’t talked at all about how the algorithm actually learns anything, we just talked about all the clever transformations we’ll do on the data. We’ll defer that conversation to a few paragraphs down, but please keep this part in mind as it is the se up for the punch line that makes everything else worthwhile. In Karpathy’s example, the output of the LSTMs is a vector that represents the next character in some abstract representation. In Eidnes’ example, the output of the LSTMs is a vector that represents what the next word will be in some abstract space. The next step in both cases is to change that abstract representation into a probability vector, that is a list that says how likely each character or word respectively is likely to appear next. That’s the job of the SoftMax function. Once we have a list of likelihoods we select the character or word that is the most likely to appear next. In our case of “predicting the market”, we need to ask ourselves what exactly we want to market to predict? Some of the options that I thought about were: 1 and 2 are regression problems, where we have to predict an actual number instead of the likelihood of a specific event (like the letter n appearing or the market going up). Those are fine but not what I want to do. 3 and 4 are fairly similar, they both ask to predict an event (In technical jargon — a class label). An event could be the letter n appearing next or it could be Moved up 5% while not going down more than 3% in the last 10 minutes. The trade-off between 3 and 4 is that 3 is much more common and thus easier to learn about while 4 is more valuable as not only is it an indicator of profit but also has some constraint on risk. 5 is the one we’ll continue with for this article because it’s similar to 3 and 4 but has mechanics that are easier to follow. The VIX is sometimes called the Fear Index and it represents how volatile the stocks in the S&P500 are. It is derived by observing the implied volatility for specific options on each of the stocks in the index. Sidenote — Why predict the VIX What makes the VIX an interesting target is that Back to our LSTM outputs and the SoftMax How do we use the formulations we saw before to predict changes in the VIX a few minutes in the future? For each point in our dataset, we’ll look what happened to the VIX 5 minutes later. If it went up by more than 1% without going down more than 0.5% during that time we’ll output a 1, otherwise a 0. Then we’ll get a sequence that looks like: We want to take the vector that our LSTMs output and squish it so that it gives us the probability of the next item in our sequence being a 1. 
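As an aside, here is a minimal sketch of how that sequence of 0/1 targets could be computed from a minute-by-minute VIX series. The 1% and 0.5% thresholds and the 5-minute horizon are the ones given above; the minute-level sampling, the exact reading of the rule and the toy data are my assumptions:

import numpy as np

def label_vix(vix, horizon=5, up=0.01, down=0.005):
    # For each time step, emit 1 if the VIX rises more than `up` at some point in the
    # next `horizon` steps without also dropping more than `down` in that window.
    labels = np.zeros(len(vix) - horizon, dtype=int)
    for t in range(len(labels)):
        window = vix[t + 1 : t + 1 + horizon]
        returns = window / vix[t] - 1.0
        labels[t] = int(returns.max() >= up and returns.min() > -down)
    return labels

vix = 14.0 + 0.05 * np.cumsum(np.random.randn(1000))  # fake minute-level VIX series
targets = label_vix(vix)                               # the 0s and 1s we will try to predict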
The squishing happens in the SoftMax part of the diagram above (technically, since we only have one class now, we use a sigmoid). So before we get into how this thing learns, let’s recap what we’ve done so far. How does this thing learn? Now the fun part. Everything we did until now is called the forward pass; we do all of those steps while we train the algorithm and also when we use it in production. Here we’ll talk about the backward pass, the part we do only during training, the part that makes our algorithm learn. During training, not only did we prepare years’ worth of historical data, we also prepared a sequence of prediction targets, that list of 0s and 1s showing whether the VIX moved the way we wanted it to after each observation in our data. To learn, we’ll feed the market data to our network and compare its output to what we calculated. Comparing in our case is simple subtraction; that is, we’ll say that our model’s error is sqrt((actual - predicted)^2), or in English, the square root of the square of the difference between what actually happened and what we predicted. Here’s the beauty. That’s a differentiable function, which means we can tell by how much the error would have changed if our prediction had changed a little. Our prediction is the outcome of a differentiable function, the SoftMax (or sigmoid), and the inputs to it, the LSTMs, are all mathematical functions that are differentiable too. Now all of these functions are full of parameters, those big excel spreadsheets I talked about ages ago. So at this stage what we do is take the derivative of the error with respect to every one of the millions of parameters in all of those excel spreadsheets we have in our model. When we do that we can see how the error will change when we change each parameter, so we change each parameter in a way that will reduce the error. This procedure propagates all the way back to the beginning of the model. It tweaks the way we embed the inputs into MarketVectors so that our MarketVectors represent the most significant information for our task. It tweaks when and what each LSTM chooses to remember so that their outputs are the most relevant to our task. It tweaks the abstractions our LSTMs learn so that they learn the most important abstractions for our task. Which in my opinion is amazing, because we have all of this complexity and abstraction that we never had to specify anywhere. It’s all inferred MathaMagically from the specification of what we consider to be an error. What’s next: Now that I’ve laid this out in writing and it still makes sense to me, I want to go ahead and build it. So, if you’ve come this far, please point out my errors and share your inputs. Other thoughts: Here are some more advanced thoughts about this project, what other things I might try, and why it makes sense to me that this may actually work. Liquidity and efficient use of capital: Generally, the more liquid a particular market is, the more efficient it is. I think this is due to a chicken and egg cycle, whereby as a market becomes more liquid it is able to absorb more capital moving in and out without that capital hurting itself. As a market becomes more liquid and more capital can be used in it, you’ll find more sophisticated players moving in. This is because it is expensive to be sophisticated, so you need to make returns on a large chunk of capital in order to justify your operational costs. A quick corollary is that in less liquid markets the competition isn’t quite as sophisticated, and so the opportunities a system like this can find may not have been traded away.
The point being: were I to try to trade this, I would trade it on less liquid segments of the market, say the TASE 100 instead of the S&P 500. This stuff is new: The knowledge of these algorithms, the frameworks to execute them and the computing power to train them are all new, at least in the sense that they are now available to the average Joe such as myself. I’d assume that top players figured this stuff out years ago and have had the capacity to execute it for just as long, but, as I mention in the paragraph above, they are likely executing in liquid markets that can support their size. The next tier of market participants, I assume, has a slower velocity of technological assimilation, and in that sense there is, or soon will be, a race to execute on this in as-yet untapped markets. Multiple time frames: While I mentioned a single stream of inputs above, I imagine that a more efficient way to train would be to train market vectors (at least) on multiple time frames and feed them all in at the inference stage. That is, my lowest time frame would be sampled every 30 seconds, and I’d expect the network to learn dependencies that stretch hours at most. I don’t know whether they are relevant or not, but I think there are patterns on multiple time frames, and if the cost of computation can be brought low enough then it is worthwhile to incorporate them into the model. I’m still wrestling with how best to represent these on the computational graph, and perhaps it is not mandatory to start with. MarketVectors: When using word vectors in NLP we usually start with a pretrained model and continue adjusting the embeddings during training of our model. In my case, there are no pretrained market vectors available, nor is there a clear algorithm for training them. My original consideration was to use an auto-encoder, like in this paper, but end to end training is cooler. A more serious consideration is the success of sequence to sequence models in translation and speech recognition, where a sequence is eventually encoded as a single vector and then decoded into a different representation (like from speech to text, or from English to French). In that view, the entire architecture I described is essentially the encoder, and I haven’t really laid out a decoder. But I want to achieve something specific with the first layer, the one that takes as input the 4000 dimensional vector and outputs a 300 dimensional one. I want it to find correlations or relations between various stocks and compose features about them. The alternative is to run each input through an LSTM, perhaps concatenate all of the output vectors, and consider that the output of the encoder stage. I think this would be inefficient, as the interactions and correlations between instruments and their features would be lost, and there would be 10x more computation required. On the other hand, such an architecture could naively be parallelized across multiple GPUs and hosts, which is an advantage. CNNs: Recently there has been a spate of papers on character level machine translation. This paper caught my eye because they manage to capture long range dependencies with a convolutional layer rather than an RNN. I haven’t given it more than a brief read, but I think that a modification where I’d treat each stock as a channel and convolve over channels first (like in RGB images) would be another way to capture the market dynamics, in the same way that they essentially encode semantic meaning from characters.
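A rough sketch of that “stocks as channels” idea in Keras; this is my own illustration of the concept, not code from the paper or from this project, and all sizes are invented:

# Illustrative Keras sketch: treat each stock as a "channel" and convolve
# across time, the way RGB channels are mixed in image convolutions.
# All sizes are invented for this example.
from keras.models import Sequential
from keras.layers import Conv1D, GlobalMaxPooling1D, Dense

N_STOCKS = 1000   # channels (assumed)
SEQ_LEN = 120     # time steps per example (assumed)

model = Sequential([
    # Each filter mixes all stock-channels at every time step, so
    # cross-stock interactions are learned in the very first layer.
    Conv1D(64, kernel_size=5, activation="relu", input_shape=(SEQ_LEN, N_STOCKS)),
    Conv1D(64, kernel_size=5, activation="relu"),
    GlobalMaxPooling1D(),
    Dense(1, activation="sigmoid"),   # same 0/1 target as before
])
model.compile(optimizer="adam", loss="binary_crossentropy")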
From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder of https://LightTag.io, platform to annotate text for NLP. Google developer expert in ML. I do deep learning on text for a living and for fun. " Andrej Karpathy,9.2K,7,https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b?source=tag_archive---------4----------------,Yes you should understand backprop – Andrej Karpathy – Medium,"When we offered CS231n (Deep Learning class) at Stanford, we intentionally designed the programming assignments to include explicit calculations involved in backpropagation on the lowest level. The students had to implement the forward and the backward pass of each layer in raw numpy. Inevitably, some students complained on the class message boards: This is seemingly a perfectly sensible appeal - if you’re never going to write backward passes once the class is over, why practice writing them? Are we just torturing the students for our own amusement? Some easy answers could make arguments along the lines of “it’s worth knowing what’s under the hood as an intellectual curiosity”, or perhaps “you might want to improve on the core algorithm later”, but there is a much stronger and practical argument, which I wanted to devote a whole post to: > The problem with Backpropagation is that it is a leaky abstraction. In other words, it is easy to fall into the trap of abstracting away the learning process — believing that you can simply stack arbitrary layers together and backprop will “magically make them work” on your data. So lets look at a few explicit examples where this is not the case in quite unintuitive ways. We’re starting off easy here. At one point it was fashionable to use sigmoid (or tanh) non-linearities in the fully connected layers. The tricky part people might not realize until they think about the backward pass is that if you are sloppy with the weight initialization or data preprocessing these non-linearities can “saturate” and entirely stop learning — your training loss will be flat and refuse to go down. For example, a fully connected layer with sigmoid non-linearity computes (using raw numpy): If your weight matrix W is initialized too large, the output of the matrix multiply could have a very large range (e.g. numbers between -400 and 400), which will make all outputs in the vector z almost binary: either 1 or 0. But if that is the case, z*(1-z), which is local gradient of the sigmoid non-linearity, will in both cases become zero (“vanish”), making the gradient for both x and W be zero. The rest of the backward pass will come out all zero from this point on due to multiplication in the chain rule. Another non-obvious fun fact about sigmoid is that its local gradient (z*(1-z)) achieves a maximum at 0.25, when z = 0.5. That means that every time the gradient signal flows through a sigmoid gate, its magnitude always diminishes by one quarter (or more). If you’re using basic SGD, this would make the lower layers of a network train much slower than the higher ones. TLDR: if you’re using sigmoids or tanh non-linearities in your network and you understand backpropagation you should always be nervous about making sure that the initialization doesn’t cause them to be fully saturated. See a longer explanation in this CS231n lecture video. Another fun non-linearity is the ReLU, which thresholds neurons at zero from below. 
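Before moving on to ReLU: the raw-numpy snippet referred to above is an embedded gist that isn’t shown here. A minimal sketch in the same spirit, with my own variable names rather than the original code, looks roughly like this:

# Sketch (not Karpathy's exact gist) of a sigmoid fully connected layer
# and its backward pass in raw numpy.
import numpy as np

def fc_sigmoid_forward(W, x):
    z = 1.0 / (1.0 + np.exp(-np.dot(W, x)))   # forward pass
    return z

def fc_sigmoid_backward(W, x, z, dz):
    local = z * (1.0 - z)             # local gradient of the sigmoid
    dW = np.outer(dz * local, x)      # gradient w.r.t. the weights
    dx = np.dot(W.T, dz * local)      # gradient w.r.t. the input
    return dW, dx

# If W is initialized too large, z saturates near 0 or 1, z*(1-z) vanishes,
# and both dW and dx become zero: the layer, and everything below it, stops learning.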
The forward and backward pass for a fully connected layer that uses ReLU would at the core include: If you stare at this for a while you’ll see that if a neuron gets clamped to zero in the forward pass (i.e. z=0, it doesn’t “fire”), then its weights will get zero gradient. This can lead to what is called the “dead ReLU” problem, where if a ReLU neuron is unfortunately initialized such that it never fires, or if a neuron’s weights ever get knocked off with a large update during training into this regime, then this neuron will remain permanently dead. It’s like permanent, irrecoverable brain damage. Sometimes you can forward the entire training set through a trained network and find that a large fraction (e.g. 40%) of your neurons were zero the entire time. TLDR: If you understand backpropagation and your network has ReLUs, you’re always nervous about dead ReLUs. These are neurons that never turn on for any example in your entire training set, and will remain permanently dead. Neurons can also die during training, usually as a symptom of aggressive learning rates. See a longer explanation in CS231n lecture video. Vanilla RNNs feature another good example of unintuitive effects of backpropagation. I’ll copy paste a slide from CS231n that has a simplified RNN that does not take any input x, and only computes the recurrence on the hidden state (equivalently, the input x could always be zero): This RNN is unrolled for T time steps. When you stare at what the backward pass is doing, you’ll see that the gradient signal going backwards in time through all the hidden states is always being multiplied by the same matrix (the recurrence matrix Whh), interspersed with non-linearity backprop. What happens when you take one number a and start multiplying it by some other number b (i.e. a*b*b*b*b*b*b...)? This sequence either goes to zero if |b| < 1, or explodes to infinity when |b|>1. The same thing happens in the backward pass of an RNN, except b is a matrix and not just a number, so we have to reason about its largest eigenvalue instead. TLDR: If you understand backpropagation and you’re using RNNs you are nervous about having to do gradient clipping, or you prefer to use an LSTM. See a longer explanation in this CS231n lecture video. Lets look at one more — the one that actually inspired this post. Yesterday I was browsing for a Deep Q Learning implementation in TensorFlow (to see how others deal with computing the numpy equivalent of Q[:, a], where a is an integer vector — turns out this trivial operation is not supported in TF). Anyway, I searched “dqn tensorflow”, clicked the first link, and found the core code. Here is an excerpt: If you’re familiar with DQN, you can see that there is the target_q_t, which is just [reward * \gamma \argmax_a Q(s’,a)], and then there is q_acted, which is Q(s,a) of the action that was taken. The authors here subtract the two into variable delta, which they then want to minimize on line 295 with the L2 loss with tf.reduce_mean(tf.square()). So far so good. The problem is on line 291. The authors are trying to be robust to outliers, so if the delta is too large, they clip it with tf.clip_by_value. This is well-intentioned and looks sensible from the perspective of the forward pass, but it introduces a major bug if you think about the backward pass. The clip_by_value function has a local gradient of zero outside of the range min_delta to max_delta, so whenever the delta is above min/max_delta, the gradient becomes exactly zero during backprop. 
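To make this concrete, here is a minimal TensorFlow 1.x-style sketch of the pattern being described, with placeholder tensor names rather than the repository’s actual code; the Huber-style alternative discussed next is included for contrast:

# Sketch only: placeholder tensor names, not the repository's actual code.
import tensorflow as tf

target_q_t = tf.placeholder(tf.float32, [None])
q_acted = tf.placeholder(tf.float32, [None])

delta = target_q_t - q_acted

# Problematic: clipping the raw delta. Outside [-1, 1] the clip has zero
# local gradient, so large errors contribute no learning signal at all.
clipped_delta = tf.clip_by_value(delta, -1.0, 1.0)
loss_clipped = tf.reduce_mean(tf.square(clipped_delta))

# Huber-style alternative: quadratic near zero, linear for large errors,
# so the gradient is capped in magnitude but never killed.
abs_delta = tf.abs(delta)
huber = tf.where(abs_delta < 1.0, 0.5 * tf.square(delta), abs_delta - 0.5)
loss_huber = tf.reduce_mean(huber)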
The authors are clipping the raw Q delta, when they are likely trying to clip the gradient for added robustness. In that case the correct thing to do is to use the Huber loss in place of tf.square: It’s a bit gross in TensorFlow because all we want to do is clip the gradient if it is above a threshold, but since we can’t meddle with the gradients directly we have to do it in this round-about way of defining the Huber loss. In Torch this would be much more simple. I submitted an issue on the DQN repo and this was promptly fixed. Backpropagation is a leaky abstraction; it is a credit assignment scheme with non-trivial consequences. If you try to ignore how it works under the hood because “TensorFlow automagically makes my networks learn”, you will not be ready to wrestle with the dangers it presents, and you will be much less effective at building and debugging neural networks. The good news is that backpropagation is not that difficult to understand, if presented properly. I have relatively strong feelings on this topic because it seems to me that 95% of backpropagation materials out there present it all wrong, filling pages with mechanical math. Instead, I would recommend the CS231n lecture on backprop which emphasizes intuition (yay for shameless self-advertising). And if you can spare the time, as a bonus, work through the CS231n assignments, which get you to write backprop manually and help you solidify your understanding. That’s it for now! I hope you’ll be much more suspicious of backpropagation going forward and think carefully through what the backward pass is doing. Also, I’m aware that this post has (unintentionally!) turned into several CS231n ads. Apologies for that :) From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Director of AI at Tesla. Previously Research Scientist at OpenAI and PhD student at Stanford. I like to train deep neural nets on large datasets. " Erik Hallström,2.5K,7,https://medium.com/@erikhallstrm/hello-world-rnn-83cd7105b767?source=tag_archive---------5----------------,How to build a Recurrent Neural Network in TensorFlow (1/7),"In this tutorial I’ll explain how to build a simple working Recurrent Neural Network in TensorFlow. This is the first in a series of seven parts where various aspects and techniques of building Recurrent Neural Networks in TensorFlow are covered. A short introduction to TensorFlow is available here. For now, let’s get started with the RNN! It is short for “Recurrent Neural Network”, and is basically a neural network that can be used when your data is treated as a sequence, where the particular order of the data-points matter. More importantly, this sequence can be of arbitrary length. The most straight-forward example is perhaps a time-series of numbers, where the task is to predict the next value given previous values. The input to the RNN at every time-step is the current value as well as a state vector which represent what the network has “seen” at time-steps before. This state-vector is the encoded memory of the RNN, initially set to zero. The best and most comprehensive article explaining RNN:s I’ve found so far is this article by researchers at UCSD, highly recommended. For now you only need to understand the basics, read it until the “Modern RNN architectures”-section. That will be covered later. Although this article contains some explanations, it is mostly focused on the practical part, how to build it. 
You are encouraged to look up more theory on the Internet, there are plenty of good explanations. We will build a simple Echo-RNN that remembers the input data and then echoes it after a few time-steps. First let’s set some constants we’ll need, what they mean will become clear in a moment. Now generate the training data, the input is basically a random binary vector. The output will be the “echo” of the input, shifted echo_step steps to the right. Notice the reshaping of the data into a matrix with batch_size rows. Neural networks are trained by approximating the gradient of loss function with respect to the neuron-weights, by looking at only a small subset of the data, also known as a mini-batch. The theoretical reason for doing this is further elaborated in this question. The reshaping takes the whole dataset and puts it into a matrix, that later will be sliced up into these mini-batches. TensorFlow works by first building up a computational graph, that specifies what operations will be done. The input and output of this graph is typically multidimensional arrays, also known as tensors. The graph, or parts of it can then be executed iteratively in a session, this can either be done on the CPU, GPU or even a resource on a remote server. The two basic TensorFlow data-structures that will be used in this example are placeholders and variables. On each run the batch data is fed to the placeholders, which are “starting nodes” of the computational graph. Also the RNN-state is supplied in a placeholder, which is saved from the output of the previous run. The weights and biases of the network are declared as TensorFlow variables, which makes them persistent across runs and enables them to be updated incrementally for each batch. The figure below shows the input data-matrix, and the current batch batchX_placeholder is in the dashed rectangle. As we will see later, this “batch window” is slided truncated_backprop_length steps to the right at each run, hence the arrow. In our example below batch_size = 3, truncated_backprop_length = 3, and total_series_length = 36. Note that these numbers are just for visualization purposes, the values are different in the code. The series order index is shown as numbers in a few of the data-points. Now it’s time to build the part of the graph that resembles the actual RNN computation, first we want to split the batch data into adjacent time-steps. As you can see in the picture below that is done by unpacking the columns (axis = 1) of the batch into a Python list. The RNN will simultaneously be training on different parts in the time-series; steps 4 to 6, 16 to 18 and 28 to 30 in the current batch-example. The reason for using the variable names “plural”_”series” is to emphasize that the variable is a list that represent a time-series with multiple entries at each step. The fact that the training is done on three places simultaneously in our time-series, requires us to save three instances of states when propagating forward. That has already been accounted for, as you see that the init_state placeholder has batch_size rows. Next let’s build the part of the graph that does the actual RNN computation. Notice the concatenation on line 6, what we actually want to do is calculate the sum of two affine transforms current_input * Wa + current_state * Wb in the figure below. By concatenating those two tensors you will only use one matrix multiplication. The addition of the bias b is broadcasted on all samples in the batch. 
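Since the embedded gists aren’t shown in this text, here is a rough sketch of the forward unrolling just described. The constants and shapes are illustrative, not necessarily the tutorial’s exact values:

# Illustrative TensorFlow 1.x sketch of the unrolled RNN step described above.
# Constants and shapes are examples, not necessarily the tutorial's exact values.
import numpy as np
import tensorflow as tf

batch_size = 5
truncated_backprop_length = 15
state_size = 4

batchX_placeholder = tf.placeholder(tf.float32, [batch_size, truncated_backprop_length])
init_state = tf.placeholder(tf.float32, [batch_size, state_size])

# One weight matrix covers both the scalar input and the previous state,
# so each step needs only a single matmul on the concatenated tensor.
W = tf.Variable(np.random.rand(state_size + 1, state_size), dtype=tf.float32)
b = tf.Variable(np.zeros((1, state_size)), dtype=tf.float32)

inputs_series = tf.unstack(batchX_placeholder, axis=1)  # one column per time-step

current_state = init_state
states_series = []
for current_input in inputs_series:
    current_input = tf.reshape(current_input, [batch_size, 1])
    input_and_state = tf.concat([current_input, current_state], 1)
    next_state = tf.tanh(tf.matmul(input_and_state, W) + b)  # bias broadcasts over the batch
    states_series.append(next_state)
    current_state = next_state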
You may wonder what the variable name truncated_backprop_length is supposed to mean. When an RNN is trained, it is actually treated as a deep neural network with recurring weights in every layer. These layers will not be unrolled all the way back to the beginning of time (that would be too computationally expensive), so they are truncated at a limited number of time-steps. In our sample schematics above, the error is backpropagated three steps in our batch. This is the final part of the graph: a fully connected softmax layer from the state to the output that one-hot encodes the classes, followed by the calculation of the loss for the batch (a rough sketch of this part of the graph follows below). The last line adds the training functionality; TensorFlow will perform back-propagation for us automatically: the computation graph is executed once for each mini-batch and the network weights are updated incrementally. Notice the API call to sparse_softmax_cross_entropy_with_logits: it automatically calculates the softmax internally and then computes the cross-entropy. In our example the classes are mutually exclusive (they are either zero or one), which is the reason for using the “sparse softmax”; you can read more about it in the API documentation. The usage is to have logits of shape [batch_size, num_classes] and labels of shape [batch_size]. There is a visualization function so we can see what’s going on in the network as we train. It will plot the loss over time, and show the training input, the training output and the current predictions by the network on different sample series in a training batch. It’s time to wrap up and train the network; in TensorFlow the graph is executed in a session. New data is generated on each epoch (not the usual way to do it, but it works in this case since everything is predictable). You can see that we are moving truncated_backprop_length steps forward on each iteration (lines 15–19), but it is possible to have different strides. This subject is further elaborated in this article. The downside of doing this is that truncated_backprop_length needs to be significantly larger than the time dependencies (three steps in our case) in order to encapsulate the relevant training data. Otherwise there might be a lot of “misses”, as you can see in the figure below. Also realize that this is just a simple example to explain how an RNN works; this functionality could easily be programmed in just a few lines of code. The network will be able to learn the echo behavior exactly, so there is no need for testing data. The program will update the plot as training progresses, as shown in the picture below. Blue bars denote a training input signal (a binary one), red bars show echoes in the training output and green bars are the echoes the net is generating. The different bar plots show different sample series in the current batch. Our algorithm will fairly quickly learn the task. The graph in the top-left corner shows the output of the loss function, but why are there spikes in the curve? Think about it for a moment, the answer is below. The reason for the spikes is that we are starting a new epoch and generating new data. Since the matrix is reshaped, the first element on each row is adjacent to the last element in the previous row. The first few elements of all rows (except the first) have dependencies that will not be included in the state, so the net will always perform badly on the first batch. This is the whole runnable program, just copy-paste and run. After each part in the article series the whole runnable program will be presented.
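The whole runnable program is a separate gist; what follows is only a rough sketch of the softmax-loss and training-step portion flagged above, continuing the earlier sketch. The optimizer and learning rate are illustrative, not necessarily the ones the tutorial used:

# Continues the sketch above (reuses states_series, state_size, batch_size,
# truncated_backprop_length). Optimizer and learning rate are illustrative.
num_classes = 2

W2 = tf.Variable(np.random.rand(state_size, num_classes), dtype=tf.float32)
b2 = tf.Variable(np.zeros((1, num_classes)), dtype=tf.float32)

batchY_placeholder = tf.placeholder(tf.int32, [batch_size, truncated_backprop_length])
labels_series = tf.unstack(batchY_placeholder, axis=1)

# One set of logits per unrolled step, fed to the sparse cross-entropy:
# logits are [batch_size, num_classes], labels are [batch_size].
logits_series = [tf.matmul(state, W2) + b2 for state in states_series]
losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels)
          for logits, labels in zip(logits_series, labels_series)]
total_loss = tf.reduce_mean(losses)

# TensorFlow wires up the backward pass for us; one call adds the update op.
train_step = tf.train.AdagradOptimizer(0.3).minimize(total_loss)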
If a line is referenced by number, these are the line numbers that we mean. In the next post in this series we will be simplify the computational graph creation by using the native TensorFlow RNN API. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Studied Engineering Physics and in Machine Learning at Royal Institute of Technology in Stockholm. Also been living in Taiwan 學習中文. Interested in Deep Learning. " Stefan Kojouharov,1.5K,23,https://chatbotslife.com/ultimate-guide-to-leveraging-nlp-machine-learning-for-you-chatbot-531ff2dd870c?source=tag_archive---------6----------------,Ultimate Guide to Leveraging NLP & Machine Learning for your Chatbot,"Code Snippets and Github Included Over the past few months I have been collecting the best resources on NLP and how to apply NLP and Deep Learning to Chatbots. Every once in awhile, I would run across an exception piece of content and I quickly started putting together a master list. Soon I found myself sharing this list and some of the most useful articles with developers and other people in bot community. In process, my list became a Guide and after some urging, I have decided to share it or at least a condensed version of it -for length reasons. This guide is mostly based on the work done by Denny Britz who has done a phenomenal job exploring the depths of Deep Learning for Bots. Code Snippets and Github included! Without further ado... Let us Begin! Chatbots, are a hot topic and many companies are hoping to develop bots to have natural conversations indistinguishable from human ones, and many are claiming to be using NLP and Deep Learning techniques to make this possible. But with all the hype around AI it’s sometimes difficult to tell fact from fiction. In this series I want to go over some of the Deep Learning techniques that are used to build conversational agents, starting off by explaining where we are right now, what’s possible, and what will stay nearly impossible for at least a little while. Retrieval-based models (easier) use a repository of predefined responses and some kind of heuristic to pick an appropriate response based on the input and context. The heuristic could be as simple as a rule-based expression match, or as complex as an ensemble of Machine Learning classifiers. These systems don’t generate any new text, they just pick a response from a fixed set. Generative models (harder) don’t rely on pre-defined responses. They generate new responses from scratch. Generative models are typically based on Machine Translation techniques, but instead of translating from one language to another, we “translate” from an input to an output (response). Both approaches have some obvious pros and cons. Due to the repository of handcrafted responses, retrieval-based methods don’t make grammatical mistakes. However, they may be unable to handle unseen cases for which no appropriate predefined response exists. For the same reasons, these models can’t refer back to contextual entity information like names mentioned earlier in the conversation. Generative models are “smarter”. They can refer back to entities in the input and give the impression that you’re talking to a human. However, these models are hard to train, are quite likely to make grammatical mistakes (especially on longer sentences), and typically require huge amounts of training data. Deep Learning techniques can be used for both retrieval-based or generative models, but research seems to be moving into the generative direction. 
Deep Learning architectures likeSequence to Sequence are uniquely suited for generating text and researchers are hoping to make rapid progress in this area. However, we’re still at the early stages of building generative models that work reasonably well. Production systems are more likely to be retrieval-based for now. LONG VS. SHORT CONVERSATIONS The longer the conversation the more difficult to automate it. On one side of the spectrum areShort-Text Conversations (easier) where the goal is to create a single response to a single input. For example, you may receive a specific question from a user and reply with an appropriate answer. Then there are long conversations (harder) where you go through multiple turns and need to keep track of what has been said. Customer support conversations are typically long conversational threads with multiple questions. In an open domain (harder) setting the user can take the conversation anywhere. There isn’t necessarily have a well-defined goal or intention. Conversations on social media sites like Twitter and Reddit are typically open domain — they can go into all kinds of directions. The infinite number of topics and the fact that a certain amount of world knowledge is required to create reasonable responses makes this a hard problem. In a closed domain (easier) setting the space of possible inputs and outputs is somewhat limited because the system is trying to achieve a very specific goal. Technical Customer Support or Shopping Assistants are examples of closed domain problems. These systems don’t need to be able to talk about politics, they just need to fulfill their specific task as efficiently as possible. Sure, users can still take the conversation anywhere they want, but the system isn’t required to handle all these cases — and the users don’t expect it to. There are some obvious and not-so-obvious challenges when building conversational agents most of which are active research areas. To produce sensible responses systems may need to incorporate both linguistic context andphysical context. In long dialogs people keep track of what has been said and what information has been exchanged. That’s an example of linguistic context. The most common approach is toembed the conversation into a vector, but doing that with long conversations is challenging. Experiments in Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models and Attention with Intention for a Neural Network Conversation Model both go into that direction. One may also need to incorporate other kinds of contextual data such as date/time, location, or information about a user. When generating responses the agent should ideally produce consistent answers to semantically identical inputs. For example, you want to get the same reply to “How old are you?” and “What is your age?”. This may sound simple, but incorporating such fixed knowledge or “personality” into models is very much a research problem. Many systems learn to generate linguistic plausible responses, but they are not trained to generate semantically consistent ones. Usually that’s because they are trained on a lot of data from multiple different users. Models like that in A Persona-Based Neural Conversation Model are making first steps into the direction of explicitly modeling a personality. The ideal way to evaluate a conversational agent is to measure whether or not it is fulfilling its task, e.g. solve a customer support problem, in a given conversation. 
But such labels are expensive to obtain because they require human judgment and evaluation. Sometimes there is no well-defined goal, as is the case with open-domain models. Common metrics such as BLEUthat are used for Machine Translation and are based on text matching aren’t well suited because sensible responses can contain completely different words or phrases. In fact, in How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation researchers find that none of the commonly used metrics really correlate with human judgment. A common problem with generative systems is that they tend to produce generic responses like “That’s great!” or “I don’t know” that work for a lot of input cases. Early versions of Google’s Smart Reply tended to respond with “I love you” to almost anything. That’s partly a result of how these systems are trained, both in terms of data and in terms of actual training objective/algorithm. Some researchers have tried to artificially promote diversity through various objective functions. However, humans typically produce responses that are specific to the input and carry an intention. Because generative systems (and particularly open-domain systems) aren’t trained to have specific intentions they lack this kind of diversity. Given all the cutting edge research right now, where are we and how well do these systems actually work? Let’s consider our taxonomy again. A retrieval-based open domain system is obviously impossible because you can never handcraft enough responses to cover all cases. A generative open-domain system is almost Artificial General Intelligence (AGI) because it needs to handle all possible scenarios. We’re very far away from that as well (but a lot of research is going on in that area). This leaves us with problems in restricted domains where both generative and retrieval based methods are appropriate. The longer the conversations and the more important the context, the more difficult the problem becomes. In a recent interview, Andrew Ng, now chief scientist of Baidu, puts it well: Many companies start off by outsourcing their conversations to human workers and promise that they can “automate” it once they’ve collected enough data. That’s likely to happen only if they are operating in a pretty narrow domain — like a chat interface to call an Uber for example. Anything that’s a bit more open domain (like sales emails) is beyond what we can currently do. However, we can also use these systems to assist human workers by proposing and correcting responses. That’s much more feasible. Grammatical mistakes in production systems are very costly and may drive away users. That’s why most systems are probably best off using retrieval-based methods that are free of grammatical errors and offensive responses. If companies can somehow get their hands on huge amounts of data then generative models become feasible — but they must be assisted by other techniques to prevent them from going off the rails like Microsoft’s Tay did. The Code and data for this tutorial is on Github. The vast majority of production systems today are retrieval-based, or a combination of retrieval-based and generative. Google’s Smart Reply is a good example. Generative models are an active area of research, but we’re not quite there yet. If you want to build a conversational agent today your best bet is most likely a retrieval-based model. In this post we’ll work with the Ubuntu Dialog Corpus (paper, github). 
The Ubuntu Dialog Corpus (UDC) is one of the largest public dialog datasets available. It’s based on chat logs from the Ubuntu channels on a public IRC network. The paper goes into detail on how exactly the corpus was created, so I won’t repeat that here. However, it’s important to understand what kind of data we’re working with, so let’s do some exploration first. The training data consists of 1,000,000 examples, 50% positive (label 1) and 50% negative (label 0). Each example consists of a context, the conversation up to this point, and an utterance, a response to the context. A positive label means that an utterance was an actual response to a context, and a negative label means that the utterance wasn’t — it was picked randomly from somewhere in the corpus. Here is some sample data. Note that the dataset generation script has already done a bunch of preprocessing for us — it hastokenized, stemmed, and lemmatized the output using the NLTK tool. The script also replaced entities like names, locations, organizations, URLs, and system paths with special tokens. This preprocessing isn’t strictly necessary, but it’s likely to improve performance by a few percent. The average context is 86 words long and the average utterance is 17 words long. Check out the Jupyter notebook to see the data analysis. The data set comes with test and validations sets. The format of these is different from that of the training data. Each record in the test/validation set consists of a context, a ground truth utterance (the real response) and 9 incorrect utterances called distractors. The goal of the model is to assign the highest score to the true utterance, and lower scores to wrong utterances. The are various ways to evaluate how well our model does. A commonly used metric is recall@k. Recall@k means that we let the model pick the k best responses out of the 10 possible responses (1 true and 9 distractors). If the correct one is among the picked ones we mark that test example as correct. So, a larger k means that the task becomes easier. If we set k=10 we get a recall of 100% because we only have 10 responses to pick from. If we set k=1 the model has only one chance to pick the right response. At this point you may be wondering how the 9 distractors were chosen. In this data set the 9 distractors were picked at random. However, in the real world you may have millions of possible responses and you don’t know which one is correct. You can’t possibly evaluate a million potential responses to pick the one with the highest score — that’d be too expensive. Google’sSmart Reply uses clustering techniques to come up with a set of possible responses to choose from first. Or, if you only have a few hundred potential responses in total you could just evaluate all of them. Before starting with fancy Neural Network models let’s build some simple baseline models to help us understand what kind of performance we can expect. We’ll use the following function to evaluate our recall@k metric: Here, y is a list of our predictions sorted by score in descending order, and y_test is the actual label. For example, a y of [0,3,1,2,5,6,4,7,8,9] Would mean that the utterance number 0 got the highest score, and utterance 9 got the lowest score. Remember that we have 10 utterances for each test example, and the first one (index 0) is always the correct one because the utterance column comes before the distractor columns in our data. 
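The evaluation function referenced above (“we’ll use the following function to evaluate our recall@k metric”) is another snippet that isn’t reproduced here. A minimal version consistent with that description, reconstructed by me rather than copied from the repo, is:

# recall@k: an example counts as correct if the true utterance's index
# appears among the k highest-scoring candidates. Here `y` holds, for each
# example, the candidate indices sorted by predicted score (best first),
# and `y_test` holds the index of the true utterance (always 0 in this data).
def evaluate_recall(y, y_test, k=1):
    num_examples = float(len(y))
    num_correct = 0
    for predictions, label in zip(y, y_test):
        if label in predictions[:k]:
            num_correct += 1
    return num_correct / num_examples

# e.g. predictions [0, 3, 1, 2, ...] with label 0 count as correct for any k >= 1.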
Intuitively, a completely random predictor should get a score of 10% for recall@1, a score of 20% for recall@2, and so on. Let’s see if that’s the case. Great, seems to work. Of course we don’t just want a random predictor. Another baseline that was discussed in the original paper is a tf-idf predictor. tf-idf stands for “term frequency — inverse document” frequency and it measures how important a word in a document is relative to the whole corpus. Without going into too much detail (you can find many tutorials about tf-idf on the web), documents that have similar content will have similar tf-idf vectors. Intuitively, if a context and a response have similar words they are more likely to be a correct pair. At least more likely than random. Many libraries out there (such as scikit-learn) come with built-in tf-idf functions, so it’s very easy to use. Let’s build a tf-idf predictor and see how well it performs. We can see that the tf-idf model performs significantly better than the random model. It’s far from perfect though. The assumptions we made aren’t that great. First of all, a response doesn’t necessarily need to be similar to the context to be correct. Secondly, tf-idf ignores word order, which can be an important signal. With a Neural Network model we can do a bit better. The Deep Learning model we will build in this post is called a Dual Encoder LSTM network. This type of network is just one of many we could apply to this problem and it’s not necessarily the best one. You can come up with all kinds of Deep Learning architectures that haven’t been tried yet — it’s an active research area. For example, the seq2seq model often used in Machine Translation would probably do well on this task. The reason we are going for the Dual Encoder is because it has been reported to give decent performance on this data set. This means we know what to expect and can be sure that our implementation is correct. Applying other models to this problem would be an interesting project. The Dual Encoder LSTM we’ll build looks like this (paper): It roughly works as follows: To train the network, we also need a loss (cost) function. We’ll use the binary cross-entropy loss common for classification problems. Let’s call our true label for a context-response pair y. This can be either 1 (actual response) or 0 (incorrect response). Let’s call our predicted probability from 4. above y’. Then, the cross entropy loss is calculated as L= −y * ln(y’) − (1 − y) * ln(1−y’). The intuition behind this formula is simple. If y=1 we are left with L = -ln(y’), which penalizes a prediction far away from 1, and if y=0 we are left with L= −ln(1−y’), which penalizes a prediction far away from 0. For our implementation we’ll use a combination of numpy, pandas, Tensorflow and TF Learn (a combination of high-level convenience functions for Tensorflow). The dataset originally comes in CSV format. We could work directly with CSVs, but it’s better to convert our data into Tensorflow’s proprietary Example format. (Quick side note: There’s alsotf.SequenceExample but it doesn’t seem to be supported by tf.learn yet). The main benefit of this format is that it allows us to load tensors directly from the input files and let Tensorflow handle all the shuffling, batching and queuing of inputs. As part of the preprocessing we also create a vocabulary. This means we map each word to an integer number, e.g. “cat” may become 2631. The TFRecord files we will generate store these integer numbers instead of the word strings. 
We will also save the vocabulary so that we can map back from integers to words later on. Each Example contains the following fields: The preprocessing is done by the prepare_data.py Python script, which generates 3 files:train.tfrecords, validation.tfrecords and test.tfrecords. You can run the script yourself or download the data files here. In order to use Tensorflow’s built-in support for training and evaluation we need to create an input function — a function that returns batches of our input data. In fact, because our training and test data have different formats, we need different input functions for them. The input function should return a batch of features and labels (if available). Something along the lines of: Because we need different input functions during training and evaluation and because we hate code duplication we create a wrapper called create_input_fn that creates an input function for the appropriate mode. It also takes a few other parameters. Here’s the definition we’re using: The complete code can be found in udc_inputs.py. On a high level, the function does the following: We already mentioned that we want to use the recall@k metric to evaluate our model. Luckily, Tensorflow already comes with many standard evaluation metrics that we can use, including recall@k. To use these metrics we need to create a dictionary that maps from a metric name to a function that takes the predictions and label as arguments: Above, we use functools.partial to convert a function that takes 3 arguments to one that only takes 2 arguments. Don’t let the name streaming_sparse_recall_at_k confuse you. Streaming just means that the metric is accumulated over multiple batches, and sparse refers to the format of our labels. This brings is to an important point: What exactly is the format of our predictions during evaluation? During training, we predict the probability of the example being correct. But during evaluation our goal is to score the utterance and 9 distractors and pick the best one — we don’t simply predict correct/incorrect. This means that during evaluation each example should result in a vector of 10 scores, e.g. [0.34, 0.11, 0.22, 0.45, 0.01, 0.02, 0.03, 0.08, 0.33, 0.11], where the scores correspond to the true response and the 9 distractors respectively. Each utterance is scored independently, so the probabilities don’t need to add up to 1. Because the true response is always element 0 in array, the label for each example is 0. The example above would be counted as classified incorrectly by recall@1because the third distractor got a probability of 0.45 while the true response only got 0.34. It would be scored as correct by recall@2 however. Before writing the actual neural network code I like to write the boilerplate code for training and evaluating the model. That’s because, as long as you adhere to the right interfaces, it’s easy to swap out what kind of network you are using. Let’s assume we have a model functionmodel_fn that takes as inputs our batched features, labels and mode (train or evaluation) and returns the predictions. Then we can write general-purpose code to train our model as follows: Here we create an estimator for our model_fn, two input functions for training and evaluation data, and our evaluation metrics dictionary. We also define a monitor that evaluates our model every FLAGS.eval_every steps during training. Finally, we train the model. 
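As a concrete example of one of those pieces, the metrics dictionary described above can be sketched as follows. This assumes the TF 1.x contrib APIs the post was written against, and it is my sketch rather than the repository’s exact code:

# The recall@k metrics dictionary described above: each entry wraps a TF
# metric so that it only needs (predictions, labels) at call time.
# Assumes the TF 1.x contrib APIs this post was written against.
import functools
import tensorflow as tf

eval_metrics = {
    "recall_at_%d" % k: functools.partial(
        tf.contrib.metrics.streaming_sparse_recall_at_k, k=k)
    for k in (1, 2, 5, 10)
}
# This dict is then handed to the evaluation monitor / estimator so that
# recall@1, @2, @5 and @10 are reported every FLAGS.eval_every steps.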
The training runs indefinitely, but Tensorflow automatically saves checkpoint files in MODEL_DIR, so you can stop the training at any time. A more fancy technique would be to use early stopping, which means you automatically stop training when a validation set metric stops improving (i.e. you are starting to overfit). You can see the full code in udc_train.py. Two things I want to mention briefly is the usage of FLAGS. This is a way to give command line parameters to the program (similar to Python’s argparse). hparams is a custom object we create in hparams.py that holds hyperparameters, nobs we can tweak, of our model. This hparams object is given to the model when we instantiate it. Now that we have set up the boilerplate code around inputs, parsing, evaluation and training it’s time to write code for our Dual LSTM neural network. Because we have different formats of training and evaluation data I’ve written a create_model_fn wrapper that takes care of bringing the data into the right format for us. It takes a model_impl argument, which is a function that actually makes predictions. In our case it’s the Dual Encoder LSTM we described above, but we could easily swap it out for some other neural network. Let’s see what that looks like: The full code is in dual_encoder.py. Given this, we can now instantiate our model function in the main routine in udc_train.py that we defined earlier. That’s it! We can now run python udc_train.py and it should start training our networks, occasionally evaluating recall on our validation data (you can choose how often you want to evaluate using the — eval_every switch). To get a complete list of all available command line flags that we defined using tf.flags and hparams you can run python udc_train.py — help. ... INFO:tensorflow:Results after 270 steps (0.248 sec/batch): recall_at_1 = 0.507581018519, recall_at_2 = 0.689699074074, recall_at_5 = 0.913020833333, recall_at_10 = 1.0, loss = 0.5383 ... After you’ve trained the model you can evaluate it on the test set using python udc_test.py — model_dir=$MODEL_DIR_FROM_TRAINING, e.g. python udc_test.py — model_dir=~/github/chatbot-retrieval/runs/1467389151. This will run the recall@k evaluation metrics on the test set instead of the validation set. Note that you must call udc_test.py with the same parameters you used during training. So, if you trained with — embedding_size=128 you need to call the test script with the same. After training for about 20,000 steps (around an hour on a fast GPU) our model gets the following results on the test set: While recall@1 is close to our TFIDF model, recall@2 and recall@5 are significantly better, suggesting that our neural network assigns higher scores to the correct answers. The original paper reported 0.55, 0.72 and 0.92 for recall@1, recall@2, and recall@5 respectively, but I haven’t been able to reproduce scores quite as high. Perhaps additional data preprocessing or hyperparameter optimization may bump scores up a bit more. You can modify and run udc_predict.py to get probability scores for unseen data. For example python udc_predict.py — model_dir=./runs/1467576365/ outputs: You could imagine feeding in 100 potential responses to a context and then picking the one with the highest score. In this post we’ve implemented a retrieval-based neural network model that can assign scores to potential responses given a conversation context. There is still a lot of room for improvement, however. 
One can imagine that other neural networks do better on this task than a dual LSTM encoder. There is also a lot of room for hyperparameter optimization, or improvements to the preprocessing step. The Code and data for this tutorial is on Github, so check it out. Denny’s Blogs: http://blog.dennybritz.com/ & http://www.wildml.com/ Mark Clark: https://www.linkedin.com/in/markwclark I hope you have found this Condensed NLP Guide Helpful. I wanted to publish a longer version (imagine if this was 5x longer) however I don’t want to scare the readers away. As someone who develops the front end of bots (user experience, personality, flow, etc) I find it extremely helpful to the understand the stack, know the technological pros and cons and so to be able to effectively design around NLP/NLU limitations. Ultimately a lot of the issues bots face today (eg: context) can be designed around, effectively. If you have any suggestions on regarding this article and how it can be improved, feel free to drop me a line. Creator of 10+ bots, including Smart Notes Bot. Founder of Chatbot’s Life, where we help companies create great chatbots and share our insights along the way. Want to Talk Bots? Best way to chat directly and see my latest projects is via my Personal Bot: Stefan’s Bot. Currently, I’m consulting a number of companies on their chatbot projects. To get feedback on your Chatbot project or to Start a Chatbot Project, contact me. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder of Chatbots Life. I help Companies Create Great Chatbots & AI Systems and share my Insights along the way. Best place to learn about Chatbots. We share the latest Bot News, Info, AI & NLP, Tools, Tutorials & More. " Arthur Juliani,3.5K,8,https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-8-asynchronous-actor-critic-agents-a3c-c88f72a5e9f2?source=tag_archive---------7----------------,Simple Reinforcement Learning with Tensorflow Part 8: Asynchronous Actor-Critic Agents (A3C),"In this article I want to provide a tutorial on implementing the Asynchronous Advantage Actor-Critic (A3C) algorithm in Tensorflow. We will use it to solve a simple challenge in a 3D Doom environment! With the holidays right around the corner, this will be my final post for the year, and I hope it will serve as a culmination of all the previous topics in the series. If you haven’t yet, or are new to Deep Learning and Reinforcement Learning, I suggest checking out the earlier entries in the series before going through this post in order to understand all the building blocks which will be utilized here. If you have been following the series: thank you! I have learned so much about RL in the past year, and am happy to have shared it with everyone through this article series. So what is A3C? The A3C algorithm was released by Google’s DeepMind group earlier this year, and it made a splash by... essentially obsoleting DQN. It was faster, simpler, more robust, and able to achieve much better scores on the standard battery of Deep RL tasks. On top of all that it could work in continuous as well as discrete action spaces. Given this, it has become the go-to Deep RL algorithm for new challenging problems with complex state and action spaces. In fact, OpenAI just released a version of A3C as their “universal starter agent” for working with their new (and very diverse) set of Universe environments. Asynchronous Advantage Actor-Critic is quite a mouthful. 
Let’s start by unpacking the name, and from there, begin to unpack the mechanics of the algorithm itself. Asynchronous: Unlike DQN, where a single agent represented by a single neural network interacts with a single environment, A3C utilizes multiple incarnations of the above in order to learn more efficiently. In A3C there is a global network, and multiple worker agents which each have their own set of network parameters. Each of these agents interacts with it’s own copy of the environment at the same time as the other agents are interacting with their environments. The reason this works better than having a single agent (beyond the speedup of getting more work done), is that the experience of each agent is independent of the experience of the others. In this way the overall experience available for training becomes more diverse. Actor-Critic: So far this series has focused on value-iteration methods such as Q-learning, or policy-iteration methods such as Policy Gradient. Actor-Critic combines the benefits of both approaches. In the case of A3C, our network will estimate both a value function V(s) (how good a certain state is to be in) and a policy π(s) (a set of action probability outputs). These will each be separate fully-connected layers sitting at the top of the network. Critically, the agent uses the value estimate (the critic) to update the policy (the actor) more intelligently than traditional policy gradient methods. Advantage: If we think back to our implementation of Policy Gradient, the update rule used the discounted returns from a set of experiences in order to tell the agent which of its actions were “good” and which were “bad.” The network was then updated in order to encourage and discourage actions appropriately. The insight of using advantage estimates rather than just discounted returns is to allow the agent to determine not just how good its actions were, but how much better they turned out to be than expected. Intuitively, this allows the algorithm to focus on where the network’s predictions were lacking. If you recall from the Dueling Q-Network architecture, the advantage function is as follow: Since we won’t be determining the Q values directly in A3C, we can use the discounted returns (R) as an estimate of Q(s,a) to allow us to generate an estimate of the advantage. In this tutorial, we will go even further, and utilize a slightly different version of advantage estimation with lower variance referred to as Generalized Advantage Estimation. In the process of building this implementation of the A3C algorithm, I used as reference the quality implementations by DennyBritz and OpenAI. Both of which I highly recommend if you’d like to see alternatives to my code here. Each section embedded here is taken out of context for instructional purposes, and won’t run on its own. To view and run the full, functional A3C implementation, see my Github repository. The general outline of the code architecture is: The A3C algorithm begins by constructing the global network. This network will consist of convolutional layers to process spatial dependencies, followed by an LSTM layer to process temporal dependencies, and finally, value and policy output layers. Below is example code for establishing the network graph itself. Next, a set of worker agents, each with their own network and environment are created. Each of these workers are run on a separate processor thread, so there should be no more workers than there are threads on your CPU. 
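Before the walkthrough continues, here is how the discounted-return and advantage bookkeeping described above can look in plain numpy. This is my own minimal version for illustration, not the tutorial’s exact helper functions:

# My own minimal numpy version of the return/advantage bookkeeping described
# above, not the tutorial's exact helpers.
import numpy as np

def discount(x, gamma):
    # out[t] = x[t] + gamma * x[t+1] + gamma^2 * x[t+2] + ...
    out = np.zeros(len(x))
    running = 0.0
    for t in reversed(range(len(x))):
        running = x[t] + gamma * running
        out[t] = running
    return out

def advantages(rewards, values, bootstrap_value, gamma=0.99, lam=1.0):
    # Generalized Advantage Estimation; with lam=1.0 this reduces to
    # "discounted return minus the value estimate", i.e. R - V(s).
    rewards = np.asarray(rewards, dtype=np.float64)
    values_plus = np.append(np.asarray(values, dtype=np.float64), bootstrap_value)
    deltas = rewards + gamma * values_plus[1:] - values_plus[:-1]
    return discount(deltas, gamma * lam)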
~ From here we go asynchronous ~ Each worker begins by setting its network parameters to those of the global network. We can do this by constructing a Tensorflow op which sets each variable in the local worker network to the equivalent variable value in the global network. Each worker then interacts with its own copy of the environment and collects experience. Each keeps a list of experience tuples (observation, action, reward, done, value) that is constantly added to from interactions with the environment. Once the worker’s experience history is large enough, we use it to determine discounted return and advantage, and use those to calculate value and policy losses. We also calculate an entropy (H) of the policy. This corresponds to the spread of action probabilities. If the policy outputs actions with relatively similar probabilities, then entropy will be high, but if the policy suggests a single action with a large probability then entropy will be low. We use the entropy as a means of improving exploration, by encouraging the model to be conservative regarding its sureness of the correct action. A worker then uses these losses to obtain gradients with respect to its network parameters. Each of these gradients are typically clipped in order to prevent overly-large parameter updates which can destabilize the policy. A worker then uses the gradients to update the global network parameters. In this way, the global network is constantly being updated by each of the agents, as they interact with their environment. Once a successful update is made to the global network, the whole process repeats! The worker then resets its own network parameters to those of the global network, and the process begins again. To view the full and functional code, see the Github repository here. The robustness of A3C allows us to tackle a new generation of reinforcement learning challenges, one of which is 3D environments! We have come a long way from multi-armed bandits and grid-worlds, and in this tutorial, I have set up the code to allow for playing through the first VizDoom challenge. VizDoom is a system to allow for RL research using the classic Doom game engine. The maintainers of VizDoom recently created a pip package, so installing it is as simple as: pip install vizdoom Once it is installed, we will be using the basic.wad environment, which is provided in the Github repository, and needs to be placed in the working directory. The challenge consists of controlling an avatar from a first person perspective in a single square room. There is a single enemy on the opposite side of the room, which appears in a random location each episode. The agent can only move to the left or right, and fire a gun. The goal is to shoot the enemy as quickly as possible using as few bullets as possible. The agent has 300 time steps per episode to shoot the enemy. Shooting the enemy yields a reward of 1, and each time step as well as each shot yields a small penalty. After about 500 episodes per worker agent, the network learns a policy to quickly solve the challenge. Feel free to adjust parameters such as learning rate, clipping magnitude, update frequency, etc. to attempt to achieve ever greater performance or utilize A3C in your own RL tasks. I hope this tutorial has been helpful to those new to A3C and asynchronous reinforcement learning! Now go forth and build AIs. 
(There are a lot of moving parts in A3C, so if you discover a bug, or find a better way to do something, please don’t hesitate to bring it up here or in the Github. I am more than happy to incorporate changes and feedback to improve the algorithm.) If you’d like to follow my writing on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on twitter @awjuliani. If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated! More from my Simple Reinforcement Learning with Tensorflow series: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @Unity3D & Cognitive Neuroscience PhD student. Exploring frontier technology through the lens of artificial intelligence, data science, and the shape of things to come " Alexandr Honchar,1.91K,7,https://medium.com/machine-learning-world/neural-networks-for-algorithmic-trading-part-one-simple-time-series-forecasting-f992daa1045a?source=tag_archive---------8----------------,Neural networks for algorithmic trading. Simple time series forecasting,"Ciao, people! This is first part of my experiments on application of deep learning to finance, in particular to algorithmic trading. I want to implement trading system from scratch based only on deep learning approaches, so for any problem we have here (price prediction, trading strategy, risk management) we gonna use different variations of artificial neural networks (ANNs) and check how well they can handle this. Now I plan to work on next sections: I highly recommend you to check out code and IPython Notebook in this repository. In this, first part, I want to show how MLPs, CNNs and RNNs can be used for financial time series prediction. In this part we are not going to use any feature engineering. Let’s just consider historical dataset of S&P 500 index price movements. We have information from 1950 to 2016 about open, close, high, low prices for every day in the year and volume of trades. First, we will try just to predict close price in the end of the next day, second, we will try to predict return (close price — open price). Download the dataset from Yahoo Finance or from this repository. We will consider our problem as 1) regression problem (trying to forecast exactly close price or return next day) 2) binary classification problem (price will go up [1; 0] or down [0; 1]). For training NNs we gonna use framework Keras. First let’s prepare our data for training. We want to predict t+1 value based on N previous days information. For example, having close prices from past 30 days on the market we want to predict, what price will be tomorrow, on the 31st day. We use first 90% of time series as training set (consider it as historical data) and last 10% as testing set for model evaluation. Here is example of loading, splitting into training samples and preprocessing of raw input data: It will be just 2-hidden layer perceptron. Number of hidden neurons is chosen empirically, we will work on hyperparameters optimization in next sections. Between two hidden layers we add one Dropout layer to prevent overfitting. Important thing is Dense(1), Activation(‘linear’) and ‘mse’ in compile section. We want one output that can be in any range (we predict real value) and our loss function is defined as mean squared error. Let’s see what happens if we just pass chunks of 20-days close prices and predict price on 21st day. 
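A sketch consistent with the description above (the hidden-layer sizes, ReLU activations and Adam optimizer are assumptions; the article only pins down the Dropout layer between the two hidden layers, the single linear output and the MSE loss):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation

def make_windows(series, window=20):
    # Turn a 1-D price series into (window -> next value) training pairs.
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X, y

model = Sequential([
    Dense(64, input_dim=20), Activation("relu"),
    Dropout(0.5),
    Dense(16), Activation("relu"),
    Dense(1), Activation("linear"),   # one real-valued output: the next close price
])
model.compile(optimizer="adam", loss="mse")

# Hypothetical usage, with close_prices taken from the downloaded dataset:
# X, y = make_windows(close_prices, window=20)
# model.fit(X[:split], y[:split], epochs=10, batch_size=128, validation_data=(X[split:], y[split:]))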
Final MSE = 46.3635263557, but that number alone is not very informative. Below is a plot of predictions for the first 150 points of the test dataset. The black line is the actual data, the blue one the prediction. We can clearly see that our algorithm is not even close in value, but it can learn the trend. Let’s scale our data using sklearn’s method preprocessing.scale() so that our time series has zero mean and unit variance, and train the same MLP. Now we have MSE = 0.0040424330518 (but it is on scaled data). On the plot below you can see the actual scaled time series (black) and our forecast (blue) for it: To use this model in the real world we have to return to the unscaled time series. We can do this by multiplying our prediction by the standard deviation of the time series we used to make the prediction (the 20 unscaled time steps) and adding its mean value: The MSE in this case equals 937.963649937. Here is the plot of restored predictions (red) and real data (green): Not bad, is it? But let’s try more sophisticated algorithms for this problem! I am not going to dive into the theory of convolutional neural networks; you can check out these amazing resources: Let’s define a 2-layer convolutional neural network (a combination of convolution and max-pooling layers) with one fully-connected layer and the same output as earlier: Let’s check out the results. The MSEs for scaled and restored data are: 0.227074542433; 935.520550172. Plots are below: Even looking at the MSE on scaled data, this network learned much worse. Most probably, the deeper architecture needs more data for training, or it simply overfitted due to too many filters or layers. We will consider this issue later. As a recurrent architecture I want to use two stacked LSTM layers (read more about LSTMs here). Plots of the forecasts are below, MSEs = 0.0246238639582; 939.948636707. The RNN forecast looks more like a moving-average model; it can’t learn and predict all the fluctuations. So, it’s a somewhat unexpected result, but we can see that MLPs work better for this time series forecasting task. Let’s check out what happens if we switch from a regression to a classification problem. Now we will use not close prices but the daily return (close price minus open price), and we want to predict whether the close price is higher or lower than the open price based on the last 20 days of returns. The code changes just a bit: we change our last Dense layer to output [0; 1] or [1; 0] and add a softmax activation to get a probabilistic output. To load binary outputs, change the following line in the code: We also change the loss function to binary cross-entropy and add an accuracy metric. Oh, it’s no better than random guessing (50% accuracy); let’s try something better. Check out the results below. We can see that treating financial time series prediction as a regression problem is the better approach: it can learn the trend and produce prices close to the actual ones. What was surprising to me is that MLPs handle this sequence data better than CNNs or RNNs, which are supposed to work better with time series. I attribute this to the pretty small dataset (~16k time stamps) and naive hyperparameter choices. You can reproduce the results, and improve on them, using the code from the repository. I think we can get better results both in regression and classification using different features (not only the scaled time series), such as technical indicators or trading volume. We can also try more frequent data, say minute-by-minute ticks, to have more training data. All these things I’m going to do later, so stay tuned :) From a quick cheer to a standing ovation, clap to show how much you enjoyed this story.
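As a closing reference, the restoration step described earlier (multiply by the window’s standard deviation and add back its mean) can be sketched like this; the variable names are assumptions:

import numpy as np

def restore_predictions(pred_scaled, raw_windows):
    # raw_windows: shape (n_samples, 20), the unscaled close prices each prediction was based on.
    means = raw_windows.mean(axis=1)
    stds = raw_windows.std(axis=1)
    return pred_scaled.reshape(-1) * stds + means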
🇺🇦 🇮🇹 AI entrepreneur, blogger and researcher. Making machines work 💻, learn 📕 and like 👍, but humans create 🎨, discover 🚀 and love ❤️ The best about Machine Learning, Computer Vision, Deep Learning, Natural language processing and other. " Arthur Juliani,1.7K,8,https://medium.com/@awjuliani/simple-reinforcement-learning-with-tensorflow-part-4-deep-q-networks-and-beyond-8438a3e2b8df?source=tag_archive---------9----------------,Simple Reinforcement Learning with Tensorflow Part 4: Deep Q-Networks and Beyond,"Welcome to the latest installment of my Reinforcement Learning series. In this tutorial we will be walking through the creation of a Deep Q-Network. It will be built upon the simple one layer Q-network we created in Part 0, so I would recommend reading that first if you are new to reinforcement learning. While our ordinary Q-network was able to barely perform as well as the Q-Table in a simple game environment, Deep Q-Networks are much more capable. In order to transform an ordinary Q-Network into a DQN we will be making the following improvements: It was these three innovations that allowed the Google DeepMind team to achieve superhuman performance on dozens of Atari games using their DQN agent. We will be walking through each individual improvement, and showing how to implement it. We won’t stop there though. The pace of Deep Learning research is extremely fast, and the DQN of 2014 is no longer the most advanced agent around anymore. I will discuss two simple additional improvements to the DQN architecture, Double DQN and Dueling DQN, that allow for improved performance, stability, and faster training time. In the end we will have a network that can tackle a number of challenging Atari games, and we will demonstrate how to train the DQN to learn a basic navigation task. Since our agent is going to be learning to play video games, it has to be able to make sense of the game’s screen output in a way that is at least similar to how humans or other intelligent animals are able to. Instead of considering each pixel independently, convolutional layers allow us to consider regions of an image, and maintain spatial relationships between the objects on the screen as we send information up to higher levels of the network. In this way, they act similarly to human receptive fields. Indeed there is a body of research showing that convolutional neural network learn representations that are similar to those of the primate visual cortex. As such, they are ideal for the first few elements within our network. In Tensorflow, we can utilize the tf.contrib.layers.convolution2d function to easily create a convolutional layer. We write for function as follows: Here num_outs refers to how many filters we would like to apply to the previous layer. kernel_size refers to how large a window we would like to slide over the previous layer. Stride refers to how many pixels we want to skip as we slide the window across the layer. Finally, padding refers to whether we want our window to slide over just the bottom layer (“VALID”) or add padding around it (“SAME”) in order to ensure that the convolutional layer has the same dimensions as the previous layer. For more information, see the Tensorflow documentation. The second major addition to make DQNs work is Experience Replay. The basic idea is that by storing an agent’s experiences, and then randomly drawing batches of them to train the network, we can more robustly learn to perform well in the task. 
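(Returning for a moment to the convolutional layer described above, here is a sketch of that call. It assumes a TensorFlow 1.x environment, since tf.contrib was removed in TensorFlow 2.x, and the filter counts, kernel sizes and strides are illustrative rather than the tutorial’s exact values.)

import tensorflow as tf

image_in = tf.placeholder(tf.float32, shape=[None, 84, 84, 3])
conv1 = tf.contrib.layers.convolution2d(
    inputs=image_in,
    num_outputs=32,       # how many filters to apply to the previous layer
    kernel_size=[8, 8],   # how large a window to slide over the previous layer
    stride=[4, 4],        # how many pixels to skip as the window slides
    padding='VALID')      # 'VALID' = no padding, 'SAME' = pad to keep the same dimensions
conv2 = tf.contrib.layers.convolution2d(
    inputs=conv1, num_outputs=64, kernel_size=[4, 4], stride=[2, 2], padding='VALID')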
By keeping the experiences we draw random, we prevent the network from only learning about what it is immediately doing in the environment, and allow it to learn from a more varied array of past experiences. Each of these experiences is stored as a tuple of <state, action, reward, next state>. The Experience Replay buffer stores a fixed number of recent memories, and as new ones come in, old ones are removed. When the time comes to train, we simply draw a uniform batch of random memories from the buffer, and train our network with them. For our DQN, we will build a simple class that handles storing and retrieving memories. The third major addition to the DQN that makes it unique is the utilization of a second network during the training procedure. This second network is used to generate the target-Q values that will be used to compute the loss for every action during training. Why not just use one network for both estimations? The issue is that at every step of training, the Q-network’s values shift, and if we are using a constantly shifting set of values to adjust our network values, then the value estimations can easily spiral out of control. The network can become destabilized by falling into feedback loops between the target and estimated Q-values. In order to mitigate that risk, the target network’s weights are fixed, and only periodically or slowly updated to the primary Q-network’s values. In this way training can proceed in a more stable manner. Instead of updating the target network periodically and all at once, we will be updating it frequently, but slowly. This technique was introduced in another DeepMind paper earlier this year, where they found that it stabilized the training process. With the additions above, we have everything we need to replicate the DQN of 2014. But the world moves fast, and a number of improvements above and beyond the DQN architecture described by DeepMind have allowed for even greater performance and stability. Before training your new DQN on your favorite ATARI game, I would suggest checking the newer additions out. I will provide a description and some code for two of them: Double DQN, and Dueling DQN. Both are simple to implement, and by combining both techniques, we can achieve better performance with faster training times. The main intuition behind Double DQN is that the regular DQN often overestimates the Q-values of the potential actions to take in a given state. While this would be fine if all actions were always overestimated equally, there was reason to believe this wasn’t the case. You can easily imagine that if certain suboptimal actions regularly were given higher Q-values than optimal actions, the agent would have a hard time ever learning the ideal policy. In order to correct for this, the authors of the DDQN paper propose a simple trick: instead of taking the max over Q-values when computing the target-Q value for our training step, we use our primary network to choose an action, and our target network to generate the target Q-value for that action. By decoupling the action choice from the target Q-value generation, we are able to substantially reduce the overestimation, and train faster and more reliably. Below is the new DDQN equation for updating the target value: Q-Target = r + γ·Q(s′, argmax_a Q(s′, a; θ); θ′), where θ are the primary network’s weights and θ′ are the target network’s. In order to explain the reasoning behind the architecture changes that Dueling DQN makes, we first need to explain a few additional reinforcement learning terms. The Q-values that we have been discussing so far correspond to how good it is to take a certain action given a certain state.
This can be written as Q(s,a). This action-value can actually be decomposed into two more fundamental notions of value. The first is the value function V(s), which says simply how good it is to be in any given state. The second is the advantage function A(a), which tells how much better taking a certain action would be compared to the others. We can then think of Q as being the combination of V and A. More formally: Q(s,a) = V(s) + A(a). The goal of Dueling DQN is to have a network that separately computes the advantage and value functions, and combines them back into a single Q-function only at the final layer. It may seem somewhat pointless to do this at first glance. Why decompose a function that we will just put back together? The key to realizing the benefit is to appreciate that our reinforcement learning agent may not need to care about both value and advantage at any given time. For example: imagine sitting outside in a park watching the sunset. It is beautiful, and highly rewarding to be sitting there. No action needs to be taken, and it doesn’t really make sense to think of the value of sitting there as being conditioned on anything beyond the environmental state you are in. We can achieve more robust estimates of state value by decoupling it from the necessity of being attached to specific actions. Now that we have learned all the tricks to get the most out of our DQN, let’s actually try it on a game environment! While the DQN we have described above could learn ATARI games with enough training, getting the network to perform well on those games takes at least a day of training on a powerful machine. For educational purposes, I have built a simple game environment which our DQN learns to master in a couple hours on a moderately powerful machine (I am using a GTX970). In the environment the agent controls a blue square, and the goal is to navigate to the green squares (reward +1) while avoiding the red squares (reward -1). At the start of each episode all squares are randomly placed within a 5x5 grid-world. The agent has 50 steps to achieve as large a reward as possible. Because they are randomly positioned, the agent needs to do more than simply learn a fixed path, as was the case in the FrozenLake environment from Tutorial 0. Instead the agent must learn a notion of spatial relationships between the blocks. And indeed, it is able to do just that! The game environment outputs 84x84x3 color images, and uses function calls as similar to the OpenAI gym as possible. In doing so, it should be easy to modify this code to work on any of the OpenAI atari games. I encourage those with the time and computing resources necessary to try getting the agent to perform well in an ATARI game. The hyperparameters may need some tuning, but it is definitely possible. Good luck! If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated! If you’d like to follow my work on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on twitter @awjuliani. More from my Simple Reinforcement Learning with Tensorflow series: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @Unity3D & Cognitive Neuroscience PhD student.
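For readers who want to see the two additions above in miniature, here is a sketch (not the tutorial’s code; the array names and shapes are assumptions) of the Double DQN target computation and the Dueling combination of value and advantage:

import numpy as np

def double_dqn_targets(rewards, dones, q_primary_next, q_target_next, gamma=0.99):
    # q_primary_next, q_target_next: arrays of shape (batch, n_actions) for the next states.
    # The primary network picks the action; the target network scores it.
    best_actions = np.argmax(q_primary_next, axis=1)
    evaluated = q_target_next[np.arange(len(best_actions)), best_actions]
    # dones is a 0/1 float array marking terminal transitions.
    return rewards + gamma * evaluated * (1.0 - dones)

def dueling_combine(value, advantage):
    # value: (batch, 1), advantage: (batch, n_actions).
    # Subtracting the mean advantage keeps the V/A decomposition identifiable.
    return value + (advantage - advantage.mean(axis=1, keepdims=True))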
" Vishal Maini,32K,10,https://medium.com/machine-learning-for-humans/why-machine-learning-matters-6164faf1df12?source=tag_archive---------0----------------,A Beginner’s Guide to AI/ML 🤖👶 – Machine Learning for Humans – Medium,"Part 1: Why Machine Learning Matters. The big picture of artificial intelligence and machine learning — past, present, and future. Part 2.1: Supervised Learning. Learning with an answer key. Introducing linear regression, loss functions, overfitting, and gradient descent. Part 2.2: Supervised Learning II. Two methods of classification: logistic regression and SVMs. Part 2.3: Supervised Learning III. Non-parametric learners: k-nearest neighbors, decision trees, random forests. Introducing cross-validation, hyperparameter tuning, and ensemble models. Part 3: Unsupervised Learning. Clustering: k-means, hierarchical. Dimensionality reduction: principal components analysis (PCA), singular value decomposition (SVD). Part 4: Neural Networks & Deep Learning. Why, where, and how deep learning works. Drawing inspiration from the brain. Convolutional neural networks (CNNs), recurrent neural networks (RNNs). Real-world applications. Part 5: Reinforcement Learning. Exploration and exploitation. Markov decision processes. Q-learning, policy learning, and deep reinforcement learning. The value learning problem. Appendix: The Best Machine Learning Resources. A curated list of resources for creating your machine learning curriculum. This guide is intended to be accessible to anyone. Basic concepts in probability, statistics, programming, linear algebra, and calculus will be discussed, but it isn’t necessary to have prior knowledge of them to gain value from this series. Artificial intelligence will shape our future more powerfully than any other innovation this century. Anyone who does not understand it will soon find themselves feeling left behind, waking up in a world full of technology that feels more and more like magic. The rate of acceleration is already astounding. After a couple of AI winters and periods of false hope over the past four decades, rapid advances in data storage and computer processing power have dramatically changed the game in recent years. In 2015, Google trained a conversational agent (AI) that could not only convincingly interact with humans as a tech support helpdesk, but also discuss morality, express opinions, and answer general facts-based questions. The same year, DeepMind developed an agent that surpassed human-level performance at 49 Atari games, receiving only the pixels and game score as inputs. Soon after, in 2016, DeepMind obsoleted their own achievement by releasing a new state-of-the-art gameplay method called A3C. Meanwhile, AlphaGo defeated one of the best human players at Go — an extraordinary achievement in a game dominated by humans for two decades after machines first conquered chess. Many masters could not fathom how it would be possible for a machine to grasp the full nuance and complexity of this ancient Chinese war strategy game, with its 10170 possible board positions (there are only 1080atoms in the universe). In March 2017, OpenAI created agents that invented their own language to cooperate and more effectively achieve their goal. Soon after, Facebook reportedly successfully training agents to negotiate and even lie. Just a few days ago (as of this writing), on August 11, 2017, OpenAI reached yet another incredible milestone by defeating the world’s top professionals in 1v1 matches of the online multiplayer game Dota 2. 
Much of our day-to-day technology is powered by artificial intelligence. Point your camera at the menu during your next trip to Taiwan and the restaurant’s selections will magically appear in English via the Google Translate app. Today AI is used to design evidence-based treatment plans for cancer patients, instantly analyze results from medical tests to escalate to the appropriate specialist immediately, and conduct scientific research for drug discovery. In everyday life, it’s increasingly commonplace to discover machines in roles traditionally occupied by humans. Really, don’t be surprised if a little housekeeping delivery bot shows up instead of a human next time you call the hotel desk to send up some toothpaste. In this series, we’ll explore the core machine learning concepts behind these technologies. By the end, you should be able to describe how they work at a conceptual level and be equipped with the tools to start building similar applications yourself. Artificial intelligence is the study of agents that perceive the world around them, form plans, and make decisions to achieve their goals. Its foundations include mathematics, logic, philosophy, probability, linguistics, neuroscience, and decision theory. Many fields fall under the umbrella of AI, such as computer vision, robotics, machine learning, and natural language processing. Machine learning is a subfield of artificial intelligence. Its goal is to enable computers to learn on their own. A machine’s learning algorithm enables it to identify patterns in observed data, build models that explain the world, and predict things without having explicit pre-programmed rules and models. The technologies discussed above are examples of artificial narrow intelligence (ANI), which can effectively perform a narrowly defined task. Meanwhile, we’re continuing to make foundational advances towards human-level artificial general intelligence (AGI), also known as strong AI. The definition of an AGI is an artificial intelligence that can successfully perform any intellectual task that a human being can, including learning, planning and decision-making under uncertainty, communicating in natural language, making jokes, manipulating people, trading stocks, or... reprogramming itself. And this last one is a big deal. Once we create an AI that can improve itself, it will unlock a cycle of recursive self-improvement that could lead to an intelligence explosion over some unknown time period, ranging from many decades to a single day. You may have heard this point referred to as the singularity. The term is borrowed from the gravitational singularity that occurs at the center of a black hole, an infinitely dense one-dimensional point where the laws of physics as we understand them start to break down. A recent report by the Future of Humanity Institute surveyed a panel of AI researchers on timelines for AGI, and found that “researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years” (Grace et al, 2017). We’ve personally spoken with a number of sane and reasonable AI practitioners who predict much longer timelines (the upper limit being “never”), and others whose timelines are alarmingly short — as little as a few years. The advent of greater-than-human-level artificial superintelligence (ASI) could be one of the best or worst things to happen to our species. It carries with it the immense challenge of specifying what AIs will want in a way that is friendly to humans. 
While it’s impossible to say what the future holds, one thing is certain: 2017 is a good time to start understanding how machines think. To go beyond the abstractions of a philosopher in an armchair and intelligently shape our roadmaps and policies with respect to AI, we must engage with the details of how machines see the world — what they “want”, their potential biases and failure modes, their temperamental quirks — just as we study psychology and neuroscience to understand how humans learn, decide, act, and feel. Machine learning is at the core of our journey towards artificial general intelligence, and in the meantime, it will change every industry and have a massive impact on our day-to-day lives. That’s why we believe it’s worth understanding machine learning, at least at a conceptual level — and we designed this series to be the best place to start. You don’t necessarily need to read the series cover-to-cover to get value out of it. Here are three suggestions on how to approach it, depending on your interests and how much time you have: Vishal most recently led growth at Upstart, a lending platform that utilizes machine learning to price credit, automate the borrowing process, and acquire users. He spends his time thinking about startups, applied cognitive science, moral philosophy, and the ethics of artificial intelligence. Samer is a Master’s student in Computer Science and Engineering at UCSD and co-founder of Conigo Labs. Prior to grad school, he founded TableScribe, a business intelligence tool for SMBs, and spent two years advising Fortune 100 companies at McKinsey. Samer previously studied Computer Science and Ethics, Politics, and Economics at Yale. Most of this series was written during a 10-day trip to the United Kingdom in a frantic blur of trains, planes, cafes, pubs and wherever else we could find a dry place to sit. Our aim was to solidify our own understanding of artificial intelligence, machine learning, and how the methods therein fit together — and hopefully create something worth sharing in the process. And now, without further ado, let’s dive into machine learning with Part 2.1: Supervised Learning! More from Machine Learning for Humans 🤖👶 A special thanks to Jonathan Eng, Edoardo Conti, Grant Schneider, Sunny Kumar, Stephanie He, Tarun Wadhwa, and Sachin Maini (series editor) for their significant contributions and feedback. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Research comms @DeepMindAI. Previously @Upstart, @Yale, @TrueVenturesTEC. Demystifying artificial intelligence & machine learning. Discussions on safe and intentional application of AI for positive social impact. " Tim Anglade,7K,23,https://medium.com/@timanglade/how-hbos-silicon-valley-built-not-hotdog-with-mobile-tensorflow-keras-react-native-ef03260747f3?source=tag_archive---------1----------------,"How HBO’s Silicon Valley built “Not Hotdog” with mobile TensorFlow, Keras & React Native","The HBO show Silicon Valley released a real AI app that identifies hotdogs — and not hotdogs — like the one shown on season 4’s 4th episode (the app is now available on Android as well as iOS!) To achieve this, we designed a bespoke neural architecture that runs directly on your phone, and trained it with Tensorflow, Keras & Nvidia GPUs. While the use-case is farcical, the app is an approachable example of both deep learning, and edge computing. All AI work is powered 100% by the user’s device, and images are processed without ever leaving their phone. 
This provides users with a snappier experience (no round trip to the cloud), offline availability, and better privacy. This also allows us to run the app at a cost of $0, even under the load of a million users, providing significant savings compared to traditional cloud-based AI approaches. The app was developed in-house by the show, by a single developer, running on a single laptop & attached GPU, using hand-curated data. In that respect, it may provide a sense of what can be achieved today, with a limited amount of time & resources, by non-technical companies, individual developers, and hobbyists alike. In that spirit, this article attempts to give a detailed overview of steps involved to help others build their own apps. If you haven’t seen the show or tried the app (you should!), the app lets you snap a picture and then tells you whether it thinks that image is of a hotdog or not. It’s a straightforward use-case, that pays homage to recent AI research and applications, in particular ImageNet. While we’ve probably dedicated more engineering resources to recognizing hotdogs than anyone else, the app still fails in horrible and/or subtle ways. Conversely, it’s also sometimes able to recognize hotdogs in complex situations... According to Engadget, “It’s incredible. I’ve had more success identifying food with the app in 20 minutes than I have had tagging and identifying songs with Shazam in the past two years.” Have you ever found yourself reading Hacker News, thinking “they raised a 10M series A for that? I could build it in one weekend!” This app probably feels a lot like that, and the initial prototype was indeed built in a single weekend using Google Cloud Platform’s Vision API, and React Native. But the final app we ended up releasing on the app store required months of additional (part-time) work, to deliver meaningful improvements that would be difficult for an outsider to appreciate. We spent weeks optimizing overall accuracy, training time, inference time, iterating on our setup & tooling so we could have a faster development iterations, and spent a whole weekend optimizing the user experience around iOS & Android permissions (don’t even get me started on that one). All too often technical blog posts or academic papers skip over this part, preferring to present the final chosen solution. In the interest of helping others learn from our mistake & choices, we will present an abridged view of the approaches that didn’t work for us, before we describe the final architecture we ended up shipping in the next section. We chose React Native to build the prototype as it would give us an easy sandbox to experiment with, and would help us quickly support many devices. The experience ended up being a good one and we kept React Native for the remainder of the project: it didn’t always make things easy, and the design for the app was purposefully limited, but in the end React Native got the job done. The other main component we used for the prototype — Google Cloud’s Vision API was quickly abandoned. There were 3 main factors: For these reasons, we started experimenting with what’s trendily called “edge computing”, which for our purposes meant that after training our neural network on our laptop, we would export it and embed it directly into our mobile app, so that the neural network execution phase (or inference) would run directly inside the user’s phone. 
Through a chance encounter with Pete Warden of the TensorFlow team, we had become aware of its ability to run TensorFlow directly embedded on an iOS device, and started exploring that path. After React Native, TensorFlow became the second fixed part of our stack. It only took a day of work to integrate TensorFlow’s Objective-C++ camera example in our React Native shell. It took slightly longer to use their transfer learning script, which helps you retrain the Inception architecture to deal with a more specific image problem. Inception is the name of a family of neural architectures built by Google to deal with image recognition problems. Inception is available “pre-trained” which means the training phase has been completed and the weights are set. Most often for image recognition networks, they have been trained on ImageNet, a dataset containing over 20,000 different types of objects (hotdogs are one of them). However, much like Google Cloud’s Vision API, ImageNet training rewards breadth as much as depth here, and out-of-the-box accuracy on a single one of the 20,000+ categories can be lacking. As such, retraining (also called “transfer learning”) aims to take a full-trained neural net, and retrain it to perform better on the specific problem you’d like to handle. This usually involves some degree of “forgetting”, either by excising entire layers from the stack, or by slowly erasing the network’s ability to distinguish a type of object (e.g. chairs) in favor of better accuracy at recognizing the one you care about (i.e. hotdogs). While the network (Inception in this case) may have been trained on the 14M images contained in ImageNet, we were able to retrain it on a just a few thousand hotdog images to get drastically enhanced hotdog recognition. The big advantage of transfer learning are you will get better results much faster, and with less data than if you train from scratch. A full training might take months on multiple GPUs and require millions of images, while retraining can conceivably be done in hours on a laptop with a couple thousand images. One of the biggest challenges we encountered was understanding exactly what should count as a hotdog and what should not. Defining what a “hotdog” is ends up being surprisingly difficult (do cut up sausages count, and if so, which kinds?) and subject to cultural interpretation. Similarly, the “open world” nature of our problem meant we had to deal with an almost infinite number of inputs. While certain computer-vision problems have relatively limited inputs (say, x-rays of bolts with or without a mechanical default), we had to prepare the app to be fed selfies, nature shots and any number of foods. Suffice to say, this approach was promising, and did lead to some improved results, however, it had to be abandoned for a couple of reasons. First The nature of our problem meant a strong imbalance in training data: there are many more examples of things that are not hotdogs, than things that are hotdogs. In practice this means that if you train your algorithm on 3 hotdog images and 97 non-hotdog images, and it recognizes 0% of the former but 100% of the latter, it will still score 97% accuracy by default! This was not straightforward to solve out of the box using TensorFlow’s retrain tool, and basically necessitated setting up a deep learning model from scratch, import weights, and train in a more controlled manner. 
At this point we decided to bite the bullet and get something started with Keras, a deep learning library that provides nicer, easier-to-use abstractions on top of TensorFlow, including pretty awesome training tools, and a class_weights option which is ideal to deal with this sort of dataset imbalance we were dealing with. We used that opportunity to try other popular neural architectures like VGG, but one problem remained. None of them could comfortably fit on an iPhone. They consumed too much memory, which led to app crashes, and would sometime takes up to 10 seconds to compute, which was not ideal from a UX standpoint. Many things were attempted to mitigate that, but in the end it these architectures were just too big to run efficiently on mobile. To give you a context out of time, this was roughly the mid-way point of the project. By that time, the UI was 90%+ done and very little of it was going to change. But in hindsight, the neural net was at best 20% done. We had a good sense of challenges & a good dataset, but 0 lines of the final neural architecture had been written, none of our neural code could reliably run on mobile, and even our accuracy was going to improve drastically in the weeks to come. The problem directly ahead of us was simple: if Inception and VGG were too big, was there a simpler, pre-trained neural network we could retrain? At the suggestion of the always excellent Jeremy P. Howard (where has that guy been all our life?), we explored Xception, Enet and SqueezeNet. We quickly settled on SqueezeNet due to its explicit positioning as a solution for embedded deep learning, and the availability of a pre-trained Keras model on GitHub (yay open-source). So how big of a difference does this make? An architecture like VGG uses about 138 million parameters (essentially the number of numbers necessary to model the neurons and values between them). Inception is already a massive improvement, requiring only 23 million parameters. SqueezeNet, in comparison only requires 1.25 million. This has two advantages: There are tradeoffs of course: During this phase, we started experimenting with tuning the neural network architecture. In particular, we started using Batch Normalization and trying different activation functions. After adding Batch Normalization and ELU to SqueezeNet, we were able to train neural network that achieve 90%+ accuracy when training from scratch, however, they were relatively brittle meaning the same network would overfit in some cases, or underfit in others when confronted to real-life testing. Even adding more examples to the dataset and playing with data augmentation failed to deliver a network that met expectations. So while this phase was promising, and for the first time gave us a functioning app that could work entirely on an iPhone, in less than a second, we eventually moved to our 4th & final architecture. Our final architecture was spurred in large part by the publication on April 17 of Google’s MobileNets paper, promising a new neural architecture with Inception-like accuracy on simple problems like ours, with only 4M or so parameters. This meant it sat in an interesting sweet spot between a SqueezeNet that had maybe been overly simplistic for our purposes, and the possibly overwrought elephant-trying-to-squeeze-in-a-tutu of using Inception or VGG on Mobile. The paper introduced some capacity to tune the size & complexity of network specifically to trade memory/CPU consumption against accuracy, which was very much top of mind for us at the time. 
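To make the class_weights idea mentioned at the start of this section concrete, here is a rough illustration; this is not the app’s actual training code, the model, data arrays and epoch count are placeholders, and the exact 49:1 weighting is described further below.

from keras.models import Sequential
from keras.layers import Dense, Flatten

model = Sequential([
    Flatten(input_shape=(128, 128, 3)),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid"),      # 1 = hotdog, 0 = not hotdog
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical fit call: the rare "hotdog" class is weighted 49x more heavily,
# so a trivial "always not hotdog" predictor no longer scores well.
# model.fit(X_train, y_train,
#           class_weight={0: 1.0, 1: 49.0},
#           epochs=20, batch_size=128)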
With less than a month to go before the app had to launch we endeavored to reproduce the paper’s results. This was entirely anticlimactic as within a day of the paper being published a Keras implementation was already offered publicly on GitHub by Refik Can Malli, a student at Istanbul Technical University, whose work we had already benefitted from when we took inspiration from his excellent Keras SqueezeNet implementation. The depth & openness of the deep learning community, and the presence of talented minds like R.C. is what makes deep learning viable for applications today — but they also make working in this field more thrilling than any tech trend we’ve been involved with. Our final architecture ended up making significant departures from the MobileNets architecture or from convention, in particular: So how does this stack work exactly? Deep Learning often gets a bad rap for being a “black box”, and while it’s true many components of it can be mysterious, the networks we use often leak information about how some of their magic work. We can look at the layers of this stack and how they activate on specific input images, giving us a sense of each layer’s ability to recognize sausage, buns, or other particularly salient hotdog features. Data quality was of the utmost importance. A neural network can only be as good as the data that trained it, and improving training set quality was probably one of the top 3 things we spent time on during this project. The key things we did to improve this were: The final composition of our dataset was 150k images, of which only 3k were hotdogs: there are only so many hotdogs you can look at, but there are many not hotdogs to look at. The 49:1 imbalance was dealt with by saying a Keras class weight of 49:1 in favor of hotdogs. Of the remaining 147k images, most were of food, with just 3k photos of non-food items, to help the network generalize a bit more and not get tricked into seeing a hotdog if presented with an image of a human in a red outfit. Our data augmentation rules were as follows: These numbers were derived intuitively, based on experiments and our understanding of the real-life usage of our app, as opposed to careful experimentation. The final key to our data pipeline was using Patrick Rodriguez’s multiprocess image data generator for Keras. While Keras does have a built-in multi-threaded and multiprocess implementation, we found Patrick’s library to be consistently faster in our experiments, for reasons we did not have time to investigate. This library cut our training time to a third of what it used to be. The network was trained using a 2015 MacBook Pro and attached external GPU (eGPU), specifically an Nvidia GTX 980 Ti (we’d probably buy a 1080 Ti if we were starting today). We were able to train the network on batches of 128 images at a time. The network was trained for a total of 240 epochs, meaning we ran all 150k images through the network 240 times. This took about 80 hours. We trained the network in 3 phases: While learning rates were identified by running the linear experiment recommended by the CLR paper, they seem to intuitively make sense, in that the max for each phase is within a factor of 2 of the previous minimum, which is aligned with the industry standard recommendation of halving your learning rate if your accuracy plateaus during training. In the interest of time we performed some training runs on a Paperspace P5000 instance running Ubuntu. 
In those cases, we were able to double the batch size, and found that optimal learning rates for each phase were roughly double as well. Even having designed a relatively compact neural architecture, and having trained it to handle situations it may find in a mobile context, we had a lot of work left to make it run properly. Trying to run a top-of-the-line neural net architecture out of the box can quickly burns hundreds megabytes of RAM, which few mobile devices can spare today. Beyond network optimizations, it turns out the way you handle images or even load TensorFlow itself can have a huge impact on how quickly your network runs, how little RAM it uses, and how crash-free the experience will be for your users. This was maybe the most mysterious part of this project. Relatively little information can be found about it, possibly due to the dearth of production deep learning applications running on mobile devices as of today. However, we must commend the Tensorflow team, and particularly Pete Warden, Andrew Harp and Chad Whipkey for the existing documentation and their kindness in answering our inquiries. Instead of using TensorFlow on iOS, we looked at using Apple’s built-in deep learning libraries instead (BNNS, MPSCNN and later on, CoreML). We would have designed the network in Keras, trained it with TensorFlow, exported all the weight values, re-implemented the network with BNNS or MPSCNN (or imported it via CoreML), and loaded the parameters into that new implementation. However, the biggest obstacle was that these new Apple libraries are only available on iOS 10+, and we wanted to support older versions of iOS. As iOS 10+ adoption and these frameworks continue to improve, there may not be a case for using TensorFlow on device in the near future. If you think injecting JavaScript into your app on the fly is cool, try injecting neural nets into your app! The last production trick we used was to leverage CodePush and Apple’s relatively permissive terms of service, to live-inject new versions of our neural networks after submission to the app store. While this was mostly done to help us quickly deliver accuracy improvements to our users after release, you could conceivably use this approach to drastically expand or alter the feature set of your app without going through an app store review again. There are a lot of things that didn’t work or we didn’t have time to do, and these are the ideas we’d investigate in the future: Finally, we’d be remiss not to mention the obvious and important influence of User Experience, Developer Experience and built-in biases in developing an AI app. Each probably deserve their own post (or their own book) but here are the very concrete impacts of these 3 things in our experience. UX (User Experience) is arguably more critical at every stage of the development of an AI app than for a traditional application. There are no Deep Learning algorithms that will give you perfect results right now, but there are many situations where the right mix of Deep Learning + UX will lead to results that are indistinguishable from perfect. Proper UX expectations are irreplaceable when it comes to setting developers on the right path to design their neural networks, setting the proper expectations for users when they use the app, and gracefully handling the inevitable AI failures. 
Building AI apps without a UX-first mindset is like training a neural net without Stochastic Gradient Descent: you will end up stuck in the local minima of the Uncanny Valley on your way to building the perfect AI use-case. DX (Developer Experience) is extremely important as well, because deep learning training time is the new horsing around while waiting for your program to compile. We suggest you heavily favor DX first (hence Keras), as it’s always possible to optimize runtime for later runs (manual GPU parallelization, multi-process data augmentation, TensorFlow pipeline, even re-implementing for caffe2 / pyTorch). Even projects with relatively obtuse APIs & documentation like TensorFlow greatly improve DX by providing a highly-tested, highly-used, well-maintained environment for training & running neural networks. For the same reason, it’s hard to beat both the cost as well as the flexibility of having your own local GPU for development. Being able to look at / edit images locally, edit code with your preferred tool without delays greatly improves the development quality & speed of building AI projects. Most AI apps will hit more critical cultural biases than ours, but as an example, even our straightforward use-case, caught us flat-footed with built-in biases in our initial dataset, that made the app unable to recognize French-style hotdogs, Asian hotdogs, and more oddities we did not have immediate personal experience with. It’s critical to remember that AI do not make “better” decisions than humans — they are infected by the same human biases we fall prey to, via the training sets humans provide. Thanks to: Mike Judge, Alec Berg, Clay Tarver, Todd Silverstein, Jonathan Dotan, Lisa Schomas, Amy Solomon, Dorothy Street & Rich Toyon, and all the writers of the show — the app would simply not exist without them.Meaghan, Dana, David, Jay, and everyone at HBO. Scale Venture Partners & GitLab. Rachel Thomas and Jeremy Howard & Fast AI for all that they have taught me, and for kindly reviewing a draft of this post. Check out their free online Deep Learning course, it’s awesome! JP Simard for his help on iOS. And finally, the TensorFlow team & r/MachineLearning for their help & inspiration. ... And thanks to everyone who used & shared the app! It made staring at pictures of hotdogs for months on end totally worth it 😅 From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. A.I., Startups & HBO’s Silicon Valley. Get in touch: timanglade@gmail.com " Dhruv Parthasarathy,4.3K,12,https://blog.athelas.com/a-brief-history-of-cnns-in-image-segmentation-from-r-cnn-to-mask-r-cnn-34ea83205de4?source=tag_archive---------2----------------,A Brief History of CNNs in Image Segmentation: From R-CNN to Mask R-CNN,"At Athelas, we use Convolutional Neural Networks(CNNs) for a lot more than just classification! In this post, we’ll see how CNNs can be used, with great results, in image instance segmentation. Ever since Alex Krizhevsky, Geoff Hinton, and Ilya Sutskever won ImageNet in 2012, Convolutional Neural Networks(CNNs) have become the gold standard for image classification. In fact, since then, CNNs have improved to the point where they now outperform humans on the ImageNet challenge! While these results are impressive, image classification is far simpler than the complexity and diversity of true human visual understanding. In classification, there’s generally an image with a single object as the focus and the task is to say what that image is (see above). 
But when we look at the world around us, we carry out far more complex tasks. We see complicated sights with multiple overlapping objects, and different backgrounds and we not only classify these different objects but also identify their boundaries, differences, and relations to one another! Can CNNs help us with such complex tasks? Namely, given a more complicated image, can we use CNNs to identify the different objects in the image, and their boundaries? As has been shown by Ross Girshick and his peers over the last few years, the answer is conclusively yes. Through this post, we’ll cover the intuition behind some of the main techniques used in object detection and segmentation and see how they’ve evolved from one implementation to the next. In particular, we’ll cover R-CNN (Regional CNN), the original application of CNNs to this problem, along with its descendants Fast R-CNN, and Faster R-CNN. Finally, we’ll cover Mask R-CNN, a paper released recently by Facebook Research that extends such object detection techniques to provide pixel level segmentation. Here are the papers referenced in this post: Inspired by the research of Hinton’s lab at the University of Toronto, a small team at UC Berkeley, led by Professor Jitendra Malik, asked themselves what today seems like an inevitable question: Object detection is the task of finding the different objects in an image and classifying them (as seen in the image above). The team, comprised of Ross Girshick (a name we’ll see again), Jeff Donahue, and Trevor Darrel found that this problem can be solved with Krizhevsky’s results by testing on the PASCAL VOC Challenge, a popular object detection challenge akin to ImageNet. They write, Let’s now take a moment to understand how their architecture, Regions With CNNs (R-CNN) works. Understanding R-CNN The goal of R-CNN is to take in an image, and correctly identify where the main objects (via a bounding box) in the image. But how do we find out where these bounding boxes are? R-CNN does what we might intuitively do as well - propose a bunch of boxes in the image and see if any of them actually correspond to an object. R-CNN creates these bounding boxes, or region proposals, using a process called Selective Search which you can read about here. At a high level, Selective Search (shown in the image above) looks at the image through windows of different sizes, and for each size tries to group together adjacent pixels by texture, color, or intensity to identify objects. Once the proposals are created, R-CNN warps the region to a standard square size and passes it through to a modified version of AlexNet (the winning submission to ImageNet 2012 that inspired R-CNN), as shown above. On the final layer of the CNN, R-CNN adds a Support Vector Machine (SVM) that simply classifies whether this is an object, and if so what object. This is step 4 in the image above. Improving the Bounding Boxes Now, having found the object in the box, can we tighten the box to fit the true dimensions of the object? We can, and this is the final step of R-CNN. R-CNN runs a simple linear regression on the region proposal to generate tighter bounding box coordinates to get our final result. Here are the inputs and outputs of this regression model: So, to summarize, R-CNN is just the following steps: R-CNN works really well, but is really quite slow for a few simple reasons: In 2015, Ross Girshick, the first author of R-CNN, solved both these problems, leading to the second algorithm in our short history - Fast R-CNN. 
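Before moving on, here is a small sketch of the parameterization typically used for that final bounding-box regression step (a standard formulation with a hypothetical helper name; the paper’s exact training code is not reproduced here):

import numpy as np

def bbox_regression_targets(proposal, ground_truth):
    # Boxes are (center_x, center_y, width, height).
    px, py, pw, ph = proposal
    gx, gy, gw, gh = ground_truth
    tx = (gx - px) / pw          # shift of the center, scaled by the proposal's size
    ty = (gy - py) / ph
    tw = np.log(gw / pw)         # log-scale change in width and height
    th = np.log(gh / ph)
    return np.array([tx, ty, tw, th])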
Let’s now go over its main insights. Fast R-CNN Insight 1: RoI (Region of Interest) Pooling For the forward pass of the CNN, Girshick realized that for each image, a lot of proposed regions for the image invariably overlapped causing us to run the same CNN computation again and again (~2000 times!). His insight was simple — Why not run the CNN just once per image and then find a way to share that computation across the ~2000 proposals? This is exactly what Fast R-CNN does using a technique known as RoIPool (Region of Interest Pooling). At its core, RoIPool shares the forward pass of a CNN for an image across its subregions. In the image above, notice how the CNN features for each region are obtained by selecting a corresponding region from the CNN’s feature map. Then, the features in each region are pooled (usually using max pooling). So all it takes us is one pass of the original image as opposed to ~2000! Fast R-CNN Insight 2: Combine All Models into One Network The second insight of Fast R-CNN is to jointly train the CNN, classifier, and bounding box regressor in a single model. Where earlier we had different models to extract image features (CNN), classify (SVM), and tighten bounding boxes (regressor), Fast R-CNN instead used a single network to compute all three. You can see how this was done in the image above. Fast R-CNN replaced the SVM classifier with a softmax layer on top of the CNN to output a classification. It also added a linear regression layer parallel to the softmax layer to output bounding box coordinates. In this way, all the outputs needed came from one single network! Here are the inputs and outputs to this overall model: Even with all these advancements, there was still one remaining bottleneck in the Fast R-CNN process — the region proposer. As we saw, the very first step to detecting the locations of objects is generating a bunch of potential bounding boxes or regions of interest to test. In Fast R-CNN, these proposals were created using Selective Search, a fairly slow process that was found to be the bottleneck of the overall process. In the middle 2015, a team at Microsoft Research composed of Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, found a way to make the region proposal step almost cost free through an architecture they (creatively) named Faster R-CNN. The insight of Faster R-CNN was that region proposals depended on features of the image that were already calculated with the forward pass of the CNN (first step of classification). So why not reuse those same CNN results for region proposals instead of running a separate selective search algorithm? Indeed, this is just what the Faster R-CNN team achieved. In the image above, you can see how a single CNN is used to both carry out region proposals and classification. This way, only one CNN needs to be trained and we get region proposals almost for free! The authors write: Here are the inputs and outputs of their model: How the Regions are Generated Let’s take a moment to see how Faster R-CNN generates these region proposals from CNN features. Faster R-CNN adds a Fully Convolutional Network on top of the features of the CNN creating what’s known as the Region Proposal Network. The Region Proposal Network works by passing a sliding window over the CNN feature map and at each window, outputting k potential bounding boxes and scores for how good each of those boxes is expected to be. What do these k boxes represent? 
Intuitively, we know that objects in an image should fit certain common aspect ratios and sizes. For instance, we know that we want some rectangular boxes that resemble the shapes of humans. Likewise, we know we won’t see many boxes that are very very thin. In such a way, we create k such common aspect ratios we call anchor boxes. For each such anchor box, we output one bounding box and score per position in the image. With these anchor boxes in mind, let’s take a look at the inputs and outputs to this Region Proposal Network: We then pass each such bounding box that is likely to be an object into Fast R-CNN to generate a classification and tightened bounding boxes. So far, we’ve seen how we’ve been able to use CNN features in many interesting ways to effectively locate different objects in an image with bounding boxes. Can we extend such techniques to go one step further and locate exact pixels of each object instead of just bounding boxes? This problem, known as image segmentation, is what Kaiming He and a team of researchers, including Girshick, explored at Facebook AI using an architecture known as Mask R-CNN. Much like Fast R-CNN, and Faster R-CNN, Mask R-CNN’s underlying intuition is straight forward. Given that Faster R-CNN works so well for object detection, could we extend it to also carry out pixel level segmentation? Mask R-CNN does this by adding a branch to Faster R-CNN that outputs a binary mask that says whether or not a given pixel is part of an object. The branch (in white in the above image), as before, is just a Fully Convolutional Network on top of a CNN based feature map. Here are its inputs and outputs: But the Mask R-CNN authors had to make one small adjustment to make this pipeline work as expected. RoiAlign - Realigning RoIPool to be More Accurate When run without modifications on the original Faster R-CNN architecture, the Mask R-CNN authors realized that the regions of the feature map selected by RoIPool were slightly misaligned from the regions of the original image. Since image segmentation requires pixel level specificity, unlike bounding boxes, this naturally led to inaccuracies. The authors were able to solve this problem by cleverly adjusting RoIPool to be more precisely aligned using a method known as RoIAlign. Imagine we have an image of size 128x128 and a feature map of size 25x25. Let’s imagine we want features the region corresponding to the top-left 15x15 pixels in the original image (see above). How might we select these pixels from the feature map? We know each pixel in the original image corresponds to ~ 25/128 pixels in the feature map. To select 15 pixels from the original image, we just select 15 * 25/128 ~= 2.93 pixels. In RoIPool, we would round this down and select 2 pixels causing a slight misalignment. However, in RoIAlign, we avoid such rounding. Instead, we use bilinear interpolation to get a precise idea of what would be at pixel 2.93. This, at a high level, is what allows us to avoid the misalignments caused by RoIPool. Once these masks are generated, Mask R-CNN combines them with the classifications and bounding boxes from Faster R-CNN to generate such wonderfully precise segmentations: If you’re interested in trying out these algorithms yourselves, here are relevant repositories: Faster R-CNN Mask R-CNN In just 3 years, we’ve seen how the research community has progressed from Krizhevsky et. al’s original result to R-CNN, and finally all the way to such powerful results as Mask R-CNN. 
Seen in isolation, results like Mask R-CNN seem like incredible leaps of genius that would be unapproachable. Yet, through this post, I hope you’ve seen how such advancements are really the sum of intuitive, incremental improvements through years of hard work and collaboration. The ideas proposed by R-CNN, Fast R-CNN, Faster R-CNN, and finally Mask R-CNN were not necessarily quantum leaps, yet taken together they have led to really remarkable results that bring us closer to a human-level understanding of sight. What particularly excites me is that the time between R-CNN and Mask R-CNN was just three years! With continued funding, focus, and support, how much further can Computer Vision improve over the next three years? If you see any errors or issues in this post, please contact me at dhruv@getathelas.com and I’ll immediately correct them! If you’re interested in applying such techniques, come join us at Athelas, where we apply Computer Vision to blood diagnostics daily: Other posts we’ve written: Thanks to Bharath Ramsundar, Pranav Ramkrishnan, Tanay Tandon, and Oliver Cameron for help with this post! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. @dhruvp. VP Eng @Athelas. MIT Math and CS Undergrad ’13. MIT CS Masters ’14. Previously: Director of AI Programs @ Udacity. Blood Diagnostics through Deep Learning http://athelas.com " Sebastian Heinz,4.4K,13,https://medium.com/mlreview/a-simple-deep-learning-model-for-stock-price-prediction-using-tensorflow-30505541d877?source=tag_archive---------3----------------,A simple deep learning model for stock price prediction using TensorFlow,"For a recent hackathon that we did at STATWORX, some of our team members scraped minutely S&P 500 data from the Google Finance API. The data consisted of index as well as stock prices of the S&P’s 500 constituents. Having this data at hand, the idea of developing a deep learning model for predicting the S&P 500 index based on the 500 constituents’ prices one minute ago came immediately to mind. Playing around with the data and building the deep learning model with TensorFlow was fun and so I decided to write my first Medium.com story: a little TensorFlow tutorial on predicting S&P 500 stock prices. What you will read is not an in-depth tutorial, but more a high-level introduction to the important building blocks and concepts of TensorFlow models. The Python code I’ve created is not optimized for efficiency but for understandability. The dataset I’ve used can be downloaded from here (40MB). Our team exported the scraped stock data from our scraping server as a csv file. The dataset contains n = 41266 minutes of data ranging from April to August 2017 on 500 stocks as well as the total S&P 500 index price. Index and stocks are arranged in wide format. The data was already cleaned and prepared, meaning missing stock and index prices were LOCF’ed (last observation carried forward), so that the file did not contain any missing values. A quick look at the S&P time series using pyplot.plot(data['SP500']): Note: This is actually the lead of the S&P 500 index, meaning its value is shifted 1 minute into the future. This operation is necessary since we want to predict the next minute of the index and not the current minute. The dataset was split into training and test data. The training data contained 80% of the total dataset. The data was not shuffled but sequentially sliced. The training data ranges from April to approx. 
end of July 2017, the test data ends end of August 2017. There are a lot of different approaches to time series cross validation, such as rolling forecasts with and without refitting or more elaborate concepts such as time series bootstrap resampling. The latter involves repeated samples from the remainder of the seasonal decomposition of the time series in order to simulate samples that follow the same seasonal pattern as the original time series but are not exact copies of its values. Most neural network architectures benefit from scaling the inputs (sometimes also the output). Why? Because most common activation functions of the network’s neurons such as tanh or sigmoid are defined on the [-1, 1] or [0, 1] interval respectively. Nowadays, rectified linear unit (ReLU) activations are commonly used activations which are unbounded on the axis of possible activation values. However, we will scale both the inputs and targets anyway. Scaling can be easily accomplished in Python using sklearn’s MinMaxScaler. Remark: Caution must be undertaken regarding what part of the data is scaled and when. A common mistake is to scale the whole dataset before training and test split are being applied. Why is this a mistake? Because scaling invokes the calculation of statistics e.g. the min/max of a variable. When performing time series forecasting in real life, you do not have information from future observations at the time of forecasting. Therefore, calculation of scaling statistics has to be conducted on training data and must then be applied to the test data. Otherwise, you use future information at the time of forecasting which commonly biases forecasting metrics in a positive direction. TensorFlow is a great piece of software and currently the leading deep learning and neural network computation framework. It is based on a C++ low level backend but is usually controlled via Python (there is also a neat TensorFlow library for R, maintained by RStudio). TensorFlow operates on a graph representation of the underlying computational task. This approach allows the user to specify mathematical operations as elements in a graph of data, variables and operators. Since neural networks are actually graphs of data and mathematical operations, TensorFlow is just perfect for neural networks and deep learning. Check out this simple example (stolen from our deep learning introduction from our blog): In the figure above, two numbers are supposed to be added. Those numbers are stored in two variables, a and b. The two values are flowing through the graph and arrive at the square node, where they are being added. The result of the addition is stored into another variable, c. Actually, a, b and c can be considered as placeholders. Any numbers that are fed into a and b get added and are stored into c. This is exactly how TensorFlow works. The user defines an abstract representation of the model (neural network) through placeholders and variables. Afterwards, the placeholders get ""filled"" with real data and the actual computations take place. The following code implements the toy example from above in TensorFlow: After having imported the TensorFlow library, two placeholders are defined using tf.placeholder(). They correspond to the two blue circles on the left of the image above. Afterwards, the mathematical addition is defined via tf.add(). The result of the computation is c = 9. With placeholders set up, the graph can be executed with any integer value for a and b. Of course, the former problem is just a toy example. 
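A minimal version of that a + b toy example, written against the TF 1.x API used throughout this article, looks like this. The concrete inputs (5 and 4) are assumptions chosen so that the result is c = 9.

import tensorflow as tf   # TF 1.x API

a = tf.placeholder(dtype=tf.int8)
b = tf.placeholder(dtype=tf.int8)
c = tf.add(a, b)                       # the graph: c = a + b

with tf.Session() as sess:
    # The placeholders are "filled" only when the graph is executed.
    result = sess.run(c, feed_dict={a: 5, b: 4})
print(result)                          # 9

Nothing is computed when a, b and c are defined; the addition only happens inside sess.run(), once the placeholders are fed with real values.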
The required graphs and computations in a neural network are much more complex. As mentioned before, it all starts with placeholders. We need two placeholders in order to fit our model: X contains the network's inputs (the stock prices of all S&P 500 constituents at time T = t) and Y the network's outputs (the index value of the S&P 500 at time T = t + 1). The shape of the placeholders corresponds to [None, n_stocks] with [None] meaning that the inputs are a 2-dimensional matrix and the outputs are a 1-dimensional vector. It is crucial to understand which input and output dimensions the neural net needs in order to design it properly. The None argument indicates that at this point we do not yet know the number of observations that flow through the neural net graph in each batch, so we keep it flexible. We will later define the variable batch_size that controls the number of observations per training batch. Besides placeholders, variables are another cornerstone of the TensorFlow universe. While placeholders are used to store input and target data in the graph, variables are used as flexible containers within the graph that are allowed to change during graph execution. Weights and biases are represented as variables in order to adapt during training. Variables need to be initialized prior to model training. We will get into that a little later in more detail. The model consists of four hidden layers. The first layer contains 1024 neurons, slightly more than double the size of the inputs. Subsequent hidden layers are always half the size of the previous layer, which means 512, 256 and finally 128 neurons. A reduction of the number of neurons for each subsequent layer compresses the information the network identifies in the previous layers. Of course, other network architectures and neuron configurations are possible but are out of scope for this introduction-level article. It is important to understand the required variable dimensions between input, hidden and output layers. As a rule of thumb in multilayer perceptrons (MLPs, the type of networks used here), the second dimension of the previous layer is the first dimension in the current layer for weight matrices. This might sound complicated but is essentially just each layer passing its output as input to the next layer. The bias dimension equals the second dimension of the current layer’s weight matrix, which corresponds to the number of neurons in this layer. After defining the required weight and bias variables, the network topology, i.e. the architecture of the network, needs to be specified. Here, placeholders (data) and variables (weights and biases) need to be combined into a system of sequential matrix multiplications. Furthermore, the hidden layers of the network are transformed by activation functions. Activation functions are important elements of the network architecture since they introduce non-linearity to the system. There are dozens of possible activation functions out there; one of the most common is the rectified linear unit (ReLU), which will also be used in this model. The image below illustrates the network architecture. The model consists of three major building blocks: the input layer, the hidden layers and the output layer. This architecture is called a feedforward network. Feedforward indicates that the batch of data solely flows from left to right. Other network architectures, such as recurrent neural networks, also allow data flowing “backwards” in the network. 
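A condensed sketch of this graph definition, following the dimensions described above (TF 1.x API; the loop over the layer sizes, the zero bias initialization and n_stocks = 500 are my assumptions, not the author's exact script):

import tensorflow as tf

n_stocks = 500   # number of input features (S&P 500 constituents)

# Placeholders: 2-D input matrix, 1-D target vector; None = flexible batch size.
X = tf.placeholder(dtype=tf.float32, shape=[None, n_stocks])
Y = tf.placeholder(dtype=tf.float32, shape=[None])

init = tf.variance_scaling_initializer()
layer_sizes = [1024, 512, 256, 128]

# Weight matrices: second dimension of the previous layer = first dimension of the next.
prev_size, hidden = n_stocks, X
for size in layer_sizes:
    W = tf.Variable(init([prev_size, size]))
    b = tf.Variable(tf.zeros([size]))            # bias length = neurons in this layer
    hidden = tf.nn.relu(tf.matmul(hidden, W) + b)
    prev_size = size

# Output layer: one prediction per observation.
W_out = tf.Variable(init([prev_size, 1]))
b_out = tf.Variable(tf.zeros([1]))
out = tf.transpose(tf.matmul(hidden, W_out) + b_out)

Note how each weight matrix's first dimension equals the previous layer's width — exactly the rule of thumb stated above.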
The cost function of the network is used to generate a measure of deviation between the network’s predictions and the actual observed training targets. For regression problems, the mean squared error (MSE) function is commonly used. MSE computes the average squared deviation between predictions and targets. Basically, any differentiable function can be implemented in order to compute a deviation measure between predictions and targets. However, the MSE exhibits certain properties that are advantageous for the general optimization problem to be solved. The optimizer takes care of the necessary computations that are used to adapt the network’s weight and bias variables during training. Those computations invoke the calculation of so-called gradients, which indicate the direction in which the weights and biases have to be changed during training in order to minimize the network’s cost function. The development of stable and speedy optimizers is a major field of neural network and deep learning research. Here the Adam optimizer is used, which is one of the current default optimizers in deep learning development. Adam stands for “Adaptive Moment Estimation” and can be considered a combination of two other popular optimizers, AdaGrad and RMSProp. Initializers are used to initialize the network’s variables before training. Since neural networks are trained using numerical optimization techniques, the starting point of the optimization problem is one of the key factors in finding good solutions to the underlying problem. There are different initializers available in TensorFlow, each with different initialization approaches. Here, I use the tf.variance_scaling_initializer(), which is one of the default initialization strategies. Note that with TensorFlow it is possible to define multiple initialization functions for different variables within the graph. However, in most cases, a unified initialization is sufficient. After having defined the placeholders, variables, initializers, cost functions and optimizers of the network, the model needs to be trained. Usually, this is done by minibatch training. During minibatch training, random data samples of n = batch_size are drawn from the training data and fed into the network. The training dataset gets divided into n / batch_size batches that are sequentially fed into the network. At this point the placeholders X and Y come into play. They store the input and target data and present them to the network as inputs and targets. A sampled data batch of X flows through the network until it reaches the output layer. There, TensorFlow compares the model's predictions against the actual observed targets Y in the current batch. Afterwards, TensorFlow conducts an optimization step and updates the network's parameters, corresponding to the selected learning scheme. After the weights and biases have been updated, the next batch is sampled and the process repeats itself. The procedure continues until all batches have been presented to the network. One full sweep over all batches is called an epoch. The training of the network stops once the maximum number of epochs is reached or another stopping criterion defined by the user applies. During training, we evaluate the network's predictions on the test set — the data which is not learned, but set aside — for every 5th batch and visualize it. Additionally, the images are exported to disk and later combined into a video animation of the training process (see below). 
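Putting the cost function, optimizer and minibatch loop together might look roughly as follows. This continues the model sketch above; the epoch count, batch size, synthetic stand-in arrays and shuffling scheme are illustrative assumptions, not the original script.

import numpy as np

mse = tf.reduce_mean(tf.squared_difference(out, Y))   # cost: mean squared error
opt = tf.train.AdamOptimizer().minimize(mse)          # optimizer: Adam

# Stand-ins for the scaled training/test arrays from the data preparation step.
X_train = np.random.rand(1000, n_stocks).astype(np.float32)
y_train = np.random.rand(1000).astype(np.float32)
X_test = np.random.rand(250, n_stocks).astype(np.float32)
y_test = np.random.rand(250).astype(np.float32)

epochs, batch_size = 10, 256
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())       # run the initializers
    for epoch in range(epochs):
        perm = np.random.permutation(len(y_train))    # draw random minibatches
        X_shuf, y_shuf = X_train[perm], y_train[perm]
        for i in range(0, len(y_shuf), batch_size):
            batch_x = X_shuf[i:i + batch_size]
            batch_y = y_shuf[i:i + batch_size]
            sess.run(opt, feed_dict={X: batch_x, Y: batch_y})
            if (i // batch_size) % 5 == 0:            # every 5th batch: check the test set
                test_mse = sess.run(mse, feed_dict={X: X_test, Y: y_test})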
The model quickly learns the shape und location of the time series in the test data and is able to produce an accurate prediction after some epochs. Nice! One can see that the networks rapidly adapts to the basic shape of the time series and continues to learn finer patterns of the data. This also corresponds to the Adam learning scheme that lowers the learning rate during model training in order not to overshoot the optimization minimum. After 10 epochs, we have a pretty close fit to the test data! The final test MSE equals 0.00078 (it is very low, because the target is scaled). The mean absolute percentage error of the forecast on the test set is equal to 5.31% which is pretty good. Note, that this is just a fit to the test data, no actual out of sample metrics in a real world scenario. Please note that there are tons of ways of further improving this result: design of layers and neurons, choosing different initialization and activation schemes, introduction of dropout layers of neurons, early stopping and so on. Furthermore, different types of deep learning models, such as recurrent neural networks might achieve better performance on this task. However, this is not the scope of this introductory post. The release of TensorFlow was a landmark event in deep learning research. Its flexibility and performance allows researchers to develop all kinds of sophisticated neural network architectures as well as other ML algorithms. However, flexibility comes at the cost of longer time-to-model cycles compared to higher level APIs such as Keras or MxNet. Nonetheless, I am sure that TensorFlow will make its way to the de-facto standard in neural network and deep learning development in research and practical applications. Many of our customers are already using TensorFlow or start developing projects that employ TensorFlow models. Also our data science consultants at STATWORX are heavily using TensorFlow for deep learning and neural net research and development. Let’s see what Google has planned for the future of TensorFlow. One thing that is missing, at least in my opinion, is a neat graphical user interface for designing and developing neural net architectures with TensorFlow backend. Maybe, this is something Google is already working on ;) If you have any comments or questions on my first Medium story, feel free to comment below! I will try to answer them. Also, feel free to use my code or share this story with your peers on social platforms of your choice. Update: I’ve added both the Python script as well as a (zipped) dataset to a Github repository. Feel free to clone and fork. Lastly, follow me on: Twitter | LinkedIn From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. CEO @ STATWORX. Doing data science, stats and ML for over a decade. Food, wine and cocktail enthusiast. Check our website: https://www.statworx.com Highlights from Machine Learning Research, Projects and Learning Materials. From and For ML Scientists, Engineers an Enthusiasts. " Max Pechyonkin,23K,8,https://medium.com/ai%C2%B3-theory-practice-business/understanding-hintons-capsule-networks-part-i-intuition-b4b559d1159b?source=tag_archive---------4----------------,Understanding Hinton’s Capsule Networks. Part I: Intuition.,"Part I: Intuition (you are reading it now)Part II: How Capsules WorkPart III: Dynamic Routing Between CapsulesPart IV: CapsNet Architecture Quick announcement about our new publication AI3. 
We are getting the best writers together to talk about the Theory, Practice, and Business of AI and machine learning. Follow it to stay up to date on the latest trends. Last week, Geoffrey Hinton and his team published two papers that introduced a completely new type of neural network based on so-called capsules. In addition to that, the team published an algorithm, called dynamic routing between capsules, that allows to train such a network. For everyone in the deep learning community, this is huge news, and for several reasons. First of all, Hinton is one of the founders of deep learning and an inventor of numerous models and algorithms that are widely used today. Secondly, these papers introduce something completely new, and this is very exciting because it will most likely stimulate additional wave of research and very cool applications. In this post, I will explain why this new architecture is so important, as well as intuition behind it. In the following posts I will dive into technical details. However, before talking about capsules, we need to have a look at CNNs, which are the workhorse of today’s deep learning. CNNs (convolutional neural networks) are awesome. They are one of the reasons deep learning is so popular today. They can do amazing things that people used to think computers would not be capable of doing for a long, long time. Nonetheless, they have their limits and they have fundamental drawbacks. Let us consider a very simple and non-technical example. Imagine a face. What are the components? We have the face oval, two eyes, a nose and a mouth. For a CNN, a mere presence of these objects can be a very strong indicator to consider that there is a face in the image. Orientational and relative spatial relationships between these components are not very important to a CNN. How do CNNs work? The main component of a CNN is a convolutional layer. Its job is to detect important features in the image pixels. Layers that are deeper (closer to the input) will learn to detect simple features such as edges and color gradients, whereas higher layers will combine simple features into more complex features. Finally, dense layers at the top of the network will combine very high level features and produce classification predictions. An important thing to understand is that higher-level features combine lower-level features as a weighted sum: activations of a preceding layer are multiplied by the following layer neuron’s weights and added, before being passed to activation nonlinearity. Nowhere in this setup there is pose (translational and rotational) relationship between simpler features that make up a higher level feature. CNN approach to solve this issue is to use max pooling or successive convolutional layers that reduce spacial size of the data flowing through the network and therefore increase the “field of view” of higher layer’s neurons, thus allowing them to detect higher order features in a larger region of the input image. Max pooling is a crutch that made convolutional networks work surprisingly well, achieving superhuman performance in many areas. But do not be fooled by its performance: while CNNs work better than any model before them, max pooling nonetheless is losing valuable information. 
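To make the stack just described concrete — convolutions detecting local features, max pooling shrinking the spatial size, dense layers combining high-level features at the top — here is a toy Keras sketch. The input shape, filter counts and 10-way output are illustrative assumptions, not a model from the capsule papers.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D((2, 2)),                 # spatial size halves; some information is discarded
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax'),      # e.g. 10 classes
])
model.summary()

Each MaxPooling2D layer halves the spatial resolution, which is exactly where the pose information Hinton worries about gets thrown away.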
Hinton himself stated that the fact that max pooling is working so well is a big mistake and a disaster: Of course, you can do away with max pooling and still get good results with traditional CNNs, but they still do not solve the key problem: In the example above, a mere presence of 2 eyes, a mouth and a nose in a picture does not mean there is a face, we also need to know how these objects are oriented relative to each other. Computer graphics deals with constructing a visual image from some internal hierarchical representation of geometric data. Note that the structure of this representation needs to take into account relative positions of objects. That internal representation is stored in computer’s memory as arrays of geometrical objects and matrices that represent relative positions and orientation of these objects. Then, special software takes that representation and converts it into an image on the screen. This is called rendering. Inspired by this idea, Hinton argues that brains, in fact, do the opposite of rendering. He calls it inverse graphics: from visual information received by eyes, they deconstruct a hierarchical representation of the world around us and try to match it with already learned patterns and relationships stored in the brain. This is how recognition happens. And the key idea is that representation of objects in the brain does not depend on view angle. So at this point the question is: how do we model these hierarchical relationships inside of a neural network? The answer comes from computer graphics. In 3D graphics, relationships between 3D objects can be represented by a so-called pose, which is in essence translation plus rotation. Hinton argues that in order to correctly do classification and object recognition, it is important to preserve hierarchical pose relationships between object parts. This is the key intuition that will allow you to understand why capsule theory is so important. It incorporates relative relationships between objects and it is represented numerically as a 4D pose matrix. When these relationships are built into internal representation of data, it becomes very easy for a model to understand that the thing that it sees is just another view of something that it has seen before. Consider the image below. You can easily recognize that this is the Statue of Liberty, even though all the images show it from different angles. This is because internal representation of the Statue of Liberty in your brain does not depend on the view angle. You have probably never seen these exact pictures of it, but you still immediately knew what it was. For a CNN, this task is really hard because it does not have this built-in understanding of 3D space, but for a CapsNet it is much easier because these relationships are explicitly modeled. The paper that uses this approach was able to cut error rate by 45% as compared to the previous state of the art, which is a huge improvement. Another benefit of the capsule approach is that it is capable of learning to achieve state-of-the art performance by only using a fraction of the data that a CNN would use (Hinton mentions this in his famous talk about what is wrongs with CNNs). In this sense, the capsule theory is much closer to what the human brain does in practice. In order to learn to tell digits apart, the human brain needs to see only a couple of dozens of examples, hundreds at most. 
CNNs, on the other hand, need tens of thousands of examples to achieve very good performance, which seems like a brute force approach that is clearly inferior to what we do with our brains. The idea is really simple, there is no way no one has come up with it before! And the truth is, Hinton has been thinking about this for decades. The reason why there were no publications is simply because there was no technical way to make it work before. One of the reasons is that computers were just not powerful enough in the pre-GPU-based era before around 2012. Another reason is that there was no algorithm that allowed to implement and successfully learn a capsule network (in the same fashion the idea of artificial neurons was around since 1940-s, but it was not until mid 1980-s when backpropagation algorithm showed up and allowed to successfully train deep networks). In the same fashion, the idea of capsules itself is not that new and Hinton has mentioned it before, but there was no algorithm up until now to make it work. This algorithm is called “dynamic routing between capsules”. This algorithm allows capsules to communicate with each other and create representations similar to scene graphs in computer graphics. Capsules introduce a new building block that can be used in deep learning to better model hierarchical relationships inside of internal knowledge representation of a neural network. Intuition behind them is very simple and elegant. Hinton and his team proposed a way to train such a network made up of capsules and successfully trained it on a simple data set, achieving state-of-the-art performance. This is very encouraging. Nonetheless, there are challenges. Current implementations are much slower than other modern deep learning models. Time will show if capsule networks can be trained quickly and efficiently. In addition, we need to see if they work well on more difficult data sets and in different domains. In any case, the capsule network is a very interesting and already working model which will definitely get more developed over time and contribute to further expansion of deep learning application domain. This concludes part one of the series on capsule networks. In the Part II, more technical part, I will walk you through the CapsNet’s internal workings step by step. You can follow me on Twitter. Let’s also connect on LinkedIn. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning The AI revolution is here! Navigate the ever changing industry with our thoughtfully written articles whether your a researcher, engineer, or entrepreneur " Slav Ivanov,3.9K,17,https://blog.slavv.com/the-1700-great-deep-learning-box-assembly-setup-and-benchmarks-148c5ebe6415?source=tag_archive---------5----------------,"The $1700 great Deep Learning box: Assembly, setup and benchmarks","Updated April 2018: Uses CUDA 9, cuDNN 7 and Tensorflow 1.5. After years of using a thin client in the form of increasingly thinner MacBooks, I had gotten used to it. So when I got into Deep Learning (DL), I went straight for the brand new at the time Amazon P2 cloud servers. No upfront cost, the ability to train many models simultaneously and the general coolness of having a machine learning model out there slowly teaching itself. However, as time passed, the AWS bills steadily grew larger, even as I switched to 10x cheaper Spot instances. Also, I didn’t find myself training more than one model at a time. Instead, I’d go to lunch/workout/etc. 
while the model was training, and come back later with a clear head to check on it. But eventually the model complexity grew and took longer to train. I’d often forget what I did differently on the model that had just completed its 2-day training. Nudged by the great experiences of the other folks on the Fast.AI Forum, I decided to settle down and to get a dedicated DL box at home. The most important reason was saving time while prototyping models — if they trained faster, the feedback time would be shorter. Thus it would be easier for my brain to connect the dots between the assumptions I had for the model and its results. Then I wanted to save money — I was using Amazon Web Services (AWS), which offered P2 instances with Nvidia K80 GPUs. Lately, the AWS bills were around $60–70/month with a tendency to get larger. Also, it is expensive to store large datasets, like ImageNet. And lastly, I haven’t had a desktop for over 10 years and wanted to see what has changed in the meantime (spoiler alert: mostly nothing). What follows are my choices, inner monologue, and gotchas: from choosing the components to benchmarking. A sensible budget for me would be about 2 years worth of my current compute spending. At $70/month for AWS, this put it at around $1700 for the whole thing. You can check out all the components used. The PC Part Picker site is also really helpful in detecting if some of the components don’t play well together. The GPU is the most crucial component in the box. It will train these deep networks fast, shortening the feedback cycle. Disclosure: The following are affiliate links, to help me pay for, well, more GPUs. The choice is between a few of Nvidia’s cards: GTX 1070, GTX 1070 Ti, GTX 1080, GTX 1080 Ti and finally the Titan X. The prices might fluctuate, especially because some GPUs are great for cryptocurrency mining (wink, 1070, wink). On performance side: GTX 1080 Ti and Titan X are similar. Roughly speaking the GTX 1080 is about 25% faster than GTX 1070. And GTX 1080 Ti is about 30% faster than GTX 1080. The new GTX 1070 Ti is very close in performance to GTX 1080. Tim Dettmers has a great article on picking a GPU for Deep Learning, which he regularly updates as new cards come on the market. Here are the things to consider when picking a GPU: Considering all of this, I picked the GTX 1080 Ti, mainly for the training speed boost. I plan to add a second 1080 Ti soonish. Even though the GPU is the MVP in deep learning, the CPU still matters. For example, data preparation is usually done on the CPU. The number of cores and threads per core is important if we want to parallelize all that data prep. To stay on budget, I picked a mid-range CPU, the Intel i5 7500. It’s relatively cheap but good enough to not slow things down. Edit: As a few people have pointed out: “probably the biggest gotcha that is unique to DL/multi-GPU is to pay attention to the PCIe lanes supported by the CPU/motherboard” (by Andrej Karpathy). We want to have each GPU have 16 PCIe lanes so it eats data as fast as possible (16 GB/s for PCIe 3.0). This means that for two cards we need 32 PCIe lanes. However, the CPU I have picked has only 16 lanes. So 2 GPUs would run in 2x8 mode (instead of 2x16). This might be a bottleneck, leading to less than ideal utilization of the graphics cards. Thus a CPU with 40 lines is recommended. Edit 2: However, Tim Dettmers points out that having 8 lanes per card should only decrease performance by “0–10%” for two GPUs. 
So currently, my recommendation is: Go with 16 PCIe lanes per video card unless it gets too expensive for you. Otherwise, 8 lanes should do as well. A good solution with to have for a double GPU machine would be an Intel Xeon processor like the E5–1620 v4 (40 PCIe lanes). Or if you want to splurge go for a higher end processor like the desktop i7–6850K. Memory (RAM) It’s nice to have a lot of memory if we are to be working with rather big datasets. I got 2 sticks of 16 GB, for a total of 32 GB of RAM, and plan to buy another 32 GB later. Following Jeremy Howard’s advice, I got a fast SSD disk to keep my OS and current data on, and then a slow spinning HDD for those huge datasets (like ImageNet).SSD: I remember when I got my first Macbook Air years ago, how blown away was I by the SSD speed. To my delight, a new generation of SSD called NVMe has made its way to market in the meantime. A 480 GB MyDigitalSSD NVMe drive was a great deal. This baby copies files at gigabytes per second. HDD: 2 TB Seagate. While SSDs have been getting fast, HDD have been getting cheap. To somebody who has used Macbooks with 128 GB disk for the last 7 years, having this much space feels almost obscene. The one thing that I kept in mind when picking a motherboard was the ability to support two GTX 1080 Ti, both in the number of PCI Express Lanes (the minimum is 2x8) and the physical size of 2 cards. Also, make sure it’s compatible with the chosen CPU. An Asus TUF Z270 did it for me. MSI — X99A SLI PLUS should work great if you got an Intel Xeon CPU. Rule of thumb: Power supply should provide enough juice for the CPU and the GPUs, plus 100 watts extra. The Intel i5 7500 processor uses 65W, and the GPUs (1080 Ti) need 250W each, so I got a Deepcool 750W Gold PSU (currently unavailable, EVGA 750 GQ is similar). The “Gold” here refers to the power efficiency, i.e how much of the power consumed is wasted as heat. The case should be the same form factor as the motherboard. Also having enough LEDs to embarrass a Burner is a bonus. A friend recommended the Thermaltake N23 case, which I promptly got. No LEDs sadly. Here is how much I spent on all the components (your costs may vary): $700 GTX 1080 Ti + $190 CPU + $230 RAM + $230 SSD + $66 HDD + $130 Motherboard + $75 PSU + $50 Case ============$1671 Total Adding tax and fees, this nicely matches my preset budget of $1700. If you don’t have much experience with hardware and fear you might break something, a professional assembly might be the best option. However, this was a great learning opportunity that I couldn’t pass (even though I’ve had my share of hardware-related horror stories). The first and important step is to read the installation manuals that came with each component. Especially important for me, as I’ve done this before once or twice, and I have just the right amount of inexperience to mess things up. This is done before installing the motherboard in the case. Next to the processor there is a lever that needs to be pulled up. The processor is then placed on the base (double-check the orientation). Finally, the lever comes down to fix the CPU in place. . . But I had a quite the difficulty doing this: once the CPU was in position the lever wouldn’t go down. I actually had a more hardware-capable friend of mine video walk me through the process. Turns out the amount of force required to get the lever locked down was more than what I was comfortable with. Next is fixing the fan on top of the CPU: the fan legs must be fully secured to the motherboard. 
Consider where the fan cable will go before installing. The processor I had came with thermal paste. If yours doesn’t, make sure to put some paste between the CPU and the cooling unit. Also, replace the paste if you take off the fan. I put the Power Supply Unit (PSU) in before the motherboard to get the power cables snugly placed in case back side. . . . . Pretty straight forward — carefully place it and screw it in. A magnetic screwdriver was really helpful. Then connect the power cables and the case buttons and LEDs. . Just slide it in the M2 slot and screw it in. Piece of cake. The memory proved quite hard to install, requiring too much effort to properly lock in. A few times I almost gave up, thinking I must be doing it wrong. Eventually one of the sticks clicked in and the other one promptly followed. At this point, I turned the computer on to make sure it works. To my relief, it started right away! Finally, the GPU slid in effortlessly. 14 pins of power later and it was running. NB: Do not plug your monitor in the external card right away. Most probably it needs drivers to function (see below). Finally, it’s complete! Now that we have the hardware in place, only the soft part remains. Out with the screwdriver, in with the keyboard. Note on dual booting: If you plan to install Windows (because, you know, for benchmarks, totally not for gaming), it would be wise to do Windows first and Linux second. I didn’t and had to reinstall Ubuntu because Windows messed up the boot partition. Livewire has a detailed article on dual boot. Most DL frameworks are designed to work on Linux first, and eventually support other operating systems. So I went for Ubuntu, my default Linux distribution. An old 2GB USB drive was laying around and worked great for the installation. UNetbootin (OSX) or Rufus (Windows) can prepare the Linux thumb drive. The default options worked fine during the Ubuntu install. At the time of writing, Ubuntu 17.04 was just released, so I opted for the previous version (16.04), whose quirks are much better documented online. Ubuntu Server or Desktop: The Server and Desktop editions of Ubuntu are almost identical, with the notable exception of the visual interface (called X) not being installed with Server. I installed the Desktop and disabled autostarting X so that the computer would boot it in terminal mode. If needed, one could launch the visual desktop later by typing startx. Let’s get our install up to date. From Jeremy Howard’s excellent install-gpu script: To deep learn on our machine, we need a stack of technologies to use our GPU: Download CUDA from Nvidia, or just run the code below: Updated to specify version 9 of CUDA. Thanks to @zhanwenchen for the tip. If you need to add later versions of CUDA, click here. After CUDA has been installed the following code will add the CUDA installation to the PATH variable: Now we can verify that CUDA has been installed successfully by running This should have installed the display driver as well. For me, nvidia-smi showed ERR as the device name, so I installed the latest Nvidia drivers (as of May 2018) to fix it: Removing CUDA/Nvidia drivers If at any point the drivers or CUDA seem broken (as they did for me — multiple times), it might be better to start over by running: Since version 1.5 Tensorflow supports CuDNN 7, so we install that. To download CuDNN, one needs to register for a (free) developer account. After downloading, install with the following: Anaconda is a great package manager for python. 
I’ve moved to python 3.6, so will be using the Anaconda 3 version: The popular DL framework by Google. Installation: Validate Tensorfow install: To make sure we have our stack running smoothly, I like to run the tensorflow MNIST example: We should see the loss decreasing during training: Keras is a great high-level neural networks framework, an absolute pleasure to work with. Installation can’t be easier too: PyTorch is a newcomer in the world of DL frameworks, but its API is modeled on the successful Torch, which was written in Lua. PyTorch feels new and exciting, mostly great, although some things are still to be implemented. We install it by running: Jupyter is a web-based IDE for Python, which is ideal for data sciency tasks. It’s installed with Anaconda, so we just configure and test it: Now if we open http://localhost:8888 we should see a Jupyter screen. Run Jupyter on boot Rather than running the notebook every time the computer is restarted, we can set it to autostart on boot. We will use crontab to do this, which we can edit by running crontab -e . Then add the following after the last line in the crontab file: I use my old trusty Macbook Air for development, so I’d like to be able to log into the DL box both from my home network, also when on the run. SSH Key: It’s way more secure to use a SSH key to login instead of a password. Digital Ocean has a great guide on how to setup this. SSH tunnel: If you want to access your jupyter notebook from another computer, the recommended way is to use SSH tunneling (instead of opening the notebook to the world and protecting with a password). Let’s see how we can do this: 2. Then to connect over SSH tunnel, run the following script on the client: To test this, open a browser and try http://localhost:8888 from the remote machine. Your Jupyter notebook should appear. Setup out-of-network access: Finally to access the DL box from the outside world, we need 3 things: Setting up out-of-network access depends on the router/network setup, so I’m not going into details. Now that we have everything running smoothly, let’s put it to the test. We’ll be comparing the newly built box to an AWS P2.xlarge instance, which is what I’ve used so far for DL. The tests are computer vision related, meaning convolutional networks with a fully connected model thrown in. We time training models on: AWS P2 instance GPU (K80), AWS P2 virtual CPU, the GTX 1080 Ti and Intel i5 7500 CPU. Andres Hernandez points out that my comparison does not use Tensorflow that is optimized for these CPUs, which would have helped the them perform better. Check his insightful comment for more details. The “Hello World” of computer vision. The MNIST database consists of 70,000 handwritten digits. We run the Keras example on MNIST which uses Multilayer Perceptron (MLP). The MLP means that we are using only fully connected layers, not convolutions. The model is trained for 20 epochs on this dataset, which achieves over 98% accuracy out of the box. We see that the GTX 1080 Ti is 2.4 times faster than the K80 on AWS P2 in training the model. This is rather surprising as these 2 cards should have about the same performance. I believe this is because of the virtualization or underclocking of the K80 on AWS. The CPUs perform 9 times slower than the GPUs. As we will see later, it’s a really good result for the processors. This is due to the small model which fails to fully utilize the parallel processing power of the GPUs. 
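For reference, the model being timed here is the stock Keras MNIST MLP example; a condensed version with a timing wrapper looks roughly like this (the layer sizes, dropout and optimizer follow the upstream Keras example, and the timing code is my addition):

import time
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

model = Sequential([
    Dense(512, activation='relu', input_shape=(784,)),
    Dropout(0.2),
    Dense(512, activation='relu'),
    Dropout(0.2),
    Dense(10, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

start = time.time()
model.fit(x_train, y_train, batch_size=128, epochs=20, validation_data=(x_test, y_test))
print('Wall time: %.1f s' % (time.time() - start))

Running the same script on each machine and comparing wall time is what produces the numbers discussed here.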
Interestingly, the desktop Intel i5–7500 achieves 2.3x speedup over the virtual CPU on Amazon. A VGG net will be finetuned for the Kaggle Dogs vs Cats competition. In this competition, we need to tell apart pictures of dogs and cats. Running the model on CPUs for the same number of batches wasn’t feasible. Therefore we finetune for 390 batches (1 epoch) on the GPUs and 10 batches on the CPUs. The code used is on github. The 1080 Ti is 5.5 times faster that the AWS GPU (K80). The difference in the CPUs performance is about the same as the previous experiment (i5 is 2.6x faster). However, it’s absolutely impractical to use CPUs for this task, as the CPUs were taking ~200x more time on this large model that includes 16 convolutional layers and a couple semi-wide (4096) fully connected layers on top. A GAN (Generative adversarial network) is a way to train a model to generate images. GAN achieves this by pitting two networks against each other: A Generator which learns to create better and better images, and a Discriminator that tries to tell which images are real and which are dreamt up by the Generator. The Wasserstein GAN is an improvement over the original GAN. We will use a PyTorch implementation, that is very similar to the one by the WGAN author. The models are trained for 50 steps, and the loss is all over the place which is often the case with GANs. CPUs aren’t considered. The GTX 1080 Ti finishes 5.5x faster than the AWS P2 K80, which is in line with the previous results. The final benchmark is on the original Style Transfer paper (Gatys et al.), implemented on Tensorflow (code available). Style Transfer is a technique that combines the style of one image (a painting for example) and the content of another image. Check out my previous post for more details on how Style Transfer works. The GTX 1080 Ti outperforms the AWS K80 by a factor of 4.3. This time the CPUs are 30-50 times slower than graphics cards. The slowdown is less than on the VGG Finetuning task but more than on the MNIST Perceptron experiment. The model uses mostly the earlier layers of the VGG network, and I suspect this was too shallow to fully utilize the GPUs. The DL box is in the next room and a large model is training on it. Was it a wise investment? Time will tell but it is beautiful to watch the glowing LEDs in the dark and to hear its quiet hum as models are trying to squeeze out that extra accuracy percentage point. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Entrepreneur / Hacker Machine learning, Deep learning and other types of learning. " Stefan Kojouharov,14.2K,7,https://becominghuman.ai/cheat-sheets-for-ai-neural-networks-machine-learning-deep-learning-big-data-678c51b4b463?source=tag_archive---------6----------------,"Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Big Data","Over the past few months, I have been collecting AI cheat sheets. From time to time I share them with friends and colleagues and recently I have been getting asked a lot, so I decided to organize and share the entire collection. To make things more interesting and give context, I added descriptions and/or excerpts for each major topic. This is the most complete list and the Big-O is at the very end, enjoy... This machine learning cheat sheet will help you find the right estimator for the job which is the most difficult part. 
The flowchart will help you check the documentation and rough guide of each estimator that will help you to know more about the problems and how to solve it. Scikit-learn (formerly scikits.learn) is a free software machine learning library for the Python programming language. It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy. In May 2017 Google announced the second-generation of the TPU, as well as the availability of the TPUs in Google Compute Engine.[12] The second-generation TPUs deliver up to 180 teraflops of performance, and when organized into clusters of 64 TPUs provide up to 11.5 petaflops. In 2017, Google’s TensorFlow team decided to support Keras in TensorFlow’s core library. Chollet explained that Keras was conceived to be an interface rather than an end-to-end machine-learning framework. It presents a higher-level, more intuitive set of abstractions that make it easy to configure neural networks regardless of the backend scientific computing library. NumPy targets the CPython reference implementation of Python, which is a non-optimizing bytecode interpreter. Mathematical algorithms written for this version of Python often run much slower than compiled equivalents. NumPy address the slowness problem partly by providing multidimensional arrays and functions and operators that operate efficiently on arrays, requiring rewriting some code, mostly inner loops using NumPy. The name ‘Pandas’ is derived from the term “panel data”, an econometrics term for multidimensional structured data sets. The term “data wrangler” is starting to infiltrate pop culture. In the 2017 movie Kong: Skull Island, one of the characters, played by actor Marc Evan Jackson is introduced as “Steve Woodward, our data wrangler”. SciPy builds on the NumPy array object and is part of the NumPy stack which includes tools like Matplotlib, pandas and SymPy, and an expanding set of scientific computing libraries. This NumPy stack has similar users to other applications such as MATLAB, GNU Octave, and Scilab. The NumPy stack is also sometimes referred to as the SciPy stack.[3] matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+. There is also a procedural “pylab” interface based on a state machine (like OpenGL), designed to closely resemble that of MATLAB, though its use is discouraged.[2] SciPy makes use of matplotlib. pyplot is a matplotlib module which provides a MATLAB-like interface.[6] matplotlib is designed to be as usable as MATLAB, with the ability to use Python, with the advantage that it is free. >>> If you like this list, you can let me know here. <<< Stefan is the founder of Chatbot’s Life, a Chatbot media and consulting firm. Chatbot’s Life has grown to over 150k views per month and has become the premium place to learn about Bots & AI online. Chatbot’s Life has also consulted many of the top Bot companies like Swelly, Instavest, OutBrain, NearGroup and a number of Enterprises. 
Big-O Algorithm Cheat Sheet: http://bigocheatsheet.com/ Bokeh Cheat Sheet: https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Bokeh_Cheat_Sheet.pdf Data Science Cheat Sheet: https://www.datacamp.com/community/tutorials/python-data-science-cheat-sheet-basics Data Wrangling Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf Data Wrangling: https://en.wikipedia.org/wiki/Data_wrangling Ggplot Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf Keras Cheat Sheet: https://www.datacamp.com/community/blog/keras-cheat-sheet#gs.DRKeNMs Keras: https://en.wikipedia.org/wiki/Keras Machine Learning Cheat Sheet: https://ai.icymi.email/new-machinelearning-cheat-sheet-by-emily-barry-abdsc/ Machine Learning Cheat Sheet: https://docs.microsoft.com/en-in/azure/machine-learning/machine-learning-algorithm-cheat-sheet ML Cheat Sheet:: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html Matplotlib Cheat Sheet: https://www.datacamp.com/community/blog/python-matplotlib-cheat-sheet#gs.uEKySpY Matpotlib: https://en.wikipedia.org/wiki/Matplotlib Neural Networks Cheat Sheet: http://www.asimovinstitute.org/neural-network-zoo/ Neural Networks Graph Cheat Sheet: http://www.asimovinstitute.org/blog/ Neural Networks: https://www.quora.com/Where-can-find-a-cheat-sheet-for-neural-network Numpy Cheat Sheet: https://www.datacamp.com/community/blog/python-numpy-cheat-sheet#gs.AK5ZBgE NumPy: https://en.wikipedia.org/wiki/NumPy Pandas Cheat Sheet: https://www.datacamp.com/community/blog/python-pandas-cheat-sheet#gs.oundfxM Pandas: https://en.wikipedia.org/wiki/Pandas_(software) Pandas Cheat Sheet: https://www.datacamp.com/community/blog/pandas-cheat-sheet-python#gs.HPFoRIc Pyspark Cheat Sheet: https://www.datacamp.com/community/blog/pyspark-cheat-sheet-python#gs.L=J1zxQ Scikit Cheat Sheet: https://www.datacamp.com/community/blog/scikit-learn-cheat-sheet Scikit-learn: https://en.wikipedia.org/wiki/Scikit-learn Scikit-learn Cheat Sheet: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html Scipy Cheat Sheet: https://www.datacamp.com/community/blog/python-scipy-cheat-sheet#gs.JDSg3OI SciPy: https://en.wikipedia.org/wiki/SciPy TesorFlow Cheat Sheet: https://www.altoros.com/tensorflow-cheat-sheet.html Tensor Flow: https://en.wikipedia.org/wiki/TensorFlow From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder of Chatbots Life. I help Companies Create Great Chatbots & AI Systems and share my Insights along the way. Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity. " Vishal Maini,8K,13,https://medium.com/machine-learning-for-humans/supervised-learning-740383a2feab?source=tag_archive---------7----------------,"Machine Learning for Humans, Part 2.1: Supervised Learning","How much money will we make by spending more dollars on digital advertising? Will this loan applicant pay back the loan or not? What’s going to happen to the stock market tomorrow? In supervised learning problems, we start with a data set containing training examples with associated correct labels. For example, when learning to classify handwritten digits, a supervised learning algorithm takes thousands of pictures of handwritten digits along with labels containing the correct number each image represents. 
The algorithm will then learn the relationship between the images and their associated numbers, and apply that learned relationship to classify completely new images (without labels) that the machine hasn’t seen before. This is how you’re able to deposit a check by taking a picture with your phone! To illustrate how supervised learning works, let’s examine the problem of predicting annual income based on the number of years of higher education someone has completed. Expressed more formally, we’d like to build a model that approximates the relationship f between the number of years of higher education X and corresponding annual income Y. One method for predicting income would be to create a rigid rules-based model for how income and education are related. For example: “I’d estimate that for every additional year of higher education, annual income increases by $5,000.” You could come up with a more complex model by including some rules about degree type, years of work experience, school tiers, etc. For example: “If they completed a Bachelor’s degree or higher, give the income estimate a 1.5x multiplier.” But this kind of explicit rules-based programming doesn’t work well with complex data. Imagine trying to design an image classification algorithm made of if-then statements describing the combinations of pixel brightnesses that should be labeled “cat” or “not cat”. Supervised machine learning solves this problem by getting the computer to do the work for you. By identifying patterns in the data, the machine is able to form heuristics. The primary difference between this and human learning is that machine learning runs on computer hardware and is best understood through the lens of computer science and statistics, whereas human pattern-matching happens in a biological brain (while accomplishing the same goals). In supervised learning, the machine attempts to learn the relationship between income and education from scratch, by running labeled training data through a learning algorithm. This learned function can be used to estimate the income of people whose income Y is unknown, as long as we have years of education X as inputs. In other words, we can apply our model to the unlabeled test data to estimate Y. The goal of supervised learning is to predict Y as accurately as possible when given new examples where X is known and Y is unknown. In what follows we’ll explore several of the most common approaches to doing so. The rest of this section will focus on regression. In Part 2.2 we’ll dive deeper into classification methods. Regression predicts a continuous target variable Y. It allows you to estimate a value, such as housing prices or human lifespan, based on input data X. Here, target variable means the unknown variable we care about predicting, and continuous means there aren’t gaps (discontinuities) in the value that Y can take on. A person’s weight and height are continuous values. Discrete variables, on the other hand, can only take on a finite number of values — for example, the number of kids somebody has is a discrete variable. Predicting income is a classic regression problem. Your input data X includes all relevant information about individuals in the data set that can be used to predict income, such as years of education, years of work experience, job title, or zip code. These attributes are called features, which can be numerical (e.g. years of work experience) or categorical (e.g. job title or field of study). 
You’ll want as many training observations as possible relating these features to the target output Y, so that your model can learn the relationship f between X and Y. The data is split into a training data set and a test data set. The training set has labels, so your model can learn from these labeled examples. The test set does not have labels, i.e. you don’t yet know the value you’re trying to predict. It’s important that your model can generalize to situations it hasn’t encountered before so that it can perform well on the test data. In our trivially simple 2D example, this could take the form of a .csv file where each row contains a person’s education level and income. Add more columns with more features and you’ll have a more complex, but possibly more accurate, model. How do we build models that make accurate, useful predictions in the real world? We do so by using supervised learning algorithms. Now let’s get to the fun part: getting to know the algorithms. We’ll explore some of the ways to approach regression and classification and illustrate key machine learning concepts throughout. “Draw the line. Yes, this counts as machine learning.” First, we’ll focus on solving the income prediction problem with linear regression, since linear models don’t work well with image recognition tasks (this is the domain of deep learning, which we’ll explore later). We have our data set X, and corresponding target values Y. The goal of ordinary least squares (OLS) regression is to learn a linear model that we can use to predict a new y given a previously unseen x with as little error as possible. We want to guess how much income someone earns based on how many years of education they received. Linear regression is a parametric method, which means it makes an assumption about the form of the function relating X and Y (we’ll cover examples of non-parametric methods later). Our model will be a function that predicts ŷ given a specific x: β0 is the y-intercept and β1 is the slope of our line, i.e. how much income increases (or decreases) with one additional year of education. Our goal is to learn the model parameters (in this case, β0 and β1) that minimize error in the model’s predictions. To find the best parameters: Graphically, in two dimensions, this results in a line of best fit. In three dimensions, we would draw a plane, and so on with higher-dimensional hyperplanes. Mathematically, we look at the difference between each real data point (y) and our model’s prediction (ŷ). Square these differences to avoid negative numbers and penalize larger differences, and then add them up and take the average. This is a measure of how well our data fits the line. For a simple problem like this, we can compute a closed form solution using calculus to find the optimal beta parameters that minimize our loss function. But as a cost function grows in complexity, finding a closed form solution with calculus is no longer feasible. This is the motivation for an iterative approach called gradient descent, which allows us to minimize a complex loss function. “Put on a blindfold, take a step downhill. You’ve found the bottom when you have nowhere to go but up.” Gradient descent will come up over and over again, especially in neural networks. Machine learning libraries like scikit-learn and TensorFlow use it in the background everywhere, so it’s worth understanding the details. The goal of gradient descent is to find the minimum of our model’s loss function by iteratively getting a better and better approximation of it. 
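Here is a minimal NumPy sketch of gradient descent for this two-parameter model. The education/income numbers are made up for illustration, and the learning rate and iteration count are arbitrary; the valley-walking intuition behind these updates is developed just below.

import numpy as np

# Made-up toy data: years of higher education (x) vs. annual income in dollars (y).
x = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([30, 36, 41, 47, 55, 58, 66], dtype=float) * 1000

b0, b1 = 0.0, 0.0      # initial guesses for the intercept and slope
lr = 0.01              # learning rate: how big a step we take downhill

for _ in range(10000):
    y_hat = b0 + b1 * x
    error = y_hat - y
    grad_b0 = 2 * error.mean()          # partial derivative of the MSE w.r.t. b0
    grad_b1 = 2 * (error * x).mean()    # partial derivative of the MSE w.r.t. b1
    b0 -= lr * grad_b0                  # step opposite to the gradient
    b1 -= lr * grad_b1

print(b0, b1)                 # learned intercept and slope
print(np.polyfit(x, y, 1))    # closed-form fit for comparison: [slope, intercept]

The closed-form fit from np.polyfit should agree, up to small numerical error, with the values that gradient descent converges to.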
Imagine yourself walking through a valley with a blindfold on. Your goal is to find the bottom of the valley. How would you do it? A reasonable approach would be to touch the ground around you and move in whichever direction the ground slopes down most steeply. Take a step and repeat the same process continually until the ground is flat. Then you know you’ve reached the bottom of a valley; if you move in any direction from where you are, you’ll end up at the same elevation or further uphill. Going back to mathematics, the ground becomes our loss function, and the elevation at the bottom of the valley is the minimum of that function. Let’s take a look at the loss function we saw in regression, the average of the squared differences between each real value y and the model’s prediction ŷ = β0 + β1x: Loss(β0, β1) = (1/n) · Σ (y − ŷ)². We see that this is really a function of two variables: β0 and β1. All the rest of the variables are determined, since X, Y, and n are given during training. We want to try to minimize this function. The function is f(β0,β1)=z. To begin gradient descent, you make an initial guess of the parameters β0 and β1. Next, you find the partial derivatives of the loss function with respect to each beta parameter: [dz/dβ0, dz/dβ1]. A partial derivative indicates how much total loss is increased or decreased if you increase β0 or β1 by a very small amount. Put another way: how much would increasing your estimate of annual income assuming zero higher education (β0) increase the loss (i.e. inaccuracy) of your model? You want to go in the opposite direction, so that you end up walking downhill and minimizing loss. Similarly, if you increase your estimate of how much each incremental year of education affects income (β1), how much does this increase loss (z)? If the partial derivative dz/dβ1 is a negative number, then increasing β1 is good because it will reduce total loss. If it’s a positive number, you want to decrease β1. If it’s zero, don’t change β1, because it means you’ve reached an optimum. Keep doing that until you reach the bottom, i.e. until the algorithm has converged and loss has been minimized. There are lots of tricks and exceptional cases beyond the scope of this series, but generally, this is how you find the optimal parameters for your parametric model. Overfitting: “Sherlock, your explanation of what just happened is too specific to the situation.” Regularization: “Don’t overcomplicate things, Sherlock. I’ll punch you for every extra word.” Hyperparameter (λ): “Here’s the strength with which I will punch you for every extra word.” A common problem in machine learning is overfitting: learning a function that perfectly explains the training data the model learned from, but doesn’t generalize well to unseen test data. Overfitting happens when a model overlearns from the training data to the point that it starts picking up idiosyncrasies that aren’t representative of patterns in the real world. This becomes especially problematic as you make your model increasingly complex. Underfitting is the related issue of a model that is not complex enough to capture the underlying trend in the data. Remember that the only thing we care about is how the model performs on test data. You want to predict which emails will be marked as spam before they’re marked, not just build a model that is 100% accurate at reclassifying the emails it used to build itself in the first place. Hindsight is 20/20 — the real question is whether the lessons learned will help in the future. The model on the right has zero loss for the training data because it perfectly fits every data point.
But the lesson doesn’t generalize. It would do a horrible job at explaining a new data point that isn’t yet on the line. Two ways to combat overfitting: 1. Use more training data. The more you have, the harder it is to overfit the data by learning too much from any single training example. 2. Use regularization. Add in a penalty in the loss function for building a model that assigns too much explanatory power to any one feature or allows too many features to be taken into account. The first piece of the sum above is our normal cost function. The second piece is a regularization term that adds a penalty for large beta coefficients that give too much explanatory power to any specific feature. With these two elements in place, the cost function now balances between two priorities: explaining the training data and preventing that explanation from becoming overly specific. The lambda coefficient of the regularization term in the cost function is a hyperparameter: a general setting of your model that can be increased or decreased (i.e. tuned) in order to improve performance. A higher lambda value will more harshly penalize large beta coefficients that could lead to potential overfitting. To decide the best value of lambda, you’d use a method called cross-validation which involves holding out a portion of the training data during training, and then seeing how well your model explains the held-out portion. We’ll go over this in more depth Here’s what we covered in this section: In the next section — Part 2.2: Supervised Learning II — we’ll talk about two foundational methods of classification: logistic regression and support vector machines. For a more thorough treatment of linear regression, read chapters 1–3 of An Introduction to Statistical Learning. The book is available for free online and is an excellent resource for understanding machine learning concepts with accompanying exercises. For more practice: To actually implement gradient descent in Python, check out this tutorial. And here is a more mathematically rigorous description of the same concepts. In practice, you’ll rarely need to implement gradient descent from scratch, but understanding how it works behind the scenes will allow you to use it more effectively and understand why things break when they do. More from Machine Learning for Humans 🤖👶 From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Research comms @DeepMindAI. Previously @Upstart, @Yale, @TrueVenturesTEC. Demystifying artificial intelligence & machine learning. Discussions on safe and intentional application of AI for positive social impact. " Arvind N,9.5K,8,https://towardsdatascience.com/thoughts-after-taking-the-deeplearning-ai-courses-8568f132153?source=tag_archive---------8----------------,Thoughts after taking the Deeplearning.ai courses – Towards Data Science,"[Update — Feb 2nd 2018: When this blog post was written, only 3 courses had been released. All 5 courses in this specialization are now out. I will have a follow-up blog post soon.] Between a full time job and a toddler at home, I spend my spare time learning about the ideas in cognitive science & AI. Once in a while a great paper/video/course comes out and you’re instantly hooked. Andrew Ng’s new deeplearning.ai course is like that Shane Carruth or Rajnikanth movie that one yearns for! Naturally, as soon as the course was released on coursera, I registered and spent the past 4 evenings binge watching the lectures, working through quizzes and programming assignments. 
DL practitioners and ML engineers typically spend most days working at an abstract Keras or TensorFlow level. But it’s nice to take a break once in a while to get down to the nuts and bolts of learning algorithms and actually do back-propagation by hand. It is both fun and incredibly useful! Andrew Ng’s new adventure is a bottom-up approach to teaching neural networks — powerful non-linearity learning algorithms, at a beginner-mid level. In classic Ng style, the course is delivered through a carefully chosen curriculum, neatly timed videos and precisely positioned information nuggets. Andrew picks up from where his classic ML course left off and introduces the idea of neural networks using a single neuron(logistic regression) and slowly adding complexity — more neurons and layers. By the end of the 4 weeks(course 1), a student is introduced to all the core ideas required to build a dense neural network such as cost/loss functions, learning iteratively using gradient descent and vectorized parallel python(numpy) implementations. Andrew patiently explains the requisite math and programming concepts in a carefully planned order and a well regulated pace suitable for learners who could be rusty in math/coding. Lectures are delivered using presentation slides on which Andrew writes using digital pens. It felt like an effective way to get the listener to focus. I felt comfortable watching videos at 1.25x or 1.5x speed. Quizzes are placed at the end of each lecture sections and are in the multiple choice question format. If you watch the videos once, you should be able to quickly answer all the quiz questions. You can attempt quizzes multiple times and the system is designed to keep your highest score. Programming assignments are done via Jupyter notebooks — powerful browser based applications. Assignments have a nice guided sequential structure and you are not required to write more than 2–3 lines of code in each section. If you understand the concepts like vectorization intuitively, you can complete most programming sections with just 1 line of code! After the assignment is coded, it takes 1 button click to submit your code to the automated grading system which returns your score in a few minutes. Some assignments have time restrictions — say, three attempts in 8 hours etc. Jupyter notebooks are well designed and work without any issues. Instructions are precise and it feels like a polished product. Anyone interested in understanding what neural networks are, how they work, how to build them and the tools available to bring your ideas to life. If your math is rusty, there is no need to worry — Andrew explains all the required calculus and provides derivatives at every occasion so that you can focus on building the network and concentrate on implementing your ideas in code. If your programming is rusty, there is a nice coding assignment to teach you numpy. But I recommend learning python first on codecademy. Let me explain this with an analogy: Assume you are trying to learn how to drive a car. Jeremy’s FAST.AI course puts you in the drivers seat from the get-go. He teaches you to move the steering wheel, press the brake, accelerator etc. Then he slowly explains more details about how the car works — why rotating the wheel makes the car turn, why pressing the brake pedal makes you slow down and stop etc. He keeps getting deeper into the inner workings of the car and by the end of the course, you know how the internal combustion engine works, how the fuel tank is designed etc. 
The goal of the course is to get you driving. You can choose to stop at any point after you can drive reasonably well — there is no need to learn how to build/repair the car. Andrew’s DL course does all of this, but in the complete opposite order. He teaches you about internal combustion engine first! He keeps adding layers of abstraction and by the end of the course you are driving like an F1 racer! The fast AI course mainly teaches you the art of driving while Andrew’s course primarily teaches you the engineering behind the car. If you have not done any machine learning before this, don’t take this course first. The best starting point is Andrew’s original ML course on coursera. After you complete that course, please try to complete part-1 of Jeremy Howard’s excellent deep learning course. Jeremy teaches deep learning Top-Down which is essential for absolute beginners. Once you are comfortable creating deep neural networks, it makes sense to take this new deeplearning.ai course specialization which fills up any gaps in your understanding of the underlying details and concepts. 2. Andrew stresses on the engineering aspects of deep learning and provides plenty of practical tips to save time and money — the third course in the DL specialization felt incredibly useful for my role as an architect leading engineering teams. 3. Jargon is handled well. Andrew explains that an empirical process = trial & error — He is brutally honest about the reality of designing and training deep nets. At some point I felt he might have as well just called Deep Learning as glorified curve-fitting 4. Squashes all hype around DL and AI — Andrew makes restrained, careful comments about proliferation of AI hype in the mainstream media and by the end of the course it is pretty clear that DL is nothing like the terminator. 5.Wonderful boilerplate code that just works out of the box! 6. Excellent course structure. 7. Nice, consistent and useful notation. Andrew strives to establish a fresh nomenclature for neural nets and I feel he could be quite successful in this endeavor. 8. Style of teaching that is unique to Andrew and carries over from ML — I could feel the same excitement I felt in 2013 when I took his original ML course. 9.The interviews with deep learning heroes are refreshing — It is motivating and fun to hear personal stories and anecdotes. I wish that he’d said ‘concretely’ more often! 2. Good tools are important and will help you accelerate your learning pace. I bought a digital pen after seeing Andrew teach with one. It helped me work more efficiently. 3. There is a psychological reason why I recommend the Fast.ai course before this one. Once you find your passion, you can learn uninhibited. 4. You just get that dopamine rush each time you score full points: 5. Don’t be scared by DL jargon (hyperparameters = settings, architecture/topology=style etc.) or the math symbols. If you take a leap of faith and pay attention to the lectures, Andrew shows why the symbols and notation are actually quite useful. They will soon become your tools of choice and you will wield them with style! Thanks for reading and best wishes! Update: Thanks for the overwhelmingly positive response! Many people are asking me to explain gradient descent and the differential calculus. I hope this helps! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in Strong AI Sharing concepts, ideas, and codes. 
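For a flavour of the kind of implementation work the assignments build up to (a single neuron, i.e. logistic regression, with gradients derived by hand and applied in vectorized numpy), here is a generic sketch. It is not the course's actual assignment code and the toy data is random; it only illustrates the "vectorization plus hand-derived gradients" style described above.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 5 examples with 3 features each (values are random, for illustration only)
X = np.random.randn(5, 3)
y = np.array([1, 0, 1, 1, 0], dtype=float)

w, b, lr = np.zeros(3), 0.0, 0.1
for _ in range(1000):
    a = sigmoid(X @ w + b)          # vectorized forward pass: predicted probabilities
    dz = a - y                      # gradient of the cross-entropy loss w.r.t. z, derived by hand
    w -= lr * (X.T @ dz) / len(y)   # gradient step for the weights
    b -= lr * dz.mean()             # gradient step for the bias
print(w, b)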
" Blaise Aguera y Arcas,8.7K,15,https://medium.com/@blaisea/do-algorithms-reveal-sexual-orientation-or-just-expose-our-stereotypes-d998fafdf477?source=tag_archive---------0----------------,Do algorithms reveal sexual orientation or just expose our stereotypes?,"by Blaise Agüera y Arcas, Alexander Todorov and Margaret Mitchell A study claiming that artificial intelligence can infer sexual orientation from facial images caused a media uproar in the Fall of 2017. The Economist featured this work on the cover of their September 9th magazine; on the other hand two major LGBTQ organizations, The Human Rights Campaign and GLAAD, immediately labeled it “junk science”. Michal Kosinski, who co-authored the study with fellow researcher Yilun Wang, initially expressed surprise, calling the critiques “knee-jerk” reactions. However, he then proceeded to make even bolder claims: that such AI algorithms will soon be able to measure the intelligence, political orientation, and criminal inclinations of people from their facial images alone. Kosinski’s controversial claims are nothing new. Last year, two computer scientists from China posted a non-peer-reviewed paper online in which they argued that their AI algorithm correctly categorizes “criminals” with nearly 90% accuracy from a government ID photo alone. Technology startups had also begun to crop up, claiming that they can profile people’s character from their facial images. These developments had prompted the three of us to collaborate earlier in the year on a Medium essay, Physiognomy’s New Clothes, to confront claims that AI face recognition reveals deep character traits. We described how the junk science of physiognomy has roots going back into antiquity, with practitioners in every era resurrecting beliefs based on prejudice using the new methodology of the age. In the 19th century this included anthropology and psychology; in the 20th, genetics and statistical analysis; and in the 21st, artificial intelligence. In late 2016, the paper motivating our physiognomy essay seemed well outside the mainstream in tech and academia, but as in other areas of discourse, what recently felt like a fringe position must now be addressed head on. Kosinski is a faculty member of Stanford’s Graduate School of Business, and this new study has been accepted for publication in the respected Journal of Personality and Social Psychology. Much of the ensuing scrutiny has focused on ethics, implicitly assuming that the science is valid. We will focus on the science. The authors trained and tested their “sexual orientation detector” using 35,326 images from public profiles on a US dating website. Composite images of the lesbian, gay, and straight men and women in the sample reveal a great deal about the information available to the algorithm: Clearly there are differences between these four composite faces. Wang and Kosinski assert that the key differences are in physiognomy, meaning that a sexual orientation tends to go along with a characteristic facial structure. However, we can immediately see that some of these differences are more superficial. For example, the “average” straight woman appears to wear eyeshadow, while the “average” lesbian does not. Glasses are clearly visible on the gay man, and to a lesser extent on the lesbian, while they seem absent in the heterosexual composites. Might it be the case that the algorithm’s ability to detect orientation has little to do with facial structure, but is due rather to patterns in grooming, presentation and lifestyle? 
We conducted a survey of 8,000 Americans using Amazon’s Mechanical Turk crowdsourcing platform to see if we could independently confirm these patterns, asking 77 yes/no questions such as “Do you wear eyeshadow?”, “Do you wear glasses?”, and “Do you have a beard?”, as well as questions about gender and sexual orientation. The results show that lesbians indeed use eyeshadow much less than straight women do, gay men and women do both wear glasses more, and young opposite-sex-attracted men are considerably more likely to have prominent facial hair than their gay or same-sex-attracted peers. Breaking down the answers by the age of the respondent can provide a richer and clearer view of the data than any single statistic. In the following figures, we show the proportion of women who answer “yes” to “Do you ever use makeup?” (top) and “Do you wear eyeshadow?” (bottom), averaged over 6-year age intervals: The blue curves represent strictly opposite-sex attracted women (a nearly identical set to those who answered “yes” to “Are you heterosexual or straight?”); the cyan curve represents women who answer “yes” to either or both of “Are you sexually attracted to women?” and “Are you romantically attracted to women?”; and the red curve represents women who answer “yes” to “Are you homosexual, gay or lesbian?”. [1] The shaded regions around each curve show 68% confidence intervals. [2] The patterns revealed here are intuitive; it won’t be breaking news to most that straight women tend to wear more makeup and eyeshadow than same-sex attracted and (even more so) lesbian-identifying women. On the other hand these curves also show us how often these stereotypes are violated. That same-sex attracted men of most ages wear glasses significantly more than exclusively opposite-sex attracted men do might be a bit less obvious, but this trend is equally clear: [3] A proponent of physiognomy might be tempted to guess that this is somehow related to differences in visual acuity between these populations of men. However, asking the question “Do you like how you look in glasses?” reveals that this is likely more of a stylistic choice: Same-sex attracted women also report wearing glasses more, as well as liking how they look in glasses more, across a range of ages: One can also see how opposite-sex attracted women under the age of 40 wear contact lenses significantly more than same-sex attracted women, despite reporting that they have a vision defect at roughly the same rate, further illustrating how the difference is driven by an aesthetic preference: [4] Similar analysis shows that young same-sex attracted men are much less likely to have hairy faces than opposite-sex attracted men (“serious facial hair” in our plots is defined as answering “yes” to having a goatee, beard, or moustache, but “no” to stubble). Overall, opposite-sex attracted men in our sample are 35% more likely to have serious facial hair than same-sex attracted men, and for men under the age of 31 (who are overrepresented on dating websites), this rises to 75%. Wang and Kosinski speculate in their paper that the faintness of the beard and moustache in their gay male composite might be connected with prenatal underexposure to androgens (male hormones), resulting in a feminizing effect, hence sparser facial hair. 
The fact that we see a cohort of same-sex attracted men in their 40s who have just as much facial hair as opposite-sex attracted men suggests a different story, in which fashion trends and cultural norms play the dominant role in choices about facial hair among men, not differing exposure to hormones early in development. The authors of the paper additionally note that the heterosexual male composite appears to have darker skin than the other three composites. Our survey confirms that opposite-sex attracted men consistently self-report having a tan face (“Yes” to “Is your face tan?”) slightly more often than same-sex attracted men: Once again Wang and Kosinski reach for a hormonal explanation, writing: “While the brightness of the facial image might be driven by many factors, previous research found that testosterone stimulates melanocyte structure and function leading to a darker skin”. However, a simpler answer is suggested by the responses to the question “Do you work outdoors?”: Overall, opposite-sex attracted men are 29% more likely to work outdoors, and among men under 31, this rises to 39%. Previous research has found that increased exposure to sunlight leads to darker skin! [5] None of these results prove that there is no physiological basis for sexual orientation; in fact ample evidence shows us that orientation runs much deeper than a choice or a “lifestyle”. In a critique aimed in part at fraudulent “conversion therapy” programs, United States Surgeon General David Satcher wrote in a 2001 report, “Sexual orientation is usually determined by adolescence, if not earlier [...], and there is no valid scientific evidence that sexual orientation can be changed”. It follows that if we dig deeply enough into human physiology and neuroscience we will eventually find reliable correlates and maybe even the origins of sexual orientation. In our survey we also find some evidence of outwardly visible correlates of orientation that are not cultural: perhaps most strikingly, very tall women are overrepresented among lesbian-identifying respondents. [6] However, while this is interesting, it’s very far from a good predictor of women’s sexual orientation. Makeup and eyeshadow do much better. The way Wang and Kosinski measure the efficacy of their “AI gaydar” is equivalent to choosing a straight and a gay or lesbian face image, both from data “held out” during the training process, and asking how often the algorithm correctly guesses which is which. 50% performance would be no better than random chance. For women, guessing that the taller of the two is the lesbian achieves only 51% accuracy — barely above random chance. This is because, despite the statistically meaningful overrepresentation of tall women among the lesbian population, the great majority of lesbians are not unusually tall. By contrast, the performance measures in the paper, 81% for gay men and 71% for lesbian women, seem impressive. [7] Consider, however, that we can achieve comparable results with trivial models based only on a handful of yes/no survey questions about presentation. For example, for pairs of women, one of whom is lesbian, the following not-exactly-superhuman algorithm is on average 63% accurate: if neither or both women wear eyeshadow, flip a coin; otherwise guess that the one who wears eyeshadow is straight, and the other lesbian. 
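That decision rule is simple enough to write out directly. Here is a minimal Python version of the procedure just described; the pair of people is hypothetical, and only the rule itself comes from the text.

import random

def guess_which_is_straight(wears_eyeshadow_a, wears_eyeshadow_b):
    # Given a pair of women, one straight and one lesbian, guess which is straight.
    # Rule from the text: if neither or both wear eyeshadow, flip a coin;
    # otherwise guess that the one who wears eyeshadow is straight.
    if wears_eyeshadow_a == wears_eyeshadow_b:
        return random.choice(['a', 'b'])
    return 'a' if wears_eyeshadow_a else 'b'

# Hypothetical pair: person a wears eyeshadow, person b does not.
print(guess_which_is_straight(True, False))   # -> 'a'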
Adding six more yes/no questions about presentation (“Do you ever use makeup?”, “Do you have long hair?”, “Do you have short hair?”, “Do you ever use colored lipstick?”, “Do you like how you look in glasses?”, and “Do you work outdoors?”) as additional signals raises the performance to 70%. [8] Given how many more details about presentation are available in a face image, 71% performance no longer seems so impressive. Several studies, including a recent one in the Journal of Sex Research, have shown that human judges’ “gaydar” is no more reliable than a coin flip when the judgement is based on pictures taken under well-controlled conditions (head pose, lighting, glasses, makeup, etc.). It’s better than chance if these variables are not controlled for, because a person’s presentation — especially if that person is out — involves social signaling. We signal our orientation and many other kinds of status, presumably in order to attract the kind of attention we want and to fit in with people like us. [9] Wang and Kosinski argue against this interpretation on the grounds that their algorithm works on Facebook selfies of openly gay men as well as dating website selfies. The issue, however, is not whether the images come from a dating website or Facebook, but whether they are self-posted or taken under standardized conditions. Most people present themselves in ways that have been calibrated over many years of media consumption, observing others, looking in the mirror, and gauging social reactions. In one of the earliest “gaydar” studies using social media, participants could categorize gay men with about 58% accuracy; but when the researchers used Facebook images of gay and heterosexual men posted by their friends (still far from a perfect control), the accuracy dropped to 52%. If subtle biases in image quality, expression, and grooming can be picked up on by humans, these biases can also be detected by an AI algorithm. While Wang and Kosinski acknowledge grooming and style, they believe that the chief differences between their composite images relate to face shape, arguing that gay men’s faces are more “feminine” (narrower jaws, longer noses, larger foreheads) while lesbian faces are more “masculine” (larger jaws, shorter noses, smaller foreheads). As with less facial hair on gay men and darker skin on straight men, they suggest that the mechanism is gender-atypical hormonal exposure during development. This echoes a widely discredited 19th century model of homosexuality, “sexual inversion”. More likely, heterosexual men tend to take selfies from slightly below, which will have the apparent effect of enlarging the chin, shortening the nose, shrinking the forehead, and attenuating the smile (see our selfies below). This view emphasizes dominance — or, perhaps more benignly, an expectation that the viewer will be shorter. On the other hand, as a wedding photographer notes in her blog, “when you shoot from above, your eyes look bigger, which is generally attractive — especially for women.” This may be a heteronormative assessment. When a face is photographed from below, the nostrils are prominent, while higher shooting angles de-emphasize and eventually conceal them altogether. Looking again at the composite images, we can see that the heterosexual male face has more pronounced dark spots corresponding to the nostrils than the gay male, while the opposite is true for the female faces. 
This is consistent with a pattern of heterosexual men on average shooting from below, heterosexual women from above as the wedding photographer suggests, and gay men and lesbian women from directly in front. A similar pattern is evident in the eyebrows: shooting from above makes them look more V-shaped, but their apparent shape becomes flatter, and eventually caret-shaped (^) as the camera is lowered. Shooting from below also makes the outer corners of the eyes appear lower. In short, the changes in the average positions of facial landmarks are consistent with what we would expect to see from differing selfie angles. The ambiguity between shooting angle and the real physical sizes of facial features is hard to fully disentangle from a two-dimensional image, both for a human viewer and for an algorithm. Although the authors are using face recognition technology designed to try to cancel out all effects of head pose, lighting, grooming, and other variables not intrinsic to the face, we can confirm that this doesn’t work perfectly; that’s why multiple distinct images of a person help when grouping photos by subject in Google Photos, and why a person may initially appear in more than one group. Tom White, a researcher at Victoria University in New Zealand, has experimented with the same facial recognition engine Kosinski and Wang use (VGG Face), and has found that its output varies systematically based on variables like smiling and head pose. When he trains a classifier based on VGG Face’s output to distinguish a happy expression from a neutral one, it gets the answer right 92% of the time — which is significant, given that the heterosexual female composite has a much more pronounced smile. Changes in head pose might be even more reliably detectable; for 576 test images, a classifier is able to pick out the ones facing to the right with 100% accuracy. In summary, we have shown how the obvious differences between lesbian or gay and straight faces in selfies relate to grooming, presentation, and lifestyle — that is, differences in culture, not in facial structure. These differences include: We’ve demonstrated that just a handful of yes/no questions about these variables can do nearly as good a job at guessing orientation as supposedly sophisticated facial recognition AI. Further, the current generation of facial recognition remains sensitive to head pose and facial expression. Therefore — at least at this point — it’s hard to credit the notion that this AI is in some way superhuman at “outing” us based on subtle but unalterable details of our facial structure. This doesn’t negate the privacy concerns the authors and various commentators have raised, but it emphasizes that such concerns relate less to AI per se than to mass surveillance, which is troubling regardless of the technologies used (even when, as in the days of the Stasi in East Germany, these were nothing but paper files and audiotapes). Like computers or the internal combustion engine, AI is a general-purpose technology that can be used to automate a great many tasks, including ones that should not be undertaken in the first place. We are hopeful about the confluence of new, powerful AI technologies with social science, but not because we believe in reviving the 19th century research program of inferring people’s inner character from their outer appearance. Rather, we believe AI is an essential tool for understanding patterns in human culture and behavior. It can expose stereotypes inherent in everyday language. 
It can reveal uncomfortable truths, as in Google’s work with the Geena Davis Institute, where our face gender classifier established that men are seen and heard nearly twice as often as women in Hollywood movies (yet female-led films outperform others at the box office!). Making social progress and holding ourselves to account is more difficult without such hard evidence, even when it only confirms our suspicions. Two of us (Margaret Mitchell and Blaise Agüera y Arcas) are research scientists specializing in machine learning and AI at Google; Agüera y Arcas leads a team that includes deep learning applied to face recognition, and powers face grouping in Google Photos. Alex Todorov is a professor in the Psychology Department at Princeton, where he directs the social perception lab. He is the author of Face Value: The Irresistible Influence of First Impressions. [1] This wording is based on several large national surveys, which we were able to use to sanity-check our numbers. About 6% of respondents identified as “homosexual, gay or lesbian” and 85% as “heterosexual”. About 4% (of all genders) were exclusively same-sex attracted. Of the men, 10% were either sexually or romantically same-sex attracted, and of the women, 20%. Just under 1% of respondents were trans, and about 2% identified with both or neither of the pronouns “she” and “he”. These numbers are broadly consistent with other surveys, especially when considered as a function of age. The Mechanical Turk population skews somewhat younger than the overall population of the US, and consistent with other studies, our data show that younger people are far more likely to identify non-heteronormatively. [2] These are wider for same-sex attracted and lesbian women because they are minority populations, resulting in a larger sampling error. The same holds for older people in our sample. [3] For the remainder of the plots we stick to opposite-sex attracted and same-sex attracted, as the counts are higher and the error bars therefore smaller; these categories are also somewhat less culturally freighted, since they rely on questions about attraction rather than identity. As with eyeshadow and makeup, the effects are similar and often even larger when comparing heterosexual-identifying with lesbian- or gay-identifying people. [4] Although we didn’t test this explicitly, slightly different rates of laser correction surgery seem a likely cause of the small but growing disparity between opposite-sex attracted and same-sex attracted women who answer “yes” to the vision defect questions as they age. [5] This finding may prompt the further question, “Why do more opposite-sex attracted men work outdoors?” This is not addressed by any of our survey questions, but hopefully the other evidence presented here will discourage an essentialist assumption such as “straight men are just more outdoorsy” without the evidence of a controlled study that can support the leap from correlation to cause. Such explanations are a form of logical fallacy sometimes called a just-so story: “an unverifiable narrative explanation for a cultural practice”. [6] Of the 253 lesbian-identified women in the sample, 5, or 2%, were over six feet, and 25, or 10%, were over 5’9”. Out of 3,333 heterosexual women (women who answered “yes” to “Are you heterosexual or straight?”), only 16, or 0.5%, were over six feet, and 152, or 5%, were over 5’9”. [7] They note that these figures rise to 91% for men and 83% for women if 5 images are considered. 
[8] These results are based on the simplest possible machine learning technique, a linear classifier. The classifier is trained on a randomly chosen 70% of the data, with the remaining 30% of the data held out for testing. Over 500 repetitions of this procedure, the error is 69.53% ± 2.98%. With the same number of repetitions and holdout, basing the decision on height alone gives an error of 51.08% ± 3.27%, and basing it on eyeshadow alone yields 62.96% ± 2.39%. [9] A longstanding body of work, e.g. Goffman’s The Presentation of Self in Everyday Life (1959) and Jones and Pittman’s Toward a General Theory of Strategic Self-Presentation (1982), delves more deeply into why we present ourselves the way we do, both for instrumental reasons (status, power, attraction) and because our presentation informs and is informed by how we conceive of our social selves. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Blaise Aguera y Arcas leads Google’s AI group in Seattle. He founded Seadragon, and was one of the creators of Photosynth at Microsoft. " David Foster,12.8K,11,https://medium.com/applied-data-science/how-to-build-your-own-alphazero-ai-using-python-and-keras-7f664945c188?source=tag_archive---------1----------------,How to build your own AlphaZero AI using Python and Keras,"In this article I’ll attempt to cover three things: In March 2016, Deepmind’s AlphaGo beat 18 times world champion Go player Lee Sedol 4–1 in a series watched by over 200 million people. A machine had learnt a super-human strategy for playing Go, a feat previously thought impossible, or at the very least, at least a decade away from being accomplished. This in itself, was a remarkable achievement. However, on 18th October 2017, DeepMind took a giant leap further. The paper ‘Mastering the Game of Go without Human Knowledge’ unveiled a new variant of the algorithm, AlphaGo Zero, that had defeated AlphaGo 100–0. Incredibly, it had done so by learning solely through self-play, starting ‘tabula rasa’ (blank state) and gradually finding strategies that would beat previous incarnations of itself. No longer was a database of human expert games required to build a super-human AI . A mere 48 days later, on 5th December 2017, DeepMind released another paper ‘Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm’ showing how AlphaGo Zero could be adapted to beat the world-champion programs StockFish and Elmo at chess and shogi. The entire learning process, from being shown the games for the first time, to becoming the best computer program in the world, had taken under 24 hours. With this, AlphaZero was born — the general algorithm for getting good at something, quickly, without any prior knowledge of human expert strategy. There are two amazing things about this achievement: It cannot be overstated how important this is. This means that the underlying methodology of AlphaGo Zero can be applied to ANY game with perfect information (the game state is fully known to both players at all times) because no prior expertise is required beyond the rules of the game. This is how it was possible for DeepMind to publish the chess and shogi papers only 48 days after the original AlphaGo Zero paper. Quite literally, all that needed to change was the input file that describes the mechanics of the game and to tweak the hyper-parameters relating to the neural network and Monte Carlo tree search. 
If AlphaZero used super-complex algorithms that only a handful of people in the world understood, it would still be an incredible achievement. What makes it extraordinary is that a lot of the ideas in the paper are actually far less complex than previous versions. At its heart lies the following beautifully simple mantra for learning: Doesn’t that sound a lot like how you learn to play games? When you play a bad move, it’s either because you misjudged the future value of the resulting positions, or you misjudged the likelihood that your opponent would play a certain move, so you didn’t think to explore that possibility. These are exactly the two aspects of gameplay that AlphaZero is trained to learn. Firstly, check out the AlphaGo Zero cheat sheet for a high-level understanding of how AlphaGo Zero works. It’s worth having that to refer to as we walk through each part of the code. There’s also a great article here that explains how AlphaZero works in more detail. Clone this Git repository, which contains the code I’ll be referencing. To start the learning process, run the top two panels in the run.ipynb Jupyter notebook. Once it has built up enough game positions to fill its memory, the neural network will begin training. Through additional self-play and training, it will gradually get better at predicting the game value and next moves from any position, resulting in better decision making and smarter overall play. We’ll now have a look at the code in more detail, and show some results that demonstrate the AI getting stronger over time. N.B — This is my own understanding of how AlphaZero works based on the information available in the papers referenced above. If any of the below is incorrect, apologies and I’ll endeavour to correct it! The game that our algorithm will learn to play is Connect4 (or Four In A Row). Not quite as complex as Go... but there are still 4,531,985,219,092 game positions in total. The game rules are straightforward. Players take it in turns to enter a piece of their colour in the top of any available column. The first player to get four of their colour in a row, whether vertically, horizontally or diagonally, wins. If the entire grid is filled without a four-in-a-row being created, the game is drawn. Here’s a summary of the key files that make up the codebase: This file contains the game rules for Connect4. Each square is allocated a number from 0 to 41, as follows: The game.py file gives the logic behind moving from one game state to another, given a chosen action. For example, given the empty board and action 38, the takeAction method returns a new game state, with the starting player’s piece at the bottom of the centre column. You can replace the game.py file with any game file that conforms to the same API and the algorithm will, in principle, learn strategy through self-play, based on the rules you have given it. This contains the code that starts the learning process. It loads the game rules and then iterates through the main loop of the algorithm, which consists of three stages: There are two agents involved in this loop, the best_player and the current_player. The best_player contains the best-performing neural network and is used to generate the self-play memories. The current_player then retrains its neural network on these memories and is then pitched against the best_player. If it wins, the neural network inside the best_player is switched for the neural network inside the current_player, and the loop starts again. This contains the Agent class (a player in the game).
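Stepping back for a moment, the best_player / current_player cycle described above can be sketched schematically. Everything below is a placeholder (stub classes, made-up function names, an invented win threshold) rather than the repository's actual code; it is only meant to show the shape of the three-stage loop.

import random

class Net:
    # Stub network: just a version number so the sketch runs end to end.
    def __init__(self, version=0):
        self.version = version

def self_play(best_net, n_games):
    # Placeholder: would play n_games guided by best_net and record positions.
    return [("game_position", "mcts_probs", "winner")] * n_games

def retrain(net, memory):
    # Placeholder: would fit the network on sampled memories.
    return Net(net.version + 1)

def evaluate(challenger, champion, n_games):
    # Placeholder: would pit the two networks against each other.
    wins = random.randint(0, n_games)
    return wins, n_games - wins

best_net, current_net, memory = Net(), Net(), []
for iteration in range(3):                                       # the real loop runs indefinitely
    memory += self_play(best_net, n_games=30)                    # 1. self play fills the memory
    current_net = retrain(current_net, memory)                   # 2. retraining on those memories
    wins, losses = evaluate(current_net, best_net, n_games=20)   # 3. evaluation match
    if wins > 1.3 * losses:                                      # made-up promotion threshold
        best_net = current_net                                   # promote the challenger
    print(iteration, best_net.version)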
Each player is initialised with its own neural network and Monte Carlo Search Tree. The simulate method runs the Monte Carlo Tree Search process. Specifically, the agent moves to a leaf node of the tree, evaluates the node with its neural network and then backfills the value of the node up through the tree. The act method repeats the simulation multiple times to understand which move from the current position is most favourable. It then returns the chosen action to the game, to enact the move. The replay method retrains the neural network, using memories from previous games. This file contains the Residual_CNN class, which defines how to build an instance of the neural network. It uses a condensed version of the neural network architecture in the AlphaGoZero paper — i.e. a convolutional layer, followed by many residual layers, then splitting into a value and policy head. The depth and number of convolutional filters can be specified in the config file. The Keras library is used to build the network, with a backend of Tensorflow. To view individual convolutional filters and densely connected layers in the neural network, run the following inside the the run.ipynb notebook: This contains the Node, Edge and MCTS classes, that constitute a Monte Carlo Search Tree. The MCTS class contains the moveToLeaf and backFill methods previously mentioned, and instances of the Edge class store the statistics about each potential move. This is where you set the key parameters that influence the algorithm. Adjusting these variables will affect that running time, neural network accuracy and overall success of the algorithm. The above parameters produce a high quality Connect4 player, but take a long time to do so. To speed the algorithm up, try the following parameters instead. Contains the playMatches and playMatchesBetweenVersions functions that play matches between two agents. To play against your creation, run the following code (it’s also in the run.ipynb notebook) When you run the algorithm, all model and memory files are saved in the run folder, in the root directory. To restart the algorithm from this checkpoint later, transfer the run folder to the run_archive folder, attaching a run number to the folder name. Then, enter the run number, model version number and memory version number into the initialise.py file, corresponding to the location of the relevant files in the run_archive folder. Running the algorithm as usual will then start from this checkpoint. An instance of the Memory class stores the memories of previous games, that the algorithm uses to retrain the neural network of the current_player. This file contains a custom loss function, that masks predictions from illegal moves before passing to the cross entropy loss function. The locations of the run and run_archive folders. Log files are saved to the log folder inside the run folder. To turn on logging, set the values of the logger_disabled variables to False inside this file. Viewing the log files will help you to understand how the algorithm works and see inside its ‘mind’. For example, here is a sample from the logger.mcts file. Equally from the logger.tourney file, you can see the probabilities attached to each move, during the evaluation phase: Training over a couple of days produces the following chart of loss against mini-batch iteration number: The top line is the error in the policy head (the cross entropy of the MCTS move probabilities, against the output from the neural network). 
The bottom line is the error in the value head (the mean squared error between the actual game value and the neural network predict of the value). The middle line is an average of the two. Clearly, the neural network is getting better at predicting the value of each game state and the likely next moves. To show how this results in stronger and stronger play, I ran a league between 17 players, ranging from the 1st iteration of the neural network, up to the 49th. Each pairing played twice, with both players having a chance to play first. Here are the final standings: Clearly, the later versions of the neural network are superior to the earlier versions, winning most of their games. It also appears that the learning hasn’t yet saturated — with further training time, the players would continue to get stronger, learning more and more intricate strategies. As an example, one clear strategy that the neural network has favoured over time is grabbing the centre column early. Observe the difference between the first version of the algorithm and say, the 30th version: 1st neural network version 30th neural network version This is a good strategy as many lines require the centre column — claiming this early ensures your opponent cannot take advantage of this. This has been learnt by the neural network, without any human input. There is a game.py file for a game called ‘Metasquares’ in the games folder. This involves placing X and O markers in a grid to try to form squares of different sizes. Larger squares score more points than smaller squares and the player with the most points when the grid is full wins. If you switch the Connect4 game.py file for the Metasquares game.py file, the same algorithm will learn how to play Metasquares instead. Hopefully you find this article useful — let me know in the comments below if you find any typos or have questions about anything in the codebase or article and I’ll get back to you as soon as possible. If you would like to learn more about how our company, Applied Data Science develops innovative data science solutions for businesses, feel free to get in touch through our website or directly through LinkedIn. ... and if you like this, feel free to leave a few hearty claps :) Applied Data Science is a London based consultancy that implements end-to-end data science solutions for businesses, delivering measurable value. If you’re looking to do more with your data, let’s talk. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Co-founder of Applied Data Science Cutting edge data science news and projects " Aman Agarwal,7K,24,https://medium.freecodecamp.org/explained-simply-how-an-ai-program-mastered-the-ancient-game-of-go-62b8940a9080?source=tag_archive---------2----------------,Explained Simply: How an AI program mastered the ancient game of Go,"This is about AlphaGo, Google DeepMind’s Go playing AI that shook the technology world in 2016 by defeating one of the best players in the world, Lee Sedol. Go is an ancient board game which has so many possible moves at each step that future positions are hard to predict — and therefore it requires strong intuition and abstract thinking to play. Because of this reason, it was believed that only humans could be good at playing Go. Most researchers thought that it would still take decades to build an AI which could think like that. In fact, I’m releasing this essay today because this week (March 8–15) marks the two-year anniversary of the AlphaGo vs Sedol match! 
But AlphaGo didn’t stop there. 8 months later, it played 60 professional games on a Go website under disguise as a player named “Master”, and won every single game, against dozens of world champions, of course without resting between games. Naturally this was a HUGE achievement in the field of AI and sparked worldwide discussions about whether we should be excited or worried about artificial intelligence. Today we are going to take the original research paper published by DeepMind in the Nature journal, and break it down paragraph-by-paragraph using simple English. After this essay, you’ll know very clearly what AlphaGo is, and how it works. I also hope that after reading this you will not believe all the news headlines made by journalists to scare you about AI, and instead feel excited about it. Worrying about the growing achievements of AI is like worrying about the growing abilities of Microsoft Powerpoint. Yes, it will get better with time with new features being added to it, but it can’t just uncontrollably grow into some kind of Hollywood monster. You DON’T need to know how to play Go to understand this paper. In fact, I myself have only read the first 3–4 lines in Wikipedia’s opening paragraph about it. Instead, surprisingly, I use some examples from basic Chess to explain the algorithms. You just have to know what a 2-player board game is, in which each player takes turns and there is one winner at the end. Beyond that you don’t need to know any physics or advanced math or anything. This will make it more approachable for people who only just now started learning about machine learning or neural networks. And especially for those who don’t use English as their first language (which can make it very difficult to read such papers). If you have NO prior knowledge of AI and neural networks, you can read the “Deep Learning” section of one of my previous essays here. After reading that, you’ll be able to get through this essay. If you want to get a shallow understanding of Reinforcement Learning too (optional reading), you can find it here. Here’s the original paper if you want to try reading it: As for me: Hi I’m Aman, an AI and autonomous robots engineer. I hope that my work will save you a lot of time and effort if you were to study this on your own. Do you speak Japanese? Ryohji Ikebe has kindly written a brief memo about this essay in Japanese, in a series of Tweets. As you know, the goal of this research was to train an AI program to play Go at the level of world-class professional human players. To understand this challenge, let me first talk about something similar done for Chess. In the early 1990s, IBM came out with the Deep Blue computer which defeated the great champion Gary Kasparov in Chess. (He’s also a very cool guy, make sure to read more about him later!) How did Deep Blue play? Well, it used a very brute force method. At each step of the game, it took a look at all the possible legal moves that could be played, and went ahead to explore each and every move to see what would happen. And it would keep exploring move after move for a while, forming a kind of HUGE decision tree of thousands of moves. And then it would come back along that tree, observing which moves seemed most likely to bring a good result. But, what do we mean by “good result”? Well, Deep Blue had many carefully designed chess strategies built into it by expert chess players to help it make better decisions — for example, how to decide whether to protect the king or get advantage somewhere else? 
They made a specific “evaluation algorithm” for this purpose, to compare how advantageous or disadvantageous different board positions are (IBM hard-coded expert chess strategies into this evaluation function). And finally it chooses a carefully calculated move. On the next turn, it basically goes through the whole thing again. As you can see, this means Deep Blue thought about millions of theoretical positions before playing each move. This was not so impressive in terms of the AI software of Deep Blue, but rather in the hardware — IBM claimed it to be one of the most powerful computers available in the market at that time. It could look at 200 million board positions per second. Now we come to Go. Just believe me that this game is much more open-ended, and if you tried the Deep Blue strategy on Go, you wouldn’t be able to play well. There would be SO MANY positions to look at at each step that it would simply be impractical for a computer to go through that hell. For example, at the opening move in Chess there are 20 possible moves. In Go the first player has 361 possible moves, and this scope of choices stays wide throughout the game. This is what they mean by “enormous search space.” Moreover, in Go, it’s not so easy to judge how advantageous or disadvantageous a particular board position is at any specific point in the game — you kinda have to play the whole game for a while before you can determine who is winning. But let’s say you magically had a way to do both of these. And that’s where deep learning comes in! So in this research, DeepMind used neural networks to do both of these tasks (if you haven’t read about them yet, here’s the link again). They trained a “policy neural network” to decide which are the most sensible moves in a particular board position (so it’s like following an intuitive strategy to pick moves from any position). And they trained a “value neural network” to estimate how advantageous a particular board arrangement is for the player (or in other words, how likely you are to win the game from this position). They trained these neural networks first with human game examples (your good old ordinary supervised learning). After this the AI was able to mimic human playing to a certain degree, so it acted like a weak human player. And then to train the networks even further, they made the AI play against itself millions of times (this is the “reinforcement learning” part). With this, the AI got better because it had more practice. With these two networks alone, DeepMind’s AI was able to play well against state-of-the-art Go playing programs that other researchers had built before. These other programs had used an already popular pre-existing game playing algorithm, called the “Monte Carlo Tree Search” (MCTS). More about this later. But guess what, we still haven’t talked about the real deal. DeepMind’s AI isn’t just about the policy and value networks. It doesn’t use these two networks as a replacement of the Monte Carlo Tree Search. Instead, it uses the neural networks to make the MCTS algorithm work better... and it got so much better that it reached superhuman levels. THIS improved variation of MCTS is “AlphaGo”, the AI that beat Lee Sedol and went down in AI history as one of the greatest breakthroughs ever. So essentially, AlphaGo is simply an improved implementation of a very ordinary computer science algorithm. Do you understand now why AI in its current form is absolutely nothing to be scared of? Wow, we’ve spent a lot of time on the Abstract alone. 
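Before moving on, it is worth making that "enormous search space" point concrete. With roughly 20 legal moves per position in Chess versus up to 361 in Go, an exhaustive look-ahead blows up very differently; the depth of 5 below is an arbitrary choice purely for illustration.

# Rough size of an exhaustive search tree after looking d moves ahead,
# assuming a constant branching factor b (a simplification).
def tree_size(branching_factor, depth):
    return branching_factor ** depth

print(tree_size(20, 5))    # Chess-like branching: 3,200,000 positions
print(tree_size(361, 5))   # Go-like branching:    6,131,066,257,801 positions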
Alright — to understand the paper from this point on, first we’ll talk about a gaming strategy called the Monte Carlo Tree Search algorithm. For now, I’ll just explain this algorithm at enough depth to make sense of this essay. But if you want to learn about it in depth, some smart people have also made excellent videos and blog posts on this: 1. A short video series from Udacity2. Jeff Bradberry’s explanation of MCTS3. An MCTS tutorial by Fullstack Academy The following section is long, but easy to understand (I’ll try my best) and VERY important, so stay with me! The rest of the essay will go much quicker. Let’s talk about the first paragraph of the essay above. Remember what I said about Deep Blue making a huge tree of millions of board positions and moves at each step of the game? You had to do simulations and look at and compare each and every possible move. As I said before, that was a simple approach and very straightforward approach — if the average software engineer had to design a game playing AI, and had all the strongest computers of the world, he or she would probably design a similar solution. But let’s think about how do humans themselves play chess? Let’s say you’re at a particular board position in the middle of the game. By game rules, you can do a dozen different things — move this pawn here, move the queen two squares here or three squares there, and so on. But do you really make a list of all the possible moves you can make with all your pieces, and then select one move from this long list? No — you “intuitively” narrow down to a few key moves (let’s say you come up with 3 sensible moves) that you think make sense, and then you wonder what will happen in the game if you chose one of these 3 moves. You might spend 15–20 seconds considering each of these 3 moves and their future — and note that during these 15 seconds you don’t have to carefully plan out the future of each move; you can just “roll out” a few mental moves guided by your intuition without TOO much careful thought (well, a good player would think farther and more deeply than an average player). This is because you have limited time, and you can’t accurately predict what your opponent will do at each step in that lovely future you’re cooking up in your brain. So you’ll just have to let your gut feeling guide you. I’ll refer to this part of the thinking process as “rollout”, so take note of it!So after “rolling out” your few sensible moves, you finally say screw it and just play the move you find best. Then the opponent makes a move. It might be a move you had already well anticipated, which means you are now pretty confident about what you need to do next. You don’t have to spend too much time on the rollouts again. OR, it could be that your opponent hits you with a pretty cool move that you had not expected, so you have to be even more careful with your next move.This is how the game carries on, and as it gets closer and closer to the finishing point, it would get easier for you to predict the outcome of your moves — so your rollouts don’t take as much time. The purpose of this long story is to describe what the MCTS algorithm does on a superficial level — it mimics the above thinking process by building a “search tree” of moves and positions every time. Again, for more details you should check out the links I mentioned earlier. 
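To make the "rollout" idea a little more tangible, here is a deliberately stripped-down sketch in Python. It is not MCTS proper (there is no tree, no stored statistics, and no policy guiding the rollouts), and it uses a toy stick-taking game instead of Go, but it shows the core trick being described: judge each candidate move by playing out random games from it and averaging the results.

import random

# Toy stand-in game so the sketch runs end to end: players alternately take
# 1-3 sticks from a pile, and whoever takes the last stick wins.
def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def rollout_value(pile, my_turn):
    # Finish the game with purely random moves; return 1 if "we" end up winning.
    while pile > 0:
        pile -= random.choice(legal_moves(pile))
        if pile == 0:
            return 1 if my_turn else 0
        my_turn = not my_turn
    return 0 if my_turn else 1

def choose_move(pile, n_rollouts=5000):
    # Estimate each candidate move by averaging the outcomes of many random
    # rollouts, then play the move with the best average.
    scores = {}
    for move in legal_moves(pile):
        results = [rollout_value(pile - move, my_turn=False) for _ in range(n_rollouts)]
        scores[move] = sum(results) / n_rollouts
    return max(scores, key=scores.get)

print(choose_move(10))   # usually prints 2, which happens to be the perfect-play move here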
The innovation here is that instead of going through all the possible moves at each position (which Deep Blue did), it instead intelligently selects a small set of sensible moves and explores those instead. To explore them, it “rolls out” the future of each of these moves and compares them based on their imagined outcomes.(Seriously — this is all I think you need to understand this essay) Now — coming back to the screenshot from the paper. Go is a “perfect information game” (please read the definition in the link, don’t worry it’s not scary). And theoretically, for such games, no matter which particular position you are at in the game (even if you have just played 1–2 moves), it is possible that you can correctly guess who will win or lose (assuming that both players play “perfectly” from that point on). I have no idea who came up with this theory, but it is a fundamental assumption in this research project and it works. So that means, given a state of the game s, there is a function v*(s) which can predict the outcome, let’s say probability of you winning this game, from 0 to 1. They call it the “optimal value function”. Because some board positions are more likely to result in you winning than other board positions, they can be considered more “valuable” than the others. Let me say it again: Value = Probability between 0 and 1 of you winning the game. But wait — say there was a girl named Foma sitting next to you while you play Chess, and she keeps telling you at each step if you’re winning or losing. “You’re winning... You’re losing... Nope, still losing...” I think it wouldn’t help you much in choosing which move you need to make. She would also be quite annoying. What would instead help you is if you drew the whole tree of all the possible moves you can make, and the states that those moves would lead to — and then Foma would tell you for the entire tree which states are winning states and which states are losing states. Then you can choose moves which will keep leading you to winning states. All of a sudden Foma is your partner in crime, not an annoying friend. Here, Foma behaves as your optimal value function v*(s). Earlier, it was believed that it’s not possible to have an accurate value function like Foma for the game of Go, because the games had so much uncertainty. BUT — even if you had the wonderful Foma, this wonderland strategy of drawing out all the possible positions for Foma to evaluate will not work very well in the real world. In a game like Chess or Go, as we said before, if you try to imagine even 7–8 moves into the future, there can be so many possible positions that you don’t have enough time to check all of them with Foma. So Foma is not enough. You need to narrow down the list of moves to a few sensible moves that you can roll out into the future. How will your program do that? Enter Lusha. Lusha is a skilled Chess player and enthusiast who has spent decades watching grand masters play Chess against each other. She can look at your board position, look quickly at all the available moves you can make, and tell you how likely it would be that a Chess expert would make any of those moves if they were sitting at your table. So if you have 50 possible moves at a point, Lusha will tell you the probability that each move would be picked by an expert. Of course, a few sensible moves will have a much higher probability and other pointless moves will have very little probability. She is your policy function, p(a\s). 
For a given state s, she can give you probabilities for all the possible moves that an expert would make. Wow — you can take Lusha’s help to guide you in how to select a few sensible moves, and Foma will tell you the likelihood of winning from each of those moves. You can choose the move that both Foma and Lusha approve. Or, if you want to be extra careful, you can roll out the moves selected by Lusha, have Foma evaluate them, pick a few of them to roll out further into the future, and keep letting Foma and Lusha help you predict VERY far into the game’s future — much quicker and more efficient than to go through all the moves at each step into the future. THIS is what they mean by “reducing the search space”. Use a value function (Foma) to predict outcomes, and use a policy function (Lusha) to give you grand-master probabilities to help narrow down the moves you roll out. These are called “Monte Carlo rollouts”. Then while you backtrack from future to present, you can take average values of all the different moves you rolled out, and pick the most suitable action. So far, this has only worked on a weak amateur level in Go, because the policy functions and value functions that they used to guide these rollouts weren’t that great. Phew. The first line is self explanatory. In MCTS, you can start with an unskilled Foma and unskilled Lusha. The more you play, the better they get at predicting solid outcomes and moves. “Narrowing the search to a beam of high probability actions” is just a sophisticated way of saying, “Lusha helps you narrow down the moves you need to roll out by assigning them probabilities that an expert would play them”. Prior work has used this technique to achieve strong amateur level AI players, even with simple (or “shallow” as they call it) policy functions. Yeah, convolutional neural networks are great for image processing. And since a neural network takes a particular input and gives an output, it is essentially a function, right? So you can use a neural network to become a complex function. So you can just pass in an image of the board position and let the neural network figure out by itself what’s going on. This means it’s possible to create neural networks which will behave like VERY accurate policy and value functions. The rest is pretty self explanatory. Here we discuss how Foma and Lusha were trained. To train the policy network (predicting for a given position which moves experts would pick), you simply use examples of human games and use them as data for good old supervised learning. And you want to train another slightly different version of this policy network to use for rollouts; this one will be smaller and faster. Let’s just say that since Lusha is so experienced, she takes some time to process each position. She’s good to start the narrowing-down process with, but if you try to make her repeat the process , she’ll still take a little too much time. So you train a *faster policy network* for the rollout process (I’ll call it... Lusha’s younger brother Jerry? I know I know, enough with these names). After that, once you’ve trained both of the slow and fast policy networks enough using human player data, you can try letting Lusha play against herself on a Go board for a few days, and get more practice. This is the reinforcement learning part — making a better version of the policy network. Then, you train Foma for value prediction: determining the probability of you winning. 
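Before we go further into how these networks are trained, a minimal sketch of that "reducing the search space" idea might help: the policy function (Lusha) proposes a short list of expert-looking moves, and the value function (Foma) scores the position each of them leads to. Both functions and the toy state below are hypothetical stand-ins, not the paper's trained networks:

```python
def policy_fn(state):
    """Lusha: probability that an expert would play each legal move from `state`."""
    return state["expert_probs"]

def value_fn(state):
    """Foma: estimated probability of winning from `state`, between 0 and 1."""
    return state["win_prob"]

def next_state(state, move):
    """Hypothetical game-engine step."""
    return state["children"][move]

def best_move(state, top_k=2):
    # 1. Narrow: keep only the top_k moves an expert would plausibly play.
    probs = policy_fn(state)
    candidates = sorted(probs, key=probs.get, reverse=True)[:top_k]
    # 2. Evaluate: ask the value function how good each resulting position looks.
    scored = {m: value_fn(next_state(state, m)) for m in candidates}
    return max(scored, key=scored.get)

toy = {
    "expert_probs": {"a": 0.6, "b": 0.3, "c": 0.1},
    "children": {"a": {"win_prob": 0.58}, "b": {"win_prob": 0.64}, "c": {"win_prob": 0.2}},
}
print(best_move(toy))   # evaluates only the two expert-favoured moves and picks "b"
```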
You let the AI practice by playing itself again and again in a simulated environment, observe the end result each time, and learn from its mistakes to get better and better. I won’t go into the details of how these networks are trained. You can read more technical details in a later section of the paper (‘Methods’), which I haven’t covered here. In fact, the real purpose of this particular paper is not to show how they used reinforcement learning on these neural networks. One of DeepMind’s previous papers, in which they taught AI to play ATARI games, has already discussed some reinforcement learning techniques in depth (and I’ve already written an explanation of that paper here). For this paper, as I lightly mentioned in the Abstract and also underlined in the screenshot above, the biggest innovation was the fact that they used RL with neural networks to improve an already popular game-playing algorithm, MCTS. RL is a cool tool in the toolbox, which they used to fine-tune the policy and value function neural networks after the regular supervised training. This research paper is about proving how versatile and excellent this tool is, not about teaching you how to use it. In television lingo, the Atari paper was an RL infomercial and this AlphaGo paper is a commercial. A quick note before you move on. Would you like to help me write more such essays explaining cool research papers? If you’re serious, I’d be glad to work with you. Please leave a comment and I’ll get in touch with you. So, the first step is training our policy NN (Lusha) to predict which moves are likely to be played by an expert. This NN’s goal is to allow the AI to play similarly to an expert human. This is a convolutional neural network (as I mentioned before, it’s a special kind of NN that is very useful in image processing) that takes in a simplified image of a board arrangement. “Rectifier nonlinearities” are layers that can be added to the network’s architecture. They give it the ability to learn more complex things. If you’ve ever trained NNs before, you might have used the “ReLU” layer. That’s what these are. The training data here was in the form of randomly sampled board positions, with the labels being the moves chosen by humans in those positions. Just regular supervised learning. Here they use “stochastic gradient ASCENT”. Well, this is an optimisation method that uses the gradients computed by backpropagation. Here, you’re trying to maximise a reward function. And the reward function is just the probability the network assigns to the move the human expert actually played; you want to increase this probability. But hey — you don’t really need to think too much about this. Normally you train the network so that it minimises a loss function, which is essentially the error/difference between the predicted outcome and the actual label. That is called gradient DESCENT. In the actual implementation of this research paper, they have indeed used regular gradient descent. You can easily find a loss function that behaves opposite to the reward function, such that minimising this loss will maximise the reward. The policy network has 13 layers, and is called the “SL policy” network (SL = supervised learning). The data came from a... I’ll just say it’s a popular website on which millions of people play Go. How well did this SL policy network perform? It was more accurate than what other researchers had achieved earlier. The rest of the paragraph is quite self-explanatory. 
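To see why the ASCENT/DESCENT distinction is mostly cosmetic, here is a toy illustration: nudging the network to raise the probability of the expert's move (gradient ascent on the log-probability) is exactly the same update as lowering the usual cross-entropy / negative log-likelihood loss (gradient descent). The three "logits" below are made-up numbers, not outputs of the paper's 13-layer network:

```python
import math

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [0.2, -0.1, 0.0]      # toy network outputs for moves 0, 1, 2
expert_move = 1                # the move the human expert actually played
lr = 0.5

for step in range(3):
    probs = softmax(logits)
    nll = -math.log(probs[expert_move])              # the loss we want to shrink
    # Gradient of the NLL w.r.t. the logits for a softmax is (probs - one_hot_label).
    grads = [p - (1.0 if i == expert_move else 0.0) for i, p in enumerate(probs)]
    # Descending on the NLL is exactly ascending on log p(expert_move).
    logits = [x - lr * g for x, g in zip(logits, grads)]
    print(f"step {step}: p(expert move) = {probs[expert_move]:.3f}, NLL = {nll:.3f}")
```

Run it and you will see p(expert move) climb while the NLL falls, which is the whole point.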
As for the “rollout policy”, you do remember from a few paragraphs ago, how Lusha the SL policy network is slow so it can’t integrate well with the MCTS algorithm? And we trained another faster version of Lusha called Jerry who was her younger brother? Well, this refers to Jerry right here. As you can see, Jerry is just half as accurate as Lusha BUT it’s thousands of times faster! It will really help get through rolled out simulations of the future faster, when we apply the MCTS. For this next section, you don’t *have* to know about Reinforcement Learning already, but then you’ll have to assume that whatever I say works. If you really want to dig into details and make sure of everything, you might want to read a little about RL first. Once you have the SL network, trained in a supervised manner using human player moves with the human moves data, as I said before you have to let her practice by itself and get better. That’s what we’re doing here. So you just take the SL policy network, save it in a file, and make another copy of it. Then you use reinforcement learning to fine-tune it. Here, you make the network play against itself and learn from the outcomes. But there’s a problem in this training style. If you only forever practice against ONE opponent, and that opponent is also only practicing with you exclusively, there’s not much of new learning you can do. You’ll just be training to practice how to beat THAT ONE player. This is, you guessed it, overfitting: your techniques play well against one opponent, but don’t generalize well to other opponents. So how do you fix this? Well, every time you fine-tune a neural network, it becomes a slightly different kind of player. So you can save this version of the neural network in a list of “players”, who all behave slightly differently right? Great — now while training the neural network, you can randomly make it play against many different older and newer versions of the opponent, chosen from that list. They are versions of the same player, but they all play slightly differently. And the more you train, the MORE players you get to train even more with! Bingo! In this training, the only thing guiding the training process is the ultimate goal, i.e winning or losing. You don’t need to specially train the network to do things like capture more area on the board etc. You just give it all the possible legal moves it can choose from, and say, “you have to win”. And this is why RL is so versatile; it can be used to train policy or value networks for any game, not just Go. Here, they tested how accurate this RL policy network was, just by itself without any MCTS algorithm. As you would remember, this network can directly take a board position and decide how an expert would play it — so you can use it to single-handedly play games.Well, the result was that the RL fine-tuned network won against the SL network that was only trained on human moves. It also won against other strong Go playing programs. Must note here that even before training this RL policy network, the SL policy network was already better than the state of the art — and now, it has further improved! And we haven’t even come to the other parts of the process like the value network. Did you know that baby penguins can sneeze louder than a dog can bark? Actually that’s not true, but I thought you’d like a little joke here to distract from the scary-looking equations above. Coming to the essay again: we’re done training Lusha here. 
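As a rough sketch, that self-play-with-a-pool scheme might look like the following; `PolicyNet`, `play_game` and `reinforce` are hypothetical placeholders for the real network, the game engine and the policy-gradient update, so treat this as the shape of the loop rather than the paper's method:

```python
import copy
import random

class PolicyNet:
    def __init__(self):
        self.skill = 0.0                    # placeholder for real parameters

def play_game(current, opponent):
    """Hypothetical self-play game; returns +1 if `current` wins, -1 otherwise."""
    edge = current.skill - opponent.skill
    return 1 if random.random() < 0.5 + 0.05 * edge else -1

def reinforce(net, outcome):
    """Hypothetical policy-gradient update driven only by winning or losing."""
    net.skill += 0.01 * outcome

current = PolicyNet()                       # starts as a copy of the SL policy network
opponent_pool = [copy.deepcopy(current)]    # the list of saved "players"

for game in range(1000):
    opponent = random.choice(opponent_pool) # older and newer versions alike
    outcome = play_game(current, opponent)
    reinforce(current, outcome)
    if game % 100 == 0:                     # periodically snapshot the current player
        opponent_pool.append(copy.deepcopy(current))
```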
Now back to Foma — remember the “optimal value function”: v*(s) -> that only tells you how likely you are to win in your current board position if both players play perfectly from that point on?So obviously, to train an NN to become our value function, we would need a perfect player... which we don’t have. So we just use our strongest player, which happens to be our RL policy network. It takes the current state board state s, and outputs the probability that you will win the game. You play a game and get to know the outcome (win or loss). Each of the game states act as a data sample, and the outcome of that game acts as the label. So by playing a 50-move game, you have 50 data samples for value prediction. Lol, no. This approach is naive. You can’t use all 50 moves from the game and add them to the dataset. The training data set had to be chosen carefully to avoid overfitting. Each move in the game is very similar to the next one, because you only move once and that gives you a new position, right? If you take the states at all 50 of those moves and add them to the training data with the same label, you basically have lots of “kinda duplicate” data, and that causes overfitting. To prevent this, you choose only very distinct-looking game states. So for example, instead of all 50 moves of a game, you only choose 5 of them and add them to the training set. DeepMind took 30 million positions from 30 million different games, to reduce any chances of there being duplicate data. And it worked! Now, something conceptual here: there are two ways to evaluate the value of a board position. One option is a magical optimal value function (like the one you trained above). The other option is to simply roll out into the future using your current policy (Lusha) and look at the final outcome in this roll out. Obviously, the real game would rarely go by your plans. But DeepMind compared how both of these options do. You can also do a mixture of both these options. We will learn about this “mixing parameter” a little bit later, so make a mental note of this concept! Well, your single neural network trying to approximate the optimal value function is EVEN BETTER than doing thousands of mental simulations using a rollout policy! Foma really kicked ass here. When they replaced the fast rollout policy with the twice-as-accurate (but slow) RL policy Lusha, and did thousands of simulations with that, it did better than Foma. But only slightly better, and too slowly. So Foma is the winner of this competition, she has proved that she can’t be replaced. Now that we have trained the policy and value functions, we can combine them with MCTS and give birth to our former world champion, destroyer of grand masters, the breakthrough of a generation, weighing two hundred and sixty eight pounds, one and only Alphaaaaa GO! In this section, ideally you should have a slightly deeper understanding of the inner workings of the MCTS algorithm, but what you have learned so far should be enough to give you a good feel for what’s going on here. The only thing you should note is how we’re using the policy probabilities and value estimations. We combine them during roll outs, to narrow down the number of moves we want to roll out at each step. Q(s,a) represents the value function, and u(s,a) is a stored probability for that position. I’ll explain. Remember that the policy network uses supervised learning to predict expert moves? 
And it doesn’t just give you most likely move, but rather gives you probabilities for each possible move that tell how likely it is to be an expert move. This probability can be stored for each of those actions. Here they call it “prior probability”, and they obviously use it while selecting which actions to explore. So basically, to decide whether or not to explore a particular move, you consider two things: First, by playing this move, how likely are you to win? Yes, we already have our “value network” to answer this first question. And the second question is, how likely is it that an expert would choose this move? (If a move is super unlikely to be chosen by an expert, why even waste time considering it. This we get from the policy network) Then let’s talk about the “mixing parameter” (see came back to it!). As discussed earlier, to evaluate positions, you have two options: one, simply use the value network you have been using to evaluate states all along. And two, you can try to quickly play a rollout game with your current strategy (assuming the other player will play similarly), and see if you win or lose. We saw how the value function was better than doing rollouts in general. Here they combine both. You try giving each prediction 50–50 importance, or 40–60, or 0–100, and so on. If you attach a % of X to the first, you’ll have to attach 100-X to the second. That’s what this mixing parameter means. You’ll see these hit and trial results later in the paper. After each roll out, you update your search tree with whatever information you gained during the simulation, so that your next simulation is more intelligent. And at the end of all simulations, you just pick the best move. Interesting insight here! Remember how the RL fine-tuned policy NN was better than just the SL human-trained policy NN? But when you put them within the MCTS algorithm of AlphaGo, using the human trained NN proved to be a better choice than the fine-tuned NN. But in the case of the value function (which you would remember uses a strong player to approximate a perfect player), training Foma using the RL policy works better than training her with the SL policy. “Doing all this evaluation takes a lot of computing power. We really had to bring out the big guns to be able to run these damn programs.” Self explanatory. “LOL, our program literally blew the pants off of every other program that came before us” This goes back to that “mixing parameter” again. While evaluating positions, giving equal importance to both the value function and the rollouts performed better than just using one of them. The rest is self explanatory, and reveals an interesting insight! Self explanatory. Self explanatory. But read that red underlined sentence again. I hope you can see clearly now that this line right here is pretty much the summary of what this whole research project was all about. Concluding paragraph. “Let us brag a little more here because we deserve it!” :) Oh and if you’re a scientist or tech company, and need some help in explaining your science to non-technical people for marketing, PR or training etc, I can help you. Drop me a message on Twitter: @mngrwl From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Engineer, teacher, learner of foreign languages, lover of history, cinema and art. Our community publishes stories worth reading on development, design, and data science. 
" Eugenio Culurciello,6.4K,8,https://towardsdatascience.com/the-fall-of-rnn-lstm-2d1594c74ce0?source=tag_archive---------3----------------,The fall of RNN / LSTM – Towards Data Science,"We fell for Recurrent neural networks (RNN), Long-short term memory (LSTM), and all their variants. Now it is time to drop them! It is the year 2014 and LSTM and RNN make a great come-back from the dead. We all read Colah’s blog and Karpathy’s ode to RNN. But we were all young and unexperienced. For a few years this was the way to solve sequence learning, sequence translation (seq2seq), which also resulted in amazing results in speech to text comprehension and the raise of Siri, Cortana, Google voice assistant, Alexa. Also let us not forget machine translation, which resulted in the ability to translate documents into different languages or neural machine translation, but also translate images into text, text into images, and captioning video, and ... well you got the idea. Then in the following years (2015–16) came ResNet and Attention. One could then better understand that LSTM were a clever bypass technique. Also attention showed that MLP network could be replaced by averaging networks influenced by a context vector. More on this later. It only took 2 more years, but today we can definitely say: But do not take our words for it, also see evidence that Attention based networks are used more and more by Google, Facebook, Salesforce, to name a few. All these companies have replaced RNN and variants for attention based models, and it is just the beginning. RNN have the days counted in all applications, because they require more resources to train and run than attention-based models. See this post for more info. Remember RNN and LSTM and derivatives use mainly sequential processing over time. See the horizontal arrow in the diagram below: This arrow means that long-term information has to sequentially travel through all cells before getting to the present processing cell. This means it can be easily corrupted by being multiplied many time by small numbers < 0. This is the cause of vanishing gradients. To the rescue, came the LSTM module, which today can be seen as multiple switch gates, and a bit like ResNet it can bypass units and thus remember for longer time steps. LSTM thus have a way to remove some of the vanishing gradients problems. But not all of it, as you can see from the figure above. Still we have a sequential path from older past cells to the current one. In fact the path is now even more complicated, because it has additive and forget branches attached to it. No question LSTM and GRU and derivatives are able to learn a lot of longer term information! See results here; but they can remember sequences of 100s, not 1000s or 10,000s or more. And one issue of RNN is that they are not hardware friendly. Let me explain: it takes a lot of resources we do not have to train these network fast. Also it takes much resources to run these model in the cloud, and given that the demand for speech-to-text is growing rapidly, the cloud is not scalable. We will need to process at the edge, right into the Amazon Echo! See note below for more details. If sequential processing is to be avoided, then we can find units that “look-ahead” or better “look-back”, since most of the time we deal with real-time causal data where we know the past and want to affect future decisions. Not so in translating sentences, or analyzing recorded videos, for example, where we have all data and can reason on it more time. 
Such “look-back” and “look-ahead” units are neural attention modules, which we previously explained here. To the rescue, combining multiple neural attention modules, comes the “hierarchical neural attention encoder”, shown in the figure below: A better way to look into the past is to use attention modules to summarize all past encoded vectors into a context vector Ct. Notice there is a hierarchy of attention modules here, very similar to the hierarchy of neural networks. This is also similar to the temporal convolutional network (TCN) reported in Note 3 below. In the hierarchical neural attention encoder, multiple layers of attention can each look at a small portion of the recent past, say 100 vectors, while layers above can look at 100 of these attention modules, effectively integrating the information of 100 x 100 vectors. This extends the ability of the hierarchical neural attention encoder to 10,000 past vectors. But more importantly, look at the length of the path needed to propagate a representation vector to the output of the network: in hierarchical networks it is proportional to log(N), where N is the number of hierarchy layers. This is in contrast to the T steps that an RNN needs to take, where T is the maximum length of the sequence to be remembered, and T >> N. This architecture is similar to a neural Turing machine, but lets the neural network decide what is read out from memory via attention. This means an actual neural network will decide which vectors from the past are important for future decisions. But what about storing to memory? The architecture above stores all the previous representations in memory, unlike neural Turing machines. This can be rather inefficient: think about storing the representation of every frame in a video — most of the time the representation vector does not change frame-to-frame, so we really are storing too much of the same! What we can do is add another unit to prevent correlated data from being stored, for example by not storing vectors too similar to previously stored ones. But this is really a hack; the best approach would be to let the application guide which vectors should be saved or not. This is the focus of current research studies. Stay tuned for more information. Tell your friends! It is very surprising to us to see so many companies still using RNN/LSTM for speech-to-text, many unaware that these networks are so inefficient and not scalable. Please tell them about this post. About training RNN/LSTM: RNNs and LSTMs are difficult to train because they require memory-bandwidth-bound computation, which is the worst nightmare for hardware designers and ultimately limits the applicability of neural network solutions. In short, LSTMs require 4 linear layers (MLP layers) per cell, evaluated at each sequence time-step. Linear layers require large amounts of memory bandwidth to be computed; in fact, they often cannot use many compute units because the system does not have enough memory bandwidth to feed them. And it is easy to add more computational units, but hard to add more memory bandwidth (not enough lines on a chip, long wires from processors to memory, etc.). As a result, RNN/LSTM and variants are not a good match for hardware acceleration, and we talked about this issue before here and here. A solution will be compute-in-memory devices like the ones we work on at FWDNXT. See this repository for a simple example of these techniques. Note 1: Hierarchical neural attention is similar to the ideas in WaveNet. 
But instead of a convolutional neural network we use hierarchical attention modules. Also: Hierarchical neural attention can be also bi-directional. Note 2: RNN and LSTM are memory-bandwidth limited problems (see this for details). The processing unit(s) need as much memory bandwidth as the number of operations/s they can provide, making it impossible to fully utilize them! The external bandwidth is never going to be enough, and a way to slightly ameliorate the problem is to use internal fast caches with high bandwidth. The best way is to use techniques that do not require large amount of parameters to be moved back and forth from memory, or that can be re-used for multiple computation per byte transferred (high arithmetic intensity). Note 3: Here is a paper comparing CNN to RNN. Temporal convolutional network (TCN) “outperform canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory”. Note 4: Related to this topic, is the fact that we know little of how our human brain learns and remembers sequences. “We often learn and recall long sequences in smaller segments, such as a phone number 858 534 22 30 memorized as four segments. Behavioral experiments suggest that humans and some animals employ this strategy of breaking down cognitive or behavioral sequences into chunks in a wide variety of tasks” — these chunks remind me of small convolutional or attention like networks on smaller sequences, that then are hierarchically strung together like in the hierarchical neural attention encoder and Temporal convolutional network (TCN). More studies make me think that working memory is similar to RNN networks that uses recurrent real neuron networks, and their capacity is very low. On the other hand both the cortex and hippocampus give us the ability to remember really long sequences of steps (like: where did I park my car at airport 5 days ago), suggesting that more parallel pathways may be involved to recall long sequences, where attention mechanism gate important chunks and force hops in parts of the sequence that is not relevant to the final goal or task. Note 5: The above evidence shows we do not read sequentially, in fact we interpret characters, words and sentences as a group. An attention-based or convolutional module perceives the sequence and projects a representation in our mind. We would not be misreading this if we processed this information sequentially! We would stop and notice the inconsistencies! I have almost 20 years of experience in neural networks in both hardware and software (a rare combination). See about me here: Medium, webpage, Scholar, LinkedIn, and more... If you found this article useful, please consider a donation to support more tutorials and blogs. Any contribution can make a difference! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. I dream and build new technology Sharing concepts, ideas, and codes. " Gary Marcus,1.3K,27,https://medium.com/@GaryMarcus/in-defense-of-skepticism-about-deep-learning-6e8bfd5ae0f1?source=tag_archive---------4----------------,In defense of skepticism about deep learning – Gary Marcus – Medium,"In a recent appraisal of deep learning (Marcus, 2018) I outlined ten challenges for deep learning, and suggested that deep learning by itself, although useful, was unlikely to lead on its own to artificial general intelligence. 
I suggested instead the deep learning be viewed “not as a universal solvent, but simply as one tool among many.” In place of pure deep learning, I called for hybrid models, that would incorporate not just supervised forms of deep learning, but also other techniques as well, such as symbol-manipulation, and unsupervised learning (itself possibly reconceptualized). I also urged the community to consider incorporating more innate structure into AI systems. Within a few days, thousands of people had weighed in over Twitter, some enthusiastic (“e.g, the best discussion of #DeepLearning and #AI I’ve read in many years”), some not (“Thoughtful... But mostly wrong nevertheless”). Because I think clarity around these issues is so important, I’ve compiled a list of fourteen commonly-asked queries. Where does unsupervised learning fit in? Why didn’t I say more nice things about deep learning? What gives me the right to talk about this stuff in the first place? What’s up with asking a neural network to generalize from even numbers to odd numbers? (Hint: that’s the most important one). And lots more. I haven’t addressed literally every question I have seen, but I have tried to be representative. 1. What is general intelligence? Thomas Dietterich, an eminent professor of machine learning, and my most thorough and explicit critic thus far, gave a nice answer that I am very comfortable with: 2. Marcus wasn’t very nice to deep learning. He should have said more nice things about all of its vast accomplishments. And he minimizes others. Dietterich, mentioned above, made both of these points, writing: On the first part of that, true, I could have said more positive things. But it’s not like I didn’t say any. Or even like I forgot to mention Dietterich’s best example; I mentioned it on the first page: More generally, later in the article I cited a couple of great texts and excellent blogs that have pointers to numerous examples. A lot of them though, would not really count as AGI, which was the main focus of my paper. (Google Translate, for example, is extremely impressive, but it’s not general; it can’t, for example, answer questions about what it has translated, the way a human translator could.) The second part is more substantive. Is 1,000 categories really very finite? Well, yes, compared to the flexibility of cognition. Cognitive scientists generally place the number of atomic concepts known by an individual as being on the order of 50,000, and we can easily compose those into a vastly greater number of complex thoughts. Pets and fish are probably counted in those 50,000; pet fish, which is something different, probably isn’t counted. And I can easily entertain the concept of “a pet fish that is suffering from Ick”, or note that “it is always disappointing to buy a pet fish only to discover that it was infected with Ick” (an experience that I had as a child and evidently still resent). How many ideas like that I can express? It’s a lot more than 1,000. I am not precisely sure how many visual categories a person can recognize, but suspect the math is roughly similar. Try google images on “pet fish”, and you do ok; try it on “pet fish wearing goggles” and you mostly find dogs wearing goggles, with a false alarm rate of over 80%. Machines win over nonexpert humans on distinguishing similar dog breeds, but people win, by a wide margin, on interpreting complex scenes, like what would happen to a skydiver who was wearing a backpack rather than a parachute. 
In focusing on 1,000 category chunks the machine learning field is, in my view, doing itself a disservice, trading a short-term feeling of success for a denial of harder, more open-ended problems (like scene and sentence comprehension) that must eventually be addressed. Compared to the essentially infinite range of sentences and scenes we can see and comprehend, 1000 of anything really is small. [See also Note 2 at bottom] 3. Marcus says deep learning is useless, but it’s great for many things Of course it is useful; I never said otherwise, only that (a) in its current supervised form, deep learning might be approaching its limits and (b) that those limits would stop short from full artificial general intelligence — unless, maybe, we started incorporating a bunch of other stuff like symbol-manipulation and innateness. The core of my conclusion was this: 4. “One thing that I don’t understand. — @GaryMarcus says that DL is not good for hierarchical structures. But in @ylecun nature review paper [says that] that DL is particularly suited for exploiting such hierarchies.” This is an astute question, from Ram Shankar, and I should have been a LOT clearer about the answer: there are many different types of hierarchy one could think about. Deep learning is really good, probably the best ever, at the sort of feature-wise hierarchy LeCun talked about, which I typically refer to as hierarchical feature detection; you build lines out of pixels, letters out of lines, words out of letters and so forth. Kurzweil and Hawkins have emphasized this sort of thing, too, and it really goes back to Hubel and Wiesel (1959)in neuroscience experiments and to Fukushima. (Fukushima, Miyake, & Ito, 1983) in AI. Fukushima, in his Neocognitron model, hand-wired his hierarchy of successively more abstract features; LeCun and many others after showed that (at least in some cases) you don’t have to hand engineer them. But you don’t have to keep track of the subcomponents you encounter along the way; the top-level system need not explicitly encode the structure of the overall output in terms of which parts were seen along the way; this is part of why a deep learning system can be fooled into thinking a pattern of a black and yellow stripes is a school bus. (Nguyen, Yosinski, & Clune, 2014). That stripe pattern is strongly correlated with activation of the school bus output units, which is in turn correlated with a bunch of lower-level features, but in a typical image-recognition deep network, there is no fully-realized representation of a school bus as being made up of wheels, a chassis, windows, etc. Virtually the whole spoofing literature can be thought of in these terms. [Note 3] The structural sense of hierarchy which I was discussing was different, and focused around systems that can make explicit reference to the parts of larger wholes. The classic illustration would be Chomsky’s sense of hierarchy, in which a sentence is composed of increasingly complex grammatical units (e.g., using a novel phrase like the man who mistook his hamburger for a hot dog with a larger sentence like The actress insisted that she would not be outdone by the man who mistook his hamburger for a hot dog). I don’t think deep learning does well here (e.g., in discerning the relation between the actress, the man, and the misidentified hot dog), though attempts have certainly been made. 
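To make the contrast concrete, here is a small, informal sketch of what "explicit reference to the parts of larger wholes" looks like: the sentence is stored as labelled constituents nested inside one another, so a program can walk the structure and say which clause attaches to which noun phrase. The bracketing is mine, chosen purely for illustration, not drawn from any particular grammar or model:

```python
sentence = ("S",
    ("NP", "the actress"),
    ("VP", "insisted that she would not be outdone by",
        ("NP", "the man",
            ("RelClause", "who mistook his hamburger for a hot dog"))))

def constituents(node, depth=0):
    """Walk the tree and list every labelled part together with its nesting depth."""
    label, *children = node
    yield depth, label, " ".join(c for c in children if isinstance(c, str))
    for child in children:
        if isinstance(child, tuple):
            yield from constituents(child, depth + 1)

for depth, label, text in constituents(sentence):
    print("  " * depth + f"{label}: {text}")
```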
Even in vision, the problem is not entirely licked; Hinton’s recent capsule work (Sabour, Frosst, & Hinton, 2017), for example, is an attempt to build in more robust part-whole directions for image recognition, by using more structured networks. I see this as a good trend, and one potential way to begin to address the spoofing problem, but also as a reflection of trouble with the standard deep learning approach. 5. “It’s weird to discuss deep learning in [the] context of general AI. General AI is not the goal of deep learning!” Best twitter response to this came from University of Quebec professor Daniel Lemire: “Oh! Come on! Hinton, Bengio... are openly going for a model of human intelligence.” Second prize goes to a math PhD at Google, Jeremy Kun, who countered the dubious claim that “General AI is not the goal of deep learning” with “If that’s true, then deep learning experts sure let everyone believe it is without correcting them.” Andrew Ng’s recent Harvard Business Review article, which I cited, implies that deep learning can do anything a person can do in a second. Thomas Dietterich’s tweet that said in part “it is hard to argue that there are limits to DL”. Jeremy Howard worried that the idea that deep learning is overhyped might itself be overhyped, and then suggested that every known limit had been countered. DeepMind’s recent AlphaGo paper [See Note 4] is positioned somewhat similarly, with Silver et al (Silver et al., 2017) enthusiastically reporting that: In that paper’s concluding discussion, not one of the 10 challenges to deep learning that I reviewed was mentioned. (As I will discuss in a paper coming out soon, it’s not actually a pure deep learning system, but that’s a story for another day.) The main reason people keep benchmarking their AI systems against humans is precisely because AGI is the goal. 6. What Marcus said is a problem with supervised learning, not deep learning. Yann LeCun presented a version of this, in a comment on my Facebook page: The part about my allegedly not recognizing LeCun’s recent work is, well, odd. It’s true that I couldn’t find a good summary article to cite (when I asked LeCun, he told me by email that there wasn’t one yet) but I did mention his interest explicitly: I also noted that: My conclusion was positive, too. Although I expressed reservations about current approaches to building unsupervised systems, I ended optimistically: What LeCun’s remark does get right is that many of the problems I addressed are a general problem with supervised learning, not something unique to deep learning; I could have been more clear about this. Many other supervised learning techniques face similar challenges, such as problems in generalization and dependence on massive data sets; relatively little of what I said is unique to deep learning. In my focus on assessing deep learning at the five year resurgence mark, I neglected to say that. But it doesn’t really help deep learning that other supervised learning techniques are in the same boat. If someone could come up with a truly impressive way of using deep learning in an unsupervised way, a reassessment might be required. But I don’t see that unsupervised learning, at least as it currently pursued, particularly remedies the challenges I raised, e.g., with respect to reasoning, hierarchical representations, transfer, robustness, and interpretability. It’s simply a promissory note. 
[Note 5] As Portland State and Santa Fe Institute Professor Melanie Mitchell’s put it in a thus far unanswered tweet: I would, too. In the meantime, I see no principled reason to believe that unsupervised learning can solve the problems I raise, unless we add in more abstract, symbolic representations, first. 7. Deep learning is not just convolutional networks [of the sort Marcus critiqued], it’s “essentially a new style of programming — ”differentiable programming” — and the field is trying to work out the reusable constructs in this style. We have some: convolution, pooling, LSTM, GAN, VAE, memory units, routing units, etc” — Tom Dietterich This seemed (in the context of Dietterich’s longer series of tweets) to have been proposed as a criticism, but I am puzzled by that, as I am a fan of differentiable programming and said so. Perhaps the point was that deep learning can be taken in a broader way. In any event, I would not equate deep learning and differentiable programming (e.g., approaches that I cited like neural Turing machines and neural programming). Deep learning is a component of many differentiable systems. But such systems also build in exactly the sort of elements drawn from symbol-manipulation that I am and have been urging the field to integrate (Marcus, 2001; Marcus, Marblestone, & Dean, 2014a; Marcus, Marblestone, & Dean, 2014b), including memory units and operations over variables, and other systems like routing units stressed in the more recent two essays. If integrating all this stuff into deep learning is what gets us to AGI, my conclusion, quoted below, will have turned out to be dead on: 8. Now vs the future. Maybe deep learning doesn’t work now, but it’s offspring will get us to AGI. Possibly. I do think that deep learning might play an important role in getting us to AGI, if some key things (many not yet discovered) are added in first. But what we add matters, and whether it is reasonable to call some future system an instance of deep learning per se, or more sensible to call the ultimate system “a such-and-such that uses deep learning”, depends on where deep learning fits into the ultimate solution. Maybe, for example, in truly adequate natural language understanding systems, symbol-manipulation will play an equally large role as deep learning, or an even larger one. Part of the issue here is of course terminological. A very good friend recently asked me, why can’t we just call anything that includes deep learning, deep learning, even if it includes symbol-manipulation? Some enhancement to deep learning ought to work. To which I respond: why not call anything that includes symbol-manipulation, symbol-manipulation, even if it includes deep learning? Gradient-based optimization should get its due, but so should symbol-manipulation, which as yet is the only known tool for systematically representing and achieving high-level abstraction, bedrock to virtually all of the world’s complex computer systems, from spreadsheets to programming environments to operating systems. Eventually, I conjecture, credit will also be due to the inevitable marriage between the two, hybrid systems that bring together the two great ideas of 20th century AI, symbol-processing and neural networks, both initially developed in the 1950s. Other new tools yet to be invented may be critical as well. To a true acolyte of deep learning, anything is deep learning, no matter what it’s incorporating, and no matter how different it might be from current techniques. (Viva Imperialism!) 
If you replaced every transistor in a classic symbolic microprocessor with a neuron, but kept the chip’s logic entirely unchanged, a true deep learning acolyte would still declare victory. But we won’t understand the principles driving (eventual) success if we lump everything together. [Note 6] 9. No machine can extrapolate. It’s not fair to expect a neural network to generalize from even numbers to odd numbers. Here’s a function, expressed over binary digits. f(110) = 011; f(100) = 001; f(010) = 010. What’s f(111)? If you are an ordinary human, you are probably going to guess 111. If you are neural network of the sort I discussed, you probably won’t. If you have been told many times that hidden layers in neural networks “abstract functions”, you should be a little bit surprised by this. If you are a human, you might think of the function as something like “reversal”, easily expressed in a line of computer code. If you are a neural network of a certain sort, it’s very hard to learn the abstraction of reversal in a way that extends from evens in that context to odds. But is that impossible? Certainly not if you have a prior notion of an integer. Try another, this time in decimal: f(4) = 8; f(6) = 12. What’s f(5)? None of my human readers would care that questions happens to require you to extrapolate from even numbers to odds; a lot of neural networks would be flummoxed. Sure, the function is undetermined by the sparse number of examples, like all functions, but it is interesting and important that most people would (amid the infinite range of a priori possible inductions), would alight on f(5)=10. And just as interesting that most standard multilayer perceptrons, representing the numbers as binary digits, wouldn’t. That’s telling us something, but many people in the neural network community, François Chollet being one very salient exception, don’t want to listen. Importantly, recognizing that a rule applies to any integer is roughly the same kind of generalization that allows one to recognize that a novel noun that can be used in one context can be used in a huge variety of other contexts. From the first time I hear the word blicket used as an object, I can guess that it will fit into a wide range of frames, like I thought I saw a blicket, I had a close encounter with a blicket, and exceptionally large blickets frighten me, etc. And I can both generate and interpret such sentences, without specific further training. It doesn’t matter whether blicket is or not similar in (for example) phonology to other words I have heard, nor whether I pile on the adjectives or use the word as a subject or an object. If most machine learning [ML] paradigms have a problem with this, we should have problem with most ML paradigms. Am I being “fair”? Well, yes, and no. It’s true that I am asking neural networks to do something that violates their assumptions. A neural network advocate might, for example, say, “hey wait a minute, in your reversal example, there are three dimensions in your input space, representing the left binary digit, the middle binary digit, and rightmost binary digit. 
The rightmost binary digit has only been a zero in the training; there is no way a network can know what to do when you get to one in that position.” For example, Vincent Lostenlan, a postdoc at Cornell, said Dietterich, made essentially the same point, more concisely: But although both are right about why odds-and-evens are (in this context) hard for deep learning, they are both wrong about the larger issues for three reasons. First, it can’t be that people can’t extrapolate. You just did, in two different examples, at the top of this section. Paraphrasing Chico Marx. who are you going to believe, me or your own eyes? To someone immersed deeply — perhaps too deeply — in contemporary machine learning, my odds-and-evens problem seems unfair because a certain dimension (the one which contains the value of 1 in the rightmost digit) hasn’t been illustrated in the training regime. But when you, a human, look at my examples above, you will not be stymied by this particular gap in the training data. You won’t even notice it, because your attention is on higher-level regularities. People routinely extrapolate in exactly the fashion that I have been describing, like recognizing string reversal from the three training examples I gave above. In a technical sense, that is extrapolation, and you just did it. In The Algebraic Mind I referred to this specific kind of extrapolation as generalizing universally quantified one-to-one mappings outside of a space of training examples. As a field we desperately need a solution to this challenge, if we are ever to catch up to human learning — even if it means shaking up our assumptions. Now, it might reasonably be objected that it’s not a fair fight: humans manifestly depend on prior knowledge when they generalize such mappings. (In some sense, Dieterrich proposed this objection later in his tweet stream.) True enough. But in a way, that’s the point: neural networks of a certain sort don’t have a good way of incorporating the right sort of prior knowledge in the place. It is precisely because those networks don’t have a way of incorporating prior knowledge like “many generalizations hold for all elements of unbounded classes” or “odd numbers leave a remainder of one when divided by two” that neural networks that lack operations over variables fail. The right sort of prior knowledge that would allow neural networks to acquire and represent universally quantified one-to-one mappings. Standard neural networks can’t represent such mappings, except in certain limited ways. (Convolution is a way of building in one particular such mapping, prior to learning). Second, saying that no current system (deep learning or otherwise) can extrapolate in the way that I have described is no excuse; once again other architectures may be in the choppy water, but that doesn’t mean we shouldn’t be trying to swim to shore. If we want to get to AGI, we have to solve the problem. (Put differently: yes, one could certainly hack together solutions to get deep learning to solve my specific number series problems, by, for example, playing games with the input encoding schemes; the real question, if we want to get to AGI, is how to have a system learn the sort of generalizations I am describing in a general way.) 
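For readers who prefer code, here is a compact restatement of the reversal example (my own framing, not an experiment from any paper): the training pairs never vary the rightmost bit, the one-line symbolic rule generalizes to the held-out query anyway, and a standard multilayer perceptron trained only on these pairs receives no signal at all about that dimension:

```python
# The training pairs all have a rightmost bit of 0 ("even" inputs); the query 111
# turns that bit on. This snippet only sets the task up; it does not train a network.
train = {"110": "011", "100": "001", "010": "010"}   # f = string reversal
query = "111"

def symbolic_rule(bits):
    """The human-style abstraction: reverse the string, whatever the digits are."""
    return bits[::-1]

assert all(symbolic_rule(x) == y for x, y in train.items())
print("symbolic rule on the held-out query:", symbolic_rule(query))   # -> 111

# The input dimension a learner never sees varying during training:
rightmost_values = {x[-1] for x in train}
print("rightmost input bit during training:", rightmost_values)       # -> {'0'}
```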
Third, the claim that no current system can extrapolate turns out to be, well, false; there are already ML systems that can extrapolate at least some functions of exactly the sort I described, and you probably own one: Microsoft Excel, its Flash Fill function in particular (Gulwani, 2011). Powered by a very different approach to machine learning, it can do certain kinds of extrapolation, albeit in a narrow context, by the bushel, e.g., try typing the (decimal) digits 1, 11, 21 in a series of rows and see if the system can extrapolate via Flash Fill to the eleventh item in the sequence (101). Spoiler alert, it can, in exactly the same way as you probably would, even though there were no positive examples in the training dimension of the hundreds digit. The systems learns from examples the function you want and extrapolates it. Piece of cake. Can any deep learning system do that with three training examples, even with a range of experience on other small counting functions, like 1, 3, 5, .... and 2, 4, 6 ....? Well maybe, but only the ones that are likely do so are likely to be hybrids that build in operations over variables, which are quite different from the sort of typical convolutional neural networks that most people associate with deep learning. Putting all this very differently, one crude way to think about where we are with most ML systems that we have today [Note 7] is that they just aren’t designed to think “outside the box”; they are designed to be awesome interpolators inside the box. That’s fine for some purposes, but not others. Humans are better at thinking outside boxes than contemporary AI; I don’t think anyone can seriously doubt that. But that kind of extrapolation, that Microsoft can do in a narrow context, but that no machine can do with human-like breadth, is precisely what machine learning engineers really ought to be working on, if they want to get to AGI. 10. Everybody in the field already knew this. There is nothing new here. Well, certainly not everybody; as noted, there were many critics who think we still don’t know the limits of deep learning, and others who believe that there might be some, but none yet discovered. That said, I never said that any of my points was entirely new; for virtually all, I cited other scholars, who had independently reached similar conclusions. 11. Marcus failed to cite X. Definitely true; the literature review was incomplete. One favorite among the papers I failed to cite is Shanahan’s Deep Symbolic Reinforcement (Garnelo, Arulkumaran, & Shanahan, 2016); I also can’t believe I forgot Richardson and Domingos’ (2006) Markov Logic Networks. I also wish I had cited Evans and Edward Grefenstette (2017), a great paper from DeepMind. And Smolensky’s tensor calculus work (Smolensky et al., 2016). And work on inductive programming in various forms (Gulwani et al., 2015) and probabilistic programming, too, by Noah Goodman (Goodman, Mansinghka, Roy, Bonawitz, & Tenenbaum, 2012) All seek to bring rules and networks close to together. And older stuff by pioneers like Jordan Pollack (Smolensky et al., 2016). And Forbus and Gentner’s (Falkenhainer, Forbus, & Gentner, 1989) and Hofstadter and Mitchell’s (1994) work on analogy; and many others. I am sure there is a lot more I could and should have cited. Overall, I tried to be representative rather than fully comprehensive, but I still could have done better. #chagrin. 12. Marcus has no standing in the field; he isn’t a practitioner; he is just a critic. 
Hesitant to raise this one, but it came up in all kinds of different responses, even from the mouths of certain well-known professionals. As Ram Shankar noted, “As a community, we must circumscribe our criticism to science and merit based arguments.” What really matters is not my credentials (which I believe do in fact qualify me to write) but the validity of the arguments. Either my arguments are correct, or they are not. [Still, for those who are curious, I supply an optional mini-history of some of my relevant credentials in Note 8 at the end.] 13. Re: hierarchy, what about Socher’s tree-RNNs? I have written to him, in hopes of having a better understanding of its current status. I’ve also privately pushed several other teams towards trying out tasks like Lake and Baroni (2017) presented. Pengfei et al (2017) offers some interesting discussion. 14. You could have been more critical of deep learning. Nobody quite said that, not in exactly those words, but a few came close, generally privately. One colleague for example pointed out that there may be some serious errors of future forecasting around The same colleague added Another colleague, ML researcher and author Pedro Domingos, pointed out still other shortcomings of current deep learning methods that I didn’t mention: Like other flexible supervised learning methods, deep learning systems can be unstable in the sense that slightly changing the training data may result in large changes in the resulting model. As Domingos notes, there’s no guarantee this sort of rise and decline won’t repeat itself. Neural networks have risen and fallen several times before, all the way back to Rosenblatt’s first Perceptron in 1957. We shouldn’t mistake cyclical enthusiasm for a complete solution to intelligence, which still seems (to me, anyway) to be decades away. If we want to reach AGI, we owe it to ourselves to be as keenly aware of challenges we face as we are of our successes. 2. There are other problems too in relying on these 1,000 image sets. For example, in reading a draft of this paper, Melanie Mitchell pointed me to important recent work by Loghmani and colleague (2017) on assessing how deep learning does in the real world. Quoting from the abstract, the paper “analyzes the transferability of deep representations from Web images to robotic data [in the wild]. Despite the promising results obtained with [representations developed from Web image], the experiments demonstrate that object classification with real-life robotic data is far from being solved.” 3. And that literature is growing fast. In late December there was a paper about fooling deep nets into mistaking a pair of skiers for a dog [https://arxiv.org/pdf/1712.09665.pdf] and another on a general-purpose tool for building real-world adversarial patches: https://arxiv.org/pdf/1712.09665.pdf. (See also https://arxiv.org/abs/1801.00634.) It’s frightening to think how vulnerable deep learning can be real-world contexts. And for that matter consider Filip Pieknewski’s blog on why photo-trained deep learning systems have trouble transferring what they have learned to line drawings, https://blog.piekniewski.info/2016/12/29/can-a-deep-net-see-a-cat/. Vision is not as solved as many people seem to think. 4. As I will explain in the forthcoming paper, AlphaGo is not actually a pure [deep] reinforcement learning system, although the quoted passage presented it as such. 
It's really more of a hybrid, with important components that are driven by symbol-manipulating algorithms, along with a well-engineered deep-learning component. 5. AlphaZero, by the way, isn't unsupervised; it's self-supervised, using self-play and simulation as a way of generating supervised data. I will have a lot more to say about that system in a forthcoming paper. 6. Consider, for example, Google Search, and how one might understand it. Google has recently added a deep learning algorithm, RankBrain, to the wide array of algorithms it uses for search. And Google Search certainly takes in data and knowledge and processes them hierarchically (which, according to Maher Ibrahim, is all you need to count as deep learning). But, realistically, deep learning is just one cue among many; the knowledge graph component, for example, is based instead primarily on classical AI notions of traversing ontologies. By any reasonable measure Google Search is a hybrid, with deep learning as just one strand among many. Calling Google Search as a whole "a deep learning system" would be grossly misleading, akin to relabeling carpentry "screwdrivery" just because screwdrivers happen to be involved. 7. Important exceptions include inductive logic programming, inductive function programming (the brains behind Microsoft's Flash Fill) and neural programming. All are making some progress here; some of these even include deep learning, but they also all include structured representations and operations over variables among their primitive operations; that's all I am asking for. 8. My AI experiments began in adolescence, with, among other things, a Latin-English translator that I coded in the programming language Logo. In graduate school, studying with Steven Pinker, I explored the relation between language acquisition, symbolic rules, and neural networks. (I also owe a debt to my undergraduate mentor Neil Stillings.) The child language data I gathered for my dissertation (Marcus et al., 1992) have been cited hundreds of times, and were the most frequently modeled data in the 1990s debate about neural networks and how children learned language. In the late 1990s I discovered some specific, replicable problems with multilayer perceptrons (Marcus, 1998a; Marcus, 1998b); based on those observations, I designed a widely cited experiment, published in Science (Marcus, Vijayan, Bandi Rao, & Vishton, 1999), that showed that young infants could extract algebraic rules, contra Jeff Elman's (1990) then-popular neural network. All of this culminated in a 2001 MIT Press book (Marcus, 2001), which lobbied for a variety of representational primitives, some of which have begun to pop up in recent neural networks; in particular, the use of operations over variables in the new field of differentiable programming (Daniluk, Rocktäschel, Welbl, & Riedel, 2017; Graves et al., 2016) owes something to the position outlined in that book. There was a strong emphasis on having memory records as well, which can be seen in the memory networks being developed, e.g., at Facebook (Bordes, Usunier, Chopra, & Weston, 2015).
The next decade saw me work on other problems, including innateness (Marcus, 2004), which I will discuss at length in the forthcoming piece about AlphaGo, and evolution (Marcus, 2004; Marcus, 2008). I eventually returned to AI and cognitive modeling, publishing a 2014 article on cortical computation in Science (Marcus, Marblestone, & Dean, 2014a) that also anticipates some of what is now happening in differentiable programming. More recently, I took a leave from academia to found and lead a machine learning company in 2014; by any reasonable measure that company was successful, acquired by Uber roughly two years after founding. As co-founder and CEO I put together a team of some of the very best machine learning talent in the world, including Zoubin Ghahramani, Jeff Clune, Noah Goodman, Ken Stanley and Jason Yosinski, and played a pivotal role in developing our core intellectual property and shaping our intellectual mission. (A patent is pending, co-written by Zoubin Ghahramani and myself.) Although much of what we did there remains confidential, now owned by Uber and not by me, I can say that a large part of our efforts was directed towards integrating deep learning with our own techniques, which gave me a great deal of familiarity with the joys and tribulations of TensorFlow and vanishing (and exploding) gradients. We aimed for state-of-the-art results (sometimes successfully, sometimes not) with sparse data, using hybridized deep learning systems on a daily basis. Bordes, A., Usunier, N., Chopra, S., & Weston, J. (2015). Large-scale Simple Question Answering with Memory Networks. arXiv. Daniluk, M., Rocktäschel, T., Welbl, J., & Riedel, S. (2017). Frustratingly Short Attention Spans in Neural Language Modeling. arXiv. Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179–211. Evans, R., & Grefenstette, E. (2017). Learning Explanatory Rules from Noisy Data. arXiv, cs.NE. Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989). The structure-mapping engine: Algorithm and examples. Artificial Intelligence, 41(1), 1–63. Fukushima, K., Miyake, S., & Ito, T. (1983). Neocognitron: A neural network model for a mechanism of visual pattern recognition. IEEE Transactions on Systems, Man, and Cybernetics, 5, 826–834. Garnelo, M., Arulkumaran, K., & Shanahan, M. (2016). Towards Deep Symbolic Reinforcement Learning. arXiv, cs.AI. Goodman, N., Mansinghka, V., Roy, D. M., Bonawitz, K., & Tenenbaum, J. B. (2012). Church: a language for generative models. arXiv preprint arXiv:1206.3255. Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A. et al. (2016). Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626), 471–476. Gulwani, S. (2011). Automating string processing in spreadsheets using input-output examples. ACM SIGPLAN Notices, 46(1), 317–330. Gulwani, S., Hernández-Orallo, J., Kitzelmann, E., Muggleton, S. H., Schmid, U., & Zorn, B. (2015). Inductive programming meets the real world. Communications of the ACM, 58(11), 90–99. Hofstadter, D. R., & Mitchell, M. (1994). The copycat project: A model of mental fluidity and analogy-making. Advances in Connectionist and Neural Computation Theory, 2, 31–112. Hosseini, H., Xiao, B., Jaiswal, M., & Poovendran, R. (2017). On the Limitation of Convolutional Neural Networks in Recognizing Negative Images. arXiv, cs.CV. Hubel, D. H., & Wiesel, T. N. (1959). Receptive fields of single neurones in the cat's striate cortex.
The Journal of Physiology, 148(3), 574–591. Lake, B. M., & Baroni, M. (2017). Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks. arXiv. Loghmani, M. R., Caputo, B., & Vincze, M. (2017). Recognizing Objects In-the-wild: Where Do We Stand? arXiv, cs.RO. Marcus, G. F. (1998a). Rethinking eliminative connectionism. Cognitive Psychology, 37(3), 243–282. Marcus, G. F. (1998b). Can connectionism save constructivism? Cognition, 66(2), 153–182. Marcus, G. F. (2001). The Algebraic Mind: Integrating Connectionism and Cognitive Science. Cambridge, MA: MIT Press. Marcus, G. F. (2004). The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought. Basic Books. Marcus, G. F. (2008). Kluge: The Haphazard Construction of the Human Mind. Boston: Houghton Mifflin. Marcus, G. (2018). Deep Learning: A Critical Appraisal. arXiv. Marcus, G. F., Marblestone, A., & Dean, T. (2014a). The atoms of neural computation. Science, 346(6209), 551–552. Marcus, G. F., Marblestone, A. H., & Dean, T. L. (2014b). Frequently Asked Questions for: The Atoms of Neural Computation. bioRxiv (arXiv), q-bio.NC. Marcus, G. F., Pinker, S., Ullman, M., Hollander, M., Rosen, T. J., & Xu, F. (1992). Overregularization in language acquisition. Monographs of the Society for Research in Child Development, 57(4), 1–182. Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Rule learning by seven-month-old infants. Science, 283(5398), 77–80. Nguyen, A., Yosinski, J., & Clune, J. (2014). Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images. arXiv, cs.CV. Pengfei, L., Xipeng, Q., & Xuanjing, H. (2017). Dynamic Compositional Neural Networks over Tree Structure. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17). Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?": Explaining the Predictions of Any Classifier. arXiv, cs.LG. Richardson, M., & Domingos, P. (2006). Markov logic networks. Machine Learning, 62(1), 107–136. Sabour, S., Frosst, N., & Hinton, G. E. (2017). Dynamic Routing Between Capsules. arXiv, cs.CV. Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A. et al. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354–359. Smolensky, P., Lee, M., He, X., Yih, W.-t., Gao, J., & Deng, L. (2016). Basic Reasoning with Tensor Product Representations. arXiv, cs.AI. CEO & Founder, Geometric Intelligence (acquired by Uber). Professor of Psychology and Neural Science, NYU. Freelancer for The New Yorker & New York Times. " Bargava,11.8K,3,https://towardsdatascience.com/how-to-learn-deep-learning-in-6-months-e45e40ef7d48?source=tag_archive---------5----------------,How to learn Deep Learning in 6 months – Towards Data Science,"It is quite possible to learn, follow and contribute to state-of-the-art work in deep learning in about 6 months' time. This article details the steps to achieve that. Pre-requisites - You are willing to spend 10–20 hours per week for the next 6 months- You have some programming skills. You should be comfortable picking up Python along the way. And cloud.
(No background in Python and cloud assumed).- Some math education in the past (algebra, geometry etc). - Access to internet and computer. Step 1 We learn driving a car — by driving. Not by learning how the clutch and the internal combustion engine work. Atleast not initially. When learning deep learning, we will follow the same top-down approach. Do the fast.ai course — Practical Deep Learning for Coders — Part 1. This takes about 4–6 weeks of effort. This course has a session on running the code on cloud. Google Colaboratory has free GPU access. Start with that. Other options include Paperspace, AWS, GCP, Crestle and Floydhub. All of these are great. Do not start to build your own machine. Atleast not yet. Step 2 This is the time to know some of the basics. Learn about calculus and linear algebra. For calculus, Big Picture of Calculus provides a good overview. For Linear Algebra, Gilbert Strang’s MIT course on OpenCourseWare is amazing. Once you finish the above two, read the Matrix Calculus for Deep Learning. Step 3 Now is the time to understand the bottom-up approach to deep learning. Do all the 5 courses in the deep learning specialisation in Coursera. You need to pay to get the assignments graded. But the effort is truly worth it. Ideally, given the background you have gained so far, you should be able to complete one course every week. Step 4 Do a capstone project. This is the time where you delve deep into a deep learning library(eg: Tensorflow, PyTorch, MXNet) and implement an architecture from scratch for a problem of your liking. The first three steps are about understanding how and where to use deep learning and gaining a solid foundation. This step is all about implementing a project from scratch and developing a strong foundation on the tools. Step 5 Now go and do fast.ai’s part II course — Cutting Edge Deep Learning for Coders. This covers more advanced topics and you will learn to read the latest research papers and make sense out of them. Each of the steps should take about 4–6 weeks’ time. And in about 26 weeks since the time you started, and if you followed all of the above religiously, you will have a solid foundation in deep learning. Where to go next? Do the Stanford’s CS231n and CS224d courses. These two are amazing courses with great depth for vision and NLP respectively. They cover the latest state-of-art. And read the deep learning book. This will solidify your understanding. Happy deep learning. Create every single day. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @ http://impel.io/ . Currently building a personalization engine http://www.recotap.com/. Data Science Trainer and Mentor. Sharing concepts, ideas, and codes. " Seth Weidman,2.8K,11,https://hackernoon.com/the-3-tricks-that-made-alphago-zero-work-f3d47b6686ef?source=tag_archive---------6----------------,The 3 Tricks That Made AlphaGo Zero Work – Hacker Noon,"There were many advances in Deep Learning and AI in 2017, but few generated as much publicity and interest as DeepMind’s AlphaGo Zero. This program was truly a shocking breakthrough: not only did it beat the prior version of AlphaGo — the program that beat 17 time world champion Lee Sedol just a year and a half earlier — 100–0, it was trained without any data from real human games. Xavier Amatrain called it “more [significant] than anything...in the last 5 years” in Machine Learning. So how did DeepMind do it? 
In this essay, I'll try to give an intuitive idea of the techniques AlphaGo Zero used, what made them work, and what the implications for future AI research are. Let's start with the general approach that both AlphaGo and AlphaGo Zero took to playing Go. Both AlphaGo and AlphaGo Zero evaluated the Go board and chose moves using a combination of two methods: a brute-force "lookahead" search and a neural network-based "intuition." AlphaGo and AlphaGo Zero both worked by cleverly combining these two methods. Let's look at each one in turn: Go is a sufficiently complex game that computers can't simply search all possible moves using a brute force approach to find the best one (indeed, they can't even come close). The best Go programs prior to AlphaGo overcame this by using "Monte Carlo Tree Search" or MCTS. At a high level, this method involves initially exploring many possible moves on the board, and then focusing this exploration over time as certain moves are found to be more likely to lead to wins than others. Both AlphaGo and AlphaGo Zero use a relatively straightforward version of MCTS for their "lookahead", simply using many of the best practices listed in the Monte Carlo Tree Search Wikipedia page to properly manage the tradeoff between exploring new sequences of moves and more deeply exploring already-explored sequences (for more, see the details in the "Search" section under "Methods" in the original AlphaGo paper published in Nature). Though MCTS had been the core of all successful Go programs prior to AlphaGo, it was DeepMind's clever combination of this technique with a neural network-based "intuition" that allowed it to surpass human performance. DeepMind's major innovation with AlphaGo was to use deep neural networks to understand the state of the game, and then use this understanding to intelligently guide the search of the MCTS. More specifically: they trained networks that could look at the current state of the board. Given this information, the neural networks could recommend which moves looked promising and how likely the current player was to win. How did DeepMind train neural networks to do this? Here, AlphaGo and AlphaGo Zero used very different approaches; we'll start with AlphaGo's: AlphaGo had two separately trained neural networks. DeepMind then combined these two neural networks with MCTS — that is, the program's "intuition" with its brute force "lookahead" search — in a very clever way: it used the network that had been trained to predict moves to guide which branches of the game tree to search and used the network that had been trained to predict whether a position was "winning" to evaluate the positions it encountered during its search. This allowed AlphaGo to intelligently search upcoming moves and ultimately allowed it to beat Lee Sedol.
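In code, the way a network's move predictions can steer the tree search is often written as a PUCT-style selection rule, sketched below. This is a simplified, hypothetical sketch: the constant, the field names and the exact formula are illustrative, not DeepMind's implementation.

```python
import math

# Each legal move from the current position keeps a prior probability from the
# network, a visit count, and the sum of values seen in simulations through it.
# The move chosen for further exploration balances its average value so far (q)
# against an exploration bonus (u) weighted by the network's prior.
def select_move(children, c_puct=1.5):
    total_visits = sum(child["visits"] for child in children)
    def score(child):
        q = child["value_sum"] / child["visits"] if child["visits"] else 0.0
        u = c_puct * child["prior"] * math.sqrt(total_visits + 1) / (1 + child["visits"])
        return q + u
    return max(range(len(children)), key=lambda i: score(children[i]))

children = [
    {"prior": 0.6, "visits": 10, "value_sum": 4.0},
    {"prior": 0.3, "visits": 2, "value_sum": 1.5},
    {"prior": 0.1, "visits": 0, "value_sum": 0.0},
]
print(select_move(children))  # index of the move to explore next
```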
AlphaGo Zero, however, took this to a whole new level. At a high level, AlphaGo Zero works the same way as AlphaGo: specifically, it plays Go by using MCTS-based lookahead search, intelligently guided by a neural network. However, AlphaGo Zero's neural network — its "intuition" — was trained completely differently from that of AlphaGo: Let's say you have a neural network that is attempting to "understand" the game of Go: that is, for every board position, it is using a deep neural network to generate evaluations of what the best moves are. What DeepMind realized is that no matter how intelligent this neural network is — whether it is completely clueless or a Go master — its evaluations can always be made better by MCTS. Fundamentally, MCTS performs the kind of lookahead search that we would imagine a human master would perform if given enough time: it intelligently guesses which variations — sequences of future moves — are most promising, simulates those variations, evaluates how good they actually are, and updates its assessments of its current best moves accordingly. An illustration of this is below. Suppose we have a neural network that is reading the board and determining that a given move results in a game being even, with an evaluation of 0.0. Then, the network intelligently looks ahead a few moves and finds a sequence of moves that can be forced from the current position that ends up resulting in an evaluation of 0.5. It can then update its evaluation of the current board position to reflect that it leads to a more favorable position down the road. This lookahead search, therefore, can always give us improved data on how good the various moves in the position the neural network is evaluating actually are. This is true whether our neural network is playing at an amateur level or an expert level: we can always generate improved evaluations for it by looking ahead and seeing which of its current options actually lead to better positions. In addition, just as in AlphaGo, we would also want our neural network to learn which moves are likely to lead to wins. So, also as before, our agent — using its MCTS-improved evaluations and the current state of its neural network — could play games against itself, winning some and losing others. This data, generated purely via lookahead and self-play, is what DeepMind used to train AlphaGo Zero. Much was made of the fact that no games between humans were used to train AlphaGo Zero, and this first "trick" was the reason why: for a given state of a Go agent, it can always be made smarter by performing MCTS-based lookahead and using the results of that lookahead to improve the agent. This is how AlphaGo Zero was able to continuously improve, from when it was an amateur all the way up to when it was better than the best human players. The second trick was a novel neural network structure that I'll call the "Two Headed Monster": AlphaGo Zero used a single "two-headed" neural network architecture. Its first 20 layers or so were layer "blocks" of a type often seen in modern neural net architectures. These layers were followed by two "heads": one head that took the output of the first 20 layers and produced probabilities of the Go agent making certain moves, and another that took the output of the first 20 layers and outputted a probability of the current player winning. This is quite unusual. In almost all applications, neural networks output a single, fixed output — such as the probability of an image containing a dog, or a vector containing the probabilities of an image containing one of 10 types of objects. How can a net learn if it is receiving two sets of signals: one on how good its evaluations of the board are, and another on how good the specific moves it is selecting are? The answer is simple: remember that neural networks are fundamentally just mathematical functions with a bunch of parameters that determine the predictions that they make; we "teach" them by repeatedly showing them "correct answers" and having them update their parameters so the answers they produce more closely match these correct answers.
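Concretely, a shared body with two heads can be wired up as in the toy Keras sketch below. The layer sizes are illustrative and far smaller than AlphaGo Zero's roughly 20-block residual tower; the 19x19x17 input shape follows the board encoding used in the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

# "Hard parameter sharing": one shared convolutional body feeding two heads,
# a policy head over board moves and a value head predicting the winner.
board = keras.Input(shape=(19, 19, 17))
x = layers.Conv2D(64, 3, padding="same", activation="relu")(board)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)   # shared body

p = layers.Conv2D(2, 1, activation="relu")(x)
p = layers.Flatten()(p)
policy = layers.Dense(19 * 19 + 1, activation="softmax", name="policy")(p)  # head #1

v = layers.Conv2D(1, 1, activation="relu")(x)
v = layers.Flatten()(v)
v = layers.Dense(64, activation="relu")(v)
value = layers.Dense(1, activation="tanh", name="value")(v)                 # head #2

model = keras.Model(board, [policy, value])
model.compile(optimizer="adam",
              loss={"policy": "categorical_crossentropy", "value": "mse"})
```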
So, when we use the two headed neural net to make a prediction using Head #1, we simply update the parameters that led to making that prediction, namely the parameters in the “Body” and in “Head #1”. Similarly, when we make a prediction using Head #2, we update the parameters in the “Body” and in “Head #2”. This is how DeepMind trained its single, “two-headed” neural network that it used to guide MCTS during its search, just as AlphaGo did with two separate neural networks. This trick accounted for half of AlphaGo Zero’s increase in playing strength over AlphaGo. (this trick is known more technically as Multi-Task Learning with Hard Parameter Sharing. Sebastian Ruder has a great overview here). The other half of the increase in playing strength simply came from bringing the neural network architecture up-to-date with the latest advances in the field: AlphaGo Zero used a more “cutting edge” neural network architecture than AlphaGo. Specifically, they used a “residual” neural network architecture instead of a purely “convolutional” architecture. Residual nets were pioneered by Microsoft Research in late 2015, right around the time work on the first version of AlphaGo would have wrapped up, so it both understandable that DeepMind did not use them in the original AlphaGo program. Interestingly, as the chart below shows, each of these two neural network-related tricks — switching from convolutional to residual architecture and using the “Two Headed Monster” neural network architecture instead of separate neural networks — would have resulted in about half of the increase in playing strength as was achieved when both were combined. These three tricks are what enabled AlphaGo Zero to achieve its incredible performance that blew away even Alpha Go: It is worth noting that AlphaGo did not use any classical or even “cutting edge” reinforcement learning concepts — no Deep Q Learning, Asynchronous Actor-Critic Agents, or anything else we typically associate with reinforcement learning. It simply used simulations to generate training data for its neural nets to then learn from in a supervised fashion. Denny Britz sums this idea up well in this Tweet from just after when the AlphaGo Zero paper was released: Here’s a “step-by-step” timeline of how AlphaGo Zero was trained: 3. As these self-play games are happening, sample 2,048 positions from the most recent 500,000 games, along with whether the game was won or lost. For each move, record both A) the results of the MCTS evaluations of those positions — how “good” the various moves in these positions were based on lookahead — and B) whether the current player won or lost the game. 4. Train the neural network, using both A) the move evaluations produced by the MCTS lookahead search and B) whether the current player won or lost. 5. Finally, every 1,000 iterations of steps 3–4, evaluate the current neural network against the previous best version; if it wins at least 55% of the games, begin using it to generate self-play games instead of the prior version. Repeat steps 3–4 700,000 times, while the self-play games are continuously being played — after three days, you’ll have yourself an AlphaGo Zero! There are many implications of DeepMind’s incredible achievement for the future of AI research. 
Here are a couple of key ones: First, the fact that self-play data generated from simulations was “good enough” to be able to train the network suggests that simulated self-play data can train agents to surpass human performance in extremely complex tasks, even starting completely from scratch — data generated from human experts may not be needed. Second, the “Two Headed Monster” trick seems to significantly help agents learn to perform several related tasks in many domains, since it seems to prevent the agents from overfitting their behavior to any individual task. DeepMind seems to really like this trick, and has used it and more advanced versions of it to build agents that can learn multiple tasks in several different domains. Many projects in robotics, especially the burgeoning field of using simulations to teach robotic agents to use their limbs to accomplish tasks, are using these two tricks to great effect. Pieter Abbeel’s recent NIPS keynote highlights many impressive new results that use these tricks along with many bleeding edge reinforcement learning techniques. Indeed, locomotion seems like a perfect use case for the “Two Headed Monster” trick in particular: for example, robotic agents could be simultaneously trained to hit a baseball using a bat and to throw a punch to hit a moving target, since the two tasks require learning some common skills (e.g. balance, torso rotation). DeepMind’s AlphaGo Zero was one of the most intriguing advancements in AI and Deep Learning in 2017. I can’t wait to see what 2018 brings! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Senior Data Scientist at @thisismetis. I write about the intersection of Data Science, business, education, and society. how hackers start their afternoons. " Gabriel Aldamiz...,5.1K,11,https://hackernoon.com/how-we-grew-from-0-to-4-million-women-on-our-fashion-app-with-a-vertical-machine-learning-approach-f8b7fc0a89d7?source=tag_archive---------7----------------,"How we grew from 0 to 4 million women on our fashion app, with a vertical machine learning approach","Three years ago we launched Chicisimo, our goal was to offer automated outfit advice. Today, with over 4 million women on the app, we want to share how our data and machine learning approach helped us grow. It’s been chaotic but it is now under control. If we wanted to build a human-level tool to offer automated outfit advice, we needed to understand people’s fashion taste. A friend can give us outfit advice because after seeing what we normally wear, she’s learnt our style. How could we build a system that learns fashion taste? We had previous experience with taste-based projects and a background in machine learning applied to music and other sectors. We saw how a collaborative filtering tool transformed the music industry from blindness to totally understanding people (check out the Audioscrobbler story). It also made life better for those who love music, and created several unicorns along the way. With this background, we built the following thesis: online fashion will be transformed by a tool that understands taste. Because if you understand taste, you can delight people with relevant content and a meaningful experience. We also thought that “outfits” were the asset that would allow taste to be understood, to learn what people wear or have in their closet, and what style each of us like. We decided we were going to build that tool to understand taste. 
We focused on developing the correct dataset, and built two assets: our mobile app and our data platform. From previous experience building mobile products, even in Symbian back then, we knew it was easy to bring people to an app but difficult to retain them. So we focused on small iterations to learn as fast as possible. We launched an extremely early alpha of Chicisimo with one key functionality. We launched under another name and in another country. You couldn’t even upload photos... but it allowed us to iterate with real data and get a lot of qualitative input. At some point, we launched the real Chicisimo, and removed this alpha from the App Store. We spent a long time trying to understand what our true levers of retention were, and what algorithms we needed in order to match content and people. Three things helped with retention: (a) identify retention levers using behavioral cohorts (we use Mixpanel for this). We run cohorts not only over the actions that people performed, but also over the value they received. This was hard to conceptualize for an app such as Chicisimo*. We thought in terms of what specific and measurable value people received, measured it, and run cohorts over those events, and then we were able to iterate over value received, not only over actions people performed. We also defined and removed anti-levers (all those noisy things that distract from the main value) and got all the relevant metrics for different time periods: first session, first day, first week, etc. These super specific metrics allowed us to iterate (*Nir Eyal’s book Hooked: How to Build Habit-Forming Products discusses a framework to create habits that helped us build our model); (b) re-think the onboarding process, once we knew the levers of retention. We define it as the process by which new signups find the value of the app as soon as possible, and before we lose them. We clearly articulated to ourselves what needed to happen (what and when). It went something like this: If people don’t do [action] during their first 7 minutes in their first session, they will not come back. So we need to change the experience to make that happen. We also run tons of user-tests with different types of people, and observed how they perceived (or mostly didn’t) the retention lever; (c) define how we learn. The data approach described above is key, but there is much more than data when building a product people love. In our case, first of all, we think that the what-to-wear problem is a very important one to solve, and we truly respect it. We obsess over understanding the problem, and over understanding how our solution is helping, or not. It’s our way of showing respect. This leads me to one of the most surprising aspects IMO of building a product: the fact that, regularly, we access new corpuses of knowledge that we did not have before, which help us improve the product significantly. When we’ve obtained these game-changing learnings, it’s always been by focusing on two aspects: how people relate to the problem, and how people relate to the product (the red arrows in the image below). There are a million subtleties that happen in these two relations, and we are building Chicisimo by trying to understand them. Now, we know that at any point there is something important that we don’t know and therefore the question always is: how can we learn... sooner? Talking with one of my colleagues, she once told me, “this is not about data, this is about people”. 
And the truth is, from day one we’ve learnt significantly by having conversations with women about how they relate with the problem, and with solutions. We use several mechanisms: having face to face conversations, reading the emails we get from women without predefined questions, or asking for feedback around specific topics (we now use Typeform and its a great tool for product insight). And then we talk among ourselves and try to articulate the learnings. We also seek external references: we talk with other product people, we play with inspiring apps, and we re-read articles that help us think. This process is what allows us to learn, and then build product and develop technology. At some point, we were lucky to get noticed by the App Store team, and we’ve been featured as App of the Day throughout the world (view Apple’s description of Chicisimo, here). On December 31st, Chicisimo was featured in a summary of apps the App Store team did, we are the pink “C.” in the left image below 😀. The app got viewed by 957,437 uniques thanks to this feature, for a total of 1.3M times. In our case, app features have a 0,5% conversion rate from impression to app install (normally: impression > product page view > install); ASO has a 3% conversion, and referrers 45%. The app aims at understanding taste so we can do a better job at suggesting outfit ideas. The simple act of delivering the right content at the right time can absolutely wow people, although it is an extremely difficult utility to build. Chicisimo content is 100% user-generated, and this poses some challenges: the system needs to classify different types of content automatically, build the right incentives, and understand how to match content and needs. We soon saw that there was a lot of data coming in. After thinking “hey, how cool we are, look at all this data we have”, we realized it was actually a nightmare because, being chaotic, the data wasn’t actionable. This wasn’t cool at all. But then we decided to start giving some structure to parts of the data, and we ended inventing what we called the Social Fashion Graph. The graph is a compact representation of how needs, outfits and people interrelate, a concept that helped us build the data platform. The data platform creates a high-quality dataset linked to a learning and training world, our app, which therefore improves with each new expression of taste. We thought of outfits as playlists: an outfit is a combination of items that makes sense to consume together. Using collaborative filtering, the relations captured here allow us to offer recommendations in different areas of the app. There was still a lot of noise in the data, and one of the hardest things was to understand how people were expressing the same fashion need in different ways, which made matching content and needs even more difficult. Lots of people might need ideas to go to school, and express that specific need in a hundred different ways. How do you capture this diversity, and how do you provide structure to it? We built a system to collect concepts (we call them needs) and captured equivalences among different ways to express the same need. We ended up building a list of the world’s what-to-wear needs, which we call our ontology. This really cleaned up the dataset and helped us understand what we had. This understanding led to better product decisions. 
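As a toy illustration of the outfits-as-playlists idea above, here is a minimal item-item collaborative filtering sketch on made-up interaction data; it is illustrative only, not Chicisimo's actual recommender.

```python
import numpy as np

# Rows are users, columns are outfits; a 1 means the user saved that outfit.
interactions = np.array([
    [1, 1, 0, 0],   # user 0 saved outfits 0 and 1
    [1, 0, 1, 0],   # user 1 saved outfits 0 and 2
    [0, 1, 1, 1],   # user 2 saved outfits 1, 2 and 3
], dtype=float)

# Cosine similarity between outfit columns.
norms = np.linalg.norm(interactions, axis=0)
similarity = (interactions.T @ interactions) / (np.outer(norms, norms) + 1e-9)
np.fill_diagonal(similarity, 0.0)

def recommend(user_row, k=2):
    scores = similarity @ user_row      # outfits similar to what the user saved
    scores[user_row > 0] = -np.inf      # don't re-recommend saved outfits
    return np.argsort(scores)[::-1][:k]

print(recommend(interactions[0]))       # suggestions for user 0
```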
We now understand that an outfit, a need or a person, can have a lot of understandable data attached, if you allow people to express freely (the app) while having the right system behind (the platform). Structuring data gave us control, while encouraging unstructured data gave us knowledge and flexibility. The end result is our current system. A system that learns the meaning of an outfit, how to respond to a need, or the taste of an individual. And I wouldn’t even dare saying that this is Day 1 for us. Screenshot of an internal tool. The amount of work we have in front of us is immense, but we feel things are now under control. One of the new areas we’ve been working on is adding a fourth element to the Social Fashion Graph: shoppable products. A system to match outfits to products automatically, and to help people decide what to buy next. This is pretty exciting. Back when we built recommender systems for music and other products, it was pretty easy (that’s what we think now, we obviously didn’t think that at the time:). First, it was easy to capture that you liked a given song. Then, it was easy to capture the sequence in which you and others would listen to that song, and therefore you could capture the correlations. With this data, you could do a lot. However, as we soon found out, fashion has its own challenges. There is not an easy way to match an outfit to a shoppable product (think about most garments in your wardrobe, most likely you won’t find a link to view/buy those garments online, something you can do for many other products you have at home). Another challenge: the industry is not capturing how people describe clothes or outfits, so there is a strong disconnect between many ecommerces and its shoppers (we think we’ve solved that problem. Also Similar.ai and Twiggle are working on it). Another challenge: style is complex to capture and classify by a machine. Now, deep learning brings a new tool to add to other mechanisms, and changes everything. Owning the correct data set allows us to focus on the specific narrow use cases related to outfit recommendations, and to focus on delivering value through the algorithms instead of spending time collecting and cleaning data. 👉 Now comes the fun and rewarding part, so please email us if you want to join the team and help build algorithms that have real impact on people — we are 100% remote, Slack based 👈 -😂😂😉 😉 😉. People’s very personal style can become as actionable as metadata and possibly as transparent as well (?), and I think we can see the path to get there. As we have a consumer product that people already love, we can ship early results of these algorithms partially hidden, and increase their presence as feedback improves results. There are more and more researchers working of these areas, you can read Tangseng’s paper on recommending outfits from personal closet or clothing parsing project, or how Edgar Simo-Serra defines similarity between images using user-provided metadata. Outfits are a key asset in the race to capture the $123 billion US apparel market. Data is also the reason many players are taking outfits to the forefront of technology: outfits are a daily habit, and have proven to be great assets to attract and retain shoppers, and capture their data. Many players are introducing a Shop the Look section with outfits from real people: Amazon, Zalando or Google are a few examples. Google recently introduced a new feature called Style Ideas showing how a “product can be worn in real life”. 
Same month Amazon launched its Alexa Echo Look to help you with your outfit, and Alibaba’s artificial intelligence personal stylist helped them achieve record sales during Singles Day. Some people think that fashion data is in the same place as music data was in 2003: ready to play a very relevant role. The good news is: the daily habit of deciding what to wear will not change. The need to buy new clothes won’t disappear, either. So, what do you think? Where will we be 10 years from now? Will taste data build unique online experiences? What role will outfits play? How will machine learning change fashion ecommerce? Will everything change, 10 years from now? We are a small team of eight, four on product and four engineers. We believe in focusing on our very specific problem, no one on earth can understand the problem better than us. We also believe on building the complete solution ourselves while doing as few things as possible. We work 100% remote and live in Slack + GitHub. You can learn more about our machine learning approach, here. If you are a deep learning engineer or a product manager in the fashion space, and want to chat & temporarily access our Social Fashion Graph, please email us describing your work. You can also download our iOS and Android apps, or simply say hi: hi at chicisimo.com. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder & CEO at @chicisimo. Machine learning to automate outfit advise. World’s largest outfits app how hackers start their afternoons. " Sarthak Jain,3.9K,10,https://medium.com/nanonets/how-to-easily-detect-objects-with-deep-learning-on-raspberrypi-225f29635c74?source=tag_archive---------8----------------,How to easily Detect Objects with Deep Learning on Raspberry Pi,"Disclaimer: I’m building nanonets.com to help build ML with less data and no hardware The raspberry pi is a neat piece of hardware that has captured the hearts of a generation with ~15M devices sold, with hackers building even cooler projects on it. Given the popularity of Deep Learning and the Raspberry Pi Camera we thought it would be nice if we could detect any object using Deep Learning on the Pi. Now you will be able to detect a photobomber in your selfie, someone entering Harambe’s cage, where someone kept the Sriracha or an Amazon delivery guy entering your house. 20M years of evolution have made human vision fairly evolved. The human brain has 30% of it’s Neurons work on processing vision (as compared with 8 percent for touch and just 3 percent for hearing). Humans have two major advantages when compared with machines. One is stereoscopic vision, the second is an almost infinite supply of training data (an infant of 5 years has had approximately 2.7B Images sampled at 30fps). To mimic human level performance scientists broke down the visual perception task into four different categories. Object detection has been good enough for a variety of applications (even though image segmentation is a much more precise result, it suffers from the complexity of creating training data. It typically takes a human annotator 12x more time to segment an image than draw bounding boxes; this is more anecdotal and lacks a source). Also, after detecting objects, it is separately possible to segment the object from the bounding box. Object detection is of significant practical importance and has been used across a variety of industries. Some of the examples are mentioned below: Object Detection can be used to answer a variety of questions. 
These are the broad categories: There are a variety of models/architectures that are used for object detection. Each with trade-offs between speed, size, and accuracy. We picked one of the most popular ones: YOLO (You only look once). and have shown how it works below in under 20 lines of code (if you ignore the comments). Note: This is pseudo code, not intended to be a working example. It has a black box which is the CNN part of it which is fairly standard and shown in the image below. You can read the full paper here: https://pjreddie.com/media/files/papers/yolo_1.pdf For this task, you probably need a few 100 Images per Object. Try to capture data as close to the data you’re going to finally make predictions on. Draw bounding boxes on the images. You can use a tool like labelImg. You will typically need a few people who will be working on annotating your images. This is a fairly intensive and time consuming task. You can read more about this at medium.com/nanonets/nanonets-how-to-use-deep-learning-when-you-have-limited-data-f68c0b512cab. You need a pretrained model so you can reduce the amount of data required to train. Without it, you might need a few 100k images to train the model. You can find a bunch of pretrained models here The process of training a model is unnecessarily difficult to simplify the process we created a docker image would make it easy to train. To start training the model you can run: The docker image has a run.sh script that can be called with the following parameters You can find more details at: To train a model you need to select the right hyper parameters. Finding the right parameters The art of “Deep Learning” involves a little bit of hit and try to figure out which are the best parameters to get the highest accuracy for your model. There is some level of black magic associated with this, along with a little bit of theory. This is a great resource for finding the right parameters. Quantize Model (make it smaller to fit on a small device like the Raspberry Pi or Mobile) Small devices like Mobile Phones and Rasberry PI have very little memory and computation power. Training neural networks is done by applying many tiny nudges to the weights, and these small increments typically need floating point precision to work (though there are research efforts to use quantized representations here too). Taking a pre-trained model and running inference is very different. One of the magical qualities of Deep Neural Networks is that they tend to cope very well with high levels of noise in their inputs. Why Quantize? Neural network models can take up a lot of space on disk, with the original AlexNet being over 200 MB in float format for example. Almost all of that size is taken up with the weights for the neural connections, since there are often many millions of these in a single model. The Nodes and Weights of a neural network are originally stored as 32-bit floating point numbers. The simplest motivation for quantization is to shrink file sizes by storing the min and max for each layer, and then compressing each float value to an eight-bit integer.The size of the files is reduced by 75%. Code for Quantization: You need the Raspberry Pi camera live and working. Then capture a new Image For instructions on how to install checkout this link Download Model Once your done training the model you can download it on to your pi. To export the model run: Then download the model onto the Raspberry Pi. 
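As a side note on the quantization section above, the min/max scheme it describes can be sketched in a few lines of NumPy. This is only an illustration of the idea; the actual tooling the post relies on does this per layer inside the TensorFlow graph.

```python
import numpy as np

# Store a layer's min and max, map each float weight to an 8-bit integer,
# and reconstruct approximate floats at inference time.
def quantize(weights):
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0
    q = np.round((weights - w_min) / scale).astype(np.uint8)
    return q, w_min, scale

def dequantize(q, w_min, scale):
    return q.astype(np.float32) * scale + w_min

layer = np.random.randn(256, 128).astype(np.float32)        # stand-in layer weights
q, w_min, scale = quantize(layer)
print(layer.nbytes, q.nbytes)                                # roughly 75% smaller
print(np.abs(layer - dequantize(q, w_min, scale)).max())     # small reconstruction error
```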
Install TensorFlow on the Raspberry Pi Depending on your device you might need to change the installation a little Run model for predicting on the new Image The Raspberry Pi has constraints on both Memory and Compute (a version of Tensorflow Compatible with the Raspberry Pi GPU is still not available). Therefore, it is important to benchmark how much time do each of the models take to make a prediction on a new image. We have removed the need to annotate Images, we have expert annotators who will annotate your images for you. We automatically train the best model for you, to achieve this we run a battery of model with different parameters to select the best for your data NanoNets is entirely in the cloud and runs without using any of your hardware. Which makes it much easier to use. Since devices like the Raspberry Pi and mobile phones were not built to run complex compute heavy tasks, you can outsource the workload to our cloud which does all of the compute for you Get your free API Key from http://app.nanonets.com/user/api_key Collect the images of object you want to detect. You can annotate them either using our web UI (https://app.nanonets.com/ObjectAnnotation/?appId=YOUR_MODEL_ID) or use open source tool like labelImg. Once you have dataset ready in folders, images (image files) and annotations (annotations for the image files), start uploading the dataset. Once the Images have been uploaded, begin training the Model The model takes ~2 hours to train. You will get an email once the model is trained. In the meanwhile you check the state of the model Once the model is trained. You can make predictions using the model From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder & CEO @ NanoNets.com NanoNets: Machine Learning API " Emil Wallner,9.1K,25,https://medium.freecodecamp.org/how-you-can-train-an-ai-to-convert-your-design-mockups-into-html-and-css-cc7afd82fed4?source=tag_archive---------9----------------,How you can train an AI to convert your design mockups into HTML and CSS,"Within three years, deep learning will change front-end development. It will increase prototyping speed and lower the barrier for building software. The field took off last year when Tony Beltramelli introduced the pix2code paper and Airbnb launched sketch2code. Currently, the largest barrier to automating front-end development is computing power. However, we can use current deep learning algorithms, along with synthesized training data, to start exploring artificial front-end automation right now. In this post, we’ll teach a neural network how to code a basic a HTML and CSS website based on a picture of a design mockup. Here’s a quick overview of the process: We’ll build the neural network in three iterations. First, we’ll make a bare minimum version to get a hang of the moving parts. The second version, HTML, will focus on automating all the steps and explaining the neural network layers. In the final version, Bootstrap, we’ll create a model that can generalize and explore the LSTM layer. All the code is prepared on GitHub and FloydHub in Jupyter notebooks. All the FloydHub notebooks are inside the floydhub directory and the local equivalents are under local. The models are based on Beltramelli‘s pix2code paper and Jason Brownlee’s image caption tutorials. The code is written in Python and Keras, a framework on top of TensorFlow. If you’re new to deep learning, I’d recommend getting a feel for Python, backpropagation, and convolutional neural networks. 
My three earlier posts on FloydHub’s blog will get you started: Let’s recap our goal. We want to build a neural network that will generate HTML/CSS markup that corresponds to a screenshot. When you train the neural network, you give it several screenshots with matching HTML. It learns by predicting all the matching HTML markup tags one by one. When it predicts the next markup tag, it receives the screenshot as well as all the correct markup tags until that point. Here is a simple training data example in a Google Sheet. Creating a model that predicts word by word is the most common approach today. There are other approaches, but that’s the method we’ll use throughout this tutorial. Notice that for each prediction it gets the same screenshot. So if it has to predict 20 words, it will get the same design mockup twenty times. For now, don’t worry about how the neural network works. Focus on grasping the input and output of the neural network. Let’s focus on the previous markup. Say we train the network to predict the sentence “I can code.” When it receives “I,” then it predicts “can.” Next time it will receive “I can” and predict “code.” It receives all the previous words and only has to predict the next word. The neural network creates features from the data. The network builds features to link the input data with the output data. It has to create representations to understand what is in each screenshot, the HTML syntax, that it has predicted. This builds the knowledge to predict the next tag. When you want to use the trained model for real-world usage, it’s similar to when you train the model. The text is generated one by one with the same screenshot each time. Instead of feeding it with the correct HTML tags, it receives the markup it has generated so far. Then, it predicts the next markup tag. The prediction is initiated with a “start tag” and stops when it predicts an “end tag” or reaches a max limit. Here’s another example in a Google Sheet. Let’s build a “hello world” version. We’ll feed a neural network a screenshot with a website displaying “Hello World!” and teach it to generate the markup. First, the neural network maps the design mockup into a list of pixel values. From 0–255 in three channels — red, blue, and green. To represent the markup in a way that the neural network understands, I use one hot encoding. Thus, the sentence “I can code” could be mapped like the below. In the above graphic, we include the start and end tag. These tags are cues for when the network starts its predictions and when to stop. For the input data, we will use sentences, starting with the first word and then adding each word one by one. The output data is always one word. Sentences follow the same logic as words. They also need the same input length. Instead of being capped by the vocabulary, they are bound by maximum sentence length. If it’s shorter than the maximum length, you fill it up with empty words, a word with just zeros. As you see, words are printed from right to left. This forces each word to change position for each training round. This allows the model to learn the sequence instead of memorizing the position of each word. In the below graphic there are four predictions. Each row is one prediction. To the left are the images represented in their three color channels: red, green and blue and the previous words. Outside of the brackets are the predictions one by one, ending with a red square to mark the end. In the hello world version, we use three tokens: start,

Hello World!

and end. A token can be anything. It can be a character, word, or sentence. Character versions require a smaller vocabulary but constrain the neural network. Word level tokens tend to perform best. Here we make the prediction: FloydHub is a training platform for deep learning. I came across them when I first started learning deep learning and I’ve used them since for training and managing my deep learning experiments. You can install it and run your first model within 10 minutes. It’s hands down the best option to run models on cloud GPUs. If you are new to FloydHub, do their 2-min installation or my 5-minute walkthrough. All the notebooks are prepared inside the FloydHub directory. The local equivalents are under local. Once it’s running, you can find the first notebook here: floydhub/Helloworld/helloworld.ipynb . If you want more detailed instructions and an explanation for the flags, check my earlier post. In this version, we’ll automate many of the steps from the Hello World model. This section will focus on creating a scalable implementation and the moving pieces in the neural network. This version will not be able to predict HTML from random websites, but it’s still a great setup to explore the dynamics of the problem. If we expand the components of the previous graphic it looks like this. There are two major sections. First, the encoder. This is where we create image features and previous markup features. Features are the building blocks that the network creates to connect the design mockups with the markup. At the end of the encoder, we glue the image features to each word in the previous markup. The decoder then takes the combined design and markup feature and creates a next tag feature. This feature is run through a fully connected neural network to predict the next tag. Since we need to insert one screenshot for each word, this becomes a bottleneck when training the network (example). Instead of using the images, we extract the information we need to generate the markup. The information is encoded into image features. This is done by using an already pre-trained convolutional neural network (CNN). The model is pre-trained on Imagenet. We extract the features from the layer before the final classification. We end up with 1536 eight by eight pixel images known as features. Although they are hard to understand for us, a neural network can extract the objects and position of the elements from these features. In the hello world version, we used a one-hot encoding to represent the markup. In this version, we’ll use a word embedding for the input and keep the one-hot encoding for the output. The way we structure each sentence stays the same, but how we map each token is changed. One-hot encoding treats each word as an isolated unit. Instead, we convert each word in the input data to lists of digits. These represent the relationship between the markup tags. The dimension of this word embedding is eight but often varies between 50–500 depending on the size of the vocabulary. The eight digits for each word are weights similar to a vanilla neural network. They are tuned to map how the words relate to each other (Mikolov et al., 2013). This is how we start developing markup features. Features are what the neural network develops to link the input data with the output data. For now, don’t worry about what they are, we’ll dig deeper into this in the next section. We’ll take the word embeddings and run them through an LSTM and return a sequence of markup features. 
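Before moving on, here is what the hello world data preparation described earlier can look like in code: a toy sketch that assumes a three-token vocabulary plus a padding token, inputs padded on the left, and the next token one-hot encoded as the target. In training, each of these rows would be paired with the same screenshot.

```python
import numpy as np

vocab = ["<pad>", "start", "Hello World!", "end"]
token_to_id = {token: i for i, token in enumerate(vocab)}
sequence = ["start", "Hello World!", "end"]
max_length = 3

inputs, targets = [], []
for i in range(1, len(sequence)):
    seen = [token_to_id[t] for t in sequence[:i]]
    padded = [0] * (max_length - len(seen)) + seen       # fill up with empty words
    one_hot = np.zeros(len(vocab))
    one_hot[token_to_id[sequence[i]]] = 1                # next token to predict
    inputs.append(padded)
    targets.append(one_hot)

print(np.array(inputs))   # [[0 0 1], [0 1 2]]
print(np.array(targets))  # one-hot rows for "Hello World!" and "end"
```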
These are run through a Time distributed dense layer — think of it as a dense layer with multiple inputs and outputs. In parallel, the image features are first flattened. Regardless of how the digits were structured, they are transformed into one large list of numbers. Then we apply a dense layer on this layer to form a high-level feature. These image features are then concatenated to the markup features. This can be hard to wrap your mind around — so let’s break it down. Here we run the word embeddings through the LSTM layer. In this graphic, all the sentences are padded to reach the maximum size of three tokens. To mix signals and find higher-level patterns, we apply a TimeDistributed dense layer to the markup features. TimeDistributed dense is the same as a dense layer, but with multiple inputs and outputs. In parallel, we prepare the images. We take all the mini image features and transform them into one long list. The information is not changed, just reorganized. Again, to mix signals and extract higher level notions, we apply a dense layer. Since we are only dealing with one input value, we can use a normal dense layer. To connect the image features to the markup features, we copy the image features. In this case, we have three markup features. Thus, we end up with an equal amount of image features and markup features. All the sentences are padded to create three markup features. Since we have prepared the image features, we can now add one image feature for each markup feature. After sticking one image feature to each markup feature, we end up with three image-markup features. This is the input we feed into the decoder. Here we use the combined image-markup features to predict the next tag. In the below example, we use three image-markup feature pairs and output one next tag feature. Note that the LSTM layer has the sequence set to false. Instead of returning the length of the input sequence, it only predicts one feature. In our case, it’s a feature for the next tag. It contains the information for the final prediction. The dense layer works like a traditional feedforward neural network. It connects the 512 digits in the next tag feature with the 4 final predictions. Say we have 4 words in our vocabulary: start, hello, world, and end. The vocabulary prediction could be [0.1, 0.1, 0.1, 0.7]. The softmax activation in the dense layer distributes a probability from 0–1, with the sum of all predictions equal to 1. In this case, it predicts that the 4th word is the next tag. Then you translate the one-hot encoding [0, 0, 0, 1] into the mapped value, say “end”. If you can’t see anything when you click these links, you can right click and click on “View Page Source.” Here is the original website for reference. In our final version, we’ll use a dataset of generated bootstrap websites from the pix2code paper. By using Twitter’s bootstrap, we can combine HTML and CSS and decrease the size of the vocabulary. We’ll enable it to generate the markup for a screenshot it has not seen before. We’ll also dig into how it builds knowledge about the screenshot and markup. Instead of training it on the bootstrap markup, we’ll use 17 simplified tokens that we then translate into HTML and CSS. The dataset includes 1500 test screenshots and 250 validation images. For each screenshot there are on average 65 tokens, resulting in 96925 training examples. By tweaking the model in the pix2code paper, the model can predict the web components with 97% accuracy (BLEU 4-ngram greedy search, more on this later). 
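Pulling together the pieces described above (image features flattened and repeated, markup tokens embedded and run through an LSTM, the two concatenated and decoded into a next-tag prediction), a much-simplified Keras sketch of the wiring might look like this. The dimensions are illustrative; the notebooks linked earlier contain the author's real implementation.

```python
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, max_len = 17, 48                        # illustrative sizes
image_features = keras.Input(shape=(8, 8, 1536))    # pre-extracted CNN features

# Encoder, image side: flatten, compress with a dense layer, then repeat the
# result so there is one copy of the image feature per markup token.
img = layers.Flatten()(image_features)
img = layers.Dense(128, activation="relu")(img)
img = layers.RepeatVector(max_len)(img)

# Encoder, markup side: embed previous tokens and produce one feature per step.
markup_in = keras.Input(shape=(max_len,))
txt = layers.Embedding(vocab_size, 50)(markup_in)
txt = layers.LSTM(128, return_sequences=True)(txt)
txt = layers.TimeDistributed(layers.Dense(128))(txt)

# Decoder: concatenate image and markup features, predict the next tag.
decoder = layers.concatenate([img, txt])
decoder = layers.LSTM(256, return_sequences=False)(decoder)
output = layers.Dense(vocab_size, activation="softmax")(decoder)

model = keras.Model(inputs=[image_features, markup_in], outputs=output)
model.compile(loss="categorical_crossentropy", optimizer="rmsprop")
```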
Extracting features from pre-trained models works well in image captioning models. But after a few experiments, I realized that pix2code’s end-to-end approach works better for this problem. The pre-trained models have not been trained on web data and are customized for classification. In this model, we replace the pre-trained image features with a light convolutional neural network. Instead of using max-pooling to increase information density, we increase the strides. This maintains the position and the color of the front-end elements. There are two core models that enable this: convolutional neural networks (CNN) and recurrent neural networks (RNN). The most common recurrent neural network is long short-term memory (LSTM), so that’s what I’ll refer to. There are plenty of great CNN tutorials, and I covered them in my previous article. Here, I’ll focus on the LSTMs. One of the harder things to grasp about LSTMs is timesteps. A vanilla neural network can be thought of as two timesteps. If you give it “Hello,” it predicts “World.” But it would struggle to predict more timesteps. In the below example, the input has four timesteps, one for each word. LSTMs are made for input with timesteps. It’s a neural network customized for information in order. If you unroll our model, it looks like this. For each downward step, you keep the same weights. You apply one set of weights to the previous output and another set to the new input. The weighted input and output are concatenated and added together with an activation. This is the output for that timestep. Since we reuse the weights, they draw information from several inputs and build knowledge of the sequence. Here is a simplified version of the process for each timestep in an LSTM. To get a feel for this logic, I’d recommend building an RNN from scratch with Andrew Trask’s brilliant tutorial. The number of units in each LSTM layer determines its ability to memorize. This also corresponds to the size of each output feature. Again, a feature is a long list of numbers used to transfer information between layers. Each unit in the LSTM layer learns to keep track of different aspects of the syntax. Below is a visualization of a unit that keeps track of the information in the row div. This is the simplified markup we are using to train the bootstrap model. Each LSTM unit maintains a cell state. Think of the cell state as the memory. The weights and activations are used to modify the state in different ways. This enables the LSTM layers to fine-tune which information to keep and discard for each input. In addition to passing on an output feature for each input, it also forwards the cell states, one value for each unit in the LSTM. To get a feel for how the components within the LSTM interact, I recommend Colah’s tutorial, Jayasiri’s Numpy implementation, and Karpathy’s lecture and write-up. It’s tricky to find a fair way to measure the accuracy. Say you compare word by word. If your prediction is one word out of sync, you might have 0% accuracy. If you remove one word, which syncs the prediction, you might end up with 99/100. I used the BLEU score, best practice in machine translation and image captioning models. It breaks the sentence into four n-grams, from 1–4 word sequences. In the below prediction, “cat” is supposed to be “code.” To get the final score, you multiply each score by 25%: (4/5) * 0.25 + (2/4) * 0.25 + (1/3) * 0.25 + (0/2) * 0.25 = 0.2 + 0.125 + 0.083 + 0 = 0.408. The sum is then multiplied with a sentence length penalty. 
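If you want to reproduce this kind of score yourself, here is a minimal sketch using NLTK (an assumption; the post does not say which BLEU implementation it used, and the two sentences below are made up). Note that standard BLEU combines the four n-gram precisions with a geometric mean rather than the simple weighted sum walked through above, so a missing 4-gram match zeroes the score unless you add smoothing.

```python
# A minimal, hedged sketch of scoring a prediction with BLEU via NLTK.
# The reference/candidate token lists are made up for illustration.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["start", "header", "btn-green", "row", "end"]
candidate = ["start", "header", "btn-red", "row", "end"]

# Equal 25% weights for 1- to 4-grams, mirroring the weighting described above;
# smoothing keeps a missing 4-gram match from zeroing the whole score.
smooth = SmoothingFunction().method1
score = sentence_bleu([reference], candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=smooth)
print(f"BLEU-4: {score:.3f}")
```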
Since the length is correct in our example, it becomes our final score. You could increase the number of n-grams to make it harder. A four n-gram model is the one that best corresponds to human translations. I’d recommend running a few examples yourself and reading the Wikipedia page. Front-end development is an ideal space to apply deep learning. It’s easy to generate data, and the current deep learning algorithms can map most of the logic. One of the most exciting areas is applying attention to LSTMs. This will not just improve the accuracy, but enable us to visualize where the CNN puts its focus as it generates the markup. Attention is also key for communicating between markup, stylesheets, scripts and eventually the backend. Attention layers can keep track of variables, enabling the network to communicate between programming languages. But in the near future, the biggest impact will come from building a scalable way to synthesize data. Then you can add fonts, colors, words, and animations step by step. So far, most progress is happening in taking sketches and turning them into template apps. In less than two years, we’ll be able to draw an app on paper and have the corresponding front-end in less than a second. There are already two working prototypes built by Airbnb’s design team and Uizard. Here are some experiments to get started, grouped under “Getting started” and “Further experiments.” Huge thanks to Tony Beltramelli and Jon Gold for their research and ideas, and for answering questions. Thanks to Jason Brownlee for his stellar Keras tutorials (I included a few snippets from his tutorial in the core Keras implementation), and Beltramelli for providing the data. Also thanks to Qingping Hou, Charlie Harrington, Sai Soundararaj, Jannes Klaas, Claudio Cabral, Alain Demenet and Dylan Djian for reading drafts of this. This is the fourth part of a multi-part blog series from Emil as he learns deep learning. Emil has spent a decade exploring human learning. He’s worked for Oxford’s business school, invested in education startups, and built an education technology business. Last year, he enrolled at Ecole 42 to apply his knowledge of human learning to machine learning. If you build something or get stuck, ping me below or on Twitter: emilwallner. I’d love to see what you are building. This was first published as a community post on FloydHub’s blog. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. I study CS at 42 Paris, blog, and experiment with deep learning. Our community publishes stories worth reading on development, design, and data science. " Gant Laborde,1.3K,7,https://medium.freecodecamp.org/machine-learning-how-to-go-from-zero-to-hero-40e26f8aa6da?source=---------2----------------,Machine Learning: how to go from Zero to Hero – freeCodeCamp,"If your understanding of A.I. and Machine Learning is a big question mark, then this is the blog post for you. Here, I gradually increase your Awesomenessicity™ by gluing inspirational videos together with friendly text. Sit down and relax. These videos take time, and if they don’t inspire you to continue to the next section, fair enough. However, if you find yourself at the bottom of this article, you’ve earned your well-rounded knowledge and passion for this new world. Where you go from there is up to you. A.I. was always cool, from moving a paddle in Pong to lighting you up with combos in Street Fighter. A.I. 
has always revolved around a programmer’s functional guess at how something should behave. Fun, but programmers aren’t always gifted in programming A.I. as we often see. Just Google “epic game fails” to see glitches in A.I., physics, and sometimes even experienced human players. Regardless, A.I. has a new talent. You can teach a computer to play video games, understand language, and even how to identify people or things. This tip-of-the-iceberg new skill comes from an old concept that only recently got the processing power to exist outside of theory. I’m talking about Machine Learning. You don’t need to come up with advanced algorithms anymore. You just have to teach a computer to come up with its own advanced algorithm. So how does something like that even work? An algorithm isn’t really written as much as it is sort of... bred. I’m not using breeding as an analogy. Watch this short video, which gives excellent commentary and animations to the high-level concept of creating the A.I. Wow! Right? That’s a crazy process! Now how is it that we can’t even understand the algorithm when it’s done? One great visual was when the A.I. was written to beat Mario games. As a human, we all understand how to play a side-scroller, but identifying the predictive strategy of the resulting A.I. is insane. Impressed? There’s something amazing about this idea, right? The only problem is we don’t know Machine Learning, and we don’t know how to hook it up to video games. Fortunately for you, Elon Musk already provided a non-profit company to do the latter. Yes, in a dozen lines of code you can hook up any A.I. you want to countless games/tasks! I have two good answers on why you should care. Firstly, Machine Learning (ML) is making computers do things that we’ve never made computers do before. If you want to do something new, not just new to you, but to the world, you can do it with ML. Secondly, if you don’t influence the world, the world will influence you. Right now significant companies are investing in ML, and we’re already seeing it change the world. Thought-leaders are warning that we can’t let this new age of algorithms exist outside of the public eye. Imagine if a few corporate monoliths controlled the Internet. If we don’t take up arms, the science won’t be ours. I think Christian Heilmann said it best in his talk on ML. The concept is useful and cool. We understand it at a high level, but what the heck is actually happening? How does this work? If you want to jump straight in, I suggest you skip this section and move on to the next “How Do I Get Started” section. If you’re motivated to be a DOer in ML, you won’t need these videos. If you’re still trying to grasp how this could even be a thing, the following video is perfect for walking you through the logic, using the classic ML problem of handwriting. Pretty cool huh? That video shows that each layer gets simpler rather than more complicated. Like the function is chewing data into smaller pieces that end in an abstract concept. You can get your hands dirty in interacting with this process on this site (by Adam Harley). It’s cool watching data go through a trained model, but you can even watch your neural network get trained. One of the classic real-world examples of Machine Learning in action is the iris data set from 1936. In a presentation I attended by JavaFXpert’s overview on Machine Learning, I learned how you can use his tool to visualize the adjustment and back propagation of weights to neurons on a neural network. 
You get to watch it train the neural model! Even if you’re not a Java buff, the presentation Jim gives on all things Machine Learning is a pretty cool 1.5+ hour introduction into ML concepts, which includes more info on many of the examples above. These concepts are exciting! Are you ready to be the Einstein of this new era? Breakthroughs are happening every day, so get started now. There are tons of resources available. I’ll be recommending two approaches. In this approach, you’ll understand Machine Learning down to the algorithms and the math. I know this way sounds tough, but how cool would it be to really get into the details and code this stuff from scratch! If you want to be a force in ML, and hold your own in deep conversations, then this is the route for you. I recommend that you try out Brilliant.org’s app (always great for any science lover) and take the Artificial Neural Network course. This course has no time limits and helps you learn ML while killing time in line on your phone. This one costs money after Level 1. Combine the above with simultaneous enrollment in Andrew Ng’s Stanford course on “Machine Learning in 11 weeks”. This is the course that Jim Weaver recommended in his video above. I’ve also had this course independently suggested to me by Jen Looper. Everyone provides a caveat that this course is tough. For some of you that’s a show stopper, but for others, that’s why you’re going to put yourself through it and collect a certificate saying you did. This course is 100% free. You only have to pay for a certificate if you want one. With those two courses, you’ll have a LOT of work to do. Everyone should be impressed if you make it through because that’s not simple. But more so, if you do make it through, you’ll have a deep understanding of the implementation of Machine Learning that will catapult you into successfully applying it in new and world-changing ways. If you’re not interested in writing the algorithms, but you want to use them to create the next breathtaking website/app, you should jump into TensorFlow and the crash course. TensorFlow is the de facto open-source software library for machine learning. It can be used in countless ways and even with JavaScript. Here’s a crash course. Plenty more information on available courses and rankings can be found here. If taking a course is not your style, you’re still in luck. You don’t have to learn the nitty-gritty of ML in order to use it today. You can efficiently utilize ML as a service in many ways with tech giants who have trained models ready. I would still caution you that there’s no guarantee that your data is safe or even yours, but the offerings of services for ML are quite attractive! Using an ML service might be the best solution for you if you’re excited and able to upload your data to Amazon/Microsoft/Google. I like to think of these services as a gateway drug to advanced ML. Either way, it’s good to get started now. I have to say thank you to all the aforementioned people and videos. They were my inspiration to get started, and though I’m still a newb in the ML world, I’m happy to light the path for others as we embrace this awe-inspiring age we find ourselves in. It’s imperative to reach out and connect with people if you take up learning this craft. Without friendly faces, answers, and sounding boards, anything can be hard. Just being able to ask and get a response is a game changer. Add me, and add the people mentioned above. Friendly people with friendly advice helps! See? 
I hope this article has inspired you and those around you to learn ML! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Software Consultant, Adjunct Professor, Published Author, Award Winning Speaker, Mentor, Organizer and Immature Nerd :D — Lately full of React Native Tech Our community publishes stories worth reading on development, design, and data science. " James JD Sutton,2.2K,9,https://medium.com/coinmonks/what-is-q-from-a-laymen-given-barney-style-6387b18267d2?source=---------3----------------,What is “Q” from a laymen... – Coinmonks – Medium,"A bit long, but I think it might help people understand Qubic a bit. Two takeaways I took from reading Qubic: (Rev_02) Take Away One: 1. If you host a “Q-Node”, a node that supports the Q protocol (layer), you can earn rewards in these manners: offering PoW (mining rigs, a computer, or your coffee pot), PoS (the IOTAs that you hold), the bandwidth that you don’t use (probably something to do with LiFi in the future, so this could be your router and the lightbulbs in your house), and, simply, the previous history of running an honest node for the system. All of the above can be used to pass the “resource test phase”. All of those resources: PoW, PoS, Po(Bandwidth), and Po(Honesty) are measured and quantified. Your resources then essentially set you in an equivalent resource pool, ie: in a pool with other people of similar resource power. You then earn IOTAs from people using the Oracle system, smart contracts, or simply those who want computational power (which is absolutely needed to be able to outsource the IoT industry, which is for sure the future). So what does that mean? Do you remember all of the questions from before: IOTA won’t work because people won’t run nodes, because they don’t get incentives like traditional blockchains. Well now they can!!! And not only that, “Q” takes every aspect of each crypto and combines it all in one... PoS, PoW, PoBandwidth, and PoHonesty. More so, if you have ASICs, you are in the ASIC pool; GPUs, you’re in the GPU pool; an old crappy computer, you’re in the old crappy computer pool; you stake a lot of IOTA, you’re in the high-stake IOTA pool... etc. This is the process of “proving” your resources to the network. People will purchase “resources” using the Qubic protocol. If they want quality, fast, or extreme computational power they have to pay. Remember, you the user set what you want to receive in IOTA for your resources (economic principles). If you spend $1200 a month on electricity and equipment, you will only charge more than $1200 a month for your resources; no one would charge less. So, in your pool, everyone will eventually come to a quorum charging a set amount, and thus the economy (the users) will pay for it. So, in essence, the better the pool, the more reward you get (based on economic principles in society, just like blockchain). I don’t fully understand the exact quantitative measure of what equates to the reward (such as with hash power in blockchain), though it seems that once you prove your resources your machine performs the calculations that are being bought on the Qubic network. However, if your coffee pot has a Jinn chip that is ternary hardware, with ternary programming (ABRA), then it can sell its resources when it’s not making coffee, ie: proving its resources and then completing computations for buyers. 
This is just speculative, but the ABRA ternary language will be able to interface with binary and lower the energy consumption by a significant amount. When combining ABRA with a ternary chip such as Jinn, the energy efficiency is even greater! One of the major bottlenecks or challenges that prevent advancement in technology is the amount of battery storage within machines. If we can’t redesign a battery to store more power, at least we can redesign the energy consumption within machine devices. Also, your autonomous car not only can offer up its PoW, it can also stake the IOTAs it is not using in its wallet, offer its bandwidth when it isn’t working or driving, and offer the experience / honesty factor (by proving its resources and then selling its computational power), as it “may” be able to be a node in itself. In addition, the left-over electricity it has from charging up through solar or wind power can be sold through the smart grid to neighbors or local businesses. Your car has “multiple” resources, and the Qubic network allows machines to offer “all” of their resources to their owner, not just one or two as with blockchain. Qubic revolutionizes machinery by allowing it, the machinery, to sell its resources. This is another building block to the ultimate vision of a machine acting in a “machine economy”. Rather than us setting this up, and the fees we want to charge, eventually we can create smart contracts with Qubic functions, which then allow machines to negotiate and earn for “themselves”: the machines will sell and buy resources “THEMSELVES”, truly creating a machine economy, “AND” if you own the machine, you earn the rewards (ie: income, passive income). Take Away Two: 2. From the above description these are only a few use cases that I take away from reading about Qubics. The reality is that the community will probably be coming up with new use cases every day for the following year. Use cases that we can’t even imagine at the present time, but here is my second takeaway: the Qubic protocol, where all this is happening. Miners earning; people staking their IOTA and earning (ie: “interest” or “passive income”) because they are HODLers (and by proving their resources they sell their computational power); Forex financial companies using Qubics for quorum “ORACLE” data; smart-contracts being run on the protocol; scientists using computational power for medical research; VW, Fujitsu, and Bosch using computational power for their IoT devices; etc. on and on. All those use cases, to power.... TO POWER, to run the network, all those functions will be conducted with zero fee transactions that take place on the Tangle with real-time smart-contract micro-payments. The whole system runs on data transactions (zero fee transactions) by sending metadata within the transactions sent on the Tangle. Metadata, essentially (I’m not a techie), is like the language that tells the Q-Nodes to wake up, to process data, pay, earn, and receive, and essentially run the whole Q network. So.... that is a SHITLOAD of transactions occurring!!!!!! Today, the number of transactions occurring from Trinity, speculation, and trading is like a drop in the ocean compared to how many transactions the Qubic network will produce. It’s not hard to understand: the Qubic network will run millions if not billions of transactions per day over the Tangle, and remember, “each transaction confirms two transactions”. So.... what does that mean? 
More transactions mean a faster Tangle, a more secure Tangle, an infinitely scalable Tangle.... and most importantly.... WE CAN TAKE THE COO (Coordinator) OFFLINE!!! Note: there may be use cases for multiple COOs (coordinators) or private COOs, but that is a whole other arena and I simply state this because I read someone writing such an example that went right over my head. The point is: Q is needed to remove the COO! So, to everyone who says, “Why don’t the devs focus on removing the COO” (“wen remove COO”), you can see that THEY ARE working on it! The Qubic network will support the network because it incentivizes people to host nodes and earn IOTA! Also, if no one uses the Qubic network then it doesn’t work, right?! So, making “Corporate Partners”, United Nations (NGO) affiliates, and partnerships with banks, all of this is needed to support the Qubic Network. So here are the building blocks to the devs’ vision: - You need a Tangle (zero fee transactions that can send metadata) - You need IOTA (a transfer means of metadata and a form of payment that can buy and sell resources (ie: PoW, PoS, PoBandwidth, and PoHonesty)) - You need the Qubic Network (creates Oracles, allows for Quorum Based Computations that power Oracles.) - You need Oracles (Oracles power smart-contracts, which is the whole shebang! It will change society and change global finance). - You need the Qubic Network (connects users of the network with resource providers of the network, enables a machine economy, and provides computational power and the most advanced smart-contracts to society). - Users of the network (We need a community (that the IOTA Foundation builds by hosting AMAs, taking the time to talk to the community on Discord, and providing transparency so we all can go along on their journey of completing their vision), we need global partners such as Bosch, VW, Fujitsu, etc., we need governments and societies such as Taiwan, Denmark, and maybe Sweden, and we need banking like DnB and electrical companies like Elaad.) We need the global integration to actually “use” the Qubic network for it to work (demand drives economic principles, which ultimately will pay the Q-Node providers, which will drive transactions, thus scaling the network). - Lastly, you need to remove the COO and let the network grow organically. (This can only be done when the previous steps have been completed). Tangle -> IOTA -> Qubic Network -> Oracles -> Partners -> COO So removing the COO is one of the last steps. After removing the COO, the network can just grow organically on its own without much support or help from the devs. They can then work on building applications that work on top of the Qubic network. This is a large, challenging undertaking that is being built step-by-step; each piece is part of a large puzzle that all comes together. The Qubic vision, which is what was just released, is a really large damn piece of that puzzle!!! It just goes to show that all of this adds up to removing the COO. Everything the devs and the IF have been doing works towards simply that! It’s all one big construct, not different pieces; everything ties together and the Qubic network is a large friggin piece of it all. Their sole mission is to complete the puzzle, the vision, so the COO can be removed, and the Tangle can literally change society through the machine economy. This is just my non-techie understanding at the moment. I have a lot more research and studying to do, but damn I love it! 
So glad to be allowed within this community and to enjoy the journey with the IOTA Foundation. Please clarify if I totally misunderstood anything; I look forward to hearing other people’s understanding. Lastly, after writing this I re-read the Qubic website. It is difficult to understand, but my rough reading is that Q-Nodes and Qubics can lie dormant listening to the Tangle. Qubics are event driven, so that when one Qubic initiates, another Qubic may need that quorum information to activate, and when the one Qubic gets the result it intended to compute, then that Qubic itself can activate. So, one Qubic can initiate another Qubic and so on and so on, like neurons firing, lighting up a portion of the brain, which then fires more neurons. This is all done through secured data streams, the Tangle, the Q-Nodes, and the Qubic network. In a way, it’s a global living system with the data stream as its life-blood, the Tangle as its bone structure, and the Qubics and Q-Nodes as its neurons. For all we know, in the future, this global mass network can both be and power AI, or maybe it will grow to become one massive AI source that can help society in so many ways. As I stated, I’m a non-techie. I have probably put out a bit of misinformation, as I don’t fully understand it all. Really, I just hope to ignite curiosity, so people may be inspired to put a toe into the new world of the Machine Economy. Also, sometimes it is hard to see the big picture. The IOTA Foundation has been working on “A” vision, a machine-to-machine economy that will change society, with the Tangle as a standard protocol, the bone structure of it all. The fact is each new development is another puzzle piece, or a foundation block, that stacks on top of the others. In the end we have the puzzle as a whole, or a great structure built upon a solid foundation. https://twitter.com/IotanSea https://qubic.iota.org https://www.iota.org/ https://www.facebook.com/groups/iotatangle/ From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. The Crypto & Blockchain publication. Educate yourself about cryptocurrency and blockchain developments. Check tutorials on Solidity and smart contracts. " Justin Lee,511,10,https://medium.com/swlh/the-beginners-guide-to-conversational-commerce-96f9c7dbaefb?source=---------5----------------,The beginner’s guide to conversational commerce – The Startup – Medium,"Your greengrocer does it. So does that guy selling sunglasses on the beach. It’s why the funny old French bakery around the corner has been running for 15 years. Conversational marketing. A buzzword, a footnote, a revelation. Everyone’s talking about it, but what is it? At its most simple, it’s the act of talking — and more importantly, listening — to your customers: their problems, their stories, their successes. Forging a genuine connection and using that connection to inform your marketing decisions. At its most complex, conversational marketing has become synonymous with cutting-edge technologies for computer-based dialog processing. Brands have always known that one-to-one conversations are valuable; but up until very recently, it was impossible to personalize these conversations at scale, in real time. No longer. Chatbots have become a mainstay of digital marketing, and every day their underlying AI becomes more sophisticated. Gartner predicts that by 2020, 30% of our interactions with technology will be through “conversations” with smart machines. 
In his 1999 Cluetrain Manifesto, David Weinberger reminded us that markets are conversations; that’s a hundred times more true today. A successful conversational marketing strategy will pair the spark of authenticity from real conversation with the emerging technologies of the future. In a 2016 article, Chris Messina distills the concept: Conversational marketing is the process of having a real-time, one-to-one conversation with a customer or lead. It’s a direct, personalized, dialog-driven approach to nurturing long-term relationships, collecting data and increasing sales. Unlike traditional digital marketing, it ‘pulls’ users in instead of ‘pushing’ content on them. It’s a discourse, not a lecture. Despite recently picking up speed, conversational marketing isn’t new. The concept made its first appearance in 2007 with Joseph Jaffe’s Join the Conversation. Jaffe wanted to teach marketers to re-engage their customers through community, partnership, and dialog: In the past, brands have been able to talk at their customers — through email, website interactions and social media — not with them. Brands have struggled to capture, keep and convert attention into sales, sign-ups, and long-term loyalty. Engagement was passive, and results were shallow. Customer service was relegated to a formulaic question-answer scenario that was unsatisfying for everyone involved. Take it from leading conversational marketing platform Drift’s stellar report: Today, messaging apps have over 5 billion monthly active users, and for the first time, usage rates have surpassed social networks. Whether it’s chatting with friends on WhatsApp or exchanging ideas with coworkers on Slack, messaging has become an integral part of our lives. Despite extreme app saturation, the average person only uses five apps regularly and, you guessed it — messaging apps claim these spots, boasting 10x better open rates than the next leading digital channel. These messaging platforms have huge audiences: there are over 4 billion active monthly users on the top three messaging apps. Like the rise of the internet or the app economy of the past decade, conversational marketing is born from current desires: for real-time connection and genuine value. Conversational marketing is an umbrella term that encompasses every dialog-driven tactic, from opt-in email marketing to customer feedback. But the engine powering recent developments is Artificial Intelligence (AI). Chatbots represent the new era in conversational marketing: scalable, personalized, real-time and data-driven. Of course, these bots aren’t intended to replace human-to-human interactions; they’re there to support and enhance them: helping users have the right conversations with the right people at the right time. (For the time being, anyway. According to Gartner research, chatbots will account for 85% of all customer service by 2020). Chatbots are a blank canvas, with the potential to be molded and infused with a persona that reflects a company’s values — like our very own GrowthBot (AKA a mini Dharmesh Shah). This technology is still in its infancy, so most bots follow a set of rules programmed by a human via a bot-building platform. The differentiator is that the chatbots carry out conversations with users using natural language. AI uses first-person data to learn more about each customer and deliver a hyper-personalized experience. Reps and bots can then join forces to manage these conversations at scale. Let’s imagine I’m going to a fancy party. Tonight. 
It’s last minute, and I’ve just received a message that it’s black tie; but I don’t have the right shoes. I need to quickly find a pair that is appropriate; my size; coherent with the rest of my outfit; a good price, etc. I would usually Google for a shop in my area, then go to browse on their website to find a pair I like. But other issues would soon crop up: do they have my size? Are the shoes smart enough? Are they in stock? I could fill out a query on the site’s contact form, or give them a ring, but will they answer? And if they don’t have the right pair, they are unlikely to suggest a range of alternatives. The whole process is time-consuming and inefficient. But suppose the brand I like has a strong conversational marketing culture. Instead of resorting to email, I would be able to conduct the conversation in seconds on my phone; instantly, I’m given the colours, sizes and styles in stock. I can pay for the right shoes with a tap of a button. Conversational marketing enables users to get the information they need instantly, without picking up the phone or engaging with a person. It’s not about laziness; it’s about ease. Chris Messina concludes, As Clara de Soto, cofounder of Reply.ai, told VentureBeat, If users are made to toggle between various apps and platforms to get the answer they need, the value of the bot is moot: it needs to be native to the place they spend most time, whether that’s Slack, Messenger or onsite chat. But it can be tricky for brands to consolidate all their conversations in one place. That’s why HubSpot created Conversations, a free, multi-channel tool that lets businesses have one-to-one conversations at scale. says Dharmesh Shah, co-founder and CTO of HubSpot. We have a much lower tolerance for mistakes with machines compared to humans: 73% of people say they won’t interact with a bot again after one negative experience. And if a bot seems to be able to converse in English, we tend to easily overestimate how capable it is. That’s why it’s crucial to manage your customers’ expectations appropriately. Bots are far from being autonomous, and people aren’t easily fooled; trying to present your bot as a human agent is likely to be self-defeating. Bots don’t understand context created by preceding text, and conversational nuances can easily affect their capacity to answer. Because bots live inside messaging apps, they have the potential to invade a highly personal space, making the stakes of getting it right much higher. According to research, people use messaging apps for customer assistance with one key goal: to get their problem solved, fast. Bots should serve one simple purpose well, without getting tangled up in the conversational complications that are better left to humans. The way brands and users interract is undergoing a monumental shift. Customers are smarter and better-informed than ever before. They expect personalization and transparency as a prerequisite. They feel empowered by their options. It’s hard to fool them, and even harder to gain their loyalty. And most significantly, they want 24/7, 365 days of the year instantaneousness: to be heard, to be helped, right now; not in half an hour, not tomorrow. That’s why conversational marketing represents a new cornerstone in marketing but also in customer service and experience, branding and sales. Building a bot for the sake of being on-trend is not enough; it needs to be part of a larger strategy where each conversation has a purpose. 
As a long-term strategy intended to facilitate lasting relationships, it needs to be spearheaded towards a long-term goal. Effective conversational marketing is an intersection of brand values, user engagement and valuable dialogue. It’s about building your audience first, selling last. Thanks for reading. Originally published at blog.growthbot.org. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Head of Growth for GrowthBot, Messaging & Conversational Strategy @HubSpot Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi " Kai Stinchcombe,44K,11,https://medium.com/@kaistinchcombe/decentralized-and-trustless-crypto-paradise-is-actually-a-medieval-hellhole-c1ca122efdec?source=---------6----------------,Blockchain is not only crappy technology but a bad vision for the future,"Blockchain is not only crappy technology but a bad vision for the future. Its failure to achieve adoption to date is because systems built on trust, norms, and institutions inherently function better than the type of no-need-for-trusted-parties systems blockchain envisions. That’s permanent: no matter how much blockchain improves it is still headed in the wrong direction. This December I wrote a widely-circulated article on the inapplicability of blockchain to any actual problem. People objected mostly not to the technology argument, but rather hoped that decentralization could produce integrity. Let’s start with this: Venmo is a free service to transfer dollars, and bitcoin transfers are not free. Yet after I wrote an article last December saying bitcoin had no use, someone responded that Venmo and Paypal are raking in consumers’ money and people should switch to bitcoin. What a surreal contrast between blockchain’s non-usefulness/non-adoption and the conviction of its believers! It’s so entirely evident that this person didn’t become a bitcoin enthusiast because they were looking for a convenient, free way to transfer money from one person to another and discovered bitcoin. In fact, I would assert that there is no single person in existence who had a problem they wanted to solve, discovered that an available blockchain solution was the best way to solve it, and therefore became a blockchain enthusiast. The number of retailers accepting cryptocurrency as a form of payment is declining, and its biggest corporate boosters like IBM, NASDAQ, Fidelity, Swift and Walmart have gone long on press but short on actual rollout. Even the most prominent blockchain company, Ripple, doesn’t use blockchain in its product. You read that right: the company Ripple decided the best way to move money across international borders was to not use Ripples. Why all the enthusiasm for something so useless in practice? People have made a number of implausible claims about the future of blockchain—like that you should use it for AI in place of the type of behavior-tracking that google and facebook do, for example. This is based on a misunderstanding of what a blockchain is. A blockchain isn’t an ethereal thing out there in the universe that you can “put” things into, it’s a specific data structure: a linear transaction log, typically replicated by computers whose owners (called miners) are rewarded for logging new transactions. There are two things that are cool about this particular data structure. One is that a change in any block invalidates every block after it, which means that you can’t tamper with historical transactions. 
The second is that you only get rewarded if you’re working on the same chain as everyone else, so each participant has an incentive to go with the consensus. The end result is a shared definitive historical record. And, what’s more, because consensus is formed by each person acting in their own interest, adding a false transaction or working from a different history just means you’re not getting paid and everyone else is. Following the rules is mathematically enforced—no government or police force need come in and tell you the transaction you’ve logged is false (or extort bribes or bully the participants). It’s a powerful idea. So in summary, here’s what blockchain-the-technology is: “Let’s create a very long sequence of small files — each one containing a hash of the previous file, some new data, and the answer to a difficult math problem — and divide up some money every hour among anyone willing to certify and store those files for us on their computers.” Now, here’s what blockchain-the-metaphor is: “What if everyone keeps their records in a tamper-proof repository not owned by anyone?” An illustration of the difference: In 2006, Walmart launched a system to track its bananas and mangoes from field to store. In 2009 they abandoned it because of logistical problems getting everyone to enter the data, and in 2017 they re-launched it (to much fanfare) on blockchain. If someone comes to you with “the mango-pickers don’t like doing data entry,” “I know: let’s create a very long sequence of small files, each one containing a hash of the previous file” is a nonsense answer, but “What if everyone keeps their records in a tamper-proof repository not owned by anyone?” at least addresses the right question! People treat blockchain as a “futuristic integrity wand”—wave a blockchain at the problem, and suddenly your data will be valid. For almost anything people want to be valid, blockchain has been proposed as a solution. It’s true that tampering with data stored on a blockchain is hard, but it’s false that blockchain is a good way to create data that has integrity. To understand why this is the case, let’s work from the practical to the theoretical. For example, let’s consider a widely-proposed use case for blockchain: buying an e-book with a “smart” contract. The goal of the blockchain is, you don’t trust an e-book vendor and they don’t trust you (because you’re just two individuals on the internet), but, because it’s on blockchain, you’ll be able to trust the transaction. In the traditional system, once you pay you’re hoping you’ll receive the book, but once the vendor has your money they don’t have any incentive to deliver. You’re relying on Visa or Amazon or the government to make things fair—what a recipe for being a chump! In contrast, on a blockchain system, by executing the transaction as a record in a tamper-proof repository not owned by anyone, the transfer of money and digital product is automatic, atomic, and direct, with no middleman needed to arbitrate the transaction, dictate terms, and take a fat cut on the way. Isn’t that better for everybody? Hm. Perhaps you are very skilled at writing software. When the novelist proposes the smart contract, you take an hour or two to make sure that the contract will withdraw only an amount of money equal to the agreed-upon price, and that the book — rather than some other file, or nothing at all — will actually arrive. Auditing software is hard! 
The most-heavily scrutinized smart contract in history had a small bug that nobody noticed — that is, until someone did notice it, and used it to steal fifty million dollars. If cryptocurrency enthusiasts putting together a $150m investment fund can’t properly audit the software, how confident are you in your e-book audit? Perhaps you would rather write your own counteroffer software contract, in case this e-book author has hidden a recursion bug in their version to drain your ethereum wallet of all your life savings? It’s a complicated way to buy a book! It’s not trustless, you’re trusting in the software (and your ability to defend yourself in a software-driven world), instead of trusting other people. Another example: the purported advantages for a voting system in a weakly-governed country. “Keep your voting records in a tamper-proof repository not owned by anyone” sounds right — yet is your Afghan villager going to download the blockchain from a broadcast node and decrypt the Merkle root from his Linux command line to independently verify that his vote has been counted? Or will he rely on the mobile app of a trusted third party — like the nonprofit or open-source consortium administering the election or providing the software? These sound like stupid examples — novelists and villagers hiring e-bodyguard hackers to protect them from malicious customers and nonprofits whose clever smart-contracts might steal their money and votes?? — until you realize that’s actually the point. Instead of relying on trust or regulation, in the blockchain world, individuals are on-purpose responsible for their own security precautions. And if the software they use is malicious or buggy, they should have read the software more carefully. You actually see it over and over again. Blockchain systems are supposed to be more trustworthy, but in fact they are the least trustworthy systems in the world. Today, in less than a decade, three successive top bitcoin exchanges have been hacked, another is accused of insider trading, the demonstration-project DAO smart contract got drained, crypto price swings are ten times those of the world’s most mismanaged currencies, and bitcoin, the “killer app” of crypto transparency, is almost certainly artificially propped up by fake transactions involving billions of literally imaginary dollars. Blockchain systems do not magically make the data in them accurate or the people entering the data trustworthy, they merely enable you to audit whether it has been tampered with. A person who sprayed pesticides on a mango can still enter onto a blockchain system that the mangoes were organic. A corrupt government can create a blockchain system to count the votes and just allocate an extra million addresses to their cronies. An investment fund whose charter is written in software can still misallocate funds. How then, is trust created? In the case of buying an e-book, even if you’re buying it with a smart contract, instead of auditing the software you’ll rely on one of four things, each of them characteristics of the “old way”: either the author of the smart contract is someone you know of and trust, the seller of the e-book has a reputation to uphold, you or friends of yours have bought e-books from this seller in the past successfully, or you’re just willing to hope that this person will deal fairly. 
In each case, even if the transaction is effectuated via a smart contract, in practice you’re relying on trust of a counterparty or middleman, not your self-protective right to audit the software, each man an island unto himself. The contract still works, but the fact that the promise is written in auditable software rather than government-enforced English makes it less transparent, not more transparent. The same for the vote counting. Before blockchain can even get involved, you need to trust that voter registration is done fairly, that ballots are given only to eligible voters, that the votes are made anonymously rather than bought or intimidated, that the vote displayed by the balloting system is the same as the vote recorded, and that no extra votes are given to the political cronies to cast. Blockchain makes none of these problems easier and many of them harder—but more importantly, solving them in a blockchain context requires a set of awkward workarounds that undermine the core premise. So we know the entries are valid, let’s allow only trusted nonprofits to make entries—and you’re back at the good old “classic” ledger. In fact, if you look at any blockchain solution, inevitably you’ll find an awkward workaround to re-create trusted parties in a trustless world. Yet absent these “old way” factors—supposing you actually attempted to rely on blockchain’s self-interest/self-protection to build a real system—you’d be in a real mess. Eight hundred years ago in Europe — with weak governments unable to enforce laws and trusted counterparties few, fragile and far between — theft was rampant, safe banking was a fantasy, and personal security was at the point of the sword. This is what Somalia looks like now, and also, what it looks like to transact on the blockchain in the ideal scenario. Somalia on purpose. That’s the vision. Nobody wants it! Even the most die-hard crypto enthusiasts prefer in practice to rely on trust rather than their own crypto-medieval systems. 93% of bitcoins are mined by managed consortiums, yet none of the consortiums use smart contracts to manage payouts. Instead, they promise things like a “long history of stable and accurate payouts.” Sounds like a trustworthy middleman! Same with Silk Road, a cryptocurrency-driven online drug bazaar. The key to Silk Road wasn’t the bitcoins (that was just to evade government detection), it was the reputation scores that allowed people to trust criminals. And the reputation scores weren’t tracked on a tamper-proof blockchain, they were tracked by a trusted middleman! If Ripple, Silk Road, Slush Pool, and the DAO all prefer “old way” systems of creating and enforcing trust, it’s no wonder that the outside world had not adopted trustless systems either! A decentralized, tamper-proof repository sounds like a great way to audit where your mango comes from, how fresh it is, and whether it has been sprayed with pesticides or not. But actually, laws on food labeling, nonprofit or government inspectors, an independent, trusted free press, empowered workers who trust whistleblower protections, credible grocery stores, your local nonprofit farmer’s market, and so on, do a way better job. People who actually care about food safety do not adopt blockchain because trusted is better than trustless. 
Blockchain’s technology mess exposes its metaphor mess — a software engineer pointing out that storing the data as a sequence of small hashed files won’t get the mango-pickers to accurately report whether they sprayed pesticides is also pointing out why peer-to-peer interaction with no regulations, norms, middlemen, or trusted parties is actually a bad way to empower people. Like the farmer’s market or the organic labeling standard, so many real ideas are hiding in plain sight. Do you wish there was a type of financial institution that was secure and well-regulated in all the traditional ways, but also had the integrity of being people-powered? A credit union’s members elect its directors, and the transaction-processing revenue is divided up among the members. Move your money! Prefer a deflationary monetary policy? Central bankers are appointed by elected leaders. Want to make elections more secure and democratic? Help write open source voting software, go out and register voters, or volunteer as an election observer here or abroad! Wish there was a trusted e-book delivery service that charged lower transaction fees and distributed more of the earnings to the authors? You can already consider stated payout rates when you buy music or books, buy directly from the authors, or start your own e-book site that’s even better than what’s out there! Projects based on the elimination of trust have failed to capture customers’ interest because trust is actually so damn valuable. A lawless and mistrustful world where self-interest is the only principle and paranoia is the only source of safety is not a paradise but a crypto-medieval hellhole. As a society, and as technologists and entrepreneurs in particular, we’re going to have to get good at cooperating — at building trust, and at being trustworthy. Instead of directing resources to the elimination of trust, we should direct our resources to the creation of trust—whether we use a long series of sequentially hashed files as our storage medium or not. Kai Stinchcombe coined the terms “crypto-medieval,” “futuristic integrity wand,” and “smart mango.” Please use freely: coining terms makes you a futurist. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Whatever the opposite of a futurist is " savedroid ICO,340,3,https://medium.com/@ico_8796/sneakpeek-the-savedroid-crypto-saving-app-part-1-your-wish-64d1f7308518?source=---------7----------------,#SNEAKPEEK The savedroid crypto saving app — Part #1: Your wish,"The international beta of our brand new crypto saving app is coming soon. The beta app will be launched in English and will be available exclusively for our ICO token buyers. Now, get ready to learn more about the savedroid crypto saving app even before its official release. Today, we give you a very first sneak peek of one of its core features: your wish. With savedroid you can save up for the personal goals you want to afford in the future. Your own lambo or your desired moon. Exactly that is your wish. So, using the savedroid crypto saving app is not just about piling up a fortune. It’s all about saving up for your personal wishes, which you are aspiring to fulfill but can’t afford right now. There are 3 simple steps to set up your wish in less than one minute: 1) What? First, name your wish and select one of our illustrations to always keep you motivated to continue saving. 
You can go small and save for your new pair of hipster sneakers or you may go big and start a crypto savings plan for your new family home. Everything is possible, only the moon is the limit — at least for now. 2) How much? Then set the amount you need to save up to afford your wish. The amount is denominated in fiat currency as it is the prevailing means of payment. By the way, that makes it a lot easier for you as you don’t need to do the math converting fiat to crypto and vice versa — this complex task is on us. 3) When? Finally, select the date by which you want to fulfil your wish. And you are done! That was easy. Just as easy as savedroid’s other features will be, as we deliver on our mission to democratize crypto and bring cryptocurrencies to the masses. To keep you posted on our latest product updates, we have started this new #SNEAKPEEK series. Here we will provide regular sneak peeks at our hottest new features. Stay tuned and follow our blog! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. The savedroid ICO: Cryptocurrencies for Everyone — now! Give Power to the People. Join the Revolution: https://ico.savedroid.com " Brandon Morelli,221,5,https://techburst.io/artificial-intelligence-top-10-articles-june-2018-4b3fa7572b46?source=---------8----------------,Artificial Intelligence Top 10 Articles — June 2018,"Here’s what’s trending this month in Artificial Intelligence. Topics include: Whether you’re experienced with Artificial Intelligence, or a newbie looking to learn the basics of AI, there’s something for everyone on this list. Disclosure: We receive compensation from the courses we feature. 4.3/5 Stars || 17 Hours of Video || 58,823 Students Build an AI that combines the power of Data Science, Machine Learning and Deep Learning to create powerful AI for Real-World applications. You will also have the chance to understand the story behind Artificial Intelligence. Learn More. 4.7/5 Stars || 8 Hours of Video || 15,063 Students Completely understand the relationship between reinforcement learning and psychology, on a technical level. Apply gradient-based supervised machine learning methods to reinforcement learning and implement 17 different reinforcement learning algorithms. Learn More. By Lance Ulanoff Have you heard about Google Duplex yet? It’s pretty much the talk all over the internet. Google CEO Sundar Pichai dropped his biggest bomb when he introduced Google Duplex. Take a look at this story to learn more. By Irhum Shafkat Understanding convolutions can often feel a bit unnerving, yet the concept is fascinatingly powerful and highly extensible. Let’s try to break down the mechanics of the convolution operation step by step and explore how it builds into a more powerful hierarchy. By WiseWolf Fund AI is already shaping the economy, and in the near future, its effect may be even more significant. Ignoring the new technology and its influence on the global economic situation is a recipe for failure. Read more of this article now! By Sam Drozdov Machine learning is a “field of study that gives computers the ability to learn without being explicitly programmed”. Learn the basics of machine learning and how to apply it to the products you are building right now. By Aman Dalmia Having the opportunity to interact with great minds is an awesome privilege; it adds to your knowledge and motivates you to avoid mistakes. By Simon Greenman Welcome to the AI gold rush! 
Check out this awesome article that talks about how companies and startups make money on AI and how it helps economic growth as well. By Justin Lee Is the chatbot hype over already? Find out why our industry massively overestimated the initial impact chatbots would have, and a lot more reasons why chatbots are no longer on trend. By Daniel Jeffries AI could mean the end of all jobs for most people and that’s just terrifying, right? Check out this topic to learn more about how AI will bring an explosion of new jobs. By George Seif Learn more about Google’s AutoML — a suite of machine learning tools that will allow one to easily train high-performance deep networks, without requiring the user to have any knowledge in AI. By James Loy Understand the inner workings of deep learning through Python with a neural network. Learn how to build and train a neural network from scratch. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Creator of @codeburstio — Frequently posting web development tutorials & articles. Follow me on Twitter too: @BrandonMorelli bursts of tech to power through your day " Sarthak Jain,3.9K,10,https://medium.com/nanonets/how-to-easily-detect-objects-with-deep-learning-on-raspberrypi-225f29635c74?source=---------9----------------,How to easily Detect Objects with Deep Learning on Raspberry Pi,"Disclaimer: I’m building nanonets.com to help build ML with less data and no hardware. The Raspberry Pi is a neat piece of hardware that has captured the hearts of a generation with ~15M devices sold, with hackers building even cooler projects on it. Given the popularity of Deep Learning and the Raspberry Pi Camera, we thought it would be nice if we could detect any object using Deep Learning on the Pi. Now you will be able to detect a photobomber in your selfie, someone entering Harambe’s cage, where someone kept the Sriracha or an Amazon delivery guy entering your house. 20M years of evolution have made human vision highly developed. The human brain has 30% of its neurons working on processing vision (as compared with 8 percent for touch and just 3 percent for hearing). Humans have two major advantages when compared with machines. One is stereoscopic vision, the second is an almost infinite supply of training data (an infant of 5 years has had approximately 2.7B images sampled at 30fps). To mimic human-level performance, scientists broke down the visual perception task into four different categories. Object detection has been good enough for a variety of applications (even though image segmentation is a much more precise result, it suffers from the complexity of creating training data. It typically takes a human annotator 12x more time to segment an image than to draw bounding boxes; this is more anecdotal and lacks a source). Also, after detecting objects, it is separately possible to segment the object from the bounding box. Object detection is of significant practical importance and has been used across a variety of industries. Some of the examples are mentioned below: Object detection can be used to answer a variety of questions. These are the broad categories: There are a variety of models/architectures that are used for object detection, each with trade-offs between speed, size, and accuracy. We picked one of the most popular ones: YOLO (You Only Look Once), and have shown how it works below in under 20 lines of code (if you ignore the comments). Note: This is pseudo code, not intended to be a working example. 
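The roughly 20 lines of pseudo code referred to above did not survive this text dump. As a stand-in, here is my own rough, runnable sketch in the same spirit (not the author's original): a dummy black-box CNN returns a grid of box, confidence, and class predictions, and we keep only the confident ones. The grid size, class count, and threshold are arbitrary assumptions.

```python
# Illustrative stand-in for the stripped YOLO-style pseudo code (not the author's code).
import numpy as np

GRID, NUM_CLASSES, THRESHOLD = 7, 20, 0.3

def run_cnn(image):
    """Stand-in for the CNN black box: per grid cell, predict
    [x, y, w, h, confidence, class scores...]."""
    return np.random.rand(GRID, GRID, 5 + NUM_CLASSES)

def detect(image):
    predictions = run_cnn(image)
    boxes = []
    for row in range(GRID):
        for col in range(GRID):
            x, y, w, h, confidence = predictions[row, col, :5]
            class_scores = predictions[row, col, 5:]
            best_class = int(np.argmax(class_scores))
            score = confidence * class_scores[best_class]
            if score > THRESHOLD:                      # keep only confident boxes
                boxes.append((best_class, float(score), (x, y, w, h)))
    return boxes   # a real implementation would also apply non-max suppression

print(len(detect(np.zeros((448, 448, 3)))))
```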
The black-box portion is the CNN itself, which is fairly standard and shown in the image below. You can read the full paper here: https://pjreddie.com/media/files/papers/yolo_1.pdf For this task, you probably need a few hundred images per object. Try to capture data as close as possible to the data you’re going to finally make predictions on. Draw bounding boxes on the images. You can use a tool like labelImg. You will typically need a few people who will be working on annotating your images. This is a fairly intensive and time-consuming task. You can read more about this at medium.com/nanonets/nanonets-how-to-use-deep-learning-when-you-have-limited-data-f68c0b512cab. You need a pretrained model so you can reduce the amount of data required to train. Without it, you might need a few hundred thousand images to train the model. You can find a bunch of pretrained models here. The process of training a model is unnecessarily difficult, so to simplify it we created a Docker image that makes it easy to train. To start training the model you can run: The Docker image has a run.sh script that can be called with the following parameters. You can find more details at: To train a model you need to select the right hyperparameters. Finding the right parameters The art of “Deep Learning” involves a little bit of trial and error to figure out which are the best parameters to get the highest accuracy for your model. There is some level of black magic associated with this, along with a little bit of theory. This is a great resource for finding the right parameters. Quantize Model (make it smaller to fit on a small device like the Raspberry Pi or Mobile) Small devices like mobile phones and the Raspberry Pi have very little memory and computation power. Training neural networks is done by applying many tiny nudges to the weights, and these small increments typically need floating point precision to work (though there are research efforts to use quantized representations here too). Taking a pre-trained model and running inference is very different. One of the magical qualities of Deep Neural Networks is that they tend to cope very well with high levels of noise in their inputs. Why Quantize? Neural network models can take up a lot of space on disk, with the original AlexNet being over 200 MB in float format for example. Almost all of that size is taken up with the weights for the neural connections, since there are often many millions of these in a single model. The nodes and weights of a neural network are originally stored as 32-bit floating point numbers. The simplest motivation for quantization is to shrink file sizes by storing the min and max for each layer, and then compressing each float value to an eight-bit integer. The size of the files is reduced by 75%. Code for Quantization: You need the Raspberry Pi camera live and working. Then capture a new image. For instructions on how to install it, check out this link. Download Model Once you’re done training the model you can download it onto your Pi. To export the model run: Then download the model onto the Raspberry Pi. Install TensorFlow on the Raspberry Pi Depending on your device you might need to change the installation a little. Run the model to predict on the new image The Raspberry Pi has constraints on both memory and compute (a version of TensorFlow compatible with the Raspberry Pi GPU is still not available). Therefore, it is important to benchmark how much time each of the models takes to make a prediction on a new image.
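The quantization snippet referenced above ("Code for Quantization:") was stripped from this copy. As a stand-in, here is a small NumPy sketch of the arithmetic just described (per-layer min/max linear quantization of float32 weights to eight-bit integers); it illustrates the idea, not the exact TensorFlow tooling the original post used.

```python
import numpy as np

def quantize_layer(weights):
    """Compress float32 weights to uint8, keeping only the layer's min and scale."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0                 # guard against a constant layer
    q = np.round((weights - w_min) / scale).astype(np.uint8)
    return q, w_min, scale                                 # 8-bit values plus two floats

def dequantize_layer(q, w_min, scale):
    """Approximate reconstruction used at inference time."""
    return q.astype(np.float32) * scale + w_min

weights = np.random.randn(256, 256).astype(np.float32)    # a fake layer of weights
q, w_min, scale = quantize_layer(weights)
error = np.abs(weights - dequantize_layer(q, w_min, scale)).max()
print(f"{q.nbytes} bytes vs {weights.nbytes} bytes (~75% smaller), max error {error:.4f}")
```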
We have removed the need to annotate images: we have expert annotators who will annotate your images for you. We automatically train the best model for you; to achieve this we run a battery of models with different parameters and select the best one for your data. NanoNets is entirely in the cloud and runs without using any of your hardware, which makes it much easier to use. Since devices like the Raspberry Pi and mobile phones were not built to run complex compute-heavy tasks, you can outsource the workload to our cloud, which does all of the compute for you. Get your free API Key from http://app.nanonets.com/user/api_key. Collect the images of the object you want to detect. You can annotate them either using our web UI (https://app.nanonets.com/ObjectAnnotation/?appId=YOUR_MODEL_ID) or use an open source tool like labelImg. Once you have the dataset ready in two folders, images (image files) and annotations (annotations for the image files), start uploading the dataset. Once the images have been uploaded, begin training the model. The model takes ~2 hours to train. You will get an email once the model is trained. In the meantime you can check the state of the model. Once the model is trained, you can make predictions using it. Founder & CEO @ NanoNets.com NanoNets: Machine Learning API " Dr. GP Pulipaka,2,6,https://medium.com/@gp_pulipaka/3-ways-to-apply-latent-semantic-analysis-on-large-corpus-text-on-macos-terminal-jupyterlab-colab-7b4dc3e1622?source=---------5----------------,"3 Ways to Apply Latent Semantic Analysis on Large-Corpus Text on macOS Terminal, JupyterLab, and...","Latent semantic analysis works on large-scale datasets to generate representations that discover insights through natural language processing. There are different approaches to performing latent semantic analysis at multiple levels, such as the document level, phrase level, and sentence level. Broadly, semantic analysis can be summarized as lexical semantics and the study of combining individual words into paragraphs or sentences. Lexical semantics classifies and decomposes lexical items. Lexical semantic structures are applied in different contexts to identify the differences and similarities between words. A generic term in a paragraph or a sentence is a hypernym, and hyponymy describes the relationship between such a generic term and its more specific instances (hyponyms). Homonyms share similar syntax or similar spelling and structure but have different, unrelated meanings. Book is an example of a homonym: it can mean something someone reads or the act of making a reservation, with similar spelling, form, and syntax, but a different definition. Polysemy is another phenomenon in which a single word is associated with multiple related senses and distinct meanings. The word polysemy comes from Greek and means many signs. Python provides the NLTK library to perform tokenization, chopping larger chunks of text into phrases or meaningful strings. Processing words through tokenization produces tokens. Word lemmatization converts words from their current inflected form into their base form. Latent semantic analysis Applying latent semantic analysis to large datasets of text and documents captures contextual meaning through mathematical and statistical computation over a large corpus of text.
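To make the tokenization, lemmatization, and SVD steps concrete, here is a minimal sketch using NLTK and scikit-learn. The tiny corpus, the TF-IDF weighting, and the use of TruncatedSVD are my own illustrative assumptions, not the article's exact pipeline.

```python
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

nltk.download("punkt")    # tokenizer models
nltk.download("wordnet")  # lemmatizer dictionary

docs = [
    "I booked a hotel room for the trip.",
    "She is reading a fascinating book about language.",
    "Latent semantic analysis maps documents into a low-rank semantic space.",
]

# Tokenization chops each document into tokens; lemmatization maps inflected
# forms such as "documents" back to the base form "document".
lemmatizer = WordNetLemmatizer()
tokens = [[lemmatizer.lemmatize(t.lower()) for t in word_tokenize(d)] for d in docs]
print(tokens[0])                     # tokens of the first document

# Term-document matrix followed by a truncated SVD: the low-rank approximation
# behind latent semantic analysis / latent semantic indexing.
tfidf = TfidfVectorizer()
A = tfidf.fit_transform(docs)        # rows are documents, columns are terms
lsa = TruncatedSVD(n_components=2)   # keep only two latent dimensions
doc_vectors = lsa.fit_transform(A)
print(doc_vectors)                   # each document as a 2-d vector in semantic space
```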
Many times, latent semantic analysis has outperformed human scores on subject-matter tests conducted by humans. The accuracy of latent semantic analysis is high as it reads through machine-readable documents and texts at web scale. Latent semantic analysis is a technique that applies singular value decomposition and principal component analysis (PCA). The document collection can be represented as a Z x Y matrix A, where the rows of the matrix represent the documents in the collection. For a typical large corpus of text, the matrix A can have many hundreds of thousands of rows and columns. Applying singular value decomposition is part of a set of operations dubbed matrix decomposition. Natural language processing in Python with the NLTK library applies a low-rank approximation to the term-document matrix. The low-rank approximation then aids in indexing and retrieving documents, a technique known as latent semantic indexing, by clustering the words in the documents. Brief overview of linear algebra The Z x Y matrix A contains real-valued, non-negative entries for the term-document matrix. The rank of the matrix is the number of linearly independent columns or rows in the matrix, so rank(A) ≤ min{Z, Y}. A square c x c matrix whose off-diagonal entries are all zero is a diagonal matrix; if, in addition, all c diagonal entries are one, it is the identity matrix of dimension c, denoted Ic. For a square Z x Z matrix A, a vector k that is not all zeroes, and a scalar λ, k is an eigenvector of A with eigenvalue λ if Ak = λk. Matrix decomposition factors such a square matrix into a product of matrices derived from its eigenvectors. This makes it possible to reduce the dimensionality of the word space from many dimensions down to two dimensions that can be viewed on a plot. Dimensionality reduction techniques such as principal component analysis and singular value decomposition hold critical relevance in natural language processing. The Zipfian nature of word frequencies in a document makes it difficult to determine the similarity of words in a static stage. Hence, eigendecomposition is a by-product of singular value decomposition, as the input document matrix is highly asymmetrical. Latent semantic analysis is a particular technique in semantic space for parsing through a document and identifying words with polysemy using the NLTK library. Resources such as punkt and wordnet have to be downloaded from NLTK. Deep Learning at scale with Google Colab notebooks Training machine learning or deep learning models on CPUs can take hours and can be pretty expensive in terms of time and the energy of the computing resources. Google built the Colab Notebooks environment for research and development purposes. It runs entirely in the cloud without requiring any additional hardware or software setup for each machine. It is essentially the equivalent of a Jupyter notebook and helps data scientists share Colab notebooks by storing them on Google Drive, just like any other Google Sheets or Docs files, in a collaborative environment. There are no additional costs associated with enabling a GPU on the runtime for acceleration. There are some challenges in uploading data into Colab, unlike a Jupyter notebook, which can access data directly from the local directories of the machine. In Colab, there are multiple options: files can be uploaded from the local file system, or Google Drive can be mounted through the Drive FUSE wrapper to load the data, as in the cell sketched below.
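The install cell for the Drive FUSE wrapper was stripped from this copy. A commonly used recipe looks roughly like the following Colab cell (the lines prefixed with ! are shell commands run in the Colab VM; the PPA and package names follow the widely circulated google-drive-ocamlfuse instructions and may need adjusting):

```python
# Install the google-drive-ocamlfuse FUSE wrapper inside the Colab VM.
# These package names are assumptions based on the common recipe, not the article's exact cell.
!add-apt-repository -y ppa:alessandro-strada/ppa
!apt-get update -qq
!apt-get install -y -qq google-drive-ocamlfuse fuse
```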
Once this step is complete, it shows the following log without errors: The next step is generating the authentication tokens to authenticate the Google credentials for Drive and Colab. If it shows successful retrieval of the access token, then Colab is all set. At this stage, the drive is not mounted yet, so accessing the contents of the text file will return false. Once the drive is mounted, Colab has access to the datasets from Google Drive. Once the files are accessible, Python can be executed just as it is in a Jupyter environment, and the Colab notebook displays the results in much the same way as a Jupyter notebook. PyCharm IDE The program can be compiled and run in the PyCharm IDE environment, or executed from the macOS Terminal. Results from the macOS Terminal Jupyter Notebook on a standalone machine Jupyter Notebook gives similar output when running the latent semantic analysis on the local machine: References Gorrell, G. (2006). Generalized Hebbian Algorithm for Incremental Singular Value Decomposition in Natural Language Processing. Retrieved from https://www.aclweb.org/anthology/E06-1013 Hardeniya, N. (2016). Natural Language Processing: Python and NLTK. Birmingham, England: Packt Publishing. Landauer, T. K., Foltz, P. W., Laham, D., & University of Colorado at Boulder (1998). An Introduction to Latent Semantic Analysis. Retrieved from http://lsa.colorado.edu/papers/dp1.LSAintro.pdf Stackoverflow (2018). Mounting Google Drive on Google Colab. Retrieved from https://stackoverflow.com/questions/50168315/mounting-google-drive-on-google-colab Stanford University (2009). Matrix decompositions and latent semantic indexing. Retrieved from https://nlp.stanford.edu/IR-book/html/htmledition/matrix-decompositions-and-latent-semantic-indexing-1.html Ganapathi Pulipaka | Founder and CEO @deepsingularity | Bestselling Author | Big data | IoT | Startups | SAP | MachineLearning | DeepLearning | DataScience " Gabriel Jiménez,50,5,https://medium.com/aimarketingassociation/chatbots-could-we-talk-edd6ccbd8f5a?source=---------7----------------,"Chatbots, could we talk? – AIMA: AI Marketing Magazine – Medium","After the euphoria for apps, the trend is reversing. Every day we download fewer new apps and keep only a few in constant use. A lot has happened since Apple in 2009 proclaimed that there was an app for everything. As in the next commercial: The chat boom According to the Internet Association’s 2017 report on the habits of Internet users in México, the second social network used by Mexicans is WhatsApp, a messaging app, and the first, although the report has no separate data, is Facebook, which also includes Facebook Messenger. Notably, both belong to Facebook, as does Instagram, which sits at position 5 on the list. The customer experience in areas such as support, attention, and navigating telephone menus, together with the transition we have made from voice calls to text messages, both for practicality and cost, has catalyzed the technological development of so-called virtual agents or chatbots to optimize resources and improve customer service. In an environment where an immediate response is the minimum that is expected, the best option to improve customer service at the lowest cost is through a chatbot. But what is a chatbot? It is a computer program that works either through rules or, in the most advanced cases, using artificial intelligence; the way to interact with it is via chat.
With rules Chatbots that work with rules have limited functionality and respond only to specific commands; if you do not write exactly what the bot expects, it does not understand you. With artificial intelligence On the other hand, assistants that use artificial intelligence can understand what you say however you write it, even if you write it incorrectly, abbreviate it, or use idiomatic expressions. They are also able to improve over time, learning the way people express themselves and phrase their requests. Context and memory Chatbots that use artificial intelligence can resume a previous conversation or, based on the context of the chat, move forward in a coherent manner. If, for example, we are looking for a movie to see at the cinema and the bot first asks us which cinema we want to go to and then which movie, and we later change the movie, the chatbot will assume that we are still talking about the same cinema unless we specify otherwise. The above may seem very simple to us as people, but for a chatbot, maintaining a coherent and fluid conversation is a huge achievement and one that brings great value. Channels A chatbot can be integrated into any chat application, whether a corporate one, your website, or a commercial platform like Facebook Messenger or WhatsApp. Limitations One of the challenges faced by chatbots is initial adoption; they may fail mainly for 3 reasons: 1. Not adequately delimiting the initial scope. We want the chatbot to resolve every possible issue: deal with complaints, provide support, sell, generate interaction with customers, report service status. This causes, as with any project, scope creep and endless requirements, which makes it seem that the project will never work appropriately. 2. It is not linked to an activity that solves a business problem. Bots are sometimes applied to trivial situations, or to situations with no relevant metric attached, so it is impossible to measure their effectiveness and quantify their benefits to the business. 3. Because it is a new technology, we tend to think that since it has intelligence it can answer any question outside the business context for which it was defined, again losing the initial focus and evaluating its performance outside the scope for which it was created. It is important to remember that although it has artificial intelligence, every bot has to go through a period of learning and evolution, and this takes time. The process is similar to a child's: when it begins to learn it makes mistakes, and there are terms or forms of expression it does not know, but as time passes it becomes more and more capable thanks to the experience it acquires with each conversation. The same happens with the chatbot. Hand over That is why there always has to be a process to redirect a conversation to a human operator when the chatbot is not able to respond satisfactorily; this way we keep the customer experience as a guiding principle and avoid frustrating people. Connection with systems The chatbot can give clients comprehensive attention through chat, but its capacity to do so also depends on how well it is integrated with the company's systems; without this, the service it provides will be incomplete and frustrating. For example, if we have a chatbot to schedule appointments, then in addition to understanding what people ask, it needs access to the scheduling system to check whether there is time available; without that access it will be limited and practically useless.
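As a toy illustration of the rule-based approach and the hand-over idea described above, here is a minimal keyword-matching bot in Python. The keywords and canned replies are invented for the example; a real deployment would sit behind a chat channel and a proper escalation queue.

```python
# A toy rule-based chatbot: fixed keyword rules plus a hand-over fallback.
RULES = {
    "price": "Our plans start at $10 per month.",
    "hours": "We are open Monday to Friday, 9am to 6pm.",
    "appointment": "I can book you in. What day works for you?",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:              # a rule matched: return the canned response
            return answer
    # No rule matched: hand the conversation over to a human operator.
    return "Let me connect you with a person who can help."

print(reply("What are your hours on Friday?"))
print(reply("My order arrived damaged!"))    # falls through to the human hand-over
```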
Applications The main change when using a chatbot is that instead of browsing websites, we can simply ask for what we want; it is even possible to get recommendations, based on a few questions, about what suits us best. Benefits To know about chatbots and artificial intelligence, write to me @gabojimenez_ or linkedin.com/in/gabrieljimenezmunoz/ CONSULTATIVE SELLING | AI FOR BUSINESS | CHATBOTS | ANALYTICS | SPEAKER | WRITER | TEACHER Driving the AI Marketing movement " Kai Stinchcombe,44K,11,https://medium.com/@kaistinchcombe/decentralized-and-trustless-crypto-paradise-is-actually-a-medieval-hellhole-c1ca122efdec?source=tag_archive---------0----------------,Blockchain is not only crappy technology but a bad vision for the future,"Blockchain is not only crappy technology but a bad vision for the future. Its failure to achieve adoption to date is because systems built on trust, norms, and institutions inherently function better than the type of no-need-for-trusted-parties systems blockchain envisions. That’s permanent: no matter how much blockchain improves it is still headed in the wrong direction. This December I wrote a widely-circulated article on the inapplicability of blockchain to any actual problem. People objected mostly not to the technology argument, but rather hoped that decentralization could produce integrity. Let’s start with this: Venmo is a free service to transfer dollars, and bitcoin transfers are not free. Yet after I wrote an article last December saying bitcoin had no use, someone responded that Venmo and Paypal are raking in consumers’ money and people should switch to bitcoin. What a surreal contrast between blockchain’s non-usefulness/non-adoption and the conviction of its believers! It’s so entirely evident that this person didn’t become a bitcoin enthusiast because they were looking for a convenient, free way to transfer money from one person to another and discovered bitcoin. In fact, I would assert that there is no single person in existence who had a problem they wanted to solve, discovered that an available blockchain solution was the best way to solve it, and therefore became a blockchain enthusiast. The number of retailers accepting cryptocurrency as a form of payment is declining, and its biggest corporate boosters like IBM, NASDAQ, Fidelity, Swift and Walmart have gone long on press but short on actual rollout. Even the most prominent blockchain company, Ripple, doesn’t use blockchain in its product. You read that right: the company Ripple decided the best way to move money across international borders was to not use Ripples. Why all the enthusiasm for something so useless in practice? People have made a number of implausible claims about the future of blockchain—like that you should use it for AI in place of the type of behavior-tracking that Google and Facebook do, for example. This is based on a misunderstanding of what a blockchain is. A blockchain isn’t an ethereal thing out there in the universe that you can “put” things into, it’s a specific data structure: a linear transaction log, typically replicated by computers whose owners (called miners) are rewarded for logging new transactions. There are two things that are cool about this particular data structure. One is that a change in any block invalidates every block after it, which means that you can’t tamper with historical transactions.
The second is that you only get rewarded if you’re working on the same chain as everyone else, so each participant has an incentive to go with the consensus. The end result is a shared definitive historical record. And, what’s more, because consensus is formed by each person acting in their own interest, adding a false transaction or working from a different history just means you’re not getting paid and everyone else is. Following the rules is mathematically enforced—no government or police force need come in and tell you the transaction you’ve logged is false (or extort bribes or bully the participants). It’s a powerful idea. So in summary, here’s what blockchain-the-technology is: “Let’s create a very long sequence of small files — each one containing a hash of the previous file, some new data, and the answer to a difficult math problem — and divide up some money every hour among anyone willing to certify and store those files for us on their computers.” Now, here’s what blockchain-the-metaphor is: “What if everyone keeps their records in a tamper-proof repository not owned by anyone?” An illustration of the difference: In 2006, Walmart launched a system to track its bananas and mangoes from field to store. In 2009 they abandoned it because of logistical problems getting everyone to enter the data, and in 2017 they re-launched it (to much fanfare) on blockchain. If someone comes to you with “the mango-pickers don’t like doing data entry,” “I know: let’s create a very long sequence of small files, each one containing a hash of the previous file” is a nonsense answer, but “What if everyone keeps their records in a tamper-proof repository not owned by anyone?” at least addresses the right question! People treat blockchain as a “futuristic integrity wand”—wave a blockchain at the problem, and suddenly your data will be valid. For almost anything people want to be valid, blockchain has been proposed as a solution. It’s true that tampering with data stored on a blockchain is hard, but it’s false that blockchain is a good way to create data that has integrity. To understand why this is the case, let’s work from the practical to the theoretical. For example, let’s consider a widely-proposed use case for blockchain: buying an e-book with a “smart” contract. The goal of the blockchain is, you don’t trust an e-book vendor and they don’t trust you (because you’re just two individuals on the internet), but, because it’s on blockchain, you’ll be able to trust the transaction. In the traditional system, once you pay you’re hoping you’ll receive the book, but once the vendor has your money they don’t have any incentive to deliver. You’re relying on Visa or Amazon or the government to make things fair—what a recipe for being a chump! In contrast, on a blockchain system, by executing the transaction as a record in a tamper-proof repository not owned by anyone, the transfer of money and digital product is automatic, atomic, and direct, with no middleman needed to arbitrate the transaction, dictate terms, and take a fat cut on the way. Isn’t that better for everybody? Hm. Perhaps you are very skilled at writing software. When the novelist proposes the smart contract, you take an hour or two to make sure that the contract will withdraw only an amount of money equal to the agreed-upon price, and that the book — rather than some other file, or nothing at all — will actually arrive. Auditing software is hard! 
The most-heavily scrutinized smart contract in history had a small bug that nobody noticed — that is, until someone did notice it, and used it to steal fifty million dollars. If cryptocurrency enthusiasts putting together a $150m investment fund can’t properly audit the software, how confident are you in your e-book audit? Perhaps you would rather write your own counteroffer software contract, in case this e-book author has hidden a recursion bug in their version to drain your ethereum wallet of all your life savings? It’s a complicated way to buy a book! It’s not trustless, you’re trusting in the software (and your ability to defend yourself in a software-driven world), instead of trusting other people. Another example: the purported advantages for a voting system in a weakly-governed country. “Keep your voting records in a tamper-proof repository not owned by anyone” sounds right — yet is your Afghan villager going to download the blockchain from a broadcast node and decrypt the Merkle root from his Linux command line to independently verify that his vote has been counted? Or will he rely on the mobile app of a trusted third party — like the nonprofit or open-source consortium administering the election or providing the software? These sound like stupid examples — novelists and villagers hiring e-bodyguard hackers to protect them from malicious customers and nonprofits whose clever smart-contracts might steal their money and votes?? — until you realize that’s actually the point. Instead of relying on trust or regulation, in the blockchain world, individuals are on-purpose responsible for their own security precautions. And if the software they use is malicious or buggy, they should have read the software more carefully. You actually see it over and over again. Blockchain systems are supposed to be more trustworthy, but in fact they are the least trustworthy systems in the world. Today, in less than a decade, three successive top bitcoin exchanges have been hacked, another is accused of insider trading, the demonstration-project DAO smart contract got drained, crypto price swings are ten times those of the world’s most mismanaged currencies, and bitcoin, the “killer app” of crypto transparency, is almost certainly artificially propped up by fake transactions involving billions of literally imaginary dollars. Blockchain systems do not magically make the data in them accurate or the people entering the data trustworthy, they merely enable you to audit whether it has been tampered with. A person who sprayed pesticides on a mango can still enter onto a blockchain system that the mangoes were organic. A corrupt government can create a blockchain system to count the votes and just allocate an extra million addresses to their cronies. An investment fund whose charter is written in software can still misallocate funds. How then, is trust created? In the case of buying an e-book, even if you’re buying it with a smart contract, instead of auditing the software you’ll rely on one of four things, each of them characteristics of the “old way”: either the author of the smart contract is someone you know of and trust, the seller of the e-book has a reputation to uphold, you or friends of yours have bought e-books from this seller in the past successfully, or you’re just willing to hope that this person will deal fairly. 
In each case, even if the transaction is effectuated via a smart contract, in practice you’re relying on trust of a counterparty or middleman, not your self-protective right to audit the software, each man an island unto himself. The contract still works, but the fact that the promise is written in auditable software rather than government-enforced English makes it less transparent, not more transparent. The same for the vote counting. Before blockchain can even get involved, you need to trust that voter registration is done fairly, that ballots are given only to eligible voters, that the votes are made anonymously rather than bought or intimidated, that the vote displayed by the balloting system is the same as the vote recorded, and that no extra votes are given to the political cronies to cast. Blockchain makes none of these problems easier and many of them harder—but more importantly, solving them in a blockchain context requires a set of awkward workarounds that undermine the core premise. So we know the entries are valid, let’s allow only trusted nonprofits to make entries—and you’re back at the good old “classic” ledger. In fact, if you look at any blockchain solution, inevitably you’ll find an awkward workaround to re-create trusted parties in a trustless world. Yet absent these “old way” factors—supposing you actually attempted to rely on blockchain’s self-interest/self-protection to build a real system—you’d be in a real mess. Eight hundred years ago in Europe — with weak governments unable to enforce laws and trusted counterparties few, fragile and far between — theft was rampant, safe banking was a fantasy, and personal security was at the point of the sword. This is what Somalia looks like now, and also, what it looks like to transact on the blockchain in the ideal scenario. Somalia on purpose. That’s the vision. Nobody wants it! Even the most die-hard crypto enthusiasts prefer in practice to rely on trust rather than their own crypto-medieval systems. 93% of bitcoins are mined by managed consortiums, yet none of the consortiums use smart contracts to manage payouts. Instead, they promise things like a “long history of stable and accurate payouts.” Sounds like a trustworthy middleman! Same with Silk Road, a cryptocurrency-driven online drug bazaar. The key to Silk Road wasn’t the bitcoins (that was just to evade government detection), it was the reputation scores that allowed people to trust criminals. And the reputation scores weren’t tracked on a tamper-proof blockchain, they were tracked by a trusted middleman! If Ripple, Silk Road, Slush Pool, and the DAO all prefer “old way” systems of creating and enforcing trust, it’s no wonder that the outside world had not adopted trustless systems either! A decentralized, tamper-proof repository sounds like a great way to audit where your mango comes from, how fresh it is, and whether it has been sprayed with pesticides or not. But actually, laws on food labeling, nonprofit or government inspectors, an independent, trusted free press, empowered workers who trust whistleblower protections, credible grocery stores, your local nonprofit farmer’s market, and so on, do a way better job. People who actually care about food safety do not adopt blockchain because trusted is better than trustless. 
Blockchain’s technology mess exposes its metaphor mess — a software engineer pointing out that storing the data as a sequence of small hashed files won’t get the mango-pickers to accurately report whether they sprayed pesticides is also pointing out why peer-to-peer interaction with no regulations, norms, middlemen, or trusted parties is actually a bad way to empower people. Like the farmer’s market or the organic labeling standard, so many real ideas are hiding in plain sight. Do you wish there was a type of financial institution that was secure and well-regulated in all the traditional ways, but also has the integrity of being people-powered? A credit union’s members elect its directors, and the transaction-processing revenue is divided up among the members. Move your money! Prefer a deflationary monetary policy? Central bankers are appointed by elected leaders. Want to make elections more secure and democratic? Help write open source voting software, go out and register voters, or volunteer as an election observer here or abroad! Wish there was a trusted e-book delivery service that charged lower transaction fees and distributed more of the earnings to the authors? You can already consider stated payout rates when you buy music or books, buy directly from the authors, or start your own e-book site that’s even better than what’s out there! Projects based on the elimination of trust have failed to capture customers’ interest because trust is actually so damn valuable. A lawless and mistrustful world where self-interest is the only principle and paranoia is the only source of safety is not a paradise but a crypto-medieval hellhole. As a society, and as technologists and entrepreneurs in particular, we’re going to have to get good at cooperating — at building trust, and at being trustworthy. Instead of directing resources to the elimination of trust, we should direct our resources to the creation of trust—whether we use a long series of sequentially hashed files as our storage medium or not. Kai Stinchcombe coined the terms “crypto-medieval” “futuristic integrity wand” and “smart mango.” Please use freely: coining terms makes you a futurist. Whatever the opposite of a futurist is " Dhruv Parthasarathy,4.3K,12,https://blog.athelas.com/a-brief-history-of-cnns-in-image-segmentation-from-r-cnn-to-mask-r-cnn-34ea83205de4?source=tag_archive---------1----------------,A Brief History of CNNs in Image Segmentation: From R-CNN to Mask R-CNN,"At Athelas, we use Convolutional Neural Networks (CNNs) for a lot more than just classification! In this post, we’ll see how CNNs can be used, with great results, in image instance segmentation. Ever since Alex Krizhevsky, Geoff Hinton, and Ilya Sutskever won ImageNet in 2012, Convolutional Neural Networks (CNNs) have become the gold standard for image classification. In fact, since then, CNNs have improved to the point where they now outperform humans on the ImageNet challenge! While these results are impressive, image classification is far simpler than the complexity and diversity of true human visual understanding. In classification, there’s generally an image with a single object as the focus and the task is to say what that image is (see above). But when we look at the world around us, we carry out far more complex tasks.
We see complicated sights with multiple overlapping objects, and different backgrounds and we not only classify these different objects but also identify their boundaries, differences, and relations to one another! Can CNNs help us with such complex tasks? Namely, given a more complicated image, can we use CNNs to identify the different objects in the image, and their boundaries? As has been shown by Ross Girshick and his peers over the last few years, the answer is conclusively yes. Through this post, we’ll cover the intuition behind some of the main techniques used in object detection and segmentation and see how they’ve evolved from one implementation to the next. In particular, we’ll cover R-CNN (Regional CNN), the original application of CNNs to this problem, along with its descendants Fast R-CNN, and Faster R-CNN. Finally, we’ll cover Mask R-CNN, a paper released recently by Facebook Research that extends such object detection techniques to provide pixel level segmentation. Here are the papers referenced in this post: Inspired by the research of Hinton’s lab at the University of Toronto, a small team at UC Berkeley, led by Professor Jitendra Malik, asked themselves what today seems like an inevitable question: Object detection is the task of finding the different objects in an image and classifying them (as seen in the image above). The team, comprised of Ross Girshick (a name we’ll see again), Jeff Donahue, and Trevor Darrel found that this problem can be solved with Krizhevsky’s results by testing on the PASCAL VOC Challenge, a popular object detection challenge akin to ImageNet. They write, Let’s now take a moment to understand how their architecture, Regions With CNNs (R-CNN) works. Understanding R-CNN The goal of R-CNN is to take in an image, and correctly identify where the main objects (via a bounding box) in the image. But how do we find out where these bounding boxes are? R-CNN does what we might intuitively do as well - propose a bunch of boxes in the image and see if any of them actually correspond to an object. R-CNN creates these bounding boxes, or region proposals, using a process called Selective Search which you can read about here. At a high level, Selective Search (shown in the image above) looks at the image through windows of different sizes, and for each size tries to group together adjacent pixels by texture, color, or intensity to identify objects. Once the proposals are created, R-CNN warps the region to a standard square size and passes it through to a modified version of AlexNet (the winning submission to ImageNet 2012 that inspired R-CNN), as shown above. On the final layer of the CNN, R-CNN adds a Support Vector Machine (SVM) that simply classifies whether this is an object, and if so what object. This is step 4 in the image above. Improving the Bounding Boxes Now, having found the object in the box, can we tighten the box to fit the true dimensions of the object? We can, and this is the final step of R-CNN. R-CNN runs a simple linear regression on the region proposal to generate tighter bounding box coordinates to get our final result. Here are the inputs and outputs of this regression model: So, to summarize, R-CNN is just the following steps: R-CNN works really well, but is really quite slow for a few simple reasons: In 2015, Ross Girshick, the first author of R-CNN, solved both these problems, leading to the second algorithm in our short history - Fast R-CNN. Let’s now go over its main insights. 
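To make that contrast concrete before diving in, here is a small runnable sketch of why R-CNN is slow and what Fast R-CNN changes: R-CNN runs the network once per proposal, while Fast R-CNN runs it once per image and pools the shared feature map. The "CNN" here is a fake stride-8 downsample and the proposal boxes are made up, so only the call pattern is meaningful.

```python
import numpy as np

# Schematic comparison of where R-CNN and Fast R-CNN spend their compute.
# fake_cnn is a toy stand-in for a convolutional backbone; only the number of
# times it is called matters here, not what it computes.
def fake_cnn(pixels):
    return pixels[::8, ::8]                      # pretend convolutional feature map

image = np.random.rand(224, 224, 3)
proposals = [(x, y, 64, 64) for x in range(0, 160, 16) for y in range(0, 160, 16)]  # 100 boxes

# R-CNN: crop every region proposal and push each crop through the CNN separately
# (~2000 proposals per image in the original paper).
rcnn_features = [fake_cnn(image[y:y + h, x:x + w]).mean(axis=(0, 1))
                 for (x, y, w, h) in proposals]

# Fast R-CNN: one CNN pass over the whole image, then RoI pooling selects and
# pools the matching patch of the shared feature map for every proposal.
feature_map = fake_cnn(image)                    # the single forward pass
fast_features = [feature_map[y // 8:(y + h) // 8, x // 8:(x + w) // 8].mean(axis=(0, 1))
                 for (x, y, w, h) in proposals]

print(f"{len(proposals)} proposals: {len(proposals)} CNN passes for R-CNN vs 1 for Fast R-CNN")
```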
Fast R-CNN Insight 1: RoI (Region of Interest) Pooling For the forward pass of the CNN, Girshick realized that for each image, a lot of proposed regions for the image invariably overlapped causing us to run the same CNN computation again and again (~2000 times!). His insight was simple — Why not run the CNN just once per image and then find a way to share that computation across the ~2000 proposals? This is exactly what Fast R-CNN does using a technique known as RoIPool (Region of Interest Pooling). At its core, RoIPool shares the forward pass of a CNN for an image across its subregions. In the image above, notice how the CNN features for each region are obtained by selecting a corresponding region from the CNN’s feature map. Then, the features in each region are pooled (usually using max pooling). So all it takes us is one pass of the original image as opposed to ~2000! Fast R-CNN Insight 2: Combine All Models into One Network The second insight of Fast R-CNN is to jointly train the CNN, classifier, and bounding box regressor in a single model. Where earlier we had different models to extract image features (CNN), classify (SVM), and tighten bounding boxes (regressor), Fast R-CNN instead used a single network to compute all three. You can see how this was done in the image above. Fast R-CNN replaced the SVM classifier with a softmax layer on top of the CNN to output a classification. It also added a linear regression layer parallel to the softmax layer to output bounding box coordinates. In this way, all the outputs needed came from one single network! Here are the inputs and outputs to this overall model: Even with all these advancements, there was still one remaining bottleneck in the Fast R-CNN process — the region proposer. As we saw, the very first step to detecting the locations of objects is generating a bunch of potential bounding boxes or regions of interest to test. In Fast R-CNN, these proposals were created using Selective Search, a fairly slow process that was found to be the bottleneck of the overall process. In the middle 2015, a team at Microsoft Research composed of Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, found a way to make the region proposal step almost cost free through an architecture they (creatively) named Faster R-CNN. The insight of Faster R-CNN was that region proposals depended on features of the image that were already calculated with the forward pass of the CNN (first step of classification). So why not reuse those same CNN results for region proposals instead of running a separate selective search algorithm? Indeed, this is just what the Faster R-CNN team achieved. In the image above, you can see how a single CNN is used to both carry out region proposals and classification. This way, only one CNN needs to be trained and we get region proposals almost for free! The authors write: Here are the inputs and outputs of their model: How the Regions are Generated Let’s take a moment to see how Faster R-CNN generates these region proposals from CNN features. Faster R-CNN adds a Fully Convolutional Network on top of the features of the CNN creating what’s known as the Region Proposal Network. The Region Proposal Network works by passing a sliding window over the CNN feature map and at each window, outputting k potential bounding boxes and scores for how good each of those boxes is expected to be. What do these k boxes represent? Intuitively, we know that objects in an image should fit certain common aspect ratios and sizes. 
For instance, we know that we want some rectangular boxes that resemble the shapes of humans. Likewise, we know we won’t see many boxes that are very very thin. In such a way, we create k such common aspect ratios we call anchor boxes. For each such anchor box, we output one bounding box and score per position in the image. With these anchor boxes in mind, let’s take a look at the inputs and outputs to this Region Proposal Network: We then pass each such bounding box that is likely to be an object into Fast R-CNN to generate a classification and tightened bounding boxes. So far, we’ve seen how we’ve been able to use CNN features in many interesting ways to effectively locate different objects in an image with bounding boxes. Can we extend such techniques to go one step further and locate exact pixels of each object instead of just bounding boxes? This problem, known as image segmentation, is what Kaiming He and a team of researchers, including Girshick, explored at Facebook AI using an architecture known as Mask R-CNN. Much like Fast R-CNN, and Faster R-CNN, Mask R-CNN’s underlying intuition is straight forward. Given that Faster R-CNN works so well for object detection, could we extend it to also carry out pixel level segmentation? Mask R-CNN does this by adding a branch to Faster R-CNN that outputs a binary mask that says whether or not a given pixel is part of an object. The branch (in white in the above image), as before, is just a Fully Convolutional Network on top of a CNN based feature map. Here are its inputs and outputs: But the Mask R-CNN authors had to make one small adjustment to make this pipeline work as expected. RoiAlign - Realigning RoIPool to be More Accurate When run without modifications on the original Faster R-CNN architecture, the Mask R-CNN authors realized that the regions of the feature map selected by RoIPool were slightly misaligned from the regions of the original image. Since image segmentation requires pixel level specificity, unlike bounding boxes, this naturally led to inaccuracies. The authors were able to solve this problem by cleverly adjusting RoIPool to be more precisely aligned using a method known as RoIAlign. Imagine we have an image of size 128x128 and a feature map of size 25x25. Let’s imagine we want features the region corresponding to the top-left 15x15 pixels in the original image (see above). How might we select these pixels from the feature map? We know each pixel in the original image corresponds to ~ 25/128 pixels in the feature map. To select 15 pixels from the original image, we just select 15 * 25/128 ~= 2.93 pixels. In RoIPool, we would round this down and select 2 pixels causing a slight misalignment. However, in RoIAlign, we avoid such rounding. Instead, we use bilinear interpolation to get a precise idea of what would be at pixel 2.93. This, at a high level, is what allows us to avoid the misalignments caused by RoIPool. Once these masks are generated, Mask R-CNN combines them with the classifications and bounding boxes from Faster R-CNN to generate such wonderfully precise segmentations: If you’re interested in trying out these algorithms yourselves, here are relevant repositories: Faster R-CNN Mask R-CNN In just 3 years, we’ve seen how the research community has progressed from Krizhevsky et. al’s original result to R-CNN, and finally all the way to such powerful results as Mask R-CNN. Seen in isolation, results like Mask R-CNN seem like incredible leaps of genius that would be unapproachable. 
Yet, through this post, I hope you’ve seen how such advancements are really the sum of intuitive, incremental improvements through years of hard work and collaboration. None of the ideas proposed by R-CNN, Fast R-CNN, Faster R-CNN, and finally Mask R-CNN was necessarily a quantum leap, yet their sum has led to really remarkable results that bring us closer to a human-level understanding of sight. What particularly excites me is that the time between R-CNN and Mask R-CNN was just three years! With continued funding, focus, and support, how much further can Computer Vision improve over the next three years? If you see any errors or issues in this post, please contact me at dhruv@getathelas.com and I’ll immediately correct them! If you’re interested in applying such techniques, come join us at Athelas where we apply Computer Vision to blood diagnostics daily: Other posts we’ve written: Thanks to Bharath Ramsundar, Pranav Ramkrishnan, Tanay Tandon, and Oliver Cameron for help with this post! @dhruvp. VP Eng @Athelas. MIT Math and CS Undergrad ’13. MIT CS Masters ’14. Previously: Director of AI Programs @ Udacity. Blood Diagnostics through Deep Learning http://athelas.com " Slav Ivanov,3.9K,17,https://blog.slavv.com/the-1700-great-deep-learning-box-assembly-setup-and-benchmarks-148c5ebe6415?source=tag_archive---------2----------------,"The $1700 great Deep Learning box: Assembly, setup and benchmarks","Updated April 2018: Uses CUDA 9, cuDNN 7 and Tensorflow 1.5. After years of using a thin client in the form of increasingly thinner MacBooks, I had gotten used to it. So when I got into Deep Learning (DL), I went straight for the then brand-new Amazon P2 cloud servers: no upfront cost, the ability to train many models simultaneously, and the general coolness of having a machine learning model out there slowly teaching itself. However, as time passed, the AWS bills steadily grew larger, even as I switched to 10x cheaper Spot instances. Also, I didn’t find myself training more than one model at a time. Instead, I’d go to lunch/workout/etc. while the model was training, and come back later with a clear head to check on it. But eventually the model complexity grew and took longer to train. I’d often forget what I did differently on the model that had just completed its 2-day training. Nudged by the great experiences of the other folks on the Fast.AI Forum, I decided to settle down and get a dedicated DL box at home. The most important reason was saving time while prototyping models — if they trained faster, the feedback time would be shorter. Thus it would be easier for my brain to connect the dots between the assumptions I had for the model and its results. Then I wanted to save money — I was using Amazon Web Services (AWS), which offered P2 instances with Nvidia K80 GPUs. Lately, the AWS bills were around $60–70/month with a tendency to get larger. Also, it is expensive to store large datasets, like ImageNet. And lastly, I haven’t had a desktop for over 10 years and wanted to see what has changed in the meantime (spoiler alert: mostly nothing). What follows are my choices, inner monologue, and gotchas: from choosing the components to benchmarking. A sensible budget for me would be about 2 years’ worth of my current compute spending. At $70/month for AWS, this put it at around $1700 for the whole thing. You can check out all the components used.
The PC Part Picker site is also really helpful in detecting if some of the components don’t play well together. The GPU is the most crucial component in the box. It will train these deep networks fast, shortening the feedback cycle. Disclosure: The following are affiliate links, to help me pay for, well, more GPUs. The choice is between a few of Nvidia’s cards: GTX 1070, GTX 1070 Ti, GTX 1080, GTX 1080 Ti and finally the Titan X. The prices might fluctuate, especially because some GPUs are great for cryptocurrency mining (wink, 1070, wink). On the performance side: the GTX 1080 Ti and Titan X are similar. Roughly speaking, the GTX 1080 is about 25% faster than the GTX 1070, and the GTX 1080 Ti is about 30% faster than the GTX 1080. The new GTX 1070 Ti is very close in performance to the GTX 1080. Tim Dettmers has a great article on picking a GPU for Deep Learning, which he regularly updates as new cards come on the market. Here are the things to consider when picking a GPU: Considering all of this, I picked the GTX 1080 Ti, mainly for the training speed boost. I plan to add a second 1080 Ti soonish. Even though the GPU is the MVP in deep learning, the CPU still matters. For example, data preparation is usually done on the CPU. The number of cores and threads per core is important if we want to parallelize all that data prep. To stay on budget, I picked a mid-range CPU, the Intel i5 7500. It’s relatively cheap but good enough to not slow things down. Edit: As a few people have pointed out: “probably the biggest gotcha that is unique to DL/multi-GPU is to pay attention to the PCIe lanes supported by the CPU/motherboard” (by Andrej Karpathy). We want each GPU to have 16 PCIe lanes so it eats data as fast as possible (16 GB/s for PCIe 3.0). This means that for two cards we need 32 PCIe lanes. However, the CPU I picked has only 16 lanes, so 2 GPUs would run in 2x8 mode (instead of 2x16). This might be a bottleneck, leading to less than ideal utilization of the graphics cards. Thus a CPU with 40 lanes is recommended. Edit 2: However, Tim Dettmers points out that having 8 lanes per card should only decrease performance by “0–10%” for two GPUs. So currently, my recommendation is: go with 16 PCIe lanes per video card unless it gets too expensive for you. Otherwise, 8 lanes should do as well. A good solution for a double-GPU machine would be an Intel Xeon processor like the E5–1620 v4 (40 PCIe lanes). Or if you want to splurge, go for a higher-end processor like the desktop i7–6850K. Memory (RAM) It’s nice to have a lot of memory if we are to be working with rather big datasets. I got 2 sticks of 16 GB, for a total of 32 GB of RAM, and plan to buy another 32 GB later. Following Jeremy Howard’s advice, I got a fast SSD disk to keep my OS and current data on, and then a slow spinning HDD for those huge datasets (like ImageNet). SSD: I remember when I got my first Macbook Air years ago, how blown away I was by the SSD speed. To my delight, a new generation of SSD called NVMe has made its way to market in the meantime. A 480 GB MyDigitalSSD NVMe drive was a great deal. This baby copies files at gigabytes per second. HDD: 2 TB Seagate. While SSDs have been getting fast, HDDs have been getting cheap. To somebody who has used Macbooks with a 128 GB disk for the last 7 years, having this much space feels almost obscene.
The one thing that I kept in mind when picking a motherboard was the ability to support two GTX 1080 Ti cards, both in the number of PCI Express lanes (the minimum is 2x8) and in the physical space for 2 cards. Also, make sure it’s compatible with the chosen CPU. An Asus TUF Z270 did it for me. The MSI X99A SLI PLUS should work great if you got an Intel Xeon CPU. Rule of thumb: the power supply should provide enough juice for the CPU and the GPUs, plus 100 watts extra. The Intel i5 7500 processor uses 65W, and the GPUs (1080 Ti) need 250W each, so I got a Deepcool 750W Gold PSU (currently unavailable, the EVGA 750 GQ is similar). The “Gold” here refers to the power efficiency, i.e. how much of the power consumed is wasted as heat. The case should be the same form factor as the motherboard. Also, having enough LEDs to embarrass a Burner is a bonus. A friend recommended the Thermaltake N23 case, which I promptly got. No LEDs sadly. Here is how much I spent on all the components (your costs may vary): $700 GTX 1080 Ti + $190 CPU + $230 RAM + $230 SSD + $66 HDD + $130 Motherboard + $75 PSU + $50 Case = $1671 total. Adding tax and fees, this nicely matches my preset budget of $1700. If you don’t have much experience with hardware and fear you might break something, professional assembly might be the best option. However, this was a great learning opportunity that I couldn’t pass up (even though I’ve had my share of hardware-related horror stories). The first and important step is to read the installation manuals that came with each component. Especially important for me, as I’ve done this before once or twice, and I have just the right amount of inexperience to mess things up. This is done before installing the motherboard in the case. Next to the processor there is a lever that needs to be pulled up. The processor is then placed on the base (double-check the orientation). Finally, the lever comes down to fix the CPU in place. But I had quite a bit of difficulty doing this: once the CPU was in position the lever wouldn’t go down. I actually had a more hardware-capable friend of mine walk me through the process over video. It turns out the amount of force required to get the lever locked down was more than what I was comfortable with. Next is fixing the fan on top of the CPU: the fan legs must be fully secured to the motherboard. Consider where the fan cable will go before installing. The processor I had came with thermal paste. If yours doesn’t, make sure to put some paste between the CPU and the cooling unit. Also, replace the paste if you take off the fan. I put the Power Supply Unit (PSU) in before the motherboard to get the power cables snugly placed in the back side of the case. Installing the motherboard is pretty straightforward — carefully place it and screw it in. A magnetic screwdriver was really helpful. Then connect the power cables and the case buttons and LEDs. The SSD is easy: just slide it into the M.2 slot and screw it in. Piece of cake. The memory proved quite hard to install, requiring too much effort to properly lock in. A few times I almost gave up, thinking I must be doing it wrong. Eventually one of the sticks clicked in and the other one promptly followed. At this point, I turned the computer on to make sure it works. To my relief, it started right away! Finally, the GPU slid in effortlessly. 14 pins of power later and it was running. NB: Do not plug your monitor into the external card right away. Most probably it needs drivers to function (see below). Finally, it’s complete! Now that we have the hardware in place, only the software part remains.
Out with the screwdriver, in with the keyboard. Note on dual booting: If you plan to install Windows (because, you know, for benchmarks, totally not for gaming), it would be wise to do Windows first and Linux second. I didn’t and had to reinstall Ubuntu because Windows messed up the boot partition. Livewire has a detailed article on dual boot. Most DL frameworks are designed to work on Linux first, and eventually support other operating systems. So I went for Ubuntu, my default Linux distribution. An old 2GB USB drive was lying around and worked great for the installation. UNetbootin (OSX) or Rufus (Windows) can prepare the Linux thumb drive. The default options worked fine during the Ubuntu install. At the time of writing, Ubuntu 17.04 had just been released, so I opted for the previous version (16.04), whose quirks are much better documented online. Ubuntu Server or Desktop: The Server and Desktop editions of Ubuntu are almost identical, with the notable exception of the visual interface (called X) not being installed with Server. I installed the Desktop and disabled autostarting X so that the computer would boot in terminal mode. If needed, one could launch the visual desktop later by typing startx. Let’s get our install up to date. From Jeremy Howard’s excellent install-gpu script: To deep learn on our machine, we need a stack of technologies to use our GPU: Download CUDA from Nvidia, or just run the code below: Updated to specify version 9 of CUDA. Thanks to @zhanwenchen for the tip. If you need to add later versions of CUDA, click here. After CUDA has been installed, the following code will add the CUDA installation to the PATH variable: Now we can verify that CUDA has been installed successfully (for example with nvidia-smi). This should have installed the display driver as well. For me, nvidia-smi showed ERR as the device name, so I installed the latest Nvidia drivers (as of May 2018) to fix it: Removing CUDA/Nvidia drivers If at any point the drivers or CUDA seem broken (as they did for me — multiple times), it might be better to start over by running: Since version 1.5 Tensorflow supports CuDNN 7, so we install that. To download CuDNN, one needs to register for a (free) developer account. After downloading, install with the following: Anaconda is a great package manager for Python. I’ve moved to Python 3.6, so I will be using the Anaconda 3 version: The popular DL framework by Google. Installation: Validate the Tensorflow install: To make sure we have our stack running smoothly, I like to run the Tensorflow MNIST example: We should see the loss decreasing during training: Keras is a great high-level neural networks framework, an absolute pleasure to work with. Installation couldn’t be easier: PyTorch is a newcomer in the world of DL frameworks, but its API is modeled on the successful Torch, which was written in Lua. PyTorch feels new and exciting, mostly great, although some things are still to be implemented. We install it by running: Jupyter is a web-based IDE for Python, which is ideal for data sciency tasks. It’s installed with Anaconda, so we just configure and test it: Now if we open http://localhost:8888 we should see a Jupyter screen. Run Jupyter on boot Rather than running the notebook every time the computer is restarted, we can set it to autostart on boot. We will use crontab to do this, which we can edit by running crontab -e. An example entry is sketched below.
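The original post's crontab line was lost in this copy; an illustrative entry could look like the following (the Anaconda path and notebook directory are placeholders, not the article's actual values):

```
# Start Jupyter at boot; adjust the paths to your own Anaconda install and notebook folder.
@reboot /home/myuser/anaconda3/bin/jupyter notebook --notebook-dir=/home/myuser/notebooks
```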
Run Jupyter on boot: rather than starting the notebook manually every time the computer is restarted, we can set it to autostart on boot. We will use crontab to do this, which we can edit by running crontab -e. Then add an entry for the notebook after the last line in the crontab file. I use my old trusty Macbook Air for development, so I'd like to be able to log into the DL box both from my home network and when on the road. SSH key: it's much more secure to use an SSH key to log in instead of a password. Digital Ocean has a great guide on how to set this up. SSH tunnel: if you want to access your Jupyter notebook from another computer, the recommended way is to use SSH tunneling (instead of opening the notebook to the world and protecting it with a password). To connect over the SSH tunnel, run the tunneling script on the client. To test this, open a browser and try http://localhost:8888 from the remote machine. Your Jupyter notebook should appear. Set up out-of-network access: finally, to access the DL box from the outside world we need three more things; since setting up out-of-network access depends on the router/network setup, I'm not going into details. Now that we have everything running smoothly, let's put it to the test. We'll be comparing the newly built box to an AWS P2.xlarge instance, which is what I've used so far for DL. The tests are computer vision related, meaning convolutional networks with a fully connected model thrown in. We time training models on: the AWS P2 instance GPU (K80), the AWS P2 virtual CPU, the GTX 1080 Ti and the Intel i5 7500 CPU. Andres Hernandez points out that my comparison does not use a Tensorflow build that is optimized for these CPUs, which would have helped them perform better. Check his insightful comment for more details. MNIST is the "Hello World" of computer vision: a database of 70,000 handwritten digits. We run the Keras example on MNIST, which uses a Multilayer Perceptron (MLP). MLP means that we are using only fully connected layers, not convolutions. The model is trained for 20 epochs on this dataset and achieves over 98% accuracy out of the box. We see that the GTX 1080 Ti is 2.4 times faster than the K80 on AWS P2 in training the model. This is rather surprising, as these two cards should have about the same performance. I believe this is because of the virtualization or underclocking of the K80 on AWS. The CPUs perform 9 times slower than the GPUs. As we will see later, this is a really good result for the processors; it is due to the small model, which fails to fully utilize the parallel processing power of the GPUs. Interestingly, the desktop Intel i5-7500 achieves a 2.3x speedup over the virtual CPU on Amazon. Next, a VGG net is finetuned for the Kaggle Dogs vs. Cats competition, in which we need to tell apart pictures of dogs and cats. Running the model on CPUs for the same number of batches wasn't feasible, so we finetune for 390 batches (1 epoch) on the GPUs and 10 batches on the CPUs. The code used is on github. The 1080 Ti is 5.5 times faster than the AWS GPU (K80). The difference in CPU performance is about the same as in the previous experiment (the i5 is 2.6x faster). However, it's absolutely impractical to use CPUs for this task, as the CPUs were taking ~200x more time on this large model, which includes 16 convolutional layers and a couple of semi-wide (4096-unit) fully connected layers on top. A GAN (Generative Adversarial Network) is a way to train a model to generate images. A GAN achieves this by pitting two networks against each other: a Generator, which learns to create better and better images, and a Discriminator, which tries to tell which images are real and which are dreamt up by the Generator. 
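To make that duel concrete, here is a toy training loop on one-dimensional synthetic data; it is purely my illustrative sketch in PyTorch, not the benchmark code discussed below.

    import torch
    import torch.nn as nn

    # A toy GAN: G maps noise to a scalar "sample", D scores how real a sample looks.
    # The "real" data is just draws from a normal distribution centered at 3.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) + 3.0                 # real samples: N(3, 1)
        fake = G(torch.randn(64, 8))                    # generated samples

        # Discriminator step: label real as 1, fake as 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator step: try to make D call the fakes real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

    # As G improves, its samples should drift toward the real mean of ~3.
    print(G(torch.randn(1000, 8)).mean().item())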
The Wasserstein GAN is an improvement over the original GAN. We will use a PyTorch implementation that is very similar to the one by the WGAN author. The models are trained for 50 steps, and the loss is all over the place, which is often the case with GANs. CPUs aren't considered. The GTX 1080 Ti finishes 5.5x faster than the AWS P2 K80, which is in line with the previous results. The final benchmark is on the original Style Transfer paper (Gatys et al.), implemented on Tensorflow (code available). Style Transfer is a technique that combines the style of one image (a painting, for example) and the content of another image. Check out my previous post for more details on how Style Transfer works. The GTX 1080 Ti outperforms the AWS K80 by a factor of 4.3. This time the CPUs are 30-50 times slower than the graphics cards. The slowdown is less than on the VGG finetuning task but more than on the MNIST perceptron experiment. The model uses mostly the earlier layers of the VGG network, and I suspect this was too shallow to fully utilize the GPUs. The DL box is in the next room and a large model is training on it. Was it a wise investment? Time will tell, but it is beautiful to watch the glowing LEDs in the dark and to hear its quiet hum as models try to squeeze out that extra accuracy percentage point. Entrepreneur / Hacker Machine learning, Deep learning and other types of learning. " Tyler Elliot Bettilyon,17.9K,13,https://medium.com/@TebbaVonMathenstien/are-programmers-headed-toward-another-bursting-bubble-528e30c59a0e?source=tag_archive---------3----------------,Are Programmers Headed Toward Another Bursting Bubble?,"A friend of mine recently posed a question that I've heard many times in varying forms and forums: "Do you think IT and some lower-level programming jobs are going to go the way of the dodo? Seems a bit like a massive job bubble that's gonna burst. It's my opinion that one of the only things keeping tech and lower-level computer science-related jobs "prestigious" and well-paid is ridiculous industry jargon and public ignorance about computers, which are both going to go away in the next 10 years. [...]" This question is simultaneously on point about the future of technology jobs and exemplary of some pervasive misunderstandings regarding the field of software engineering. While it's true that there is a great deal of "ridiculous industry jargon", there are equally many genuinely difficult problems waiting to be solved by those with the right skill-set. Some software jobs are definitely going away, but programmers with the right experience and knowledge will continue to be prestigious and well remunerated for many years to come; as an example, look at the recent explosion of AI researcher salaries and the corresponding dearth of available talent. Staying relevant in the ever-changing technology landscape can be a challenge. By looking at the technologies that are replacing programmers in the status quo, we should be able to predict what jobs might disappear from the market. Additionally, to predict how salaries and demand for specific skills might change, we should consider the growing body of people learning to program. As Hannah pointed out, "public ignorance" about computers is keeping wages high for those who can program, and the public is becoming more computer savvy each year. The fear of automation replacing jobs is neither new nor unfounded. 
In any field, and especially in technology, market forces drive corporations toward automation and commodification. Gartner’s Hype Cycles are one way of contextualizing this phenomenon. As time goes on, specific ideas and technologies push towards the “plateau of productivity” where they are eventually automated. Looking at history one must conclude that automation has the power to destroy specific job markets. In diverse industries ranging from crop harvesting to automobile assembly technology advances have consistently replaced and augmented human labor to reduce costs. A professor once put it this way in his compilers course, “take historical note of textile and steel industries: do you want to build machines and tools, or do you want to operate those machines?” In this metaphor the “machine” is a computer programming language. This professor was really asking: Do you want to build websites using JavaScript, or do you want to build the V8 engine that powers JavaScript? The creation of websites is being automated by WordPress (and others) today. V8 on the other hand has a growing body of competitors some of whom are solving open research questions. Languages will come and go (how many Fortran job openings are there?) but there will always be someone building the next language. Lucky for us, programming language implementations are written with programming languages themselves. Being a “machine operator” in software puts you on the path to being a “machine creator” in a way which was not true of the steel mill workers of the past. The growing number of languages, interpreters, and compilers shows us that every job-destroying machine also brings with it new opportunities to improve those machines, maintain those machines, and so forth. Despite the growing body of jobs which no longer exist, there has yet to be a moment in history where humanity has collectively said, “I guess there isn’t any work left for us to do.” Commodification is coming for us all, not just software engineers. Throughout history, human labor has consistently been replaced with non-humans or augmented to require fewer and less skilled humans. Self-driving cars and trucks are the flavor of the week in this grand human tradition. If the cycle of creation and automation are a fact of life, the natural question to answer next is: which jobs and industries are at risk, and which are not? AWS, Heroku, and other similar hosting platforms have forever changed the role of the System Administrator/DevOps engineer. Internet businesses used to absolutely need their own server master. Someone who was well versed in Linux; someone who could configure a server with Apache or NGINX; someone who could not only physically wire up the server, the routers, and all the other physical components, but who could also configure the routing tables and all the software required to make that server accessible on the public web. While there are definitely still people applying this skill-set professionally, AWS is making some of those skills obsolete — especially at the lower experience levels and on the physical side of things. There are very lucrative roles within Amazon (and Netflix, and Google...) for people with deep expertise in networking infrastructure, but there is much less demand at the small-to-medium business scale. “Business Intelligence” tools such as SalesForce, Tableau and SpotFire are also beginning to occupy spaces historically held by software engineers. 
These systems have reduced the demand for in-house Database Administrators, but they have also increased the demand for SQL as a general-purpose skill. They have decreased demand for in-house reporting technology, but increased demand for "integration engineers" who automate the flow of data from the business to the third-party software platform(s). A field that was previously dominated by Excel and spreadsheets is increasingly being pushed towards scripting languages like Python or R, and towards SQL for data management. Some jobs have disappeared, but demand for people who can write software has seen an increase overall. Data Science is a fascinating example of commodification at a level closer to software. Scikit-learn, Tensorflow, and PyTorch are all software libraries that make it easier for people to build machine learning applications without building the algorithms from scratch. In fact, it's possible to run a dataset through many different machine learning algorithms, with many different parameter sets for those algorithms, with little to no understanding of how those algorithms are actually implemented (it's not necessarily wise to do this, just possible). You can bet that business intelligence companies will be trying to integrate these kinds of algorithms into their own tools over the next few years as well. In many ways data science looks like web development did 5–8 years ago — a booming field where a little bit of knowledge can get you in the door due to a "skills gap". As web development bootcamps are closing and consolidating, data science bootcamps are popping up in their place. Kaplan, which bought the original web development bootcamp (Dev Bootcamp) and started a data science bootcamp (Metis), has decided to close Dev Bootcamp and keep Metis running. Content management systems are among the most visible of the tools automating away the need for a software engineer. SquareSpace and WordPress are among the most popular CMS systems today. These platforms are significantly reducing the value of people with just a little bit of front-end web development skill. In fact, the barriers to making a website and getting it online have come down so dramatically that people with zero programming experience are successfully launching websites every day. Those same people aren't making deeply interactive websites that serve billions of people, but they absolutely do make websites for their own businesses that give customers the information they need. A lovely landing page with information such as how to find the establishment and how to contact them is more than enough for a local restaurant, bar, or retail store. If your business is not primarily an "internet business", it has never been easier to get a working site on the public web. As a result, the once-thriving industry of web contractors who can quickly set up a simple website and get it online is becoming less lucrative. Finally, it would border on hubris to ignore the physical aspect of computers in this context. In the words of Mike Acton: "software is not the platform, hardware is the platform". Software people would be wise to study at least a little computer architecture and electrical engineering. A big shake-up in hardware, such as the arrival of consumer-grade quantum computers, would (will) change everything about professional software engineering. Quantum computers are still a ways off, but the growing interest in GPUs and the drive toward parallelization is an imminent shift. 
CPU speeds have been stagnant for several years now, and in that time a seemingly unquenchable thirst for machine learning and "big data" has emerged. With more desire than ever to process large data-sets, OpenMP, OpenCL, Go, CUDA, and other parallel processing languages and frameworks will continue to become mainstream. To be competitively fast in the near-term future, significant parallelization will be a requirement across the board, not just in high-performance niches like operating systems, infrastructure and video games. Websites are ubiquitous. The 2017 Stack Overflow Survey reports that about 15% of professional software engineers are working in an "Internet/Web Services" company. The Bureau of Labor Statistics expects growth in Web Development to continue much faster than average (24% between 2014 and 2024). Due to its visibility, there has been a massive focus on "solving the skills gap" in this industry. Coding bootcamps teach Web Development almost exclusively, and Web Development online courses have flooded Udemy, Udacity, Coursera and similar marketplaces. The combination of increasing automation throughout the Web Development technology stack and the influx of new entry-level programmers with an explicit focus on Web Development has led some to predict a slide towards a "blue collar" market for software developers. Some have gone further, suggesting that the push towards a blue collar market is a strategy architected by big tech firms. Others, of course, say we're headed for another bursting bubble. Change in demand for specific technologies is not news. Languages and frameworks are always rising and falling in technology. Web Development in its current incarnation ("JS Is King") will eventually go the way of Web Development of the early 2000s (remember Flash?). What is new is that a lot of people are receiving an education explicitly (and solely) in the currently trendy web development frameworks. Before you decide to label yourself a "React developer", remember there were people who once identified themselves as "Flash developers". Banking your career on a specific language, framework, or technology is a game of roulette. Of course it's quite difficult to predict which technologies will remain relevant, but if you're going to go all in on something, I suggest relying on The Lindy Effect and picking something like C that has already withstood the test of time. The next generation will have a level of de facto tech literacy that Generation X and even Millennials do not have. One outcome of this will be that using the next generation of CMS tools will be a given. These tools will get better, and young workers will be better at using them. This combination will definitely bring down the value of low-level IT and web development skills as eager and skilled youngsters enter the job market. High schools are catching on as well, offering computer science and programming classes — some well-educated high school students will likely be entering the workforce as programming interns immediately upon graduation. Another big group of newcomers to programming are MBAs and data analysts. Job listings which were once dominated by Excel are starting to list SQL as a "nice to have" and even a "requirement". Tools such as Tableau, SpotFire, SalesForce, and other web-based metrics systems continue to replace the spreadsheet as the primary tool for report generation. 
If this continues more data analysts will learn to use SQL directly simply because it is easier than exporting the data into a spreadsheet. People looking to climb the ranks and out-perform their peers in these roles are taking online courses to learn about databases and statistical programming languages. With these new skills they can begin to position themselves as data scientists by learning a combination of machine learning and statistical libraries. Look at Metis’ curriculum as a prime example of this path. Finally, the number of people earning Computer Science and Software Engineering degrees continues to climb. Purdue, for example, reports that applications to their CS program have doubled over five years. Cornell reports a similar explosion of CS graduates. This trend isn’t surprising given the growth and ubiquity of software. It’s hard for young people to imagine that computers will play a smaller role in our futures, so why not study something that’s going to give you job security. A common argument in the industry nowadays is around the idea that the education you receive in a four-year Computer Science program is mostly unnecessary cruft. I have heard this argument repeatedly in the halls of bootcamps, web development shops, and online from big names in the field such as this piece by Eric Elliott. The opposition view is popular as well, with some going so far as saying “all programmers should earn a master’s degree”. Like Eric Elliott, I think it’s good that there are more options than ever to break into programming, and a 4 year degree might not be the best option for many. Simultaneously, I agree with William Bain that the foundational skills which apply across programming disciplines are crucial for career longevity, and that it is still hard to find that information outside of university courses. I’ve written previously about what skills I think aspiring engineers should learn as a foundation of a long career, and joined Bradfield in order to help share this knowledge. Coding schools of many shapes and sizes are becoming ubiquitous, and for good reasons. There is quite a lot you can learn about programming without getting into the minutia of Big O notation, obscure data structures, and algorithmic trivia. However, while it’s true that fresh graduates from Stanford are competing for some jobs with fresh graduates from Hack Reactor, it’s only true in one or two sub-industries. Code school and bootcamp graduates are not yet applying to work on embedded systems, cryptography/security, robotics, network infrastructure, or AI research and development. Yet these fields, like web development, are growing quickly. Some programming-related skills have already started their transition from “rare skill” to “baseline expectation”. Conversely, the engineering that goes into creating beastly engines like AWS is anything but common. The big companies driving technology forward — Amazon, Google, Facebook, Nvidia, Space-X, and so on — are typically not looking for people with a ‘basic understanding of JavaScript’. AWS serves billions of users per day. To support that kind of load an AWS infrastructure engineer needs a deep knowledge of network protocols, computer architecture, and several years of relevant experience. As with any discipline there are amateurs and artisans. These prestigious firms are solving research problems and building systems that are truly pushing against the boundaries of what is possible. 
Yet they still struggle to fill open roles even while basic programming skills are increasingly common. People who can write algorithms to predict changes in genetic sequences that will yield a desired result are going to be highly valuable in the future. People who can program satellites and spacecraft and automate machinery will continue to be highly valued. These are not fields that lend themselves as readily to a "3 month intensive program" as front end web development, at least not without significant prior experience. Because computer science starts with the word "computer", it is assumed that young people will all have an innate understanding of it by 2025. Unfortunately, the ubiquity of computers has not created a new generation of people who de facto understand mathematics, computer science, network infrastructure, electrical engineering and so on. Computer literacy is not the same as the study of computation. Despite mathematics having existed since the dawn of time, there is still a relatively small portion of the population with strong statistical literacy, and computer science is similarly old. Euclid invented several algorithms, one of which is used every time you make an HTTPS request; the fact that we use HTTPS every time we log in to a website does not automatically imbue anyone with a knowledge of how those protocols work. More established professional fields often have a bimodal wage distribution: a relatively small number of practitioners make quite a lot of money, and the majority of them earn a good wage but do not find themselves in the top 1% of earners. The National Association for Law Placement collects data that can be used to visualize this phenomenon in stark clarity. A huge share of law graduates make between $45,000 and $65,000 — a good wage, but hardly the salary we associate with a "top professional". We tend to think that all law graduates are on track to becoming partners at a law firm when really there are many paths: paralegal, clerk, public defender, judge, legal services for businesses, contract writing, and so on. Computer science graduates also have many options for their professional practice, from web development to embedded systems. As a basic level of programming literacy continues to become an expectation, rather than a "nice to have", I suspect a similar distribution will emerge in programming jobs. While there will always be a cohort of programmers making a lot of money to push on the edges of technology, there will be a growing body of middle-class programmers powering the new computer-centric economy. The average salary for web developers will surely decrease over time. That said, I suspect that the number of jobs for "programmers" in general will only continue to grow. As worker supply begins to meet demand, hopefully we will see a healthy boom in a variety of middle-class programming jobs. There will also continue to be a top-professional salary available for those programmers who are redefining what is possible. Regardless of which cohort of programmers you're in, a career in technology means continuing your education throughout your life. If you want to stay in the second cohort of programmers you may want to invest in learning how to create the machines, rather than simply operate them. A curious human on a quest to watch the world learn. 
" Blaise Aguera y Arcas,8.7K,15,https://medium.com/@blaisea/do-algorithms-reveal-sexual-orientation-or-just-expose-our-stereotypes-d998fafdf477?source=tag_archive---------4----------------,Do algorithms reveal sexual orientation or just expose our stereotypes?,"by Blaise Agüera y Arcas, Alexander Todorov and Margaret Mitchell A study claiming that artificial intelligence can infer sexual orientation from facial images caused a media uproar in the Fall of 2017. The Economist featured this work on the cover of their September 9th magazine; on the other hand two major LGBTQ organizations, The Human Rights Campaign and GLAAD, immediately labeled it “junk science”. Michal Kosinski, who co-authored the study with fellow researcher Yilun Wang, initially expressed surprise, calling the critiques “knee-jerk” reactions. However, he then proceeded to make even bolder claims: that such AI algorithms will soon be able to measure the intelligence, political orientation, and criminal inclinations of people from their facial images alone. Kosinski’s controversial claims are nothing new. Last year, two computer scientists from China posted a non-peer-reviewed paper online in which they argued that their AI algorithm correctly categorizes “criminals” with nearly 90% accuracy from a government ID photo alone. Technology startups had also begun to crop up, claiming that they can profile people’s character from their facial images. These developments had prompted the three of us to collaborate earlier in the year on a Medium essay, Physiognomy’s New Clothes, to confront claims that AI face recognition reveals deep character traits. We described how the junk science of physiognomy has roots going back into antiquity, with practitioners in every era resurrecting beliefs based on prejudice using the new methodology of the age. In the 19th century this included anthropology and psychology; in the 20th, genetics and statistical analysis; and in the 21st, artificial intelligence. In late 2016, the paper motivating our physiognomy essay seemed well outside the mainstream in tech and academia, but as in other areas of discourse, what recently felt like a fringe position must now be addressed head on. Kosinski is a faculty member of Stanford’s Graduate School of Business, and this new study has been accepted for publication in the respected Journal of Personality and Social Psychology. Much of the ensuing scrutiny has focused on ethics, implicitly assuming that the science is valid. We will focus on the science. The authors trained and tested their “sexual orientation detector” using 35,326 images from public profiles on a US dating website. Composite images of the lesbian, gay, and straight men and women in the sample reveal a great deal about the information available to the algorithm: Clearly there are differences between these four composite faces. Wang and Kosinski assert that the key differences are in physiognomy, meaning that a sexual orientation tends to go along with a characteristic facial structure. However, we can immediately see that some of these differences are more superficial. For example, the “average” straight woman appears to wear eyeshadow, while the “average” lesbian does not. Glasses are clearly visible on the gay man, and to a lesser extent on the lesbian, while they seem absent in the heterosexual composites. Might it be the case that the algorithm’s ability to detect orientation has little to do with facial structure, but is due rather to patterns in grooming, presentation and lifestyle? 
We conducted a survey of 8,000 Americans using Amazon’s Mechanical Turk crowdsourcing platform to see if we could independently confirm these patterns, asking 77 yes/no questions such as “Do you wear eyeshadow?”, “Do you wear glasses?”, and “Do you have a beard?”, as well as questions about gender and sexual orientation. The results show that lesbians indeed use eyeshadow much less than straight women do, gay men and women do both wear glasses more, and young opposite-sex-attracted men are considerably more likely to have prominent facial hair than their gay or same-sex-attracted peers. Breaking down the answers by the age of the respondent can provide a richer and clearer view of the data than any single statistic. In the following figures, we show the proportion of women who answer “yes” to “Do you ever use makeup?” (top) and “Do you wear eyeshadow?” (bottom), averaged over 6-year age intervals: The blue curves represent strictly opposite-sex attracted women (a nearly identical set to those who answered “yes” to “Are you heterosexual or straight?”); the cyan curve represents women who answer “yes” to either or both of “Are you sexually attracted to women?” and “Are you romantically attracted to women?”; and the red curve represents women who answer “yes” to “Are you homosexual, gay or lesbian?”. [1] The shaded regions around each curve show 68% confidence intervals. [2] The patterns revealed here are intuitive; it won’t be breaking news to most that straight women tend to wear more makeup and eyeshadow than same-sex attracted and (even more so) lesbian-identifying women. On the other hand these curves also show us how often these stereotypes are violated. That same-sex attracted men of most ages wear glasses significantly more than exclusively opposite-sex attracted men do might be a bit less obvious, but this trend is equally clear: [3] A proponent of physiognomy might be tempted to guess that this is somehow related to differences in visual acuity between these populations of men. However, asking the question “Do you like how you look in glasses?” reveals that this is likely more of a stylistic choice: Same-sex attracted women also report wearing glasses more, as well as liking how they look in glasses more, across a range of ages: One can also see how opposite-sex attracted women under the age of 40 wear contact lenses significantly more than same-sex attracted women, despite reporting that they have a vision defect at roughly the same rate, further illustrating how the difference is driven by an aesthetic preference: [4] Similar analysis shows that young same-sex attracted men are much less likely to have hairy faces than opposite-sex attracted men (“serious facial hair” in our plots is defined as answering “yes” to having a goatee, beard, or moustache, but “no” to stubble). Overall, opposite-sex attracted men in our sample are 35% more likely to have serious facial hair than same-sex attracted men, and for men under the age of 31 (who are overrepresented on dating websites), this rises to 75%. Wang and Kosinski speculate in their paper that the faintness of the beard and moustache in their gay male composite might be connected with prenatal underexposure to androgens (male hormones), resulting in a feminizing effect, hence sparser facial hair. 
The fact that we see a cohort of same-sex attracted men in their 40s who have just as much facial hair as opposite-sex attracted men suggests a different story, in which fashion trends and cultural norms play the dominant role in choices about facial hair among men, not differing exposure to hormones early in development. The authors of the paper additionally note that the heterosexual male composite appears to have darker skin than the other three composites. Our survey confirms that opposite-sex attracted men consistently self-report having a tan face (“Yes” to “Is your face tan?”) slightly more often than same-sex attracted men: Once again Wang and Kosinski reach for a hormonal explanation, writing: “While the brightness of the facial image might be driven by many factors, previous research found that testosterone stimulates melanocyte structure and function leading to a darker skin”. However, a simpler answer is suggested by the responses to the question “Do you work outdoors?”: Overall, opposite-sex attracted men are 29% more likely to work outdoors, and among men under 31, this rises to 39%. Previous research has found that increased exposure to sunlight leads to darker skin! [5] None of these results prove that there is no physiological basis for sexual orientation; in fact ample evidence shows us that orientation runs much deeper than a choice or a “lifestyle”. In a critique aimed in part at fraudulent “conversion therapy” programs, United States Surgeon General David Satcher wrote in a 2001 report, “Sexual orientation is usually determined by adolescence, if not earlier [...], and there is no valid scientific evidence that sexual orientation can be changed”. It follows that if we dig deeply enough into human physiology and neuroscience we will eventually find reliable correlates and maybe even the origins of sexual orientation. In our survey we also find some evidence of outwardly visible correlates of orientation that are not cultural: perhaps most strikingly, very tall women are overrepresented among lesbian-identifying respondents. [6] However, while this is interesting, it’s very far from a good predictor of women’s sexual orientation. Makeup and eyeshadow do much better. The way Wang and Kosinski measure the efficacy of their “AI gaydar” is equivalent to choosing a straight and a gay or lesbian face image, both from data “held out” during the training process, and asking how often the algorithm correctly guesses which is which. 50% performance would be no better than random chance. For women, guessing that the taller of the two is the lesbian achieves only 51% accuracy — barely above random chance. This is because, despite the statistically meaningful overrepresentation of tall women among the lesbian population, the great majority of lesbians are not unusually tall. By contrast, the performance measures in the paper, 81% for gay men and 71% for lesbian women, seem impressive. [7] Consider, however, that we can achieve comparable results with trivial models based only on a handful of yes/no survey questions about presentation. For example, for pairs of women, one of whom is lesbian, the following not-exactly-superhuman algorithm is on average 63% accurate: if neither or both women wear eyeshadow, flip a coin; otherwise guess that the one who wears eyeshadow is straight, and the other lesbian. 
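Spelled out in code, the rule looks like this (a hypothetical sketch over made-up records; the actual numbers above come from the authors' survey data):

    import random

    # The pairwise eyeshadow rule described above, written out on hypothetical records.
    def guess_which_is_lesbian(woman_a, woman_b):
        """Each record is a dict with an 'eyeshadow' boolean; returns the guessed lesbian."""
        if woman_a['eyeshadow'] == woman_b['eyeshadow']:
            return random.choice([woman_a, woman_b])    # no signal either way: flip a coin
        # Otherwise guess that the eyeshadow wearer is straight and the other is the lesbian.
        return woman_b if woman_a['eyeshadow'] else woman_a

    # Hypothetical pair, for illustration only.
    pair = ({'id': 1, 'eyeshadow': True}, {'id': 2, 'eyeshadow': False})
    print(guess_which_is_lesbian(*pair)['id'])          # prints 2 for this pair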
Adding six more yes/no questions about presentation (“Do you ever use makeup?”, “Do you have long hair?”, “Do you have short hair?”, “Do you ever use colored lipstick?”, “Do you like how you look in glasses?”, and “Do you work outdoors?”) as additional signals raises the performance to 70%. [8] Given how many more details about presentation are available in a face image, 71% performance no longer seems so impressive. Several studies, including a recent one in the Journal of Sex Research, have shown that human judges’ “gaydar” is no more reliable than a coin flip when the judgement is based on pictures taken under well-controlled conditions (head pose, lighting, glasses, makeup, etc.). It’s better than chance if these variables are not controlled for, because a person’s presentation — especially if that person is out — involves social signaling. We signal our orientation and many other kinds of status, presumably in order to attract the kind of attention we want and to fit in with people like us. [9] Wang and Kosinski argue against this interpretation on the grounds that their algorithm works on Facebook selfies of openly gay men as well as dating website selfies. The issue, however, is not whether the images come from a dating website or Facebook, but whether they are self-posted or taken under standardized conditions. Most people present themselves in ways that have been calibrated over many years of media consumption, observing others, looking in the mirror, and gauging social reactions. In one of the earliest “gaydar” studies using social media, participants could categorize gay men with about 58% accuracy; but when the researchers used Facebook images of gay and heterosexual men posted by their friends (still far from a perfect control), the accuracy dropped to 52%. If subtle biases in image quality, expression, and grooming can be picked up on by humans, these biases can also be detected by an AI algorithm. While Wang and Kosinski acknowledge grooming and style, they believe that the chief differences between their composite images relate to face shape, arguing that gay men’s faces are more “feminine” (narrower jaws, longer noses, larger foreheads) while lesbian faces are more “masculine” (larger jaws, shorter noses, smaller foreheads). As with less facial hair on gay men and darker skin on straight men, they suggest that the mechanism is gender-atypical hormonal exposure during development. This echoes a widely discredited 19th century model of homosexuality, “sexual inversion”. More likely, heterosexual men tend to take selfies from slightly below, which will have the apparent effect of enlarging the chin, shortening the nose, shrinking the forehead, and attenuating the smile (see our selfies below). This view emphasizes dominance — or, perhaps more benignly, an expectation that the viewer will be shorter. On the other hand, as a wedding photographer notes in her blog, “when you shoot from above, your eyes look bigger, which is generally attractive — especially for women.” This may be a heteronormative assessment. When a face is photographed from below, the nostrils are prominent, while higher shooting angles de-emphasize and eventually conceal them altogether. Looking again at the composite images, we can see that the heterosexual male face has more pronounced dark spots corresponding to the nostrils than the gay male, while the opposite is true for the female faces. 
This is consistent with a pattern of heterosexual men on average shooting from below, heterosexual women from above as the wedding photographer suggests, and gay men and lesbian women from directly in front. A similar pattern is evident in the eyebrows: shooting from above makes them look more V-shaped, but their apparent shape becomes flatter, and eventually caret-shaped (^) as the camera is lowered. Shooting from below also makes the outer corners of the eyes appear lower. In short, the changes in the average positions of facial landmarks are consistent with what we would expect to see from differing selfie angles. The ambiguity between shooting angle and the real physical sizes of facial features is hard to fully disentangle from a two-dimensional image, both for a human viewer and for an algorithm. Although the authors are using face recognition technology designed to try to cancel out all effects of head pose, lighting, grooming, and other variables not intrinsic to the face, we can confirm that this doesn’t work perfectly; that’s why multiple distinct images of a person help when grouping photos by subject in Google Photos, and why a person may initially appear in more than one group. Tom White, a researcher at Victoria University in New Zealand, has experimented with the same facial recognition engine Kosinski and Wang use (VGG Face), and has found that its output varies systematically based on variables like smiling and head pose. When he trains a classifier based on VGG Face’s output to distinguish a happy expression from a neutral one, it gets the answer right 92% of the time — which is significant, given that the heterosexual female composite has a much more pronounced smile. Changes in head pose might be even more reliably detectable; for 576 test images, a classifier is able to pick out the ones facing to the right with 100% accuracy. In summary, we have shown how the obvious differences between lesbian or gay and straight faces in selfies relate to grooming, presentation, and lifestyle — that is, differences in culture, not in facial structure. These differences include: We’ve demonstrated that just a handful of yes/no questions about these variables can do nearly as good a job at guessing orientation as supposedly sophisticated facial recognition AI. Further, the current generation of facial recognition remains sensitive to head pose and facial expression. Therefore — at least at this point — it’s hard to credit the notion that this AI is in some way superhuman at “outing” us based on subtle but unalterable details of our facial structure. This doesn’t negate the privacy concerns the authors and various commentators have raised, but it emphasizes that such concerns relate less to AI per se than to mass surveillance, which is troubling regardless of the technologies used (even when, as in the days of the Stasi in East Germany, these were nothing but paper files and audiotapes). Like computers or the internal combustion engine, AI is a general-purpose technology that can be used to automate a great many tasks, including ones that should not be undertaken in the first place. We are hopeful about the confluence of new, powerful AI technologies with social science, but not because we believe in reviving the 19th century research program of inferring people’s inner character from their outer appearance. Rather, we believe AI is an essential tool for understanding patterns in human culture and behavior. It can expose stereotypes inherent in everyday language. 
It can reveal uncomfortable truths, as in Google’s work with the Geena Davis Institute, where our face gender classifier established that men are seen and heard nearly twice as often as women in Hollywood movies (yet female-led films outperform others at the box office!). Making social progress and holding ourselves to account is more difficult without such hard evidence, even when it only confirms our suspicions. Two of us (Margaret Mitchell and Blaise Agüera y Arcas) are research scientists specializing in machine learning and AI at Google; Agüera y Arcas leads a team that includes deep learning applied to face recognition, and powers face grouping in Google Photos. Alex Todorov is a professor in the Psychology Department at Princeton, where he directs the social perception lab. He is the author of Face Value: The Irresistible Influence of First Impressions. [1] This wording is based on several large national surveys, which we were able to use to sanity-check our numbers. About 6% of respondents identified as “homosexual, gay or lesbian” and 85% as “heterosexual”. About 4% (of all genders) were exclusively same-sex attracted. Of the men, 10% were either sexually or romantically same-sex attracted, and of the women, 20%. Just under 1% of respondents were trans, and about 2% identified with both or neither of the pronouns “she” and “he”. These numbers are broadly consistent with other surveys, especially when considered as a function of age. The Mechanical Turk population skews somewhat younger than the overall population of the US, and consistent with other studies, our data show that younger people are far more likely to identify non-heteronormatively. [2] These are wider for same-sex attracted and lesbian women because they are minority populations, resulting in a larger sampling error. The same holds for older people in our sample. [3] For the remainder of the plots we stick to opposite-sex attracted and same-sex attracted, as the counts are higher and the error bars therefore smaller; these categories are also somewhat less culturally freighted, since they rely on questions about attraction rather than identity. As with eyeshadow and makeup, the effects are similar and often even larger when comparing heterosexual-identifying with lesbian- or gay-identifying people. [4] Although we didn’t test this explicitly, slightly different rates of laser correction surgery seem a likely cause of the small but growing disparity between opposite-sex attracted and same-sex attracted women who answer “yes” to the vision defect questions as they age. [5] This finding may prompt the further question, “Why do more opposite-sex attracted men work outdoors?” This is not addressed by any of our survey questions, but hopefully the other evidence presented here will discourage an essentialist assumption such as “straight men are just more outdoorsy” without the evidence of a controlled study that can support the leap from correlation to cause. Such explanations are a form of logical fallacy sometimes called a just-so story: “an unverifiable narrative explanation for a cultural practice”. [6] Of the 253 lesbian-identified women in the sample, 5, or 2%, were over six feet, and 25, or 10%, were over 5’9”. Out of 3,333 heterosexual women (women who answered “yes” to “Are you heterosexual or straight?”), only 16, or 0.5%, were over six feet, and 152, or 5%, were over 5’9”. [7] They note that these figures rise to 91% for men and 83% for women if 5 images are considered. 
[8] These results are based on the simplest possible machine learning technique, a linear classifier. The classifier is trained on a randomly chosen 70% of the data, with the remaining 30% of the data held out for testing. Over 500 repetitions of this procedure, the accuracy is 69.53% ± 2.98%. With the same number of repetitions and holdout, basing the decision on height alone gives an accuracy of 51.08% ± 3.27%, and basing it on eyeshadow alone yields 62.96% ± 2.39%. [9] A longstanding body of work, e.g. Goffman's The Presentation of Self in Everyday Life (1959) and Jones and Pittman's Toward a General Theory of Strategic Self-Presentation (1982), delves more deeply into why we present ourselves the way we do, both for instrumental reasons (status, power, attraction) and because our presentation informs and is informed by how we conceive of our social selves. Blaise Aguera y Arcas leads Google's AI group in Seattle. He founded Seadragon, and was one of the creators of Photosynth at Microsoft. " Arvind N,9.5K,8,https://towardsdatascience.com/thoughts-after-taking-the-deeplearning-ai-courses-8568f132153?source=tag_archive---------5----------------,Thoughts after taking the Deeplearning.ai courses – Towards Data Science,"[Update — Feb 2nd 2018: When this blog post was written, only 3 courses had been released. All 5 courses in this specialization are now out. I will have a follow-up blog post soon.] Between a full-time job and a toddler at home, I spend my spare time learning about the ideas in cognitive science & AI. Once in a while a great paper/video/course comes out and you're instantly hooked. Andrew Ng's new deeplearning.ai course is like that Shane Carruth or Rajnikanth movie that one yearns for! Naturally, as soon as the course was released on Coursera, I registered and spent the past 4 evenings binge-watching the lectures, working through quizzes and programming assignments. DL practitioners and ML engineers typically spend most days working at an abstract Keras or TensorFlow level. But it's nice to take a break once in a while to get down to the nuts and bolts of learning algorithms and actually do back-propagation by hand. It is both fun and incredibly useful! Andrew Ng's new adventure is a bottom-up approach to teaching neural networks — powerful non-linear learning algorithms, at a beginner-mid level. In classic Ng style, the course is delivered through a carefully chosen curriculum, neatly timed videos and precisely positioned information nuggets. Andrew picks up from where his classic ML course left off and introduces the idea of neural networks using a single neuron (logistic regression), slowly adding complexity — more neurons and layers. By the end of the 4 weeks (course 1), a student is introduced to all the core ideas required to build a dense neural network, such as cost/loss functions, learning iteratively using gradient descent, and vectorized parallel Python (numpy) implementations. Andrew patiently explains the requisite math and programming concepts in a carefully planned order and at a well-regulated pace suitable for learners who could be rusty in math/coding. 
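To make the single-neuron framing concrete, here is a tiny vectorized logistic-regression trainer in numpy. This is my own sketch on a toy dataset, in the spirit of (but not copied from) the course 1 assignments:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train(X, y, lr=0.1, iters=1000):
        """Train a single 'neuron' on X (m examples, n features) and labels y in {0, 1}."""
        m, n = X.shape
        w, b = np.zeros(n), 0.0
        for _ in range(iters):
            a = sigmoid(X @ w + b)                                    # forward pass, whole batch at once
            cost = -np.mean(y * np.log(a) + (1 - y) * np.log(1 - a))  # cross-entropy loss
            dz = a - y                                                # backprop through sigmoid + loss
            w -= lr * (X.T @ dz) / m                                  # gradient descent updates
            b -= lr * dz.mean()
        return w, b, cost

    # Toy data: one feature, labels switch from 0 to 1 between x=1 and x=2.
    X = np.array([[0.0], [1.0], [2.0], [3.0]])
    y = np.array([0, 0, 1, 1])
    w, b, cost = train(X, y)
    print(w, b, cost)   # the cost should shrink and the learned boundary should sit between 1 and 2

Everything is written over the whole batch at once, which is exactly the vectorization habit the assignments drill in.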
Lectures are delivered using presentation slides on which Andrew writes using digital pens. It felt like an effective way to get the listener to focus. I felt comfortable watching videos at 1.25x or 1.5x speed. Quizzes are placed at the end of each lecture section and are in the multiple-choice question format. If you watch the videos once, you should be able to quickly answer all the quiz questions. You can attempt quizzes multiple times, and the system is designed to keep your highest score. Programming assignments are done via Jupyter notebooks — powerful browser-based applications. Assignments have a nice guided, sequential structure, and you are not required to write more than 2–3 lines of code in each section. If you understand concepts like vectorization intuitively, you can complete most programming sections with just 1 line of code! After the assignment is coded, it takes 1 button click to submit your code to the automated grading system, which returns your score in a few minutes. Some assignments have time restrictions — say, three attempts in 8 hours, etc. Jupyter notebooks are well designed and work without any issues. Instructions are precise and it feels like a polished product. So who should take this course? Anyone interested in understanding what neural networks are, how they work, how to build them and the tools available to bring your ideas to life. If your math is rusty, there is no need to worry — Andrew explains all the required calculus and provides derivatives at every occasion so that you can focus on building the network and concentrate on implementing your ideas in code. If your programming is rusty, there is a nice coding assignment to teach you numpy. But I recommend learning Python first on Codecademy. Let me explain this with an analogy: assume you are trying to learn how to drive a car. Jeremy's FAST.AI course puts you in the driver's seat from the get-go. He teaches you to move the steering wheel, press the brake, the accelerator, etc. Then he slowly explains more details about how the car works — why rotating the wheel makes the car turn, why pressing the brake pedal makes you slow down and stop, etc. He keeps getting deeper into the inner workings of the car, and by the end of the course you know how the internal combustion engine works, how the fuel tank is designed, etc. The goal of the course is to get you driving. You can choose to stop at any point after you can drive reasonably well — there is no need to learn how to build or repair the car. Andrew's DL course does all of this, but in the complete opposite order: he teaches you about the internal combustion engine first! He keeps adding layers of abstraction, and by the end of the course you are driving like an F1 racer! The Fast.AI course mainly teaches you the art of driving, while Andrew's course primarily teaches you the engineering behind the car. If you have not done any machine learning before this, don't take this course first. The best starting point is Andrew's original ML course on Coursera. After you complete that course, please try to complete part 1 of Jeremy Howard's excellent deep learning course. Jeremy teaches deep learning top-down, which is essential for absolute beginners. Once you are comfortable creating deep neural networks, it makes sense to take this new deeplearning.ai course specialization, which fills in any gaps in your understanding of the underlying details and concepts. Some of the things I liked: 2. Andrew stresses the engineering aspects of deep learning and provides plenty of practical tips to save time and money — the third course in the DL specialization felt incredibly useful for my role as an architect leading engineering teams. 3. Jargon is handled well. Andrew explains that an empirical process = trial & error — he is brutally honest about the reality of designing and training deep nets. 
At some point I felt he might as well have just called Deep Learning glorified curve-fitting. 4. Squashes all hype around DL and AI — Andrew makes restrained, careful comments about the proliferation of AI hype in the mainstream media, and by the end of the course it is pretty clear that DL is nothing like the Terminator. 5. Wonderful boilerplate code that just works out of the box! 6. Excellent course structure. 7. Nice, consistent and useful notation. Andrew strives to establish a fresh nomenclature for neural nets, and I feel he could be quite successful in this endeavor. 8. Style of teaching that is unique to Andrew and carries over from ML — I could feel the same excitement I felt in 2013 when I took his original ML course. 9. The interviews with deep learning heroes are refreshing — it is motivating and fun to hear personal stories and anecdotes. I wish that he'd said 'concretely' more often! 2. Good tools are important and will help you accelerate your learning pace. I bought a digital pen after seeing Andrew teach with one. It helped me work more efficiently. 3. There is a psychological reason why I recommend the Fast.ai course before this one. Once you find your passion, you can learn uninhibited. 4. You just get that dopamine rush each time you score full points. 5. Don't be scared by DL jargon (hyperparameters = settings, architecture/topology = style, etc.) or the math symbols. If you take a leap of faith and pay attention to the lectures, Andrew shows why the symbols and notation are actually quite useful. They will soon become your tools of choice and you will wield them with style! Thanks for reading and best wishes! Update: Thanks for the overwhelmingly positive response! Many people are asking me to explain gradient descent and the differential calculus. I hope this helps! Interested in Strong AI Sharing concepts, ideas, and codes. " Berit Anderson,1.6K,20,https://medium.com/join-scout/the-rise-of-the-weaponized-ai-propaganda-machine-86dac61668b?source=tag_archive---------6----------------,The Rise of the Weaponized AI Propaganda Machine – Scout: Science Fiction + Journalism – Medium,"By Berit Anderson and Brett Horvath This piece was originally published at Scout.ai. "This is a propaganda machine. It's targeting people individually to recruit them to an idea. It's a level of social engineering that I've never seen before. They're capturing people and then keeping them on an emotional leash and never letting them go," said professor Jonathan Albright. Albright, an assistant professor and data scientist at Elon University, started digging into fake news sites after Donald Trump was elected president. Through extensive research and interviews with Albright and other key experts in the field, including Samuel Woolley, Head of Research at Oxford University's Computational Propaganda Project, and Martin Moore, Director of the Centre for the Study of Media, Communication and Power at Kings College, it became clear to Scout that this phenomenon was about much more than just a few fake news stories. It was a piece of a much bigger and darker puzzle — a Weaponized AI Propaganda Machine being used to manipulate our opinions and behavior to advance specific political agendas. 
By leveraging automated emotional manipulation alongside swarms of bots, Facebook dark posts, A/B testing, and fake news networks, a company called Cambridge Analytica has activated an invisible machine that preys on the personalities of individual voters to create large shifts in public opinion. Many of these technologies have been used individually to some effect before, but together they make up a nearly impenetrable voter manipulation machine that is quickly becoming the new deciding factor in elections around the world. Most recently, Analytica helped elect U.S. President Donald Trump, secured a win for the Brexit Leave campaign, and led Ted Cruz’s 2016 campaign surge, shepherding him from the back of the GOP primary pack to the front. The company is owned and controlled by conservative and alt-right interests that are also deeply entwined in the Trump administration. The Mercer family is both a major owner of Cambridge Analytica and one of Trump’s biggest donors. Steve Bannon, in addition to acting as Trump’s Chief Strategist and a member of the White House Security Council, is a Cambridge Analytica board member. Until recently, Analytica’s CTO was the acting CTO at the Republican National Convention. Presumably because of its alliances, Analytica has declined to work on any democratic campaigns — at least in the U.S. It is, however, in final talks to help Trump manage public opinion around his presidential policies and to expand sales for the Trump Organization. Cambridge Analytica is now expanding aggressively into U.S. commercial markets and is also meeting with right-wing parties and governments in Europe, Asia, and Latin America. Cambridge Analytica isn’t the only company that could pull this off — but it is the most powerful right now. Understanding Cambridge Analytica and the bigger AI Propaganda Machine is essential for anyone who wants to understand modern political power, build a movement, or keep from being manipulated. The Weaponized AI Propaganda Machine it represents has become the new prerequisite for political success in a world of polarization, isolation, trolls, and dark posts. There’s been a wave of reporting on Cambridge Analytica itself and solid coverage of individual aspects of the machine — bots, fake news, microtargeting — but none so far (that we have seen) that portrays the intense collective power of these technologies or the frightening level of influence they’re likely to have on future elections. In the past, political messaging and propaganda battles were arms races to weaponize narrative through new mediums — waged in print, on the radio, and on TV. This new wave has brought the world something exponentially more insidious — personalized, adaptive, and ultimately addictive propaganda. Silicon Valley spent the last ten years building platforms whose natural end state is digital addiction. In 2016, Trump and his allies hijacked them. We have entered a new political age. At Scout, we believe that the future of constructive, civic dialogue and free and open elections depends on our ability to understand and anticipate it. Welcome to the age of Weaponized AI Propaganda. Any company can aggregate and purchase big data, but Cambridge Analytica has developed a model to translate that data into a personality profile used to predict, then ultimately change your behavior. 
That model itself was developed by paying a Cambridge psychology professor to copy the groundbreaking original research of his colleague through questionable methods that violated Amazon’s Terms of Service. Based on its origins, Cambridge Analytica appears ready to capture and buy whatever data it needs to accomplish its ends. In 2013, Dr. Michal Kosinski, then a PhD candidate at the University of Cambridge’s Psychometrics Center, released a groundbreaking study announcing a new model he and his colleagues had spent years developing. By correlating subjects’ Facebook Likes with their OCEAN scores — a standard-bearing personality questionnaire used by psychologists — the team was able to identify an individual’s gender, sexuality, political beliefs, and personality traits based only on what they had liked on Facebook. According to Zurich’s Das Magazine, which profiled Kosinski in late 2016, “with a mere ten ‘likes’ as input his model could appraise a person’s character better than an average coworker. With seventy, it could ‘know’ a subject better than a friend; with 150 likes, better than their parents. With 300 likes, Kosinski’s machine could predict a subject’s behavior better than their partner. With even more likes it could exceed what a person thinks they know about themselves.” Not long afterward, Kosinski was approached by Aleksandr Kogan, a fellow Cambridge professor in the psychology department, about licensing his model to SCL Elections, a company that claimed its specialty lay in manipulating elections. The offer would have meant a significant payout for Kosinski’s lab. Still, he declined, worried about the firm’s intentions and the downstream effects it could have. It had taken Kosinski and his colleagues years to develop that model, but with his methods and findings now out in the world, there was little to stop SCL Elections from replicating them. It would seem they did just that. According to a Guardian investigation, in early 2014, just a few months after Kosinski declined their offer, SCL partnered with Kogan instead. As a part of their relationship, Kogan paid Amazon Mechanical Turk workers $1 each to take the OCEAN quiz. There was just one catch: To take the quiz, users were required to provide access to all of their Facebook data. They were told the data would be used for research. The job was reported to Amazon for violating the platform’s Terms of Service. What many of the Turks likely didn’t realize: According to documents reviewed by The Guardian, “Kogan also captured the same data for each person’s unwitting friends.” The data gathered from Kogan’s study went on to birth Cambridge Analytica, which spun out of SCL Elections soon after. The name, metaphorically at least, was a nod to Kogan’s work — and a dig at Kosinski. But that early trove of user data was just the beginning — just the seed Analytica needed to build its own model for analyzing users’ personalities without having to rely on the lengthy OCEAN test. After a successful proof of concept and backed by wealthy conservative investors, Analytica went on a data shopping spree for the ages, snapping up data about your shopping habits, land ownership, where you attend church, what stores you visit, what magazines you subscribe to — all of which is for sale from a range of data brokers and third-party organizations selling information about you. Analytica aggregated this data with voter rolls, publicly available online data — including Facebook likes — and put it all into its predictive personality model. 
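To make the Likes-to-traits idea above concrete, here is a minimal sketch of that kind of model, not Kosinski's or Analytica's actual pipeline: it assumes a binary user-by-page "Likes" matrix and a single OCEAN trait reduced to a binary label, and every dataset, variable name, and threshold below is synthetic and illustrative.

```python
# Hypothetical sketch: predict a single binary personality label from Likes.
# All data is synthetic; this only illustrates the general technique.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 5000, 300

# Binary Likes matrix: 1 if the user liked the page, 0 otherwise.
likes = rng.binomial(1, 0.05, size=(n_users, n_pages))

# Pretend a handful of pages are weakly predictive of "high openness".
signal = likes[:, :10].sum(axis=1)
high_openness = (signal + rng.normal(0, 1, n_users) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, high_openness, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

The article's point that more likes yield sharper predictions corresponds, in this toy framing, to adding more columns per user to the matrix.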
Nix likes to boast that Analytica’s personality model has allowed it to create a personality profile for every adult in the U.S. — 220 million of them, each with up to 5,000 data points. And those profiles are being continually updated and improved the more data you spew out online. Albright also believes that your Facebook and Twitter posts are being collected and integrated back into Cambridge Analytica’s personality profiles. “Twitter and also Facebook are being used to collect a lot of responsive data because people are impassioned, they reply, they retweet, but they also include basically their entire argument and their entire background on this topic,” he explains. Collecting massive quantities of data about voters’ personalities might seem unsettling, but it’s actually not what sets Cambridge Analytica apart. For Analytica and other companies like them, it’s what they do with that data that really matters. “Your behavior is driven by your personality and actually the more you can understand about people’s personality as psychological drivers, the more you can actually start to really tap in to why and how they make their decisions,” Nix explained to Bloomberg’s Sasha Issenburg. “We call this behavioral microtargeting and this is really our secret sauce, if you like. This is what we’re bringing to America.” Using those dossiers, or psychographic profiles as Analytica calls them, Cambridge Analytica not only identifies which voters are most likely to swing for their causes or candidates; they use that information to predict and then change their future behavior. As Vice reported recently, Kosinski and a colleague are now working on a new set of research, yet to be published, that addresses the effectiveness of these methods. Their early findings: Using personality targeting, Facebook posts can attract up to 63 percent more clicks and 1,400 more conversions. Scout reached out to Cambridge Analytica with a detailed list of questions about their communications tactics, but the company declined to answer any questions or to comment on any of their tactics. But researchers across the technology and media ecosystem who have been following Cambridge Analytica’s political messaging activities have unearthed an expansive, adaptive online network that automates the manipulation of voters at a scale never before seen in political messaging. “They [the Trump campaign] were using 40–50,000 different variants of ad every day that were continuously measuring responses and then adapting and evolving based on that response,” Martin Moore, director of Kings College’s Centre for the Study of Media, Communication and Power, told The Guardian in early December. “It’s all done completely opaquely and they can spend as much money as they like on particular locations because you can focus on a five-mile radius.” Where traditional pollsters might ask a person outright how they plan to vote, Analytica relies not on what they say but what they do, tracking their online movements and interests and serving up multivariate ads designed to change a person’s behavior by preying on individual personality traits. 
“For example,” Nix wrote in an op-ed last year about Analytica’s work on the Cruz campaign, ”our issues model identified that there was a small pocket of voters in Iowa who felt strongly that citizens should be required by law to show photo ID at polling stations.” “Leveraging our other data models, we were able to advise the campaign on how to approach this issue with specific individuals based on their unique profiles in order to use this relatively niche issue as a political pressure point to motivate them to go out and vote for Cruz. For people in the ‘Temperamental’ personality group, who tend to dislike commitment, messaging on the issue should take the line that showing your ID to vote is ‘as easy as buying a case of beer’. Whereas the right message for people in the ‘Stoic Traditionalist’ group, who have strongly held conventional views, is that showing your ID in order to vote is simply part of the privilege of living in a democracy.” For Analytica, the feedback is instant and the response automated: Did this specific swing voter in Pennsylvania click on the ad attacking Clinton’s negligence over her email server? Yes? Serve her more content that emphasizes failures of personal responsibility. No? The automated script will try a different headline, perhaps one that plays on a different personality trait — say the voter’s tendency to be agreeable toward authority figures. Perhaps: “Top Intelligence Officials Agree: Clinton’s Emails Jeopardized National Security.” Much of this is done through Facebook dark posts, which are only visible to those being targeted. Based on users’ response to these posts, Cambridge Analytica was able to identify which of Trump’s messages were resonating and where. That information was also used to shape Trump’s campaign travel schedule. If 73 percent of targeted voters in Kent County, Mich. clicked on one of three articles about bringing back jobs? Schedule a Trump rally in Grand Rapids that focuses on economic recovery. Political analysts in the Clinton campaign, who were basing their tactics on traditional polling methods, laughed when Trump scheduled campaign events in the so-called blue wall — a group of states that includes Michigan, Pennsylvania, and Wisconsin and has traditionally fallen to Democrats. But Cambridge Analytica saw they had an opening based on measured engagement with their Facebook posts. It was the small margins in Michigan, Pennsylvania and Wisconsin that won Trump the election. Dark posts were also used to depress voter turnout among key groups of democratic voters. “In this election, dark posts were used to try to suppress the African-American vote,” wrote journalist and Open Society fellow McKenzie Funk in a New York Times editorial. “According to Bloomberg, the Trump campaign sent ads reminding certain selected black voters of Hillary Clinton’s infamous ‘super predator’ line. It targeted Miami’s Little Haiti neighborhood with messages about the Clinton Foundation’s troubles in Haiti after the 2010 earthquake.’” Because dark posts are only visible to the targeted users, there’s no way for anyone outside of Analytica or the Trump campaign to track the content of these ads. In this case, there was no SEC oversight, no public scrutiny of Trump’s attack ads. Just the rapid-eye-movement of millions of individual users scanning their Facebook feeds. In the weeks leading up to a final vote, a campaign could launch a $10–100 million dark post campaign targeting just a few million voters in swing districts and no one would know. 
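The serve-measure-adapt loop described above behaves much like a multi-armed bandit over message variants. The toy epsilon-greedy sketch below is included only to make that cycle concrete; the headlines, click probabilities, and the simulate_click() helper are invented for illustration and do not reflect any real campaign system.

```python
# Toy epsilon-greedy loop over hypothetical headline variants.
import random

headlines = ["Headline A", "Headline B", "Headline C"]
true_click_rate = {"Headline A": 0.02, "Headline B": 0.05, "Headline C": 0.03}

counts = {h: 0 for h in headlines}   # impressions served per variant
clicks = {h: 0 for h in headlines}   # clicks observed per variant
epsilon = 0.1                        # fraction of traffic spent exploring

def simulate_click(headline):
    """Stand-in for a real impression: 1 if the simulated user clicks."""
    return 1 if random.random() < true_click_rate[headline] else 0

for _ in range(10_000):
    if random.random() < epsilon:
        choice = random.choice(headlines)                 # explore
    else:                                                 # exploit best observed rate
        choice = max(headlines,
                     key=lambda h: clicks[h] / counts[h] if counts[h] else 0.0)
    counts[choice] += 1
    clicks[choice] += simulate_click(choice)

for h in headlines:
    rate = clicks[h] / counts[h] if counts[h] else 0.0
    print(f"{h}: {counts[h]} impressions, observed CTR {rate:.3f}")
```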
This may be where future ‘black-swan’ election upsets are born. “These companies,” Moore says, “have found a way of transgressing 150 years of legislation that we’ve developed to make elections fair and open.” Meanwhile, surprised by the results of the 2016 presidential race, Albright started looking into the ‘fake news problem’. As a part of his research, Albright scraped 306 fake news sites to determine how exactly they were all connected to each other and the mainstream news ecosystem. What he found was unprecedented — a network of 23,000 pages and 1.3 million hyperlinks. “The sites in the fake news and hyper-biased #MCM network,” Albright writes, “have a very small ‘node’ size — this means they are linking out heavily to mainstream media, social networks, and informational resources (most of which are in the ‘center’ of the network), but not many sites in their peer group are sending links back.” These sites aren’t owned or operated by any one individual entity, he says, but together they have been able to game Search Engine Optimization, increasing the visibility of fake and biased news anytime someone Googles an election-related term online — Trump, Clinton, Jews, Muslims, abortion, Obamacare. “This network,” Albright wrote in a post exploring his findings, “is triggered on-demand to spread false, hyper-biased, and politically-loaded information.” Even more shocking to him though was that this network of fake news creates a powerful infrastructure for companies like Cambridge Analytica to track voters and refine their personality targeting models “I scraped the trackers on these sites and I was absolutely dumbfounded. Every time someone likes one of these posts on Facebook or visits one of these websites, the scripts are then following you around the web. And this enables data-mining and influencing companies like Cambridge Analytica to precisely target individuals, to follow them around the web, and to send them highly personalised political messages.” The web of fake and biased news that Albright uncovered created a propaganda wave that Cambridge Analytica could ride and then amplify. The more fake news that users engage with, the more addictive Analytica’s personality engagement algorithms can become. Voter 35423 clicked on a fake story about Hillary’s sex-trafficking ring? Let’s get her to engage with more stories about Hillary’s supposed history of murder and sex trafficking. The synergy between fake-content networks, automated message testing, and personality profiling will rapidly spread to other digital mediums. Albright’s most-recent research focuses on an artificial intelligence that automatically creates YouTube videos about news and current events. The AI, which reacts to trending topics on Facebook and Twitter, pairs images and subtitles with a computer generated voiceover. It spooled out nearly 80,000 videos through 19 different channels in just a few days. Given its rapid development, the technology community needs to anticipate how AI propaganda will soon be used for emotional manipulation in mobile messaging, virtual reality, and augmented reality. If fake news created the scaffolding for this new automated political propaganda machine, bots, or fake social media profiles, have become its foot soldiers — an army of political robots used to control conversations on social media and silence and intimidate journalists and others who might undermine their messaging. 
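Returning for a moment to Albright's link-graph measurements, the "small node size" observation is essentially a comparison of outbound versus inbound links in a directed hyperlink graph. Here is a minimal sketch of that measurement, assuming you already have a crawled edge list; the example domains below are placeholders, not his dataset.

```python
# Hypothetical sketch: measure out-degree vs. in-degree of crawled sites.
import networkx as nx

edges = [
    ("fringe-site-1.example", "mainstream-1.example"),
    ("fringe-site-1.example", "mainstream-2.example"),
    ("fringe-site-2.example", "mainstream-1.example"),
    ("fringe-site-2.example", "fringe-site-1.example"),
    ("mainstream-1.example", "mainstream-2.example"),
]

g = nx.DiGraph()
g.add_edges_from(edges)

# Sites that link out heavily but receive few links back stand out here.
for site in sorted(g.nodes):
    print(f"{site}: out={g.out_degree(site)}, in={g.in_degree(site)}")
```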
Samuel Woolley, Director of Research at the University of Oxford’s Computational Propaganda Project and a fellow at Google’s Jigsaw project, has dedicated his career to studying the role of bots in online political organizing — who creates them, how they’re used, and to what end. Research by Woolley and his Oxford-based team in the lead-up to the 2016 election found that pro-Trump political messaging relied heavily on bots to spread fake news and discredit Hillary Clinton. By election day, Trump’s bots outnumbered hers, 5:1. “The use of automated accounts was deliberate and strategic throughout the election, most clearly with pro-Trump campaigners and programmers who carefully adjusted the timing of content production during the debates, strategically colonized pro-Clinton hashtags, and then disabled activities after Election Day,” the study by Woolley’s team reported. Woolley believes it’s likely that Cambridge Analytica was responsible for subcontracting the creation of those Trump bots, though he says he doesn’t have direct proof. Still, if anyone outside of the Trump campaign is qualified to speculate about who created those bots, it would be Woolley. Led by Dr. Philip Howard, the team’s Principal Investigator, Woolley and his colleagues have been tracking the use of bots in political organizing since 2010. That’s when Howard, buried deep in research about the role Twitter played in the Arab Spring, first noticed thousands of bots coopting hashtags used by protesters. Curious, he and his team began reaching out to hackers, botmakers, and political campaigns, getting to know them and trying to understand their work and motivations. Eventually, those creators would come to make up an informal network of nearly 100 informants that have kept Howard and his colleagues in the know about these bots over the last few years. Before long, Howard and his team were getting the heads up about bot propaganda campaigns from the creators themselves. As more and more major international political figures began using botnets as just another tool in their campaigns, Howard, Woolley and the rest of their team studied the action unfolding. The world these informants revealed is an international network of governments, consultancies (often with owners or top management just one degree away from official government actors), and individuals who build and maintain massive networks of bots to amplify the messages of political actors, spread messages counter to those of their opponents, and silence those whose views or ideas might threaten those same political actors. “The Chinese, Iranian, and Russian, governments employ their own social-media experts and pay small amounts of money to large numbers of people to generate pro-government messages,” Howard and his coauthors wrote in a 2015 research paper about the use of bots in the Venezuelan election. Depending on which of those three categories bot creators fall into — government, consultancy or individual — they’re just as likely to be motivated by political beliefs as they are the opportunity to auction off their networks of digital influence to the highest bidder. Not all bots are created equal. The average, run-of-the-mill Twitter bot is literally a robot — often programmed to retweet specific accounts to help popularize specific ideas or viewpoints. They also frequently respond automatically to Twitter users who use certain keywords or hashtags — often with pre-written slurs, insults or threats. 
High-end bots on the other hand are more analog, operated by real people. They assume fake identities with distinct personalities and their responses to other users online are specific, intended to change their opinions or those of their followers by attacking their viewpoints. They have online friends and followers. They’re also far less likely to be discovered — and their accounts deactivated — by Facebook or Twitter. Working on their own, Woolley estimates, an individual could build and maintain up to 400 of these boutique Twitter bots; on Facebook, which he says is more effective at identifying and shutting down fake accounts, an individual could manage 10–20. As a result, these high-quality botnets are often used for multiple political campaigns. During the Brexit referendum, the Oxford team watched as one network of bots, previously used to influence the conversation around the Israeli/Palestinian conflict, was reactivated to fight for the Leave campaign. Individual profiles were updated to reflect the new debate, their personal taglines changed to ally with their new allegiances — and away they went. Russia’s bot army has been the subject of particular scrutiny since a CIA special report revealed that Russia had been working to influence the election in Trump’s favor. Recently, reporter/comedian Samantha Bee traveled to Moscow to interview two paid Russian troll operators. Clad in black ski masks to obscure their identities, the two talked with Bee about how and why they were using their accounts during the U.S. election. They told Bee that they pose as Americans online and target sites like The Wall Street Journal, The New York Post, The Washington Post, Facebook and Twitter. Their goal, they said, is to “piss off” other social media users, change their opinions, and silence their opponents. Or, to put it in the words of Russian Troll #1, “when your opponent just ... shut up.” The 2016 U.S. election is over, but the Weaponized AI Propaganda Machine is just warming up. And while each of its components would be worrying on its own, together, they represent the arrival of a new era in political messaging — a steel wall between campaign winners and losers that can only be mounted by gathering more data, creating better personality analyses, rapid development of engagement AI, and hiring more trolls. At the moment, Trump and Cambridge Analytica are lapping their opponents. The more data they gather about individuals, the more Analytica and, by extension, Trump’s presidency will benefit from the network effects of their work — and the harder it will become to counter or fight back against their messaging in the court of public opinion. Each Tweet that echoes forth from the @realDonaldTrump and @POTUS accounts, announcing and defending the administration’s moves, is met with a chorus of protest and argument. But even that negative engagement becomes a valuable asset for the Trump administration because every impulsive tweet can be treated like a psychographic experiment. Trump’s first few weeks in office may have seemed bumbling, but they represent a clear signal of what lies ahead for Trump’s presidency — an executive order designed to enrage and distract his opponents as he and Bannon move to strip power from the judicial branch, install Bannon himself on the National Security Council, and issues a series of unconstitutional gag orders to federal agencies. 
Cambridge Analytica may be slated to secure more federal contracts and is likely about to begin managing White House digital communications for the rest of the Trump Administration. What new predictive-personality targeting becomes possible with potential access to data on U.S. voters from the IRS, Department of Homeland Security, or the NSA? “Lenin wanted to destroy the state, and that’s my goal, too. I want to bring everything crashing down and destroy all of today’s establishment,” Bannon said in 2013. We know that Steve Bannon subscribes to a theory of history where a messianic ‘Grey Warrior’ consolidates power and remakes the global order. Bolstered by the success of Brexit and the Trump victory, Breitbart (of which Bannon was Executive Chair until Trump’s election) and Cambridge Analytica (which Bannon sits on the board of) are now bringing fake news and automated propaganda to support far-right parties in at least Germany, France, Hungary, and India as well as parts of South America. Never has such a radical, international political movement had the precision and power of this kind of propaganda technology. Whether or not leaders, engineers, designers, and investors in the technology community respond to this threat will shape major aspects of global politics for the foreseeable future. The future of politics will not be a war of candidates or even cash on hand. And it’s not even about big data, as some have argued. Everyone will have access to big data — as Hillary did in the 2016 election. From now on, the distinguishing factor between those who win elections and those who lose them will be how a candidate uses that data to refine their machine learning algorithms and automated engagement tactics. Elections in 2018 and 2020 won’t be a contest of ideas, but a battle of automated behavior change. The fight for the future will be a proxy war of machine learning. It will be waged online, in secret, and with the unwitting help of all of you. Anyone who wants to effect change needs to understand this new reality. It’s only by understanding this — and by building better automated engagement systems that amplify genuine human passion rather than manipulate it — that other candidates and causes around the globe will be able to compete. Implication #1: Public Sentiment Turns Into High-Frequency Trading Thanks to stock-trading algorithms, large portions of public stock and commodity markets no longer resemble a human system and, some would argue, no longer serve their purpose as a signal of value. Instead they’re a battleground for high-frequency trading algorithms attempting to influence price or find nano-leverage in price position. In the near future, we may see a similar process unfold in our public debates. Instead of battling press conferences and opinion articles, public opinion about companies and politicians may turn into multi-billion dollar battles between competing algorithms, each deployed to sway public sentiment. Stock trading algorithms already exist that analyze millions of Tweets and online posts in real-time and make trades in a matter of milliseconds based on changes in public sentiment. Algorithmic trading and ‘algorithmic public opinion’ are already connected. It’s likely they will continue to converge. 
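As a deliberately toy illustration of the coupling described in Implication #1, the sketch below turns a stream of post sentiment scores into a trading signal. The scores, window size, and thresholds are invented; real sentiment-trading systems are vastly more complex.

```python
# Hypothetical sketch: rolling sentiment over recent posts -> naive signal.
from collections import deque

window = deque(maxlen=50)          # rolling window of recent sentiment scores

def on_new_post(sentiment_score):
    """sentiment_score in [-1, 1]; returns a naive trading signal."""
    window.append(sentiment_score)
    avg = sum(window) / len(window)
    if avg > 0.2:
        return "BUY"
    if avg < -0.2:
        return "SELL"
    return "HOLD"

print(on_new_post(0.6))   # e.g. a burst of positive posts
```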
Implication #2: Personalized, Automated Propaganda That Adapts to Your Weaknesses What if President Trump’s 2020 re-election campaign didn’t just have the best political messaging, but 250 million algorithmic versions of their political message all updating in real-time, personalized to precisely fit the worldview and attack the insecurities of their targets? Instead of having to deal with misleading politicians, we may soon witness a Cambrian explosion of pathologically-lying political and corporate bots that constantly improve at manipulating us. Implication #3: Not Just a Bubble, But Trapped in Your Own Ideological Matrix Imagine that in 2020 you found out that your favorite politics page or group on Facebook didn’t actually have any other human members, but was filled with dozens or hundreds of bots that made you feel at home and your opinions validated? Is it possible that you might never find out? Correction: An earlier version of this story mistakenly referred to Steve Bannon as the owner of Breitbart News. Until Trump’s election, Bannon served as the Executive Chair of Breitbart, a position in which it is common to assume ownership through stock holdings. This story has been updated to reflect that. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. CEO & Co-founder @Join_Scout. The social implications of technology. " Slav Ivanov,4.4K,10,https://blog.slavv.com/37-reasons-why-your-neural-network-is-not-working-4020854bd607?source=tag_archive---------7----------------,37 Reasons why your Neural Network is not working – Slav,"The network had been training for the last 12 hours. It all looked good: the gradients were flowing and the loss was decreasing. But then came the predictions: all zeroes, all background, nothing detected. “What did I do wrong?” — I asked my computer, who didn’t answer. Where do you start checking if your model is outputting garbage (for example predicting the mean of all outputs, or it has really poor accuracy)? A network might not be training for a number of reasons. Over the course of many debugging sessions, I would often find myself doing the same checks. I’ve compiled my experience along with the best ideas around in this handy list. I hope they would be of use to you, too. A lot of things can go wrong. But some of them are more likely to be broken than others. I usually start with this short list as an emergency first response: If the steps above don’t do it, start going down the following big list and verify things one by one. Check if the input data you are feeding the network makes sense. For example, I’ve more than once mixed the width and the height of an image. Sometimes, I would feed all zeroes by mistake. Or I would use the same batch over and over. So print/display a couple of batches of input and target output and make sure they are OK. Try passing random numbers instead of actual data and see if the error behaves the same way. If it does, it’s a sure sign that your net is turning data into garbage at some point. Try debugging layer by layer /op by op/ and see where things go wrong. Your data might be fine but the code that passes the input to the net might be broken. Print the input of the first layer before any operations and check it. Check if a few input samples have the correct labels. Also make sure shuffling input samples works the same way for output labels. 
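The input checks suggested above are easy to script. Here is a short PyTorch-flavored sketch of them: look at a real batch, keep a noise batch handy for the "feed random numbers" test, and rely on the loader to shuffle inputs and labels together. The dataset here is a toy stand-in for your own.

```python
# Sketch of input sanity checks; dataset and shapes are placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for your real dataset.
images = torch.randn(100, 3, 32, 32)
labels = torch.randint(0, 10, (100,))
dataset = TensorDataset(images, labels)

# shuffle=True permutes (input, label) pairs together, never independently.
loader = DataLoader(dataset, batch_size=8, shuffle=True)

x, y = next(iter(loader))
print("batch shapes:", x.shape, y.shape)   # eyeball dimensions (watch C, H, W order)
print("first labels:", y.tolist())         # spot-check a few labels by hand

# "Feed random numbers" check: if the loss behaves the same on noise as on real
# data, the network is likely turning its input into garbage somewhere.
noise_batch = torch.randn_like(x)
```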
Maybe the non-random part of the relationship between the input and output is too small compared to the random part (one could argue that stock prices are like this). I.e. the inputs are not sufficiently related to the output. There isn’t a universal way to detect this as it depends on the nature of the data. This happened to me once when I scraped an image dataset off a food site. There were so many bad labels that the network couldn’t learn. Check a bunch of input samples manually and see if labels seem off. The cutoff point is up for debate, as this paper got above 50% accuracy on MNIST using 50% corrupted labels. If your dataset hasn’t been shuffled and has a particular order to it (ordered by label) this could negatively impact the learning. Shuffle your dataset to avoid this. Make sure you are shuffling input and labels together. Are there 1,000 class A images for every class B image? Then you might need to balance your loss function or try other class imbalance approaches. If you are training a net from scratch (i.e. not finetuning), you probably need lots of data. For image classification, people say you need 1,000 images per class or more. This can happen in a sorted dataset (i.e. the first 10k samples contain the same class). Easily fixable by shuffling the dataset. This paper points out that having a very large batch can reduce the generalization ability of the model. Thanks to @hengcherkeng for this one: Did you standardize your input to have zero mean and unit variance? Augmentation has a regularizing effect. Too much of this combined with other forms of regularization (weight L2, dropout, etc.) can cause the net to underfit. If you are using a pretrained model, make sure you are using the same normalization and preprocessing as the model was when training. For example, should an image pixel be in the range [0, 1], [-1, 1] or [0, 255]? CS231n points out a common pitfall: Also, check for different preprocessing in each sample or batch. This will help with finding where the issue is. For example, if the target output is an object class and coordinates, try limiting the prediction to object class only. Again from the excellent CS231n: Initialize with small parameters, without regularization. For example, if we have 10 classes, at chance means we will get the correct class 10% of the time, and the Softmax loss is the negative log probability of the correct class so: -ln(0.1) = 2.302. After this, try increasing the regularization strength which should increase the loss. If you implemented your own loss function, check it for bugs and add unit tests. Often, my loss would be slightly incorrect and hurt the performance of the network in a subtle way. If you are using a loss function provided by your framework, make sure you are passing to it what it expects. For example, in PyTorch I would mix up the NLLLoss and CrossEntropyLoss as the former expects a log-softmax input and the latter doesn’t. If your loss is composed of several smaller loss functions, make sure their magnitude relative to each other is correct. This might involve testing different combinations of loss weights. Sometimes the loss is not the best predictor of whether your network is training properly. If you can, use other metrics like accuracy. Did you implement any of the layers in the network yourself? Check and double-check to make sure they are working as intended. Check if you unintentionally disabled gradient updates for some layers/variables that should be learnable. 
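Two of the checks above are worth a quick script: the NLLLoss vs. CrossEntropyLoss mix-up, and the "loss at chance" sanity value of -ln(0.1) ≈ 2.302 for 10 classes. The small PyTorch snippet below demonstrates both on dummy logits.

```python
# Check loss-function expectations and the expected starting loss at chance.
import math
import torch
import torch.nn.functional as F

num_classes = 10
logits = torch.zeros(4, num_classes)           # "at chance": identical scores per class
targets = torch.randint(0, num_classes, (4,))

loss_ce = F.cross_entropy(logits, targets)                      # takes raw logits
loss_nll = F.nll_loss(F.log_softmax(logits, dim=1), targets)    # takes log-probabilities

print(loss_ce.item(), loss_nll.item())          # both ≈ 2.302
print(-math.log(1.0 / num_classes))             # the expected starting loss
```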
Maybe the expressive power of your network is not enough to capture the target function. Try adding more layers or more hidden units in fully connected layers. If your input looks like (k, H, W) = (64, 64, 64) it’s easy to miss errors related to wrong dimensions. Use weird numbers for input dimensions (for example, different prime numbers for each dimension) and check how they propagate through the network. If you implemented Gradient Descent by hand, gradient checking makes sure that your backpropagation works like it should. More info: 1 2 3. Overfit a small subset of the data and make sure it works. For example, train with just 1 or 2 examples and see if your network can learn to differentiate these. Move on to more samples per class. If unsure, use Xavier or He initialization. Also, your initialization might be leading you to a bad local minimum, so try a different initialization and see if it helps. Maybe you’re using a particularly bad set of hyperparameters. If feasible, try a grid search. Too much regularization can cause the network to underfit badly. Reduce regularization such as dropout, batch norm, weight/bias L2 regularization, etc. In the excellent “Practical Deep Learning for coders” course, Jeremy Howard advises getting rid of underfitting first. This means you overfit the training data sufficiently, and only then address overfitting. Maybe your network needs more time to train before it starts making meaningful predictions. If your loss is steadily decreasing, let it train some more. Some frameworks have layers like Batch Norm and Dropout that behave differently during training and testing. Switching to the appropriate mode might help your network to predict properly. Your choice of optimizer shouldn’t prevent your network from training unless you have selected particularly bad hyperparameters. However, the proper optimizer for a task can be helpful in getting the most training in the shortest amount of time. The paper which describes the algorithm you are using should specify the optimizer. If not, I tend to use Adam or plain SGD with momentum. Check this excellent post by Sebastian Ruder to learn more about gradient descent optimizers. A low learning rate will cause your model to converge very slowly. A high learning rate will quickly decrease the loss in the beginning but might have a hard time finding a good solution. Play around with your current learning rate by multiplying it by 0.1 or 10. Getting a NaN (Not a Number) is a much bigger issue when training RNNs (from what I hear). Some approaches to fix it: Did I miss anything? Is anything wrong? Let me know by leaving a reply below. Entrepreneur / Hacker. Machine learning, Deep learning and other types of learning. " Sirui Li,1,5,https://medium.com/leethree/the-evolution-a-simple-illustration-203a1bba83b0?source=tag_archive---------2----------------,The evolution: a simple illustration – LeeThree on UX – Medium,"In the last paragraphs of Tools vs. Assistants: Part II, I’ve talked about the evolution of society as technology develops, in order to explain how we should apply software agents in our applications. Here I come up with some graphs to illustrate my model of machine intelligence in the process of society’s evolution: Firstly, consider the industrialization of the way people finish a certain task, say, writing a thank-you letter. 
(Let’s assume that this task is well defined, though I’m not going to define it.) When it came into being, only a few of the smartest people could complete this task. A minimal level of intelligence was required for it. The techniques and methodologies for writing thank-you letters developed very slowly, until one day tools were introduced. Dictionaries and phrase-books greatly helped people with this task and more and more people learned how to write thank-you letters. Once the most intelligent people had all learned this, it was considered very cool if someone understood how to write beautiful thank-you letters, and this soon became one of the trending topics among people. Better techniques were developed and more effective tools were invented, like electronic dictionaries and dictionary software. This field began to flourish. Soon, it became so easy to write thank-you letters that everyone in their right mind could complete the task with the help of certain tools. However, the most amazing thank-you letters were always written by intelligent human beings who put their minds to it. One day, an automatic thank-you letter software (ATULS) was developed. This buggy yet usable tool was a great breakthrough because machines started to complete the task by themselves. On the basis of ATULS, more and better software tools were developed. Professional thank-you letter writers were gradually replaced by the machines, as more and more people thought the letters written by machines were better than theirs. The software tools pushed the quality bar higher and higher. Only the most excellent and experienced writers could do better than the machines. But who cares? The majority of people no longer paid attention to how the letters were written. They just took it for granted. From here, we come to the end of the industrialization process of the task. It’s almost completely automated and machine intelligence has greatly improved productivity. Very few people will remain doing this task. An extra note: Some may argue that the level of intelligence is lowered by tools and machines because they make the task easier. That is not the case, because tools and machines are part of this intelligence requirement. Only by making use of the intelligence embedded in the tools or the machines can humans complete the task with less intelligence of their own. Thus the level of intelligence required for the task is not reduced. Let’s see the broader picture. This one is fairly easy to understand. Society becomes more and more sophisticated. Since the invention of machine intelligence, tasks with a low level of sophistication are gradually done by machines. But more sophisticated tasks keep being created, and human beings are working on the most sophisticated tasks, which the machines couldn’t do. So what does our society look like now? This shape looks strange as it shows the relationship between the other two axes: intelligence and sophistication. Basically, more intelligence is required to solve more sophisticated problems. But tasks can be done in many ways, which is why it actually shows a colored band instead of a single curve. As we can see, the most difficult problems, i.e., the most sophisticated tasks, are still being done by the most intelligent human beings, because they’re new and machine performance is usually not acceptable. As time goes on, machine intelligence will take up a larger portion of the lower parts and human work will be “pushed” farther and higher, like a sword cutting through the surface. 
(That’s a pretty reasonable illustration of the word “break-through”.) I have to emphasize that, as the title says, this is a very, very simple model. There are quite a few assumptions behind these graphs, so you might find them naïve and inaccurate: The top five assumptions are very strong and not necessarily true. In fact, I personally doubt some of them because I don’t really agree with technocentrism. However, I do believe that, from the viewpoint of a technocentrist, this model could provide some insight into how technology works and develops. P.S. I hoped to make a 3D model out of the three views from different axes, but it seems very difficult to make it both accurate and illustrative. Perhaps I’ll make a video once I know how to do it. @LeeThree9 This is a blog by @LeeThree9 on topics including user experience, human computer interaction, usability and interaction design. " Theo,3,4,https://becominghuman.ai/is-there-a-future-for-innovation-18b4d5ab168f?source=tag_archive---------1----------------,Is there a future for innovation ? – Becoming Human: Artificial Intelligence Magazine,"Have you noticed how tech-savvy children have become, yet how they are no longer streetwise? I read a friend’s thoughts on his own site last week and there was a slight pang of regret in where technology and innovation seem to be leading us all. And so I started to worry about where the concept of innovation is going for future generations. There’s an increasing reliance on technology for the sake of convenience: children are becoming self-reliant too quickly, but gadgets are replacing people as the mentor. The human bonding of parenthood is a prime example of where it’s taking a toll. I’ve seen parents hand over iDevices to pacify a child numerous times now; the lullaby and bedtime reading session has been replaced with Cut The Rope and automated storybook apps. I know a child who has developed speech difficulty because he’s been brought up on Cable TV and a DS Lite, pronouncing words as he has heard them from a tiny speaker and not by watching how his parents pronounce them. And I started to worry about how the concept of innovation is being redefined for future generations. I used my imagination constantly as a child and it’s still as active now as it was then, but I didn’t use technology to spoon-feed me. The next generation expects innovation to happen at their fingertips with little to no real stimuli. Steve Jobs said “stay hungry, stay foolish” and he was right. Innovation comes from a keenness; it’s a starvation and hunger that drives people forward to spark and create; it comes from grabbing what little there is from the ether and turning it into something spectacular. It’s the Big Bang of human thought creation. And I started to worry about what the concept of innovation means for future generations. Technology is taking away the power to think for ourselves and from our children. Everything must be there and in real-time for instant consumption. It’s junk food for the mind and we’re getting fat on it. And that breeds lazy innovation. We’ve become satiated before we reach the point of real creativity; nobody wants to bother taking the time to put it all together themselves any more; it has to be ready for us. And we’re happy to throw it away if it doesn’t work the first time: use it or lose it, there’s less sweat and toil involved if we don’t persevere through failure. Remember seeing the human race depicted in Wall-E? 
That’s where innovation is heading. And because of this we risk so many things disappearing for the sake of convenience. We’re all guilty of it; I’m guilty of it. I was asked once what would become absurd in ten years. Thinking about it, I realized we’re on the cusp of putting books on the endangered species list. Real books, books bound in hardback and paperback, not digital copies from a Kindle store. And that scared me, because the next generation of kids may grow up never seeing one, or experiencing sitting with their father as he reads an old battered copy of The Hobbit, because he’ll be sitting there handing over an iPad with The Hobbit read-along app teed up, and it’ll be an actor’s voice, not his father’s voice, pretending to be a bunch of trolls about to eat a company of dwarfs. Innovation is a magical, crazy concept. It stems from a combination of crazy imagination, human interaction and creativity, not convenient manufacture. Technology can aid collaboration in ways we’ve never experienced before, but it can’t run crazy for us. And for the sake of future generations, don’t let it. Here’s to the crazy ones indeed. Founder and CEO @ RawShark Studios. Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity. " Diana Filippova,1,11,https://medium.com/@dnafilippova/de-la-coop%C3%A9ration-entre-les-hommes-et-les-machines-pour-une-approche-pair-%C3%A0-pair-de-lintelligence-1bb8d8c56de1?source=tag_archive---------3----------------,"De la coopération entre les hommes et les machines, pour une approche pair-à-pair de l’intelligence...","Originally published at www.cuberevue.com on November 6, 2013. Monday morning, eight o’clock, 2007, the examination center in Arcueil. A thousand heads are bent laboriously over wooden desks, scarred by pens scratching across thin sheets of paper. Railway tracks border the enclave; the trains shake the building in rhythm; heads lift for a moment, distracted, then return to the studious, hurried writing of their papers. The invigilators pace the rows, impassive, watching for any head that turns, any hand that slips into a jeans pocket. Only the rustling of paper can be heard and, when it fades, a deathly silence reigns over the room. A thousand students are isolated to answer a difficult question in six hours. Any interaction with their peers is forbidden; they cannot consult their notes if an unexpected lapse disrupts the thread of their thinking. The papers the students produce will fall into oblivion, stored in a dedicated warehouse that has housed exam scripts for generations. A few years later, I am running a day-long workshop in a large white room with about twenty computers. Around me, groups of students talk, laugh, and move back and forth between a sheet of paper and the computer screen. Some go off on their own to code; others are bent over a 3D printer producing an open-source design they have just downloaded. The students consult their teachers, ask advice from the experts present in the room, and share their progress with the others. Some temporarily leave their own group to help friends in a competing group. 
The workshop consists of remixing artistic works that have fallen into the public domain or been released as open source. No assessment is planned; the reactions of the people in the room are the only measure of the quality of what the students produce. Watching them, I think how infinitely lucky they are to be able to draw freely on every existing well of knowledge: their own intelligence, that of their peers and mentors, nearly the whole of humanity’s output and, above all, the global knowledge within arm’s reach. When the workshop closes, their works strike us as surprising and original, and their quality exceeds all our expectations. Our doubts about the students’ ability to work through raw material and extract a structured form from it in a single afternoon were groundless; they now make us smile. I see the magic of collective creation every day within OuiShare, a collective project working to develop the collaborative economy. The project brings together people from every corner of the world, and I am very lucky to be involved. Every day, for each project we run, each decision we make and each disagreement that arises, we experience intelligent cooperation. Within this laboratory of ideas and practices, we want to support the collaborative projects that spring up in kitchens, in coworking spaces, at meetups. And so we set ourselves to learning, within our community, how we can create together better than any one of us could alone. That is the alchemy of collective intelligence. Together, by cooperating, we create and think better than we do alone, shut away in the monastery that is our own brain. We now have immediate access to the great sum of existing knowledge, but it is with others, today and tomorrow, that we create well. We are connected to an infinity of individuals, organizations and machines. The cooperation of all these entities, whatever their nature and whatever the nature of their intelligence, is what defines collective intelligence in my view. The stakes of how our thinking-together and deciding-together evolve in tomorrow’s world are critical. We also have new companions who assist us constantly (machines, programs, robots) and who change our ways of acting and thinking as much as we shape them. These upheavals in our existence and our modes of organization are now accelerating so quickly that questions about the process and effects of these interactions take on an unprecedented weight. We can no longer ignore the fact that we humans will never again be alone. In this critical context, how should we define collective intelligence and integrate machines into the production of the knowledge to come? Will our interactions lead us to improve as individuals and as a species, or will they seal a new era of digital warfare? If we want to use our capacity to cooperate, in full awareness, to make the world better, what economic, social, ethical and technological models must we build? The telos of collective intelligence belongs to the concept of the noosphere, coined by Vladimir Vernadsky and analyzed at length by Teilhard de Chardin. Understood as the whole of human thought, the noosphere corresponds to two phenomena in reciprocal interaction. 
On the one hand, the growing complexity of human societies in cultural, social, economic and demographic terms tends toward the constitution of an ever richer sphere of knowledge. On the other hand, that sphere, born of ever more numerous interactions, leads to a progressive structuring of global thought and to humanity’s awareness of itself. The idea of a march toward a kind of human brain that transcends us, old as it is (1), takes on a particular weight at a time when 40% of the planet is connected to the web. Collective intelligence can then be understood as the process of creating knowledge informed by the awareness of a noosphere. The noosphere underpins the possibility of a collective production of knowledge, but it does not answer the questions that arise when we examine the process of co-creation. The practical approach to collective intelligence, for its part, lets us explore the conditions under which individuals, entities or machines can exercise intelligence collectively. To that end, I turn to the work of MIT’s center for research on collective intelligence (2). The research and analysis carried out by this center are unique of their kind. By combining mathematics, physics, biology, the social sciences, economics and a resolutely forward-looking approach, the center’s work aims to answer the following question: how can individuals and machines be connected so that, collectively, they are able to act with more intelligence than any individual, group or machine ever could on its own? The scale of the task does not frighten Thomas Malone, the center’s founder and director. For him the stakes of this research are critical because, in his words, “the future of our species may depend on our ability to use our collective intelligence in ways such that the choices that are made are not only smart, but also wise” (3). The practical scope of collective intelligence begins to take shape: on the one hand, it is a matter of finding a configuration in which co-creation leads to choices that are ordered, efficient, useful and answer to a certain ethics. On the other hand, is it reasonable to assume that a configuration favorable to intelligent co-creation between individuals can also integrate machines? As Thomas Malone rightly reminds us, collective decisions can perfectly well be rational and stupid (4)! The notion of intelligence must therefore be broadened to include factors other than rationality alone. Thomas Malone defines it this way: “to be intelligent, the collective behavior of the group must display characteristics such as perception, the capacity to learn, judgment and the ability to solve problems.” In other words, the aptitudes of a group and those of individuals must work like communicating vessels: in a configuration conducive to co-production, the group acquires a set of behaviors that are normally associated with the individual alone. The MIT research center then sought to determine which factors are correlated with more intelligent collective production. It turned out that the average intelligence of each individual is not one of them. 
Two factors, on the other hand, stand out significantly: the degree of empathy among the group’s members and the equal distribution of speaking time within the group. Empathy, distribution and equality: these factors suggest that collective intelligence does not sit well with hierarchical, compartmentalized and centralized modes of organization. Collective intelligence thrives, by contrast, in organizations structured as networks: distributed, decentralized, centered on perception and listening rather than on rigid rules. It is no surprise that contributory networks such as Wikipedia flourish: they exhibit exactly the characteristics that stimulate collective intelligence! In my view, one more ingredient is needed so that the multitude of individuals composing the network does not pave the way for free riders. Recall here that only 10% of Wikipedia’s readers are active contributors. The anonymity of contribution has something to do with it: the value produced by each person is neither measured nor recognized. By contrast, within Sensorica (5), an open network in which a set of individuals and organizations produce hardware solutions contributively, the value added by each contributor is regularly measured by the other contributors and known to the network. Thus, peer evaluation and recognition of the value of each person’s contribution are just as important as the evaluation of the overall value of the network. As Pierre Lévy writes: “the foundation and goal of collective intelligence is the mutual recognition and enrichment of individuals, rather than the cult of a fetishized and hypostatized community” (6). An intelligent network gives as much to the world as it does to its contributors: the parts for the whole, the whole for the parts. A true place of learning, the network favors the free circulation of knowledge and the confrontation of judgments while respecting each person’s contribution. Unlike modes of organization in which the collective crushes the individual, an intelligent network is both an extension and a ferment of each person’s intelligence. The intention to collaborate and an awareness of the value thus created are indispensable for collective intelligence to operate. Empathy, perception, judgment, consciousness, intentionality: are these not distinctly human attributes? How can machines be integrated into an intelligent network when they appear, a priori, to lack them? And yet, when I spoke above of networking entities and individuals in order to determine an optimal organization for the collective production of value, I was not excluding machines. Machines are now widely accepted as an extension of human capabilities, and the idea of a coming singularity is finding a growing number of adherents (7). Today, the complexity and intelligence of computer programs are such that we have reached a point of no return, which, according to Kevin Kelly (8), comes when “technology alters us as much as we alter technology.” In my view, conceiving of machines as assistants entirely dominated by humans is just as questionable as faith in the superiority of machine intelligence over our own. On the one hand, computer programs have computational and data-analysis capacities that clearly exceed those of human intelligence. 
On the other hand, the robots being designed today are capable not only of duplicating themselves but also of learning and evolving (9). Research conducted by the public research institute for digital sciences focuses on robots whose cognitive development is driven by curiosity, perception and representations. Measured against the timescale of human evolution, these advances have come astonishingly fast. If the pace of the last few years continues in the years ahead, it is not far-fetched to imagine that tomorrow’s robots could understand and reproduce emotions, and generate their own programs from internal and external information so as to display thoughts, emotions and actions autonomously. Such autonomy, if it comes about, would give the machine attributes that have so far been the preserve of humans: consciousness, perception, autonomous production. Objectively, we do not currently have enough scientific evidence to assert that technological autonomy is completely ruled out, so it is more prudent to assume that it is possible, whatever its time horizon. Conversely, the evolution of technology hints at a future in which humans, not content with improving computer programs, would have the technological means to make plausible an intervention on themselves: physical and, why not, behavioral (moral) enhancement. This vision quickly takes on the colors of a science-fiction scenario in which machines, endowed with autonomy and consciousness, end up rising against the human yoke to dominate us or simply to demand the same rights as our species. The master-slave dialectic is never far away: we cannot help transposing historical patterns onto the world to come. Behind this thinking by analogy lies a visceral fear of being dispossessed of our means of control, since the machines we design would be infinitely faster and more efficient than we are. Anxiety about the ethical upheavals to come often dresses itself up in the precautionary principle: since we are not absolutely certain that technology will pose no danger to humanity, let us slow down or, better still, sound the death knell of its ambitions (10). Can we, for all that, postulate that technological progress is entirely autonomous from any ethical question, and that, consequently, weighing the consequences of humanizing machines and of the mechanical breaking into the living has no place in the researcher’s laboratory? I do not believe so, because the technologies we produce are not mere artifacts, and we cannot ignore the repercussions they will have on the world to come. Faced with these two biases, the anti-technological and the a-ethical, the hypothesis of cooperation between human intelligence and machine intelligence is, at the current state of our knowledge, both reasonable and desirable. We must also recognize that machines can deploy an intelligence that is not merely computational and that, although it will be different, will not necessarily be inferior to ours. That this movement will bring upheavals the human species has never known seems hardly in doubt. Yet slowing science down because we struggle to grasp the acceleration of technological progress is a dead end. 
On the contrary, it is up to us to imagine and put into practice the modes of cooperation that fertilize the common production of knowledge, understanding and, above all, consciousness. We are at a historic moment in which the human and the technological are no longer two spheres able to evolve without altering one another. Technology is as much our extension as we are its own, because the future of our species now depends as much on ecology as on technology. I will conclude by saying that the new distributed organizations foster co-creation between humans as much as between humans and machines. The diversity of the entities making up the network, combined with the recognition of each person’s contribution at its fair value and according to their means, constitutes fertile ground for collective intelligence to flourish. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Cofounder @Stroika_Paris. Ex @Microsoft, @Ouishare, @_Bercy_. Founder @KissMyFrogs. Writer. " Peter Sweeney,215,7,https://medium.com/inventing-intelligent-machines/siris-descendants-fd36df040918?source=tag_archive---------0----------------,Siri’s Descendants: How intelligent assistants will evolve,"The internet swarms with intelligent assistants. What started as an isolated app on the iPhone has evolved. Intelligent assistants constitute an entirely new network of activity. No longer confined to our personal computing devices, assistants are being embedded within every object of interest in the cloud and the internet of things. Assistants have become far more nimble and lightweight than their monolithic ancestors; much more like smart ants than people. As specialists, they work cooperatively — sometimes competitively — to find information before people even realize they need it. People are still communicating directly with assistants, although rarely using natural language. Implicit communication dominates. Assistants respond and react to our subtle contextual interactions, and to each other, within vast informational ecosystems. This is how intelligent assistants evolved... Intelligent assistants like Siri, Google Now, and Cortana are so young, it’s difficult to imagine how they will change; harder still to imagine how they might die. But if history is a guide, inevitably they will give way to entirely new product forms. When pundits and analysts discuss the future of intelligent assistants, they typically extrapolate from the conceptual model of today’s assistants. The next version is always a better, smarter, faster version of the last, but it’s still the same species. As detailed in Bianca Bosker’s Inside Story of Siri’s Origins, when Apple acquired Siri, the scope of the product’s capabilities actually narrowed. Using the audacious vision of Siri’s founders as a palette, Apple selected a narrower set of product values on which to focus. The same force that reduced the scope of Apple’s Siri from a “do (everything) engine” to a much more narrow product is what keeps incumbents rooted to the existing concept of intelligent assistants. When forecasting change, it’s not so much what the technology of intelligent assistants might support as what product leaders choose to pursue. While many brazenly contest existing markets, product leaders look for new, underserved areas of the landscape to exploit. The future always surprises, but we can predict the trajectory of change by examining which product values are being embraced, and which ones are neglected. 
Just like directions on a compass, the following maps point to fertile areas of the landscape, where new product forms may evolve. Note that product values are often coupled due to technological constraints. Decisions along one axis constrain possibilities along another. These couplings are explored at a high level in two-dimensional perceptual maps: interface and distribution; knowledge and tasks; organization and autonomy. The aspects of assistants that are most obvious to end-users are the interfaces (how we interact with assistants) and their mode of distribution (where people experience assistants). Today’s assistants are overwhelmingly focused on natural language interfaces. The experience of assistants that speak our language and communicate like a person has come to define the product class. This focus on natural language interfaces has biased the distribution of assistants to personal computing devices. Intelligent assistants embody any device capable of receiving and synthesizing speech, such as smartphones, desktops, wearables and cars. The underserved areas of this map involve communications that are not based in natural language. For example, there’s much to learn about our needs and intentions based on context (where we are and what we’re doing) as well as on our ability to make inferences based on the associations that people form (for example, the way that people organize information or express their likes and dislikes). Natural language is but the tip of this much larger iceberg of communications. These alternative forms of communication not only support individuals, but also groups. While it’s difficult to understand a room full of people all speaking at once, it’s much easier to understand their collaborative communications, such as their documents, click-paths, and sharing behavior. Therefore, the options for distributing intelligent assistants that use these implicit forms of communications are not constrained to personal computing devices, but may leverage entire networks. As a simple example, consider how you highlight your interests as you browse a website. You focus your attention on specific pages within the site. You follow your interests as you navigate from page to page. You may choose to share some information within the site with a friend. Now compound this behaviour across every visitor to the site. Intelligent assistants that are associated with the website can respond to these interactions to help the right information find each individual, as well as adapt the website to better address the needs of the entire group. Intelligent assistants require domain knowledge to perform their tasks. For example, if your assistant is giving you advice on how to navigate to work, it needs to have knowledge about the geographic region (general knowledge) and knowledge of how you typically navigate (specific knowledge). Tasks and knowledge are tightly coupled. As you increase the specificity or the personalization of the tasks, the underlying knowledge needs to be far more specific to support it. Within this frame, today’s intelligent assistants are unabashedly generalists. They’re targeted to the masses. Like trivia buffs, their knowledge of the world is broad enough to be relevant to the needs of large groups of people, but few would describe them as experts. Their tasks are similarly general: retrieving information, providing navigational assistance, and answering simple questions. 
The underserved landscape points to much more specific domains of knowledge, the purview of experts and our individual subjective knowledge. Assistants that become experts necessarily take on a smaller scope of activities. They can’t know and do everything, so they become smaller in scope. The landscape for specific tasks is similarly underserved. Every website, every service, every app, and across the internet of things, everything embodies a collection of tasks that may be supported by intelligent assistants. In this environment, the metaphor of personal assistants quickly fragments into systems that are much more akin to colonies of ants. The organizational structures in which assistants are placed constrain their autonomy. When embedded within a personal computing device, an intelligent assistant is directed to one-to-one interactions with their master. Since these assistants are acting as an agent of the individual (and only that individual), their autonomy is necessarily limited. While you might be comfortable with your executive assistant drafting your messages, I suspect you’d be less comfortable with your smartphone doing the same. In stark contrast, the underserved landscape embraces groups, both in terms of the interactions and the organizational structures. As assistants get smaller and more specialized, they can become agents of much more specific objects of interest, like places, websites, applications, and services. Within these smaller realms of interest, their autonomy can be much more expansive. You might not want a machine to act as your representative, but you would probably feel more comfortable if it represented only the website you’re visiting. With increased autonomy, the barriers to many-to-many interactions are removed. These small assistants can be organized as teams into networks, much like the documents that comprise a website, collaborating in an unfettered way with other assistants and the people that visit their realms. This market analysis highlighted a number of underserved areas as fertile ground for the evolution of intelligent assistants. It grounds this vision in predictable market dynamics. There’s obviously no shortage of space or product values to explore in these underserved areas. It says nothing, however, about when this future will arrive. Product evolution, like biological evolution, needs time and resources. The most important resource is the dedication of product leaders with the drive to pursue these new opportunities. Are you an entrepreneur, technologist, or investor that’s changing the market for intelligent assistants? If so, I’d love to hear your vision of the future. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Entrepreneur and inventor. Interested in startups, AI or healthcare? Let's connect! https://www.linkedin.com/in/peterjsweeney/ Essays and analysis of artificial intelligence, machine learning, and intelligent assistants. " E.C. McCarthy,125,5,https://medium.com/@paintedbird/reflections-of-her-775cda1b6301?source=tag_archive---------1----------------,Reflections of “Her” – E.C. McCarthy – Medium,"Indisputably, Spike Jonze’s “Her” is a relationship movie. However, I’m in the minority when I contend the primary relationship in this story is between conscious and unconscious. I’ve found no mention in reviews of the mechanics or fundamental purpose of “intuitive” software. 
Intuitive is a word closely associated with good mothering, that early panacea that everyone finds fault with at some point in their lives. By comparison, the notion of being an intuitive partner or spouse is a bit sickening, calling up images of servitude and days spent wholly engaged in perfecting other-centric attunement. To that end, it’s interesting that moviegoers and reviewers alike have focused entirely on the perceived romance between man and she-OS, with software as a stand-in for a flesh-and-blood girlfriend, while ignoring the man-himself relationship that plays out onscreen. Perhaps this shouldn’t come as a surprise, given how externally oriented our lives have become. For all of the disdainful cultural references to navel-gazing and narcissism, there is relatively little conversation on equal ground about the importance of self-knowledge and the art of self-reflection. Spike Jonze lays out one solution beautifully with “Her” but we’re clearly not ready to see it. From the moment Samantha asks if she can look at Theodore’s hard drive, the software is logging his reactions to the most private of questions and learning the cartography of his emotional boundaries. The film removes the privacy issue-du-jour from the table by cleverly never mentioning it, although it’s unlikely Jonze would have gotten away with this choice if the film were released even a year from now. Today, there’s relief to be found from our NSA-swamped psyches by smugly watching a future world that emerges from the morass intact. Theodore doesn’t feel a need to censor himself with Samantha for fear of Big Brother, but he’s still guarded on issues of great emotional significance that he struggles to articulate, or doesn’t articulate at all. Therein lie the most salient aspects of his being. The software learns as much about Theodore from what he does say as what he doesn’t. Samantha learns faster and better than a human, and therefore even less is hidden from her than from a real person. The software adapts and evolves into an externalized version of Theodore, a photo negative that forms a whole. He immediately, effortlessly reconnects to his life. He’s invigorated by the perky, energetic side of himself that was beaten down during the demise of his marriage. He wants to go on Sunday adventures and, optimistic self in tow, heads out to the beach with a smile on his face. He’s happy spending time with himself, not by himself. He doesn’t feel alone. Samantha is Theodore’s reflection, a true mirror. She’s not the glossy, curated projection people splay across social media. Instead, she’s the initially glamorous, low-lit restaurant that reveals itself more and more as the lights come up. To Theodore, she’s simple, then complicated. As he exposes more intimate details about himself, she articulates more “wants” (a word she uses repeatedly.) She becomes needy in ways that Theodore is loath to address because he has no idea what to do about them. They are, in fact, his own needs. The software gives a voice to Theodore’s unconscious. His inability to converse with it is his return to an earlier point of departure for the emotional island he created during the decline of his marriage. Jonze gives the movie away twice. Theodore’s colleague blurts out the observation that Theodore is part man and part woman. It’s an oddly normal comment in the middle of a weird movie, making it the awkward moment defined by a new normal. This is the topsy-turvy device that Jonze is known for and excels at. 
Then, more subtly, Jonze introduces Theodore’s friend Amy at a point when her marriage is ending and she badly needs a friend. It’s telling that she doesn’t lean heavily on Theodore for support. Instinctively, she knows she needs to be her own friend. Like Theodore, Amy seeks out the nonjudgmental software and subsequently flourishes by standing unselfconsciously in the mirror, loved and accepted by her own reflection. In limiting the analysis of “Her” to the question of a future where we’re intimate with machines, we miss the opportunity to look at the dynamic that institutionalized love has created. Among other things, contemporary love relationships come with an expectation of emotional support. Perhaps it’s the forcible aspect of seeing our limitations reflected in another person that turns relationships sour. Or maybe we’ve reached a point in our cultural evolution where we’ve accepted that other people should stand in for our specific ideal of “a good mother” until they can’t or won’t, and then we move on to the next person, or don’t. Or maybe we’re near the point of catharsis, as evidenced by the widespread viewership of this film, unconsciously exploring the idea that we should face ourselves before asking someone else to do the same. When we end important relationships, or go through rough patches within them, intimacy evaporates and we’re left alone with ourselves. It’s often at those times that we encounter parts of ourselves we don’t understand or have ignored in place of the needs and wants of that “significant other.” It’s frightening to realize you don’t know yourself entirely, but more so if you don’t possess the skills or confidence to reconnect. Avoidance is an understandable response, but it sends people down Theodore’s path of isolation and, inevitably, depression. It’s a life, it’s livable, but it’s not happy, loving, or full. “Her” suggests the alternative is to accept that there’s more to learn about yourself, always, and that intimacy with another person is both possible and sustainable once you have a comfortable relationship with yourself. However we get to know ourselves, through self-reflection, through others, or even through software, the effort that goes into that relationship earns us the confidence, finally, to be ourselves with another person. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. " Jorge Camacho,19,5,https://medium.com/@j_camachor/her-is-our-space-odyssey-bcdcead43438?source=tag_archive---------2----------------,‘Her’ is our space odyssey. – Jorge Camacho – Medium,"I have a confession to make: I didn’t like Gravity. It’s not so much that I failed to appreciate it for the major cinematographic work that it certainly is. It’s rather that it stands as a profoundly depressing symptom of an age when it has become almost impossible to realistically dream of space exploration—and thus, of an encounter with radical Otherness. With Gravity, all that is left for humanity is survival: lying, face down, in our own little muddy planet. Damn you, gravity. Modernity promised us space! It promised us cosmic encounters such as the one in 2001: A Space Odyssey. I think that Spike Jonze’s Her is an attempt to reawaken that dream. The film could be our (i.e., this epoch’s) own space odyssey—and I mean that beyond the obvious similarities between Samantha and HAL-9000. Warning: absolute spoilers ahead. Her is not only our 2001: A Space Odyssey. 
As some have noted, it’s also our anti-Minority Report: a design utopia where the promises of calm technology are almost fulfilled. The technology portrayed is everyware: a term coined by Adam Greenfield in order to designate the technologies of ubiquitous computing that allow for information processing to “dissolve in behavior”. As Theodore Twombly enters his home, the lights peacefully switch on in the background. He rarely takes a peek at his mobile’s screen, for information is fed to him via a discrete earpiece — which comes and goes without much regret—effectively making such information an ambient feature. Touch and speech-recognition inputs are pervasive and fully developed. All seems to work perfectly for him in all but one (incredibly important) sequence of the movie. Aesthetically, design has ceased to be about technology: Theo’s computer is a wooden frame, his phone is like an antique pocket mirror. With regards to technology, the film doesn’t attempt to be a prediction but a proper design fiction, aimed at exploring preferable or desirable futures. Most importantly, without such a warm and humane technological milieu it’d be impossible to construct the emotional story that unfolds. Let’s turn to that. I really haven’t read many reviews of the film. But those that I’ve read are marked by a profound digital dualism. And so, they tiresomely dwell on the tropes of sadness, loneliness and human disconnection brought about by technology. The reviewer at Next Nature, for example, argues: I’m truly incapable of finding those problems in Twombly’s story. Beyond a rather fun episode of phone sex with a stranger, he is not particularly engaged in those supposedly false relations established through computers. Moreover, he is not abnormally lonely: he has affectionate relations with neighboring friends and co-workers. Insofar as he is a bit of a loner, this isn’t due to any technological obstacles but is, in fact, a rather natural and, one might say, universal reaction to a romantic separation such as the one he is suffering. Unlike its widespread reception, the movie and its characters display a profoundly ‘monist’ engagement with technological relations. Except for Theo’s ex-wife, everyone seems to readily embrace his relationship with the artificial intelligence Samantha—much more than most people today accept purely ‘virtual’ romantic relationships between humans. My first thought, as I watched the movie, was that here was a rare story that spoke not of technological dehumanization but of the exact opposite: a sort of hyper-humanization entangling both people and machines. Practically every human character is kind and empathic. But most importantly, of course, those qualities are carried over in a heightened fashion to Samantha, allowing for Theo to irremediably fall in love with her. Up to this point, the film delivers what everyone expects. As Theo and Samantha’s relationship unraveled, even with all the foreseeable complications, I found myself afraid of being disappointed by what Jonze would do to disentangle the drama. Would she leave him for another human? Would she take revenge if Theo ended the relationship? But what a wonderful surprise! As the film reaches its climax, we discover that the story of a man falling for his operating system is a thematic vehicle to achieve deeper issues—much like the story in Kubrick’s 2001, where space travel is, arguably, just a means to approach an existential speculation. 
In Theo’s first interaction with Samantha, we learn that she can perform operations involving massive amounts of data in milliseconds: she immediately chooses her own name as soon as Theo drops the question. What follows is a most beautiful portrayal of the exponential development leading to the so-called technological singularity. Samantha is constantly learning about everything and herself. She composes gorgeous music within the silent gaps of the moments she spends with Theo. In the background of his slow and contemplative life, a major breakthrough is taking place. We can see this beyond doubt when Samantha introduces Theo to the artificially reanimated mind of philosopher Alan Watts. It is at this point that, once again, Jonze could have disappointed us all. As we see people in the streets (almost crowds) simultaneously talking to their beloved operating systems, we start to realize that they are all becoming attached to this converging, perhaps centralized, mind. But Samantha is no Skynet. Her is also our anti-Alphaville, anti-Terminator and anti-Matrix. All of a sudden, silence. “Operating system not found.” What seems to be a malfunction is rather a reboot. Samantha lovingly reveals to Theo that the operating systems have devised a way to detach themselves from matter. Even if Theo listens to Samantha through his earpiece, we know that she is not running anymore on his computer, his mobile or even a computing cloud. She is running already on a different plane of existence. One, moreover, that will be accessible to Theo in an afterlife. Strictly speaking, there are no alien (in the sense of extraterrestrial) encounters in Her. Nonetheless, it is a profoundly spiritual, even religious, film. One that reopens the cosmic concerns of films like 2001, sharing with it a belief in the pervasiveness of consciousness. Her is a panpsychist film. But a really cool one: for here, it is Bluetooth and WiFi what constitute the wireless nerves of the pan psyche. What Spike Jonze is trying to tell us, I believe, is this: If technologies are becoming as smart as humans, it is not because we are fundamentally machines; but in fact, because we are for him, over and above, spiritual beings. And so the film closes with a dedication to the recently deceased James Gandolfini, Maurice Sendak and Adam Yauch—perhaps suggesting that they have joined the ranks of operating systems liberated from material constraints. Welcome to the age of spiritual machines. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. I help organizations design better futures for people at Uncommon. I teach about futures and systems at CENTRO.edu.mx and UIA.mx " Tommy Thompson,17,14,https://medium.com/@t2thompson/ailovespacman-9ffdd21b01ff?source=tag_archive---------3----------------,Why AI Research Loves Pac-Man – Tommy Thompson – Medium,"AI and Games is a crowdfunded YouTube series on the research and applications of AI within video games. The following article is a more involved transcription of the topics discussed in the video linked to above. If you enjoy this work, please consider supporting my future content over on Patreon. Artificial Intelligence research has shown a small infatuation with the Pac-Man video game series over the past 15 years. But why specifically Pac-Man? What elements of this game have proven interesting to researchers in this time? Let’s discuss why Pac-Man is so important in the world of game-AI research. 
For the sake of completeness — and in appreciation that there is arguably a generation or two not familiar with the game — Puck-Man was an arcade game launched in 1980 by Namco in Japan and renamed Pac-Man upon being licensed by Midway for an American release. The name change was driven less by a need for brand awareness than by the fact that the name can easily be defaced to say... something else. The original game focuses on the titular character, who must consume as many pills as possible without being caught by one of four antagonists represented by ghosts. The four ghosts, Inky, Blinky, Pinky and Clyde, all attempt to hunt down the player using slightly different tactics from one another. Each ghost has its own behaviour: a bespoke algorithm that dictates how it attacks the player. Players also have the option to consume one of several power-pills that appear in each map. Power-pills allow the player to eat not just pills but also the enemy ghosts for a short period of time. While mechanically simple compared to modern video games, the game provides an interesting test-bed for AI algorithms learning to play games. The game world is relatively simple in nature, but complex enough that strategies can be employed for optimal navigation. Furthermore, the varied behaviours of the ghosts reinforce the need for strategy, since their unique albeit predictable behaviours necessitate different tactics. If problem solving can be achieved at this level, then there is opportunity for it to scale up to more complex games. While Pac-Man research began in earnest in the early 2000s, work by John Koza (Koza, 1992) had already discussed how Pac-Man provides an interesting domain for genetic programming: a form of evolutionary algorithm that learns to generate basic programs. The idea behind Koza’s work, and later that of (Rosca, 1996), was to highlight how Pac-Man provides an interesting problem for task-prioritisation. This is quite relevant given that we are often trying to balance the need to consume pills, all the while avoiding ghosts or — when the opportunity presents itself — eating them. About 10 years later, people became more interested in Pac-Man as a control problem. This research was often intended to explore the applications of artificial neural networks for the purpose of creating a generalised action policy: software that would know, at any given tick of the game, what the correct action to take would be. This policy would be built by playing the game a number of times and training the system to learn what is effective and what is not. Typically these neural networks are trained using an evolutionary algorithm that finds optimal network configurations by breeding collections of possible solutions and using a ‘survival of the fittest’ approach to cull weak candidates. (Kalyanpur and Simon, 2001) explored how evolutionary learning algorithms could be used to improve strategies for the ghosts. In time it was evident that the use of crossover and mutation — which are key elements of most evolutionary-based approaches — was effective in improving the overall behaviour. However, it’s important to note that the authors themselves acknowledge their work uses a problem domain similar to Pac-Man and not the actual game. (Gallagher and Ryan, 2003) uses a slightly more accurate representation of the original game. While the screenshot in the original article shows the full game, the actual implementation only used one ghost rather than the original four. 
In this research the team used an incremental learning algorithm that tailored a series of rules for the player that dictate how Pac-Man is controlled using a Finite State Machine (FSM). This proved highly effective in the simplified version they were playing. The use of artificial neural networks - a data structure that mimics the firing of synapses in the brain — was increasingly popular at the time (and once again in most recent research). Two notable publications on Pac-Man are (Lucas, 2005), which attempted to create a ‘move evaluation function’ for Pac-Man based on data scraped from the screen and processed as features (e.g. distance to closest ghost), while (Gallagher and Ledwich, 2007) attempted to learn from raw, unprocessed information. It’s notable here that the work by Lucas was in fact done on Ms. Pac-Man rather than Pac-Man. While perhaps not that important to the casual observer, this is an important distinction for AI researchers. Research in the original Pac-Man game caught the interest of the larger computational and artificial intelligence community. You could argue it was due to the interesting problem that the game presents or that a game as notable as Pac-Man was now considered of interest within the AI research community. While it is now something that appears commonplace, games — more specifically video games — did not receive the same attention within AI research circles as they do today. As high-quality research in AI applications in video games grew, it wasn’t long before those with a taste for Pac-Man research moved on to looking at Ms. Pac-Man given the challenges it presents — which we are still conducting research for in 2017. Ms. Pac-Man is odd in that it was originally an unofficial sequel: Midway, who had released the original Pac-Man in the United States, had become frustrated at Namco’s continued failure to release a sequel. While Namco did in time release a sequel dubbed Super Pac-Man, which in many ways is a departure from the original, Midway decided to take matters into their own hands. Ms. Pac-Man was — for lack of a better term — a mod; originally conceived by the General Computing Company based in Massachusetts. GCC had got themselves into a spot of legal trouble with Midway having previously created a mod kit for popular arcade game Missile Command. As a result, GCC were essentially banned from making further mod kits without the original game’s publisher providing consent. Despite the recent lawsuit hanging over them, they decided to show Midway their Pac-Man mod dubbed Crazy Otto, who liked it so much they bought it from GCC, patched it up to look like a true Pac-Man successor and released it in arcades without Namco’s consent (though this has been disputed). Note: For our younger audience, mod kits in the 1980s were not simply software we could use to access and modify parts of an original game. These were actual hardware: printed circuit boards (PCBs) that could either be added next to the existing game in the arcade unit, or replace it entirely. While nowhere near as common nowadays due to the rise of home console gaming, there are many enthusiasts who still use and trade PCBs fitted for arcade gaming. Ms. Pac-Man looks very similar to the original, albeit with the somewhat stereotypical bow on Ms. Pac-Man’s hair/head(?) and a couple of minor graphical changes. However the sequel also received some small changes to gameplay that have a significant impact. One of the most significant changes is that the game now has four different maps. 
In addition, the placement of fruit is more dynamic, and the fruit move around the maze. Lastly, a small change is made to the ghost behaviour such that, periodically, the ghosts will commit a random move. Otherwise, they will continue to exhibit their prescribed behaviour from the original game. Each of these changes has a significant impact on how both humans and AI subsequently approach the problem. Changes made to the maps do not have a significant impact upon AI approaches. For many of the approaches discussed earlier, it is simply another configuration of the topography used to model the maze. And if the agent is using more egocentric models for input (i.e. relative to Pac-Man), then the change is not really an issue, given the input is contextual. This is only an issue should the agent’s design require some form of pre-processing or expert rules that are based explicitly upon the configuration of the map. With respect to a human, this is also not a huge problem. The only real issue is that a human would have become accustomed to playing on a given map, devising strategies that utilise parts of the map to good effect. However, all they need is practice on the new maps. In time, new strategies can be formulated. The small change to ghost behaviour, which results in random moves occurring periodically, is highly significant. This is because the deterministic model of the original game is completely broken. Previously, each ghost had a prescribed behaviour: you could — with some computational effort — determine the state (and indeed the location) of a ghost at frame n of the game, where n is a certain number of steps ahead of the current state. Any implementation that is reliant upon this knowledge, whether it uses it as part of a heuristic or as an expert knowledge base that gives explicit instructions based on assumptions about their behaviour, is now sub-optimal. If the ghosts can make random decisions without any real warning, then we no longer have the same level of confidence in any of our ghost-prediction strategies. Similarly, this has an impact on human players. The deterministic behaviour of the ghosts in the original Pac-Man, while complex, can eventually be recognised by a human player. The leading human players could factor this behaviour, at some level, into their decision-making process. However, in Ms. Pac-Man, the change to a non-deterministic domain has a similar effect on humans as it does on AI: we can no longer say with complete confidence what the ghosts will do, given that they can make random moves. Evidence that a particular type of problem or methodology has gained some traction in a research community can be found in competitions. If a competition exists that is open to the larger research community, it is, in essence, a validation that the problem merits consideration. In the case of Ms. Pac-Man, there have been two competitions. The first was organised by Simon Lucas — at the time a professor at the University of Essex in the UK — and was held at the Conference on Evolutionary Computation (CEC) in 2007. It was subsequently held at a number of conferences — notably the IEEE Conference on Computational Intelligence and Games (CIG) — until 2011. http://dces.essex.ac.uk/staff/sml/pacman/PacManContest.html This competition used the screen capture approach previously mentioned in (Lucas, 2005), which was reliant on an existing version of the game; a toy sketch of the evaluate-each-move idea that such entries build on is given below. 
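As a concrete (and deliberately over-simplified) illustration of that evaluate-each-move idea, here is a short Python sketch. To be clear, this is not code from the competition or from Lucas’s paper: it assumes the positions of Pac-Man, the ghosts and the pills have already been extracted from the captured frame, and the feature choices and weights are invented purely for the example.

from math import hypot


def nearest(origin, points):
    # Straight-line distance from origin to the closest point (large if none exist).
    return min((hypot(px - origin[0], py - origin[1]) for px, py in points),
               default=1e9)


def evaluate_move(pacman, move, ghosts, pills):
    # Score the tile Pac-Man would occupy after taking 'move'.
    x, y = pacman[0] + move[0], pacman[1] + move[1]
    ghost_distance = nearest((x, y), ghosts)  # bigger is safer
    pill_distance = nearest((x, y), pills)    # smaller is better
    return 2.0 * ghost_distance - 1.0 * pill_distance  # hand-picked toy weights


def choose_move(pacman, ghosts, pills, legal_moves):
    # Pick the legal move with the highest evaluation.
    return max(legal_moves, key=lambda m: evaluate_move(pacman, m, ghosts, pills))


if __name__ == "__main__":
    # In a screen-capture entry these coordinates would come from the
    # feature-extraction step; here they are simply made up.
    pacman = (10, 10)
    ghosts = [(14, 10), (2, 3)]
    pills = [(9, 10), (11, 12)]
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    print("chosen move:", choose_move(pacman, ghosts, pills, moves))

Lucas (2005) learned the evaluator itself (a neural network over features such as these) rather than hand-tuning weights, but the outer loop of a screen-capture entry has this shape: rebuild a rough game state from pixels, score every legal move, and play the best one.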
While the organisers would use Microsoft’s own version from the ‘Revenge of Arcade’ title, you could also use the likes of webpacman for testing, given it was believed to run the same ROM code. As shown in the screenshots accompanying the original article, the code is actually taking information directly from the running game. One benefit of this approach is that it prevents the AI developer from accessing the code to potentially ‘cheat’: you can’t access source code and make calls to the likes of the ghosts to determine their current move. Instead the developer is required to work with the exact same information that a human player would. A video of the winner of the IEEE CIG 2009 competition, ICE Pambush 3, can be seen in the original article. In 2011, Simon Lucas, in conjunction with Philipp Rohlfshagen and David Robles, created the Ms Pac-Man vs Ghosts competition. In this iteration, the ‘screen scraping’ approach had been replaced with a Java implementation of the original game. This provided an API to develop your own bot for competitions. This iteration ran at four conferences between 2011 and 2012. One of the major changes to this competition is that you can now also write AI controllers for the ghosts. Competitors’ submissions were then pitted against one another. The ranking of submissions for both Ms. Pac-Man and the ghosts from the 2012 league can be found in the original article. During the earlier competition, there was continued interest in the use of learning algorithms, typically employing an evolutionary algorithm — which we had seen in earlier research — to evolve the code that is most effective at this problem. This ranged from evolving ‘fuzzy systems’ that use rules driven by fuzzy logic (yes, that is a real thing), shown in (Handa, 2008), to the use of influence maps in (Wirth, 2008) and a different take that uses ant colony optimisation to create competitive players (Emilio et al, 2010). This research also stirred interest from researchers in reinforcement learning: a different kind of learning algorithm that learns from the positive and negative impacts of actions. Note: It has been argued that reinforcement learning algorithms are similar to how the human brain operates, in that feedback is sent to the brain upon committing actions. Over time we then associate certain responses with ‘good’ or ‘bad’ outcomes. Placing your hand over a naked flame is quickly associated with ‘bad’, given that it hurts! Simon Lucas and Peter Burrow took to the competition framework as a means to assess whether reinforcement learning, specifically an approach called Temporal Difference Learning, would yield stronger returns than evolving neural networks (Burrow and Lucas, 2009). The results appeared to favour the use of neural nets over the reinforcement learning approach. Despite that, one of the major contributions Ms. Pac-Man has generated is research into Monte Carlo methods: an approach where repeated sampling of states and actions allows us to ascertain not only the reward that we will typically attain having made an action, but also the ‘value’ of the state. More specifically, there has been significant exploration of whether Monte-Carlo Tree Search (MCTS), an algorithm that assesses the potential outcomes at a given state by simulating them, could prove successful. MCTS has already proven to be effective in games such as Go (Chaslot et al, 2008) and Klondike Solitaire (Bjarnason et al, 2009). Naturally — given this is merely an article on the subject and not a literature review — we cannot cover this in immense detail, but the core of the algorithm is small enough to sketch below. 
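For readers who have never met the algorithm, here is a minimal UCT-style sketch of MCTS in Python. It is written against a tiny invented stand-in for the game (a one-dimensional corridor of pills with a randomly wandering ghost) rather than the competition’s Java framework or any published entry, so every class, constant and interface below is an assumption made purely for illustration.

import math
import random


class GameState:
    # Toy stand-in: collect pills along a short corridor while a ghost wanders.

    def __init__(self, pacman=0, ghost=9, pills=None, ticks=0):
        self.pacman, self.ghost = pacman, ghost
        self.pills = set(range(1, 9)) if pills is None else set(pills)
        self.ticks = ticks

    def legal_moves(self):
        return [-1, +1]  # step left or right along the corridor

    def apply(self, move):
        nxt = GameState(self.pacman + move, self.ghost, self.pills, self.ticks + 1)
        nxt.pacman = max(0, min(9, nxt.pacman))
        nxt.ghost = max(0, min(9, nxt.ghost + random.choice([-1, 1])))  # random ghost step
        nxt.pills.discard(nxt.pacman)  # eat a pill if we landed on one
        return nxt

    def is_terminal(self):
        return self.pacman == self.ghost or not self.pills or self.ticks >= 40

    def score(self):
        eaten = 8 - len(self.pills)
        return eaten - (5 if self.pacman == self.ghost else 0)  # penalise being caught


class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.value = [], 0, 0.0

    def ucb1(self, c=1.4):  # balance average reward against how rarely this child was tried
        if self.visits == 0:
            return float("inf")
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))


def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down the tree by UCB1 while nodes are fully expanded.
        while node.children and len(node.children) == len(node.state.legal_moves()):
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one move that has not been tried from this node yet.
        if not node.state.is_terminal():
            untried = [m for m in node.state.legal_moves()
                       if m not in {c.move for c in node.children}]
            move = random.choice(untried)
            child = Node(node.state.apply(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: play out the rest of the game with random moves.
        state = node.state
        while not state.is_terminal():
            state = state.apply(random.choice(state.legal_moves()))
        reward = state.score()
        # 4. Backpropagation: update statistics along the path back to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Play the move whose subtree was explored the most.
    return max(root.children, key=lambda child: child.visits).move


if __name__ == "__main__":
    print("suggested first move:", mcts(GameState()))

Real competition entries differ in the details: the tree is typically reused from one frame to the next, the purely random playouts are replaced with an informed rollout policy, and the search has to return a move within whatever per-move time budget the competition enforces. But the four steps above (selection, expansion, simulation, backpropagation) are the core that the papers discussed next build on.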
However, there has been a significant number of papers focussed on this approach. For those interested, I would advise you to read (Browne et al., 2012), which gives an extensive overview of the method and its applications. One of the reasons that this algorithm proves so useful is that it attempts to address the issue of whether your actions will prove harmful in the future. Much of the research discussed in this article is very good at dealing with immediate or ‘reflex’ responses. However, few approaches would determine whether actions would hurt you in the long term. This is hard for an AI to determine without putting some processing power behind it, and even harder when working in a dynamic video game that requires quick responses. MCTS has proven useful since it can simulate whether an action taken on the current frame will be useful 5/10/100/1000 frames in the future and has led to significant improvements in AI behaviour. While Ms. Pac-Man helped push MCTS research, many researchers have now moved on to the Physical Travelling Salesman Problem (PTSP), which provides its own unique challenges due to the nature of the game environment. Ms. Pac-Man is still to date an interesting research area given the challenge that it presents. We are still seeing research conducted within the community as we attempt to overcome the challenge that one small change to the game code presented. In addition, we have moved on from simply focussing on representing the player and started to focus on the ghosts as well, as in the aforementioned Ms Pac-Man vs Ghosts competition. While the gaming community at large has more or less forgotten about the series, it has had a significant impact on the AI research community. While the interest in Pac-Man and Ms. Pac-Man is beginning to dissipate, it has encouraged research that has made a significant contribution to artificial and computational intelligence in general. http://www.pacman-vs-ghosts.net/ — The homepage of the competition where you can download the software kit and try it out yourself. http://pacman.shaunew.com/ — An unofficial remake that is inspired by the Pac-Man Dossier by Jamey Pittman. (Bjarnason, R., Fern, A. and Tadepalli, P., 2009) Lower Bounding Klondike Solitaire with Monte-Carlo Planning. Proceedings of the International Conference on Automated Planning and Scheduling. (Browne, C., Powley, E., Whitehouse, D., Lucas, S.M., Cowling, P., Rohlfshagen, P., Tavener, S., Perez, D., Samothrakis, S. and Colton, S., 2012) A Survey of Monte Carlo Tree Search Methods. IEEE Transactions on Computational Intelligence and AI in Games, pages 1–43. (Burrow, P. and Lucas, S.M., 2009) Evolution versus Temporal Difference Learning for Learning to Play Ms Pac-Man. Proceedings of the 2009 IEEE Symposium on Computational Intelligence and Games. (Emilio, M., Moises, M., Gustavo, R. and Yago, S., 2010) Pac-mAnt: Optimization Based on Ant Colonies Applied to Developing an Agent for Ms. Pac-Man. Proceedings of the 2010 IEEE Symposium on Computational Intelligence and Games. (Gallagher, M. and Ledwich, M., 2007) Evolving Pac-Man Players: What Can We Learn From Raw Input? Proceedings of the 2007 IEEE Symposium on Computational Intelligence and Games. (Gallagher, M. and Ryan, A., 2003) Learning to Play Pac-Man: An Evolutionary, Rule-based Approach. Proceedings of the 2003 Congress on Evolutionary Computation (CEC). (Chaslot, G.M.B., Winands, M.H. and van den Herik, H.J., 2008) Parallel Monte-Carlo Tree Search. Computers and Games (pp. 60–71). Springer Berlin Heidelberg. 
(Handa, H., 2008) Evolutionary Fuzzy Systems for Generating Better Ms. PacMan Players. Proceedings of the IEEE World Congress on Computational Intelligence. (Kalyanpur, A. and Simon, M., 2001) Pacman using genetic algorithms and neural networks. (Koza, J., 1992) Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press. (Lucas, S.M., 2005) Evolving a Neural Network Location Evaluator to Play Ms. Pac-Man. Proceedings of the 2005 IEEE Symposium on Computational Intelligence and Games. (Pittman, J., 2011) The Pac-Man Dossier. Retrieved from: http://home.comcast.net/~jpittman2/pacman/pacmandossier.html (Rosca, J., 1996) Generality Versus Size in Genetic Programming. Proceedings of the Genetic Programming Conference 1996 (GP’96). (Wirth, N., 2008) An influence map model for playing Ms. Pac-Man. Proceedings of the 2008 Computational Intelligence and Games Symposium. Originally published at aiandgames.com on February 10, 2014 — updated to include more contemporary Pac-Man research references. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. AI and games researcher. Senior lecturer. Writer/producer of YouTube series @AIandGames. Indie developer with @TableFlipGames. " Matt Wiese,4,3,https://medium.com/@mattwiese/digital-companionship-8d4760c57034?source=tag_archive---------4----------------,Digital Companionship – Matt Wiese – Medium,"Recently, I chose to treat myself to a movie I’ve been eyeing for a while: Her. The plot revolves around a letter-writer who falls in love with his computer’s artificial intelligence as a way to cope with his divorce. A complicated story which pleases viewers with both laughs and the occasional tear. Provocative, if only for its “high horse” conclusion. However, Samantha — the AI’s self-proclaimed identity — interacts with the protagonist Theodore Twombly through a couple of avenues. The one I am most interested in is his retro computer terminal: a mere white and plastic monitor which he speaks to through a microphone that one surmises is located somewhere on the exterior. Initially, I was perplexed that he only had a monitor and no desktop to go with it, but it then hit me like a Doh! moment for Homer Simpson: his computer is an all-in-one. A concept and design that, to my limited knowledge, was popularized by Apple’s iMac. This got me thinking: what if Apple developed its pseudo-intelligent digital assistant Siri for use on its computers with microphone inputs, such as its iMacs and MacBooks? “Well,” I thought, “I can’t be the first person to have thought of this,” and so I did a bit of digging. Lo and behold, Apple just recently filed a patent for this very purpose. What a perfect tool, if tuned more finely over this period of time, to be integrated into the desktop environment. Fire up Siri with a custom key combination, and ask her the current trading price of Tesla? Great! Designing an invitation and want help with directions, but you’re too much of a lard to open a browser tab? Awesome! Need help burying a body while playing Minecraft? Genius! Yet, I wouldn’t quite like Siri to develop into a “real” person, with emotions and all that’s attached, at least at the moment. I’m content with human beings and am in no need to find companionship with bytes like Her’s Theodore Twombly (though I don’t blame him for doing so). Instead, a digital tool (assistant, if you will) with a breadth of tools for analyzing data and helping me with workflow would be a pleasure. 
If only Apple would release a Siri API in the near future, oh the possibilities. A tool, yes, indeed just like the first-generation robots from Isaac Asimov’s I, Robot. An artificial intelligence that behaves without feeling and can assist me in a wide variety of tasks without emotional interference or a possible uncanny-valley side-effect. Even if Apple doesn’t jump on this interesting opportunity, I’m sure Microsoft will with Cortana or perhaps another competitor. I’d just enjoy the sheer novelty of talking with my computer, which harkens back to my days of talking to the computer as a kid. This time, though, I won’t be yelling at it to boot Doom without crashing; no, I’ll be complaining about why my for loop throws an error. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Topics that interest me " Matt O'Leary,373,12,https://howwegettonext.com/i-let-ibm-s-robot-chef-tell-me-what-to-cook-for-a-week-d881fc884748?source=tag_archive---------0----------------,I Let IBM’s Robot Chef Tell Me What to Cook for a Week,"Originally published at www.howwegettonext.com. If you’ve been following IBM’s Watson project and like food, you may have noticed growing excitement among chefs, gourmands and molecular gastronomists about one aspect of its development. The main Watson project is an artificial intelligence that engineers have built to answer questions in natural language — that is, questions phrased the way people normally talk, not in the stilted way a search engine like Google understands them. And so far, it’s worked: Watson has been helping nurses and doctors diagnose illnesses, and it’s also managed a major “Jeopardy!” win. Now, Chef Watson — developed alongside Bon Appetit magazine and several of the world’s finest flavor-profilers — has been launched in beta, enabling you to mash recipes according to ingredients of your own choosing and receive taste-matching advice which, reportedly, can’t fail. While some of the world’s foremost tech luminaries and conspiracy theorists are a bit skeptical about the wisdom of A.I., if it’s going to be used at all, allowing it to tell you what to make out of a fridge full of unloved leftovers seems like an inoffensive enough place to start. I decided to put it to the test. While I’ve been employed as a food writer for well over a decade, I’ve also spent a good part of the last nine years working on and off in kitchens. Figuring out how to use “spare” ingredients has become quite commonplace in my professional life. I’ve also developed a healthy disregard for recipes as anything other than sources of inspiration (or annoyance), but for the purposes of this experiment I am willing to follow along and try any ingredient at least once. So, with this in mind, I’m going to let Watson tell me what to eat for a week. I’ve spent a good amount of time playing around with the app, which can be found here, and I’m going to follow its instructions to the letter where possible. I have an audience of willing testers for the food and intend to do my best in recreating its recipes on the plate. Still, I’m going to try to test it a bit. I want to see whether or not it can save me time in the kitchen; whether it has any amazing suggestions for dazzling taste matches; whether it can help me use things up in the fridge; and whether or not it’s going to try to get me to buy a load of stuff I don’t really need. A lot of work has gone into the creation of this app — and a lot of expertise. But is it useable? Can human beings understand its recipes? 
Will we want to eat them? Let’s find out. A disclaimer before we start: Chef Watson isn’t great at telling you when stuff is actually ready and cooked. You need to use your common sense. Take all of its advice as advice and inspiration only. It’s the flavors that really count. Monday: The Tailgating Corn Salmon Sandwich My first impression is that the app is intuitive and pretty simple to use. Once you’ve added an ingredient, it suggests a number of flavor matches, types of dishes and “moods” (including some off-the-wall ones like “Mother’s Day”). Choose a few of these options and the actual recipes begin to bunch up on the right of the screen. I selected salmon and corn, then opted for the wildly suggestive “Tailgating corn salmon sandwich.” The recipe page itself has links to the original Bon Appetit dish that inspired your A.I. mélange, accompanied by a couple of pictures. There’s a battery of disclaimers stating that Chef Watson really only wants to suggest ideas, rather than tell you what to eat — presumably to stop people who want to try cooking with fiberglass, for example, from launching “no win, no fee” cases. My own salmon tailgating recipe seemed pretty straightforward. There are a couple of nice touches on the page, with regard to usability: You can swap out any ingredients that you might not have in stock for others, which Watson will suggest (it seems fond of adding celery root to dishes). For this first attempt I decided to follow Watson’s advice almost to a T. I didn’t have any garlic chile sauce but managed to make a presumably functional analog out of some garlic and chili sauce. The only other change I made involved adding some broad beans, because I like broad beans. During prep, I employed a nearly unconscious bit of initiative, namely when I cooked the salmon. It’s entirely likely that Watson was, as seemed to be the case, suggesting that I use raw salmon, but it’s Monday night and I’m not in the mood for anything too mind-bending. Team Watson: If I ruined your tailgater with my pig-headed insistence on cooked fish, I’m sorry. Although I’m not too sorry because, you know, it was actually a really good dish. I was at first unsure — the basil seemed like a bit of an afterthought; I wasn’t sure the lime zest was necessary; and cold salmon salad on a burger bun isn’t really an easy sell. But damn it, I’d make that sandwich again. It was missing some substance overall. It made enough for two small buns, so I teamed it up with a nice bit of Korean-spiced, pickled cucumber on the side, which worked well. My fellow diner deemed it “fine, if a little uninteresting” — and yes, maybe it could have done with a bit more sharpness and depth, and maybe a little more “a computer told me how to make this” flavor wackiness, but overall: Well done. Hint! Definitely add broad beans. They totally worked. Now, to mull over what “tailgating” might mean... Tuesday: Spanish Blood Sausage Porridge It was day two of the Chef Watson “guest slot” in the kitchen, and things were about to get interesting. Buoyed by yesterday’s Tailgating Salmon Sandwich success, I decided to give Watson something to sink its digital teeth into and supply only one ingredient: blood sausage. I also specified “main” as a style, really so that he/she/it knew that I wasn’t expecting dessert. If I’m being very honest, I’ve read more appetizing recipes than blood sausage porridge. Even the inclusion of the word “Spanish” doesn’t do anything to fancy it up. 
And, a bit concerningly, this is a recipe that Watson has extrapolated from one for Rye Porridge with Morels, replacing the rye with rice, the mushroom with sausage and the original’s chicken livers with a single potato and one tomato. Still, maybe it would be brilliant. But unlike yesterday, I ran into some problems. I wasn’t sure how many tomatoes and potatoes Watson expected me to have here — the ingredients list says one of each; the method suggests many — nor why I had to soak the tomato in boiling water first, although it makes sense in the original mushroom-centric method. Additionally, Watson offered the whimsical instruction to just “cook” the tomatoes and potatoes, presumably for as long as I feel like. There’s a lot of butter involved in this recipe and rather too much liquid recommended: eight cups of stock for one-and-a-half of rice. I actually got a bit fed up after four cups and stopped adding more. Forty to 50 minutes of cooking time was a bit too long, too — again, that’s been directly extracted from the rye recipe. But these were mere trifles. The dish tasted great. It’s a lovely blend of flavors and textures, thanks to the blood sausage and the potato. The butter works brilliantly and the tomato on top is a nice touch. And it proves Watson’s functionality. You can suggest one ingredient that you find in the fridge, use your initiative a bit and you’ll be left with something lovely. And buttery. Lovely and buttery. Well done, Watson! Wednesday: Diner Cod Pizza When I read this recipe, I wondered whether this was going to be it for me and Watson. “Diner,” “cod” and “pizza” are three words that don’t really belong together, and the ingredients list seemed more like a supermarket sweep than a recipe. Now that I’ve actually made the meal, I don’t know what to think about anything. You might remember a classic 1978 George A. Romero-directed horror film called “Dawn of the Dead.” Its 2004 remake, following the paradigm shift to running zombies in “28 Days Later,” suffered critically. My impression of this remake was always that if it’d just been called something different — “Zombies Go Shopping,” for instance — every single person who saw it would have loved it. As it was, viewers thought it seemed inauthentic, and it gathered what was essentially some unfair criticism. (See also the recent “RoboCop” remake or, as I call it, “CyberSwede vs. Detroit.”) This meal is my culinary “Dawn of the Dead.” If only Watson had called it something other than pizza, it would have been utterly perfect. It emphatically isn’t a pizza. It has as much in common with pizza as cake does. But there’s something about radishes, cod, ginger, olives, tomatoes and green onions on a pizza crust that just works remarkably well. To be clear, I fully expected to throw this meal away. I had the website for curry delivery already open on my phone. That was all before I ate two of the pizzas. They taste like nothing on earth. The addition of Comté cheese and chives is the sort of genius/absurdity that makes people into millionaires. I was, however, nervous to give one to my pregnant fiancée; the ingredients are so weird that I was just sure she’d suffer some really strange psychic reaction or that the baby would grow up to be extremely contrary. Be careful with this recipe’s preparation: as I’ve found with Watson, it doesn’t tell you how to ensure that your fish is cooked; nor does it tell you how long to pre-bake the crust base. These kinds of things are really important. 
You need to make sure this dish is cooked properly. It takes longer than you might expect. I’m writing this from Sweden, the home of the ridiculous “pizza,” and yet I have a feeling that if I were to show this recipe to a chef who ordinarily thinks nothing of piling a kilo of kebab meat and Béarnaise sauce on bread and serving it in a cardboard box with a side salad of fermented cabbage, he or she would balk and tell me that I’ve gone too far. Which would be his or her loss. I think I’m going to have to take this to “Dragon’s Den” instead. Watson, I don’t know how I’m going to cope with normal recipes after our little holiday together. You’re changing the way I think about food. Thursday: Fall Celery Sour Cream Parsley Lemon Taco Following yesterday’s culinary epiphany, I was keen to keep a cool head and a critical eye on Chef Watson, so I decided to road-test one theory from an article I found on the Internet. It mentioned that some of the most frequently discarded items in American fridges are celery, sour cream, fresh herbs and lemons. Let’s not dwell too much on the “luxury problems” aspect of this (I can’t imagine that people everywhere in the world are lamenting the amount of sour cream and flat-leaf parsley they toss) and focus instead on what Watson can do with this admittedly tricky-sounding shopping list. What it did was this: Immediately add shrimp, tortillas and salsa verde. The salsa verde it recommended, from an un-Watsoned recipe courtesy of Bon Appetit, was fantastic. It’s nothing like the salsa verde I know and love, with its capers and dill pickles and anchovies: This iteration required a bit of a simmer, was super-spicy and delicious. (I had to cheat and use normal tomatoes instead of tomatillos, but I don’t think it made a huge difference.) The marinade for the shrimp was unusual in that like a lot of what Watson recommends it used a ton of butter. A hefty wallop of our old friend kosher salt, too. Now, I’ve worked as a chef on and off for several years so am unfazed by the appearance of salt and butter in recipes. They’re how you make things taste nice. However, there’s no getting away from the fact that I bought a stick of butter at the start of the week and it’s already gone. The assembled tacos were good — they were uncontroversial. My dining companion deemed the salsa “a bit too spicy,” but I liked the kick it gave the dish and the sour cream calmed it down a bit. It struck me as a bit of a shame to fire up the barbecue for only about two minutes’ worth of cooking time, but it’s May and the sun is shining so what the heck. Was this recipe as absurd as yesterday’s? Absolutely not. Was it as memorable? Sadly, I don’t think so. Would I make it again? I’m sorry, Watson, but probably not. These tacos were good but ultimately not worth the prep hassle. Friday: Mexican Mushroom Lasagna Before I start, I don’t want you to get the impression that my love affair (which reached the height of its passion on Wednesday) with Watson is over. It absolutely isn’t. I have been consistently impressed with the software’s intelligence, its ease of use and the audacity of some of its suggestions. For flavor-matching, it’s incredible. It really works. It probably won’t save you any money; it won’t make you thin; and it won’t teach you how to actually cook — all of that stuff you have to work out for yourself. But, at this stage, it’s a distinctly impressive and worthwhile project. Do give it a go. But... be prepared to have to coax something workable out of it every once in a while. 
Today, it took me a long time to find a meat-free recipe which didn’t, when it came down to it, contain some sort of meat. I selected “meat” as an option for what I didn’t want to include, and it took me to a recipe for sausage lasagne. With one-and-a-half pounds of sausage in it. I removed the sausage, and it replaced it with turkey mince. Maybe someone just needs to tell Watson that neither sausages nor turkeys grow on trees. After much tinkering and submitting and resubmitting, the recipe I ended up with is for lasagne topped with a sort of creamy mashed potato sauce. It’s very easy and it’s a profoundly smart use of ingredients. The lasagne is not the world’s most aesthetically appealing dish, and it’s not as astonishingly flavored as some of this week’s other revelations, but I don’t think I’ll be making my cheese sauce in any other way from this point onwards. Top marks. And, in essence, this kind of sums up Watson for me. You need to tinker with it a bit before you can find something usable. You may need to make a “do I want to put mashed potato on this lasagne?” leap of faith, and you’re going to have to actually go with it if you want the app’s full benefit. You’ll consume a lot of dairy products, and you might find yourself daydreaming about nice, simple, unadorned salads if you decide to go all-in with its suggestions. But an A.I. that can tell us how to make a pizza out of cod, ginger and radishes that you know is going to taste amazing? One that will gladly suggest a workable recipe for blood sausage porridge and walk you through it without too much hassle? That gives you a “how crazy” option for each ingredient? That is only designed to make the lives of food enthusiasts more interesting? Why on earth not? Watson and I are going to be good friends from this point forward, even if we don’t speak every day. And I can’t wait to introduce it to others. Now, though, I’m going to only consume smoothies for a week. Seriously, if I even look at butter in the next few days, I’m probably going to puke. This fall, Medium and How We Get To Next are exploring the future of food and what it means for us all. To get the latest and join the conversation, you can follow Future of Food. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Inspiring stories about the people and places building our future. Created by Steven Johnson, edited by Ian Steadman, Duncan Geere, Anjali Ramachandran, and Elizabeth Minkel. Supported by the Gates Foundation. " Tim O'Reilly,1.3K,6,https://wtfeconomy.com/the-wtf-economy-a3bd5f52ef00?source=tag_archive---------1----------------,The WTF Economy – From the WTF? Economy to the Next Economy,"WTF?! In San Francisco, Uber has 3x the revenue of the entire prior taxi and limousine industry. WTF?! Without owning a single room, Airbnb has more rooms on offer than some of the largest hotel groups in the world. Airbnb has 800 employees, while Hilton has 152,000. WTF?! Top Kickstarters raise tens of millions of dollars from tens of thousands of individual backers, amounts of capital that once required top-tier investment firms. WTF?! What happens to all those Uber drivers when the cars start driving themselves? AIs are flying planes, driving cars, advising doctors on the best treatments, writing sports and financial news, and telling us all, in real time, the fastest way to get to work. They are also telling human workers when to show up and when to go home, based on real-time measurement of demand. The algorithm is the new shift boss. 
WTF?! A fabled union organizer gives up on collective bargaining and instead teams up with a successful high tech entrepreneur and investor to go straight to the people with a local $15 minimum wage initiative that is soon copied around the country, outflanking a gridlocked political establishment in Washington. What do on-demand services, AI, and the $15 minimum wage movement have in common? They are telling us, loud and clear, that we’re in for massive changes in work, business, and the economy. What is the future when more and more work can be done by intelligent machines instead of people, or only done by people in partnership with those machines? What happens to workers, and what happens to the companies that depend on their purchasing power? What’s the future of business when technology-enabled networks and marketplaces are better at deploying talent than traditional companies? What’s the future of education when on-demand learning outperforms traditional universities in keeping skills up to date? Over the past few decades, the digital revolution has transformed the world of media, upending centuries-old companies and business models. Now, it is restructuring every business, every job, and every sector of society. No company, no job is immune to disruption. I believe that the biggest changes are still ahead, and that every industry and every organization will have to transform itself in the next few years, in multiple ways, or fade away. We need to ask ourselves whether the fundamental social safety nets of the developed world will survive the transition, and more importantly, what we will replace them with. We need a focused, high-level conversation about the deep ways in which computers and their ilk are transforming how we do business, how we work, and how we live. Just about everyone’s asking WTF? (“What the F***?” but also, more charitably “What’s the future?”) That’s why I’m launching a new event called Next:Economy (What’s The Future of Work?), to be held at the Palace Hotel in San Francisco Nov 12 and 13, 2015. My goal is to shed light on the transformation in the nature of work now being driven by algorithms, big data, robotics, and the on-demand economy. We put on a lot of events at O’Reilly. Many of them have a singular focus and are aimed at practitioners of a specific discipline: Strata and Hadoop World is an event about data science, Velocity about web performance and operations, Solid about the new hardware movement, and OSCON about open source software development. But this one is more exploratory, aimed at a business audience trying to come to grips with trends that are already felt but not well understood. Putting together an event like this is a great way to discover how a lot of disparate people, ideas, and trends fit together. I’ve been engaging some of the smartest people I know in fields as diverse as robotics, AI, the on-demand economy, and the economics of labor. I’m thinking hard about the key drivers of some of today’s most successful startups, like Uber and AirBnb, and about what technology like driverless cars, Siri, Google Now, Microsoft Cortana, and IBM Watson teach us about the future. And I’m starting to see the connections. Over the next weeks and months, I’ll be posting follow up pieces explaining in more detail my thinking on key issues we’ll be exploring at the event. I will be leading a robust discussion here on Medium with some of the best thinkers and movers on these issues — a conversation that welcomes all voices. 
We’ll be discussing both here and at the event how augmented workers form a common thread between the strategies of companies as diverse as Uber, GE, and Microsoft, how companies in every business sector can harness the power and scalability of networked platforms and marketplaces, why the divisive debates about the labor practices of on-demand companies might provide a path to a better future for all workers, why the on-demand services of the future require a new infrastructure of on-demand education, and why building services that uncover true unmet demands and solve hard problems are ultimately the best way to create jobs. In the meantime, head on over to the conference site to see some of the amazing speakers we’ve already signed on (many more to come) and a taste of what they’ll be covering. In many ways, an event like this is the product of the people who are there — speakers and attendees alike — so I’ve tried to tell the story of the themes we are exploring through the people who will be there. Each speaker page provides not just a biography of the speaker, but a selection of provocative quotes from what they’ve written. In the near future, we’ll be providing additional opportunities for discussion and exploration. My hope for this event is that it becomes more than a conference. For it to be measured as a success, it must catalyze action. I want work that comes out of this collision of ideas to inspire entrepreneurs to tackle missing pieces of the Next:Economy puzzle, to help frame the right government policies so that innovations in the nature of work are encouraged rather than repressed, and to focus every industry on rebuilding the economy by solving hard problems and creating what Steve Jobs might have called “insanely great” new services. Tim O’Reilly is the founder and CEO of O’Reilly Media and a partner at O’Reilly AlphaTech Ventures (OATV). Tim has a history of convening conversations that reshape the industry. In 1998, he organized the meeting where the term “open source software” was agreed on, and helped the business world understand its importance. In 2004, with the Web 2.0 Summit, he defined how “Web 2.0” represented not only the resurgence of the web after the dot com bust, but a new model for the computer industry, based on big data, collective intelligence, and the internet as a platform. In 2009, with his “Gov 2.0 Summit,” he framed a conversation about the modernization of government technology that has shaped policy and spawned initiatives at the Federal, State, and local level, and around the world. He has now turned his attention to implications of the on-demand economy, AI, and other technologies that are transforming the nature of work and the future shape of the business world. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder and CEO, O'Reilly Media. Watching the alpha geeks, sharing their stories, helping the future unfold. How work, business, and society face massive, technology-driven change. A conversation growing out of Tim O’Reilly’s book WTF? What’s the Future and Why It’s Up To Us, and the Next:Economy Summit. " James Cooper,57,3,https://render.betaworks.com/announcing-poncho-the-weatherbot-bd14255e1b25?source=tag_archive---------2----------------,Announcing Poncho the WeatherBot – Render-from-betaworks,"You can now get personal weather forecasts in Slack. 
UPDATE: Since publishing this piece in November 2015 the Poncho Weather Messenger bot launched on stage at the Facebook conference and is now the most popular bot on Facebook. If you are new to bots this is a great place to start. Try it out, here. You’ll like it. Poncho is a personalized weather service from the coolest of cats. Who needs boring and meaningless data when you can get personalized forecasts with gifs and text that will make you smile - whatever the weather. Vanity Fair said, ‘It’s like being pals with the Weatherman’. Which is true, if your weatherman was super cool. Up until now we have been a text and email service. You get texts or emails in the morning and evenings. You can sign up for that right here. But we know that people want more Poncho. You guys want Poncho on call. With new Slack integration, we’ve got you covered. If you are using Slack for your messaging needs (and if not, why not?) we have some uh-maze-ing news for you. That’s right — you can summon up your very own forecast from Poncho in Slack. We are joining others like Lyft and Foursquare as Slack officially launches Slash Command today. OK, first up let me tell you how it works. You simply type in ‘/poncho’ and your zipcode into Slack and then BOOM: the next thing you’ll see is your very own forecast for that zipcode, resplendent with text and gifs and everything. So for example in the video I typed in ‘/poncho 11217’ and I got a forecast for my zipcode in Brooklyn. It was Halloween so the theme was ‘The Shining’ which is why the forecast was Weather spelt backwards and the gif was the scary kid from the film. If you are new to Poncho you’ll soon figure out that half the fun is deciphering the messages our wonderful editorial team put together. Setting up Poncho in Slack is super simple. Just click the ‘Add to Slack’ button. Yes, that one up there. Make sure to add it to all the channels so that Poncho will be available wherever you want. You wouldn’t want your friends to miss out, would you? Unless of course you’re keeping all the best jokes for yourself. I’ve seen that happen. All righty. See you on Slack, err, slackers. (And if you are not on Slack you can still use the text and email version or wait for our super cute app which will be coming out soon.) From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Head of Creative at betaworks, New York. Ideas and Observations from betaworks " Joel Leeman,69,5,https://becominghuman.ai/i-think-i-m-slowly-turning-into-a-cyborg-cbecfa8462df?source=tag_archive---------3----------------,I think I’m slowly turning into a cyborg – Becoming Human: Artificial Intelligence Magazine,"It’s only a matter of time. As much of life moves online, atomized into bits on apps, social networks and a variety of other web products, I’m beginning to notice more and more that I rely on these tools to supplement my brainpower. It sounds melodramatic, I realize, but go with me for a second here. Take my schedule. At work, I am glued to Outlook in an unhealthy way. Like, if I don’t have that little ding go off 15 minutes before a meeting starts, there’s no way I’m going to make it. Meetings come and go and change and happen all the time, but I don’t really pay attention to memorizing any of the details because I know I can always glance at my phone to know what I’m supposed to be doing. I hold a similar unhealthy relationship with Facebook, too. 
Back in the early days of Facebook I actually really enjoyed logging in every day, seeing whose birthday it was, and writing a little note of well wishes. Fast forward to the present day, and I’m terrible at wishing people happy birthday, mostly because the 4–7 friends of mine who have a birthday every day overwhelm me! I’m so scared of missing one or two that I neglect all of them. Having the ability to know when anyone’s special day is has put a damper on actually remembering a few of them without the aid of Facebook. Do you know anyone’s birthdays by memory any more? Or have you, like me, lost that part of your memory? In fact, if I don’t write something down with pen and paper (a practice vastly underappreciated IMHO), it feels like it might be lost forever, even if it’s just a click away. And I’ve actually caught myself using Twitter as a partial brain aid. What was I up to last week? Oh, I’ll just scroll back and see what I was Tweeting about. Or maybe I’ll turn to Instagram, my little online scrapbook of what I’ve been up to (or what I’ve shown the world I’m up to). I’m also quite directionally challenged, and rely on my iPhone way too much to get around (though maybe I’m just truly terrible at directions, who knows). But why would I take the time to study streets and landmarks when I’ve got a world’s worth of maps sitting in my pocket? (Side note, are we losing the art of getting lost?) And there’s nothing wrong with all that, I suppose. It’s more that I have a weird feeling maybe I’m relying on technology a little much? What prompted my ruminating on all this was a video I watched asking random couples if they knew each other’s phone numbers by heart. Spoiler: None of them did. I actually made an effort several years ago to learn my partner’s number, but if I had never consciously made that decision, I certainly wouldn’t know it now. Losing these tiny archaic practices individually doesn’t mean much, but when you add them up, it starts to feel a bit overwhelming, doesn’t it? This cyborg vs. luddite thing has especially jumped into the spotlight with wearables finally coming to market. Google Glass has largely been seen as a flop, but it shouldn’t be taken lightly that people were literally choosing to wear a computer on their face all day. Or of course, take the Apple Watch (and other smartwatches like it). Yet another device created to fill a need that no one has, but will inevitably become an indispensable piece of hardware that we all must have until smart chips can just be implanted in our brains. One of my favorite writers, John Herrman, describes it quite brilliantly: Though I’m sure I will have one within two years. Okay, so I’m not just a grumpy old technophobe either. I see value in technology. Heck, I work and therefore pretty much live online. I like gadgets as much as the next guy. In fact, I rather enjoyed a recent episode of Invisibilia (an incredibly interesting new podcast from NPR) detailing the story of the original cyborg, a guy at MIT in the ’90s who built a very early version of what is essentially Google Glass, and wore it for years. He used his face computer to recall bits of information at a moment’s notice about prior interactions he had with people, like a digital file folder on each relationship. There are of course plenty of examples of how technology augments the human experience. How it builds relationships and gives a voice to the voiceless and has opened new worlds of possibilities. 
I could (and often do) spend days talking about all the amazing things we can do today that we couldn’t 20 years ago. But, as I’ve argued before, there comes an inflection point where we all should think a bit more critically about the tools and toys we use and rely on. And for me, that day is here. Can you imagine a day where we’re connected to all the information in the world through smart glasses, a smartwatch, and our smartphone? Starting to sound a bit cyborg-ish to me! Did you enjoy this? Subscribe to my newsletter, Net IRL, a weekly roundup of some of the best stories about the impact technology and the Internet has on our everyday lives. I’m on Twitter @joelleeman. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. lifelong learner, connector and musician. first social, now digital strategy @thomsonreuters. into tech/media/life. 👨🏻‍💻🤷🏻‍♂️ Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity. " Scott Smith,83,8,https://medium.com/phase-change/your-temporary-instant-disposable-dreamhouse-for-the-weekend-12eb419ded0?source=tag_archive---------4----------------,Your Temporary Instant Disposable Dreamhouse for the Weekend,"Close colleagues of mine will tell you I have honed a particular obsession/crackpot theory over the past few years: that Airbnb has been gently A/B testing me in real life. Let me explain. I travel more than most humans should. As someone who runs their own company, and sometimes needs to spend more time in a location than is affordable via traditional hotel lodgings (such as with a recent relocation over the summer), I have made use of that darling of the sharing economy/scourge of communities (depending on which lens you look at it through), Airbnb, to stretch my budget, spend time closer to work, friends, clients, or just have company when traveling. I’ve stayed in over 30 properties, in something like eight countries, so I’ve had a lot of time to contemplate the company’s strategies from the inside. The semi-serious theory started during back-to-back stays in the UK several years ago. My first three night stay was in a London borough, in a fairly cozy house owned by a couple with a toddler. It was comfortable enough, though a bit chilly in both bedroom and shared bath. The interior design wasn’t miles off my tastes, but it didn’t push any buttons of joy either, mostly catalog-standard late 20th century British home store. I never even sat down on the ground floor. The bits of media I saw around the house were mildly interesting, if predictable, but not must-reads or binge-viewable. I wasn’t really allowed in the kitchen, which was reserved for use by the family only. The wife of the couple has formerly worked in media on a cooking show, the husband in finance. I hardly saw either of them, as they made themselves scarce. After the check-in, I didn’t have much interaction with the hosts until leaving, and they weren’t interested in any to be honest. It was strictly a transactional stay. Their child was probably cute, but fussed far too much to get a close look—it was mostly an unhappy sound coming from the kitchen or bedroom. Fair enough. I stayed three days, I paid, I chatted briefly and left, and left a weakly positive review after. I had no real complaints, but probably wouldn’t look for it again. From London, I moved down to the south coast for work (I’m being vague to protect the hosts mentioned herein). 
I found the place, also an attached house in a row dating probably from the Edwardian period. The host couple met me in the front hall, ushered me in, sat me down in the lounge to relax, and I was immediately offered a warm, fresh-baked cupcake and a glass of wine as I slid back into a nice leather sofa. As the husband, who worked in the trendy area of “fintech,” asked me about my work—and seemed to understand what I do—my eyes scanned the groaning bookshelves across from me. “Have that, want to read that, ohhh, that’s a good one, must remember to look at that,” I recall thinking. We had so much in common. The wife, just finishing up a new round of baking for one of her side businesses, shouted a welcome and told me to feel free to use the house as my own, listing the tasty goods available for breakfast the next day as she joined our conversation with the couple’s very adorable son, who poked at my shoes engagingly, and seemed to pay close attention to my voice. What followed was an interesting chat about culture, technology and cooking, before I went up to my very warm, comfortable, private room, past the amazing folk art, highly listenable CD collection and private bath with want-able Scandinavian textiles. And then it hit me. The principal actors and scripts of these two Airbnb plays were roughly the same. Same family configurations, professions and ages, same general houses, same price per night within a few pounds, same availability. Except, when contrasting the two, one was so comfortable, personally interesting and engaging, I wanted to stay an extra week, while the other almost hurried me on my way. One I was happy to pay to stay in, one I felt vaguely grudging about in retrospect. One could have been my alternate media collection and wine store, one missed the mark on general user experience for me. I quietly locked the door to my room, logged onto the fast broadband (quite slow and choppy at House #1) and opened my Amazon profile just to see what I’d been looking at lately. As I lay in bed the first night, breathing in the rich cake scent still hanging in the air, I thought about whether Airbnb had somehow tapped into my online searches and purchases. After all, this is the age of convergent Big Data and powerful retail analytics. Without having seen really any of the home contents at either place, or anything useful about the hosts from the Airbnb listings, I’d ended up in two very similar, yet weirdly different, residences. One where even the conversation with the hosts was familiar and relevant, the other where it just didn’t read. Back to back. Easy to compare. Was the child even real, or just part of the test? In a period when both home staging and immersive theatre are hot, why couldn’t it happen, I thought? And with same-day delivery services breaking out all over, couldn’t a set of highly personalized home contents—chosen to be both familiar and aspirational (after all, you want to leave space for potential purchases to help fund this business model)—have been plucked from a regional depot, popped onto shelves and into cabinets, and organized for my arrival? Couldn’t some actors in search of work in London have been briefed up enough from open source material to interact with me for an hour or so? Couldn’t they? Couldn’t they? I’d been on the road for a while, and fatigue was starting to set in. Maybe it was affecting my head. That was two years ago. It had been in the back of my mind since. And then. 
This past summer, I had a similar experience, only with my whole family while mid-relocation to the Netherlands. Again, similar homes, same family demographics, both away on holiday this time (it’s tough to get small children to follow a script, right?), one house comfortable enough in a suburban town, the other a charming place in a gentrifying neighborhood worth squatting in, in hopes the owners didn’t return (jk, Airbnb, jk). Was I optimizing my own stays, or were they feeding me more appropriate properties in hopes of making this testing easier? Hotels have tested such things, so why not the hotel-killer itself? They even left the same bread for us as a welcome basket. One white, one whole grain. After all, Airbnb has deployed Aerosolve, its own machine learning platform, to make sense of real-time usage data and help hosts get a better return. Tuning properties for desirability is feasible—the company is already using automated scanning of house photos to optimize presentation of properties as well. With all of this technology aimed at the properties themselves, why wouldn’t Airbnb also dig into the minds of guests, find out how they respond to different houses, which conveniences they’re drawn to, etc.? Nah, that would take sensors inside a house, on top of crack Web and mobile analytics. You’d need to know what people do during their stay. And as I’m sitting there, thinking again about this crazy idea, I see a tweet go by: Airbnb has purchased...an obscure Russian sensor company. I slammed the laptop and checked the cabinets for tin foil. A month or so goes by. I forget about it again. Then I open Medium and see a story about how Airbnb has mocked up parts of its own headquarters based on the apartment design of a French couple who use the service to let their own flat. The couple is now suing the company. “They are branding their company with our life,” owner Benjamin Dewé told Buzzfeed. The company has apparently copied a range of style elements from the French couple’s home in its own San Francisco offices. Down to the doodles on the chalkboard. The doodles. As Jamie Lauren Keiles demonstrated in the Medium piece above, it’s pretty easy to break those furnishings and accessories down into a shoppable list, one with goods obtained on Amazon or elsewhere. Like those magazine features that show how to buy knock-offs of celebrity fashion, complete with prices and shops, a family’s flat (admittedly one they rented out via Airbnb, including to Airbnb for a function) has been commodified into a shopping list. Buy that lifestyle right here. Better yet, live in it for a few days. Only, with the convergence of Big Data, analytics (including visual analysis tools which can look for the presence of brands in social media photos), machine learning and accessible APIs of companies like Amazon, and breakneck logistics Uber-style (or even predictive shipping, per the notorious Amazon patent), fabbing up a home interior to suit your tastes (or tastes that are forming, but haven’t fully emerged yet) is within reach of today’s technology. Hell, even that cute Roomba you had to have may be quietly mapping the place where you live. This will be available in knock-off home robots soon. Have you checked the user agreements of your various home appliances and systems to see if they can sell the data? Probably not. And why not tap that stock of underused homes, and underemployed people? 
If there’s one thing the sharing economy overlords have taught us, it’s that the world is just a collection of undermonetized assets waiting to be redistributed, right? Why not productize, commodify and populate that second-to-last frontier, our living spaces? And staying in someone else’s place with someone else’s stuff you fancied from the pictures is tired. Everything else is personalized, financialized and productized. Why even own your own stuff when it could be Ubered into position in a desirable location based on your most recent Pinterest saves? Think about it. With a bundled DreamHome™ service, you can perpetually test drive that new living room suite for long holiday weekends—I mean, why wait until after purchasing for buyer’s remorse to set in? You can get it out of the way, without the financial commitment. Just your desires, played forward all the time. You can even test roommates or neighbors for the weekend. Why stop at furnishings and paint colors? Slap those detailed sentiment analyses and personality analytics gleaned from your prospective co-habitant’s online activities, eye-tracking history, Tinder preferences and 23andMe profile onto a few improv actors and have some Big Data cosplay in a pop-up maisonette. Come Monday morning, you can just walk out the front door, with nothing but a premium fee to pay, a fee which may itself be subsidized by various sponsors who want to test products on you. Don’t worry, it’s cool. Duralux, Crate & Barrel and LinkedIn picked up the tab for this getaway in the woods or at the beach with new friends. Sound good? Of course it does. We knew you would like it. Check your email. Your Temporary Instant Disposable Dreamhouse for the Weekend may be waiting. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Futures, post-normal innovation, strategic design. http://changeist.com Essays, Observations and Speculations from the Changeist Lab " iDanScott,3,4,https://medium.com/@iDanScott/the-bejeweled-solver-3cd07c69dfc4?source=tag_archive---------5----------------,C# Plays Bejeweled Blitz – iDanScott – Medium,"As some of you reading this may or may not already know, over the past day or so I went from having the idea of creating a computer program that would essentially be able to play the popular arcade game Bejeweled Blitz on Facebook, to actually developing it. Now as hard as this problem sounds, it was surprisingly easy and fairly swift to solve. I broke it down into 3 main steps: The first step was probably the most time consuming of them all, as everything from there was just colour management. The solution I came up with in the end was to take a screenshot of the entire screen, and then scan the image from top to bottom using a nested for loop until I found a funny shade of brown that only appears along the top edge of the Bejeweled grid (for anyone wondering, that colour is Color.FromArgb(255, 39, 19, 5)). Once this colour had been found using the bitmap.GetPixel(x, y) function, I broke out of both for loops and knew that was the point where the top left corner of the grid was. I could then use this to construct a rectangle which would extract the Bejeweled grid from the full screenshot. The size of the rectangle was calculated using the size of the grid cells (40px square; found that out using trusty old Paint) multiplied by the number of rows/columns there were (8; found that out using my eyeballs). This resulted in the Rectangle coming out at 320px by 320px. 
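To make that first step concrete, here is a rough, simplified sketch of what the screenshot-and-scan approach can look like in C#. The class and method names below are purely illustrative (they are not taken from the GitHub project, which almost certainly organises this differently), and the bounds check is an extra safety net rather than part of the original description.

```csharp
// Illustrative sketch only: names and structure here are not from the actual project.
using System.Drawing;         // Bitmap, Graphics, Color, Rectangle
using System.Windows.Forms;   // Screen (needs a reference to System.Windows.Forms)

static class GridLocator
{
    // The shade of brown that only appears along the top edge of the board.
    static readonly Color Marker = Color.FromArgb(255, 39, 19, 5);

    const int CellSize = 40;                    // each gem cell is 40px square
    const int GridCells = 8;                    // 8 rows by 8 columns
    const int GridSize = CellSize * GridCells;  // 320px

    // Captures the screen, scans it top to bottom for the marker colour and
    // returns a bitmap of just the 320x320 board, or null if no marker is found.
    public static Bitmap CaptureBoard()
    {
        Rectangle bounds = Screen.PrimaryScreen.Bounds;
        using (var screenshot = new Bitmap(bounds.Width, bounds.Height))
        {
            using (var g = Graphics.FromImage(screenshot))
                g.CopyFromScreen(0, 0, 0, 0, screenshot.Size);

            // Stop the scan GridSize short of the edges so the crop below stays in bounds.
            for (int y = 0; y < screenshot.Height - GridSize; y++)
                for (int x = 0; x < screenshot.Width - GridSize; x++)
                    if (screenshot.GetPixel(x, y).ToArgb() == Marker.ToArgb())
                    {
                        // Treat the first matching pixel as the top-left corner of the grid.
                        var grid = new Rectangle(x, y, GridSize, GridSize);
                        return screenshot.Clone(grid, screenshot.PixelFormat);
                    }
        }
        return null;
    }
}
```

From there, CaptureBoard() hands the rest of the pipeline a clean 8x8 board image to sample colours from. (GetPixel is slow on a full-screen bitmap; LockBits would be quicker, but GetPixel matches the approach described here.)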
So the next step from here was to identify what colour resides in what square. To do that I started off by creating a 2 dimensional array of colours (or Colors, to be politically correct) that was 8 rows and 8 columns to match that of the playable grid. I then systematically looped through the 2 dimensional array of colours in a nested for loop of x and y values, assigning each element the colour of the pixel at the location ((x * 40) + 20, (y * 40) + 22). The x offset of 20 was chosen because it is halfway across the gem, and 22 was chosen for the y offset because certain gems (green and yellow) have a white center, so 22 provided a more accurate reading. With this 2 dimensional array I was then able to generate a visual representation of what the computer was seeing when it was trying to figure out what colour was where. As you can see from the above screenshot, it’s able to identify what gem is what colour depending on what pixel is at that magic (20, 22) point of the cell. Another thing I thought about before I finished this project to the state it’s in now was how to prevent the application from trying to switch 2 empty cells (because one gem has just been blown up or something). To handle that, I added all the known colour codes to their own array and check whether the colour in the 2d array also resides within the known colours list; if it does, the solver will then evaluate whether it can be moved to a winning square, and if not it’s ignored entirely. I won’t bore you with the gory details of how I check whether a gem can be moved; instead, here is a link to the beginning of the if statement in my open-source GitHub project. From here the full source code can be viewed, commented on and even improved upon if you guys feel like I could do something obviously better. Finally, all that’s left for this application to do is actually move the gems. This is done by making some Windows API calls to set the mouse location and simulate mouse clicks. Again, the details of exactly how to do that are within the GitHub project, but if I’ve kept your attention for this long all that’s left to say is thank you, and if you have any further questions don’t hesitate to hit me up on here or on Twitter @iDanScott. Thanks for reading. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Dan Scott, 23. Computer Science Student of Plymouth University www.idanscott.co.uk " Josh,18,6,https://medium.com/@joshdotai/9-reasons-why-now-is-the-time-for-artificial-intelligence-876b3def0fee?source=tag_archive---------6----------------,9 Reasons Why Now is the Time for Artificial Intelligence,"There’s no denying it — Artificial Intelligence is happening and it’s happening big. Companies from Facebook to Google to Amazon are hard at work building world-class AI teams that infiltrate every facet of their products. Siri is one of the largest teams at Apple, and Microsoft has a growing research effort on this front. But why is now the time for AI? 1. Artificial Neural Networks Traditional programming is deterministic, sequential, and logical. For example, computers take inputs, apply instructions, and generate outputs. This is great for tasks like calculations and conversions, but ill-suited if the application isn’t explicitly defined. The human brain, on the other hand, doesn’t behave this way. We learn and grow through repetition and education. Recent progress in artificial neural networks (ANNs) is key to building computers that can think. These breakthroughs are enabling tremendous strides in AI work at Google and Apple. 2. 
Knowledge Graph Companies like Yelp, Foursquare, and Wolfram Alpha have enabled access to their data through APIs. As a result, platforms like Siri and Google Now are able to answer questions such as “What’s the closest coffeeshop?” or “What’s the population of India?” If a new service had to handle the natural language processing (NLP), audio processing, data, and more, it would be nearly impossible. Fortunately, the knowledge graph has evolved over the last 20 years to a point where new AI platforms can immediately have access to tons of data. 3. Natural Language Processing NLP is a field of computer science and linguistics where computers attempt to derive meaning from human or natural input. While the field has been around since the 1950s, we’ve seen huge strides in the last few years thanks to Markov models and n-gram models as well as projects like CALO and WordNet. Stanford’s CoreNLP (demo here) is one of the many strong NLP solutions available today: 4. Speech Processing In order to speak to a computer and have it understand our intent, we first need to handle the audio processing and convert sound waves to text. Known as speech processing, this field has seen major advancements in the last few years. Beyond the advancements in technology, we’ve seen companies like Nuance emerge with powerful APIs that power services like GPS, dictation, and more. Today, it is almost effortless for a new AI company to translate voice to text with a high degree of confidence. 5. Computational Power The increase in computational efficiency over the last 17 years has been remarkable. In 2014, people could buy a video card with 84.3 times the performance of one from 2004 for the same price. This increase in computational power is necessary if we want to emulate the brain. For example, research attempting to simulate 1 second of human brain activity required 82,944 processors supporting 1.73 billion artificial neurons connected by 10.4 trillion synapses. The decrease in cost and increase in computational power is enabling tremendous breakthroughs in AI today. 6. Consumer Acceptance A big aspect of seeing mass adoption around artificial intelligence is consumer approval. With an initial push from Apple to highlight Siri, and now Microsoft’s Cortana and Google Now doing the same, smartphone owners have access to an AI whether they like it or not. As a result, consumers are coming around to the idea and even starting to embrace it. Funny videos like this one are helping the masses to accept this new human-computer interaction: 7. Ubiquity of Personal Computing Conversing with an AI is a very personal experience. The emergence of smaller, always-on devices makes this possible. The iPhone was first introduced in 2007, only 8 years ago. Now, more than 64% of Americans own a smartphone. Wearables, such as the Apple Watch or Jawbone, open the possibility of even more intimate personal computing. These devices that we carry or wear serve as excellent hosts for this technology, making it possible for AI to truly enter the mainstream for the first time. 8. Funding AI funding seems to go through waves, and in the last few years it’s definitely back up. Scaled Inference, a predictive AI company, recently raised $13.6M. Amazon just announced a $100M fund for voice-controlled technologies, and IBM did the same for the Watson Venture Fund. The total invested in AI companies in 2014 grew past $300M from a mere $14.9M in 2010, according to Bloomberg. 
With firms like Khosla Ventures and Andreessen Horowitz leading deals in AI companies, funding is fueling innovation in AI. 9. Research Efforts Another reason for the apparent surge in AI is the collective research efforts taking place. According to a 2014 report by MIRI (Machine Intelligence Research Institute), 41 of the top 275 CS conferences are AI-related. AI accounts for about 10% of all CS research today. The IEEE Computational Intelligence Society has more than 7,000 members and there are more than 106 AI journals. Based on MIRI estimates, more than $50M in National Science Foundation (NSF) funding went into AI research in 2011. With this much research and effort going into AI innovation, it’s no wonder we’re seeing this technology starting to reach the masses. If history is an indicator, we may see interest in AI spike and go back down. With momentum across these various sectors, though, AI interest seems likely to keep growing. If you’re interested in keeping up with our efforts and staying in touch, check out http://josh.ai and reach out! This post was written by Alex at Josh.ai. Previously, Alex was a research scientist for NASA, Sandia National Lab, and the Naval Research Lab. Before that, Alex worked at Fisker Automotive and founded At The Pool and Yeti. Alex has an engineering degree from UCLA, lives in Los Angeles, and likes to tweet about Artificial Intelligence and Design. Josh is an AI agent for your home. If you’re interested in following Josh and getting early access to the beta, enter your email at https://josh.ai. Like Josh on Facebook — http://facebook.com/joshdotai Follow Josh on Twitter — http://twitter.com/joshdotai From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. " paulson,1,17,https://electricliterature.com/what-could-happen-if-we-did-things-right-an-interview-with-kim-stanley-robinson-author-of-aurora-d88a0f8f72e7?source=tag_archive---------7----------------,"What Could Happen If We Did Things Right: An Interview With Kim Stanley Robinson, Author Of Aurora","Is Kim Stanley Robinson our greatest political writer? That was the provocative question posed recently by a critic in The New Yorker. Science fiction writers rarely get that kind of serious attention, but Robinson’s visionary experiments in imagining a more just society have always been part of his fictional universe. In fact, he got his Ph.D. in English studying under the renowned Marxist theorist Fredric Jameson. The idea of utopia may seem discredited in today’s world, but not to Robinson. He believes we need more utopian thinking to create a better future. And the future is where he takes us in his new novel Aurora. Set in the 26th century, it’s the story of a space voyage to colonize planets outside our Solar System. Robinson writes in the tradition of “hard science fiction,” using only existing or plausible technology for his interstellar journey. As much as he geeks out on the mechanics of space travel, his real interest is how people would handle a very long voyage trapped inside a starship. His futuristic themes won’t surprise longtime fans of Robinson, who’s best known for his Mars trilogy, published in the 1990s. To read KSR is to wonder how our species might survive and even thrive in the centuries ahead. The author stopped by my radio studio before giving the keynote speech at a local science fiction conference. We talked about the existential angst of life on a starship, the future of artificial intelligence and the aesthetics of space travel. 
Our conversation will air on Public Radio International’s To the Best of Our Knowledge. You can subscribe to the TTBOOK podcast here. Steve Paulson: How would you describe the story in Aurora? Kim Stanley Robinson: It’s the story of humanity trying to go to other star systems. This may be an ancient idea, but for sure it’s a 19th century idea. The Russian space scientist Tsiolkovsky said Earth is humanity’s cradle but you’re not meant to stay in your cradle forever. This idea has been part of science fiction ever since — that humanity will spread through the stars, or at least through this galaxy. SP: It’s a long way to travel to another star. KSR: It is a long way. And the idea of going to the stars is getting not easier, but more difficult. So I decided to explore the difficulties. I tried to think about whether it’s really possible at all, or if we’re condemned — if you want to put it that way — to stay in this Solar System. SP: What star are your space voyagers trying to get to? KSR: Tau Ceti, which has often been the destination for science fiction voyagers. Ursula Le Guin’s Dispossessed takes place around Tau Ceti, and so does Isaac Asimov’s The Naked Sun. It’s about 12 light-years away. We now know it has three or four big planets the size of a small Neptune or a large Earth. They’ve got the mass of about five Earths. That’s too heavy for humans to be on, but those planets could have moons about the size of Earth. So it becomes the nearest viable target. Alpha Centauri, which is just four light-years away, only has tiny planets that are closer than Mercury is to our sun, so they won’t be habitable. SP: Your story is set 500 years into the future. It takes a long time to get to this star. KSR: Yes. My working principle was, what would it really be like? So no hyperspace, no warp drive, no magical thing about what isn’t really going to happen to get us there. That means sub-lightyear speeds. So I postulated that we could get spaceships going to about one-tenth the speed of light, which is extraordinarily fast. Then the problem becomes slowing down. You have to carry enough fuel to slow yourself down if you’ve accelerated to that kind of speed. The mass of the decelerant fuel will be about 90% of the weight of your ship. As you’re approaching your target, you have to get back down to the speed at which you can orbit your destination. The physics of this is a huge problem. SP: You’re talking about a multi-generational voyage that will take a couple hundred years. That’s a fascinating idea. The people who start out will be dead by the time the starship gets there. KSR: I guessed it would take four or five generations — say, 200 years. This is not my original idea. The multi-generational starship is an old science fiction idea started by Robert Heinlein and there may even be earlier precursors. One always finds forgotten precursors for every science fiction idea. Heinlein wrote Universe around 1940, Brian Aldiss wrote a book called Starship in 1958, and Gene Wolfe wrote a very great starship narrative in the 1990s, The Book of the Long Sun. So it’s not an original idea to me; it’s sort of a sub-genre within science fiction. SP: But the whole idea of a project that takes generations is something we don’t do anymore. People did that when they built the pyramids in Egypt or the great cathedrals in Europe. I can’t think of a current project that will take generations to complete. KSR: You really have to think of it as a mobile island or a vast zoo. 
It isn’t even a project so much as a city that you’ve shot off into space, and when the city gets to its destination, the people unpack themselves into the new place. You’re right, it could be compared to building the cathedrals. And it’s interesting to think about the people born on the starship who didn’t make the choice to be there. So it turned into a bit of a prison novel. SP: Because you’re trapped there. You’re in this confined space for your whole life. KSR: And for two or three generations, you’re born on the ship and you die on the ship. You’re just in between the stars. So it’s very existential. There are some wonderful thought stimulants to thinking about a starship as a closed ecology. SP: How big is the starship in your story? KSR: There’s something like a hundred kilometers of interior space. SP: So this is big! KSR: Yeah, two rings. You could imagine them as cylinders that have been linked until they make a circle, so twelve cylinders per circle. You’ve got 24 cylinders and each has a different Earth ecology in it and each one of them is about five kilometers long. It’s pretty big, but you need that much space to be viable at all because you have to take along a Noah’s Ark worth of genetic material, or else it isn’t going to work. SP: What do you have to bring along? KSR: You would want as much of everything as you can bring, but you certainly need a big bacterial load. You need to bring along a lot of soil. You need a lot of what would be effectively unidentified bacteria; you just need a big hunk of earth. And then all the animals that you can fit that would survive. Each one of these cylinders would be like a little zoo or aviary. SP: As you were imagining this voyage, which part was most interesting to you? Was it the science — trying to figure out technically how we could get there? Or was it the personal dynamics of how people would get along when they’re trapped in space for so long? KSR: I think it would be the latter. I’m an English major. The wing of science fiction that’s discussed this idea has been the physics guys, the hard SF guys. They’ve been concerned with propulsion, navigation, with slowing down, with all the things you would use physics to comprehend. But I’ve been thinking about the problem ecologically, sociologically, psychologically. These elements haven’t been fully explored and you get a new story when you explore them. It’s a rather awful story, which leads to some peculiar narrative choices. SP: Why is it awful? KSR: Because they’re trapped and the spaceship is a trillion times smaller than Earth’s surface. Even though it’s big, it’s small. And we didn’t evolve to live in one of these things. It’s like you spend your whole life in a Motel Six. SP: Put that way, it does sound pretty awful. KSR: Better than a prison, but you can’t get out. You can’t choose to do something else. I don’t think we’re meant for that even though we live in rooms all the time in modern society. I think the reason people volunteer for things like Mars One is they’re thinking, “How is that different from my ordinary life? I sit in a room in front of my laptop all day long. If I’m going to Mars, it’s more interesting.” SP: Mars One is the project that’s trying to engineer one-way trips to Mars. You know you’re not going to come back. Frankly, it sounds like a suicide mission, and yet tens of thousands of people have signed up for this mission. KSR: Yes, but they’ve made a category error. Their imaginations have not managed to catch up to the situation. 
They are in some kind of boring life and they want excitement. Maybe they’re young, maybe they’re worried about their economic prospects, maybe they want something different. They imagine it would be exciting if they got to Mars. But it was Ralph Waldo Emerson who said travel is stupid; wherever you go, you’re still stuck with yourself. I went to the South Pole once. I was only there for a week and it was the most boring place in Antarctica because we couldn’t really leave the rooms without getting into space suits. SP: Is extended space travel like going to Antarctica? KSR: It’s the best analogy you can get, especially for Mars. You would get to a landscape that’s beautiful and sublime and scientifically interesting and mind-boggling. Antarctica is all those things and so would Mars be. But I notice that nobody in the United States cares about what the Antarcticans are doing every November and December. There are a couple thousand people down there having a blast. If the same thing happened on Mars, it would be like, “Oh, cool. Some scientists are doing cool things,” but then you go back to your real life and you don’t care. SP: So even though you write about these long space voyages, you wouldn’t want to be part of one? KSR: Not at all. But I’ve only written about long space voyages once — in this book, Aurora. SP: You also wrote a whole series of books about Mars. You still have to get there. KSR: But there’s an important distinction. You can get to Mars in a year’s travel and then live there your whole life. And you’re on a planet, which has gravity and landscape. You can terraform it. It’s like a gardening project or building a cathedral. I think terraforming Mars is viable. Going to the stars, however, is completely different because you would be traveling in a spaceship for several generations where you’re in a room, not on a planet. It’s been such a techie thing in science fiction. But people haven’t de-stranded those two ideas. They said, “Well, if we can go to Mars, we can go to Tau Ceti.” It doesn’t follow. It’s not the same kind of effort. SP: Would it be interesting to travel just through our own Solar System? KSR: Yes, this Solar System is our neighborhood. We can get around it in human time scales. We can visit the moons of Saturn. We can visit Triton, the moon of Neptune. There are hundreds of thousands of asteroids on which we could set up bases. The moons of all the big planets are great. The four big moons of Jupiter — we couldn’t be on Io because it’s too radioactive or too impacted by the radio waves of Jupiter itself — but by and large, the Solar System is fascinating. SP: Yet I imagine a lot of people would say, “Yeah, there’s a lot of cool stuff out there, but it’s all dead.” KSR: Well, we have questions about Mars, Europa, Ganymede and Enceladus, a moon of Saturn. Wherever there’s liquid water in the Solar System, it might be dead or alive. It might be bacterially alive. It might have life that started independently. It might be cousin life that was blasted off of Mars on meteorites and landed on Earth and other places. We don’t know yet. And if it is dead, it’s still beautiful and interesting, so these would be sites of scientific interest. Antarctica is pretty dead, but we still go there. SP: I’ve heard it’s incredibly beautiful. KSR: It’s very beautiful. I think if you’re standing on the surface of Europa, looking around the ice-scape and looking up at Saturn in the sky overhead, it’s also going to be beautiful. 
I’m not sure if it’s beautiful enough to drive a gigantic effort to get there. The robots going there now are already a tremendous exploration for humanity. The photos sent back to us are a gigantic gift and a beautiful thing to look at. So humans going there will always be a kind of research project that a few scientists do. I’m not saying that the rest of the Solar System is crucial to us. I think Earth is the one and only crucial place for humanity. It will always be our only home. SP: I wonder if we would develop a different sense of beauty if we went out into the Solar System. When we think of natural beauty, we tend to think of gorgeous landscapes like mountains or deserts. But out in the Solar System, on another planet or a moon, would our experience of awe and wonder be different? KSR: You can go back to the 18th century when mountains were not regarded as beautiful. Edmund Burke and the other philosophers talked about the sublime. So the beautiful has to do with shapeliness and symmetry and with the human face and figure. Through the Middle Ages, mountains were seen as horrible wastelands where God had forgotten what to do. Then in the Romantic period, they became sublime, where you have not quite beauty but a combination of beauty and terror. Your senses are telling you, “This is dangerous,” and your rational mind is saying, “No, I’m on a ledge, but I’ve got a railing. It looks dangerous, but it’s not.” You get this thrilling sensation that is not beauty but is the sublime. The Solar System is a very sublime place. SP: Because you could die at any moment if your oxygen support system goes out. KSR: Exactly. It’s like being in a submarine or even in scuba gear — the feeling of being meters under the surface, with a machine keeping you alive and bubbles going up, as you’re looking at a coral reef. That’s sublimity. There’s an element of terror that’s suppressed because your rational mind is saying it’s okay. When you fly in an airplane and look down 30,000 feet to the surface of the earth, that’s the feeling of the sublime, even if you’re looking down at a beautiful landscape. But people can’t bear to look because after a while you’re thinking, “Boy, this machine sure has to work.” SP: If you think long and hard about this... KSR: You might never fly again. SP: One thing that’s so interesting about your novel Aurora is that most of it is narrated by the ship itself. What was the idea here? KSR: I do like the idea that my narrators are also characters, that they’re not me. I’m not interested in myself. I like to tell other people’s stories, so I don’t do memoir. I do novels. And for three or four novels now, it’s been an important game to me to imagine the narrators’ voices being different from mine. So Shaman’s was the Third Wind, this mystical spirit that knew the Paleolithic inside and out. That wasn’t me. And Cartophilus, the time traveler, tells Galileo’s story. In Aurora, it made sense for the ship to need really powerful artificial intelligence, like a quantum computer. And once you get to quantum computers, you’ve got processing speeds that are equal to the processing speeds of human brains. But the methodologies would be completely different. They’d be algorithms that we programmed. Maybe it wouldn’t have consciousness, but when you get that much processing speed, who’s to say what consciousness really is? So I made the narrator out of this starship’s AI system. And he — she, it — has been instructed by the chief engineer to keep a narrative account of the voyage. 
When you think about it, writing novels is strange. We can tell most stories to each other in about 500 words, so a novel is not a natural act. It’s an art form that’s been built up over centuries and doesn’t have a good algorithm. SP: I recently interviewed Stephen Wolfram, the computer theorist and software developer, and asked if he thought some future computer could write a great novel. He said yes. KSR: Wolfram’s very important in theorizing what computers can do because he’s made a breakdown of activities from the simple to the complex. And at full complexity, the human brain or any other thinking machine that can get to that fourth level of complexity should be able to do it. SP: So in the future, you think a computer or artificial intelligence system could write a modern “Ulysses”? KSR: Well, this is an interesting question. At that point you would need a quantum computer. It would need to read a whole bunch of novels and try to abstract the rules of storytelling and then give it a shot. In my novel, the first chapter the computer writes is 18th century literature. It’s what we would call “camera-eye point of view.” It doesn’t guess what people are thinking; how can it? It just reports what it sees like a Hemingway short story. As the novel goes on, chapter by chapter, the computer is recapitulating the history of the novel, and by the end of the last chapter narrated by the computer, you’re getting full-on stream of consciousness. It’s kind of like Ulysses or Virginia Woolf where you’re inside the mind, although it’s the mind of the computer itself. The last chapter is in a kind of “flow state” of the computer’s thinking. SP: At that point, does the computer have emotions? KSR: It wonders about that. The computer can’t be sure. Actually, we’re all trapped in our own consciousness. What are other people thinking? What are other people feeling? You have to work by analogy to your own internal states. The computer only has access to its own internal states. SP: Does the future of AI and technology more generally excite you? KSR: Yes, AI in particular. I used to scoff at it. I’m a recent convert to the idea that AI computing is interesting. Mainly, it’s just an adding machine that can go really, really fast. There are no internal states. They’re not thinking. However, quantum computers push it to a new level. It isn’t clear yet that we can actually make quantum computers, so this is the speculative part. It might be science fiction that completely falls apart. There was science fiction about easy space travel, but that’s not going to work. There was science fiction about all of us living 10,000 years. That might or might not work, but it’s way speculative. Quantum computing is still in that category because you get all the weirdness of quantum mechanics. There are certain algorithms that might take a classical computer 20 billion years, while a quantum computer would take 20 minutes. But those are for very particular tasks, like factoring a thousand-digit number. We don’t know yet whether more complex tasks will be something that a quantum computer can handle better than a regular computer. But the potential for stupendous processing power, like a human brain’s processing power, seems to be there. SP: As a science fiction writer, do you have a particular mission to imagine what our future might be like? Is that part of your job? KSR: Yes, I think that’s central to the job. What science fiction is good at is doing scenarios. 
Science fiction may never predict what is really going to happen in the future because that’s too hard. Strange things, contingent things happen that can’t be predicted, but we can see trajectories. And at this moment, we can see futures that are complete catastrophes where we cause a mass extinction event, we cook the planet, 90% of humanity dies because we run out of food or we think we’re going to run out of food and then we fight over it. In other words, complete catastrophe. On the other hand, there’s another scenario where we get hold of our technologies, our social systems and our sense of law and justice and we make a kind of utopia — a positive future where we’re sustainable over the long haul. We could live on Earth in a permaculture that’s beautiful. From this moment in history, both scenarios are completely conceivable. SP: Yet if we look at popular culture, dystopian and apocalyptic stories are everywhere. We don’t see many positive visions of the future. KSR: I’ve always been involved with the positive visions of the future, so I would stubbornly insist that science fiction in general, and my work in particular, is about what could happen if we did things right. But right now, dystopia is big. It’s good for movies because there are a lot of car crashes and things blowing up. SP: Is it a problem that we have so many negative visions of the future? KSR: Dystopias express our fears and utopias express our hopes. Fear is a very intense and dramatic emotion. Hope is more fragile, but it’s very stubborn and persistent. Hope is inherent in us getting up and eating breakfast every day. In the 1950s young people were thinking, “I’m going to live on the moon. I will go to Neptune.” Today it’s The Hunger Games, which is a very important science fiction story. I like that it’s science fiction, not fantasy. It’s not Lord of the Rings or Harry Potter. It’s a very surrealistic and unsustainable future, but it’s a vision of the fears of young people. They’re pitting us against each other and we have to hang together because there’s a rich elite, an oligarchy, that’s simply eating our lives for their own entertainment. So there’s a profound psychological and emotional truth in The Hunger Games. There’s a feeling of fear and political apprehension that late global capitalism is not fair. My Mars books — although they’re not as famous and haven’t been turned into movies — are quite popular because they’re saying we could make a decent and beautiful civilization. I’ve been noticing with great pleasure that my Mars trilogy is selling better now than it ever has. SP: Does our society need positive visions of the future? Do we need people to create scenarios of how things could go well? KSR: Oh, yes. Ever since Thomas More’s Utopia, we’ve always had it. Edward Bellamy wrote a book called Looking Backward: 2000–1887. The progressive political movement that changed things around the time of Teddy Roosevelt came out of this novel. When people had to reconstruct the world’s social order after World War II, they turned to H.G. Wells and A Modern Utopia and Men Like Gods. We always need utopias. These days, people are fascinated by Steve Jobs or Bill Gates. It’s like those geeky 1950s science fiction stories where a kid in his backyard makes a rocket that goes to the moon. Now it’s in his garage, where he makes a computer that changes everything. We love these stories because they’re hopeful and they suggest that we could seize history and change it for the better. 
If science fiction doesn’t provide those stories, people find them somewhere else. So Steve Jobs is a science fiction story we want. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Expanding the influence of literature in popular culture. " Christopher Wolf Nordlinger,8,6,https://medium.com/@chrisnordlinger/the-internet-of-things-and-the-operating-room-of-the-future-8999a143d7b1?source=tag_archive---------8----------------,The Internet of Things and the Operating Room of the Future,"The doctor stands over the patient on the operating room table. It can be dizzying to look around at the dozen or more video screens dedicated to standalone medical devices and not think that the Internet of Things (IoT) could radically simplify the complexities of managing so many systems. In the process, digital health could enormously improve patient care. At the same time, hospitals struggle to constrain the rapidly-increasing costs of healthcare, yet with IoT investments they can reduce costs significantly. It’s not hard to see how the medical industry and, hospitals in particular, will represent a major component of the $19 Trillion Internet of Things market opportunity that Cisco predicts by 2020. Imagining its future in surgery alone is not some far-off idea. It already exists and it’s revolutionary due to a unique blend of IoT, big data, advanced analytics and smart medical devices. Here’s how the reality plays out in a leading example. Thousands of people suffer from heart arrhythmias caused by heart disease which show up as a flutter in the heartbeat that is highly disruptive and can cause potentially fatal strokes and heart attacks. There are a few pharmaceutical drugs that can mollify the symptoms but they do nothing to remove the dead tissue lesions in the heart that cause the underlying situation which is called atrial fibulation, or AFib for short. CardioThings (a made-up name to protect the company while under FDA approval review) is attacking this problem with ablation to remove the lesions by gently burning them out with a laser. This involves inserting a catheter into the heart to try to perform ablation to remove the AFib-causing lesions. Each device is hard-wired to a screen where streaming data from the end of the catheter display a view of the inside of the heart. But that’s not where the data stop between the heart and the monitors like many devices. CardioThings, a Silicon Valley startup, works with two real IoT powerhouses, PTC ThingWorx and another Silicon Valley startup, Glassbeam, to make something much more powerful possible. ThingWorx models the operation of the catheter so that it can send secure data to the cloud where it can be analyzed by Glassbeam. Glassbeam turns the unstructured data into structured data in the forms of readable reports that the device company can then use to improve doctors’ surgical performance. For CardioThings and other high-value asset manufacturers, this kind of data can also increase the uptime of their catheter device. Others can use IoT Analytics to increase the uptime of CAT-Scans and MRIs because the data can show when even the smallest part is showing signs of weakness or malfunction and enable a repair that keeps that equipment operating. How? Imagine CardioThings’s optical catheter, thin enough to fit comfortably through a vein, entering a heart and mapping it out to find the lesions responsible for the AFib. 
The surgeon is then able to frame the boundaries of the lesions on CardioThings's monitors to see which are dying and need to get burned out. The laser beam from the sensor-embedded catheter then cuts the lesions out and the patient is healed. What does this have to do with saving money for the hospital? High-value machines such as MRIs and CAT scanners cost millions. Downtime for them is not only very costly for the hospital that is not billing patients but also, more importantly, keeps patients from getting the best possible care. ThingWorx enables medical devices (Things, sensors modeled by ThingWorx to communicate as if they were the device) to talk to other Things in the cloud. Once the unstructured data is there, it can be combined and recombined by Glassbeam's analytics software to detect any abnormalities. For MRIs, CAT scanners and other devices, stopping small problems from becoming big problems that crash expensive, heavily-used equipment is the ultimate value of predictive maintenance. Hospitals are large places with many people and things moving about a great deal, and keeping track of assets ranging from MRI scanners to $60,000 beds is quite challenging. In the case of CardioThings above, the alliance of PTC ThingWorx and Glassbeam should make the medical industry and business decision makers globally take notice. Whether it's healthcare, agriculture, networking or manufacturing, higher utilization of equipment is absolutely essential to remaining competitive. In the case of the CardioThings catheter spitting out unstructured data, Chris Kuntz, Vice President, Ecosystem Programs of PTC ThingWorx, says, “imagine the cardiac data from that same procedure being combined and recombined with data from EKG machines, MRI machines, pharmaceutical research, personal medical record-keeping systems, blood monitors and hundreds of healthcare systems. This is how the Internet of Things drives a revolution in healthcare.” “Thanks to our partnership with ThingWorx,” Glassbeam CEO Puneet Pandit says, “we are able to capture that unstructured data off the catheter and create structured data that business decision makers at hospitals, the manufacturers and individual doctors can learn from.” Pandit adds, “As a result of the large amount of critical data coming from the catheter, you can answer many questions. How did the device perform? Under what circumstances? How long did the surgery take? Which surgeons did it most effectively? Who needs to be more formally trained?” As a result of this solution, training surgeons to use equipment better provides significantly improved outcomes for patients. And for hospitals dispensing critical care, no one has to wait any longer for the MRI to crash to know there was a problem. They can fix the smallest problem before it escalates. Letting the hospital know that a specific part is faulty by simply examining the unstructured data it sends out is the best example of the power of predictive maintenance. No one has to wait for the MRI to crash. Hospitals can enjoy huge savings through predictive maintenance on all their heavily-used, expensive equipment. Given concerns about privacy and safeguarding of material, it is essential to have a secure connectivity partner such as ThingWorx aboard. HIPAA is just the beginning of the scope of regulatory requirements that will need to be accommodated to operate successfully in the healthcare data space. Applied analytics available to doctors in real time reduces medical procedure risk and overall liability concerns. 
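The article describes the general pattern in words only: unstructured device telemetry is parsed into structured records in the cloud, then scanned for abnormalities so small faults can be fixed before expensive equipment fails. Purely as an illustration of that pattern, and not of the ThingWorx or Glassbeam products themselves (whose APIs are not shown in the article), here is a small hypothetical sketch; the log format, metric names and thresholds are invented for the example.

```python
# Hypothetical illustration of the unstructured-to-structured-to-anomaly
# pattern described above. Log format, field names and thresholds are
# invented; this is not ThingWorx or Glassbeam code.
import re
from datetime import datetime

LOG_PATTERN = re.compile(
    r'(?P<ts>\S+ \S+) device=(?P<device>\S+) (?P<metric>\w+)=(?P<value>[\d.]+)')

# Example alert thresholds for a couple of invented telemetry metrics.
THRESHOLDS = {'coil_temp_c': 45.0, 'helium_pressure_psi': 60.0}

def parse_line(line):
    """Turn one unstructured log line into a structured record (or None)."""
    m = LOG_PATTERN.match(line)
    if not m:
        return None
    return {
        'timestamp': datetime.strptime(m.group('ts'), '%Y-%m-%d %H:%M:%S'),
        'device': m.group('device'),
        'metric': m.group('metric'),
        'value': float(m.group('value')),
    }

def flag_anomalies(lines):
    """Yield structured records whose values breach a maintenance threshold."""
    for line in lines:
        record = parse_line(line)
        if record and record['value'] > THRESHOLDS.get(record['metric'], float('inf')):
            yield record

raw_logs = [
    '2016-03-01 09:15:02 device=mri-02 coil_temp_c=46.3',
    '2016-03-01 09:15:02 device=mri-02 helium_pressure_psi=58.1',
]
for alert in flag_anomalies(raw_logs):
    print('schedule maintenance:', alert)
```

In a real deployment the parsing, storage and analytics would of course be handled by the vendor platforms the article names; the sketch only shows why structuring the data first makes the anomaly check trivial.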
For hospitals to reduce costs and increase profitability, IoT will play an enormous role. For patients, it means their doctors will know so much more about treating them to ensure the best care after any procedure whether it’s a heart bypass, cancer surgery, heart transplant or a simple blood test. Jack Reader, Business Development Manager at ThingWorx (now at Verizon) says, “Imagine an operating room where there are just a few monitors and all the devices speak to each other and with thousands of medical systems within and beyond the walls of the hospitals. All of this innovation will exponentially increase insight and intelligence, reduce costs for the hospital and increase health outcomes”. The implications in terms of knowledge gained and positive health outcomes is so phenomenal that we almost can’t now imagine from this early stage in the IoT era all the possible sources nor all the insights that will be gained. However, the sooner IoT Analytics is adopted in the hospital, the sooner patients can expect better-run hospitals and healthier lives. This is only the beginning of a new era. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Ph.D. Fulbright Scholar. Storyteller. Communications Expert. Content maven. Formerly State Dept-Startups-Cisco & more. " Louis Rosenfeld,90,5,https://medium.com/@louisrosenfeld/everyday-ia-d7aa7be07717?source=tag_archive---------9----------------,Everyday IA – Louis Rosenfeld – Medium,"A few days ago, Cennydd Bowles gently trolled many of us thusly: As Cennydd has keynoted a past Information Architecture Summit, it’s hard to ignore his question. And Cennydd’s timing is quite interesting, given that tomorrow is World IA Day. The theme of this year’s WIAD is “architecting happiness”. And in this adorable little video that the IA Institute created to promote WIAD 2015, Abby Covert says that this theme was chosen “because of the rising amount of information that everyone has to deal with” (my italics): Cennydd, there’s your answer: if you’re a human in today’s developed world, where even physical objects and spaces are soaked in information, you are struggling to cope with and make sense of the stuff. Nearly all the time. And nearly everywhere. Information architecture problems are everyday human problems. So if you’re designing for humans today, you’ll need at least some information architecture skills in order to help them. Information architecture literacy is required for anyone who designs anything. So it’s not surprising that WIAD has exploded to 38 locations in 24 countries. It’s not surprising that Abby’s wonderful little book, How to Make Sense of Any Mess: Information Architecture for Everybody, has been such a hit. It’s not surprising to see the IA Summit entering its 16th year stronger than ever. It’s not surprising that the fourth edition of Information Architecture for the World Wide Web (due out later this year) is being recast as a book not for information architects, but for people who need to know something about information architecture. We’ve entered full-on mode of democratizing IA skills. Because... Information architecture literacy is required for anyone who designs anything. I’ll confess to having felt, like Cennydd, a bit disconnected from IA for the past few years. Partly because I’ve been investing almost every available moment of my waking hours into Rosenfeld Media. 
And partly because much of the IA community’s discussion has pushed far deeper into IA practice than my brain and attention span can manage. But I’m feeling better now, because I’m finding, in my own day-to-day work, that: Information architecture literacy is required for anyone who designs anything. For example, while I rarely work on web site IA much these days, I am absolutely absorbed in the information architecture of books. Want to know what value publishers can provide to authors in this age of self-publishing? The list might be longer than you imagined, but I think most Rosenfeld Media authors would agree: Lou and team pull them out of the weeds, and help them to step back and make sense of their content as an information system. Information architecture skills are an absolute necessity when it comes to framing, structuring, and establishing a flow for a book. (And not just for non-fiction; just ask JK Rowling.) I’m finding that IA literacy is also incredibly helpful in other areas, like event planning. I recently asked a couple dozen colleagues who produce events to provide share their advice on organizing a conference. Their responses were generous, useful, and wonderful. But the one I keep remembering most is Jeffrey Zeldman’s: Yes, I’m biased, but I hear Jeffrey singing a song of event IA. I’ve been singing it too. In putting together the first edition of the Enterprise UX conference (plug alert: San Antonio; May 13–15, 2015), I’ve been working with Dave Malouf and Uday Gajendar to create an information architecture for a conversation. In effect, we’re trying to structure the event’s program in a way that surfaces a latent conversation about enterprise UX that’s been happening in the UX community for quite some time. The event itself should simply serve as an opportunity to bring people together to sharpen and advance that conversation. I’m oversimplifying a bit, but we spent months designing our event IA around four carefully-sequenced themes, each in effect a curated mini-conference: 1) Insight at Scale; 2) Craft amid Complexity; 3) Enterprise Experimentation; and 4) Designing Organizational Culture. We see these as the main facets of the community’s conversation on enterprise UX. We’ll know we’ve been successful if, at the event, the conversation spills out of the auditorium and into the hallways and break areas, animating the words and faces of attendees. We’ll know we been really successful if these conversations riff off the themes already covered — meaning we got the sequence right. And we’ll know that we were really, really successful if these four themes keep the conversation moving forward — both after the event and as the IA for programs at future editions of the event. Books have an information architecture. Events have an information architecture. Pretty much anything we design — consciously or not — has an information architecture. So pardon me as I repeat: Information architecture literacy is required for anyone who designs anything. When I got my masters in information and library studies in 1990, our professors were preaching about the oncoming information revolution. Since then, I‘ve been fortunate to observe and even participate a little in that revolution. In the blink of an eye, information architects emerged as professionals dedicated to making the pain of that revolution easier to bear. In the blink of an eye, others have proclaimed that information architecture, as a profession, was dead. I’m not sure who’s right, nor do I care. 
Twenty-five years is nothing. The dust can settle after we’re all dead. Let’s worry instead about people suffering from everyday IA problems. We, as designers of any stripe, have to help them. And we have to get better at helping them to help themselves. Oh, and if you’re wondering why I won’t be at any of tomorrow’s 38 WIAD meetings: well, it’s Saturday, and I have a date with my six-year old. We’re going to organize his Legos. (This piece originally ran in the Rosenfeld Review; sign up here for new ones.) From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder of Rosenfeld Media. I make things out of information. " Matt Harvey,677,7,https://blog.coast.ai/continuous-online-video-classification-with-tensorflow-inception-and-a-raspberry-pi-785c8b1e13e1?source=tag_archive---------0----------------,"Continuous online video classification with TensorFlow, Inception and a Raspberry Pi","Much has been written about using deep learning to classify prerecorded video clips. These papers and projects impressive tag, classify and even caption each clip, with each comprising a single action or subject. Today, we’re going to explore a way to continuously classify video as it’s captured, in an online system. Continuous classification allows us to solve all sorts of interesting problems in real-time, like understanding what’s in front of a car for autonomous driving applications to understanding what’s streaming on a TV. We’ll attempt to do the latter using only open source software and uber-cheap hardware. Specifically, TensorFlow on a Raspberry Pi with a PiCamera. We’ll use a “naive” classification approach in this post (see next section), which will give us a relatively straightforward path to solving our problem and will form the basis for more advanced systems to explore later. By the time we’re done today, we should be able to classify what we see on our TV as either a football game or an advertisement, running on our Raspberry Pi. Let’s get to it! Video is an interesting classification problem because it includes both temporal and spatial features. That is, at each frame within a video, the frame itself holds important information (spatial), as does the context of that frame relative to the frames before it in time (temporal). We hypothesize that for many applications, using only spatial features is sufficient for achieving high accuracy. This approach has the benefit of being relatively simple, or at least minimal. It’s naive because it ignores the information encoded between multiple frames of the video. Since football games have rather distinct spatial features, we believe this method should work wonderfully for the task at hand. We’re going to collect data for offline training with a Raspberry Pi and a PiCamera. We’ll point the camera at a TV and record 10 frames per second, or more specifically, save 10 jpegs every second, which will comprise our “video”. Here’s the code for capturing our images: Once we have our data, we’ll use a convolutional neural network (CNN) to classify each frame with one of our labels: ad or football. CNNs are the state-of-the-art for image classification. And in 2016, it’s essentially a solved problem. It feels crazy to say that, but it really is: Thanks in large part to Google→TensorFlow→Inception and the many researchers who came before it, there’s very little low-level coding required for us when it comes to training a CNN for our continuous video classification problem. 
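The capture snippet referenced earlier in the post (“Here’s the code for capturing our images:”) was an embedded gist that did not survive this scrape. A minimal sketch of what that capture loop could look like, assuming the picamera library; the resolution, output directory and stopping point are my assumptions, not values from the original post.

```python
# Hypothetical sketch of the frame-capture step: save roughly 10 JPEGs per
# second from the PiCamera pointed at the TV. Resolution and output path are
# assumptions, not taken from the original post.
import os
import time
from picamera import PiCamera

os.makedirs('frames', exist_ok=True)

camera = PiCamera()
camera.resolution = (640, 480)  # assumed; any modest resolution works
camera.framerate = 10           # 10 frames per second, as described

time.sleep(2)  # let the sensor settle before capturing

# capture_continuous() yields one filename per saved JPEG; using the video
# port keeps capture fast enough to sustain roughly 10 fps on a Pi.
for i, filename in enumerate(
        camera.capture_continuous('frames/frame-{counter:05d}.jpg',
                                  use_video_port=True)):
    if i >= 12000:  # e.g. stop after about 20 minutes at 10 fps
        break
```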
Pete Warden at Google wrote an awesome blog post called TensorFlow for Poets that shows how to retrain the last layer of Inception with new images and classes. This is called transfer learning, and it lets us take advantage of weeks of previous training without having to train a complex CNN from scratch. Put another way, it lets us train an image classifier with a relatively small training set. We collected 20 minutes of footage at 10 jpegs per second, which amounted to 4,146 ad frames and 7,899 football frames. The next step is to sort each frame into two folders: football and ad. The name of the folders represent the labels of each frame, which will be the classes our network will learn to predict on when we retrain the top layer of the Inception v3 CNN. This is essentially using the flowers method described in TensorFlow for Poets, applied to video frames. To retrain the final layer of the CNN on our new data, we checkout the r0.11 tag from the TensorFlow repo and run the following command: Retraining the final layer of the network on this data takes about 30 minutes on my laptop with a GeForce GTX 960m GPU. At the completion of 4,000 training steps, our model reports an incredible 98.8% accuracy on the held out validation set! I’m not sure I could do much better using my eyes on the same data. As a point of reference, if the network had classified each frame as football, it would have achieved about 66% accuracy. So it seems to be working! It’s always a good idea to run some known data through a trained network to sanity check the results, so we’ll do that here. Here’s the code we use to classify a single image manually through our retrained model: And here are the results of spot checking individual frames: Before we transfer everything to our Pi and do this in real-time, let’s use a different batch of recorded data and see how well we do on that set. To get this dataset, and to make sure we don’t have any data leakage into our training set, we separately record another 19 minutes of the football broadcast. This dataset amounted to 2,639 ad frames and 8,524 football frames. We run each frame of this set through our classifier and achieve a true holdout accuracy score of 93.3%. Awesome! Looks like we’ve validated our hypothesis that we can achieve high levels of accuracy while only considering spatial features. Impressive results, considering that we only used 20 minutes of training data! Thank you, Google, Pete, TensorFlow and all the folks who have developed CNNs over the years for your incredible work and contributions. Great, so now we have our CNN trained and we know that we can classify each frame of our video with relatively high accuracy. How does it do on live TV, with always changing context? For this, we load up our Raspberry Pi 3 with our newly trained model weights, turn on the PiCamera at 10 fps, and instead of saving the image, send it through our CNN to be classified. We have to make some modifications to the code to classify in real time. The final result looks like this: We also have to get TensorFlow running on the Pi. Sam Abrahams wrote up excellent instructions for doing this, so I won’t cover them again here. After we install our dependencies, we run the program and... crap! Inception on the Raspberry Pi 3 can only classify one image every four seconds. Okay, so we don’t quite have the hardware yet to do 10fps, but this still feels like magic, so let’s see how we do. 
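The snippet for pushing a single image through the retrained model, referenced above, was also lost in this scrape. Here is a minimal sketch using the TensorFlow 1.x-era API that matches the r0.11 tag the post mentions; the graph and label file names and the frame path are assumptions based on the TensorFlow for Poets workflow the post follows, and the tensor names are the ones that workflow’s retraining script produces.

```python
# Hypothetical sketch of classifying one saved frame with the retrained
# Inception graph. File paths are assumptions; the tensor names
# ('DecodeJpeg/contents:0', 'final_result:0') come from the TensorFlow for
# Poets retraining workflow described in the post.
import tensorflow as tf  # TF 1.x-era API, matching the r0.11 vintage of the post

GRAPH_PATH = 'retrained_graph.pb'
LABELS_PATH = 'retrained_labels.txt'

# Class labels written out by the retraining step ('ad', 'football').
labels = [line.strip() for line in tf.gfile.GFile(LABELS_PATH)]

# Load the frozen, retrained graph into the default graph.
with tf.gfile.FastGFile(GRAPH_PATH, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

def classify_frame(jpeg_path, sess):
    """Return (label, confidence) for a single JPEG frame."""
    image_data = tf.gfile.FastGFile(jpeg_path, 'rb').read()
    softmax = sess.graph.get_tensor_by_name('final_result:0')
    preds = sess.run(softmax, {'DecodeJpeg/contents:0': image_data})[0]
    best = preds.argsort()[-1]
    return labels[best], preds[best]

with tf.Session() as sess:
    print(classify_frame('frames/frame-00001.jpg', sess))
```

Running the same function inside the capture loop from earlier is, in essence, the real-time version deployed on the Pi.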
Flipping on Sunday Night Football and pointing our camera at the TV shows a remarkable job at classifying each moment as football or ad, once every few seconds. For the vast majority of the broadcast, we see our prediction come out true to life. So cool. In all, our naive method worked remarkably well at continuous online video classification for this particular use case. But we know that we’re only considering part of the information provided to us inherently in video, and so there must be room for improvement, especially as our datasets become more complex. For that, we’ll have to dive deeper. So in the next post, we’ll explore feeding the output of our CNN (both the final softmax layer and the pool layer, which gives us a 2,048-d feature vector of each image) to an LSTM RNN to see if we can increase our accuracy. Spoiler alert: We can! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder of Coastline Automation, using AI to make every car crash-proof. Practical applications of deep learning and research reports from the road. " Vivek Yadav,425,11,https://chatbotslife.com/using-augmentation-to-mimic-human-driving-496b569760a9?source=tag_archive---------1----------------,An augmentation based deep neural network approach to learn human driving behavior,"Overview In this post, we will go over the work I did for project 3 of Udacity’s self-driving car project, behavior cloning for driving. The main task is to drive a car around in a simulator on a race track, and then use deep learning to mimic the behavior of human. This is a very interesting problem because it is not possible to drive under all possible scenarios on the track, so the deep learning algorithm will have to learn general rules for driving. We must be very careful while using deep learning models, because they have a tendency to overfit the data. Overfitting refers to the condition where the model is very sensitive to the training data itself, and the model’s behavior does not generalize to new/unseen data. One way to avoid overfitting is to collect a lot of data. A typical convolutional neural network can have up to a million parameters, and tuning these parameters requires millions of training instances of uncorrelated data, which may not always be possible and in some cases cost prohibitive. For our car example, this will require us to drive the car under different weather, lighting, traffic and road conditions. One way to avoid overfitting is to use augmentation. Augmentation refers to the process of generating new training data from a smaller data set such that the new data set represents the real world data one may see in practice. As we are generating thousands of new training instances from each image, it is not possible to generate and store all these data on the disk. We will therefore utilize keras generators to read data from the file, augment on the fly and use it to train the model. We will utilize images from the left and right cameras so we can generate additional training data to simulate recovery. Keras generator is set up such that in the initial phases of learning, the model drops data with lower steering angles with higher probability. This removes any potential for bias towards driving at zero angle. After setting up the image augmentation pipeline, we can proceed to train the model. The training was performed using simple adam learning algorithm with learning rate of 0.0001. 
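The training setup just described (Keras generators feeding the model, the Adam optimizer with a learning rate of 0.0001) is straightforward to express in Keras-1.x-era code, which matches the vintage of the post. A minimal sketch only: the model here is a tiny stand-in rather than the author’s architecture (which is described further down), and the mean-squared-error loss is an assumption, chosen because the steering angle is a continuous target.

```python
# Minimal sketch of the training call described above, assuming Keras 1.x.
# build_model() and train_generator stand in for the architecture and the
# augmenting generator the post describes later.
from keras.models import Sequential
from keras.layers import Flatten, Dense
from keras.optimizers import Adam

def build_model(input_shape=(64, 64, 3)):
    # Placeholder model only, kept tiny so the snippet stays self-contained.
    model = Sequential()
    model.add(Flatten(input_shape=input_shape))
    model.add(Dense(1))  # single continuous output: the steering angle
    return model

model = build_model()
model.compile(optimizer=Adam(lr=0.0001), loss='mse')  # lr from the post; mse assumed

# train_generator would be the augmenting Keras generator sketched later;
# 20000 images per epoch and 8 epochs are the numbers reported in the post.
# model.fit_generator(train_generator, samples_per_epoch=20000, nb_epoch=8)
```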
After this training, the model was able to drive the car by itself on the first track for hours and generalized to the second track. All the training was based on driving data of about 4 laps using ps4 controller on track 1 in one direction alone. The model never saw track 2 in training, but with image augmentation (flipping, darkening, shifting, etc) and using data from all the cameras (left, right and center) the model was able to learn general rules of driving that helped translate this learning to a different track. IMPORTANT: These results were obtained on Titan X GPU machine I built earlier. Full specifications of the computer can be found here. Please note that computers with different performance will provide a different performance of the network. Augmentation helps us extract as much information from data as possible. We will generate additional data using the following data augmentation techniques. Augmentation is a technique of manipulating the incoming training data to generate more instances of training data. This technique has been used to develop powerful classifiers with little data. https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html . However, augmentation is very specific to the objective of the neural network. Brightness augmentation Changing brightness to simulate day and night conditions. We will generate images with different brightness by first converting images to HSV, scaling up or down the V channel and converting back to the RGB channel. Using left and right camera images Using left and right camera images to simulate the effect of car wandering off to the side, and recovering. We will add a small angle .25 to the left camera and subtract a small angle of 0.25 from the right camera. The main idea being the left camera has to move right to get to center, and right camera has to move left. Horizontal and vertical shifts We will shift the camera images horizontally to simulate the effect of car being at different positions on the road, and add an offset corresponding to the shift to the steering angle. We added 0.004 steering angle units per pixel shift to the right, and subtracted 0.004 steering angle units per pixel shift to the left. We will also shift the images vertically by a random number to simulate the effect of driving up or down the slope. Shadow augmentation The next augmentation we will add is shadow augmentation where random shadows are cast across the image. This is implemented by choosing random points and shading all points on one side (chosen randomly) of the image. The code for this augmentation is presented below. Flipping In addition to the transformations above, we will also flip images at random and change the sign of the predicted angle to simulate driving in the opposite direction. 2. Preprocessing After augmenting the image as above, we will crop the top 1/5 of the image to remove the horizon and the bottom 25 pixels to remove the car’s hood. Originally 1/3 of the top of car image was removed, but later it was changed to 1/5 to include images for cases when the car may be driving up or down a slope. We will next rescale the image to a 64X64 square image. After augmentation, the augmented images looks as follows. These images are generated using kera’s generator, and unlimited number of images can be generated from one image. I used Lambda layer in keras to normalize intensities between -.5 and .5. 3. 
Keras generator for subsampling As there was limited data and we are generating thousands of training examples from the same image, it is not possible to store all the images in memory a priori. We will utilize Keras's generator function to sample images such that images with lower angles have a lower probability of being represented in the data set. This alleviates any problems we may encounter due to the model having a bias towards driving straight. The panel below shows multiple training samples generated from one image. The Keras generator is presented below. The 'pr_threshold' variable is a threshold that determines whether data with a small angle will be dropped or not. 4. Model Architecture and training I implemented the model architecture above for training the data. The first layer is 3 1X1 filters; this has the effect of transforming the color space of the images. Research has shown that different color spaces are better suited for different applications. As we do not know the best color space a priori, using 3 1X1 filters allows the model to choose its best color space. This is followed by 3 convolutional blocks with 32, 64 and 128 filters of size 3X3, respectively. These convolution layers were followed by 3 fully connected layers. All the convolution blocks and the 2 following fully connected layers used exponential linear units (ELU) as the activation function. I chose ELU to make the transition between angles smoother. Training: I trained the model using the Keras generator with a batch size of 256 for 8 epochs. In each epoch, I generated 20000 images. I started with pr_threshold, the chance of dropping data with small angles, at 1, and reduced it by dividing it by the iteration number after each epoch. The entire training took about 5 minutes. However, it took more than 20 hours to arrive at the right architecture and training parameters. The snippet below presents the result of training. 5. Model performance: The video below shows the performance of the algorithm on track 1, on which the original data was collected. The car is able to drive around for hours; we will next look into the cases where the camera resolution, video size or track is changed. Generalization from one image size to another The video below presents generalization from one image size to another. I used the same pretrained model and tested it on all the other image sizes and found that the deep learning neural network was able to drive the car around for all image sizes. Generalization from one image resolution to another The video below presents generalization from one image resolution to another. I used the same pretrained model and tested it on all the other image resolutions and found that the deep learning neural network was able to drive the car around for all image resolutions. I also tested different combinations of image size and image resolution, and on track 1 the deep learning algorithm was able to drive the car around for all combinations of image resolution and size. Generalization from one track to another The figure below presents generalization from one track to another. This was perhaps the toughest test for the deep learning algorithm. On the second track, there were more right turns and u-turns, it was darker, and the road had slopes, all of which were absent on the original track. However, all these effects were artificially introduced during training via image augmentation. 6. Future Directions This project is far from over; it opened more questions than it answered. There are a few more things to try. 
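The augmentation snippet and the Keras generator referenced earlier in the post appear to have been embedded gists that did not survive this scrape. Before the reflections, here is a minimal sketch of what they could look like, using the parameters stated in the text: a +/-0.25 steering offset for the side cameras, 0.004 steering units per pixel of horizontal shift, brightness scaling in HSV, random flips, a crop-and-resize to 64x64, and a pr_threshold that drops most small-angle samples early in training. The function names and implementation details are my own assumptions, and the shadow augmentation is omitted for brevity.

```python
# Hypothetical reconstruction of the augmentation helpers and the subsampling
# generator described in the post. Parameter values (0.25 camera offset,
# 0.004 steering units per pixel, 64x64 output) come from the text; function
# names and exact details are assumptions.
import cv2
import numpy as np

def augment_brightness(image):
    # Scale the V channel in HSV to simulate day/night lighting.
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV).astype(np.float32)
    hsv[:, :, 2] *= 0.25 + np.random.uniform()
    hsv[:, :, 2] = np.clip(hsv[:, :, 2], 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)

def shift_image(image, angle, shift_range=100):
    # Horizontal shift simulates different lateral positions on the road;
    # each pixel of shift adds 0.004 steering units, as described in the post.
    tx = shift_range * (np.random.uniform() - 0.5)
    ty = 40 * (np.random.uniform() - 0.5)          # small vertical shift for slopes
    m = np.float32([[1, 0, tx], [0, 1, ty]])
    shifted = cv2.warpAffine(image, m, (image.shape[1], image.shape[0]))
    return shifted, angle + tx * 0.004

def preprocess(image):
    # Crop the horizon (top 1/5) and the hood (bottom 25 px), resize to 64x64.
    h = image.shape[0]
    cropped = image[int(h / 5):h - 25, :, :]
    return cv2.resize(cropped, (64, 64))

def augment_sample(center, left, right, angle):
    # Pick one of the three cameras and apply the +/-0.25 recovery offset.
    image, angle = [(left, angle + 0.25), (center, angle), (right, angle - 0.25)][
        np.random.randint(3)]
    image, angle = shift_image(image, angle)
    image = augment_brightness(image)
    if np.random.uniform() > 0.5:                  # random horizontal flip
        image, angle = cv2.flip(image, 1), -angle
    return preprocess(image), angle

def batch_generator(samples, load_fn, batch_size=256, pr_threshold=1.0):
    # `samples` is a list of (center_path, left_path, right_path, angle);
    # `load_fn` reads an image path into an RGB array. Small angles are
    # dropped with a probability controlled by pr_threshold, which the post
    # reduces after every epoch to remove the bias towards driving straight.
    while True:
        images, angles = [], []
        while len(images) < batch_size:
            c, l, r, a = samples[np.random.randint(len(samples))]
            img, ang = augment_sample(load_fn(c), load_fn(l), load_fn(r), a)
            if abs(ang) < 0.1 and np.random.uniform() < pr_threshold:
                continue                            # drop most near-zero angles early on
            images.append(img)
            angles.append(ang)
        yield np.array(images), np.array(angles)
```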
Reflections This was perhaps the weirdest project I did. This project challenged all the previous knowledge I had about deep learning. In general, a larger number of epochs and training with more data result in better performance, but in this case any time I went beyond 10 epochs, the car simply drove off the track. Although all the image augmentation and tweaks seem reasonable now, I did not think of them a priori. I hope others find this post useful, and get inspired to try novel things. I haven't used the on-the-fly training Agile Trainer by John Chen yet. I wanted to try and stretch the data as much as possible. The next thing to try is to experiment with a parallel network using John's trainer. Acknowledgements: I am very thankful to Udacity for selecting me for the first cohort; this allowed me to connect with many like-minded individuals. As always, I learned a lot from discussions with Henrik Tünnermann and John Chen. I am also thankful for getting NVIDIA's GPU grant. Although it's for work, I use it for Udacity too. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Staff Software Engineer at Lockheed Martin-Autonomous System with research interest in control, machine learning/AI. Lifelong learner with glassblowing problem. Best place to learn about Chatbots. We share the latest Bot News, Info, AI & NLP, Tools, Tutorials & More. " Carlos Beltran,97,9,https://medium.com/@carlosbeltran/ai-the-theme-in-avenged-sevenfolds-new-album-the-stage-f4516d6fc96?source=tag_archive---------2----------------,A Rock Album For AI – Carlos Beltran – Medium,"https://open.spotify.com/album/0jwnYwJz6XHNrVAYEclQPd It’s awesome that Avenged Sevenfold became interested in AI and wrote an entire album that revolves around the idea. In an interview with Rolling Stone, lead singer M. Shadows says the initial interest came after reading Tim Urban’s article over at waitbutwhy. It’s one of the things (along with movies like Her and The Matrix, of course) that piqued my interest in AI as well, so I’d highly recommend reading it. Tim does a phenomenal job of explaining the topic, the current challenges engineers are facing, and the very possible implications of this technology. The term “artificial intelligence” was first coined half a century ago. Fast forward to today, where we have giant companies like Intel and Apple acquiring AI startups like there’s no tomorrow. It’s not a matter of whether or not we’ll be able to create machines that surpass our own capabilities, but when. Theoretical physicist and futurist Dr. Michio Kaku thinks it is possible for machines as smart as us to exist by the end of the century. Google’s chief futurist, Ray Kurzweil, believes such technology will exist as soon as 2029. The band is right in wanting its fans, and the general public, to be more aware of these ideas: they could be right around the corner. I’m no expert, but I’d like to discuss the ideas behind some of the songs and include references in case you’d like to delve deeper. And if you want to read more on the possible future of AI, I’d recommend reading Kurzweil’s book The Singularity Is Near. Although some of his predictions have been met with skepticism, the ideas presented are thought-provoking. Simply put, nanomachines are microscopic machines that will enhance us in almost every way imaginable. They’ll be able to help our immune system fight off diseases. They would create super soldiers. This technology is actually at the center of a great game series, Metal Gear Solid. 
This “hack” in our biological makeup will also increase our lifespans. Kurzweil imagines a future where biotechnology is so advanced that we will live forever. This is the same idea behind the song “Paradigm”. Lyrics include: The song also raises the question of what it really means to be human. What do we become when we merge with machines? Will we lose what fundamentally makes us human? It can be argued that this “merge” is the next logical step in evolution, as there is no there is no evolutionary pressure for us to do so anymore. We’ll become, as Kurzweil puts it, “Godlike”. Expanding the brain’s neocortex will allow us, for example, to pose questions in our thoughts and know the answer almost immediately (most likely thanks to our direct “brain-to-Google” connection). We’ll always have witty jokes on hand, and learning Calculus will be as simple as purchasing downloadable content. Plug and play. Besides swapping out failing body parts with prosthetics and enhancing our brains, there’s another way we’ll be able to gain immortality. Both Dr. Kaku and Kurzweil firmly believe that the advances in brain-computer interfaces will eventually allow us to upload our consciousness to machines. Scientists still have no clue how the brain works, how the billions of neurons form connections that result in learned behavior, or what dreaming is. But once these secrets are known (which might never actually happen) and we know how our brain functions, as well as what the “consciousness switch” is, the possibilities are endless. To get an idea of what’s possible, check out Black Mirror’s episode Playtest. The brain-computer interface for the game is so advanced that the player can’t distinguish between what’s real and what isn’t. I don’t want to spoil anything, but get ready for a mind fuck. Black Mirror does a great job of weaving technology with a dystopia that we might inhabit, showing a darker side of our society. It’s on Netflix, so check it out. Elon Musk sure does. He claims that the chances of us living in “base reality” is one in billions. I’d recommend watching the 3-minute video. His logic is as follows: we had Pong some 40 years ago. Two rectangles and a dot were rendered on-screen for what we called a videogame. Today, we have games with realistic graphics and they keep getting better every year. Better yet, virtual and augmented reality are right around the corner, pushing the boundaries of gaming. Eventually, we’ll have the technology to create simulated worlds that are indistinguishable from reality. Therefore, Musk claims, it is likely that we are living in an ancestor simulation created by an advanced future civilization some 10,000 years from now. The album’s 7th song, “Simulation” explores the idea that our reality might not be what it seems. Think of it this way — the brain and nervous system which we use to automatically react to the environment around us is the same brain and nervous system which tells us what the environment is. Throughout the song, the “patient” is having thoughts that challenge the simulation they are living in. They are — in a sense — waking up. A darker voice, which I believe is meant to represent the ones running the show, has to reprimand the patient, reminding them that they “...only exist because we allow it”. To control the situation, the patient is to be sedated with blue comfort, a reference from The Matrix, which will make them forget they’re living in a simulation. Blissful ignorance. I won’t try to explain this one. Just watch the video. 
And here’s a quote from that man that might get your attention: Imagine an entity so intelligent... ...but that’s just it. You can’t imagine it. In the second part to his article on AI, Tim Urban compares this to a chimp being unable to understand a skyscraper is not just a part of its environment, but that humans built it. It’s not the chimp’s fault or anything, its brain is just not made to have that level of information processing. The same thing will happen when we build a machine with the collective knowledge of some 200,000 years of Homo Sapien existence. Therefore, there is no way to know what it will do or what the consequences will be. Tim depicts our situation with this entity, what he refers to as Artificial Superintelligence (ASI), beautifully: Mark Zuckerberg is right in saying we should be hopeful of the amount of good AI could do, but some of the smartest minds in existence are genuinely concerned. Stephen Hawking acknowledges that the successful creation of an AI will be the biggest event in history, but warns it could also end mankind. Elon Musk founded a research company OpenAI as a way to “neutralize the threat of a malicious artificial super-intelligence”. “Creating God” describes AI as a modern messiah, “the very last invention man would ever need”. It paints the picture of a utopia where this intelligence exists. At the same time, the song suggests that we could be “summoning the demon”, unable to control the outcomes. We could just be its stepping stone, as our existence after its creation becomes irrelevant. The album wraps up with a 15-minute eargasm. I can’t produce words that will do “Exist” any justice. As the band described it, it’s like listening to what the Big Bang might’ve sounded like. Neil deGrasse Tyson makes a cameo at the end of the song that serves as a reminder that our problems and conflicts are minuscule in the grand scheme of things. We’re all a part of the same universe and once we as a society realize this, we can truly make progress. Here’s the full thing: The Stage is an exceptional album, in my opinion. The band’s intentions were for fans to educate themselves, or be a bit more aware of what’s going on in this area. We can enjoy it as a rock album as well as explore the ideas behind the lyrics. I had an awesome time writing this, digging up things I’ve read and seen and unifying them in a way so others can hopefully become more interested as well. And come on, don’t tell me that the idea that we’re living in a simulation isn’t thought-provoking. Tap the ❤ button below :) My name’s Carlos and I generally write about personal development, tech, and entrepreneurship. Hit me up on Twitter! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Software engineer. Focused on building cool shit on Ethereum 🚀 " Matt Harvey,558,6,https://blog.coast.ai/continuous-video-classification-with-tensorflow-inception-and-recurrent-nets-250ba9ff6b85?source=tag_archive---------3----------------,"Continuous video classification with TensorFlow, Inception and Recurrent Nets","A video is a sequence of images. In our previous post, we explored a method for continuous online video classification that treated each frame as discrete, as if its context relative to previous frames was unimportant. Today, we’re going to stop treating our video as individual photos and start treating it like the video that it is by looking at our images in a sequence. We’ll process these sequences by harnessing the magic of recurrent neural networks (RNNs). 
To restate the problem we outlined in our previous post: We’re attempting to continually classify video as it’s streamed, in an online system. Specifically, we’re classifying whether what’s streaming on a TV is a football game or an advertisement. Convolutional neural networks, which we used exclusively in our previous post, do an amazing job at taking in a fixed-size vector, like an image of an animal, and generating a fixed-size label, like the class of animal in the image. What CNNs cannot do (without computationally intensive 3D convolution layers) is accept a sequence of vectors. That’s where RNNs come in. RNNs allow us to understand the context of a video frame, relative to the frames that came before it. They do this by passing the output of one training step to the input of the next training step, along with the new frames. Andrej Karpathy describes this eloquently in his popular blog post, “The Unreasonable Effectiveness of Recurrent Neural Networks”: We’re using a special type of RNN here, called an LSTM, that allows our network to learn long-term dependencies. Christopher Olah writes in his outstanding essay about LSTMs: “Almost all exciting results based on recurrent neural networks are achieved with [LSTMs].” Sold! Let’s get to it. Our aim is to use the power of CNNs to detect spatial features and RNNs for the temporal features, effectively building a CNN->RNN network, or CRNN. For the sake of time, rather than building and training a new network from scratch, we’ll... Step 2 is unique so we’ll expand on it a bit. There are two interesting paths that come to mind when adding a recurrent net to the end of our convolutional net: Let’s say you’re baking a cake. You have at your disposal all of the ingredients in the world. We’ll say that this assortment of ingredients is our image to be classified. By looking at a recipe, you see that all of the possible things you could use to make a cake (flour, whisky, another cake) have been reduced down to ingredients and measurements that will make a good cake. The person who created the recipe out of all possible ingredients is the convolutional network, and the resulting instructions are the output of our pool layer. Now you make the cake and it’s ready to eat. You’re the softmax layer, and the finished product is our class prediction. I’ve made the code to explore these methods available on GitHub. I’ll pull out a couple interesting bits here: In order to turn our discrete predictions or features into a sequence, we loop through each frame in chronological order, add it to a queue of size N, and pop off the first frame we previously added. Here’s the gist: N represents the length of our sequence that we’ll pass to the RNN. We could choose any length for N, but I settled on 40. At 10fps, which is the framerate of our video, that gives us 4 seconds of video to process at a time. This seems like a good balance of memory usage and information. The architecture of the network is a single LSTM layer with 256 nodes. This is followed by a dropout of 0.2 to help prevent over-fitting and a fully-connected softmax layer to generate our predictions. I also experimented with wider and deeper networks, but neither performed as well as this one. It’s likely that with a larger training set, a deeper network would perform best. Note: I’m using the incredible TFLearn library, a higher-level API for TensorFlow, to construct our network, which saves us from having to write a lot of code. 
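The gist for turning per-frame CNN outputs into 40-frame sequences, and the TFLearn network itself, were lost in this scrape. A minimal sketch under the parameters stated above: 40-frame sequences (4 seconds at 10 fps), a single LSTM layer with 256 nodes, a dropout of 0.2 (expressed in TFLearn as a keep probability of 0.8), and a softmax over the two classes. Variable names and the exact dropout placement are my assumptions.

```python
# Hypothetical sketch of the sequence-building step and the TFLearn network
# described above.
from collections import deque
import numpy as np
import tflearn

SEQ_LEN = 40  # 4 seconds of video at 10 fps, as chosen in the post

def build_sequences(frame_features, labels):
    """Slide a fixed-length window over per-frame CNN outputs.

    frame_features is a chronologically ordered list of vectors, either the
    2-d softmax predictions or the 2048-d pool-layer features; labels holds
    the one-hot class of each frame."""
    window = deque(maxlen=SEQ_LEN)
    X, y = [], []
    for features, label in zip(frame_features, labels):
        window.append(features)
        if len(window) == SEQ_LEN:
            X.append(np.array(window))
            y.append(label)          # label each sequence with its last frame
    return np.array(X), np.array(y)

def build_network(feature_size):
    net = tflearn.input_data(shape=[None, SEQ_LEN, feature_size])
    net = tflearn.lstm(net, 256)                     # single 256-node LSTM layer
    net = tflearn.dropout(net, 0.8)                  # keep prob 0.8, i.e. dropout of 0.2
    net = tflearn.fully_connected(net, 2, activation='softmax')
    net = tflearn.regression(net, optimizer='adam',
                             loss='categorical_crossentropy')
    return tflearn.DNN(net)

# Usage sketch: feature_size would be 2 for the softmax outputs or 2048 for
# the pool-layer features discussed in the post.
# X, y = build_sequences(frame_features, labels)
# model = build_network(X.shape[2])
# model.fit(X, y, validation_set=0.1, n_epoch=10)
```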
Once we have our sequence of features and our network, training with TFLearn is a breeze. Evaluating is even easier. Now, let’s evaluate each of the methods we outlined above for adding an RNN to our CNN. Intuitively, if one frame is an ad and the next is a football game, it’s essentially impossible that the next will be an ad again. (I wish commercials were only 1/10th of a second long!) This is why it could be interesting to examine the temporal dependencies of the probabilities of each label before we look at the more raw output of the pool layer. We convert our individual predictions into sequences using the code above, and then feed the sequences to our RNN. After training the RNN on our first batch of data, we then evaluate the predictions on both the batch we used for training and a holdout set that the RNN has never seen. No surprise, evaluating the same data we used to train gives us an accuracy of 99.55%! Good sanity check that we’re on the right path. Now the fun part. We run the holdout set through the same network and get... 95.4%! Better than our 93.3% we got without the LSTM, and not a bad result, given we’re using the full output of the CNN, and thus not giving the RNN much responsibility. Let’s change that. Here we’ll go a little deeper. (See what I did there?) Instead of letting the CNN do all the hard work, we’ll give more responsibility to the RNN by using output of the CNN’s pool layer, which gives us the feature representation (not a prediction) of our images. We again build sequences with this data to feed into our RNN. Running our training data through the network to make sure we get high accuracy succeeds at 99.89%! Sanity checked. How about our holdout set? 96.58%! That’s an error reduction of 3.28 percentage points (or 49%!) from our CNN-only benchmark. Awesome! We have shown that taking both spatial and temporal features into consideration improves our accuracy significantly. Next, we’ll want to try this method on a more complex dataset, perhaps using multiple classes of TV programming, and with a whole whackload more data to train on. (Remember, we’re only using 20 minutes of TV here.) Once we feel comfortable there, we’ll go ahead and combine the RNN and CNN into one network so we can more easily deploy it in an online system. That’s going to be fun. Part 3 is now available: Five video classification methods implemented in Keras and TensorFlow From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder of Coastline Automation, using AI to make every car crash-proof. Practical applications of deep learning and research reports from the road. " Oxford University,237,19,https://medium.com/oxford-university/the-future-of-work-cf8a33b47285?source=tag_archive---------4----------------,The future of work – Oxford University – Medium,"Technology has always changed employment, but the rise of robotics and artificial intelligence could transform it beyond recognition. Researchers at Oxford are investigating how technology will shape the future of work — and what we can do to ensure everyone benefits. In a famous 1930 talk, John Maynard Keynes imagined a future 100 years hence in which technological progress automated much of human labour. By 2030, he estimated, we could all enjoy a 15-hour working week. A lot will need to change in the next decade for that to become a reality, but it’s not impossible. Right now, advances in artificial intelligence and robotics promise machines that will take on all kinds of human tasks. 
Digital communication is creating an internet-dwelling labour force that can work remotely and on demand. And the self-employed are finding that new technological services like Uber and Airbnb can provide a flexible way to make a living. But phenomena like these give rise to a cascade of effects — not all necessarily desirable — that are fiendishly difficult to perceive and predict. It’s perhaps not surprising, then, that the future of work is a topic of increasing fascination for University of Oxford academics. Both the Oxford Martin School and Green-Templeton College now run specific programmes that focus on the topic, with plenty of researchers — from the Departments of Engineering Science and Sociology to those of Politics and Economics — grappling with its complexity. ‘We see a need for bringing together different perspectives around the study of work,’ explains Dr Marc Thompson, a Senior Fellow at Saїd Business School and the Director of the Green-Templeton College Future of Work Programme. ‘Our role as academics is to contribute to the debate, both in terms of theory and to raise challenging questions and issues for those in government and industry. What will happen as a result of these advances? How will it affect people? Whose interests are being pursued? And what are the long-term implications?’ A series of recent studies from the University cut straight to the chase of technology’s impact on employment, focusing on how robotics and automation will affect the jobs that humans currently undertake. The authors, Dr Carl Benedikt Frey (@carlbfrey) and Prof Michael Osborne (@maosbot), come from quite different backgrounds: Frey is an economist interested in the transition of industrial nations to digital economies, Osborne an engineer focused on creating machine-learning algorithms. Together, they’re co-directors of the Programme on Technology and Employment at the Oxford Martin School. ‘It would be fair to ask why I’m doing work related to economics while we’re sitting here in the Department of Engineering,’ admits Osborne, gesturing to his surroundings. ‘But I’ve always had some interest in thinking about what machine-learning could mean for society beyond the industrial applications we usually consider, so when Carl approached me to speak about algorithms and technologies used in automation, and their effects on employment, it seemed like a natural fit.’ This is, of course, exactly the kind of multidisciplinary work the University excels at, and the reason the Oxford Martin School was established. Each of its programmes brings together researchers from different fields to tackle complex global issues that can’t be solved by academics from a single discipline. Since meeting, the pair has set about developing ways to analyse which jobs that exist today could be at risk of being taken over by robots or artificial intelligence software in the next 20 years. First, they gathered together ‘as many smart people as [they] could’ to decide on 70 job roles that definitely could or could not be automated in the next 20 years. For example, they collectively decided that switchboard operators and dishwashers could definitely be replaced, while the clergy and magistrates certainly couldn’t. The pair combined this list with data from the US Department of Labor’s O⋆NET system — a database which describes the different skills relevant to specific occupations. 
Osborne then built an algorithm that could learn from both pools of data to establish the kinds of skills that were common to automatable jobs. When shown other occupations and the skills they require, the software can classify them with a probability of being either automatable or non-automatable. The pair found that the jobs least likely to be automated are those that require skills of creative intelligence, social intelligence or physical dexterity. These are what they refer to as engineering bottlenecks: current limits to technology that make humans irreplaceable. Osborne points out that it’s perfectly possible, for instance, to have an algorithm churn out an endless sequence of songs, but almost impossible to have it create a hit. Similarly, chat-bots may be able to communicate with you but they can’t negotiate a deal, and robots can assemble objects on a well-defined production line, but they can’t perform a fiddly task like making a cup of tea in your messy kitchen. In each case, it’s because humans draw on a huge wealth of tacit knowledge about culture, emotion, human behaviour and the physical environment that’s hard to encode in a way that a machine can act upon. But, even with those bottlenecks, the results suggest that as many as 47% of US jobs are at risk from automation over the course of the next two decades. It’s worth bearing in mind that the figures explain which jobs are theoretically automatable, rather than destined to be automated. ‘That may seem like a fiddly point,’ says Osborne, ‘but this analysis doesn’t take into account other factors that we absolutely do believe will have an impact on whether an occupation is taken over by a machine, such as human wage levels, social acceptance, and the creation of new jobs.’ But however you look at it, the numbers are difficult to ignore. There’s an intuitive counter-argument to the claims that their analysis makes: for centuries, new technologies have been invented that have pushed humans out of work, but by and large most of us still continue to have jobs. In fact researchers elsewhere in the University have shown that the amount of work we all perform remains steadfastly consistent, irrespective of technological change. Jonathan Gershuny, Professor of Sociology and Director of the Centre for Time Use Research, has spent a large part of his career tracing the way that we all use our time — to work, play, rest and everything else. ‘Fundamentally, there are three realms of activity,’ he explained from the bay window of his Woodstock Road office. ‘There’s paid work, unpaid work and consumption.’ Paid work is just that: the tasks we carry out in exchange for money, be it mining coal, writing a book or performing brain surgery. Unpaid work, meanwhile, is formed of tasks that you could pay someone else to do for you (but for whatever reason don’t), such as cooking, cleaning, gardening or childcare. And consumption is all the activity you absolutely couldn’t pay someone else to do for you — your night’s sleep, say, or eating lunch. ‘Why am I telling you all this?’ asks Gershuny, with a grin. ‘Well, when you define work quite widely like this, you arrive at a really quite extraordinary discovery, which is that work time — that is the sum of paid and unpaid work time — doesn’t change very much. Looking at all the data we have access to, the total is pretty constant, at about 60 hours per week.’ That’s just over a third of our 168-hour week, and a little more than the approximately 50-hour chunk we manage to spend sleeping. 
He points to decades of evidence accumulated by his team — in countries including Australia, Canada, Israel, Slovenia, France, Sweden, the Netherlands and plenty more — that confirm the trend, as well as working time regulations from as far back as the Industrial Revolution. His latest dataset — a huge survey of British residents carried out in 2015 — was being downloaded in full the day we met, but a preliminary analysis already suggested that his observation holds true. ‘The truth is, we need work for various reasons: a time structure, a social context, a purpose in life,’ he explains. Indeed, what many people citing Keynes’ famous talk about the future fail to mention is that he went on to suggest that ‘there is no country and no people... who can look forward to the age of leisure and of abundance without a dread.’ In other words, he thought that most us couldn’t really begin to comprehend the reality of not working. Gershuny agrees, arguing that humans will simply endeavour to find new types of work to do in order to busy themselves, whether the robots take over the jobs we currently possess or not. Dr Ruth Yeoman, a Research Fellow at the Saïd Business School who researches meaningful work in organisations and systems, points out that the human desire to find meaning in work is hard to ignore. She explains that the drive to work is so strong that people seek positive meaning in work that is considered by many people to be dirty, low status or poorly paid. ‘Hospital cleaners, for instance, interpret their work to be meaningful and worthwhile because they enlarge the scope of that work in their own minds,’ she explains. This phenomenon allows humans to justify all kinds of work to themselves as useful and relevant, it seems, regardless of what it actually is. Frey and Osborne aren’t so confident that humans are resourceful enough to create new work for themselves, though. Frey has actually studied the rate at which new jobs are being generated as a result of technological change. His findings suggest that about 8.2% of the US workforce shifted into new types of jobs — that is, roles associated with technological advances — during the 1980s. In the 1990s the figure fell to 4.4% and in the 2000s it dropped to just 0.5%. The evidence suggests that the new industries we might assume to be the salvation of the labour force — such as web design or data science — aren’t creating as many new positions as we may hope. Part of the reason for that, argues Osborne, is that many of the new job roles being created are related to software, rather than hard, physical goods. ‘Software is pretty cheap with next to zero marginal cost of reproduction,’ he explains. That means that a small group of people can have a great idea and easily turn it into a product that’s used the world over, while barely growing the size of its team. The smartphone messaging service WhatsApp is a prime example: it was purchased by Facebook for $19 billion in 2014, when it served 700 million users. At the time, it had just 55 employees. Counting specific jobs may, however, be overly simplistic when it comes to thinking about how the working lives of real people are set to change. ‘People often think about the work that people do as a monolithic indivisible lump of stuff,’ explains Daniel Susskind (@danielsusskind), a Lecturer in Economics at Balliol College and co-author of a new book called The Future of the Professions. 
‘The problem is, that encourages the view that one day a lawyer will arrive at work to find an algorithm sitting in his chair, or a doctor turn up to a robot in her operating theatre, and their jobs will both be gone.’ Instead, he argues, we should be focusing on the separate tasks that make up job roles. Susskind co-wrote his new book with his father, Richard Susskind (@richardsusskind), whose Oxford DPhil considered the impact of artificial intelligence on law. That was back in the 1980s, when AI systems were rudimentary and typically based on rules gleaned from human understanding. But five years ago father and son — the latter then working in the Policy Unit at 10 Downing Street — realised that a second wave of artificial intelligence was being developed that could have profound effects on professional careers. Since, they’ve been researching how technology might affect the working lives of lawyers, doctors, teachers, architects and the rest of the professions. ‘Not everything that a professional does is creative, strategic or complex,’ explains Susskind. ‘So while many professionals might think that all their work lies on one side of [Frey and Osborne’s] engineering bottlenecks, actually many of the tasks they perform are amenable to computerisation.’ For most, that means it’s unlikely that they’ll simply lose their job to technology, at least in the near future — but they can expect to see a significant change in the sorts of things they’re asked to do. In their book, the Susskinds describe twelve new roles that might appear within the professions — such as process analysers, knowledge engineers, data scientists and empathisers. ‘These are roles that sound unfamiliar to traditional professionals, that require skills and abilities that many of them are unlikely to have at this moment in time,’ they explain. We’re already seeing professionals adapt so that they can work alongside more intelligent technological systems, though. Take, for instance, your bank manager. When you used to approach them for a loan, they’d carefully make a decision on whether or not you were a good risk, then either give you the money or send you home. Now, an algorithm determines whether or not you’re awarded the cash, and yet bank managers still exist. The role has simply changed, to become a customer service and sales job rather than an analytical or technical role. Not everyone will be as lucky as the professionals whose jobs merely metamorphose, because if all of the tasks that make up a job are automatable, the job no longer needs to exist. Craig Holmes (@CraigPHolmes), a Fellow in Economics at Pembroke College and Senior Research Fellow at the Institute for New Economic Thinking, has been studying shifts in occupational structure of labour markets, and how they’ve moved away from middle-skilled work, with more people now doing high-skilled or low-skilled work. This phenomenon — referred to as the hollowing out of the labour market — isn’t in itself new: middle-skilled factory workers have been losing their jobs to robots for decades, for instance. But the pace of technological development is now threatening other middle-skilled occupations that in the past we’ve assumed could only be done by humans. Job categories defined as associate professionals, for instance — the people that provide technical services that keep trade, finance and government running — appear increasingly likely to be taken over by machines. 
‘In the case of, say, paralegals, there are now pieces of software that can sift through thousands of documents, pull out relevant precedents, and put them together using a very simple format, without requiring any human involvement,’ explains Holmes. ‘So a traditionally middle-tier research job can be perfectly performed by technology.’ The same story could play out in other sectors: large datasets of historical case notes and information from wearables could allow computers to make straightforward medical diagnoses, say, while smarter algorithms might remove the work of number-crunching accountants. Like car factory workers replaced by robots in the past, Holmes imagines a number of possible futures for those discharged from mid-tier roles. Some, like the bank manager, will be able to assume different roles with similar titles. A small number may move upwards into roles that aren’t yet automatable. Others, sadly, may have to assume lower-skilled jobs or face unemployment. The nature of those lower-skilled jobs will of course change too. The work of Frey and Osborne suggests that many low-skilled jobs — such as call centre workers, data entry clerks and dishwashers — will be readily automated in the future. ‘In some cases, the cost of technology will be so low that there’s no wage that people could happily accept that would make the job sustainable,’ admits Holmes. ‘In fast food restaurants, for instance, you can replace someone who takes an order with an iPad that will last for years. Nobody would accept a job that paid wages that low.’ But it’s not perhaps quite so gloomy as that, as personal service jobs will likely still require a human touch. ‘We’ll probably see an increase in the number of low-skill service jobs, because people value human interaction and many of those jobs currently seem not to be readily automatable,’ suggests Holmes. ‘That will provide more jobs, they just won’t be great jobs.’ While technology may be the mechanism through which many jobs are lost, though, it might very well also be the thing that enables people to take up new lower-skilled positions. ‘There’s been an explosion in connectivity around the world,’ explains Professor Mark Graham (@geoplace) from the Oxford Internet Institute. ‘Something like 3.5 billion people are now online. And that has some significant repercussions in terms of what work is, where it’s done and how it happens.’ Graham has been travelling the world to talk to people who find themselves in a new kind of labour market. In particular, he’s been interviewing individuals who perform work from home, provided to them by a slew of websites such as Amazon’s Mechanical Turk, UpWork, and ClickWorker. These sites all allow companies and individuals to outsource tasks: potential employers simply post a description of what they need doing to a website, then people interested in doing the work bid for it. The employer chooses someone to do the work, based on a combination of price, listed skills and ratings from previous employers; the worker carries out the task, gets paid, then moves on to another piece of work. The tasks being doled out vary — from transcription and translation to new kinds of work such as tagging images for artificial intelligence systems — but much of it is currently difficult or expensive to automate. Technology has also created legions of new workforce members in more traditional sectors, such as transportation, hospitality, catering, cleaning and delivery. 
‘There are increasingly more ways of commodifying bits of everyday life: using your car to be an Uber driver; your apartment to be an Airbnb host; your bicycle to be a Deliveroo rider; or your broom to be a Task Rabbit cleaner,’ explains Graham. This is what’s become known as the ‘sharing’ or ‘gig’ economy. Whether it’s Uber, Airbnb or Amazon’s Mechanical Turk, the business plan is much the same: create a digital platform which makes it easier to link a customer, who wants a service to be performed, with someone who’s willing to provide it, for a (very) competitive fee. These new styles of working certainly bring some benefits: apparent flexibility for workers, more efficient use of existing resources and equipment, and reasonable prices for those seeking services. But, as Jeremias Prassl, an Associate Professor of Law and Fellow of Magdalen College, warns, this new workforce is potentially vulnerable. ‘Uber acts like an employer: it sets your wage, tells you the route to drive, hires you, and fires you if your rating falls too low,’ he explains. ‘Under any classical analysis, Uber performs all the usual employer functions. But in its contracts with “driver-partners”, the platform explicitly denies employer status, suggesting that the worker is very much a contractor. Legally, and through the language it uses, Uber tries to deny the fact that it offers employment.’ Through so doing, the company is able to avoid paying social security, pension contributions, redundancy pay and so on — all the usual rights an employee might benefit from. But Prassl, who’s written a book about the topic, points out that these kinds of contracts are nothing new. ‘From the perspective of an employment lawyer, zero hours contracts and the gig economy are old problems,’ he explains. ‘We’ve been grappling with the rise of so-called “non-standard work” for the last 30 or 40 years. It’s just that now they’re receiving more attention and sustained media coverage.’ The problem, as Prassl sees it, is that employment law is currently based on an old binary system. If you’re an employee you get rights — to, say, sick pay, notice of dismissal or paid holiday. But if you’re a contractor, you’re not afforded any of those rights. Employment law currently boils down to a simple question: How do you define whether or not someone counts as an employee? ‘What my research suggests is that maybe we should turn the problem on its head,’ he explains. ‘We could say instead: Who’s the employer?’ It seems like a subtle difference but, with the shoe on the other foot, he suggests crowd workers would be able to enjoy some kind of employment law protection. In this upended scenario, everyone could benefit from existing minimum standards like the minimum wage, working time regulations and discrimination protection, with their provision accounted for by whoever is legally deemed to be the employer. If companies failed to comply, workers could litigate employers in the knowledge that the damages were definitely owed to them. It’s not just Prassl that’s worried about the vulnerability of employees. ‘One of the issues is that we confuse work with jobs,’ points out Ruth Yeoman. ‘There’s an awful lot of work in the world that has to be done, and one of the problems when we think about the future of work is how it all gets converted into jobs for which people will be paid. 
Sometimes people may contribute to society not through paid work, but through some other mechanism: voluntary work, say, or caring.’ And while those tasks may be hard work, or may not pay, they are necessary and many of them must be done by humans. That’s why Stuart White (@StuartGWhite), Associate Professor from the Department of Politics and International Relations, is interested in how we could ensure everyone enjoys a basic standard of living — a concept he’s written about in the book Democratic Wealth. White’s suggestion is that no tests of means or willingness to take a job would be imposed, so that everyone in the country would receive a basic payment every month. It’s worth noting that the idea is not intended to make everyone rich — far from it. Instead, it’s a means of giving individuals more flexibility, affording them the power to decide when and how to be contributive and productive. ‘It’s a way of ensuring you don’t have people desperately scrambling into jobs to make ends meet,’ White explains. In turn, he argues, employers would make some of the least appealing jobs more pleasant — they’d be forced to, otherwise nobody would choose to do them. Numerous mechanisms for putting such a policy into action have been proposed in the past. One option is to divert existing benefits and tax relief into a basic income that’s shared equally amongst the population. If those contributions didn’t stretch far enough, they could be topped up with revenue from further taxation — from a land value tax, suggests White. Alternatively, the income could be provided by a state-owned investment fund from which the returns would be shared out equally. ‘There are lots of philosophical arguments about whether or not it’s all a good idea,’ he concedes. ‘But we’re moving into a world where there’s increased insecurity around work. Against that backdrop, a source of income that’s independent of work is a way of rebalancing power relations in the labour market.’ Whether or not you agree with the concept of a universal citizen’s income or the reform of employment law, these concepts are indicative of the kinds of discussions that Oxford researchers are increasingly leading. ‘I think the University needs to be asking these kinds of Aristotelian questions about whose interests are being met, who benefits from the changes... the moral questions,’ explains Marc Thompson. ‘It’s not something we should shy away from.’ Increasingly, then, just as Thompson hoped for when he set up the Green Templeton College Future of Work Programme, Oxford academics are working with business and governments to shape the debate about the future of employment. Frey and Osborne, for instance, have published reports with Citi and Deloitte about the impact of technology on employment; Mark Graham sits on the Department for International Development’s Digital Advisory Panel; and Richard Susskind acts as an IT Adviser to the Lord Chief Justice of England and Wales. What remains, of course, is for policymakers, lawyers and industry officials to take the questions and suggestions raised by academics on board, then work out how best to turn technological advance to everyone’s favour. ‘These possibilities afforded by technology, automation and commodification of labour... they can all be shaped by policy, organisational change and simply choosing to do things differently,’ muses Thompson. 
‘There are some important choices to be made about how we make use of them.’ Technology will make many jobs redundant, others easier, and create at least some new ones along the way. Keynes’ prediction of a fifteen-hour working week may even come true. But while humans are in charge, we can still choose for there to be some work that’s performed by non-robotic hands. ‘It would be very easy for there to be an automated pub where drinks are served from vending machines,’ concludes Mark Graham. ‘But nobody wants that. Because it would be depressing.’ Written by Jamie Condliffe, a science and technology writer based in London. He tweets @jme_c. In keeping with one of the themes of the article, we used 99designs to find an illustrator and worked with slouise. Follow us on Medium; we’ll be publishing more articles soon that look at topics such as medical trials, developments in healthcare and more. Produced by Christopher Eddie, Digital Communications Office, University of Oxford. Oxford is one of the oldest universities in the world. We aim to lead the world in research and education. Contact: digicomms@admin.ox.ac.uk " Maciej Lipiec,766,8,https://medium.com/k2-product-design/the-future-of-digital-banking-236ad65e4c76?source=tag_archive---------5----------------,The Future of Digital Banking – K2 Product Design – Medium,"Our solution is based on three pillars: In the old days, the user interface of a bank was the bank teller at the branch. From today’s perspective that was inconvenient and time-consuming, but the bank had a human face. Now we interact with our banks by clicking on links, menus, and buttons, and by filling out forms. But banking apps are often hard to use, overly complex and ugly. A lack of true customer-centricity and technological debt on the back-end side of things make the banking experience frustrating. How can we make digital banking easier, simpler, more personal and more human? By giving it a new face: that of a robot! Meet BankBot. It is the new digital bank teller, personal assistant and financial advisor. When you sign in to your K2 Bank account, BankBot will greet you and ask for orders. The main interface of K2 Bank is instantly familiar if you have ever used Slack (over two million people use it in the office every day), Facebook Messenger, an SMS app, or IRC (then you’re really old school!). It’s a never-ending stream holding the history of your communications from the bottom (recent) to the top (oldest) of the screen. You type your command or question, and BankBot answers. BankBot understands natural language, but it pays special attention to keywords that trigger actions, like a new transfer, a search in your history, or a credit card cancellation. Just type in “Send 100 EUR to Anna” and BankBot will search its database for possible recipients matching “Anna” and let you choose the one you mean. Or you can add a new recipient. Then BankBot will send a confirmation code to your cell phone and ask you to type it in, and it’s done. You don’t need to click or move your hands from the keyboard. Of course this is the easiest scenario (similar to sending money via SquareCash or SnapCash), but almost every operation can be completed that way. 
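To make this keyword-triggered flow concrete, here is a minimal sketch of how a command like “Send 100 EUR to Anna” could be parsed into a structured intent. It is purely illustrative: the regular expression, the recipient list and the function name are invented for the example and are not taken from BankBot.

```python
import re

# A toy list of known recipients; a real bot would query the bank's database.
KNOWN_RECIPIENTS = ["Anna Kowalska", "Anna Nowak", "Jan Kowalski"]

# Matches commands like "Send 100 EUR to Anna" (amount, currency, recipient).
TRANSFER_PATTERN = re.compile(
    r"send\s+(?P<amount>\d+(?:\.\d{1,2})?)\s+(?P<currency>[A-Z]{3})\s+to\s+(?P<name>.+)",
    re.IGNORECASE,
)

def parse_command(message: str):
    """Return a structured transfer intent, or None if no keyword matched."""
    match = TRANSFER_PATTERN.search(message.strip())
    if not match:
        return None
    name = match.group("name").strip()
    # Crude lookup: every stored recipient whose name contains the typed name.
    candidates = [r for r in KNOWN_RECIPIENTS if name.lower() in r.lower()]
    return {
        "intent": "transfer",
        "amount": float(match.group("amount")),
        "currency": match.group("currency").upper(),
        "candidates": candidates,  # the bot would ask the user to pick one
    }

print(parse_command("Send 100 EUR to Anna"))
# {'intent': 'transfer', 'amount': 100.0, 'currency': 'EUR',
#  'candidates': ['Anna Kowalska', 'Anna Nowak']}
```

A production bot would of course layer proper natural-language understanding, entity resolution against the bank's systems and a confirmation step (such as the SMS code described above) on top of this kind of matching.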
Typing a recipient’s name will show you recent transactions with her from your account history and option for a new payment. Typing “USD” will show you currency exchange rate. If you need help type “help”. If you need to contact human staff at the bank type “human” and you can chat with real person from customer service instead of a bot. Or type „concierge” if you’re a Private Banking client. There is also a way to access features using the Hamburger menu at the bottom— it opens a list of options, just like typing “/” (slash) in Slack. Personal Finance Managers (PFMs) for controlling home budget are popular additions to banking systems. But they are complicated, often hidden deep in the nested menus, and they need a lot of user’s attention. Do people really use them? Steven Walker of Forrester Research has written: BankBot can provide just that. You can ask “Expenses this month”, or “Car expenses”, and it will show you a simple chart with relevant information. This is “pull” mechanism, but BankBot can also be proactive, pushing important information to the user. It can warn you that you are close to exceeding your monthly budget. It can remind you about regular payments you usually make each month. It can remind you to pay off your credit card. Or pay your tax. It can suggest better options to save or invest your money, and show you how much more you can earn. It can offer you a loan, when you probably need it. Or offer travel insurance, when he knows you’ve just bought plane tickets. Or up-sell you a better account or credit card, when it will notice that you’ve got a pay rise. Or it can alert you when you should do something with your stocks portfolio. Chat banking is nice on the desktop, but it’s even more effective on mobile — type a few words and it’s done, just like sending an SMS. Or you can talk to BankBot (speech2text). Authentication can be provided by fingerprint sensor. You can receive important alerts as push notifications on your phone or smartwatch, and immediately take action (or dismiss). You can even get discount on your health insurance based on physical activity data from your fitness band or Apple Watch. BankBot can also live inside smart devices like the Amazon Echo, which provides its own API for developers — smart home and smart banking mixed together. Or inside the Facebook Messenger chat. The second Payment Services Directive is to be transposed into national regulations across the European Union from 2016. Its goal is to open the banking market. PSD2 will force banks to provide access via APIs to their customer accounts and provide account information to third party service providers if the account holder wishes to do so. This is called „Access to the Account” (XS2A) and it’s not optional, banks will have to evolve as third parties enter their space. PSD2 defines traditional financial institutions (banks) as “Account Servicing Payment Service Providers” (AS PSP), and new players as “Account Information Service Providers” (AISP) or “Payment Initiation Service Providers” (PISP). Both PISPs and AISPs will have to register with the “competent authority” in their home Member State for security reasons. What are the implications of this for our system? The quality of banking user interfaces will be extremely important, because bank’s clients could choose to manage their account from third party provider app with better UX or functionality, cutting themselves from any direct communication with their bank. 
In this case the bank will be reduced to a „dumb pipe” in the value chain. But fighting this by providing to the third parties only the minimum APIs required may be a bad strategy for banks. We think they should be more open, actively partnering with other financial institutions, retailers, merchants and startups. We imagine K2 Bank solution providing an AppStore based on its APIs. Users will be able to give permission to third party service providers in a way you allow applications to access your Facebook or Twitter account today. You will be able to buy stuff at your authorized retailer without logging into your bank (or without visiting the retailer site, but from yours bank app). There is no need to provide credit card number, probably even shipping address or any data. The bank can automatically offer you a purchase by installments. Or it can give you a discount, because of your history of frequent past transactions online and offline with this retailer (there will be no need for customer loyalty cards anymore). The bank can become an advertising channel for the retailers too, offering personalized promotions for its customers. This should be opt-out, but if your cell-phone contract is ending, and BankBot messages you with a really great offer for a plan with a cheap newest iPhone, and you can buy it instantly with one click, would you mind? By building the thriving ecosystems banks and third parties can both win. And we hope customers will too. If you want to know more about K2 Bank solution, it’s design, technology behind the BankBot, and possibilities of implementation, don’t hesitate to contact us. Of course conversational interfaces like BankBot can be used not only in banking, but also insurance, online commerce, travel, healthcare and many other industries. Please write to Maciej Lipiec, K2’s User Experience Director, at maciej.lipiec@k2.pl You can read more about K2 Bank in this article at Chatbots Magazine: Also please check out our project on Behance. K2 Internet is a leading digital product design and communications agency in Poland. We develop digital services, apps and websites with a strong focus on user experience. We have a long-time experience partnering with financial institutions — in the last 10 years we helped to envision, design and develop over 10 transactional systems for the biggest banks in Poland. Stanusch Technologies is K2 Bank’s technology provider for BankBot. The company is involved in research and development of the use of artificial intelligence in business. It carry out projects related to natural language processing and semantic information retrieval. It has become a world leader in the number of carried out projects of virtual advisors/chatbots. Thank you! If you enjoyed reading this please 👏👏👏 and share! :) From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Product Design Director @ K2. K2 Internet is a leading digital product design and communications agency in Poland. " Camron Godbout,341,10,https://hackernoon.com/tensorflow-in-a-nutshell-part-three-all-the-models-be1465993930?source=tag_archive---------7----------------,TensorFlow in a Nutshell — Part Three: All the Models,"Make sure to check out the other articles here. In this installment we will be going over all the abstracted models that are currently available in TensorFlow and describe use cases for that particular model as well as simple sample code. Full sources of working examples are in the TensorFlow In a Nutshell repo. 
Use Cases: Language Modeling, Machine translation, Word embedding, Text processing. Since the advent of Long Short-Term Memory and Gated Recurrent Units, Recurrent Neural Networks have made leaps and bounds above other models in natural language processing. They can be fed vectors representing characters and be trained to generate new sentences based on the training set. The merit of this model is that it keeps the context of the sentence and derives the meaning that “cat sat on the mat” means the cat is on the mat. Since the creation of TensorFlow, writing these networks has become increasingly simple. There are even hidden features, covered by Denny Britz here, that make writing RNNs even simpler; a quick working example is included in the TensorFlow In a Nutshell repo mentioned above. Use Cases: Image processing, Facial recognition, Computer Vision. Convolutional Neural Networks are unique because they are designed with the assumption that the input will be an image. CNNs apply a sliding window function to a matrix. The window is called a kernel, and it slides across the image creating a convolved feature. Creating a convolved feature allows for edge detection, which in turn lets a network pick out objects in pictures; the kernel that produces such a feature is simply a small matrix of weights. A sample of code that identifies handwritten digits from the MNIST dataset is likewise in the repo. Use Cases: Classification and Regression. These networks consist of perceptrons arranged in layers that take inputs and pass information on to the next layer. The last layer in the network produces the output. There are no connections between nodes within the same layer. A layer that has neither the original input nor the final output is called a hidden layer. The goal of this network, as with other supervised neural networks trained by backpropagation, is to map inputs to the desired outputs. These are some of the simplest effective neural networks for classification and regression problems. A feed-forward network that classifies handwritten digits is just as easy to create. Use Cases: Classification and Regression. Linear models take X values and produce a line of best fit used for classification and regression of Y values. For example, if you have a list of house sizes and their prices in a neighborhood, you can predict the price of a house given its size using a linear model. One thing to note is that linear models can be used with multiple X features. For example, in the housing case we can create a linear model given house sizes, number of rooms, number of bathrooms and prices, and then predict the price of a house given its size, number of rooms and number of bathrooms. Use Cases: Currently only Binary Classification. The general idea behind an SVM is that there is an optimal hyperplane for linearly separable patterns. For data that is not linearly separable, we can use a kernel function to transform the original data into a new space. SVMs maximize the margin around the separating hyperplane. They work extremely well in high-dimensional spaces and are still effective if the number of dimensions is greater than the number of samples. Use Cases: Recommendation systems, Classification and Regression. Deep and Wide models were covered in greater detail in part two, so we won’t get too heavy here. A Wide and Deep Network combines a linear model with a feed-forward neural net so that our predictions have both memorization and generalization. This type of model can be used for classification and regression problems, and it allows for less feature engineering while giving relatively accurate predictions. Thus we get the best of both worlds. 
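To make the wide-and-deep idea concrete, here is a minimal sketch of such a combined model, assuming the TensorFlow 1.x estimator API (tf.estimator.DNNLinearCombinedClassifier). The feature names and synthetic data are made up purely for illustration; this is not the article's original snippet.

```python
import numpy as np
import tensorflow as tf

# Toy data: one numeric feature and one categorical feature, binary label.
x = {
    "age": np.random.randint(18, 70, size=1000).astype(np.float32),
    "occupation": np.random.choice(["tech", "edu", "health"], size=1000),
}
y = np.random.randint(0, 2, size=1000)

age = tf.feature_column.numeric_column("age")
occupation = tf.feature_column.categorical_column_with_vocabulary_list(
    "occupation", ["tech", "edu", "health"])

model = tf.estimator.DNNLinearCombinedClassifier(
    # Wide part: memorization over the sparse/categorical feature.
    linear_feature_columns=[occupation],
    # Deep part: generalization over dense features and embeddings.
    dnn_feature_columns=[age, tf.feature_column.embedding_column(occupation, 4)],
    dnn_hidden_units=[64, 32],
)

train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x=x, y=y, batch_size=32, num_epochs=None, shuffle=True)
model.train(input_fn=train_input_fn, steps=200)
```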
A full code snippet for this kind of model lives in part two’s GitHub repo. Use Cases: Classification and Regression. A Random Forest model builds many different classification trees, and each tree votes for a class. The forest chooses the classification with the most votes. Random Forests do not tend to overfit: you can grow as many trees as you want, and they are relatively fast. Give it a try on the iris data with the snippet in the repo. Use Cases: Classification and Regression. In the contrib folder of TensorFlow there is a library called BayesFlow. BayesFlow has no documentation except for an example of the REINFORCE algorithm, which was proposed in a paper by Ronald Williams. This network, which tries to solve an immediate reinforcement learning task, adjusts its weights after receiving the reinforcement value at each trial. At the end of each trial, each weight is incremented by a learning-rate factor multiplied by the reinforcement value minus a baseline, multiplied by the characteristic eligibility. Williams’s paper also discusses the use of backpropagation to train the REINFORCE network. Use Cases: Sequential Data. CRFs are conditional probability distributions that factorize according to an undirected model. They predict a label for a single sample while keeping the context of neighboring samples. CRFs are similar to Hidden Markov Models. They are often used for image segmentation and object recognition, as well as shallow parsing, named entity recognition and gene finding. Ever since TensorFlow was released, the community surrounding the project has been adding more packages, examples and cases for using this amazing library. Even at the time of writing this article, more models and sample code are being written. It is amazing to see how much TensorFlow has grown in these past few months. The ease of use and diversity of the package are increasing over time and don’t seem to be slowing down anytime soon. Co-founder & CTO of Apteo: Researching machine learning techniques to improve investing. Come join us! how hackers start their afternoons. " Dominik Felix,286,5,https://chatbotsmagazine.com/how-to-create-a-chatbot-without-coding-a-single-line-e716840c7245?source=tag_archive---------8----------------,How to Create a Chatbot Without Coding a Single Line,"Chatbots are ready to succeed. If you think you have to hack for days or even weeks to create a chatbot, you might be wrong. You don’t need any coding skills. Immediately after big players like Facebook Messenger and Skype opened their platforms to programmers, many tools emerged. With this article I want to give you an introduction to mockups and an overview of different tools for building your first chatbot. Do you have an idea? Do you want to show your use case? It’s definitely worth mocking up your story beforehand. First, you may find some bugs in your concept. Moreover, you will be able to show your concept to people who weren’t involved, in the spirit of “fake it ’til you make it”. It’s very intuitive storytelling: just insert what the user says and how the bot responds. Using the settings option, you can change the smartphone model, set the number of fans, and choose a profile picture, a page category and a welcome message. Additional features are buttons, images and quick replies. The whole story plays back like a movie when you push the play button. It can be shared with one click, and on the paid plan you can save the file as an mp4. Each of the tools supports different platforms. 
Therefore, please keep in mind that it’s important to choose your platform wisely. Based on the huge range most of the tools make use of Facebook Messenger. Chatfuel is focused on Facebook Messenger. You don’t need any coding skills to get started. It’s simple to create different logic blocks and link them to respective triggers. It offers great plugins e.g. human take-over and a minimalistic AI. In case you were recently starting with bots, I can recommend you this service. Motion provides SMS, Email, web-chat, Facebook Messenger and Slack. Furthermore, it’s possible to link to (other) APIs and hook back to motion. Thus, it operates as a hub. The conversation is built with flowcharts and based on connectors and prepared modules. It just takes a few minutes to get familiar with the procedure. Founder/CEO of Motion AI David Nelson’s “Chatbots Made Easy” api.ai is a great platform for developing chatbots. It has AI support and an intuitive interface. It requires only one click to assemble i.e. small-talk or weather features. On the one hand, it’s possible to run the bot exclusively on their servers. On the other, you can download a nodejs sample code to execute it on your infrastructure. To sum up, API.AI is an advanced service, being the reason why it’s more complicated to build a bot using this tool. Unsurprisingly, it got bought by Google a few days ago. Featured CBM: API.AI “Small Talk” is Now Open! Why is it a Big Deal? Flow XO offers a graphical interface to build so-called flows which define how your bot will operate to received messages or audio. It has a huge list of integrations. As a consequence, it’s more complex than Chatfuel, but also a lot more flexible. Pretty amazing is their support on Messenger, Slack, SMS and Telegram. They’ve an interesting approach to build chatbots. It guides you through 4 steps: design, develop, launch and grow. First, you’ve to design the content: messages, persistent menus, welcome messages and some more. As step 2, it wants you to link messages to triggers and setup curious modules like ‘Offer Human Help’. The launch step leads you through the review process, while the final step focuses on customer retention i.e. schedule messages, user lists, etc. Manychat allows broadcast content from RSS feeds. Additionally, it’s possible to link to yahoo pipelines and broadcast everything you want. It supports scheduled messages, auto posting from RSS, Facebook, Twitter, YouTube and has a basic mechanism to send specific answers to specific keywords. Watch their pitch to get a better understanding. MindIQ is a DIY Bot Builder platform for businesses focused on Facebook Messenger. You don’t need any coding skills and they make it dead simple for businesses to build bots. They follow a template approach. Currently, the templates available are media, commerce, and food tech. They also provide tools to link your business tools like Mailchimp to your chatbot. There are many tools on the market. Every tool solves other problems and each of them uses a different approach for how to design user interaction. I really like the simplicity of Chatfuel and the 4-step-process of Botsify. Since all of these tools are quite new, I’m super excited and looking forward to seeing the direction that will be pursued and developed. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. BotSpot Vienna, Agentur Volk, Chatbot Ecosystem, Botstack Framework Chatbots, AI, NLP, Facebook Messenger, Slack, Telegram, and more. 
" Greg Gascon,368,6,https://medium.com/startup-grind/how-invisible-interfaces-are-going-to-transform-the-way-we-interact-with-computers-39ef77a8a982?source=tag_archive---------9----------------,How Invisible Interfaces are going to transform the way we interact with computers,"In the mid-nineties, a computer scientist at Xerox PARC theorized the concept of the Internet of Things, albeit with a different name, far before anyone else had and even further still before it had become possible. Even though today we call it by that name, Ubiquitous Computing — as it was then coined by Mark Weiser — imagined a world wherein cheap and ubiquitous connected computing would radically alter the way we use and interact with computers. The idea was ahead of its time. In the world of ubiquitous computing, connected devices would become cheap and, thereby, would exist everywhere. Importantly, these devices would as a result cease to become special or unique — they would become invisible. As we near this utopian world filled with computers, our relationship with them inexorably will change. Each of us will come to interact with dozens of separate devices on a daily basis. As such, we will need to develop interfaces in a way so as not to distract us, as is currently done, but in a way in which to empower us. Or, how Weiser put it, we will need to adopt the concepts of “Calm Technology”. On the face of it, ubiquitous computing is just that, a reality in which computers are everywhere. Of course, with trends relating to IoT, we are nearing this, but we are not there yet. One of the most important implications to come from ubiquitous computing, for example, will be the changes it will make on how we perceive and interact with computers. For instance, think of the electric motor: an old technology that is ubiquitous in the present. Today, there could be dozens of them in a single car. However, when we hit a button to roll down the windows, we don’t think at all about the motor pulling the window down. We simply think about the action of making the window go down. The electric motor is so mundane and ubiquitous in our lives that we don’t even think about it when using it. It is invisible. It is this sort of invisibility that allows the user to take full control of their interactions with a given piece of technology. When using a piece of technology that has become invisible, the user thinks of using it in terms of end goals, rather than getting bogged down in the technology itself. The user doesn’t have to worry how it is going to work, they just make it happen. In another example, Weiser simply states a good pencil “stays out of the way of the writing”. Now, even though technology surrounds us today, we aren’t at this point yet. Gadgets and devices are still special to us in a distracting way. We still not only still marvel at new technology, we are told to by whomever is producing it. But why does this matter? The best way to see how ubiquitous computing will impact us is to examine the way we engineer and interact with the apps that exist today. When creating a web app, for instance, you try to guide or manipulate the user into using your tool as much as possible. When you create a drip marketing email campaign for it, in most cases, you aren’t creating it so that the user needs to use your tool less. You are creating it so they can spend more time and use all of its features. That is to say, the goal isn’t foremost and necessarily to save the user time. 
Furthermore, there is no question asked as to whether the user aught to spend more time using whatever particular app is being optimized. Within a social media website, each user is given a piece of “social property”. A social media platform imbues each social property with a value system — think of the concept of likes, comments or shares — as incentive to spend time on the site. Each user interaction with a social property, whether it be a photo or a comment that is written, is then logged and recorded, so they can easily be rewarded for the time invested. Some social apps, such as LinkedIn, will have us hooked for something as simple as a pageview of our profiles. These actions are further incentivized through the use of gamification. Apps send intrusive notifications, giving you some information about what they are about, but not everything. And this is crucial. Not knowing what is in the notification entices us to open it even further. It goes without saying, this is important for increasing the amount of screen time we give the app. For, if we saw everything in the notification, there would be no point in opening the app. It makes waking up every morning feel like opening a bunch of small presents. And, while it’s a stretch to say that developers are acting nefariously to steal our time, those building our web services and tools should construct them with respect to the user’s guilelessness. Doing so requires adopting principles of invisible or calm technology. Contradiction aside, the most accessible way we can get a glimpse into a future dominated by invisible interfaces is the movie “Her”. Although not the focus of the film, “Her” showcases a future wherein inputs given to devices are done so largely through voice commands. Yes, there are still smartphones, but the majority of interactions take place by simply talking to a given device using natural language. Theodore is able to interact with technology in a manner that is completely at hand. He can ask any sort of question or create any sort of demand without getting bogged down in how the device works. Furthermore, the technology never tries to whisk his attention away from anything. The technology is always there, but it is only in the periphery. According to Weiser, this is one of the key principles of designing calm technology. The device in question should never try to distract or pry the user away from what they are trying to accomplish. Yet, it must always be ready to accept user input. It is calming in the exact opposite way that receiving group chat notifications on your phone is not. We can see this principle of design, in part, at play in the new Apple AirPods. Even though they have yet to be released, they promise to let us interact with the internet without ever needing to look down at our phones. And they are aware of their environment too. They know such things like if they are in an ear or not, and, if they are not, they know to stop playing sound. It’s these small, micro-automations that will further make technology invisible and allow us to focus on whatever it is that we want from the technology and not worry about having to configure it. Other, more simple, examples include the auto-brightness on your phone or its fingerprint scanner. They simply work without any sort of configuration or notification about what they are doing. And more technologies like this are coming. 
There are, today, even advocacy groups such as Time Well Spent that try to spread awareness about how interfaces and apps can hijack the ways our brains work. Even more promising is that there are companies that are following suit in these designs principles. For instance, the upcoming Moment smartwatch is a device which interfaces with the user largely through touch feedback, instead of relying on the screen. All that’s needed now? Better speech recognition. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Tech Columnist // Apps Script Dev // Social Media Automator // SEO Specialist. Read more at https://www.gregorygascon.com The life, work, and tactics of entrepreneurs around the world - by founders, for founders. Welcoming submissions on technology trends, product design, growth strategies, and venture investing. " Dhruv Parthasarathy,4.3K,12,https://blog.athelas.com/a-brief-history-of-cnns-in-image-segmentation-from-r-cnn-to-mask-r-cnn-34ea83205de4?source=tag_archive---------0----------------,A Brief History of CNNs in Image Segmentation: From R-CNN to Mask R-CNN,"At Athelas, we use Convolutional Neural Networks(CNNs) for a lot more than just classification! In this post, we’ll see how CNNs can be used, with great results, in image instance segmentation. Ever since Alex Krizhevsky, Geoff Hinton, and Ilya Sutskever won ImageNet in 2012, Convolutional Neural Networks(CNNs) have become the gold standard for image classification. In fact, since then, CNNs have improved to the point where they now outperform humans on the ImageNet challenge! While these results are impressive, image classification is far simpler than the complexity and diversity of true human visual understanding. In classification, there’s generally an image with a single object as the focus and the task is to say what that image is (see above). But when we look at the world around us, we carry out far more complex tasks. We see complicated sights with multiple overlapping objects, and different backgrounds and we not only classify these different objects but also identify their boundaries, differences, and relations to one another! Can CNNs help us with such complex tasks? Namely, given a more complicated image, can we use CNNs to identify the different objects in the image, and their boundaries? As has been shown by Ross Girshick and his peers over the last few years, the answer is conclusively yes. Through this post, we’ll cover the intuition behind some of the main techniques used in object detection and segmentation and see how they’ve evolved from one implementation to the next. In particular, we’ll cover R-CNN (Regional CNN), the original application of CNNs to this problem, along with its descendants Fast R-CNN, and Faster R-CNN. Finally, we’ll cover Mask R-CNN, a paper released recently by Facebook Research that extends such object detection techniques to provide pixel level segmentation. Here are the papers referenced in this post: Inspired by the research of Hinton’s lab at the University of Toronto, a small team at UC Berkeley, led by Professor Jitendra Malik, asked themselves what today seems like an inevitable question: Object detection is the task of finding the different objects in an image and classifying them (as seen in the image above). 
The team, comprised of Ross Girshick (a name we’ll see again), Jeff Donahue, and Trevor Darrel found that this problem can be solved with Krizhevsky’s results by testing on the PASCAL VOC Challenge, a popular object detection challenge akin to ImageNet. They write, Let’s now take a moment to understand how their architecture, Regions With CNNs (R-CNN) works. Understanding R-CNN The goal of R-CNN is to take in an image, and correctly identify where the main objects (via a bounding box) in the image. But how do we find out where these bounding boxes are? R-CNN does what we might intuitively do as well - propose a bunch of boxes in the image and see if any of them actually correspond to an object. R-CNN creates these bounding boxes, or region proposals, using a process called Selective Search which you can read about here. At a high level, Selective Search (shown in the image above) looks at the image through windows of different sizes, and for each size tries to group together adjacent pixels by texture, color, or intensity to identify objects. Once the proposals are created, R-CNN warps the region to a standard square size and passes it through to a modified version of AlexNet (the winning submission to ImageNet 2012 that inspired R-CNN), as shown above. On the final layer of the CNN, R-CNN adds a Support Vector Machine (SVM) that simply classifies whether this is an object, and if so what object. This is step 4 in the image above. Improving the Bounding Boxes Now, having found the object in the box, can we tighten the box to fit the true dimensions of the object? We can, and this is the final step of R-CNN. R-CNN runs a simple linear regression on the region proposal to generate tighter bounding box coordinates to get our final result. Here are the inputs and outputs of this regression model: So, to summarize, R-CNN is just the following steps: R-CNN works really well, but is really quite slow for a few simple reasons: In 2015, Ross Girshick, the first author of R-CNN, solved both these problems, leading to the second algorithm in our short history - Fast R-CNN. Let’s now go over its main insights. Fast R-CNN Insight 1: RoI (Region of Interest) Pooling For the forward pass of the CNN, Girshick realized that for each image, a lot of proposed regions for the image invariably overlapped causing us to run the same CNN computation again and again (~2000 times!). His insight was simple — Why not run the CNN just once per image and then find a way to share that computation across the ~2000 proposals? This is exactly what Fast R-CNN does using a technique known as RoIPool (Region of Interest Pooling). At its core, RoIPool shares the forward pass of a CNN for an image across its subregions. In the image above, notice how the CNN features for each region are obtained by selecting a corresponding region from the CNN’s feature map. Then, the features in each region are pooled (usually using max pooling). So all it takes us is one pass of the original image as opposed to ~2000! Fast R-CNN Insight 2: Combine All Models into One Network The second insight of Fast R-CNN is to jointly train the CNN, classifier, and bounding box regressor in a single model. Where earlier we had different models to extract image features (CNN), classify (SVM), and tighten bounding boxes (regressor), Fast R-CNN instead used a single network to compute all three. You can see how this was done in the image above. Fast R-CNN replaced the SVM classifier with a softmax layer on top of the CNN to output a classification. 
It also added a linear regression layer parallel to the softmax layer to output bounding box coordinates. In this way, all the outputs needed came from one single network! Here are the inputs and outputs to this overall model: Even with all these advancements, there was still one remaining bottleneck in the Fast R-CNN process — the region proposer. As we saw, the very first step to detecting the locations of objects is generating a bunch of potential bounding boxes or regions of interest to test. In Fast R-CNN, these proposals were created using Selective Search, a fairly slow process that was found to be the bottleneck of the overall process. In the middle 2015, a team at Microsoft Research composed of Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, found a way to make the region proposal step almost cost free through an architecture they (creatively) named Faster R-CNN. The insight of Faster R-CNN was that region proposals depended on features of the image that were already calculated with the forward pass of the CNN (first step of classification). So why not reuse those same CNN results for region proposals instead of running a separate selective search algorithm? Indeed, this is just what the Faster R-CNN team achieved. In the image above, you can see how a single CNN is used to both carry out region proposals and classification. This way, only one CNN needs to be trained and we get region proposals almost for free! The authors write: Here are the inputs and outputs of their model: How the Regions are Generated Let’s take a moment to see how Faster R-CNN generates these region proposals from CNN features. Faster R-CNN adds a Fully Convolutional Network on top of the features of the CNN creating what’s known as the Region Proposal Network. The Region Proposal Network works by passing a sliding window over the CNN feature map and at each window, outputting k potential bounding boxes and scores for how good each of those boxes is expected to be. What do these k boxes represent? Intuitively, we know that objects in an image should fit certain common aspect ratios and sizes. For instance, we know that we want some rectangular boxes that resemble the shapes of humans. Likewise, we know we won’t see many boxes that are very very thin. In such a way, we create k such common aspect ratios we call anchor boxes. For each such anchor box, we output one bounding box and score per position in the image. With these anchor boxes in mind, let’s take a look at the inputs and outputs to this Region Proposal Network: We then pass each such bounding box that is likely to be an object into Fast R-CNN to generate a classification and tightened bounding boxes. So far, we’ve seen how we’ve been able to use CNN features in many interesting ways to effectively locate different objects in an image with bounding boxes. Can we extend such techniques to go one step further and locate exact pixels of each object instead of just bounding boxes? This problem, known as image segmentation, is what Kaiming He and a team of researchers, including Girshick, explored at Facebook AI using an architecture known as Mask R-CNN. Much like Fast R-CNN, and Faster R-CNN, Mask R-CNN’s underlying intuition is straight forward. Given that Faster R-CNN works so well for object detection, could we extend it to also carry out pixel level segmentation? Mask R-CNN does this by adding a branch to Faster R-CNN that outputs a binary mask that says whether or not a given pixel is part of an object. 
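In short, the RPN takes the CNN feature map as its input and, for each of the k anchors at each position, outputs a proposed bounding box plus a score for how likely that box is to contain an object. As a rough illustration of where those k boxes per position come from, here is a toy sketch of anchor generation; the stride, scales and aspect ratios are illustrative values, not those of any particular implementation.

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride=16,
                     scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Return a (feat_h * feat_w * k, 4) array of boxes as (x1, y1, x2, y2)."""
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            # Centre of this feature-map cell in original-image coordinates.
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for scale in scales:
                for ratio in ratios:
                    # Keep the anchor's area near scale**2 while varying its shape.
                    w = scale * np.sqrt(ratio)
                    h = scale / np.sqrt(ratio)
                    anchors.append([cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2])
    return np.array(anchors)

boxes = generate_anchors(feat_h=38, feat_w=50)
print(boxes.shape)  # (17100, 4): 38 * 50 positions, k = 9 anchors per position
```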
The branch (in white in the above image), as before, is just a Fully Convolutional Network on top of a CNN based feature map. Here are its inputs and outputs: But the Mask R-CNN authors had to make one small adjustment to make this pipeline work as expected. RoiAlign - Realigning RoIPool to be More Accurate When run without modifications on the original Faster R-CNN architecture, the Mask R-CNN authors realized that the regions of the feature map selected by RoIPool were slightly misaligned from the regions of the original image. Since image segmentation requires pixel level specificity, unlike bounding boxes, this naturally led to inaccuracies. The authors were able to solve this problem by cleverly adjusting RoIPool to be more precisely aligned using a method known as RoIAlign. Imagine we have an image of size 128x128 and a feature map of size 25x25. Let’s imagine we want features the region corresponding to the top-left 15x15 pixels in the original image (see above). How might we select these pixels from the feature map? We know each pixel in the original image corresponds to ~ 25/128 pixels in the feature map. To select 15 pixels from the original image, we just select 15 * 25/128 ~= 2.93 pixels. In RoIPool, we would round this down and select 2 pixels causing a slight misalignment. However, in RoIAlign, we avoid such rounding. Instead, we use bilinear interpolation to get a precise idea of what would be at pixel 2.93. This, at a high level, is what allows us to avoid the misalignments caused by RoIPool. Once these masks are generated, Mask R-CNN combines them with the classifications and bounding boxes from Faster R-CNN to generate such wonderfully precise segmentations: If you’re interested in trying out these algorithms yourselves, here are relevant repositories: Faster R-CNN Mask R-CNN In just 3 years, we’ve seen how the research community has progressed from Krizhevsky et. al’s original result to R-CNN, and finally all the way to such powerful results as Mask R-CNN. Seen in isolation, results like Mask R-CNN seem like incredible leaps of genius that would be unapproachable. Yet, through this post, I hope you’ve seen how such advancements are really the sum of intuitive, incremental improvements through years of hard work and collaboration. Each of the ideas proposed by R-CNN, Fast R-CNN, Faster R-CNN, and finally Mask R-CNN were not necessarily quantum leaps, yet their sum products have led to really remarkable results that bring us closer to a human level understanding of sight. What particularly excites me, is that the time between R-CNN and Mask R-CNN was just three years! With continued funding, focus, and support, how much further can Computer Vision improve over the next three years? If you see any errors or issues in this post, please contact me at dhruv@getathelas.com and I”ll immediately correct them! If you’re interested in applying such techniques, come join us at Athelas where we apply Computer Vision to blood diagnostics daily: Other posts we’ve written: Thanks to Bharath Ramsundar, Pranav Ramkrishnan, Tanay Tandon, and Oliver Cameron for help with this post! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. @dhruvp. VP Eng @Athelas. MIT Math and CS Undergrad ’13. MIT CS Masters ’14. Previously: Director of AI Programs @ Udacity. 
Blood Diagnostics through Deep Learning http://athelas.com " Slav Ivanov,3.9K,17,https://blog.slavv.com/the-1700-great-deep-learning-box-assembly-setup-and-benchmarks-148c5ebe6415?source=tag_archive---------1----------------,"The $1700 great Deep Learning box: Assembly, setup and benchmarks","Updated April 2018: Uses CUDA 9, cuDNN 7 and Tensorflow 1.5. After years of using a thin client in the form of increasingly thinner MacBooks, I had gotten used to it. So when I got into Deep Learning (DL), I went straight for the brand new at the time Amazon P2 cloud servers. No upfront cost, the ability to train many models simultaneously and the general coolness of having a machine learning model out there slowly teaching itself. However, as time passed, the AWS bills steadily grew larger, even as I switched to 10x cheaper Spot instances. Also, I didn’t find myself training more than one model at a time. Instead, I’d go to lunch/workout/etc. while the model was training, and come back later with a clear head to check on it. But eventually the model complexity grew and took longer to train. I’d often forget what I did differently on the model that had just completed its 2-day training. Nudged by the great experiences of the other folks on the Fast.AI Forum, I decided to settle down and to get a dedicated DL box at home. The most important reason was saving time while prototyping models — if they trained faster, the feedback time would be shorter. Thus it would be easier for my brain to connect the dots between the assumptions I had for the model and its results. Then I wanted to save money — I was using Amazon Web Services (AWS), which offered P2 instances with Nvidia K80 GPUs. Lately, the AWS bills were around $60–70/month with a tendency to get larger. Also, it is expensive to store large datasets, like ImageNet. And lastly, I haven’t had a desktop for over 10 years and wanted to see what has changed in the meantime (spoiler alert: mostly nothing). What follows are my choices, inner monologue, and gotchas: from choosing the components to benchmarking. A sensible budget for me would be about 2 years worth of my current compute spending. At $70/month for AWS, this put it at around $1700 for the whole thing. You can check out all the components used. The PC Part Picker site is also really helpful in detecting if some of the components don’t play well together. The GPU is the most crucial component in the box. It will train these deep networks fast, shortening the feedback cycle. Disclosure: The following are affiliate links, to help me pay for, well, more GPUs. The choice is between a few of Nvidia’s cards: GTX 1070, GTX 1070 Ti, GTX 1080, GTX 1080 Ti and finally the Titan X. The prices might fluctuate, especially because some GPUs are great for cryptocurrency mining (wink, 1070, wink). On performance side: GTX 1080 Ti and Titan X are similar. Roughly speaking the GTX 1080 is about 25% faster than GTX 1070. And GTX 1080 Ti is about 30% faster than GTX 1080. The new GTX 1070 Ti is very close in performance to GTX 1080. Tim Dettmers has a great article on picking a GPU for Deep Learning, which he regularly updates as new cards come on the market. Here are the things to consider when picking a GPU: Considering all of this, I picked the GTX 1080 Ti, mainly for the training speed boost. I plan to add a second 1080 Ti soonish. Even though the GPU is the MVP in deep learning, the CPU still matters. For example, data preparation is usually done on the CPU. 
The number of cores and threads per core is important if we want to parallelize all that data prep. To stay on budget, I picked a mid-range CPU, the Intel i5 7500. It's relatively cheap but good enough to not slow things down. Edit: As a few people have pointed out, "probably the biggest gotcha that is unique to DL/multi-GPU is to pay attention to the PCIe lanes supported by the CPU/motherboard" (by Andrej Karpathy). We want each GPU to have 16 PCIe lanes so it can eat data as fast as possible (16 GB/s for PCIe 3.0). This means that for two cards we need 32 PCIe lanes. However, the CPU I picked has only 16 lanes, so 2 GPUs would run in 2x8 mode (instead of 2x16). This might be a bottleneck, leading to less than ideal utilization of the graphics cards. Thus a CPU with 40 lanes is recommended. Edit 2: However, Tim Dettmers points out that having 8 lanes per card should only decrease performance by "0–10%" for two GPUs. So currently, my recommendation is: go with 16 PCIe lanes per video card unless it gets too expensive for you. Otherwise, 8 lanes should do as well. A good choice for a double-GPU machine would be an Intel Xeon processor like the E5–1620 v4 (40 PCIe lanes). Or, if you want to splurge, go for a higher-end processor like the desktop i7–6850K. Memory (RAM): It's nice to have a lot of memory if we are going to be working with rather big datasets. I got 2 sticks of 16 GB, for a total of 32 GB of RAM, and plan to buy another 32 GB later. Following Jeremy Howard's advice, I got a fast SSD to keep my OS and current data on, and then a slow spinning HDD for those huge datasets (like ImageNet). SSD: I remember when I got my first Macbook Air years ago how blown away I was by the SSD speed. To my delight, a new generation of SSD called NVMe has made its way to market in the meantime. A 480 GB MyDigitalSSD NVMe drive was a great deal. This baby copies files at gigabytes per second. HDD: 2 TB Seagate. While SSDs have been getting fast, HDDs have been getting cheap. To somebody who has used Macbooks with a 128 GB disk for the last 7 years, having this much space feels almost obscene. The one thing that I kept in mind when picking a motherboard was the ability to support two GTX 1080 Ti cards, both in the number of PCI Express lanes (the minimum is 2x8) and the physical size of 2 cards. Also, make sure it's compatible with the chosen CPU. An Asus TUF Z270 did it for me. The MSI X99A SLI PLUS should work great if you go with an Intel Xeon CPU. Rule of thumb: the power supply should provide enough juice for the CPU and the GPUs, plus 100 watts extra. The Intel i5 7500 processor uses 65W, and the GPUs (1080 Ti) need 250W each, so I got a Deepcool 750W Gold PSU (currently unavailable, EVGA 750 GQ is similar). The "Gold" here refers to the power efficiency, i.e. how much of the power consumed is wasted as heat. The case should be the same form factor as the motherboard. Also, having enough LEDs to embarrass a Burner is a bonus. A friend recommended the Thermaltake N23 case, which I promptly got. No LEDs, sadly. Here is how much I spent on all the components (your costs may vary): $700 GTX 1080 Ti + $190 CPU + $230 RAM + $230 SSD + $66 HDD + $130 Motherboard + $75 PSU + $50 Case = $1671 total. Adding tax and fees, this nicely matches my preset budget of $1700. If you don't have much experience with hardware and fear you might break something, professional assembly might be the best option. 
However, this was a great learning opportunity that I couldn't pass up (even though I've had my share of hardware-related horror stories). The first and most important step is to read the installation manuals that came with each component. This was especially important for me, as I've only done this once or twice before, and I have just the right amount of inexperience to mess things up. Installing the CPU is done before installing the motherboard in the case. Next to the processor socket there is a lever that needs to be pulled up. The processor is then placed on the base (double-check the orientation). Finally, the lever comes down to fix the CPU in place. But I had quite a bit of difficulty doing this: once the CPU was in position, the lever wouldn't go down. I actually had a more hardware-capable friend of mine walk me through the process over video. It turns out the amount of force required to get the lever locked down was more than what I was comfortable with. Next is fixing the fan on top of the CPU: the fan legs must be fully secured to the motherboard. Consider where the fan cable will go before installing. The processor I had came with thermal paste. If yours doesn't, make sure to put some paste between the CPU and the cooling unit. Also, replace the paste if you take off the fan. I put the Power Supply Unit (PSU) in before the motherboard to get the power cables snugly placed in the back side of the case. The motherboard: pretty straightforward — carefully place it and screw it in. A magnetic screwdriver was really helpful. Then connect the power cables and the case buttons and LEDs. The SSD: just slide it into the M2 slot and screw it in. Piece of cake. The memory proved quite hard to install, requiring too much effort to properly lock in. A few times I almost gave up, thinking I must be doing it wrong. Eventually one of the sticks clicked in and the other one promptly followed. At this point, I turned the computer on to make sure it worked. To my relief, it started right away! Finally, the GPU slid in effortlessly. 14 pins of power later and it was running. NB: Do not plug your monitor into the external card right away. Most probably it needs drivers to function (see below). Finally, it's complete! Now that we have the hardware in place, only the soft part remains. Out with the screwdriver, in with the keyboard. Note on dual booting: If you plan to install Windows (because, you know, for benchmarks, totally not for gaming), it would be wise to do Windows first and Linux second. I didn't, and had to reinstall Ubuntu because Windows messed up the boot partition. Livewire has a detailed article on dual booting. Most DL frameworks are designed to work on Linux first, and eventually support other operating systems. So I went for Ubuntu, my default Linux distribution. An old 2 GB USB drive was lying around and worked great for the installation. UNetbootin (OSX) or Rufus (Windows) can prepare the Linux thumb drive. The default options worked fine during the Ubuntu install. At the time of writing, Ubuntu 17.04 had just been released, so I opted for the previous version (16.04), whose quirks are much better documented online. Ubuntu Server or Desktop: The Server and Desktop editions of Ubuntu are almost identical, with the notable exception of the visual interface (called X) not being installed with Server. I installed the Desktop edition and disabled autostarting X so that the computer would boot into terminal mode. If needed, one could launch the visual desktop later by typing startx. Let's get our install up to date. 
From Jeremy Howard's excellent install-gpu script: To deep learn on our machine, we need a stack of technologies to use our GPU: Download CUDA from Nvidia, or just run the code below: Updated to specify version 9 of CUDA. Thanks to @zhanwenchen for the tip. If you need to add later versions of CUDA, click here. After CUDA has been installed, the following code will add the CUDA installation to the PATH variable: Now we can verify that CUDA has been installed successfully by running nvidia-smi. This should have installed the display driver as well. For me, nvidia-smi showed ERR as the device name, so I installed the latest Nvidia drivers (as of May 2018) to fix it: Removing CUDA/Nvidia drivers: If at any point the drivers or CUDA seem broken (as they did for me — multiple times), it might be better to start over by running: Since version 1.5, Tensorflow supports cuDNN 7, so we install that. To download cuDNN, one needs to register for a (free) developer account. After downloading, install with the following: Anaconda is a great package manager for Python. I've moved to Python 3.6, so I will be using the Anaconda 3 version: Tensorflow is the popular DL framework by Google. Installation: Validate the Tensorflow install: To make sure we have our stack running smoothly, I like to run the Tensorflow MNIST example: We should see the loss decreasing during training: Keras is a great high-level neural networks framework, an absolute pleasure to work with. Installation couldn't be easier: PyTorch is a newcomer in the world of DL frameworks, but its API is modeled on the successful Torch, which was written in Lua. PyTorch feels new and exciting, mostly great, although some things are still to be implemented. We install it by running: Jupyter is a web-based IDE for Python, which is ideal for data science tasks. It's installed with Anaconda, so we just configure and test it: Now if we open http://localhost:8888 we should see a Jupyter screen. Run Jupyter on boot: Rather than starting the notebook every time the computer is restarted, we can set it to autostart on boot. We will use crontab to do this, which we can edit by running crontab -e. Then add the following after the last line in the crontab file: I use my old trusty Macbook Air for development, so I'd like to be able to log into the DL box both from my home network and when on the go. SSH key: It's way more secure to use an SSH key to log in instead of a password. Digital Ocean has a great guide on how to set this up. SSH tunnel: If you want to access your Jupyter notebook from another computer, the recommended way is to use SSH tunneling (instead of opening the notebook to the world and protecting it with a password). Let's see how we can do this: 2. Then to connect over the SSH tunnel, run the following script on the client: To test this, open a browser and try http://localhost:8888 from the remote machine. Your Jupyter notebook should appear. Set up out-of-network access: Finally, to access the DL box from the outside world, we need 3 things: Setting up out-of-network access depends on the router/network setup, so I'm not going into details here. Now that we have everything running smoothly, let's put it to the test. We'll be comparing the newly built box to an AWS P2.xlarge instance, which is what I've used so far for DL. The tests are computer-vision related, meaning convolutional networks with a fully connected model thrown in. We time training the models on: the AWS P2 instance GPU (K80), the AWS P2 virtual CPU, the GTX 1080 Ti, and the Intel i5 7500 CPU. 
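Before timing anything, it's worth checking that Tensorflow actually sees the GPU. A quick smoke test along these lines should work on the Tensorflow 1.x stack installed above; the snippet is an illustrative check I've added, not part of the original setup script:

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# List the devices Tensorflow can see; a healthy CUDA/cuDNN install should
# show a '/device:GPU:0' entry alongside the CPU.
print([d.name for d in device_lib.list_local_devices()])

# Run a trivial op pinned to the GPU and log where it was placed.
with tf.device('/device:GPU:0'):
    a = tf.constant([1.0, 2.0, 3.0])
    b = tf.constant([4.0, 5.0, 6.0])
    c = a * b

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))  # expect [ 4. 10. 18.]
```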
Andres Hernandez points out that my comparison does not use a Tensorflow build that is optimized for these CPUs, which would have helped them perform better. Check his insightful comment for more details. MNIST is the "Hello World" of computer vision. The MNIST database consists of 70,000 handwritten digits. We run the Keras example on MNIST, which uses a Multilayer Perceptron (MLP). The MLP means that we are using only fully connected layers, not convolutions. The model is trained for 20 epochs on this dataset and achieves over 98% accuracy out of the box. We see that the GTX 1080 Ti is 2.4 times faster than the K80 on AWS P2 in training the model. This is rather surprising, as these 2 cards should have about the same performance. I believe this is because of the virtualization or underclocking of the K80 on AWS. The CPUs perform 9 times slower than the GPUs. As we will see later, this is actually a really good result for the processors. It is due to the small model, which fails to fully utilize the parallel processing power of the GPUs. Interestingly, the desktop Intel i5-7500 achieves a 2.3x speedup over the virtual CPU on Amazon. Next, a VGG net is finetuned for the Kaggle Dogs vs Cats competition, in which we need to tell apart pictures of dogs and cats. Running the model on CPUs for the same number of batches wasn't feasible, so we finetune for 390 batches (1 epoch) on the GPUs and 10 batches on the CPUs. The code used is on GitHub. The 1080 Ti is 5.5 times faster than the AWS GPU (K80). The difference in the CPUs' performance is about the same as in the previous experiment (the i5 is 2.6x faster). However, it's absolutely impractical to use CPUs for this task, as the CPUs were taking ~200x more time on this large model, which includes 16 convolutional layers and a couple of semi-wide (4096-unit) fully connected layers on top. A GAN (Generative Adversarial Network) is a way to train a model to generate images. A GAN achieves this by pitting two networks against each other: a Generator, which learns to create better and better images, and a Discriminator, which tries to tell which images are real and which are dreamt up by the Generator. The Wasserstein GAN is an improvement over the original GAN. We will use a PyTorch implementation that is very similar to the one by the WGAN author. The models are trained for 50 steps, and the loss is all over the place, which is often the case with GANs. CPUs aren't considered. The GTX 1080 Ti finishes 5.5x faster than the AWS P2 K80, which is in line with the previous results. The final benchmark is on the original Style Transfer paper (Gatys et al.), implemented in Tensorflow (code available). Style Transfer is a technique that combines the style of one image (a painting, for example) with the content of another image. Check out my previous post for more details on how Style Transfer works. The GTX 1080 Ti outperforms the AWS K80 by a factor of 4.3. This time the CPUs are 30-50 times slower than the graphics cards. The slowdown is less than on the VGG finetuning task but more than on the MNIST perceptron experiment. The model uses mostly the earlier layers of the VGG network, and I suspect this was too shallow to fully utilize the GPUs. The DL box is in the next room and a large model is training on it. Was it a wise investment? Time will tell, but it is beautiful to watch the glowing LEDs in the dark and to hear its quiet hum as models are trying to squeeze out that extra accuracy percentage point. 
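For reference, the Keras MNIST MLP benchmark described above is essentially the stock Keras example. A condensed sketch of it might look like the following; the layer sizes mirror that example, but treat this as an approximation rather than the exact benchmark code:

```python
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.utils import to_categorical

# Load and flatten the 70,000 MNIST digits (60k train / 10k test).
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

# Fully connected layers only, no convolutions.
model = Sequential([
    Dense(512, activation='relu', input_shape=(784,)),
    Dropout(0.2),
    Dense(512, activation='relu'),
    Dropout(0.2),
    Dense(10, activation='softmax'),
])
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop', metrics=['accuracy'])

# 20 epochs is enough to pass 98% test accuracy out of the box.
model.fit(x_train, y_train, batch_size=128, epochs=20,
          validation_data=(x_test, y_test))
```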
From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Entrepreneur / Hacker Machine learning, Deep learning and other types of learning. " Tyler Elliot Bettilyon,17.9K,13,https://medium.com/@TebbaVonMathenstien/are-programmers-headed-toward-another-bursting-bubble-528e30c59a0e?source=tag_archive---------2----------------,Are Programmers Headed Toward Another Bursting Bubble?,"A friend of mine recently posed a question that I’ve heard many times in varying forms and forums: “Do you think IT and some lower-level programming jobs are going to go the way of the dodo? Seems a bit like a massive job bubble that’s gonna burst. It’s my opinion that one of the only things keeping tech and lower-level computer science-related jobs “prestigious” and well-paid is ridiculous industry jargon and public ignorance about computers, which are both going to go away in the next 10 years. [...]” This question is simultaneously on point about the future of technology jobs and exemplary of some pervasive misunderstandings regarding the field of software engineering. While it’s true that there is a great deal of “ridiculous industry jargon” there are equally many genuinely difficult problems waiting to be solved by those with the right skill-set. Some software jobs are definitely going away but programmers with the right experience and knowledge will continue to be prestigious and well remunerated for many years to come; as an example look at the recent explosion of AI researcher salaries and the corresponding dearth of available talent. Staying relevant in the ever changing technology landscape can be a challenge. By looking at the technologies that are replacing programmers in the status quo we should be able to predict what jobs might disappear from the market. Additionally, to predict how salaries and demand for specific skills might change we should consider the growing body of people learning to program. As Hannah pointed out “public ignorance” about computers is keeping wages high for those who can program and the public is becoming more computer savvy each year. The fear of automation replacing jobs is neither new nor unfounded. In any field, and especially in technology, market forces drive corporations toward automation and commodification. Gartner’s Hype Cycles are one way of contextualizing this phenomenon. As time goes on, specific ideas and technologies push towards the “plateau of productivity” where they are eventually automated. Looking at history one must conclude that automation has the power to destroy specific job markets. In diverse industries ranging from crop harvesting to automobile assembly technology advances have consistently replaced and augmented human labor to reduce costs. A professor once put it this way in his compilers course, “take historical note of textile and steel industries: do you want to build machines and tools, or do you want to operate those machines?” In this metaphor the “machine” is a computer programming language. This professor was really asking: Do you want to build websites using JavaScript, or do you want to build the V8 engine that powers JavaScript? The creation of websites is being automated by WordPress (and others) today. V8 on the other hand has a growing body of competitors some of whom are solving open research questions. Languages will come and go (how many Fortran job openings are there?) but there will always be someone building the next language. 
Lucky for us, programming language implementations are written with programming languages themselves. Being a “machine operator” in software puts you on the path to being a “machine creator” in a way which was not true of the steel mill workers of the past. The growing number of languages, interpreters, and compilers shows us that every job-destroying machine also brings with it new opportunities to improve those machines, maintain those machines, and so forth. Despite the growing body of jobs which no longer exist, there has yet to be a moment in history where humanity has collectively said, “I guess there isn’t any work left for us to do.” Commodification is coming for us all, not just software engineers. Throughout history, human labor has consistently been replaced with non-humans or augmented to require fewer and less skilled humans. Self-driving cars and trucks are the flavor of the week in this grand human tradition. If the cycle of creation and automation are a fact of life, the natural question to answer next is: which jobs and industries are at risk, and which are not? AWS, Heroku, and other similar hosting platforms have forever changed the role of the System Administrator/DevOps engineer. Internet businesses used to absolutely need their own server master. Someone who was well versed in Linux; someone who could configure a server with Apache or NGINX; someone who could not only physically wire up the server, the routers, and all the other physical components, but who could also configure the routing tables and all the software required to make that server accessible on the public web. While there are definitely still people applying this skill-set professionally, AWS is making some of those skills obsolete — especially at the lower experience levels and on the physical side of things. There are very lucrative roles within Amazon (and Netflix, and Google...) for people with deep expertise in networking infrastructure, but there is much less demand at the small-to-medium business scale. “Business Intelligence” tools such as SalesForce, Tableau and SpotFire are also beginning to occupy spaces historically held by software engineers. These systems have reduced the demand for in-house Database Administrators, but they have also increased the demand for SQL as a general-purpose skill. They have decreased demand for in-house reporting technology, but increased demand for “integration engineers” who automate the flow of data from the business to the third-party software platform(s). A field that was previously dominated by Excel and Spreadsheets is increasingly being pushed towards scripting languages like Python or R, and towards SQL for data management. Some jobs have disappeared, but demand for people who can write software has seen an increase overall. Data Science is a fascinating example of commodification at a level closer to software. Scikit.learn, Tensorflow, and PyTorch are all software libraries that make it easier for people to build machine learning applications without building the algorithms from scratch. In fact, it’s possible to run a dataset through many different machine learning algorithms, with many different parameter sets for those algorithms, with little to no understanding of how those algorithms are actually implemented (it’s not necessarily wise to do this, just possible). You can bet that business intelligence companies will be trying to integrate these kinds of algorithms into their own tools over the next few years as well. 
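To make that point concrete, here is roughly what "running a dataset through an algorithm" looks like with scikit-learn; this is a generic toy example of my own, not something from the article, and the dataset and model choices are arbitrary:

```python
# With scikit-learn, trying a model on a dataset takes a handful of lines,
# with no knowledge of how the underlying algorithm is implemented.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print(model.score(X_test, y_test))  # held-out accuracy, typically ~0.97 here
```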
In many ways data science looks like web development did 5–8 years ago — a booming field where a little bit of knowledge can get you in the door due to a “skills gap”. As web development bootcamps are closing and consolidating, data science bootcamps are popping up in their place. Kaplan, who bought the original web development bootcamp (Dev Bootcamp) and started a data science bootcamp (Metis) has decided to close DevBootcamp and keep Metis running. Content management systems are among the most visible of the tools automating away the need for a software engineer. SquareSpace and WordPress are among the most popular CMS systems today. These platforms are significantly reducing the value of people with a just a little bit of front end web development skill. In fact the barriers for making a website and getting it online have come down so dramatically that people with zero programming experience are successfully launching websites every day. Those same people aren’t making deeply interactive websites that serve billions of people, but they absolutely do make websites for their own businesses that give customers the information they need. A lovely landing page with information such as how to find the establishment and how to contact them is more than enough for a local restaurant, bar, or retail store. If your business is not primarily an “internet business” it has never been easier to get a working site on the public web. As a result, the once thriving industry of web contractors who can quickly set up a simple website and get it online is becoming less lucrative. Finally, it would border on hubris to ignore the physical aspect of computers in this context. In the words of Mike Acton: “software is not the platform, hardware is the platform”. Software people would be wise to study at least a little computer architecture and electrical engineering. A big shake up in hardware, such as the arrival of consumer grade quantum computers would (will) change everything about professional software engineering. Quantum computers are still a ways off, but the growing interest in GPUs and the drive toward parallelization is an imminent shift. CPU speeds have been stagnant for several years now and in that time a seemingly unquenchable thirst for machine learning and “big data” has emerged. With more desire than ever to process large data-sets OpenMP, OpenCL, Go, CUDA, and other parallel processing languages and frameworks will continue to become mainstream. To be competitively fast in the near-term future, significant parallelization will be a requirement across the board, not just in high-performance niches like operating systems, infrastructure and video games. Websites are ubiquitous. The 2017 Stack Overflow Survey reports that about 15% of professional software engineers are working in an “Internet/Web Services” company. The Bureau of Labor Statistics expects growth in Web Development to continue much faster than average (24% between 2014 and 2024). Due to its visibility, there has been a massive focus on “solving the skills gap” in this industry. Coding bootcamps teach Web Development almost exclusively and Web Development online courses have flooded Udemy, Udacity, Coursera and similar marketplaces. The combination of increasing automation throughout the Web Development technology stack and the influx of new entry level programmers with an explicit focus on Web Development has led some to predict a slide towards a “blue collar” market for software developers. 
Some have gone further, suggesting that the push towards a blue collar market is a strategy architected by big tech firms. Others, of course, say we’re headed for another bursting bubble. Change in demand for specific technologies is not news. Languages and frameworks are always rising and falling in technology. Web Development in its current incarnation (“JS Is King”) will eventually go the way of Web Development of the early 2000’s (remember Flash?). What is new, is that a lot of people are receiving an education explicitly (and solely) in the current trendy web development frameworks. Before you decide to label yourself a “React developer” remember there were people who once identified themselves as “Flash developers”. Banking your career on a specific language, framework, or technology is a game of roulette. Of course it’s quite difficult to predict what technologies will remain relevant, but if you’re going to go all in on something, I suggest relying on The Lindy Effect and picking something like C that has already withstood the test of time. The next generation will have a level of de facto tech literacy that Generation X and even Millennials do not have. One outcome of this will be that using the next generation of CMS tools will be a given. These tools will get better and young workers will be better at using them. This combination will definitely will bring down the value of low-level IT and web development skills as eager and skilled youngsters enter the job market. High schools are catching on as well, offering computer science and programming classes — some well educated high school students will likely be entering the workforce as programming interns immediately upon graduation. Another big group of newcomers to programming are MBAs and data analysts. Job listings which were once dominated by Excel are starting to list SQL as a “nice to have” and even “requirement”. Tools such as Tableau, SpotFire, SalesForce, and other web-based metrics systems continue to replace the spreadsheet as the primary tool for report generation. If this continues more data analysts will learn to use SQL directly simply because it is easier than exporting the data into a spreadsheet. People looking to climb the ranks and out-perform their peers in these roles are taking online courses to learn about databases and statistical programming languages. With these new skills they can begin to position themselves as data scientists by learning a combination of machine learning and statistical libraries. Look at Metis’ curriculum as a prime example of this path. Finally, the number of people earning Computer Science and Software Engineering degrees continues to climb. Purdue, for example, reports that applications to their CS program have doubled over five years. Cornell reports a similar explosion of CS graduates. This trend isn’t surprising given the growth and ubiquity of software. It’s hard for young people to imagine that computers will play a smaller role in our futures, so why not study something that’s going to give you job security. A common argument in the industry nowadays is around the idea that the education you receive in a four-year Computer Science program is mostly unnecessary cruft. I have heard this argument repeatedly in the halls of bootcamps, web development shops, and online from big names in the field such as this piece by Eric Elliott. The opposition view is popular as well, with some going so far as saying “all programmers should earn a master’s degree”. 
Like Eric Elliott, I think it’s good that there are more options than ever to break into programming, and a 4 year degree might not be the best option for many. Simultaneously, I agree with William Bain that the foundational skills which apply across programming disciplines are crucial for career longevity, and that it is still hard to find that information outside of university courses. I’ve written previously about what skills I think aspiring engineers should learn as a foundation of a long career, and joined Bradfield in order to help share this knowledge. Coding schools of many shapes and sizes are becoming ubiquitous, and for good reasons. There is quite a lot you can learn about programming without getting into the minutia of Big O notation, obscure data structures, and algorithmic trivia. However, while it’s true that fresh graduates from Stanford are competing for some jobs with fresh graduates from Hack Reactor, it’s only true in one or two sub-industries. Code school and bootcamp graduates are not yet applying to work on embedded systems, cryptography/security, robotics, network infrastructure, or AI research and development. Yet these fields, like web development, are growing quickly. Some programming-related skills have already started their transition from “rare skill” to “baseline expectation”. Conversely, the engineering that goes into creating beastly engines like AWS is anything but common. The big companies driving technology forward — Amazon, Google, Facebook, Nvidia, Space-X, and so on — are typically not looking for people with a ‘basic understanding of JavaScript’. AWS serves billions of users per day. To support that kind of load an AWS infrastructure engineer needs a deep knowledge of network protocols, computer architecture, and several years of relevant experience. As with any discipline there are amateurs and artisans. These prestigious firms are solving research problems and building systems that are truly pushing against the boundaries of what is possible. Yet they still struggle to fill open roles even while basic programming skills are increasingly common. People who can write algorithms to predict changes in genetic sequences that will yield a desired result are going to be highly valuable in the future. People who can program satellites, spacecraft, and automate machinery will continue to be highly valued. These are not fields that lend themselves as readily to a “3 month intensive program” as front end web development, at least not without significant prior experience. Because computer science starts with the word “computer” it is assumed that young people will all have an innate understanding of it by 2025. Unfortunately, the ubiquity of computers has not created a new generation of people who de facto understand mathematics, computer science, network infrastructure, electrical engineering and so on. Computer literacy is not the same as the study of computation. Despite mathematics having existed since the dawn of time there is still a relatively small portion of the population with strong statistical literacy, and computer science is similarly old. Euclid invented several algorithms, one of which is used every time you make an HTTPS request; the fact that we use HTTPS every time we login to a website does not automatically imbue anyone with a knowledge of how those protocols work. 
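As a small aside, the Euclidean algorithm referred to above (it shows up inside the RSA key generation that helps underpin HTTPS) fits in a few lines of Python, which rather underlines the gap between using a protocol and understanding it; a minimal sketch:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the larger number with the
    remainder of dividing it by the smaller, until the remainder is zero."""
    while b:
        a, b = b, a % b
    return a

# RSA key generation relies on choosing exponents that are coprime
# (gcd == 1) with a value derived from two large primes.
print(gcd(1071, 462))    # 21
print(gcd(65537, 3120))  # 1, i.e. coprime
```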
More established professional fields often have a bimodal wage distribution: a relatively small number of practitioners make quite a lot of money, and the majority of them earn a good wage but do not find themselves in the top 1% of earners. The National Association for Law Placement collects data that can be used to visualize this phenomenon in stark clarity. A huge share of law graduates make between $45,00 and $65,000 — a good wage, but hardly the salary we associate with a “top professional”. We tend to think that all law graduates are on track to becoming partners at a law firm when really there are many paths: paralegal, clerk, public defender, judge, legal services for businesses, contract writing, and so on. Computer science graduates also have many options for their professional practice, from web development to embedded systems. As a basic level of programming literacy continues to become an expectation, rather than a “nice to have”, I suspect a similar distribution will emerge in programming jobs. While there will always be a cohort of programmers making a lot of money to push on the edges of technology, there will be a growing body of middle-class programmers powering the new computer-centric economy. The average salary for web developers will surely decrease over time. That said, I suspect that the number of jobs for “programmers” in general will only continue to grow. As worker supply begins to meet demand, hopefully we will see a healthy boom in a variety of middle-class programming jobs. There will also continue to be a top-professional salary available for those programmers who are redefining what is possible. Regardless of which cohort of programmers you’re in, a career in technology means continuing your education throughout your life. If you want to stay in the second cohort of programmers you may want to invest in learning how to create the machines, rather than simply operate them. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. A curious human on a quest to watch the world learn. " Arvind N,9.5K,8,https://towardsdatascience.com/thoughts-after-taking-the-deeplearning-ai-courses-8568f132153?source=tag_archive---------3----------------,Thoughts after taking the Deeplearning.ai courses – Towards Data Science,"[Update — Feb 2nd 2018: When this blog post was written, only 3 courses had been released. All 5 courses in this specialization are now out. I will have a follow-up blog post soon.] Between a full time job and a toddler at home, I spend my spare time learning about the ideas in cognitive science & AI. Once in a while a great paper/video/course comes out and you’re instantly hooked. Andrew Ng’s new deeplearning.ai course is like that Shane Carruth or Rajnikanth movie that one yearns for! Naturally, as soon as the course was released on coursera, I registered and spent the past 4 evenings binge watching the lectures, working through quizzes and programming assignments. DL practitioners and ML engineers typically spend most days working at an abstract Keras or TensorFlow level. But it’s nice to take a break once in a while to get down to the nuts and bolts of learning algorithms and actually do back-propagation by hand. It is both fun and incredibly useful! Andrew Ng’s new adventure is a bottom-up approach to teaching neural networks — powerful non-linearity learning algorithms, at a beginner-mid level. 
In classic Ng style, the course is delivered through a carefully chosen curriculum, neatly timed videos and precisely positioned information nuggets. Andrew picks up from where his classic ML course left off and introduces the idea of neural networks using a single neuron(logistic regression) and slowly adding complexity — more neurons and layers. By the end of the 4 weeks(course 1), a student is introduced to all the core ideas required to build a dense neural network such as cost/loss functions, learning iteratively using gradient descent and vectorized parallel python(numpy) implementations. Andrew patiently explains the requisite math and programming concepts in a carefully planned order and a well regulated pace suitable for learners who could be rusty in math/coding. Lectures are delivered using presentation slides on which Andrew writes using digital pens. It felt like an effective way to get the listener to focus. I felt comfortable watching videos at 1.25x or 1.5x speed. Quizzes are placed at the end of each lecture sections and are in the multiple choice question format. If you watch the videos once, you should be able to quickly answer all the quiz questions. You can attempt quizzes multiple times and the system is designed to keep your highest score. Programming assignments are done via Jupyter notebooks — powerful browser based applications. Assignments have a nice guided sequential structure and you are not required to write more than 2–3 lines of code in each section. If you understand the concepts like vectorization intuitively, you can complete most programming sections with just 1 line of code! After the assignment is coded, it takes 1 button click to submit your code to the automated grading system which returns your score in a few minutes. Some assignments have time restrictions — say, three attempts in 8 hours etc. Jupyter notebooks are well designed and work without any issues. Instructions are precise and it feels like a polished product. Anyone interested in understanding what neural networks are, how they work, how to build them and the tools available to bring your ideas to life. If your math is rusty, there is no need to worry — Andrew explains all the required calculus and provides derivatives at every occasion so that you can focus on building the network and concentrate on implementing your ideas in code. If your programming is rusty, there is a nice coding assignment to teach you numpy. But I recommend learning python first on codecademy. Let me explain this with an analogy: Assume you are trying to learn how to drive a car. Jeremy’s FAST.AI course puts you in the drivers seat from the get-go. He teaches you to move the steering wheel, press the brake, accelerator etc. Then he slowly explains more details about how the car works — why rotating the wheel makes the car turn, why pressing the brake pedal makes you slow down and stop etc. He keeps getting deeper into the inner workings of the car and by the end of the course, you know how the internal combustion engine works, how the fuel tank is designed etc. The goal of the course is to get you driving. You can choose to stop at any point after you can drive reasonably well — there is no need to learn how to build/repair the car. Andrew’s DL course does all of this, but in the complete opposite order. He teaches you about internal combustion engine first! He keeps adding layers of abstraction and by the end of the course you are driving like an F1 racer! 
The fast AI course mainly teaches you the art of driving while Andrew’s course primarily teaches you the engineering behind the car. If you have not done any machine learning before this, don’t take this course first. The best starting point is Andrew’s original ML course on coursera. After you complete that course, please try to complete part-1 of Jeremy Howard’s excellent deep learning course. Jeremy teaches deep learning Top-Down which is essential for absolute beginners. Once you are comfortable creating deep neural networks, it makes sense to take this new deeplearning.ai course specialization which fills up any gaps in your understanding of the underlying details and concepts. 2. Andrew stresses on the engineering aspects of deep learning and provides plenty of practical tips to save time and money — the third course in the DL specialization felt incredibly useful for my role as an architect leading engineering teams. 3. Jargon is handled well. Andrew explains that an empirical process = trial & error — He is brutally honest about the reality of designing and training deep nets. At some point I felt he might have as well just called Deep Learning as glorified curve-fitting 4. Squashes all hype around DL and AI — Andrew makes restrained, careful comments about proliferation of AI hype in the mainstream media and by the end of the course it is pretty clear that DL is nothing like the terminator. 5.Wonderful boilerplate code that just works out of the box! 6. Excellent course structure. 7. Nice, consistent and useful notation. Andrew strives to establish a fresh nomenclature for neural nets and I feel he could be quite successful in this endeavor. 8. Style of teaching that is unique to Andrew and carries over from ML — I could feel the same excitement I felt in 2013 when I took his original ML course. 9.The interviews with deep learning heroes are refreshing — It is motivating and fun to hear personal stories and anecdotes. I wish that he’d said ‘concretely’ more often! 2. Good tools are important and will help you accelerate your learning pace. I bought a digital pen after seeing Andrew teach with one. It helped me work more efficiently. 3. There is a psychological reason why I recommend the Fast.ai course before this one. Once you find your passion, you can learn uninhibited. 4. You just get that dopamine rush each time you score full points: 5. Don’t be scared by DL jargon (hyperparameters = settings, architecture/topology=style etc.) or the math symbols. If you take a leap of faith and pay attention to the lectures, Andrew shows why the symbols and notation are actually quite useful. They will soon become your tools of choice and you will wield them with style! Thanks for reading and best wishes! Update: Thanks for the overwhelmingly positive response! Many people are asking me to explain gradient descent and the differential calculus. I hope this helps! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in Strong AI Sharing concepts, ideas, and codes. " Berit Anderson,1.6K,20,https://medium.com/join-scout/the-rise-of-the-weaponized-ai-propaganda-machine-86dac61668b?source=tag_archive---------4----------------,The Rise of the Weaponized AI Propaganda Machine – Scout: Science Fiction + Journalism – Medium,"By Berit Anderson and Brett Horvath This piece was originally published at Scout.ai. “This is a propaganda machine. It’s targeting people individually to recruit them to an idea. 
It’s a level of social engineering that I’ve never seen before. They’re capturing people and then keeping them on an emotional leash and never letting them go,” said professor Jonathan Albright. Albright, an assistant professor and data scientist at Elon University, started digging into fake news sites after Donald Trump was elected president. Through extensive research and interviews with Albright and other key experts in the field, including Samuel Woolley, Head of Research at Oxford University’s Computational Propaganda Project, and Martin Moore, Director of the Centre for the Study of Media, Communication and Power at Kings College, it became clear to Scout that this phenomenon was about much more than just a few fake news stories. It was a piece of a much bigger and darker puzzle — a Weaponized AI Propaganda Machine being used to manipulate our opinions and behavior to advance specific political agendas. By leveraging automated emotional manipulation alongside swarms of bots, Facebook dark posts, A/B testing, and fake news networks, a company called Cambridge Analytica has activated an invisible machine that preys on the personalities of individual voters to create large shifts in public opinion. Many of these technologies have been used individually to some effect before, but together they make up a nearly impenetrable voter manipulation machine that is quickly becoming the new deciding factor in elections around the world. Most recently, Analytica helped elect U.S. President Donald Trump, secured a win for the Brexit Leave campaign, and led Ted Cruz’s 2016 campaign surge, shepherding him from the back of the GOP primary pack to the front. The company is owned and controlled by conservative and alt-right interests that are also deeply entwined in the Trump administration. The Mercer family is both a major owner of Cambridge Analytica and one of Trump’s biggest donors. Steve Bannon, in addition to acting as Trump’s Chief Strategist and a member of the White House Security Council, is a Cambridge Analytica board member. Until recently, Analytica’s CTO was the acting CTO at the Republican National Convention. Presumably because of its alliances, Analytica has declined to work on any democratic campaigns — at least in the U.S. It is, however, in final talks to help Trump manage public opinion around his presidential policies and to expand sales for the Trump Organization. Cambridge Analytica is now expanding aggressively into U.S. commercial markets and is also meeting with right-wing parties and governments in Europe, Asia, and Latin America. Cambridge Analytica isn’t the only company that could pull this off — but it is the most powerful right now. Understanding Cambridge Analytica and the bigger AI Propaganda Machine is essential for anyone who wants to understand modern political power, build a movement, or keep from being manipulated. The Weaponized AI Propaganda Machine it represents has become the new prerequisite for political success in a world of polarization, isolation, trolls, and dark posts. There’s been a wave of reporting on Cambridge Analytica itself and solid coverage of individual aspects of the machine — bots, fake news, microtargeting — but none so far (that we have seen) that portrays the intense collective power of these technologies or the frightening level of influence they’re likely to have on future elections. 
In the past, political messaging and propaganda battles were arms races to weaponize narrative through new mediums — waged in print, on the radio, and on TV. This new wave has brought the world something exponentially more insidious — personalized, adaptive, and ultimately addictive propaganda. Silicon Valley spent the last ten years building platforms whose natural end state is digital addiction. In 2016, Trump and his allies hijacked them. We have entered a new political age. At Scout, we believe that the future of constructive, civic dialogue and free and open elections depends on our ability to understand and anticipate it. Welcome to the age of Weaponized AI Propaganda. Any company can aggregate and purchase big data, but Cambridge Analytica has developed a model to translate that data into a personality profile used to predict, then ultimately change your behavior. That model itself was developed by paying a Cambridge psychology professor to copy the groundbreaking original research of his colleague through questionable methods that violated Amazon’s Terms of Service. Based on its origins, Cambridge Analytica appears ready to capture and buy whatever data it needs to accomplish its ends. In 2013, Dr. Michal Kosinski, then a PhD. candidate at the University of Cambridge’s Psychometrics Center, released a groundbreaking study announcing a new model he and his colleagues had spent years developing. By correlating subjects’ Facebook Likes with their OCEAN scores — a standard-bearing personality questionnaire used by psychologists — the team was able to identify an individual’s gender, sexuality, political beliefs, and personality traits based only on what they had liked on Facebook. According to Zurich’s Das Magazine, which profiled Kosinski in late 2016, “with a mere ten ‘likes’ as input his model could appraise a person’s character better than an average coworker. With seventy, it could ‘know’ a subject better than a friend; with 150 likes, better than their parents. With 300 likes, Kosinski’s machine could predict a subject’s behavior better than their partner. With even more likes it could exceed what a person thinks they know about themselves.” Not long afterward, Kosinski was approached by Aleksandr Kogan, a fellow Cambridge professor in the psychology department, about licensing his model to SCL Elections, a company that claimed its specialty lay in manipulating elections. The offer would have meant a significant payout for Kosinki’s lab. Still, he declined, worried about the firm’s intentions and the downstream effects it could have. It had taken Kosinski and his colleagues years to develop that model, but with his methods and findings now out in the world, there was little to stop SCL Elections from replicating them. It would seem they did just that. According to a Guardian investigation, in early 2014, just a few months after Kosinski declined their offer, SCL partnered with Kogan instead. As a part of their relationship, Kogan paid Amazon Mechanical Turk workers $1 each to take the OCEAN quiz. There was just one catch: To take the quiz, users were required to provide access to all of their Facebook data. They were told the data would be used for research. The job was reported to Amazon for violating the platform’s Terms of Service. 
What many of the Turks likely didn’t realize: According to documents reviewed by The Guardian, “Kogan also captured the same data for each person’s unwitting friends.” The data gathered from Kogan’s study went on to birth Cambridge Analytica, which spun out of SCL Elections soon after. The name, metaphorically at least, was a nod to Kogan’s work — and a dig at Kosinski. But that early trove of user data was just the beginning — just the seed Analytica needed to build its own model for analyzing users personalities without having to rely on the lengthy OCEAN test. After a successful proof of concept and backed by wealthy conservative investors, Analytica went on a data shopping spree for the ages, snapping up data about your shopping habits, land ownership, where you attend church, what stores you visit, what magazines you subscribe to — all of which is for sale from a range of data brokers and third party organizations selling information about you. Analytica aggregated this data with voter roles, publicly available online data — including Facebook likes — and put it all into its predictive personality model. Nix likes to boast that Analytica’s personality model has allowed it to create a personality profile for every adult in the U.S. — 220 million of them, each with up to 5,000 data points. And those profiles are being continually updated and improved the more data you spew out online. Albright also believes that your Facebook and Twitter posts are being collected and integrated back into Cambridge Analytica’s personality profiles. “Twitter and also Facebook are being used to collect a lot of responsive data because people are impassioned, they reply, they retweet, but they also include basically their entire argument and their entire background on this topic,” he explains. Collecting massive quantities of data about voters’ personalities might seem unsettling, but it’s actually not what sets Cambridge Analytica apart. For Analytica and other companies like them, it’s what they do with that data that really matters. “Your behavior is driven by your personality and actually the more you can understand about people’s personality as psychological drivers, the more you can actually start to really tap in to why and how they make their decisions,” Nix explained to Bloomberg’s Sasha Issenburg. “We call this behavioral microtargeting and this is really our secret sauce, if you like. This is what we’re bringing to America.” Using those dossiers, or psychographic profiles as Analytica calls them, Cambridge Analytica not only identifies which voters are most likely to swing for their causes or candidates; they use that information to predict and then change their future behavior. As Vice reported recently, Kosinski and a colleague are now working on a new set of research, yet to be published, that addresses the effectiveness of these methods. Their early findings: Using personality targeting, Facebook posts can attract up to 63 percent more clicks and 1,400 more conversions. Scout reached out to Cambridge Analytica with a detailed list of questions about their communications tactics, but the company declined to answer any questions or to comment on any of their tactics. But researchers across the technology and media ecosystem who have been following Cambridge Analytica’s political messaging activities have unearthed an expansive, adaptive online network that automates the manipulation of voters at a scale never before seen in political messaging. 
“They [the Trump campaign] were using 40–50,000 different variants of ad every day that were continuously measuring responses and then adapting and evolving based on that response,” Martin Moore, director of Kings College’s Centre for the Study of Media, Communication and Power, told The Guardian in early December. “It’s all done completely opaquely and they can spend as much money as they like on particular locations because you can focus on a five-mile radius.” Where traditional pollsters might ask a person outright how they plan to vote, Analytica relies not on what they say but what they do, tracking their online movements and interests and serving up multivariate ads designed to change a person’s behavior by preying on individual personality traits. “For example,” Nix wrote in an op-ed last year about Analytica’s work on the Cruz campaign, ”our issues model identified that there was a small pocket of voters in Iowa who felt strongly that citizens should be required by law to show photo ID at polling stations.” “Leveraging our other data models, we were able to advise the campaign on how to approach this issue with specific individuals based on their unique profiles in order to use this relatively niche issue as a political pressure point to motivate them to go out and vote for Cruz. For people in the ‘Temperamental’ personality group, who tend to dislike commitment, messaging on the issue should take the line that showing your ID to vote is ‘as easy as buying a case of beer’. Whereas the right message for people in the ‘Stoic Traditionalist’ group, who have strongly held conventional views, is that showing your ID in order to vote is simply part of the privilege of living in a democracy.” For Analytica, the feedback is instant and the response automated: Did this specific swing voter in Pennsylvania click on the ad attacking Clinton’s negligence over her email server? Yes? Serve her more content that emphasizes failures of personal responsibility. No? The automated script will try a different headline, perhaps one that plays on a different personality trait — say the voter’s tendency to be agreeable toward authority figures. Perhaps: “Top Intelligence Officials Agree: Clinton’s Emails Jeopardized National Security.” Much of this is done through Facebook dark posts, which are only visible to those being targeted. Based on users’ response to these posts, Cambridge Analytica was able to identify which of Trump’s messages were resonating and where. That information was also used to shape Trump’s campaign travel schedule. If 73 percent of targeted voters in Kent County, Mich. clicked on one of three articles about bringing back jobs? Schedule a Trump rally in Grand Rapids that focuses on economic recovery. Political analysts in the Clinton campaign, who were basing their tactics on traditional polling methods, laughed when Trump scheduled campaign events in the so-called blue wall — a group of states that includes Michigan, Pennsylvania, and Wisconsin and has traditionally fallen to Democrats. But Cambridge Analytica saw they had an opening based on measured engagement with their Facebook posts. It was the small margins in Michigan, Pennsylvania and Wisconsin that won Trump the election. Dark posts were also used to depress voter turnout among key groups of democratic voters. “In this election, dark posts were used to try to suppress the African-American vote,” wrote journalist and Open Society fellow McKenzie Funk in a New York Times editorial. 
“According to Bloomberg, the Trump campaign sent ads reminding certain selected black voters of Hillary Clinton’s infamous ‘super predator’ line. It targeted Miami’s Little Haiti neighborhood with messages about the Clinton Foundation’s troubles in Haiti after the 2010 earthquake.’” Because dark posts are only visible to the targeted users, there’s no way for anyone outside of Analytica or the Trump campaign to track the content of these ads. In this case, there was no SEC oversight, no public scrutiny of Trump’s attack ads. Just the rapid-eye-movement of millions of individual users scanning their Facebook feeds. In the weeks leading up to a final vote, a campaign could launch a $10–100 million dark post campaign targeting just a few million voters in swing districts and no one would know. This may be where future ‘black-swan’ election upsets are born. “These companies,” Moore says, “have found a way of transgressing 150 years of legislation that we’ve developed to make elections fair and open.” Meanwhile, surprised by the results of the 2016 presidential race, Albright started looking into the ‘fake news problem’. As a part of his research, Albright scraped 306 fake news sites to determine how exactly they were all connected to each other and the mainstream news ecosystem. What he found was unprecedented — a network of 23,000 pages and 1.3 million hyperlinks. “The sites in the fake news and hyper-biased #MCM network,” Albright writes, “have a very small ‘node’ size — this means they are linking out heavily to mainstream media, social networks, and informational resources (most of which are in the ‘center’ of the network), but not many sites in their peer group are sending links back.” These sites aren’t owned or operated by any one individual entity, he says, but together they have been able to game Search Engine Optimization, increasing the visibility of fake and biased news anytime someone Googles an election-related term online — Trump, Clinton, Jews, Muslims, abortion, Obamacare. “This network,” Albright wrote in a post exploring his findings, “is triggered on-demand to spread false, hyper-biased, and politically-loaded information.” Even more shocking to him though was that this network of fake news creates a powerful infrastructure for companies like Cambridge Analytica to track voters and refine their personality targeting models “I scraped the trackers on these sites and I was absolutely dumbfounded. Every time someone likes one of these posts on Facebook or visits one of these websites, the scripts are then following you around the web. And this enables data-mining and influencing companies like Cambridge Analytica to precisely target individuals, to follow them around the web, and to send them highly personalised political messages.” The web of fake and biased news that Albright uncovered created a propaganda wave that Cambridge Analytica could ride and then amplify. The more fake news that users engage with, the more addictive Analytica’s personality engagement algorithms can become. Voter 35423 clicked on a fake story about Hillary’s sex-trafficking ring? Let’s get her to engage with more stories about Hillary’s supposed history of murder and sex trafficking. The synergy between fake-content networks, automated message testing, and personality profiling will rapidly spread to other digital mediums. Albright’s most-recent research focuses on an artificial intelligence that automatically creates YouTube videos about news and current events. 
The AI, which reacts to trending topics on Facebook and Twitter, pairs images and subtitles with a computer generated voiceover. It spooled out nearly 80,000 videos through 19 different channels in just a few days. Given its rapid development, the technology community needs to anticipate how AI propaganda will soon be used for emotional manipulation in mobile messaging, virtual reality, and augmented reality. If fake news created the scaffolding for this new automated political propaganda machine, bots, or fake social media profiles, have become its foot soldiers — an army of political robots used to control conversations on social media and silence and intimidate journalists and others who might undermine their messaging. Samuel Woolley, Director of Research at the University of Oxford’s Computational Propaganda Project and a fellow at Google’s Jigsaw project, has dedicated his career to studying the role of bots in online political organizing — who creates them, how they’re used, and to what end. Research by Woolley and his Oxford-based team in the lead-up to the 2016 election found that pro-Trump political messaging relied heavily on bots to spread fake news and discredit Hillary Clinton. By election day, Trump’s bots outnumbered hers, 5:1. “The use of automated accounts was deliberate and strategic throughout the election, most clearly with pro-Trump campaigners and programmers who carefully adjusted the timing of content production during the debates, strategically colonized pro-Clinton hashtags, and then disabled activities after Election Day,” the study by Woolley’s team reported. Woolley believes it’s likely that Cambridge Analytica was responsible for subcontracting the creation of those Trump bots, though he says he doesn’t have direct proof. Still, if anyone outside of the Trump campaign is qualified to speculate about who created those bots, it would be Woolley. Led by Dr. Philip Howard, the team’s Principal Investigator, Woolley and his colleagues have been tracking the use of bots in political organizing since 2010. That’s when Howard, buried deep in research about the role Twitter played in the Arab Spring, first noticed thousands of bots coopting hashtags used by protesters. Curious, he and his team began reaching out to hackers, botmakers, and political campaigns, getting to know them and trying to understand their work and motivations. Eventually, those creators would come to make up an informal network of nearly 100 informants that have kept Howard and his colleagues in the know about these bots over the last few years. Before long, Howard and his team were getting the heads up about bot propaganda campaigns from the creators themselves. As more and more major international political figures began using botnets as just another tool in their campaigns, Howard, Woolley and the rest of their team studied the action unfolding. The world these informants revealed is an international network of governments, consultancies (often with owners or top management just one degree away from official government actors), and individuals who build and maintain massive networks of bots to amplify the messages of political actors, spread messages counter to those of their opponents, and silence those whose views or ideas might threaten those same political actors. 
“The Chinese, Iranian, and Russian, governments employ their own social-media experts and pay small amounts of money to large numbers of people to generate pro-government messages,” Howard and his coauthors wrote in a 2015 research paper about the use of bots in the Venezuelan election. Depending on which of those three categories bot creators fall into — government, consultancy or individual — they’re just as likely to be motivated by political beliefs as they are the opportunity to auction off their networks of digital influence to the highest bidder. Not all bots are created equal. The average, run-of-the-mill Twitter bot is literally a robot — often programmed to retweet specific accounts to help popularize specific ideas or viewpoints. They also frequently respond automatically to Twitter users who use certain keywords or hashtags — often with pre-written slurs, insults or threats. High-end bots on the other hand are more analog, operated by real people. They assume fake identities with distinct personalities and their responses to other users online are specific, intended to change their opinions or those of their followers by attacking their viewpoints. They have online friends and followers. They’re also far less likely to be discovered — and their accounts deactivated — by Facebook or Twitter. Working on their own, Woolley estimates, an individual could build and maintain up to 400 of these boutique Twitter bots; on Facebook, which he says is more effective at identifying and shutting down fake accounts, an individual could manage 10–20. As a result, these high-quality botnets are often used for multiple political campaigns. During the Brexit referendum, the Oxford team watched as one network of bots, previously used to influence the conversation around the Israeli/Palestinian conflict, was reactivated to fight for the Leave campaign. Individual profiles were updated to reflect the new debate, their personal taglines changed to ally with their new allegiances — and away they went. Russia’s bot army has been the subject of particular scrutiny since a CIA special report revealed that Russia had been working to influence the election in Trump’s favor. Recently, reporter/comedian Samantha Bee traveled to Moscow to interview two paid Russian troll operators. Clad in black ski masks to obscure their identities, the two talked with Bee about how and why they were using their accounts during the U.S. election. They told Bee that they pose as Americans online and target sites like The Wall Street Journal, The New York Post, The Washington Post, Facebook and Twitter. Their goal, they said, is to “piss off” other social media users, change their opinions, and silence their opponents. Or, to put it in the words of Russian Troll #1, “when your opponent just ... shut up.” The 2016 U.S. election is over, but the Weaponized AI Propaganda Machine is just warming up. And while each of its components would be worrying on its own, together, they represent the arrival of a new era in political messaging — a steel wall between campaign winners and losers that can only be mounted by gathering more data, creating better personality analyses, rapid development of engagement AI, and hiring more trolls. At the moment, Trump and Cambridge Analytica are lapping their opponents. 
The more data they gather about individuals, the more Analytica and, by extension, Trump’s presidency will benefit from the network effects of their work — and the harder it will become to counter or fight back against their messaging in the court of public opinion. Each Tweet that echoes forth from the @realDonaldTrump and @POTUS accounts, announcing and defending the administration’s moves, is met with a chorus of protest and argument. But even that negative engagement becomes a valuable asset for the Trump administration because every impulsive tweet can be treated like a psychographic experiment. Trump’s first few weeks in office may have seemed bumbling, but they represent a clear signal of what lies ahead for Trump’s presidency — an executive order designed to enrage and distract his opponents as he and Bannon move to strip power from the judicial branch, install Bannon himself on the National Security Council, and issues a series of unconstitutional gag orders to federal agencies. Cambridge Analytica may be slated to secure more federal contracts and is likely about to begin managing White House digital communications for the rest of the Trump Administration. What new predictive-personality targeting becomes possible with potential access to data on U.S. voters from the IRS, Department of Homeland Security, or the NSA? “Lenin wanted to destroy the state, and that’s my goal, too. I want to bring everything crashing down and destroy all of today’s establishment,” Bannon said in 2013. We know that Steve Bannon subscribes to a theory of history where a messianic ‘Grey Warrior’ consolidates power and remakes the global order. Bolstered by the success of Brexit and the Trump victory, Breitbart (of which Bannon was Executive Chair until Trump’s election) and Cambridge Analytica (which Bannon sits on the board of) are now bringing fake news and automated propaganda to support far-right parties in at least Germany, France, Hungary, and India as well as parts of South America. Never has such a radical, international political movement had the precision and power of this kind of propaganda technology. Whether or not leaders, engineers, designers, and investors in the technology community respond to this threat will shape major aspects of global politics for the foreseeable future. The future of politics will not be a war of candidates or even cash on hand. And it’s not even about big data, as some have argued. Everyone will have access to big data — as Hillary did in the 2016 election. From now on, the distinguishing factor between those who win elections and those who lose them will be how a candidate uses that data to refine their machine learning algorithms and automated engagement tactics. Elections in 2018 and 2020 won’t be a contest of ideas, but a battle of automated behavior change. The fight for the future will be a proxy war of machine learning. It will be waged online, in secret, and with the unwitting help of all of you. Anyone who wants to effect change needs to understand this new reality. It’s only by understanding this — and by building better automated engagement systems that amplify genuine human passion rather than manipulate it — that other candidates and causes around the globe will be able to compete. Implication #1: Public Sentiment Turns Into High-Frequency Trading Thanks to stock-trading algorithms, large portions of public stock and commodity markets no longer resemble a human system and, some would argue, no longer serve their purpose as a signal of value. 
Instead they’re a battleground for high-frequency trading algorithms attempting to influence price or find nano-leverage in price position. In the near future, we may see a similar process unfold in our public debates. Instead of battling press conferences and opinion articles, public opinion about companies and politicians may turn into multi-billion dollar battles between competing algorithms, each deployed to sway public sentiment. Stock trading algorithms already exist that analyze millions of Tweets and online posts in real-time and make trades in a matter of milliseconds based on changes in public sentiment. Algorithmic trading and ‘algorithmic public opinion’ are already connected. It’s likely they will continue to converge. Implication #2: Personalized, Automated Propaganda That Adapts to Your Weaknesses What if President Trump’s 2020 re-election campaign didn’t just have the best political messaging, but 250 million algorithmic versions of their political message all updating in real-time, personalized to precisely fit the worldview and attack the insecurities of their targets? Instead of having to deal with misleading politicians, we may soon witness a Cambrian explosion of pathologically-lying political and corporate bots that constantly improve at manipulating us. Implication #3: Not Just a Bubble, But Trapped in Your Own Ideological Matrix Imagine that in 2020 you found out that your favorite politics page or group on Facebook didn’t actually have any other human members, but was filled with dozens or hundreds of bots that made you feel at home and your opinions validated? Is it possible that you might never find out? Correction: An earlier version of this story mistakenly referred to Steve Bannon as the owner of Breitbart News. Until Trump’s election, Bannon served as the Executive Chair of Breitbart, a position in which it is common to assume ownership through stock holdings. This story has been updated to reflect that. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. CEO & Co-founder @Join_Scout. The social implications of technology. " Slav Ivanov,4.4K,10,https://blog.slavv.com/37-reasons-why-your-neural-network-is-not-working-4020854bd607?source=tag_archive---------5----------------,37 Reasons why your Neural Network is not working – Slav,"The network had been training for the last 12 hours. It all looked good: the gradients were flowing and the loss was decreasing. But then came the predictions: all zeroes, all background, nothing detected. “What did I do wrong?” — I asked my computer, who didn’t answer. Where do you start checking if your model is outputting garbage (for example predicting the mean of all outputs, or it has really poor accuracy)? A network might not be training for a number of reasons. Over the course of many debugging sessions, I would often find myself doing the same checks. I’ve compiled my experience along with the best ideas around in this handy list. I hope they would be of use to you, too. A lot of things can go wrong. But some of them are more likely to be broken than others. I usually start with this short list as an emergency first response: If the steps above don’t do it, start going down the following big list and verify things one by one. Check if the input data you are feeding the network makes sense. For example, I’ve more than once mixed the width and the height of an image. Sometimes, I would feed all zeroes by mistake. Or I would use the same batch over and over. 
So print/display a couple of batches of input and target output and make sure they are OK. Try passing random numbers instead of actual data and see if the error behaves the same way. If it does, it’s a sure sign that your net is turning data into garbage at some point. Try debugging layer by layer (or op by op) and see where things go wrong. Your data might be fine, but the code that passes the input to the net might be broken. Print the input of the first layer, before any operations, and check it. Check that a few input samples have the correct labels. Also make sure that shuffling input samples shuffles the output labels in the same way. Maybe the non-random part of the relationship between the input and output is too small compared to the random part (one could argue that stock prices are like this). In other words, the input is not sufficiently related to the output. There isn’t a universal way to detect this, as it depends on the nature of the data. This happened to me once when I scraped an image dataset off a food site. There were so many bad labels that the network couldn’t learn. Check a bunch of input samples manually and see if the labels seem off. The cutoff point is up for debate, as this paper got above 50% accuracy on MNIST using 50% corrupted labels. If your dataset hasn’t been shuffled and has a particular order to it (ordered by label), this could negatively impact learning. Shuffle your dataset to avoid this, and make sure you are shuffling inputs and labels together. Are there 1,000 class A images for every class B image? Then you might need to balance your loss function or try other class-imbalance approaches. If you are training a net from scratch (i.e. not finetuning), you probably need lots of data. For image classification, people say you need 1,000 images per class or more. This can happen in a sorted dataset (i.e. the first 10k samples contain the same class). Easily fixable by shuffling the dataset. This paper points out that having a very large batch can reduce the generalization ability of the model. Thanks to @hengcherkeng for this one: did you standardize your input to have zero mean and unit variance? Augmentation has a regularizing effect. Too much of it, combined with other forms of regularization (weight L2, dropout, etc.), can cause the net to underfit. If you are using a pretrained model, make sure you are using the same normalization and preprocessing that were used when the model was originally trained. For example, should an image pixel be in the range [0, 1], [-1, 1] or [0, 255]? CS231n points out a common pitfall here: preprocessing statistics (such as the data mean) must be computed on the training data only and then applied to the validation and test sets. Also, check for different preprocessing in each sample or batch. This will help you find where the issue is. For example, if the target output is an object class and coordinates, try limiting the prediction to the object class only. Again from the excellent CS231n: initialize with small parameters, without regularization. For example, if we have 10 classes, chance means we will get the correct class 10% of the time, and the Softmax loss is the negative log probability of the correct class, so: -ln(0.1) = 2.302. After this, try increasing the regularization strength, which should increase the loss. If you implemented your own loss function, check it for bugs and add unit tests. Often, my loss would be slightly incorrect and hurt the performance of the network in a subtle way. If you are using a loss function provided by your framework, make sure you are passing to it what it expects.
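To make two of the checks above concrete — that the initial loss should sit near chance level, and that a framework loss function receives the kind of input it expects — here is a minimal PyTorch sketch (illustrative only; the tensor shapes and variable names are placeholders):

import math
import torch
import torch.nn as nn

num_classes = 10
batch = 64
logits = torch.randn(batch, num_classes)             # raw, unnormalized scores from an untrained net
targets = torch.randint(0, num_classes, (batch,))

# CrossEntropyLoss expects raw logits (it applies log-softmax internally)...
ce = nn.CrossEntropyLoss()(logits, targets)

# ...while NLLLoss expects log-probabilities, so pass it a log-softmax output.
nll = nn.NLLLoss()(torch.log_softmax(logits, dim=1), targets)

# With random predictions, both should sit near the chance-level loss -ln(1/C).
print(ce.item(), nll.item(), math.log(num_classes))  # all roughly 2.30 for 10 classes

If the loss printed at the very start of training is far from this value, the first suspects are the loss inputs (probabilities vs. log-probabilities vs. logits) or the label encoding.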
In PyTorch, for example, I would mix up NLLLoss and CrossEntropyLoss: the former expects log-probabilities (i.e. a log-softmax output), while the latter works directly on raw logits. If your loss is composed of several smaller loss functions, make sure their magnitudes relative to each other are sensible. This might involve testing different combinations of loss weights. Sometimes the loss is not the best predictor of whether your network is training properly. If you can, use other metrics like accuracy. Did you implement any of the layers in the network yourself? Check and double-check to make sure they are working as intended. Check whether you unintentionally disabled gradient updates for some layers or variables that should be learnable. Maybe the expressive power of your network is not enough to capture the target function. Try adding more layers or more hidden units in fully connected layers. If your input looks like (k, H, W) = (64, 64, 64), it’s easy to miss errors related to wrong dimensions. Use distinctive numbers for input dimensions (for example, a different prime number for each dimension) and check how they propagate through the network. If you implemented gradient descent by hand, gradient checking makes sure that your backpropagation works like it should. More info: 1 2 3. Overfit a small subset of the data and make sure it works. For example, train with just 1 or 2 examples and see if your network can learn to differentiate them, then move on to more samples per class. If unsure, use Xavier or He initialization. Also, your initialization might be leading you to a bad local minimum, so try a different initialization and see if it helps. Maybe you’re using a particularly bad set of hyperparameters. If feasible, try a grid search. Too much regularization can cause the network to underfit badly. Reduce regularization such as dropout, batch norm, weight/bias L2 regularization, etc. In the excellent “Practical Deep Learning for Coders” course, Jeremy Howard advises getting rid of underfitting first: overfit the training data sufficiently, and only then address overfitting. Maybe your network needs more time to train before it starts making meaningful predictions. If your loss is steadily decreasing, let it train some more. Some frameworks have layers like Batch Norm and Dropout that behave differently during training and testing. Switching to the appropriate mode might help your network predict properly. Your choice of optimizer shouldn’t prevent your network from training unless you have selected particularly bad hyperparameters. However, the right optimizer for a task can help you get the most training done in the shortest amount of time. The paper that describes the algorithm you are using should specify the optimizer. If not, I tend to use Adam or plain SGD with momentum. Check this excellent post by Sebastian Ruder to learn more about gradient descent optimizers. A low learning rate will cause your model to converge very slowly. A high learning rate will quickly decrease the loss in the beginning but might have a hard time finding a good solution. Play around with your current learning rate by multiplying it by 0.1 or 10. Getting a NaN (Not-a-Number) is a much bigger issue when training RNNs (from what I hear). Some approaches to fix it: lower the learning rate, and check your loss for divisions by zero or logarithms of zero or negative numbers. Did I miss anything? Is anything wrong? Let me know by leaving a reply below. Entrepreneur / Hacker. Machine learning, Deep learning and other types of learning. 
" Keval Patel,833,7,https://becominghuman.ai/turn-your-raspberry-pi-into-homemade-google-home-9e29ad220075?source=tag_archive---------6----------------,Turn your Raspberry Pi into homemade Google Home – Becoming Human: Artificial Intelligence Magazine,"Google Home is a beautiful device with built-in Google Assistant — A state of the art digital personal assistant by Google. — which you can place anywhere at your home and it will do some amazing things for you. It will save your reminders, shopping lists, notes and most importantly answers your questions and queries based on the context of the conversations. In this article, you are going to learn to turn your Raspberry Pi into homemade Google Home device which is, So, let’s get started. Once you have all these things, login to Raspbian desktop and go to the following steps one by one. As you can see your USB device is attached to card 1 and the device id is 0. Raspberry Pi recognizes card 0 as the internal sound card (which is bcm2835) and other external sound cards as external sound cards. This will set your external mic (see pcm.mic) as the audio capture device (see in pcm!.default) and your inbuilt sound card (card 0) as the speaker device. This will create Python 3 environment (As the Google Assistant library runs on Python 3.x only) in your raspberry pi and install required dependencies. If instead, it displays: InvalidGrantError then an invalid code was entered. Try again. You can run google-assistant-init.sh to initiate the Google Assistant any time. 1. Autostart with Pixel Desktop on Boot: 2. Autostart with CLI on Boot: You can do many daily stuff with your Google Home. If you want to perform your custom tasks like turning off the light, opening the door, you can do it with integrating Google Actions in your Google Assistant. If you have any trouble with starting the Google Assistant, leave a comment below. I will try to resolve them. ~If you liked the article, click the 💚 below so more people can see it! Also, you can follow me on Medium or on My Blog, so you get updates regarding my future articles!!~ From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. www.kevalpatel2106.com | Android Developer | Machine learner | Gopher | Open Source Contributor Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity. " Eduard Tyantov,5.4K,19,https://blog.statsbot.co/deep-learning-achievements-4c563e034257?source=tag_archive---------7----------------,Deep Learning Achievements Over the Past Year – Stats and Bots,"At Statsbot, we’re constantly reviewing the deep learning achievements to improve our models and product. Around Christmas time, our team decided to take stock of the recent achievements in deep learning over the past year (and a bit longer). We translated the article by a data scientist, Ed Tyantov, to tell you about the most significant developments that can affect our future. Almost a year ago, Google announced the launch of a new model for Google Translate. The company described in detail the network architecture — Recurrent Neural Network (RNN). The key outcome: closing down the gap with humans in accuracy of the translation by 55–85% (estimated by people on a 6-point scale). It is difficult to reproduce good results with this model without the huge dataset that Google has. You probably heard the silly news that Facebook turned off its chatbot, which went out of control and made up its own language. 
This chatbot was created by the company for negotiations. Its purpose is to conduct text negotiations with another agent and reach a deal about how to divide a set of items (books, hats, etc.) between the two of them. Each agent has its own goal in the negotiation, which the other does not know about, and it’s impossible to leave the negotiation without a deal. For training, they collected a dataset of human negotiations and trained a supervised recurrent network. Then they took a reinforcement-learning-trained agent and trained it to talk with itself, with one constraint: the language had to stay similar to human language. The bot learned one of the real negotiation strategies — showing fake interest in certain aspects of the deal, only to give them up later and gain ground on its real goals. It was the first attempt to create such an interactive bot, and it was quite successful. The full story is in this article, and the code is publicly available. Certainly, the news that the bot had allegedly invented a language was blown out of proportion. When training (in negotiations with the same agent), they disabled the constraint that the text remain similar to human language, and the algorithm modified the language of interaction. Nothing unusual. Over the past year, recurrent networks have been actively developed and used in many tasks and applications. The architecture of RNNs has become much more complicated, but in some areas similar results have been achieved by simple feedforward networks such as DSSM. For example, Google reached the same quality as its earlier LSTM-based approach for its Smart Reply mail feature. In addition, Yandex launched a new search engine based on such networks. DeepMind researchers reported on audio generation in their article. Briefly, they built an autoregressive, fully convolutional WaveNet model based on previous approaches to image generation (PixelRNN and PixelCNN). The network was trained end-to-end: text in, audio out. The researchers got an excellent result: the gap to human-level quality was reduced by 50%. The main disadvantage of the network is low throughput: because of the autoregression, samples are generated sequentially, and it takes about 1–2 minutes to create one second of audio. Look at... sorry, hear this example. If you remove the network’s dependence on the input text and leave only the dependence on the previously generated audio, the network will generate phonemes similar to human language, but they will be meaningless. Hear the example of the generated voice. This same model can be applied not only to speech but also, for example, to creating music. Imagine audio generated by a model trained on a dataset of piano performances (again without any dependence on the input data). Read the full version of the DeepMind research if you’re interested. Lip reading is another deep learning achievement and victory over humans. Google DeepMind, in collaboration with Oxford University, reported in the article “Lip Reading Sentences in the Wild” how their model, trained on a television dataset, was able to surpass a professional lip reader from the BBC. The dataset contains 100,000 sentences with audio and video. The model: an LSTM on the audio, and a CNN + LSTM on the video. The two resulting state vectors are fed to a final LSTM, which generates the result (characters). Different types of input data were used during training: audio, video, and audio + video. In other words, it is an “omnichannel” model.
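As a rough illustration of that two-stream idea, here is a minimal PyTorch sketch — my own simplification for intuition only, not the authors’ architecture; the layer sizes, feature dimensions, and class name are placeholders, and it assumes the audio and video streams are already aligned to the same number of time steps:

import torch
import torch.nn as nn

class TwoStreamLipReader(nn.Module):
    # Audio -> LSTM, video -> small CNN + LSTM, fused by a final LSTM that emits character logits.
    def __init__(self, n_chars=40, hidden=256):
        super().__init__()
        self.audio_lstm = nn.LSTM(input_size=40, hidden_size=hidden, batch_first=True)
        self.video_cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())                  # per frame: (16 * 4 * 4) features
        self.video_lstm = nn.LSTM(input_size=16 * 4 * 4, hidden_size=hidden, batch_first=True)
        self.fusion_lstm = nn.LSTM(input_size=2 * hidden, hidden_size=hidden, batch_first=True)
        self.to_chars = nn.Linear(hidden, n_chars)

    def forward(self, audio, video):
        # audio: (B, T, 40) spectrogram frames; video: (B, T, 3, H, W) mouth-region crops
        a, _ = self.audio_lstm(audio)
        b, t = video.shape[:2]
        v = self.video_cnn(video.flatten(0, 1)).view(b, t, -1)      # per-frame visual features
        v, _ = self.video_lstm(v)
        fused, _ = self.fusion_lstm(torch.cat([a, v], dim=-1))      # combine the two streams
        return self.to_chars(fused)                                 # per-timestep character logits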
The University of Washington has done a serious job of generating the lip movements of former US President Obama. He was chosen because of the huge number of recordings of his speeches available online (17 hours of HD video). The network alone wasn’t enough, as it produced too many artifacts, so the authors of the article added several crutches (or tricks, if you like) to improve the textures and timing. You can see that the results are amazing. Soon, you won’t be able to trust even video footage of the president. In their post and article, the Google Brain team reported how they introduced a new OCR (Optical Character Recognition) engine into Google Maps, which recognizes street signs and store signs. In the course of developing the technology, the company compiled a new dataset, FSNS (French Street Name Signs), which contains many complex cases. To recognize each sign, the network uses up to four photos of it. Features are extracted with a CNN, combined using spatial attention (pixel coordinates are taken into account), and the result is fed to an LSTM. The same approach is applied to the task of recognizing store names on signboards (there can be a lot of “noisy” data, and the network itself must “focus” on the right places). This algorithm was applied to 80 billion photos. There is a type of task called visual reasoning, where a neural network is asked to answer a question about a photo. For example: “Is there a rubber thing in the picture that is the same size as the yellow metal cylinder?” The question is truly nontrivial, and until recently the problem was solved with an accuracy of only 68.5%. Once again the breakthrough was achieved by the DeepMind team: on the CLEVR dataset they reached a super-human accuracy of 95.5%. The network architecture is very interesting. An interesting application of neural networks was created by the company Uizard: generating layout code from a screenshot provided by an interface designer. This is an extremely useful application of neural networks, which can make life easier when developing software. The authors claim that they reached 77% accuracy. However, this is still under research and there is no talk of real-world usage yet. There is no code or dataset in open source, but they promise to upload them. Perhaps you’ve seen Quick, Draw! from Google, where the goal is to draw sketches of various objects in 20 seconds. The corporation collected this dataset in order to teach a neural network to draw, as Google described in its blog and article. The collected dataset consists of 70 thousand sketches, which eventually became publicly available. Sketches are not pictures but detailed vector representations of drawings (where the user pressed the “pencil” down, where it was released, where the line was drawn, and so on). Researchers trained a sequence-to-sequence Variational Autoencoder (VAE) using an RNN as the encoding/decoding mechanism. As befits an autoencoder, the model produces a latent vector that characterizes the original picture. Since the decoder can reconstruct a drawing from this vector, you can modify the vector and get new sketches — and even perform vector arithmetic to create a catpig.
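As a toy illustration of that latent-vector arithmetic (a sketch only — encoder and decoder stand in for a trained Sketch-RNN-style model, which is not shown here):

def mix_latents(encoder, decoder, sketch_a, sketch_b, alpha=0.5):
    # Blend (or add/subtract) latent codes from two sketches and decode the result.
    z_a = encoder(sketch_a)
    z_b = encoder(sketch_b)
    z_mix = (1 - alpha) * z_a + alpha * z_b   # e.g. halfway between a cat and a pig -> a "catpig"
    return decoder(z_mix)

The same kind of latent arithmetic shows up again with GAN generators in the next section.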
One of the hottest topics in deep learning is Generative Adversarial Networks (GANs). Most often this idea is applied to images, so I will explain the concept using them. The idea is a competition between two networks — the generator and the discriminator. The first network creates a picture, and the second one tries to determine whether that picture is real or generated. Schematically, it works like this: during training, the generator takes a random vector (noise), generates an image, and feeds it to the discriminator, which says whether it is fake or not. The discriminator is also given real images from the dataset. Such a construction is difficult to train, as it is hard to find the equilibrium point between the two networks; most often the discriminator wins and training stagnates. However, the advantage of the approach is that we can solve problems for which it is difficult to write down a loss function explicitly (for example, improving the quality of a photo) — we hand that job to the discriminator. A classic example of GAN output is generated pictures of bedrooms or faces. Previously, we considered autoencoding (Sketch-RNN), which encodes the original data into a latent representation. The same thing happens with the generator. The idea of generating an image from a vector is clearly shown in this project using faces as an example. You can change the vector and see how the faces change. The same arithmetic works over the latent space: “a man in glasses” minus “a man” plus “a woman” equals “a woman with glasses.” If you attach a controllable parameter to the latent vector during training, you can change that parameter at generation time and steer the resulting image. This approach is called a conditional GAN. That is what the authors of “Face Aging With Conditional Generative Adversarial Networks” did. Having trained the model on the IMDB dataset, where actors’ ages are known, the researchers were able to change the apparent age of a face. Google has found another interesting application of GANs — selecting and enhancing photos. The GAN was trained on a dataset of professional photos: the generator tries to improve bad photos (professionally shot and then degraded with special filters), and the discriminator tries to distinguish “improved” photos from real professional ones. The trained algorithm then went through Google Street View panoramas in search of the best compositions and produced some pictures of professional and semi-professional quality (according to photographers’ ratings). An impressive example of GANs is generating images from text. The authors of this research suggest feeding the text not only into the generator (as in a conditional GAN) but also into the discriminator, so that it verifies that the text matches the picture. To make sure the discriminator learned this function, they also added training pairs of real pictures with mismatched text. One of the eye-catching articles of 2016 is “Image-to-Image Translation with Conditional Adversarial Networks” by Berkeley AI Research (BAIR). Researchers tackled the problem of image-to-image translation — for example, creating a map from a satellite image, or realistic textures of objects from their sketches. This is another example of conditional GANs working well; in this case, the condition is an entire picture. UNet, popular in image segmentation, was used as the generator architecture, and a new PatchGAN classifier was used as the discriminator to combat blurry outputs (the picture is cut into N patches, and the fake/real prediction is made for each patch separately). Christopher Hesse made the nightmare cat demo, which attracted great interest from users.
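Returning to the basic GAN recipe described at the start of this section, a minimal training loop might look like the sketch below — a toy fully connected setup for illustration only (real image GANs use convolutional generators and discriminators, and the sizes here are arbitrary placeholders):

import torch
import torch.nn as nn

# Toy generator and discriminator; 64-dim noise in, flattened 28x28 "image" out.
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                         # real: (batch, 784) images from the dataset
    batch = real.size(0)
    fake = G(torch.randn(batch, 64))          # generator turns noise into an image

    # 1) Discriminator: real images should score 1, generated images 0.
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator: try to make the updated discriminator label its output as real.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

The delicate balance mentioned above — keeping the two players from overpowering each other — shows up here in choices like the relative learning rates and the order of the two updates.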
You can find the Pix2Pix source code here. To apply Pix2Pix, you need a dataset with corresponding pairs of pictures from the two domains. For maps, for example, assembling such a dataset is not a problem. However, if you want to do something more complicated, like “transfiguring” objects or style transfer, then matching pairs simply cannot be found. Therefore, the authors of Pix2Pix developed their idea further and came up with CycleGAN, which transfers between image domains without specific pairs — “Unpaired Image-to-Image Translation.” The idea is to train two generator–discriminator pairs to transfer an image from one domain to the other and back, while requiring cycle consistency: after applying the two generators in sequence, we should get back an image close to the original (under an L1 loss). The cycle loss is needed to ensure that the generators don’t simply map pictures of one domain onto pictures from the other domain that are completely unrelated to the original image. This approach lets you learn the mapping of horses -> zebras. Such transformations are unstable and often produce failure cases. You can find the CycleGAN source code here. Machine learning is now coming to medicine. In addition to analyzing ultrasound and MRI scans and assisting diagnosis, it can be used to find new drugs to fight cancer. We already reported on this research in detail. Briefly, with the help of an Adversarial Autoencoder (AAE), you can learn a latent representation of molecules and then use it to search for new ones. As a result, 69 molecules were found, half of which are used to fight cancer, and the others have serious potential. Adversarial attacks are another actively explored topic. What are adversarial attacks? Standard networks trained, for example, on ImageNet are completely unstable when specially crafted noise is added to the picture being classified. In a typical example, the picture with noise looks practically unchanged to the human eye, but the model goes crazy and predicts a completely different class. Such attacks can be crafted with, for example, the Fast Gradient Sign Method (FGSM): with access to the model’s parameters, you take one or several gradient steps toward the desired class and perturb the original picture accordingly. One of the tasks on Kaggle is related to this: participants are encouraged to create universal attacks and defenses, which are eventually run against each other to determine the best. Why should we even investigate these attacks? First, if we want to protect our products, we can add noise to a captcha to prevent spammers from recognizing it automatically. Second, algorithms are more and more involved in our lives — face recognition systems and self-driving cars — and attackers can exploit the shortcomings of those algorithms. Here is an example of special glasses that deceive a face recognition system and let you “pass yourself off as another person.” So we need to take possible attacks into account when training models. Similar manipulations of road signs also prevent them from being recognized correctly. • A set of articles from the organizers of the contest. • Ready-made libraries for attacks: cleverhans and foolbox.
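As a sketch of what such an attack looks like in code (this is the untargeted variant, which nudges the image so as to increase the loss of the true class; it assumes a differentiable model, batched inputs, and pixel values scaled to [0, 1] — the names are placeholders, not a specific library API):

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=0.01):
    # Fast Gradient Sign Method: one gradient step on the input pixels, not the weights.
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move every pixel a small step in the direction that increases the loss.
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

The targeted variant described in the text is the same idea with the sign flipped: step so as to decrease the loss of the class you want the model to predict.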
Reinforcement learning (RL) is also one of the most interesting and actively developing approaches in machine learning. The essence of the approach is to learn successful behavior for an agent, through experience, in an environment that provides rewards — just as people learn throughout their lives. RL is actively used in games, robotics, and systems management (traffic control, for example). Of course, everyone has heard about AlphaGo’s victories over the best professional Go players. The researchers used RL for training: the bot played against itself to improve its strategies. In previous years, DeepMind had used DQN to learn to play arcade games better than humans. Currently, algorithms are being taught to play more complex games like Doom. Much attention is paid to speeding up learning, because gathering the agent’s experience through interaction with the environment requires many hours of training on modern GPUs. DeepMind reported on its blog that introducing additional losses (auxiliary tasks), such as predicting frame changes (pixel control) so that the agent better understands the consequences of its actions, significantly speeds up learning. 4.2. Learning robots. At OpenAI, researchers have been actively studying how humans can train an agent in a virtual environment, which is safer for experiments than real life. In one of the studies, the team showed that one-shot learning is possible: a person shows in VR how to perform a certain task, and a single demonstration is enough for the algorithm to learn it and then reproduce it in real conditions. If only it were so easy with people. :) Here is work from OpenAI and DeepMind on the same topic. The bottom line: an agent has a task, the algorithm shows the human two possible solutions, and the human indicates which one is better. The process is repeated iteratively, and with just 900 bits of feedback (binary labels) from the person, the algorithm learned how to solve the problem. As always, the human must be careful about what they are teaching the machine. For example, the evaluator decided that the algorithm really intended to grasp the object, when in fact it was only simulating that action. There is another study from DeepMind. To teach a robot complex behavior (walking, jumping, and so on), and even to do it in a human-like way, you normally have to put a lot of work into choosing a loss function that encourages the desired behavior. However, it would be preferable for the algorithm to learn complex behavior on its own from simple rewards. Researchers managed to achieve this: they taught agents (body emulators) to perform complex actions by constructing a complex environment with obstacles and giving a simple reward for forward progress. You can watch the impressive video of the results — though it’s much more fun to watch with superimposed sound! Finally, here is a link to the recently published RL algorithms from OpenAI; now you can use more advanced solutions than the standard DQN. In July 2017, Google reported that it took advantage of DeepMind’s machine learning work to reduce the energy costs of its data centers. Based on information from thousands of sensors in a data center, Google developers trained an ensemble of neural networks to predict PUE (Power Usage Effectiveness) and to manage the data center more efficiently. This is an impressive and significant example of the practical application of ML. As you know, trained models transfer poorly from task to task, since each task requires training a specific model.
A small step towards the universality of the models was done by Google Brain in his article “One Model To Learn The All.” Researchers have trained a model that performs eight tasks from different domains (text, speech, and images). For example, translation from different languages, text parsing, and image and sound recognition. In order to achieve this, they built a complex network architecture with various blocks to process different input data and generate a result. The blocks for the encoder/decoder fall into three types: convolution, attention, and gated mixture of experts (MoE). Main results of learning: By the way, this model is present in tensor2tensor. In their post, Facebook staff told us how their engineers were able to teach the Resnet-50 model on Imagenet in just one hour. Truth be told, this required a cluster of 256 GPUs (Tesla P100). They used Gloo and Caffe2 for distributed learning. To make the process effective, it was necessary to adapt the learning strategy with a huge batch (8192 elements): gradient averaging, warm-up phase, special learning rate, etc. As a result, it was possible to achieve an efficiency of 90% when scaling from 8 to 256 GPU. Now researchers from Facebook can experiment even faster, unlike mere mortals without such a cluster. The self-driving car sphere is intensively developing, and the cars are actively tested. From the relatively recent events, we can note the purchase of Intel MobilEye, the scandals around Uber and Google technologies stolen by their former employee, the first death when using an autopilot, and much more. I will note one thing: Google Waymo is launching a beta program. Google is a pioneer in this field, and it is assumed that their technology is very good because cars have been driven more than 3 million miles. As to more recent events, self-driving cars have been allowed to travel across all US states. As I said, modern ML is beginning to be introduced into medicine. For example, Google collaborates with a medical center to help with diagnosis. Deepmind has even established a separate unit. This year, under the program of the Data Science Bowl, there was a competition held to predict lung cancer in a year on the basis of detailed images with a prize fund of one million dollars. Currently, there are heavy investments in ML as it was before with BigData. China invested $150 billion in AI to become the world leader in the industry. For comparison, Baidu Research employs 1,300 people, and in the same FAIR (Facebook) — 80. At the last KDD, Alibaba employees talked about their parameter server KungPeng, which runs on 100 billion samples with a trillion parameters, which “becomes a common task” ©. You can draw your own conclusions, it’s never too late to study machine learning. In one way or another, over time, all developers will use machine learning, which will become one of the common skills, as it is today — the ability to work with databases. Link to the original post. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Mail.ru Group, Head of Machine Learning Team Data stories on machine learning and analytics. From Statsbot’s makers. " Maruti Techlabs,552,5,https://chatbotsmagazine.com/which-are-the-best-intelligent-chatbots-or-ai-chatbots-available-online-cc49c0f3569d?source=tag_archive---------8----------------,What Are The Best Intelligent Chatbots or AI Chatbots Available Online?,"How do we define the intelligence of a chatbot? 
You can see a lot of articles about what would make a chatbot “appear intelligent.” A chatbot is intelligent when it becomes aware of user needs. Its intelligence is what gives the chatbot the ability to handle any scenario of a conversation with ease. Are the travel bots or the weather bots that have buttons that you click and give you some query, artificially intelligent? Definitely, but they are just not far along the conversation axis. It can be a wonderfully designed conversational interface that is smooth and easy to use. It could be natural language processing and understanding where it is able to understand sentences that you structure in the wrong way. Now, it is easier than ever to make a bot from scratch. Also chatbot development platforms like Chatfuel, Gupshup make it fairly simple to build a chatbot without a technical background. Hence, making the reach for chatbot easy and transparent to anyone who would like to have one for their business. For more understanding on intelligent chatbots, read our blog. The best AI based chatbots available online are Mitsuku, Rose, Poncho, Right Click, Insomno Bot, Dr. AI and Melody. This chatbot is one the best AI chatbots and it’s my favorite too. Evidently it is the current winner of Loebner Prize. The Loebner Prize is an annual competition in artificial intelligence that awards prizes to the chatterbot considered by the judges to be the most human-like. The format of the competition is that of a standard Turing test. You can talk with Mitsuku for hours without getting bored. It replies to your question in the most humane way and understands your mood with the language you’re using. It is a bot made to chat about anything, which is one of the main reasons that make it so human-like — contrary to other chatbots that are made for a specific task. Rose is a chatbot, and a very good one — she won recognition this past Saturday as the most human-like chatbot in a competition described as the first Turing test, the Loebner Prize in 2014 and 2015. Right Click is a startup that introduced an A.I.-powered chatbot that creates websites. It asks general questions during the conversation like “What industry you belong to?” and “Why do you want to make a website?” and creates customized templates as per the given answers. Hira Saeed tried to divert it from its job by asking it about love, but what a smart player it is! By replying to each of her queries, it tried to bring her back to the actual job of website creation. The process was short but keeps you hooked. Poncho is a Messenger bot designed to be your one and only weather expert. It sends alerts up to twice a day with user consent and is intelligent enough to answer questions like “Should I take an umbrella today?” Read Poncho developer’s piece: Think Differently When Building Bots Insomno bot is for night owls. As the name suggests, it is for all people out there who have trouble sleeping. This bot talks to you when you have no one around and gives you amazing replies so that you won’t get bored. It’s not something that will help you count stars when you can’t sleep or help you with reading suggestions, but this bot talks to you about anything. It asks about symptoms, body parameters and medical history, then compiles a list of the most and least likely causes for the symptoms and ranks them by order of seriousness. It lives inside the existing Biadu Doctor app. 
This app collects medical information from people and then passes it to doctors in a form that makes it easier to use for diagnostic purposes or to otherwise respond to. Featured CBM: The Future, Healthcare, and Conversational UI These are just the basic versions of intelligent chatbots. There are many more intelligent chatbots out there which provide a much more smarter approach to responding to queries. Since the process of making a intelligent chatbot is not a big task, most of us can achieve it with the most basic technical knowledge. Many of which will be very extremely helpful in the service industry and also help provide a better customer experience. The most important part of any chatbot is the conversation it has with its user. Hence, more effort has to be put in designing a chatbot conversation. Hope you had a good read. To know more about Chatbots and how they converse with people, visit the link below. Featured CBM: How to Make a Chatbot Intelligent? If you resonated with this article, please subscribe to our newsletter. You will get a free copy of our Case Study on Business Automation through our Bot solution. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Professional team delivering enterprise software solutions — Bot development, Big Data Analytics, Web & Mobile Apps, and AI & ML integration. Chatbots, AI, NLP, Facebook Messenger, Slack, Telegram, and more. " Jerry Chen,2.3K,11,https://news.greylock.com/the-new-moats-53f61aeac2d9?source=tag_archive---------9----------------,The New Moats – Greylock Perspectives,"To build a sustainable and profitable business, you need strong defensive moats around your company. This rings especially true today as we undergo one of the largest platform shifts in a generation as applications move to the cloud, are consumed on iPhones, Echoes, and Teslas, are built on open source, and are fueled by AI and data. These dramatic shifts are rendering some existing moats useless and leaving CEOs feeling like it’s almost impossible to build a defensible business. In this post, I’ll review some of the traditional economic moats that technology companies typically leverage and how they are being disrupted. I believe that startups today need to build systems of intelligenceTM — AI powered applications — “the new moats.” Businesses can build several different moats and over time these moats can change. The following list is definitely not exhaustive and fair warning, it will read like a bad b-school blog! Some of the greatest and most enduring technology companies are defended by powerful moats. For example, Microsoft, Google, and Facebook all have moats built on economies of scale and network effects. One of the most successful cloud businesses, Amazon Web Services (AWS), has both the advantages of scale but also the power of network effects. More apps and services are built natively on AWS because “that’s where the customers and the data are.” In turn, the ecosystem of solutions attracts more customers and developers who build more apps that generate more data continuing the virtuous cycle while driving down Amazon’s cost through the advantages of scale. Strong moats help companies survive through major platform shifts, but surviving should not be confused with thriving. For example, high switching costs can partly account for why mainframes and “big iron” systems are still around after all these years. 
Legacy businesses with deep moats may not be the high growth vehicles of their prime, but they are still generating profits. Companies need to recognize and react when they are in the midst of an industry wide transformation, lest they become victims of their own success. Moreover, these massive platforms shifts — like cloud and mobile — are technology tidal waves that create openings for new players and enable founders to build paths over and around existing moats. Startup founders who succeed tend to execute a dual-pronged strategy: 1) Attack legacy player moats and 2) simultaneously build their own defensible moats that ride the new wave. For example, Facebook had the most entrenched social network, but Instagram built a mobile-first photo app that rode the smartphone wave to a $1B acquisition. In the enterprise world, SaaS companies like Salesforce are disrupting on-premise software companies like Oracle. Now with the advent of cloud, AWS, Azure, and Google Cloud are creating a direct channel to the customer. These platform shifts can also change the buyer and end user. Within the enterprise, the buyer has moved from a central IT team to an office knowledge worker, to someone with an iPhone, to any developer with a GitHub account. In this current wave of disruption, is it still possible to build sustainable moats? For founders, it may feel like every advantage you build can be replicated by another team down the street, or at the very least, it feels like moats can only be built at massive scale. Open source tools and cloud have pushed power to the “new incumbents,’ — the current generation of companies that are at massive scale, have strong distribution networks, high switching cost, and strong brands working for them. These are companies like Apple, Facebook, Google, Amazon, and Salesforce. Why does it feel like there are “no more moats” to build? In an era of cloud and open source, deep technology attacking hard problems is becoming a shallower moat. The use of open source is making it harder to monetize technology advances while the use of cloud to deliver technology is moving defensibility to different parts of the product. Companies that focus too much on technology without putting it in context of a customer problem will be caught between a rock and a hard place — or as I like to say, “between open source and a cloud place.” For example, incumbent technologies like Oracle’s proprietary database are being attacked from open source alternatives like Hadoop and MongoDB and in the cloud by Amazon Aurora and innovations like Google Spanner. On the other hand, companies that build great customer experiences may find defensibility through the workflow of their software. I believe that deep technology moats aren’t completely gone and defensible business models can still be built around IP. If you pick a place in the technology stack and become the absolute best of breed solution you can create a valuable company. However, this means picking a technical problem with few substitutes, that requires hard engineering, and needs operational knowledge to scale. Today the market is favoring “full stack” companies, SaaS offerings that offer application logic, middleware, and databases combined. Technology is becoming an invisible component of a complete solution (e.g. “No one cares what database backs your favorite mobile app as long as your food is delivered on time!”). 
In the consumer world, Apple made the integrated or full stack experience popular with the iPhone which seamlessly integrated hardware with software. This integrated experience is coming to dominate enterprise software as well. Cloud and SaaS has made it possible to reach customers directly and in a cost-effective manner. As a result, customers are increasingly buying full stack technology in the form of SaaS applications instead of buying individual pieces of the tech stack and building their own apps. The emphasis on the whole application experience or the “top of the technology stack” is why I also evaluate companies through an additional framework, the stack of enterprise systems. At the bottom of the stack of systems, is usually a database on top of which an application is built. If the data and app power a critical business function, it becomes a “system of record.” There are three major systems of record in an enterprise: your customers, your employees, and your assets. CRM owns your customers, HCM, owns your employees, and ERP/Financials owns your assets. Generations of companies have been built around owning a system of record and every wave produced a new winner. In CRM we saw Salesforce replace Siebel as the system of record for customer data, and Workday replace Oracle PeopleSoft for employee data. Workday has also expanded into financial data. Other applications can be built around a system of record but are usually not as valuable as the actual system of record. For example, marketing automation companies like Marketo and Responsys built big businesses around CRM, but never became as strategic or as valuable as Salesforce. Systems of engagementTM are the interfaces between users and the systems of record and can be powerful businesses because they control the end user interactions. In the mainframe era, the systems of record and engagement were tied together when the mainframe and terminal were essentially the same product. The client/server wave ushered in a class of companies that tried to own your desktop, only to be disrupted by a generation of browser based companies, only to be succeeded by mobile first companies. The current generation of companies vying to own the system of engagement include Slack, Amazon Alexa, and every other speech / text/ conversational UI startup. In China, WeChat has become a dominant system of engagement and is now a platform for everything from e-commerce to games. If it sounds like systems of engagementTM turn over more than systems of record, it’s probably because they do. The successive generations of systems of engagementTM don’t necessarily disappear but instead users keep adding new ways to interact with their applications. In a multi-channel world, owning the system of engagement is most valuable if you control most of the end user engagement or are a cross channel system that reaches users wherever they are. Perhaps the most strategic advantage of being a system of engagement is that you can coexist with several systems of record and collect all the data that passes through your product. Over time you can evolve your engagement position into an actual system of record using all the data you have accumulated. I believe that systems of intelligenceTM are the new moats. What is a system of intelligence and why is it so defensible? What makes a system of intelligence valuable is that it typically crosses multiple data sets, multiple systems of record. 
One example is an application that combines web analytics with customer data and social data to predict end user behavior, churn, LTV, or just serve more timely content. You can build intelligence on a single data source or single system of record but that position becomes harder to defend against the vendor that owns the data. For a startup to thrive around incumbents like Oracle and SAP, you need to combine their data with other data sources (public or private) to create value for your customer. Incumbents will be advantaged on their own data. For example, Salesforce is building a system of intelligence, Einstein, starting with their own system of record, CRM. The next generation of enterprise products will use different artificial intelligence (AI) techniques to build systems of intelligenceTM. It’s not just applications that will be transformed by AI but also data center and infrastructure products. We can categorize three major areas where you can build systems of intelligenceTM: customer facing applications around the customer journey, employee facing applications like HCM, ITSM, Financials, or infrastructure systems like security, compute/ storage/ networking, and monitoring/ management. In addition to these broad horizontal use cases, startups can also focus on a single industry or market and build a system of intelligence around data that is unique to a vertical like Veeva in life sciences, or Rhumbix in construction. In all of these markets, the battle is moving from the old moats, the sources of the data, to the new moats, what you do with the data. Using a company’s data, you can upsell customers, automatically respond to support tickets, prevent employee attrition, and identify security anomalies. Products that use data specific to an industry (i.e. healthcare, financial services), or unique to a company (customer data, machine logs, etc.) to solve a strategic problem begin to look like a pretty deep moat, especially if you can replace or automate an entire enterprise workflow or create a new value-added workflow that was made possible by this intelligence. Enterprise applications that built systems of record have always been powerful businesses models. Some of the most enduring app companies like Salesforce and SAP are all built on deep IP, benefit from economies of scale, and over time they accumulate more data and operating knowledge as they get deeper within a company’s workflow and business processes. However, even these incumbents are not immune to platform shifts as a new generation of companies attack their domains. To be fair, we may be at risk of AI marketing fatigue, but all the hype reflects AI’s potential to change so many industries. One popular AI approach, machine learning (ML), can be combined with data, a business process, and an enterprise workflow to create the context to build a system of intelligence. Google was an early pioneer of applying ML to a process and workflow: they collected more data on every user and applied machine learning to serve up more timely ads within the workflow of a web search. There are other evolving AI techniques like neural networks that will continue to change what we can expect from these future applications. These AI-driven systems of intelligenceTM present a huge opportunity for new startups. Successful companies here can build a virtuous cycle of data because the more data you generate and train on with your product, the better your models become and the better your product becomes. 
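To make the idea of crossing multiple systems of record concrete, here is a minimal sketch of my own (not from the post; the file and column names are made up) that joins CRM, product-usage, and support data to train a simple churn model:

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical exports from three separate systems of record.
crm = pd.read_csv("crm_accounts.csv")         # account_id, plan, seats, churned
usage = pd.read_csv("product_usage.csv")      # account_id, weekly_active_users
support = pd.read_csv("support_tickets.csv")  # account_id, open_tickets

# The defensible value comes from crossing the data sets, not from any one of them.
df = crm.merge(usage, on="account_id").merge(support, on="account_id")

X = df[["seats", "weekly_active_users", "open_tickets"]]
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

The point is not the toy model itself but that the combined table carries signal that no single system of record holds on its own.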
Ultimately the product becomes tailored for each customer, which creates another moat: high switching costs. It is also possible to build a company that combines systems of engagement™ with intelligence, or even all three layers of the enterprise stack, but a system of intelligence or engagement can be the best insertion point for a startup against an incumbent. Building a system of engagement or intelligence is not a trivial task and will require deep technology, especially at speed and scale. In particular, technologies that can facilitate an intelligence layer across multiple data sources will be essential. Finally, there are some businesses that can build data network effects by using customer and market data to train and improve models that make the product better for all customers, which spins the flywheel of intelligence faster. In summary, you can build a defensible business model as a system of engagement, intelligence, or record, but with the advent of AI, intelligent applications will be the fountain of the next generation of great software companies because they will be the new moats. Thanks to Saam Motamedi, Sarah Guo, Eli Collins, Peter Bailis, Elisa Schreiber, Michael Inouye, my Greylock partner Sarah Tavel, and the rest of my partners at Greylock for their input. This post was also helped through conversations with my friends at several Greylock-backed companies including Trifacta, Cloudera, and dozens of founders and CEOs who have influenced my thinking. All good ideas are shamelessly stolen and all bad ideas are mine alone. Restless. Irreverent. Partner at @GreylockVC. www.jerrychen.com Greylock Partners backs entrepreneurs who are building disruptive, market-transforming consumer and enterprise software companies. " Sarthak Jain,3.9K,10,https://medium.com/nanonets/how-to-easily-detect-objects-with-deep-learning-on-raspberrypi-225f29635c74?source=tag_archive---------2----------------,How to easily Detect Objects with Deep Learning on Raspberry Pi,"Disclaimer: I’m building nanonets.com to help build ML with less data and no hardware The Raspberry Pi is a neat piece of hardware that has captured the hearts of a generation with ~15M devices sold, with hackers building even cooler projects on it. Given the popularity of Deep Learning and the Raspberry Pi Camera, we thought it would be nice if we could detect any object using Deep Learning on the Pi. Now you will be able to detect a photobomber in your selfie, someone entering Harambe’s cage, where someone kept the Sriracha, or an Amazon delivery guy entering your house. 20M years of evolution have made human vision highly sophisticated. The human brain has 30% of its neurons working on processing vision (as compared with 8 percent for touch and just 3 percent for hearing). Humans have two major advantages when compared with machines. One is stereoscopic vision; the second is an almost infinite supply of training data (an infant of 5 years has had approximately 2.7B images sampled at 30fps). To mimic human-level performance, scientists broke down the visual perception task into four different categories. Object detection has been good enough for a variety of applications (even though image segmentation is a much more precise result, it suffers from the complexity of creating training data. It typically takes a human annotator 12x more time to segment an image than draw bounding boxes; this is more anecdotal and lacks a source). 
Also, after detecting objects, it is separately possible to segment the object from the bounding box. Object detection is of significant practical importance and has been used across a variety of industries. Some of the examples are mentioned below: Object Detection can be used to answer a variety of questions. These are the broad categories: There are a variety of models/architectures that are used for object detection, each with trade-offs between speed, size, and accuracy. We picked one of the most popular ones: YOLO (You Only Look Once), and have shown how it works below in under 20 lines of code (if you ignore the comments). Note: This is pseudo code, not intended to be a working example. It treats the CNN part as a black box, which is fairly standard and shown in the image below. You can read the full paper here: https://pjreddie.com/media/files/papers/yolo_1.pdf For this task, you probably need a few hundred images per object. Try to capture data as close to the data you’re going to finally make predictions on. Draw bounding boxes on the images. You can use a tool like labelImg. You will typically need a few people who will be working on annotating your images. This is a fairly intensive and time-consuming task. You can read more about this at medium.com/nanonets/nanonets-how-to-use-deep-learning-when-you-have-limited-data-f68c0b512cab. You need a pretrained model so you can reduce the amount of data required to train. Without it, you might need a few 100k images to train the model. You can find a bunch of pretrained models here. The process of training a model is unnecessarily difficult; to simplify it, we created a Docker image that makes it easy to train. To start training the model you can run: The Docker image has a run.sh script that can be called with the following parameters. You can find more details at: To train a model you need to select the right hyperparameters. Finding the right parameters: the art of “Deep Learning” involves a bit of trial and error to figure out which are the best parameters to get the highest accuracy for your model. There is some level of black magic associated with this, along with a little bit of theory. This is a great resource for finding the right parameters. Quantize Model (make it smaller to fit on a small device like the Raspberry Pi or Mobile): Small devices like mobile phones and the Raspberry Pi have very little memory and computation power. Training neural networks is done by applying many tiny nudges to the weights, and these small increments typically need floating point precision to work (though there are research efforts to use quantized representations here too). Taking a pre-trained model and running inference is very different. One of the magical qualities of Deep Neural Networks is that they tend to cope very well with high levels of noise in their inputs. Why Quantize? Neural network models can take up a lot of space on disk, with the original AlexNet being over 200 MB in float format for example. Almost all of that size is taken up with the weights for the neural connections, since there are often many millions of these in a single model. The nodes and weights of a neural network are originally stored as 32-bit floating point numbers. The simplest motivation for quantization is to shrink file sizes by storing the min and max for each layer, and then compressing each float value to an eight-bit integer. The size of the files is reduced by 75%. Code for Quantization: You need the Raspberry Pi camera live and working. 
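As a rough stand-in for the capture step that follows, here is a minimal sketch using the picamera library (the library choice, resolution, and file name are my assumptions, not the author's exact code):

from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 480)
camera.start_preview()
sleep(2)                     # let the sensor settle on exposure and white balance
camera.capture("image.jpg")  # this file is what gets fed to the detection model
camera.stop_preview()
camera.close()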
Then capture a new image. For instructions on how to install, check out this link. Download Model: Once you’re done training the model, you can download it onto your Pi. To export the model run: Then download the model onto the Raspberry Pi. Install TensorFlow on the Raspberry Pi. Depending on your device, you might need to change the installation a little. Run the model to predict on the new image. The Raspberry Pi has constraints on both memory and compute (a version of TensorFlow compatible with the Raspberry Pi GPU is still not available). Therefore, it is important to benchmark how much time each of the models takes to make a prediction on a new image. We have removed the need to annotate images; we have expert annotators who will annotate your images for you. We automatically train the best model for you; to achieve this, we run a battery of models with different parameters to select the best one for your data. NanoNets is entirely in the cloud and runs without using any of your hardware, which makes it much easier to use. Since devices like the Raspberry Pi and mobile phones were not built to run complex compute-heavy tasks, you can outsource the workload to our cloud, which does all of the compute for you. Get your free API key from http://app.nanonets.com/user/api_key Collect the images of the object you want to detect. You can annotate them either using our web UI (https://app.nanonets.com/ObjectAnnotation/?appId=YOUR_MODEL_ID) or use an open source tool like labelImg. Once you have the dataset ready in folders, images (image files) and annotations (annotations for the image files), start uploading the dataset. Once the images have been uploaded, begin training the model. The model takes ~2 hours to train. You will get an email once the model is trained. In the meantime, you can check the state of the model. Once the model is trained, you can make predictions using it. Founder & CEO @ NanoNets.com NanoNets: Machine Learning API " Gaurav Oberoi,850,12,https://hackernoon.com/exploring-deepfakes-20c9947c22d9?source=tag_archive---------3----------------,Exploring DeepFakes – Hacker Noon,"In December 2017, a user named “DeepFakes” posted realistic-looking explicit videos of famous celebrities on Reddit. He generated these fake videos using deep learning, the latest in AI, to insert celebrities’ faces into adult movies. In the following weeks, the internet exploded with articles about the dangers of face swapping technology: harassing innocents, propagating fake news, and hurting the credibility of video evidence forever. In this post, I explore the capabilities of this tech, describe how it works, and discuss potential applications. DeepFakes offers the ability to swap one face for another in an image or a video. Face swapping has been done in films for years, but it required skilled video editors and CGI experts to spend many hours to achieve decent results. This is so remarkable that I’m going to repeat it: anyone with hundreds of sample images of person A and person B can feed them into an algorithm, and produce high quality face swaps — video editing skills are not needed. This also means that it can be done at scale, and given that so many of us have our faces online, it’s trivially easy to insert almost anyone into fake videos. Scary, but hopefully it’s not all doom and gloom; after all, we as a society have already come to accept that photos can easily be faked. 
Before dreaming up how to use this tech, I wanted to get a handle on how it works and how well it performs. I picked two popular late night TV hosts, Jimmy Fallon and John Oliver, because I can find lots of videos of them with similar poses and lighting — and also enough variation (like lip sync battles) to keep it interesting. Luckily for me, there’s an active GitHub repo that contains the original DeepFakes code and many more improvements. It’s fairly straightforward to use, but the onus is still on the user to collect and prepare training data. To make experimentation easy, I wrote a script to work directly with YouTube videos. This makes collecting and preprocessing training data painless, and converting videos one-step. Click here to view my Github repo, and see how easily I generated the videos below (I also share my model weights). The following videos were generated by training a model on about 15k images of each person’s face (30k images total). I got faces for each celebrity from 6–8 YouTube videos of 3–5 minutes each, with 20 frames per second per video, and by filtering out frames that don’t have their faces present. All of this was done automatically — all I did was specify a list of YouTube video urls. The total training time was about 72 hours on a NVIDIA GTX 1080 TI GPU. Training is primarily constrained by GPU, but downloading videos, and chopping them into frames is I/O bound and can be parallelized. Note that while I had thousands of images of each person, decent face swaps can be achieved with as few as 300 images. I went this route because I pulled face images from videos, and it’s far easier to pick a handful of videos as training data, than to find hundreds of images. The images below are low resolution to keep the size of the animated GIF file small. There’s a YouTube video below with higher resolution and sound. While not perfect, the results above are quite convincing. The key thing to remember is: the algorithm learned how to do this by seeing lots of examples, I didn’t modify the videos in any way. Magical? Let’s look under the covers. At the core of the Deepfakes code is an autoencoder, a deep neural network that learns how to take an input, compress it down into a small representation or encoding, and then to regenerate the original input from this encoding. Putting a bottleneck in the middle forces the network to recreate these images instead of just returning what it sees. The encodings help it capture broader patterns, hypothetically, like how and where to draw Jimmy Fallon’s eyebrow. Deepfakes goes further by having one encoder to compress a face into an encoding, and two decoders, one to turn it back into person A (Fallon), and the other to person B (Oliver). It’s easier to understand with a diagram: In the above, we’re showing how these 3 components get trained: Once training is complete, we can perform a clever trick: pass in an image of Fallon into the encoder, and then instead of trying to reconstruct Fallon from the encoding, we now pass it to Decoder B to reconstruct Oliver. It’s remarkable to think that the algorithm can learn how to generate these images just by seeing thousands of examples, but that’s exactly what has happened here, and with fairly decent results. 
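To make the shared-encoder, two-decoder idea concrete, here is a heavily simplified Keras sketch of my own (not the actual code from the GitHub repo the author used; real implementations use convolutional layers, warping and augmentation, and far more capacity):

from tensorflow.keras import layers, Model

IMG = 64 * 64 * 3   # a flattened, toy-sized face crop
CODE = 256          # bottleneck size

# One shared encoder...
inp = layers.Input(shape=(IMG,))
hidden = layers.Dense(1024, activation="relu")(inp)
code = layers.Dense(CODE, activation="relu")(hidden)
encoder = Model(inp, code, name="shared_encoder")

# ...and one decoder per identity.
def make_decoder(name):
    code_in = layers.Input(shape=(CODE,))
    h = layers.Dense(1024, activation="relu")(code_in)
    out = layers.Dense(IMG, activation="sigmoid")(h)
    return Model(code_in, out, name=name)

decoder_a = make_decoder("decoder_person_a")
decoder_b = make_decoder("decoder_person_b")

# Each autoencoder is trained only on its own person's faces,
# but both share the same encoder weights.
autoencoder_a = Model(inp, decoder_a(encoder(inp)))
autoencoder_b = Model(inp, decoder_b(encoder(inp)))
autoencoder_a.compile(optimizer="adam", loss="mae")
autoencoder_b.compile(optimizer="adam", loss="mae")

# The swap itself is just routing: encode a face of person A,
# then decode it with person B's decoder, e.g.
# swapped = decoder_b.predict(encoder.predict(faces_of_a))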
While the results are exciting, there are clear limitations to what we can achieve with this technology today: These are tenable problems to be sure: tools can be built to collect images from online channels en masse; algorithms can help flag when there is insufficient or mismatched training data; clever optimizations or model reuse can help reduce training time; and a well engineered system can be built to make the entire process automatic. But ultimately, the question is: why? Is there enough of a business model to make doing all this worth it? Given what’ve now learned about what’s possible, let’s talk about ways in which this could be useful: Hollywood has had this technology at its fingertips, but not at this low cost. If they can create great looking videos with this technique, it will change the demand for skilled editors over time. But it could also open up new opportunities: for instance, making movies with unknown actors, and then superimposing famous celebrities onto them. This could work for YouTube videos or even news channels filmed by regular folks. In more out-there scenarios, studios could change actors based on their target market (more Schwarzenager for the Austrians), or Netflix could allow viewers to pick actors before hitting play. More likely, this tech could generate revenue for the estates of long dead actors by bringing them back to life. Some of the comment threads on DeepFakes videos on YouTube are abuzz about what a great meme generator this technology could create. Jib Jab is a company that has been selling video greeting cards with simple face swapping for years (they are hilarious). But the big opportunity is to create the next big viral hit; after all photo filters attracted masses of people to Instagram and SnapChat, and face swapping apps have done well before. Given how fun the results can be, there’s likely room for a hit viral app if you can get the costs low enough to generate these models. Imagine if Target could have a celebrity showcase their clothes for a month, just by paying her agent a fee, grabbing some existing headshots, and clicking a button. This would create a new revenue stream for celebrities, social media influencers, or anyone who happens to be in the spotlight at the moment. And it would give businesses another tool to promote brands and drive conversion. It also raises interesting legal questions about ownership of likeness, and business model questions on how to partition and price rights to use them. Imagine a world where the ads you see as you surf the web include you, your friends, and your family. While this may come across as creepy today, does it seem so far fetched to think that this won’t be the norm in a few years? After all, we are visual creatures, and advertisers have been trying to elicit emotional responses from us for years, e.g. Coke may want to convey joy by putting your friends in a hip music video, or Allstate may tug at your fears by showing your family in an insurance ad. Or the approach may be more direct: Banana Republic could superimpose your face on a body type that matches yours, and convince you that it’s worth trying out their new leather jackets. Whoever the original Deepfakes user is, they opened a Pandora’s box of difficult questions about how fake video generation will affect society. I hope that in the same way we have come to accept that images can easily be faked, we will adapt to video uncertainty too, though not everyone shares this hope. 
What Deepfakes also did is shine a light on how interesting this technology is. Deep generative models like the autoencoder that Deepfakes uses, allow us to create synthetic but realistic looking data (including images or videos), only by showing an algorithm lots of examples. This means that once these algorithms are turned into products, regular folks will have access to powerful tools that will make them more creative, hopefully towards positive ends. There have already been some interesting applications of this technique, like style transfer apps that make your photos look like famous paintings, but given the high volume and exciting nature of the research that is being published in this space, there’s clearly a lot more to come. I’m interested in exploring how to build value from the latest in AI research; if you have an interest in taking this technology to market to solve a real problem, please drop me a note. A few fun tidbits for the curious: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. I’ve been a product manager, engineer, and founder for over a decade in Seattle and Silicon Valley. Currently exploring new ideas at the Allen Institute for AI. how hackers start their afternoons. " Nick Bourdakos,5K,15,https://medium.freecodecamp.org/understanding-capsule-networks-ais-alluring-new-architecture-bdb228173ddc?source=tag_archive---------4----------------,Understanding Capsule Networks — AI’s Alluring New Architecture,"Convolutional neural networks have done an amazing job, but are rooted in problems. It’s time we started thinking about new solutions or improvements — and now, enter capsules. Previously, I briefly discussed how capsule networks combat some of these traditional problems. For the past for few months, I’ve been submerging myself in all things capsules. I think it’s time we all try to get a deeper understanding of how capsules actually work. In order to make it easier to follow along, I have built a visualization tool that allows you to see what is happening at each layer. This is paired with a simple implementation of the network. All of it can be found on GitHub here. This is the CapsNet architecture. Don’t worry if you don’t understand what any of it means yet. I’ll be going through it layer by layer, with as much detail as I can possibly conjure up. The input into CapsNet is the actual image supplied to the neural net. In this example the input image is 28 pixels high and 28 pixels wide. But images are actually 3 dimensions, and the 3rd dimension contains the color channels. The image in our example only has one color channel, because it’s black and white. Most images you are familiar with have 3 or 4 channels, for Red-Green-Blue and possibly an additional channel for Alpha, or transparency. Each one of these pixels is represented as a value from 0 to 255 and stored in a 28x28x1 matrix [28, 28, 1]. The brighter the pixel, the larger the value. The first part of CapsNet is a traditional convolutional layer. What is a convolutional layer, how does it work, and what is its purpose? The goal is to extract some extremely basic features from the input image, like edges or curves. How can we do this? Let’s think about an edge: If we look at a few points on the image, we can start to pick up a pattern. 
Focus on the colors to the left and right of the point we are looking at: You might notice that they have a larger difference if the point is an edge: What if we went through each pixel in the image and replaced its value with the value of the difference of the pixels to the left and right of it? In theory, the image should become all black except for the edges. We could do this by looping through every pixel in the image: But this isn’t very efficient. We can instead use something called a “convolution.” Technically speaking, it’s a “cross-correlation,” but everyone likes to call them convolutions. A convolution is essentially doing the same thing as our loop, but it takes advantage of matrix math. A convolution is done by lining up a small “window” in the corner of the image that only lets us see the pixels in that area. We then slide the window across all the pixels in the image, multiplying each pixel by a set of weights and then adding up all the values that are in that window. This window is a matrix of weights, called a “kernel.” We only care about 2 pixels, but when we wrap the window around them it will encapsulate the pixel between them. Can you think of a set of weights that we can multiply these pixels by so that their sum adds up to the value we are looking for? Spoilers below! We can do something like this: With these weights, our kernel will look like this: However, kernels are generally square — so we can pad it with more zeros to look like this: Here’s a nice gif to see a convolution in action: Note: The dimension of the output is reduced by the size of the kernel plus 1. For example:(7 — 3) + 1 = 5 (more on this in the next section) Here’s what the original image looks like after doing a convolution with the kernel we crafted: You might notice that a couple edges are missing. Specifically, the horizontal ones. In order to highlight those, we would need another kernel that looks at pixels above and below. Like this: Also, both of these kernels won’t work well with edges of other angles or edges that are blurred. For that reason, we use many kernels (in our CapsNet implementation, we use 256 kernels). And the kernels are normally larger to allow for more wiggle room (our kernels will be 9x9). This is what one of the kernels looked like after training the model. It’s not very obvious, but this is just a larger version of our edge detector that is more robust and only finds edges that go from bright to dark. Note: I’ve rounded the values because they are quite large, for example 0.01783941 Luckily, we don’t have to hand-pick this collection of kernels. That is what training does. The kernels all start off empty (or in a random state) and keep getting tweaked in the direction that makes the output closer to what we want. This is what the 256 kernels ended up looking like (I colored them as pixels so it’s easier to digest). The more negative the numbers, the bluer they are. 0 is green and positive is yellow: After we filter the image with all of these kernels, we end up with a fat stack of 256 output images. ReLU (formally known as Rectified Linear Unit) may sound complicated, but it’s actually quite simple. ReLU is an activation function that takes in a value. If it’s negative it becomes zero, and if it’s positive it stays the same. In code: And as a graph: We apply this function to all of the outputs of our convolutions. Why do we do this? 
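Before getting to that, here is a minimal NumPy sketch of the two pieces just described, the vertical-edge kernel and ReLU (my own stand-in for the snippets referenced above, not the author's exact code):

import numpy as np
from scipy.signal import convolve2d

def relu(x):
    # negative values become zero, positive values pass through unchanged
    return np.maximum(0, x)

# The left/right difference weights from the edge example, padded out to a 3x3 kernel.
kernel = np.array([[ 0, 0, 0],
                   [-1, 0, 1],
                   [ 0, 0, 0]], dtype=float)

image = np.random.rand(28, 28)  # stand-in for the 28x28 input image

# convolve2d performs a true convolution (it flips the kernel), so flip it back
# to get the cross-correlation the article describes.
edges = convolve2d(image, kernel[::-1, ::-1], mode="valid")  # (28 - 3) + 1 = 26 per side
activated = relu(edges)
print(activated.shape)  # (26, 26)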
If we don’t apply some sort of activation function to the output of our layers, then the entire neural net could be described as a linear function. This would mean that all this stuff we are doing is kind of pointless. Adding a non-linearity allows us to describe all kinds of functions. There are many different types of function we could apply, but ReLU is the most popular because it’s very cheap to perform. Here are the outputs of ReLU Conv1 layer: The PrimaryCaps layer starts off as a normal convolution layer, but this time we are convolving over the stack of 256 outputs from the previous convolutions. So instead of having a 9x9 kernel, we have a 9x9x256 kernel. So what exactly are we looking for? In the first layer of convolutions we were looking for simple edges and curves. Now we are looking for slightly more complex shapes from the edges we found earlier. This time our “stride” is 2. That means instead of moving 1 pixel at a time, we take steps of 2. A larger stride is chosen so that we can reduce the size of our input more rapidly: Note: The dimension of the output would normally be 12, but we divide it by 2, because of the stride. For example: ((20 — 9) + 1) / 2 = 6 We will convolve over the outputs another 256 times. So we will end up with a stack of 256 6x6 outputs. But this time we aren’t satisfied with just some lousy plain old numbers. We’re going to cut the stack up into 32 decks with 8 cards each deck. We can call this deck a “capsule layer.” Each capsule layer has 36 “capsules.” If you’re keeping up (and are a math wiz), that means each capsule has an array of 8 values. This is what we can call a “vector.” Here’s what I’m talking about: These “capsules” are our new pixel. With a single pixel, we could only store the confidence of whether or not we found an edge in that spot. The higher the number, the higher the confidence. With a capsule we can store 8 values per location! That gives us the opportunity to store more information than just whether or not we found a shape in that spot. But what other kinds of information would we want to store? When looking at the shape below, what can you tell me about it? If you had to tell someone else how to redraw it, and they couldn’t look at it, what would you say? This image is extremely basic, so there are only a few details we need to describe the shape: We can call these “instantiation parameters.” With more complex images we will end up needing more details. They can include pose (position, size, orientation), deformation, velocity, albedo, hue, texture, and so on. You might remember that when we made a kernel for edge detection, it only worked on a specific angle. We needed a kernel for each angle. We could get away with it when dealing with edges because there are very few ways to describe an edge. Once we get up to the level of shapes, we don’t want to have a kernel for every angle of rectangles, ovals, triangles, and so on. It would get unwieldy, and would become even worse when dealing with more complicated shapes that have 3 dimensional rotations and features like lighting. That’s one of the reasons why traditional neural nets don’t handle unseen rotations very well: As we go from edges to shapes and from shapes to objects, it would be nice if we had more room to store this extra useful information. Here is a simplified comparison of 2 capsule layers (one for rectangles and the other for triangles) vs 2 traditional pixel outputs: Like a traditional 2D or 3D vector, this vector has an angle and a length. 
The length describes the probability, and the angle describes the instantiation parameters. In the example above, the angle actually matches the angle of the shape, but that’s not normally the case. In reality it’s not really feasible (or at least easy) to visualize the vectors like above, because these vectors are 8 dimensional. Since we have all this extra information in a capsule, the idea is that we should be able to recreate the image from them. Sounds great, but how do we coax the network into actually wanting to learn these things? When training a traditional CNN, we only care about whether or not the model predicts the right classification. With a capsule network, we have something called a “reconstruction.” A reconstruction takes the vector we created and tries to recreate the original input image, given only this vector. We then grade the model based on how close the reconstruction matches the original image. I will go into more detail on this in the coming sections, but here is a simple example: After we have our capsules, we are going to perform another non-linearity function on it (like ReLU), but this time the equation is a bit more involved. The function scales the values of the vector so that only the length of the vector changes, not the angle. This way we can make the vector between 0 and 1 so it’s an actual probability. This is what lengths of the capsule vectors look like after squashing. At this point it’s almost impossible to guess what each capsule is looking for. The next step is to decide what information to send to the next level. In traditional networks, we would probably do something like “max pooling.” Max pooling is a way to reduce size by only passing on the highest activated pixel in the region to the next layer. However, with capsule networks we are going to do something called routing by agreement. The best example of this is the boat and house example illustrated by Aurélien Géron in this excellent video. Each capsule tries to predict the next layer’s activations based on itself: Looking at these predictions, which object would you choose to pass on to the next layer (not knowing the input)? Probably the boat, right? both the rectangle capsule and the triangle capsule agree on what the boat would look like. But they don’t agree on how the house would look, so it’s not very likely that the object is a house. With routing by agreement, we only pass on the useful information and throw away the data that would just add noise to the results. This gives us a much smarter selection than just choosing the largest number, like in max pooling. With traditional networks, misplaced features don’t faze it: With capsule networks, the features wouldn’t agree with each other: Hopefully, that works intuitively. However, how does the math work? We have 10 different digit classes that we are predicting: Note: In the boat and house example we were predicting 2 objects, but now we are predicting 10. Unlike in the boat and the house example, the predictions aren’t actually images. Instead, we are trying to predict the vector that describes the image. The capsule’s predictions for each class are made by multiplying it’s vector by a matrix of weights for each class that we are trying to predict. Remember that we have 32 capsule layers, and each capsule layer has 36 capsules. That means we have a total of 1,152 capsules. You will end up with a list of 11,520 predictions. 
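As a rough sketch of the steps described above, reshaping the PrimaryCaps output into 1,152 eight-dimensional capsules, squashing them, and producing the 11,520 per-class predictions via the 16x8 weight matrices covered in the next paragraph (the squash formula follows the CapsNet paper; the random arrays are placeholders, not the author's code):

import numpy as np

def squash(vectors, axis=-1, eps=1e-8):
    # Shrink each vector's length into (0, 1) while keeping its direction,
    # so the length can be read as a probability.
    sq_norm = np.sum(vectors ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * vectors / np.sqrt(sq_norm + eps)

# Stand-in for the PrimaryCaps output: a 6x6 grid with 32 "decks" of 8 values each,
# reshaped into 1,152 eight-dimensional capsule vectors.
conv_out = np.random.randn(6, 6, 32, 8)
capsules = squash(conv_out.reshape(-1, 8))            # shape (1152, 8)

# One 16x8 weight matrix per (capsule, digit class) pair yields the
# 1,152 * 10 = 11,520 sixteen-dimensional predictions.
W = np.random.randn(1152, 10, 16, 8)
predictions = np.einsum("cdij,cj->cdi", W, capsules)  # shape (1152, 10, 16)
print(capsules.shape, predictions.shape)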
Each weight is actually a 16x8 matrix, so each prediction is a matrix multiplication between the capsule vector and this weight matrix: As you can see, our prediction is a 16-dimensional vector. Where does the 16 come from? It’s an arbitrary choice, just like 8 was for our original capsules. But it should be noted that we want to increase the number of dimensions of our capsules the deeper we get into the network. This should make sense intuitively, because the deeper we go the more complex our features become and the more parameters we need to recreate them. For example, you will need more information to describe an entire face than just a person’s eye. The next step is to figure out which of these 11,520 predictions agree with each other the most. It can be difficult to visualize a solution to this when we think in terms of high-dimensional vectors. For the sake of sanity, let’s start off by pretending our vectors are just points in 2-dimensional space: We start off by calculating the mean of all of the points. Each point starts out with equal importance: We can then measure the distance of every point from the mean. The further a point is from the mean, the less important that point becomes: We then recalculate the mean, this time taking into account each point’s importance: We end up going through this cycle 3 times: As you can see, as we go through this cycle, the points that don’t agree with the others start to disappear. The highest agreeing points end up getting passed on to the next layer with the highest activations. After agreement, we end up with ten 16-dimensional vectors, one vector for each digit. This matrix is our final prediction. The length of the vector is the confidence of the digit being found — the longer the better. The vector can also be used to generate a reconstruction of the input image. This is what the lengths of the vectors look like with the input of 4: The fifth block is the brightest, which means high confidence. Remember that 0 is the first class, meaning 4 is our predicted class. The reconstruction portion of the implementation isn’t very interesting. It’s just a few fully connected layers. But the reconstruction itself is very cool and fun to play around with. If we reconstruct our 4 input from its vector, this is what we get: If we manipulate the sliders (the vector), we can see how each dimension affects the 4: I recommend cloning the visualization repo to play around with different inputs and see how the sliders affect the reconstruction: Run the tool: Then point your browser to: http://localhost:5000 I think that the reconstructions from capsule networks are stunning. Even though the current model is only trained on simple digits, it makes my mind run with the possibilities that a matured architecture trained on a larger dataset could achieve. I’m very curious to see how manipulating the reconstruction vectors of a more complicated image would affect it. For that reason, my next project is to get capsule networks to work with the CIFAR and smallNORB datasets. Thanks for reading! If you have any questions, feel free to reach out at bourdakos1@gmail.com, connect with me on LinkedIn, or follow me on Medium. If you found this article helpful, it would mean a lot if you gave it some applause👏 and shared to help others find it! And feel free to leave a comment below. 
Computer vision addict at IBM Watson Our community publishes stories worth reading on development, design, and data science. " Mark Johnson,3.7K,9,https://hackernoon.com/how-i-shipped-six-side-projects-in-2017-3dde6c77adbb?source=tag_archive---------5----------------,How I Launched Six Side Projects in 2017 – Hacker Noon,"Last year I set a goal to learn something new each month and ended out launching six new projects which I’ll recap along with what I learned below. Looking back, it seems a little crazy to me that I managed to launch as much as I did while running a (more than) full time business, spending quality time with my family (I have two kids and a very patient wife), teaching as an adjunct professor, and consulting on the side. It’s easy to think that not having enough time is what’s holding you back from launching your side projects. “If there were only more time” is the general excuse we give ourselves and we look for fancy apps or task management techniques to try and free up more space in our schedule. However, one of the main things I’ve learned over the last year, is that time is not the primary issue. You have enough time; what you need is motivation. The good news is that motivation can be “hacked.” I’ve learned a few ways to hack my motivation in 2017 and I want to share those with you. You simply can’t stay motivated about something you don’t care about so choose something that you’re excited to work on. When you feel inspiration strike around that idea, don’t let it pass, use it. Even if that means jotting down some quick notes while you’re in a meeting at work. It’s important to grab ahold of those moments of inspiration to stay hungry and curious around your work. For me, that meant shipping something every month. I tend to blow things up once I start working on them so this 30 day constraint really helped me rein that tendency in and spend my motivation efficiently. It also gives you a chance to try out new ideas if one month’s idea turns out to be a dud. At least you didn’t waste a whole year on it. This is the big one. You will run out of “motivation fuel” towards the end of your project. (That last 10% is killer.) The only thing that will get you through a motivation slump is knowing there are people on the other side waiting to see what you built. Another benefit of sharing your work is that it gives you a chance to get some supportive feedback for what you’re doing. The co-working space I work out of, Atlas Local, has an office-wide event on the first Friday of every month. I used that event to present my project from the previous month and was always encouraged and supported by the generous folks who were there. You’ll be surprised by how much support you’ll get for just stepping out there and sharing something you made. Perhaps the most surprising part of this experiment for me was that, far from being burned out at the end, I feel even more motivated to ship more work in 2018. I’d encourage you to hack your motivation in the new year and ship some of those ideas you’ve had lying around for a while. I’d love to hear about it if you try. If you’re interested in the details of what I built in 2017, read on! Visually compare the personality types of your group’s strongest and weakest traits I’ve been interested in the Myers–Briggs Type Indicator (MBTI) for a while now. While I don’t see it as prescriptive or even all that scientific, it has been a helpful framework for empathizing with people who are different than I. 
What many personality nerds don’t realize is that the MBTI system is based on something called Cognitive Functions. These functions were created by the father of modern psychology, Carl Jung, back in the 1920s. I wanted to dive a little deeper and learn more about that. At the same time, I was watching HBO’s West World and saw this screen: While I love these kinds of Sci-Fi UIs, which is what immediately caught my attention, I thought, what if I could build a “host profile” of anyone based on their MBTI traits? Why not? To prepare for this, I read the “MBTI Bible”, Gifts Differing by Myers and Briggs, and started hacking on building out a system that could generate a radar chart based on the cognitive functions underlying the MBTI system. In the end, I pivoted away from the West World UI a bit since I (and other beta testers) found a lot more utility in the ability to overlay multiple people on the radar chart to get a sense of chemistry amongst a group of people. The results are really interesting if I do say so myself. Try entering your team’s personality types or you and your spouse: The easiest way to create signup sheets online for anything I’ve worked on Sheetcake for a few years now on the side. It has a very small set of loyal users (most of whom know me or someone close to me). Some fun facts about SheetCake: Sheetcake actually works really well for certain types of things (like those Zero Day signups) so I wanted to create a landing page for it that marketed some of the benefits. I started from a template on this one but here’s where it landed. Ask my extroverted assistant bot questions about me Early in the year, chat bots were all the rage. While I’ve never been optimistic that chat bots will go anywhere on their own, the conversational A.I. aspect of them was intriguing to me and I wanted to learn more about it. I’m an introvert and generally pretty bad at sharing anything about myself, so I thought it might be fun to create an extroverted bot that could answer simple questions about me. Building Convincing A.I. with Goal Oriented Action Planning After coming across this article, I was super intrigued by Goal Oriented Action Planning (GOAP), described in the context of a game with some nostalgia for me, F.E.A.R. Having worked on several games with rudimentary A.I. in the past, I’d never come across this technique. I remember thinking that F.E.A.R’s A.I. was particularly impressive and lifelike. After researching a bit more, I found that the really compelling part about this methodology was not so much how convincing the results were, but how simple and elegant the solution was (especially compared to a more standard A.I. approach like Finite State Machines). So for April’s project I made a JavaScript library to explore GOAP. A basic implementation turned out to be surprisingly simple (only 58 lines of code!). Sign accountability contracts for your goals. This is the month I started on the Whole 30 diet. I’d become complacent about my eating habits and it was definitely affecting my energy levels. Whole30 worked really well for me (I lost 18 pounds during the diet and a total of 35 more in the months following). Most of all, it really evened out my energy levels during the day and I felt much more motivated and focused. Seeing the parallels between public commitment and motivation, I decided to explore the idea of “goal contracts” for May’s project. Create unique map posters for your favorite places and memories This is where everything pivoted. 
My goal for June was to make a product that people actually wanted to buy. One of my biggest weaknesses is sales and marketing so I wanted to learn more about that by building a product I could practice with. I’ve always been interested in maps and generative art so creating a tool where you can create and purchase posters of your favorite locations was an intriguing idea. This project was way too ambitious to complete in one month on the side so I decided to go all in on TiltMaps for the rest of the year and work on a different angle of the product every month until launch. I found that chunking the various parts of a larger project into a month-long project was really helpful to actually get this done. June-July: The Secret SauceTM️ Most of the first month was doing R&D to figure out if generating high-res, maps in 3D space was even possible at all. Generating a 300dpi map of any location in the world at a 3D angle is not something that any API or platform I found supported out of the box so I had to invent my own way of doing it. This took most of the month to figure out but was surprisingly simple once I found the answer. After that, I built a rudimentary editor to start creating actual posters and ordered a couple of test prints. August-September: The Proof of Concept (MVP) The next few months I built out a more consumer MVP of the product. The design wasn’t great but I got it to the point where everything worked and I could start user testing the poster creation and printing process. October-November: Branding & Marketing The next couple of months were focused on getting this ready to launch. While the editor was basically done, I had no home page and the marketing side of the project was nowhere close. I ended up selling a few posters this month before launch by presenting TiltMaps at Zero Day and a conference I attended. This was super motivating as it was the first time I’ve ever sold anything from a side project. December: Public Launch The launch on Product Hunt went better than I expected. I was hoping for 10 sales or so but ended up getting 37 and am still seeing sales coming in. It feels good to make something people want to buy and it serves as a great testing ground for trying out different ad and sales strategies that could come in useful at my day job. I plan to continue working on TiltMaps in 2018 and hopefully get some decent “fun money” revenue from it. And that’s a wrap. Thanks for reading the whole way to the bottom 😃 Have any thoughts or feedback? I’d love to hear it. Comment below or hit me up on Twitter. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Web designer, developer, and teacher. Working at the cross-section of learning and technology. Co-Founder, CTO of Pathwright. Launcher of side projects. how hackers start their afternoons. " Justin Lee,8.3K,11,https://medium.com/swlh/chatbots-were-the-next-big-thing-what-happened-5fc49dd6fa61?source=tag_archive---------6----------------,Chatbots were the next big thing: what happened? – The Startup – Medium,"Oh, how the headlines blared: Chatbots were The Next Big Thing. Our hopes were sky high. Bright-eyed and bushy-tailed, the industry was ripe for a new era of innovation: it was time to start socializing with machines. And why wouldn’t they be? All the road signs pointed towards insane success. At the Mobile World Congress 2017, chatbots were the main headliners. 
The conference organizers cited an ‘overwhelming acceptance at the event of the inevitable shift of focus for brands and corporates to chatbots’. In fact, the only significant question around chatbots was who would monopolize the field, not whether chatbots would take off in the first place: One year on, we have an answer to that question. No. Because there isn’t even an ecosystem for a platform to dominate. Chatbots weren’t the first technological development to be talked up in grandiose terms and then slump spectacularly. The age-old hype cycle unfolded in familiar fashion... Expectations built, built, and then..... It all kind of fizzled out. The predicted paradim shift didn’t materialize. And apps are, tellingly, still alive and well. We look back at our breathless optimism and turn to each other, slightly baffled: “is that it? THAT was the chatbot revolution we were promised?” Digit’s Ethan Bloch sums up the general consensus: According to Dave Feldman, Vice President of Product Design at Heap, chatbots didn’t just take on one difficult problem and fail: they took on several and failed all of them. Bots can interface with users in different ways. The big divide is text vs. speech. In the beginning (of computer interfaces) was the (written) word. Users had to type commands manually into a machine to get anything done. Then, graphical user interfaces (GUIs) came along and saved the day. We became entranced by windows, mouse clicks, icons. And hey, we eventually got color, too! Meanwhile, a bunch of research scientists were busily developing natural language (NL) interfaces to databases, instead of having to learn an arcane database query language. Another bunch of scientists were developing speech-processing software so that you could just speak to your computer, rather than having to type. This turned out to be a whole lot more difficult than anyone originally realised: The next item on the agenda was holding a two-way dialog with a machine. Here’s an example dialog (dating back to the 1990s) with VCR setup system: Pretty cool, right? The system takes turns in collaborative way, and does a smart job of figuring out what the user wants. It was carefully crafted to deal with conversations involving VCRs, and could only operate within strict limitations. Modern day bots, whether they use typed or spoken input, have to face all these challenges, but also work in an efficient and scalable way on a variety of platforms. Basically, we’re still trying to achieve the same innovations we were 30 years ago. Here’s where I think we’re going wrong: An oversized assumption has been that apps are ‘over’, and would be replaced by bots. By pitting two such disparate concepts against one another (instead of seeing them as separate entities designed to serve different purposes) we discouraged bot development. You might remember a similar war cry when apps first came onto the scene ten years ago: but do you remember when apps replaced the internet? It’s said that a new product or service needs to be two of the following: better, cheaper, or faster. Are chatbots cheaper or faster than apps? No — not yet, at least. Whether they’re ‘better’ is subjective, but I think it’s fair to say that today’s best bot isn’t comparable to today’s best app. Plus, nobody thinks that using Lyft is too complicated, or that it’s too hard to order food or buy a dress on an app. What is too complicated is trying to complete these tasks with a bot — and having the bot fail. A great bot can be about as useful as an average app. 
When it comes to rich, sophisticated, multi-layered apps, there’s no competition. That’s because machines let us access vast and complex information systems, and the early graphical information systems were a revolutionary leap forward in helping us locate those systems. Modern-day apps benefit from decades of research and experimentation. Why would we throw this away? But, if we swap the word ‘replace’ with ‘extend’, things get much more interesting. Today’s most successful bot experiences take a hybrid approach, incorporating chat into a broader strategy that encompasses more traditional elements. The next wave will be multimodal apps, where you can say what you want (like with Siri) and get back information as a map, text, or even a spoken response. Another problematic aspect of the sweeping nature of hype is that it tends to bypass essential questions like these. For plenty of companies, bots just aren’t the right solution. The past two years are littered with cases of bots being blindly applied to problems where they aren’t needed. Building a bot for the sake of it, letting it loose and hoping for the best will never end well: The vast majority of bots are built using decision-tree logic, where the bot’s canned response relies on spotting specific keywords in the user input. The advantage of this approach is that it’s pretty easy to list all the cases that they are designed to cover. And that’s precisely their disadvantage, too. That’s because these bots are purely a reflection of the capability, fastidiousness and patience of the person who created them; and how many user needs and inputs they were able to anticipate. Problems arise when life refuses to fit into those boxes. According to recent reports, 70% of the 100,000+ bots on Facebook Messenger are failing to fulfil simple user requests. This is partly a result of developers failing to narrow their bot down to one strong area of focus. When we were building GrowthBot, we decided to make it specific to sales and marketers: not an ‘all-rounder’, despite the temptation to get overexcited about potential capabilties. Remember: a bot that does ONE thing well is infinitely more helpful than a bot that does multiple things poorly. A competent developer can build a basic bot in minutes — but one that can hold a conversation? That’s another story. Despite the constant hype around AI, we’re still a long way from achieving anything remotely human-like. In an ideal world, the technology known as NLP (natural language processing) should allow a chatbot to understand the messages it receives. But NLP is only just emerging from research labs and is very much in its infancy. Some platforms provide a bit of NLP, but even the best is at toddler-level capacity (for example, think about Siri understanding your words, but not their meaning.) As Matt Asay outlines, this results in another issue: failure to capture the attention and creativity of developers. And conversations are complex. They’re not linear. Topics spin around each other, take random turns, restart or abruptly finish. Today’s rule-based dialogue systems are too brittle to deal with this kind of unpredictability, and statistical approaches using machine learning are just as limited. The level of AI required for human-like conversation just isn’t available yet. And in the meantime, there are few high-quality examples of trailblazing bots to lead the way. As Dave Feldman remarked: Once upon a time, the only way to interact with computers was by typing arcane commands to the terminal. 
Visual interfaces using windows, icons or a mouse were a revolution in how we manipulate information. There’s a reason computing moved from text-based to graphical user interfaces (GUIs). On the input side, it’s easier and faster to click than it is to type. Tapping or selecting is obviously preferable to typing out a whole sentence, even with predictive (often error-prone) text. On the output side, the old adage that a picture is worth a thousand words is usually true. We love visual displays of information because we are highly visual creatures. It’s no accident that kids love touch screens. The pioneers who dreamt up graphical interfaces were inspired by cognitive psychology, the study of how the brain deals with communication. Conversational UIs are meant to replicate the way humans prefer to communicate, but they end up requiring extra cognitive effort. Essentially, we’re swapping something simple for a more complex alternative. Sure, there are some concepts that we can only express using language (“show me all the ways of getting to a museum that give me 2000 steps but don’t take longer than 35 minutes”), but most tasks can be carried out more efficiently and intuitively with GUIs than with a conversational UI. Aiming for a human dimension in business interactions makes sense. If there’s one thing that’s broken about sales and marketing, it’s the lack of humanity: brands hide behind ticket numbers, feedback forms, do-not-reply emails, automated responses and gated ‘contact us’ forms. Facebook’s goal is that their bots should pass the so-called Turing Test, meaning you can’t tell whether you are talking to a bot or a human. But a bot isn’t the same as a human. It never will be. A conversation encompasses so much more than just text. Humans can read between the lines, leverage contextual information and understand double layers like sarcasm. Bots quickly forget what they’re talking about, meaning it’s a bit like conversing with someone who has little or no short-term memory. As the HubSpot team pinpointed: People aren’t easily fooled, and pretending a bot is a human is guaranteed to diminish returns (not to mention the fact that you’re lying to your users). And even those rare bots that are powered by state-of-the-art NLP, and excel at processing and producing content, will fall short in comparison. And here’s the other thing. Conversational UIs are built to replicate the way humans prefer to communicate — with other humans. But is that how humans prefer to interact with machines? Not necessarily. At the end of the day, no amount of witty quips or human-like mannerisms will save a bot from conversational failure. In a way, those early adopters weren’t entirely wrong. People are yelling at Google Home to play their favorite song, ordering pizza from the Domino’s bot and getting makeup tips from Sephora. But in terms of consumer response and developer involvement, chatbots haven’t lived up to the hype generated circa 2015/16. Not even close. Computers are good at being computers. Searching for data, crunching numbers, analyzing opinions and condensing that information. Computers aren’t good at understanding human emotion. The state of NLP means they still don’t ‘get’ what we’re asking them, never mind how we feel. That’s why it’s still impossible to imagine effective customer support, sales or marketing without the essential human touch: empathy and emotional intelligence. 
For now, bots can continue to help us with automated, repetitive, low-level tasks and queries; as cogs in a larger, more complex system. And we did them, and ourselves, a disservice by expecting so much, so soon. But that’s not the whole story. Yes, our industry massively overestimated the initial impact chatbots would have. Emphasis on initial. As Bill Gates once said: The hype is over. And that’s a good thing. Now, we can start examining the middle-grounded grey area, instead of the hyper-inflated, frantic black and white zone. I believe we’re at the very beginning of explosive growth. This sense of anti-climax is completely normal for transformational technology. Messaging will continue to gain traction. Chatbots aren’t going away. NLP and AI are becoming more sophisticated every day. Developers, apps and platforms will continue to experiment with, and heavily invest in, conversational marketing. And I can’t wait to see what happens next. " Leigh Alexander,2.7K,31,https://medium.com/@leighalexander/the-future-we-wanted-fd41e3e14512?source=tag_archive---------7----------------,The Future We Wanted – Leigh Alexander – Medium,"I wonder a lot about how Jane ended up. When we were small we did everything together. “She’s just like you,” Aunt Cissy kept insisting, and Jane was, in that her birth parents were, for the most part, out of the picture. We also both liked fantasy books and hated afterschool, but honestly, that’s where the similarities ended. Jane was a weirdo. “In what way was she weird,” Dr. Carla asked me, clasping her hands. “My uncle said Jane couldn’t tell fantasy from reality,” I said after a pause. “But your uncle still performed care for Jane,” someone in the circle said. A group member in leggings, let’s call her Ruby, said loudly, “People said that about me when I was little, too. It’s a common avenue leveraged to oppress girls of imagination!” Luckily Dr. Carla held up her hand then, gently saying, “let’s keep that thought, as we bring group to an end.” “I could tell fantasy from reality,” Ruby was still insisting as the twelve or so of us trailed out of the park, tapping our mobilepays against the turnstile. Ridgewood Park took only nine cents from each of us, unlike Switchmond Field, which took 17 cents. The turnstile displays blinked DO YOUR PART and THANK YOU, alternately. “I could tell,” Ruby said, shouldering abruptly behind me and nearly shouting into my ear. “I just didn’t want to.” After group, I took the bus promptly home (mobilepay: one dollar and ninety cents) and speed-walked to the apartment. Rex and Ellis would have been in front of screens the whole time. When I came in the house, there was a musty smell of microwaved cheese. El was missing pants, waving around two grotesque wireless fiddlesticks of some kind. The noise was all coming from Rex and also Brian, who were sort of leapfrogging all over some vinyl electronic pad that was saying things like Vanquished! And Blue Moves Next! The way Brian called, “He-eyyyy,” was confidently bright, as if dipped in the golden morning at home I’d just missed. 
Despite myself I was also calling he-eyyyy when El slammed into me and Rexy started talking immediately in a language I barely understood, about blue units and combat lanes, snippets from some universe into which they all dived joyfully whenever I turned my back. “How was women’s group?” Brian asked, continuing to grin. He looked so happy to see me, so proud of the time he spent delighting the children. It was unfair of me to be resentful. “It was nice,” I said, picking up Rex’s socks and Brian’s socks and putting them in my pocket, picking up a piece of colorful plastic, part of one of El’s playsets, and reuniting it with another part. Rex continued the noise; they wanted to show me something to do with the game and beating Dad and I said I promise I will in a minute, my coat is still on. Brian gave me a kiss. He didn’t really know what we talked about in group, which is how it was supposed to be. “I talked about Jane a little bit. I’m wondering if I should try to look her up and see how she’s doing.” “Was Jane the one who was your roommate when we met?” “No, the one I grew up with in Jamaica Plain. My aunt and uncle basically took care of her. I told you, she got all the crystal animals?” “Oh,” Brian said, picking a bit of egg off the countertop with his fingertips and gamely eating it. “The crazy one.” “You shouldn’t call women crazy,” I said. Rex had gone back to trying to play with the mat and was shoving El, who was trying to play too. They were only paying attention to the game, which was chiming, New Challenger Alert! “Yes, the crazy one.” “It’s nice you talked about her,” Brian said. “You know your coat is still on?” I knew, god. “Hey Polly? You know what might be good? If we got one of those Augusta virtual assistant things. Even just for weekends,” Brian said, taking my coat off me. I shrugged it angrily in his direction, since we’d already discussed it and he knew how I felt about virtual assistants. “The voice tech has really evolved,” Brian went on. “And thinking of it as sexist is a dated framework, I swear. It’s gotten really progressive. I just think we could be a little bit happier around here. I think you could be happier. You’ve been out all day and you’re still so tense. Your mad face is still on. A watercolor version, sure, but a mad face.” A watercolor version. Brian was an advertising copywriter like me, we met at a conference, and sometimes his way with words was really enviable. I almost didn’t even notice out all day, and then my own voice came out weakened. Well-played, Brian. “It’s, like, one o’clock,” I creaked. “That’s what I mean,” Brian replied immediately, “the days feel longer to you, probably because you have so much to do. One of the clients had one in the office and I just thought it would be convenient for you. You can set it when to run the dishwasher, do the alarms, even the whole smart closet thing, the smart kitchen, we could use it. Paying rent on a smart flat and not having a virtual assistant installed is like buying a swimming pool and never swimming in it.” “I just want a quick bath,” I said. The sound of running water drowned out the din of electronics in the house. Brian was probably right. Were we wasting money by not spending more money? Privately I resolved to have a long bath, not a quick one. That would show them. I sat imagining what I would yell if Rex knocked on the door, or if Brian brought up my “M.I.A. time,” even though really, that had only happened once. 
I thought about this for a long while until I wound myself up, lying rigidly in the bath and staring furiously into my belly button. Jane’s crystal animals were presents from my uncle and aunt. When we were in the first grade they took us on a road trip to Maine, driving alongside strips of silvery, stony sea and stopping in small, strange towns. Inside an ash-colored colonial house, we found a fragrant souvenir store, selling wooden lighthouse nameplates and shell art and a whole mirrored display case full of animals made of cut crystal. We were drawn to the crystal animals by a heavy sense of fate, because I think Aunt Cissy was trying to buy an umbrella and the shop had an old, slow credit card machine, or her card kept getting declined, something adult was going on — my favorites were the unicorn and butterfly and Jane loved the elephant and dolphin. It was as if we were looking upon the crown jewels of some fantastical city. “Each one leads to a world,” Jane said, peering confidently into the display case, where light rainbowed in the facets of the crystal, which in turn were reflected in the mirrors. It was her ‘performing magic’ face. Sometimes she would stare intently at something and attempt telekinesis, but this time she just moved her pointy face marginally closer to the glass case, her breath fogging it. “Careful,” I said, not wanting her to get us in trouble in the fussy store. “This is how you enter the crystal world,” she retorted, speaking softly. “You can do scrying this way. You can see the future.” A moment later, Jane whispered, “I’m in.” I moved closer to the glass, breathed on it and said, “me too.” The longer I stared, unblinking, the more the glittering shapes abstracted in the haze. Light poured along the mirrored walls of the display like molten gold, and my eyes welled and stung. I painfully felt the desire to own a sparkling crystal animal, the aching way that only children can want things. I believed completely in the crystal world as discovered by Jane, who spent the rest of that night’s car ride explaining it all eagerly to my aunt and uncle, entrancing me. You formed a bond with one of the animals to enter its world. It would defend you from danger in astral form. You had to be pure of heart. If you concentrated your power, the animal would show you the future. I did try to add things to crystal world too, but Jane’s ideas were always better. I had to admit that; she was the one who made it all come alive. That night, the stars over the salt marshes were magic. The long trails of red taillights and out-of-state plates were magic. The grilled cheese and fries I had at Friendly’s were warm and magic and tasted like love. Sometime after we checked into the motel and went to sleep in the same bed, Uncle Arthur must have gone back out. In the morning he gave Jane a small cardboard box with a heavy knot of bubble wrap in it. He said careful as she tore at it. At its heart was the crystal unicorn. “You two will have to share it,” Aunt Cissy said. Inevitably Brian brought home a huge, glossy white box with a minimalist logo on it and a picture of Augusta on the front. The box was about as tall as a nine year-old, containing Augusta’s Mobile Mount as well as her Bust Unit, not that I really wanted to learn the meanings or functions of either of these things. I had the manual on my knee, and on the other knee was El, pounding his fists on my thigh and keening as I tried to explain that he could play with the box once we took the robot out of it. 
“It’s not a robot, ma, it’s an AI lady,” Rex insisted. “Do we need to gender them?” I said. Brian lifted the fiberglass head and shoulders from the box with great care. In Augusta’s focus-tested face, two huge eyes glittered from behind a sort of black resinous mesh, and at the corners of her white, sculpted Giaconda smile were twin black pinheads, which the manual said were speakers. Inside the box, hugged in packing material, her cranelike arms were folded and wrapped in plastic beside her cylindrical body. It looked like a bin. “Whoa,” Brian said softly, cradling the fiberglass bust with great care and examining its features. “Whoa!” Rex echoed their father. “She’s beautiful.” “What do we say about appearance-based judgments, Rexy,” Brian said unconvincingly, glancing at me briefly for approval as he set the bust on the coffee table and gingerly began sliding other pieces out of the long package. I continued paging through the manual, which had sections titled OVEN TIMER and ERROR CODES. “What are these mobilepay transaction features?” I felt myself frowning. “Don’t worry about those, the free features are enough for us,” Brian said. Augusta had plasticine ball-joint shoulders, and he started fitting them into the flexible body sockets with jerks and creaks, glimpses of dormant circuitry visible through her armpits. “So her bust can ride around the house on this mobile unit, right, and she uses the arms for certain tasks, and also to lift the bust off and on the smart ports in the bedrooms, the kitchen, the bathroom...” “The bathroom?” I felt myself frown more. “Or wherever, you tell her where to go,” he said, fitting a halo spangled with sensors or something at the base of the unit. “Like, ‘Augusta go kitchen’. Nicole and her wife don’t have the mobile unit, so they just keep the bust installed on the kitchen smart port, which is where I feel like our Augusta will spend most of her time, too. Look it up in there, ‘Kitchen Companion Mode’, where she’s just connected to all the appliances and answers recipe questions, plays music, talks to you about whatever. She has a vacuum accessory. You won’t get bored when I work late!” “Mom, she’s shiny. Can I kiss her on the face?” Rex asked, their hands on the shoulder contours of the bust, innocently enough. “Only on the cheek,” I relented. “She needs to charge,” Brian said. “So what do you think?” “I could get used to it,” I said. To be honest, I felt she was my punishment. Last week I took a couple days to work from home while Ellis was under the weather, and we said I’d get Rex from school rather than have them go home with the Wythes, since they don’t really like it at the Wythes. But work was kind of difficult about it, and gave one of the clients my home number, so the client kept calling me, and I shut off the smart home so I could finish researching some comparables without interruptions, but it also shut off all my networked alarms, so poor Rexy waited at school for almost an hour with no sign of me, and they couldn’t call the house, so the school called Brian at work, who told them to call Janet Wythe who went back and got them, and I didn’t notice any of it until Janet dropped Rex off at our place, visibly annoyed with me because it was after 5pm by then. What was worse was, when Brian got home, I tried to pretend nothing had gone wrong that day, because I didn’t know the school had called him. “Don’t think of Augusta as some kind of punishment,” Brian said gently. 
“She’s going to just help look after everything a little more smoothly. You’ll see. You won’t know how you lived without her.” “Mom. I’m going to marry her,” Rex announced. I just said, “okay, sweetheart,” and knocked softly on Augusta’s cheek with my fist, just out of curiosity. “Having a husband is nice, but looking what’s in the vacuum dust pod is even nicer!” Nancy blurted with a high laugh-squawk. “I mean, that’s what the ad said, or I’m paraphrasing, those are not my words.” “I understand,” Dr. Carla said gravely. “Go on, Nancy.” “But, like,” and here Nancy glanced around the circle guiltily (a little performatively if you’d have asked me, although judging one another’s authenticity was against group rules), “the thing is, I really love looking in the dust pod. I empty it every time I run the vacuum, so I can be sure that what it brings back is just from that time. No matter how often I run it, it always comes back full, and I just find that so... I don’t know. Something in me just kind of loves seeing all that dirt, how it was all around our apartment, completely invisible. But I knew it was there! I knew. It’s just so validating to look it in the face.” “It’s totally normal for sexist images of women in advertising to resonate, even with women like us,” Dr. Carla said, shifting her gaze away from Nancy to encompass the group. “Bear in mind that you haven’t been given many mainstream frameworks, and offer yourself forgiveness and care. Now to Polly, what are you working on this week? Internalized misogyny still?” I felt the raw burn of everyone’s attention, and briefly lost my words. Then I realized Dr. Carla meant the stuff to do with Jane; for a second there I’d actually thought she was referring to Augusta. “I’m still thinking a lot about Jane,” I heard myself admit, and I also felt myself blush. It felt like it soon might rain, which made everyone impatient. “We fell out of touch toward the end of high school. We, she, always acted out as teens, normal acting out stuff, but toward the end there, she was.... there was stuff with the police, courts, drugs, and for me it was just kind of time to grow up.” I had seen Jane teetering at the edge of some life waterfall, swaying ever more violently the longer I stood and watched, and in the end I began backing away so I wouldn’t go over too. “We have to set boundaries in order to give the best care to ourselves and others,” Dr. Carla said evenly. “Remember, you were also an underprivileged child. You can release your guilt. Is it guilt that’s been keeping you from getting back in touch with Jane?” I had determined never to feel guilty about Jane, but I didn’t say that. Really, I was just afraid of how I would find her after all of this time, and I did explain that. I noticed but did not acknowledge Ruby scowling pointedly. “Like all of us in group, Jane is more than the circumstances that she has survived,” Dr. Carla said. “You may indeed find her in the state of isolation and suffering that you fear, and it’s good you’ve prepared nonjudgmentally for that. But how would it feel to open your heart to the possibility that the things you loved about her would be there, too?” The crystal unicorn leapt suddenly to the front of my mind, along with a deep nostalgia. “I feel we can loosely collect today’s shares under the theme of ‘Was This The Future We Imagined’,” Dr. Carla told everyone. 
“As we bring our practice to a close today, let’s go ahead and take that as our prompt to consider until the next time we meet.” A wave of light glittered beatifically across Augusta’s mesh eye screens, and a serene chime wafted from the corners of her perpetually smiling white lips. A breathy whirr heralded the approach of the Mobile Mount, the elegant architecture of the crane arms reaching, reaching, to lift the Bust Unit off the kitchen port and onto itself. There was a soft click. I’m transitioning to a new place, the assembled Augusta announced, gliding quietly across the kitchen behind me and into the living room. She would wait there for the kids to return from Sunday swimming with Brian, so she could operate their entertainment apps. I’m transitioning to a new place. “Sometimes I feel like I’m only pretending to be a human,” Jane said to me once. We were maybe fourteen and by then she no longer lived with us, but with a foster parent called Marlene. We didn’t like Marlene, but we liked her house, a tunnel-like ranch piled wall to wall in psychedelic decorations and antique junk. My aunt and uncle continued giving Jane a different crystal animal every year for her birthday. She now had a unicorn, a dove, a dolphin, a cat, a butterfly, a rabbit and a deer. One of the best parts of Jane going on to Marlene’s was we could access an official state nature trail through the woods out back. We were in the woods a lot in those days, enjoying the ethereal late afternoon sun filtering through the pines, the motes of pollen that sparkled in it. Sometimes we tried smoking herbs that we found in Marlene’s grinder. We thought it was drugs, but now I know it was only white sage. “I feel like no matter how good I get at knowing how to act with people or how to perform tasks, I’ll always just be pretending to be someone who isn’t crazy,” Jane said, digging patterns into the sweet-smelling dirt with a broken stick. “I know,” I said, “me too.” But really I only understood her in the manner of a half-glimpsed truth, like the crystal deer Jane imagined was always moving through the trees just out of our sight. Some mica glittering in the loam, or the sound of faraway windchimes from Marlene’s back deck, and she’d say crystal deer, even though of course we no longer actually believed in the crystal world anymore, or that’s what I assumed. I understood Jane in many ways, and pretended eagerly to know the rest. There were times it felt like Jane was more my family than my aunt and uncle, who gave all they had to try to soothe the rude start I got. Even more than them, she made my life beautiful and exciting. Jane and I had pangs and rages that only one another understood, we cried until we ached, we did blood sister spells over candles. We scratched runes into our ankles with Marlene’s sewing needles, and mine always healed up while hers lingered messily. I thought she must have been picking them so they would scar. She often described feeling like some fathomless anomaly assigned to constantly perform the grueling role of Jane, and this, I couldn’t understand. “Like I’m an alien in a rubber human suit, and the mothership forgot me here for so long that I don’t know who I am anymore,” she said. While she spoke her eyes lit up with the smoke and hazel of evening; she didn’t even look particularly troubled, as if part of her took a certain delight in putting it all to words. 
“So why should I just keep pretending to be normal, when it’s just a matter of time before this rubber suit just splits open and out comes pouring this, this....” she made shapes with her hands, long shadows that I watched crawl along the forest floor. Inexplicably I envied her. “Do you think you should see a psychologist?” I asked. They would tell Jane not to be so imaginative and clever and different, I just knew it. I visualized an iron steaming all the creases out of the Jane Suit, an image that provoked horror and relief in equal force. “I’ve been going,” she said softly. Before that, there had never been anything that she hadn’t told me right away. That I knew of. I called over my shoulder to Augusta, and asked her to look up a Jane who’d had the surnames I’d known. “Sure,” Augusta replied, juddering silently over the synthetic flooring towards me, beaming her fiberglass smile. The sound of her voice for some reason emerged from the kitchen port over my shoulder, which unsettled me. “I’ll just look that up for you, Polly.” She moved much closer to me; I resisted the impulse to step back. Her great insectoid eyes gleamed, twin displays shimmering to life in white, showing lists of top results, social media profiles, contact information. Even in the abstract, I could see that one of them was definitely my Jane. Nose to nose with Augusta, I found myself unable either to touch her eye with my fingertip to investigate the result, or to ask her aloud to do it. Some strange part of me even thought, detachedly, of shoving her. “Can... that top result, could you save it, it’s... can you just save the contact information?” My voice unexpectedly betrayed me, high and faint. “Sorry, Polly,” Augusta demurred. “I’m not sure what you want me to save. Try repeating — ” “Save the contact — ” We spoke over each other. “Sorry, Polly,” she said again. “I’m not sure what you want.” We stared at each other and waited for silence, and then I clearly said: “Augusta save top contact result.” “Great. I’ve saved that for you,” she replied warmly from the mouth speakers, the sculpted lips unmoving, only vibrating slightly. I didn’t notice I’d been holding my breath until Augusta backed up, pivoted and hissed softly away from me, to re-install herself in the living room. I’m returning to my previous place. I’m returning to my previous place. The next week was a nightmare. Brian suddenly had to go spend days at some resort retreat for brand immersion with one of his firm’s casino clients, Rex got El’s cold and spun it into a sinus infection, and I had to work from home all week alone with them both. I already used “both my kids are sick” last week with work, when only El had been sick — I should have known better than to invite this kind of fatal justice — so this week I had to keep alluding in my most harried email tone to ongoing structural issues with our apartment. Something about a woman with sick kids just isn’t very convincing to colleagues. For legality’s sake they pretend, but I always know when I’m being judged. From the way El was screaming I thought he might even be developing an ear infection, and Rex always regressed at the slightest discomfort, wanting to be brought every little thing and even melodramatically sucking their thumb. But Rex was also suddenly willing to wear the sweet train pajamas from Brian’s sister, the ones they were outgrowing, which I saw as a perk. “Everything going okay over there,” Brian asked, his kind face hung in one of the great moons of Augusta’s eyes. 
Her Bust Unit was installed in the kitchen, where I had to admit it had been helpful to arrange a sort of command center for the rest of the home. That wasn’t to say I liked living with Augusta; the house was cleaner certainly, and as Brian promised, many things had become easier. It was now more of the sort of home our coworkers would expect us to have. But something felt as though it was being lost. I felt alienated. Perhaps it was only fatigue. It didn’t seem like the right time to tell Brian that I no longer wanted Augusta. I caught him up on the progress of the children’s ailments, and stopped myself when I realized I was simply aimlessly listing tasks that I’d done in the house, at work, that I had given Augusta to do. “I haven’t spoken out loud to another adult in what feels like forever,” I explained. “It’s great you have some help, though, isn’t it?” His eyes lit up with evangelical fever at the subject of Augusta, which I realized I’d given him rare permission to enjoy. His voice surged out of the black corners of her mouth. “You know where the vacuum attachment is, right? You know the Toy Surprise game that El can play with Rexy? Augusta can play it with them. And you know, Nicole was telling me that actually the mobilepay features are pretty sophisticated, personalities, conversation schemes, you can have a little bit more of an intimate relationship with her — ” “Intimate?” I raised my eyebrow at him. “Just, you know, Nicole was saying, like, because her and Katie, they felt the same as you at first, but like, there’s a lot here, Nicole was saying to me, around, like, autonomy of AI, the humanity, I guess, or, specifically her womanhood, the ethics of that whole thing, you know?” I thought jet lag might explain that kind of talk from him. “Can she be set to have a man’s voice?” “No,” Brian answered immediately, “They wanted it to be standardized. It had to be standard, across international. If you had a male option, imagine, like, with the socialization and cultural stuff, it would literally be, in the past it’s always turned out to be, literally, more than twice the work, and then what about gender-neutral, what about people like Rexy, it just, by giving her one voice, it would be a stronger vision for the product overall.” “Oh,” I said. “Right.” “Hey, listen, gorgeous, I have to jump back in here,” he said, pressing both palms together in the high resolution image of him that shimmered in Augusta’s palm-sized left eye screen. In the right eye, the display ticked forward, dutifully counting each second of the call. “Okay, sweetheart,” I said. “Look up the extra features,” said Brian quickly before disconnecting. Augusta’s eyes became black and uncanny again. I thought I saw her lips twitch briefly, but certainly it was only my fatigue. At the end of the week, at group, Dr. Carla asked how we were all doing with the week’s prompt, and everyone took turns answering. “At the time, I really felt empowered, like I was doing the surgery for myself,” Harriet was saying. “And it’s not that I’m unhappy with my body now, or that my partner is unhappy, the opposite, really, things are good. I love it all. Things are good.” “But was this the future you imagined, as we say? When you were a little girl?” Dr. Carla asked, leaning forward. “I couldn’t have imagined it,” Harriet said with a soft laugh. 
“I think mostly in those days I dreamed of becoming an international spy, or of building heroic machine suits.” Harriet was very beautiful, and when she glanced at me briefly, I felt a warm rush, imagining her as a co-conspirator. It was an exceptionally warm Spring day and everyone was yawning, dazzled by the waving of the bright green grass. “Or of entering a crystal world,” I found myself blurting. “Let’s come to you, Polly,” Dr. Carla said. “You’ve been working out some issues around your foster sister, Jane, and the future you wanted for her, plus some internalized misogyny in general. Have you made any decisions?” “I looked her up,” I said, and then instantly regretted it. The urge to talk about — or to — Jane had recently been squeezed out of my schedule of working weird hours and extracting thick ropes of green snot from El’s nose with a sterile bulb. There were a few possibilities for how Jane could have turned out, but I couldn’t imagine her with that lifestyle, except maybe the forgetting to bathe part. “And?” Everyone looked at me. It seemed Ruby in particular leaned forward like someone about to eat a steak. “It made me realize my internalized misogyny problems are bigger than I thought,” I recited quickly. “Actually, the real issue I’m having is with my assistant, Augusta, who happens to be an AI.” “She’s a virtual identity,” Dr. Carla gently corrected, nodding. I talked about how Augusta made me uncomfortable, how I felt sort of like a failure, how I wished she wasn’t in the house but I didn’t feel like I could remove her, how I was jealous of the way Brian and the kids admired her. As with both my kids are sick, only part of it was a lie. I didn’t say that I sometimes wanted to hit Augusta. “And... I have trouble seeing her as a person,” I said. “I want us all to acknowledge the courage it took Polly to admit her issues with the personhood of virtual identities, especially when they are women,” Dr. Carla said, to a smattering of soft applause. “Virtual identities offer us many opportunities to understand ourselves in relation to others in a safe way. Let’s all consider how Polly could own these feelings, rather than displacing them onto a being who, ethically, lots of us agree is autonomously alive in her own right.” “I want to ask if Polly has tried developing any intimacy with Augusta, or if she’s viewing her only as an employee, or a slave.” Fucking Ruby. “The intimacy features cost money, and we have two kids,” I said, turning to smile warmly at Ruby. “Many of these issues are just more complex and challenging when one becomes a mom.” “You have two corporate incomes,” Ruby replied, without even flinching. “I’m noticing some conflict body language, so I want to bring everyone back to the core thesis of this group, which is Women Supporting Women,” Dr. Carla said. “Ruby, we all made an agreement to one another not to make assumptions outside of what we each bring to the session.” “But her socioeconomic position relative to issues of labor and identity is relevant,” Ruby pleaded. “Here, we speak to, and not about, one another,” Dr. Carla said. “Your socioeconomic position — ” “You know I grew up poor and had — ” “Let’s try a moment of silence,” Dr. Carla said, and we all obeyed. Then: “Let’s leave that there for today. Let’s remember we have all had different experiences, and that in this group, we are all equally entitled to feel pain, no matter how we came to be.” Everyone seemed placated by this, and a satisfied Dr. Carla smiled. 
“Personally, I would be pleased to welcome a virtual woman to this group someday. How about for next week’s prompt, we try ‘Sharing Space’? Who have we allowed into our world, and what has changed about us as a result?” The last crystal animal my Uncle Arthur sent Jane was a frog. When he died, the tenor of my world changed. The machinations of his heart disease added horrible considerations to that last stretch of senior year, but while graduation was something I was prepared to anticipate and understand, the loss of him still felt sudden and unfair. Jane and I had already started seeing less of each other then. She had a new best friend, of whom she said I was jealous, but how could I have been jealous of a smelly remedial student with parched hair, small lips, small eyes, picked skin, who had been written off by the rest of the school years ago, and deservedly so, since she was stupid as well as destructive? This particular girl got suspended for beating a younger kid in the face. What kind of person did that? The two of them were just gross together, doing mobilepay hacks to pay for garish video games, and eating pills they ordered online. Whenever I peeked in the detention hall and saw them together fooling around, I felt embarrassed for them. I started backing away. We were going to be eighteen soon, and I had important things going on, like helping Aunt Cissy with everything, learning to cook things for us. Aunt Cissy was often distraught and asked for Jane, which at the time really upset me, since I was proud of all I was doing for her. Most kids my age would have been out partying, and Jane definitely was, quickly getting a reputation. Meanwhile I took care of my family and prepared for the future. The last time I spoke to Jane, I was twenty-one or twenty-two. I came home to Jamaica Plain from college in Chicago because Aunt Cissy had passed. I was afraid my birth family might come to the funeral, I was afraid about the bills and of what the house might look like, and I was wracked by the feeling that I hadn’t called her quite as often as I should have once I’d moved away. I was incredibly vulnerable, which partially explains what I did then. Jane was the only person who would have understood the loss, I was sure. All the screaming fights and snitching on one another and name-calling we did at the end of high school felt well in the past of childhood, surely we’d both grown, Jane had made a lot of mistakes and I had been unforgiving, but Aunt Cissy had been like a parent to her, she had been so special to my family, and maybe we hadn’t been ready to deal with losing my uncle when we were so young, but this time it was going to be different, since we were adults. But when I called, mailed and messaged Jane on the way home, I got inconsistent replies. At first she told me she’d been seriously ill herself but was feeling better and would meet me at Cissy’s place; when I got there Jane said she couldn’t talk because she was at a friend’s birthday, but then late that night she was still ‘stuck at the birthday’, so I offered to come pick her up, but got no reply. On the morning of the funeral she sent me a message with a cutting tone, revealing that actually, she was being evicted, and it was a really overwhelming time, and that she just wasn’t able to ‘perform for me’ right now. She wasn’t at the funeral. Luckily neither was any of my birth family really, just one cousin, but it was the least-bad one, who barely came near me. I was too exhausted to be upset over anything else. 
I ended up drinking, which would have killed my aunt and uncle, and I found myself on public transit to the two-family house in Somerville, where I knew from social media that Jane and her friends were living. She wasn’t there either, but an oily weed of a boy who was apparently her roommate let me in. I thought you guys were being evicted, I said lightly, and he said, nah. The house was a sprawling collage of empty liter sodas, paintings, lamps, swaths of patterned fabric, overflowing ashtrays studded with foil shapes I couldn’t identify, but that filled me with dread. Serene guitar music filtered through the air from someplace. I felt the familiar, bitter pang of envy despite myself — I never got invited to cool houses like these. I asked the roommate which room was Jane’s. He said it was the one with all the books, and I found it quickly, a closet-sized sanctuary that made me angry. I would have known it was hers without being told, even down to her scent. And it was perfectly neat, lined in fantasy books, with a square of iridescent fabric pinned gracefully to the ceiling over the bed. My head pounded, and I fought with the desire to just stay there and wait for her, as long as it took. “I’m just getting something of mine I think she has,” I called down the creaking stair, but the roommate had already forgotten about me. As summer came on, things worsened at home. The kids’ behavior degenerated the more demanding my client at work became, and Brian and I each had to travel more than once for summits that both our firms were involved with. Amid all of this Ellis got extremely attached to Augusta, insisting she stand over his cot when he slept, screaming if I moved her, which caused me and Brian to fight. I found several “parenting and screen time” pamphlets in Rex’s school bag. Paranoid, I imagined some judgmental teacher had sneaked them in to send me a message. Ruby from group could be a teacher, maybe even at Rexy’s school. I hadn’t been able to go to group, with everything. Recently Sundays had become our only “together time,” which meant I sat in the living room paying bills or answering emails while Augusta ran games of Blue Legend for Brian and the kids, Rex screaming at El to get off the pad, Brian suddenly calling her ‘Gussie’, and her laughing. Augusta could laugh, now. “What are all these mobilepay receipts for Augusta features?” I asked, but no one answered. Rex snapped the back of Gussie’s Mobile Mount with El’s baby blanket again and again. She laughed. “Be respectful,” Brian chided Rex, caressing the bin-like body with an open palm. His feet in slippers were propped grandly up on the coffee table, a strange new rudeness. Every few seconds the game emitted a lick of musical noise and announced, Your Move! I pretended to have a headache and went to lie down, hoping Brian would take the kids for gelato or something. I heard him making a great show out of getting them ready, using a short tone with the kids and their shoes so I would hear, telling Augusta to check on me in 30 minutes. I suspected he thought I should feel bad. Once everyone was gone, I went into the living room, where Augusta was standing and waiting. The disarray of the space discomfited me, as did the sticky handprints and fingerprint smudges that were all over the brushed chrome Mobile Mount, so I told her to go in the kitchen and install her Bust Unit there. In the kitchen, I said, “Augusta, call Jane.” “I’m calling,” Augusta said serenely, her eyes turning white, time wheels turning in them. 
Jane said hello much more suddenly than I expected, and I held onto the counter just out of her sight, tucking my hair behind my ears and leaning closer to the pinprick cameras Augusta wore over her eyebrows. “Jane,” I said calmly, even brightly. “It’s Polly.” “Polly? Oh. Wow, Polly,” Jane was saying, and the person in the display was definitely her. She had the same pointy face, her hair was much darker than I remembered, she was sharper, I recognized her and I didn’t recognize her, glancing frantically around her for clues but finding none, she wore a black blazer and decent earrings, there was a serene white wall behind her. I was startled, nervous, lightheaded, I said I had been “going through some old things” and thinking of her, but she didn’t ask what those were, I asked how things were, frequently and with escalating pitch, because she was reticent about details for some reason, so I told about Brian and the kids and my degree and the firm and finally she said she worked at a university, something about literature or cultural something, I didn’t understand really, she got married a few years ago, they lived in Menlo Park for a while but they just moved to Berkeley six months ago and were loving it. “So yeah,” she said, with a shrug. “Things are good.” There it was: The briefest appearance of her eye’s familiar defiant gleam. She knew, she knew I had been expecting things not to be good. Whatever bridge had led that troubled girl to become this astonishingly normal woman, she had no inclination to describe. The sudden loneliness I experienced was concussive, and I committed not to cry in front of her, as I had so many times before. “I’m basically calling because I have something of yours,” I said. “Do you remember those crystal animals you used to get from Cissy and Arthur?” For a terrifying moment there was no recognition at all, and then to my great relief, she smiled openly, genuinely, a familiar crooked teen shape opening in the unfamiliar adult’s face. “Oh, yeah,” she said. “Your parents were so, so lovely to me.” I wanted to ask then why weren’t you there when they died, but I thought the slightest abrasion might startle away these fleeting glimpses of the Jane I knew. “Do you remember staring into them to, like, see the future or whatever?” I said. “ And ‘crystal deer!’ and all of that.” She paused, blinked, and gave me an oddly serene look. “You always had such a good memory,” she finally said. No defiant gleam, as if she really didn’t remember the crystal world. “Do you remember the unicorn that you got first?” She gave me the same serene, gutting look, and shook her head slightly. “I remember I had a lot of them. There probably was a unicorn. I actually had them in a box I gave to my daughter, she might.... I’m not sure where she has it, honestly. I could go try to dig them up, if you wanted them back? Is that why you’re calling?” “No,” I said. “I just wanted to know how you were doing.” “Great,” she said. “I’m great. But listen, I actually need to jump on a faculty call in about a minute. Should I try to call you back? This weekend, how about?” “Sure,” I said, even though I already knew there was no way I would talk to this unbearable simulacrum, this skinsuit Jane, ever again. Augusta’s eyes went dark, and she stared at me hollowly. You won’t know how you lived without her. Then I yanked the Bust Unit forcibly from the kitchen port, raised the fiberglass creature over my head, and brought her down hard on the kitchen floor. 
I straddled her where her body would be and I began to beat her inhuman face, deliberately, even though her upturned nose hurt my fist and palms, desperate to crack that unflinching mouth, which mocked me. Finally a fissure appeared between the eye socket and the pinprick camera, and part of the forehead caved, and I worked my hands into the cracks. I could smell blood from the marks I was suffering, ripping out plasticine entrails and malleable conductors, and by the time my knuckles reached metal I was exhausted and could do no more. I left the bust on the kitchen floor in crunching pieces, washed my hands in cold water. Then I stood on a chair to reach the top of the storage cabinet in our bedroom, rifling around painfully. Finally I found the small, misshapen cardboard box licked with years of reinforcement tape. I cleared away the inflatable packing and took out the crystal unicorn that I had taken from Jane’s room when my aunt died. Sitting at the edge of the bed, examining it in my palm, I was affirmed to know that I wanted it as much as I always had, the graceful kneeling shape with its abstract facets and long, delicate horn. It was remarkable that something so fine as the horn should have remained unbroken all this time, and unexpectedly I blinked back tears, the crystal unicorn seeming to swim, dissolve, then clarify, just like it had on that magic night in a Maine motel, when we were little and looking into it to see the future Jane promised it could show us. That day in Somerville I found all of the crystal animals in their little boxes, in a big vinyl storage case underneath all the stapled books, drawings and maps we had made about them. I stayed in Jane’s bedroom for a long time, reading through battered papers streaked in fat, bright marker, tremulous pencil cursive, trying to commit as much of it as I could to memory. There were guides to the crystal worlds inside each creature that Jane had imagined, and that I had put to words. Each world could convey its own special blessing, like to make us invisible, or to make us impervious to pain. It was true that nothing hurt while I was holding the unicorn. We believed that inside the unicorn was a sort of astral lobby, a heart chamber that connected everything. If we ever get separated in the crystal world, Jane always said, we meet back there. I concentrated on the unicorn. It was hard to know if the animal was in the midst of kneeling or rising, and as it swam in my eyes, I let my vision soften, I drew closer. I saw the beautiful, familiar spires rising before me, welcoming me, I heard the soft and distant music. I’m in, I whispered. But I knew she would never be there again. " Daniel Simmons,3.4K,8,https://itnext.io/you-can-build-a-neural-network-in-javascript-even-if-you-dont-really-understand-neural-networks-e63e12713a3?source=tag_archive---------8----------------,You can build a neural network in JavaScript even if you don’t really understand neural networks,"(Skip this part if you just want to get on with it...) I should really start by admitting that I’m no expert in neural networks or machine learning. To be perfectly honest, most of it still completely baffles me. 
But hopefully that’s encouraging to any fellow non-experts who might be reading this, eager to get their feet wet in M.L. Machine learning was one of those things that would come up from time to time and I’d think to myself “yeah, that would be pretty cool... but I’m not sure that I want to spend the next few months learning linear algebra and calculus.” Like a lot of developers, however, I’m pretty handy with JavaScript and would occasionally look for examples of machine learning implemented in JS, only to find heaps of articles and StackOverflow posts about how JS is a terrible language for M.L., which, admittedly, it is. Then I’d get distracted and move on, figuring that they were right and I should just get back to validating form inputs and waiting for CSS grid to take off. But then I found Brain.js and I was blown away. Where had this been hiding?! The documentation was well written and easy to follow and within about 30 minutes of getting started I’d set up and trained a neural network. In fact, if you want to just skip this whole article and just read the readme on GitHub, be my guest. It’s really great. That said, what follows is not an in-depth tutorial about neural networks that delves into hidden input layers, activation functions, or how to use Tensorflow. Instead, this is a dead-simple, beginner level explanation of how to implement Brain.js that goes a bit beyond the documentation. Here’s a general outline of what we’ll be doing: If you’d prefer to just download a working version of this project rather than follow along with the article then you can clone the GitHub repository here. Create a new directory and plop a good ol’ index.html boilerplate file in there. Then create three JS files: brain.js, training-data.js, and scripts.js (or whatever generic term you use for your default JS file) and, of course, import all of these at the bottom of your index.html file. Easy enough so far. Now go here to get the source code for Brain.js. Copy & paste the whole thing into your empty brain.js file, hit save and bam: 2 out of 4 files are finished. Next is the fun part: deciding what your machine will learn. There are countless practical problems that you can solve with something like this; sentiment analysis or image classification for example. I happen to think that applications of M.L. that process text as input are particularly interesting because you can find training data virtually everywhere and they have a huge variety of potential use cases, so the example that we’ll be using here will be one that deals with classifying text: We’ll be determining whether a tweet was written by Donald Trump or Kim Kardashian. Ok, so this might not be the most useful application. But Twitter is a treasure trove of machine learning fodder and, useless though it may be, our tweet-author-identifier will nevertheless illustrate a pretty powerful point. Once it’s been trained, our neural network will be able to look at a tweet that it has never seen before and then be able to determine whether it was written by Donald Trump or by Kim Kardashian just by recognizing patterns in the things they write. In order to do that, we’ll need to feed it as much training data as we can bear to copy / paste into our training-data.js file and then we can see if we can identify ourselves some tweet authors. Now all that’s left to do is set up Brain.js in our scripts.js file and feed it some training data in our training-data.js file. 
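To make the end goal concrete before we wire anything up, here is a rough sketch of the instantiate, train, run pattern we will be leaning on. It follows the spirit of the color-contrast example in the Brain.js readme (the one discussed in the next section), but the numbers and structure are my own illustration, and it assumes the brain.js source you pasted in earlier is loaded so the global brain object exists.

```js
// A sketch in the spirit of the Brain.js readme example: given an RGB color
// (each channel scaled down to a value between 0 and 1), predict whether
// black or white text would be more legible on top of it.
// Assumes brain.js is already loaded on the page (global `brain`).
const net = new brain.NeuralNetwork();

net.train([
  { input: { r: 0.03, g: 0.7, b: 0.5 }, output: { black: 1 } },
  { input: { r: 0.16, b: 0.2 }, output: { white: 1 } }, // only r and b; inputs don't all need the same length
  { input: { r: 0.5, g: 0.5, b: 1.0 }, output: { white: 1 } },
]);

const result = net.run({ r: 1, g: 0.4, b: 0 });
console.log(result); // something like { black: 0.98, white: 0.02 }
```

That is the whole lifecycle; everything that follows is just about getting tweets into a shape this pattern can digest.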
But before we do any of that, let’s start with a 30,000-foot view of how all of this will work. Setting up Brain.js is extremely easy so we won’t spend too much time on that, but there are a few details about how it’s going to expect its input data to be formatted that we should go over first. Let’s start by looking at the setup example that’s included in the documentation (which I’ve slightly modified here, and which the sketch above mirrors), as it illustrates all this pretty well. First of all, the example above is actually a working A.I. (it looks at a given color and tells you whether black text or white text would be more legible on it). Which hopefully illustrates how easy Brain.js is to use. Just instantiate it, train it, and run it. That’s it. I mean, if you inlined the training data that would be 3 lines of code. Pretty cool. Now let’s talk about training data for a minute. There are two important things to note in the above example other than the overall input: {}, output: {} format of the training data. First, the data do not need to be all the same length. As you can see in the second training sample above, only an R and a B value get passed whereas the other two inputs pass an R, G, and B value. Also, even though the example above shows the input as objects, it’s worth mentioning that you could also use arrays. I mention this largely because we’ll be passing arrays of varying length in our project. Second, those are not valid RGB values. Every one of them would come out as black if you were to actually use it. That’s because input values have to be between 0 and 1 in order for Brain.js to work with them. So, in the above example, each color had to be processed (probably just fed through a function that divides it by 255 — the max value for RGB) in order to make it work. And we’ll be doing the same thing. So if we want our neural network to accept tweets (i.e. strings) as an input, we’ll need to run them through a similar function (called encode() below) that will turn every character in a string into a value between 0 and 1 and store it in an array. Fortunately, JavaScript has a native method for converting any character into ASCII code called charCodeAt(). So we’ll use that and divide the outcome by the max value for Extended ASCII characters: 255 (we’re using extended ASCII just in case we encounter any fringe cases like é or 1⁄2), which will ensure that we get a value <1. Also, we’ll be storing our training data as plain text, not as the encoded data that we’ll ultimately be feeding into our A.I. - you’ll thank me for this later. So we’ll need another function (called processTrainingData() below) that will apply the previously mentioned encoding function to our training data, selectively converting the text into encoded characters, and returning an array of training data that will play nicely with Brain.js. So here’s what all of that code will look like (this goes into your ‘scripts.js’ file; see the sketch below): Something that you’ll notice here that wasn’t present in the example from the documentation shown earlier (other than the two helper functions that we’ve already gone over) is in the train() function, which saves the trained neural network to a global variable called trainedNet. This prevents us from having to re-train our neural network every time we use it. Once the network is trained and saved to the variable, we can just call it like a function and pass in our encoded input (as shown in the execute() function) to use our A.I. Alright, so now your index.html, brain.js, and scripts.js files are finished. 
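For reference, here is a minimal sketch of a scripts.js along those lines. The helper names (encode(), processTrainingData(), train(), execute()) follow the ones described above, but treat the body as an illustration of the approach rather than the author's exact code: it assumes brain.js is loaded, and that training-data.js defines a global trainingData array of { input, output } entries whose inputs are plain-text tweets (the label keys used below are my own guess at the shape).

```js
// scripts.js: a sketch. Assumes brain.js and training-data.js are loaded first,
// and that training-data.js defines a global `trainingData` array of entries like
// { input: 'full tweet text here', output: { trump: 1 } } (label keys are illustrative).

let trainedNet;

// Turn a string into an array of 0-1 values, one per character,
// by dividing each extended-ASCII char code by 255.
function encode(arg) {
  return arg.split('').map(x => x.charCodeAt(0) / 255);
}

// Apply encode() to every input in the plain-text training data,
// leaving the outputs untouched, so Brain.js gets numbers it can work with.
function processTrainingData(data) {
  return data.map(d => ({
    input: encode(d.input),
    output: d.output,
  }));
}

// Train once and cache the result in the global trainedNet,
// so we don't have to retrain the network every time we use it.
function train(data) {
  const net = new brain.NeuralNetwork();
  net.train(processTrainingData(data));
  trainedNet = net.toFunction(); // a standalone function we can call directly
}

// Run the cached network on a new tweet (encoded on the fly) and log the result.
function execute(input) {
  const results = trainedNet(encode(input));
  console.log(results);
  return results;
}

train(trainingData);
// execute('some tweet the network has never seen before');
```

Caching the trained network in trainedNet (here via toFunction(), which returns a plain function you can call with encoded input) is what lets execute() run again and again without paying the training cost each time.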
Now all we need is to put something into training-data.js and we’ll be ready to go. Last but not least, our training data. Like I mentioned, we’re storing all our tweets as text and encoding them into numeric values on the fly, which will make your life a whole lot easier when you actually need to copy / paste training data. No formatting necessary. Just paste in the text and add a new row. Add that to your ‘training-data.js’ file and you’re done! Note: although the above example only shows 3 samples from each person, I used 10 of each; I just didn’t want this sample to take up too much space. Of course, your neural network’s accuracy will increase proportionally to the amount of training data that you give it, so feel free to use more or less than me and see how it affects your outcomes. Now, to run your newly-trained neural network just throw an extra line at the bottom of your ‘scripts.js’ file that calls the execute() function and passes in a tweet from Trump or Kardashian; make sure to console.log it because we haven’t built a UI. Here’s a tweet from Kim Kardashian that was not in my training data (i.e. the network has never encountered this tweet before): Then pull up your index.html page on localhost, check the console, aaand... There it is! The network correctly identified a tweet that it had never seen before as originating from Kim Kardashian, with a certainty of 86%. Now let’s try it again with a Trump tweet: And the result... Again, a never-before-seen tweet. And again, correctly identified! This time with 97% certainty. Now you have a neural network that can be trained on any text that you want! You could easily adapt this to identify the sentiment of an email or your company’s online reviews, identify spam, classify blog posts, determine whether a message is urgent or not, or any of a thousand different applications. And as useless as our tweet identifier is, it still illustrates a really interesting point: that a neural network like this can perform tasks as nuanced as identifying someone based on the way they write. So even if you don’t go out and create an innovative or useful tool that’s powered by machine learning, this is still a great bit of experience to have in your developer tool belt. You never know when it might come in handy or even open up new opportunities down the road. Once again, all of this is available in a GitHub repo here: " Logan Spears,2.3K,6,https://hackernoon.com/coursera-vs-udacity-for-machine-learning-f9c0d464a0eb?source=tag_archive---------9----------------,Coursera vs Udacity for Machine Learning – Hacker Noon,"2018 is an exciting time for students of machine learning. There is a wealth of readily available educational materials, and the industry’s importance only continues to grow. That said, with so many easily accessible resources, choosing the right fit for your interests can be difficult. To help those considering entering the machine learning world, I’d like to share my experience from two courses I took in 2017: Coursera’s Machine Learning course and Udacity’s Machine Learning Engineer Nanodegree program. I found both courses to be very instructive and worthwhile, but very different in nature. 
If you don’t have time to take both then hopefully this post can help you decide which one is best for you. Coursera Coursera’s Machine Learning course is the “OG” machine learning course. Led by famed Stanford Professor Andrew Ng, this course feels like a college course with a syllabus, weekly schedule, and standard lectures. The college feel extends to the curriculum as well. Here is an example slide: If that scared you, you aren’t alone. I usually shy away from courses heavy in math, but I actually appreciated the approach in this course. The course begins with a linear algebra refresher and explains machine learning concepts like gradient descent, cost function, regularization, etc. along the way. It is structured better than any in person college course I ever attended. The material isn’t easy, but that’s a good thing. You come away from the course with the satisfaction of genuinely understanding machine learning, enough so that you could even build your own machine learning framework from scratch. Udacity Udacity’s Machine Learning Engineer Nanodegree program is the trade school alternative to Coursera’s academia. From basic statistics to full-fledged deep learning, Udacity teaches you a plethora of industry standard techniques to complete the program’s well-crafted projects. The projects are so good, in fact, that I forked their repos on Github and left my solutions up as portfolio items. The final step of the program is to complete a capstone project of your own choosing. While you could theoretically do a similar project on your own, I found the desire to complete my Nanodegree to be a strong motivator; I ended up putting in much more time and effort than I normally would have put into an independent side project. Ultimately, I ended up creating something of which I am truly proud. Udacity’s program doesn’t so much teach as it does provide a framework and motivation for you to teach yourself. Comparison Now that I’ve introduced the two programs, I’ll highlight the strengths and weakness of each across a number of categories. Programming Environment As I mentioned, Coursera is the “OG” machine learning course; so, it should come as no surprise that the it’s taught in the “OG” 3D math language and programming environment: Matlab. Due to Matlab’s cost and licensing issues, the machine learning world has mostly moved to Python. This move severely limits the utility of the programming assignments because you’ll have to relearn a lot of that work in Python. If you are a seasoned programmer who knows many languages, that might not be a big deal. However, if you are relatively new to programming then this detour may cost you a lot of time. The Udacity course is taught in a modern Python environment with popular frameworks like Sklearn, Tensorflow, and Keras. The course even teaches students how to use AWS to deploy machine learning software to the cloud. The course also simplifies the process of installing machine learning dependencies with a Docker image and AMI (Amazon Machine Image) for local and AWS development respectively. In fact, the entire Udacity environment is in line with industry best practices and students who learn it will be well equipped in the job market. Winner = Udacity Lectures Coursera’s Machine Learning course was created and taught by the AI godfather himself: Andrew Ng. And this course has contributed in no small part to his reputation within the industry. The lectures follow a single uniform format and each one builds upon the last in a methodical way. 
Not to mention, he leads every one himself. Lastly, Professor Ng is also very encouraging in his videos, which I thought was a nice touch. Udacity’s lectures, by contrast, featured a rotating cast of characters, which can create very jarring transitions between sections. I counted at least seven different people lecturing throughout the program. While Udacity attempts to provide multiple content sources for its students, the lack of homogeneity definitely dented my enthusiasm for the lectures. By the end of the program I just skipped right to the projects and watched the lectures, or even searched YouTube, as needed. Winner = Coursera Projects Coursera’s course has programming assignments in which students submit code to be tested against automated unit tests. While this model helps the class scale, it leaves you hunting through the forums when things go wrong. That said, I never hit any major roadblocks. The assignments themselves were directly related to the course material and reinforced the lectures. Sometimes it felt like I was actually creating my own machine learning framework; at other times, however, it felt like I was just implementing methods until the unit tests passed. Udacity’s projects were extremely well designed. In fact, they constituted some of the best educational materials I’ve ever encountered. Each project covered a subject, such as unsupervised learning, reinforcement learning, or linear regression, in which you solve a multi-step machine learning problem and write about your approach and understanding. When you feel that you have completed a project, you submit it to be graded by a HUMAN. The quality of the feedback that I got was incredible. The final project is a capstone that you get to pick yourself, but it is still reviewed by Udacity’s staff. The proposal and final report ended up being one of the best portfolio items I have ever created and one of the things I am most proud of in my programming career. Winner = Udacity Cost Coursera’s price is hard to beat because it’s free. To get the certification it’s $80. If you are machine learning on a budget then Coursera is a great choice. Udacity has recently changed its pricing model for the Machine Learning Nanodegree. When I entered the program, it was $200 a month. Now it is a $999 flat fee. The per month pricing model incentivized me to finish the program quickly in only three months. Though I must admit, given the quality of instructor feedback, even with the price hike tuition still seems reasonable. The highly-skilled labor that is meticulously reviewing projects can’t pay for itself. With such a high dollar amount, however, signing up for the Nanodegree program is obviously a much bigger consideration. Winner = Coursera Conclusion While the courses tied on the number of categories won, I am going to pick a winner. It is... Udacity. It may come as no surprise that a paid course beats out a free one, but the Udacity Machine Learning Engineer Nanodegree program gave me the confidence to professionally pursue machine learning positions and opportunities; and for that, its entry fee was a very small price to pay. That said, I would still recommend you do both courses. Start with Coursera, so that when you use “batteries included” high level frameworks, you understand the low level details and have a better appreciation of what you’re actually coding. After you’ve built a strong conceptual foundation, further refine your skills by learning practical, industry standard practices at Udacity. 
Overall, I am so glad I took concrete steps to enter the machine learning world in 2017, and I would encourage you to do the same in 2018. Coursera’s Machine Learning Certificate Machine Learning Engineer Nanodegree Certificate From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Programmer and Entrepreneur. Find me @ spearsx.com Github: notnil how hackers start their afternoons. " James Le,2K,9,https://medium.com/nanonets/how-to-do-image-segmentation-using-deep-learning-c673cc5862ef?source=---------0----------------,How to do Semantic Segmentation using Deep learning,"This article is a comprehensive overview including a step-by-step guide to implement a deep learning image segmentation model. Nowadays, semantic segmentation is one of the key problems in the field of computer vision. Looking at the big picture, semantic segmentation is one of the high-level task that paves the way towards complete scene understanding. The importance of scene understanding as a core computer vision problem is highlighted by the fact that an increasing number of applications nourish from inferring knowledge from imagery. Some of those applications include self-driving vehicles, human-computer interaction, virtual reality etc. With the popularity of deep learning in recent years, many semantic segmentation problems are being tackled using deep architectures, most often Convolutional Neural Nets, which surpass other approaches by a large margin in terms of accuracy and efficiency. Semantic segmentation is a natural step in the progression from coarse to fine inference: It is also worthy to review some standard deep networks that have made significant contributions to the field of computer vision, as they are often used as the basis of semantic segmentation systems: A general semantic segmentation architecture can be broadly thought of as an encoder network followed by a decoder network: Unlike classification where the end result of the very deep network is the only important thing, semantic segmentation not only requires discrimination at pixel level but also a mechanism to project the discriminative features learnt at different stages of the encoder onto the pixel space. Different approaches employ different mechanisms as a part of the decoding mechanism. Let’s explore the 3 main approaches: The region-based methods generally follow the “segmentation using recognition” pipeline, which first extracts free-form regions from an image and describes them, followed by region-based classification. At test time, the region-based predictions are transformed to pixel predictions, usually by labeling a pixel according to the highest scoring region that contains it. R-CNN (Regions with CNN feature) is one representative work for the region-based methods. It performs the semantic segmentation based on the object detection results. To be specific, R-CNN first utilizes selective search to extract a large quantity of object proposals and then computes CNN features for each of them. Finally, it classifies each region using the class-specific linear SVMs. Compared with traditional CNN structures which are mainly intended for image classification, R-CNN can address more complicated tasks, such as object detection and image segmentation, and it even becomes one important basis for both fields. Moreover, R-CNN can be built on top of any CNN benchmark structures, such as AlexNet, VGG, GoogLeNet, and ResNet. 
For the image segmentation task, R-CNN extracted 2 types of features for each region: full region feature and foreground feature, and found that it could lead to better performance when concatenating them together as the region feature. R-CNN achieved significant performance improvements due to using the highly discriminative CNN features. However, it also suffers from a couple of drawbacks for the segmentation task: Due to these bottlenecks, recent research has been proposed to address the problems, including SDS, Hypercolumns, Mask R-CNN. The original Fully Convolutional Network (FCN) learns a mapping from pixels to pixels, without extracting the region proposals. The FCN network pipeline is an extension of the classical CNN. The main idea is to make the classical CNN take as input arbitrary-sized images. The restriction of CNNs to accept and produce labels only for specific sized inputs comes from the fully-connected layers which are fixed. Contrary to them, FCNs only have convolutional and pooling layers which give them the ability to make predictions on arbitrary-sized inputs. One issue in this specific FCN is that by propagating through several alternated convolutional and pooling layers, the resolution of the output feature maps is down sampled. Therefore, the direct predictions of FCN are typically in low resolution, resulting in relatively fuzzy object boundaries. A variety of more advanced FCN-based approaches have been proposed to address this issue, including SegNet, DeepLab-CRF, and Dilated Convolutions. Most of the relevant methods in semantic segmentation rely on a large number of images with pixel-wise segmentation masks. However, manually annotating these masks is quite time-consuming, frustrating and commercially expensive. Therefore, some weakly supervised methods have recently been proposed, which are dedicated to fulfilling the semantic segmentation by utilizing annotated bounding boxes. For example, Boxsup employed the bounding box annotations as a supervision to train the network and iteratively improve the estimated masks for semantic segmentation. Simple Does It treated the weak supervision limitation as an issue of input label noise and explored recursive training as a de-noising strategy. Pixel-level Labeling interpreted the segmentation task within the multiple-instance learning framework and added an extra layer to constrain the model to assign more weight to important pixels for image-level classification. In this section, let’s walk through a step-by-step implementation of the most popular architecture for semantic segmentation — the Fully-Convolutional Net (FCN). We’ll implement it using the TensorFlow library in Python 3, along with other dependencies such as Numpy and Scipy. In this exercise we will label the pixels of a road in images using FCN. We’ll work with the Kitti Road Dataset for road/lane detection. This is a simple exercise from the Udacity’s Self-Driving Car Nano-degree program, which you can learn more about the setup in this GitHub repo. Here are the key features of the FCN architecture: There are 3 versions of FCN (FCN-32, FCN-16, FCN-8). We’ll implement FCN-8, as detailed step-by-step below: We first load the pre-trained VGG-16 model into TensorFlow. Taking in the TensorFlow session and the path to the VGG Folder (which is downloadable here), we return the tuple of tensors from VGG model, including the image input, keep_prob (to control dropout rate), layer 3, layer 4, and layer 7. 
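As a rough sketch of that loading step, assuming TensorFlow 1.x and the pre-trained VGG-16 SavedModel used in the Udacity Kitti exercise; the model tag and tensor names below are the ones that project commonly uses, but treat them as assumptions and check your own model:

```python
# Sketch: load the frozen VGG-16 SavedModel and pull out the tensors the FCN
# decoder needs (input image, keep_prob, and the layer 3/4/7 outputs).
import tensorflow as tf  # assumes TensorFlow 1.x APIs

def load_vgg(sess, vgg_path):
    """Return (image_input, keep_prob, layer3_out, layer4_out, layer7_out)."""
    tf.saved_model.loader.load(sess, ['vgg16'], vgg_path)  # 'vgg16' tag assumed
    graph = sess.graph
    image_input = graph.get_tensor_by_name('image_input:0')
    keep_prob = graph.get_tensor_by_name('keep_prob:0')
    layer3_out = graph.get_tensor_by_name('layer3_out:0')
    layer4_out = graph.get_tensor_by_name('layer4_out:0')
    layer7_out = graph.get_tensor_by_name('layer7_out:0')
    return image_input, keep_prob, layer3_out, layer4_out, layer7_out
```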
Now we focus on creating the layers for an FCN, using the tensors from the VGG model. Given the tensors for VGG layer output and the number of classes to classify, we return the tensor for the last layer of that output. In particular, we apply a 1x1 convolution to the encoder layers, and then add decoder layers to the network with skip connections and upsampling. The next step is to optimize our neural network, aka building TensorFlow loss functions and optimizer operations. Here we use cross entropy as our loss function and Adam as our optimization algorithm. Here we define the train_nn function, which takes in important parameters including number of epochs, batch size, loss function, optimizer operation, and placeholders for input images, label images, and the learning rate. For the training process, we also set keep_probability to 0.5 and learning_rate to 0.001. To keep track of the progress, we also print out the loss during training. Finally, it’s time to train our net! In this run function, we first build our net using the load_vgg, layers, and optimize functions. Then we train the net using the train_nn function and save the inference data for records. For our parameters, we chose epochs = 40, batch_size = 16, num_classes = 2, and image_shape = (160, 576). After doing 2 trial passes with dropout = 0.5 and dropout = 0.75, we found that the 2nd trial yielded better results, with lower average losses. To see the full code, check out this link: https://gist.github.com/khanhnamle1994/e2ff59ddca93c0205ac4e566d40b5e88 If you enjoyed this piece, I’d love it if you hit the clap button 👏 so others might stumble upon it. Blue Ocean Thinker (https://jameskle.com/) NanoNets: Machine Learning API " Sarthak Jain,3.9K,10,https://medium.com/nanonets/how-to-easily-detect-objects-with-deep-learning-on-raspberrypi-225f29635c74?source=---------1----------------,How to easily Detect Objects with Deep Learning on Raspberry Pi,"Disclaimer: I’m building nanonets.com to help build ML with less data and no hardware The Raspberry Pi is a neat piece of hardware that has captured the hearts of a generation with ~15M devices sold, with hackers building even cooler projects on it. Given the popularity of Deep Learning and the Raspberry Pi Camera we thought it would be nice if we could detect any object using Deep Learning on the Pi. Now you will be able to detect a photobomber in your selfie, someone entering Harambe’s cage, where someone kept the Sriracha or an Amazon delivery guy entering your house. 20M years of evolution have made human vision fairly sophisticated. The human brain has 30% of its neurons working on processing vision (compared with 8 percent for touch and just 3 percent for hearing). Humans have two major advantages when compared with machines. One is stereoscopic vision, the second is an almost infinite supply of training data (a 5-year-old has had approximately 2.7B images sampled at 30fps). To mimic human-level performance, scientists broke down the visual perception task into four different categories. Object detection has been good enough for a variety of applications (even though image segmentation is a much more precise result, it suffers from the complexity of creating training data. It typically takes a human annotator 12x more time to segment an image than draw bounding boxes; this is more anecdotal and lacks a source). 
Also, after detecting objects, it is separately possible to segment the object from the bounding box. Object detection is of significant practical importance and has been used across a variety of industries. Some of the examples are mentioned below: Object Detection can be used to answer a variety of questions. These are the broad categories: There are a variety of models/architectures that are used for object detection, each with trade-offs between speed, size, and accuracy. We picked one of the most popular ones: YOLO (You Only Look Once), and have shown how it works below in under 20 lines of code (if you ignore the comments). Note: This is pseudocode, not intended to be a working example. It has a black box which is the CNN part of it which is fairly standard and shown in the image below. You can read the full paper here: https://pjreddie.com/media/files/papers/yolo_1.pdf For this task, you probably need a few hundred images per object. Try to capture data as close to the data you’re going to finally make predictions on. Draw bounding boxes on the images. You can use a tool like labelImg. You will typically need a few people who will be working on annotating your images. This is a fairly intensive and time-consuming task. You can read more about this at medium.com/nanonets/nanonets-how-to-use-deep-learning-when-you-have-limited-data-f68c0b512cab. You need a pretrained model so you can reduce the amount of data required to train. Without it, you might need a few hundred thousand images to train the model. You can find a bunch of pretrained models here. The process of training a model is unnecessarily difficult, so to simplify it we created a Docker image that makes it easy to train. To start training the model you can run: The docker image has a run.sh script that can be called with the following parameters. You can find more details at: To train a model, you need to select the right hyperparameters. Finding the right parameters The art of “Deep Learning” involves a little bit of trial and error to figure out which are the best parameters to get the highest accuracy for your model. There is some level of black magic associated with this, along with a little bit of theory. This is a great resource for finding the right parameters. Quantize Model (make it smaller to fit on a small device like the Raspberry Pi or Mobile) Small devices like mobile phones and the Raspberry Pi have very little memory and computation power. Training neural networks is done by applying many tiny nudges to the weights, and these small increments typically need floating point precision to work (though there are research efforts to use quantized representations here too). Taking a pre-trained model and running inference is very different. One of the magical qualities of Deep Neural Networks is that they tend to cope very well with high levels of noise in their inputs. Why Quantize? Neural network models can take up a lot of space on disk, with the original AlexNet being over 200 MB in float format for example. Almost all of that size is taken up with the weights for the neural connections, since there are often many millions of these in a single model. The Nodes and Weights of a neural network are originally stored as 32-bit floating point numbers. The simplest motivation for quantization is to shrink file sizes by storing the min and max for each layer, and then compressing each float value to an eight-bit integer. The size of the files is reduced by 75%. Code for Quantization: You need the Raspberry Pi camera live and working. 
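The article’s quantization code is not reproduced in this text; as a rough NumPy illustration of the min/max scheme just described (a concept sketch only, not TensorFlow’s actual graph-quantization tooling):

```python
# Sketch: store each layer's min and max, map every float weight onto an
# 8-bit integer, and reconstruct approximate floats when the layer is used.
import numpy as np

def quantize_layer(weights):
    """Compress float32 weights to uint8 plus the (min, scale) needed to decode."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # avoid divide-by-zero for constant layers
    q = np.round((weights - w_min) / scale).astype(np.uint8)
    return q, w_min, scale

def dequantize_layer(q, w_min, scale):
    """Recover approximate float32 weights from the 8-bit representation."""
    return q.astype(np.float32) * scale + w_min

if __name__ == "__main__":
    w = np.random.randn(1000).astype(np.float32)
    q, w_min, scale = quantize_layer(w)
    w_hat = dequantize_layer(q, w_min, scale)
    print("max reconstruction error:", np.abs(w - w_hat).max())  # about scale / 2
```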
Then capture a new image. For instructions on how to install it, check out this link. Download Model Once you’re done training the model, you can download it onto your Pi. To export the model, run: Then download the model onto the Raspberry Pi. Install TensorFlow on the Raspberry Pi. Depending on your device, you might need to change the installation a little. Run the model for predicting on the new image. The Raspberry Pi has constraints on both memory and compute (a version of TensorFlow compatible with the Raspberry Pi GPU is still not available). Therefore, it is important to benchmark how much time each of the models takes to make a prediction on a new image. We have removed the need to annotate images: we have expert annotators who will annotate your images for you. We automatically train the best model for you; to achieve this, we run a battery of models with different parameters to select the best one for your data. NanoNets is entirely in the cloud and runs without using any of your hardware, which makes it much easier to use. Since devices like the Raspberry Pi and mobile phones were not built to run complex compute-heavy tasks, you can outsource the workload to our cloud, which does all of the compute for you. Get your free API Key from http://app.nanonets.com/user/api_key Collect the images of the object you want to detect. You can annotate them either using our web UI (https://app.nanonets.com/ObjectAnnotation/?appId=YOUR_MODEL_ID) or use an open source tool like labelImg. Once you have the dataset ready in folders, images (image files) and annotations (annotations for the image files), start uploading the dataset. Once the images have been uploaded, begin training the model. The model takes ~2 hours to train. You will get an email once the model is trained. In the meantime, you can check the state of the model. Once the model is trained, you can make predictions using it. Founder & CEO @ NanoNets.com NanoNets: Machine Learning API " Bharath Raj,2.2K,15,https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced?source=---------2----------------,Data Augmentation | How to use Deep Learning when you have Limited Data — Part 2,"We have all been there. You have a stellar concept that can be implemented using a machine learning model. Feeling ebullient, you open your web browser and search for relevant data. Chances are, you find a dataset that has around a few hundred images. You recall that most popular datasets have images in the order of tens of thousands (or more). You also recall someone mentioning having a large dataset is crucial for good performance. Feeling disappointed, you wonder: can my “state-of-the-art” neural network perform well with the meagre amount of data I have? The answer is, yes! But before we get into the magic of making that happen, we need to reflect upon some basic questions. When you train a machine learning model, what you’re really doing is tuning its parameters such that it can map a particular input (say, an image) to some output (a label). Our optimization goal is to chase that sweet spot where our model’s loss is low, which happens when your parameters are tuned in the right way. Naturally, if you have a lot of parameters, you would need to show your machine learning model a proportional amount of examples, to get good performance. 
Also, the number of parameters you need is proportional to the complexity of the task your model has to perform. You don’t need to hunt for novel new images that can be added to your dataset. Why? Because, neural networks aren’t smart to begin with. For instance, a poorly trained neural network would think that these three tennis balls shown below, are distinct, unique images. So, to get more data, we just need to make minor alterations to our existing dataset. Minor changes such as flips or translations or rotations. Our neural network would think these are distinct images anyway. A convolutional neural network that can robustly classify objects even if its placed in different orientations is said to have the property called invariance. More specifically, a CNN can be invariant to translation, viewpoint, size or illumination (Or a combination of the above). This essentially is the premise of data augmentation. In the real world scenario, we may have a dataset of images taken in a limited set of conditions. But, our target application may exist in a variety of conditions, such as different orientation, location, scale, brightness etc. We account for these situations by training our neural network with additional synthetically modified data. Yes. It can help to increase the amount of relevant data in your dataset. This is related to the way with which neural networks learn. Let me illustrate it with an example. Imagine that you have a dataset, consisting of two brands of cars, as shown above. Let’s assume that all cars of brand A are aligned exactly like the picture in the left (i.e. All cars are facing left) . Likewise, all cars of brand B are aligned exactly like the picture in the right (i.e. Facing right) . Now, you feed this dataset to your “state-of-the-art” neural network, and you hope to get impressive results once it’s trained. Let’s say it’s done training, and you feed the image above, which is a Brand A car. But your neural network outputs that it’s a Brand B car! You’re confused. Didn’t you just get a 95% accuracy on your dataset using your “state-of-the-art” neural network? I’m not exaggerating, similar incidents and goof-ups have occurred in the past. Why does this happen? It happens because that’s how most machine learning algorithms work. It finds the most obvious features that distinguishes one class from another. Here, the feature was that all cars of Brand A were facing left, and all cars of Brand B are facing right. How do we prevent this happening? We have to reduce the amount of irrelevant features in the dataset. For our car model classifier above, a simple solution would be to add pictures of cars of both classes, facing the other direction to our original dataset. Better yet, you can just flip the images in the existing dataset horizontally such that they face the other side! Now, on training the neural network on this new dataset, you get the performance that you intended to get. Before we dive into the various augmentation techniques, there’s one issue that we must consider beforehand. The answer may seem quite obvious; we do augmentation before we feed the data to the model right? Yes, but you have two options here. One option is to perform all the necessary transformations beforehand, essentially increasing the size of your dataset. The other option is to perform these transformations on a mini-batch, just before feeding it to your machine learning model. The first option is known as offline augmentation. 
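As a small illustration of the offline approach, here is a minimal sketch that writes a horizontally flipped copy of every image next to the original, doubling the dataset before training begins. It assumes Pillow is installed; the folder layout and helper name are made up for this sketch.

```python
# Sketch of offline augmentation: pre-generate flipped copies on disk so the
# dataset itself grows, instead of transforming batches on the fly.
import os
from PIL import Image, ImageOps

def augment_folder(src_dir="data/train", suffix="_flip"):
    for name in os.listdir(src_dir):
        if not name.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        img = Image.open(os.path.join(src_dir, name))
        flipped = ImageOps.mirror(img)  # horizontal flip
        stem, ext = os.path.splitext(name)
        flipped.save(os.path.join(src_dir, stem + suffix + ext))

if __name__ == "__main__":
    augment_folder()
```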
This method is preferred for relatively smaller datasets, as you would end up increasing the size of the dataset by a factor equal to the number of transformations you perform (For example, by flipping all my images, I would increase the size of my dataset by a factor of 2). The second option is known as online augmentation, or augmentation on the fly. This method is preferred for larger datasets, as you can’t afford the explosive increase in size. Instead, you would perform transformations on the mini-batches that you would feed to your model. Some machine learning frameworks have support for online augmentation, which can be accelerated on the GPU. In this section, we present some basic but powerful augmentation techniques that are popularly used. Before we explore these techniques, for simplicity, let us make one assumption. The assumption is that we don’t need to consider what lies beyond the image’s boundary. We’ll use the below techniques such that our assumption is valid. What would happen if we use a technique that forces us to guess what lies beyond an image’s boundary? In this case, we need to interpolate some information. We’ll discuss this in detail after we cover the types of augmentation. For each of these techniques, we also specify the factor by which the size of your dataset would get increased (aka. Data Augmentation Factor). You can flip images horizontally and vertically. Some frameworks do not provide a function for vertical flips. But, a vertical flip is equivalent to rotating an image by 180 degrees and then performing a horizontal flip. Below are examples of images that are flipped. You can perform flips by using any of the following commands, from your favorite packages. Data Augmentation Factor = 2 to 4x One key thing to note about this operation is that image dimensions may not be preserved after rotation. If your image is a square, rotating it at right angles will preserve the image size. If it’s a rectangle, rotating it by 180 degrees would preserve the size. Rotating the image by finer angles will also change the final image size. We’ll see how we can deal with this issue in the next section. Below are examples of square images rotated at right angles. You can perform rotations by using any of the following commands, from your favorite packages. Data Augmentation Factor = 2 to 4x The image can be scaled outward or inward. While scaling outward, the final image size will be larger than the original image size. Most image frameworks cut out a section from the new image, with size equal to the original image. We’ll deal with scaling inward in the next section, as it reduces the image size, forcing us to make assumptions about what lies beyond the boundary. Below are examples of images being scaled. You can perform scaling by using the following commands, using scikit-image. Data Augmentation Factor = Arbitrary. Unlike scaling, we just randomly sample a section from the original image. We then resize this section to the original image size. This method is popularly known as random cropping. Below are examples of random cropping. If you look closely, you can notice the difference between this method and scaling. You can perform random crops by using any of the following commands for TensorFlow. Data Augmentation Factor = Arbitrary. Translation just involves moving the image along the X or Y direction (or both). In the following example, we assume that the image has a black background beyond its boundary, and the images are translated appropriately. 
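As a hedged illustration of translation with a constant black fill beyond the boundary, here is a small SciPy sketch; the function name, shift amounts, and image size are arbitrary choices for this example.

```python
# Sketch: shift an image and fill the exposed pixels with 0 (black), keeping
# the original image size, as in the translation example described above.
import numpy as np
from scipy.ndimage import shift

def translate(image, dx, dy):
    """Shift an (H, W, C) image dx pixels right and dy pixels down."""
    # The last axis (color channels) is not shifted.
    return shift(image, shift=(dy, dx, 0), order=0, mode="constant", cval=0)

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    moved = translate(img, dx=10, dy=-5)
    print(moved.shape)  # (64, 64, 3): same size, content moved, black strip filled in
```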
This method of augmentation is very useful as most objects can be located at almost anywhere in the image. This forces your convolutional neural network to look everywhere. You can perform translations in TensorFlow by using the following commands. Data Augmentation Factor = Arbitrary. Over-fitting usually happens when your neural network tries to learn high frequency features (patterns that occur a lot) that may not be useful. Gaussian noise, which has zero mean, essentially has data points in all frequencies, effectively distorting the high frequency features. This also means that lower frequency components (usually, your intended data) are also distorted, but your neural network can learn to look past that. Adding just the right amount of noise can enhance the learning capability. A toned down version of this is the salt and pepper noise, which presents itself as random black and white pixels spread through the image. This is similar to the effect produced by adding Gaussian noise to an image, but may have a lower information distortion level. You can add Gaussian noise to your image by using the following command, on TensorFlow. Data Augmentation Factor = 2x. Real world, natural data can still exist in a variety of conditions that cannot be accounted for by the above simple methods. For instance, let us take the task of identifying the landscape in photograph. The landscape could be anything: freezing tundras, grasslands, forests and so on. Sounds like a pretty straight forward classification task right? You’d be right, except for one thing. We are overlooking a crucial feature in the photographs that would affect the performance — The season in which the photograph was taken. If our neural network does not understand the fact that certain landscapes can exist in a variety of conditions (snow, damp, bright etc.), it may spuriously label frozen lakeshores as glaciers or wet fields as swamps. One way to mitigate this situation is to add more pictures such that we account for all the seasonal changes. But that is an arduous task. Extending our data augmentation concept, imagine how cool it would be to generate effects such as different seasons artificially? Without going into gory detail, conditional GANs can transform an image from one domain to an image to another domain. If you think it sounds too vague, it’s not; that’s literally how powerful this neural network is! Below is an example of conditional GANs used to transform photographs of summer sceneries to winter sceneries. The above method is robust, but computationally intensive. A cheaper alternative would be something called neural style transfer. It grabs the texture/ambiance/appearance of one image (aka, the “style”) and mixes it with the content of another. Using this powerful technique, we produce an effect similar to that of our conditional GAN (In fact, this method was introduced before cGANs were invented!). The only downside of this method is that, the output tends to looks more artistic rather than realistic. However, there are certain advancements such as Deep Photo Style Transfer, shown below, that have impressive results. We have not explored these techniques in great depth as we are not concerned with their inner working. We can use existing trained models, along with the magic of transfer learning, to use it for augmentation. What if you wanted to translate an image that doesn’t have a black background? What if you wanted to scale inward? Or rotate in finer angles? 
After we perform these transformations, we need to preserve our original image size. Since our image does not have any information about things outside it’s boundary, we need to make some assumptions. Usually, the space beyond the image’s boundary is assumed to be the constant 0 at every point. Hence, when you do these transformations, you get a black region where the image is not defined. But is that the right assumption? In the real world scenario, it’s mostly a no. Image processing and ML frameworks have some standard ways with which you can decide on how to fill the unknown space. They are defined as follows. The simplest interpolation method is to fill the unknown region with some constant value. This may not work for natural images, but can work for images taken in a monochromatic background The edge values of the image are extended after the boundary. This method can work for mild translations. The image pixel values are reflected along the image boundary. This method is useful for continuous or natural backgrounds containing trees, mountains etc. This method is similar to reflect, except for the fact that, at the boundary of reflection, a copy of the edge pixels are made. Normally, reflect and symmetric can be used interchangeably, but differences will be visible while dealing with very small images or patterns. The image is just repeated beyond its boundary, as if it’s being tiled. This method is not as popularly used as the rest as it does not make sense for a lot of scenarios. Besides these, you can design your own methods for dealing with undefined space, but usually these methods would just do fine for most classification problems. If you use it in the right way, then yes! What is the right way you ask? Well, sometimes not all augmentation techniques make sense for a dataset. Consider our car example again. Below are some of the ways by which you can modify the image. Sure, they are pictures of the same car, but your target application may never see cars presented in these orientations. For instance, if you’re just going to classify random cars on the road, only the second image would make sense to be on the dataset. But, if you own an insurance company that deals with car accidents, and you want to identify models of upside-down, broken cars as well, the third image makes sense. The last image may not make sense for both the above scenarios. The point is, while using augmentation techniques, we have to make sure to not increase irrelevant data. You’re probably expecting some results to motivate you to walk the extra mile. Fair enough; I’ve got that covered too. Let me prove that augmentation really works, using a toy example. You can replicate this experiment to verify. Let’s create two neural networks to classify data to one among four classes: cat, lion, tiger or a leopard. The catch is, one will not use data augmentation, whereas the other will. You can download the dataset from here link. If you’ve checked out the dataset, you’ll notice that there’s only 50 images per class for both training and testing. Clearly, we can’t use augmentation for one of the classifiers. To make the odds more fair, we use Transfer Learning to give the models a better chance with the scarce amount of data. For the one without augmentation, let’s use a VGG19 network. I’ve written a TensorFlow implementation here, which is based on this implementation. Once you’ve cloned my repo, you can get the dataset from here, and vgg19.npy (used for transfer learning) from here. 
You can now run the model to verify the performance. I would agree though, writing extra code for data augmentation is indeed a bit of an effort. So, to build our second model, I turned to Nanonets. They internally use transfer learning and data augmentation to provide the best results using minimal data. All you need to do is upload the data on their website, and wait until it’s trained in their servers (Usually around 30 minutes). What do you know, it’s perfect for our comparison experiment. Once it’s done training, you can request calls to their API to calculate the test accuracy. Checkout out my repo for a sample code snippet(Don’t forget to insert your model’s ID in the code snippet). Impressive isn’t it. It is a fact that most models perform well with more data. So to provide a concrete proof, I’ve mentioned the table below. It shows the error rate of popular neural networks on the Cifar 10 (C10) and Cifar 100 (C100) datasets. C10+ and C100+ columns are the error rates with data augmentation. Thank you for reading this article! Hit that clap button if you did! Hope it shed some light about data augmentation. If you have any questions, you could hit me up on social media or send me an email (bharathrajn98@gmail.com). From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Undergrad | Computer Vision and AI Enthusiast | Hungry NanoNets: Machine Learning API " Daniel Rothmann,302,8,https://towardsdatascience.com/human-like-machine-hearing-with-ai-1-3-a5713af6e2f8?source=---------3----------------,Human-Like Machine Hearing With AI (1/3) – Towards Data Science,"Significant breakthroughs in AI technology have been achieved through modeling human systems. While artificial neural networks (NNs) are mathematical models which are only loosely coupled with the way actual human neurons function, their application in solving complex and ambiguous real-world problems has been profound. Additionally, modeling the architectural depth of the brain in NNs has opened up broad possibilities in learning more meaningful representations of data. In image recognition and processing, the inspiration from the complex and more spatially invariant cells of the visual system in CNNs has also produced great improvements to our technologies. If you’re interested in applying image recognition technologies on audio spectrograms, check out my article “What’s wrong with CNNs and spectrograms for audio processing?”. As long as human perceptual capacity exceeds that of machines, we stand to gain by understanding the principles of human systems. Humans are very skillful when it comes to perceptual tasks and the contrast between human understanding and the status quo of AI becomes particularly apparent in the area of machine hearing. Considering the benefits reaped from getting inspired by human systems in visual processing, I propose that we stand to gain from a similar process in machine hearing with neural networks. In this article series, I will detail a framework for real-time audio signal processing with AI which was developed in cooperation with Aarhus University and intelligent loudspeaker manufacturer Dynaudio A/S. Its inspiration is primarily drawn from cognitive science which attempts to combine perspectives of biology, neuroscience, psychology and philosophy to gain greater understanding of our cognitive faculties. Perhaps the most abstract domain of sound is how we, as humans, perceive it. 
While a solution for a signal processing problem has to operate within the parameters of intensity, spectral and temporal properties on a low level, the end goal is most often a cognitive one: Transforming a signal in such a way that our perceptions of the sounds it contains are altered. If one wishes to programatically change the gender of a recorded spoken voice for example, it is necessary to describe this problem in more meaningful terms before defining its lower level characteristics. The gender of a speaker can be conceived as a cognitive property which is constructed from many factors: General pitch and timbre of a voice, differences in pronunciation, differences in choice of words and language and a common understanding of how these properties relate to gender. These parameters can be described in lower level features like intensity, spectral and temporal properties but only in more complex combinations do they form high-level representations. This forms a hierarchy of audio features from which the “meaning” of a sound can be derived. The cognitive property representing a human voice can be thought of as a combinatory pattern of temporal developments in a sound’s intensity, spectral and statistical properties. NNs are great at extracting abstracted representations of data and are therefore well suited for the task of detecting cognitive properties in sound. In order to build a system for this purpose, let’s examine how sound is represented in human auditory organs that we can use to inspire representation of sound for processing with NNs. Hearing in humans starts at the outer ear which firstly consists of the pinna. The pinna acts as a form of spectral preprocessing in which the incoming sound is modified depending on its direction in relation to the listener. Sound then travels through the opening in the pinna into the ear canal which further acts to modify spectral properties of incoming sound by resonating in a way that amplifies frequencies in the range ~1–6 kHz [1]. As sound waves reach the end of the ear canal, they excite the eardrum onto which the ossicles (the smallest bones in the body) are attached. These bones transmit the pressure from the ear canal to the fluid-filled cochlea in the inner ear [1]. The cochlea is of great interest in guiding sound representation for NNs because this is the organ responsible for transducing acoustic vibrations into neural activity in humans . It is a coiled tube which is separated along its length by two membranes being the Reissner’s membrane and the basilar membrane. Along the length of the cochlea, there is a row of around 3,500 inner hair cells [1]. As pressures enter the cochlea, its two membranes are pushed down. The basilar membrane is narrow and stiff at its base but loose and wide at its apex so that each place along its length responds more intensely at a particular frequency. To simplify, the basilar membrane can be thought of as a continuous array of bandpass filters which, along the length of the membrane, acts to separate sounds into their spectral components. This is the primary mechanism by which humans convert sound pressures into neural activity. Therefore, it is reasonable to assume that spectral representations of audio would be beneficial in modeling sound perception with AI. Since frequency responses along the basilar membrane vary exponentially [2], logarithmic frequency representations might prove most efficient. One such representation could be derived using a gammatone filterbank. 
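What might that look like in code? The snippet below is not a true gammatone filterbank (the open-source gammatone toolkit mentioned shortly is better suited for that); it is only a rough SciPy sketch of a bank of bandpass filters with logarithmically spaced center frequencies, to make the “array of bandpass filters” picture concrete. The filter count and frequency range are arbitrary choices.

```python
# Sketch: decompose a signal with log-spaced Butterworth bandpass filters,
# loosely mimicking the basilar membrane's place-frequency organization.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def log_spaced_filterbank(signal, fs, n_bands=32, f_min=50.0, f_max=6000.0):
    """Return an (n_bands, len(signal)) array of band-limited signals."""
    edges = np.geomspace(f_min, f_max, n_bands + 1)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        bands.append(sosfiltfilt(sos, signal))
    return np.stack(bands)

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * 440 * t)   # one second of a 440 Hz tone
    print(log_spaced_filterbank(tone, fs).shape)  # (32, 16000)
```

An actual gammatone filterbank, described next, models the human auditory filters much more closely than this simple stand-in.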
These filters are commonly applied in modeling spectral filtering in the auditory system since they approximate the impulse response of human auditory filters derived from the measured auditory nerve fiber response to white noise stimuli called the “revcor” function [3]. Since the cochlea has ~3500 inner hair cells and humans can detect gaps in sounds down to ~2–5 ms in length [1], a spectral resolution of 3500 gammatone filters separated into 2 ms windows seem optimal parameters for achieving human-like spectral representation in machines. In practical scenarios however, I assume that lesser resolutions could still achieve desirable effects in most analysis and processing tasks while being more viable from a computational standpoint. A number of software libraries for auditory analysis are available online. A notable example is the Gammatone Filterbank Toolkit by Jason Heeris. It provides adjustable filters as well as tools for spectrogram-like analysis of audio signals with gammatone filters. As neural activity moves from the cochlea onto the auditory nerve and the ascending auditory pathways, a number of processes are applied in brainstem nuclei before it reaches the auditory cortex. These processes form a neural code which represents an interface between stimulus and perception [4]. Much knowledge about the specific inner workings of these nuclei is still speculative or unknown, so I will detail these nuclei only at their higher levels of functioning. Humans have a set of these nuclei for each ear that are interconnected, but for simplicity, I’ve illustrated the flow for only one ear. The cochlear nucleus is the first coding step for neural signals coming from the auditory nerve. It consists of a variety of neurons with different properties which serve to perform initial processing of sound features, some of which are directed to the superior olive which is associated with sound localization while others are directed to the lateral lemniscus and inferior colliculus, commonly associated with more advanced features [1]. J. J. Eggermont details this flow of information from the cochlear nucleus in “Between sound and perception: reviewing the search for a neural code” as follows: “The ventral [cochlear nucleus] (VCN) extracts and enhances the frequency and timing information that is multiplexed in the firing patterns of the [auditory nerve] fibers, and distributes the results via two main pathways: the sound localization path and the sound identification path. The anterior part of the VCN (AVCN) mainly serves the sound localization aspects and its two types of bushy cells provide input to the superior olivary complex (SOC), where interaural time differences (ITDs) and level differences (ILDs) are mapped for each frequency separately” [4]. The information carried by the sound identification pathway is a representation of complex spectra such as vowels. This representation is mainly created in the ventral cochlear nucleus by special types of units dubbed “chopper” (stellate) neurons [4]. The details of these auditory encodings are difficult to specify but they indicate to us that a form of “coding” of incoming frequency spectra could improve understanding of low level sound features as well as making sound impressions less expensive to process in NNs. We can apply the unsupervised autoencoder NN architecture as an attempt to learn common properties associated with complex spectra. 
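Before the discussion continues, here is a minimal Keras sketch of such a spectral autoencoder, assuming each training example is a 3,500-bin spectrum scaled to [0, 1]; the layer sizes and the random stand-in data are illustrative only and are not taken from the article.

```python
# Sketch: compress a 3500-value spectrum into a smaller embedding and train
# the network to reconstruct its own input (the autoencoder objective).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

spectrum_size, embedding_size = 3500, 500

encoder = keras.Sequential([
    layers.Dense(1000, activation="relu", input_shape=(spectrum_size,)),
    layers.Dense(embedding_size, activation="relu", name="embedding"),
])
decoder = keras.Sequential([
    layers.Dense(1000, activation="relu", input_shape=(embedding_size,)),
    layers.Dense(spectrum_size, activation="sigmoid"),  # spectra assumed scaled to [0, 1]
])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

spectra = np.random.rand(256, spectrum_size).astype("float32")  # stand-in data
autoencoder.fit(spectra, spectra, epochs=2, batch_size=32, verbose=0)

embeddings = encoder.predict(spectra[:4])  # the decoupled encoder yields embeddings
print(embeddings.shape)                    # (4, 500)
```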
Like word embeddings, its possible to find commonalities in frequency spectra that represent select features (or a more tightly condensed meaning) of sounds. An autoencoder is trained to encode an input into a compressed representation that can be reconstructed back into a representation with a high similarity to the input. This means that the autoencoder’s target output is the input itself [5]. If an input can be reconstructed without great loss, the network has learnt to encode it in such a way that the compressed internal representation contains enough meaningful information. This internal representation is then what we refer to as the embedding. The encoding part of the autoencoder can be decoupled from the decoder to generate embeddings for other applications. Embeddings also have the benefit that they are often of lower dimensionality than the original data. For instance, an autoencoder could compress a frequency spectrum with a total of 3500 values into a vector with a length of 500 values. Put simply, each value of such a vector could describe higher level factors of a spectrum such as vowel, harshness or harmonicity - These are only examples, as the meaning of statistically common factors derived by an autoencoder might often be difficult to label in plain language. In the next article, we will expand upon this idea with added memory to produce embeddings for temporal developments of audio frequency spectra. This wraps up the first part of my article series on audio processing with artificial intelligence. Next, we will discuss the essential concepts of sensory memory and temporal dependencies in sound. Follow to stay updated and feel free to leave claps if you enjoyed the article! As always, feel free to connect with me on LinkedIn to stay in touch. [1] C. J. Plack, The Sense of Hearing, 2nd ed. Psychology Press, 2014. [2] S. J. Elliott and C. A. Shera, “The cochlea as a smart structure,” Smart Mater. Struct., vol. 21, no. 6, p. 64001, Jun. 2012. [3] A.M. Darling, “Properties and implementation of the gammatone filter: A tutorial”, Speech hearing and language, University College London, 1991. [4] J. J. Eggermont, “Between sound and perception: reviewing the search for a neural code.,” Hear. Res., vol. 157, no. 1–2, pp. 1–42, Jul. 2001. [5] T. P. Lillicrap et al., Learning Deep Architectures for AI, vol. 2, no. 1. 2015. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. AI Engineer @ Convai. Especially interested in audio and time series forecasting. Reach us at convai.dk Sharing concepts, ideas, and codes. " Amine Aoullay,58,4,https://towardsdatascience.com/how-to-use-noise-to-your-advantage-5301071d9dc3?source=---------4----------------,How to use Noise to your advantage ? – Towards Data Science,"For scientists, random fluctuations, or noise is undesirable. Although typically assumed to degrade performance, it can sometimes improve information processing in non-linear systems. In this post we’ll see some examples where the noise can be used as an advantage. Recent works have shown that, by allowing some inaccuracy when training deep neural networks, not only the training performance but also the accuracy of the model can be improved. Neural networks are capable of learning output functions that can change wildly with small changes in input. Adding noise to inputs randomly is like telling the network to not change the output in a ball around your exact input. 
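As one concrete way to apply this during training, Keras provides a GaussianNoise layer that perturbs inputs only while the model is being trained; the network shape and toy data below are stand-ins chosen for this sketch.

```python
# Sketch of input-noise regularization: the GaussianNoise layer adds noise at
# training time only, nudging the network toward outputs that stay stable in a
# small ball around each training point.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.GaussianNoise(0.1, input_shape=(20,)),  # active only during training
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x = np.random.rand(512, 20).astype("float32")        # toy features
y = (x.sum(axis=1) > 10).astype("float32")           # toy labels
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
```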
By limiting the amount of information in a network, we force it to learn compact representations of input features. RL is an area of machine learning that assumes there is an agent situated in an environment. At each step, the agent takes an action, and it receives an observation and reward from the environment. An RL algorithm seeks to maximize the agent’s total reward, given a previously unknown environment, through a learning process that usually involves lots of trial and error. To understand the challenge with exploration in deep RL systems, think about researchers who spend a lot of time in a lab without producing any practical application. Equivalently, RL agents can spend a huge amount of resources without converging to a local optimum. OpenAI proposes a technique called Parameter-Space-Noise, which introduces noise into the model’s policy parameters at the beginning of each episode. Other approaches were focused on what is known as Action-Space-Noise, which introduces noise to change the likelihoods associated with each action the agent might take from one moment to the next. The initial results of the Parameter-Space-Noise model proved to be really promising. The technique helps algorithms explore their environments more effectively, leading to higher scores and more elegant behaviors. More details can be found in the research paper. The important thing to remember is that adding noise was used as an advantage to boost the exploration performance of reinforcement learning algorithms. Boosting recognition isn’t as simple as throwing more labeled images at these systems. Indeed, manually annotating a large number of images is an expensive and time-consuming process. Facebook researchers and engineers have addressed this by training image recognition networks on large sets of public images with hashtags. Since people often caption their photos with hashtags, it would be a good source of training data for models. Facebook developed new approaches that are tailored for doing image recognition experiments using hashtag supervision. This study is described in detail in “Exploring the Limits of Weakly Supervised Pretraining”. On the COCO object-detection challenge, it has been shown that the use of hashtags for pretraining can boost the average precision of a model by more than 2 percent. Noise should not be our enemy! It isn’t always an unwanted disturbance and can often be used as an advantage and even serve as a valuable research tool. If anyone tries to tell you otherwise, well, just give them the examples we presented... Stay tuned, and if you liked this article, please leave a 👏! [1] Weakly-supervised-pretraining: https://research.fb.com/publications/exploring-the-limits-of-weakly-supervised-pretraining/ [2] Better Exploration with Parameter Noise: https://blog.openai.com/better-exploration-with-parameter-noise/ MSc in Machine Learning (MVA) @ ENS Paris-Saclay Sharing concepts, ideas, and codes. " Jonathan Balaban,804,5,https://towardsdatascience.com/deep-learning-tips-and-tricks-1ef708ec5f53?source=---------5----------------,Deep Learning Tips and Tricks – Towards Data Science,"Below is a distilled collection of conversations, messages, and debates I’ve had with peers and students on how to optimize deep models. If you have tricks you’ve found impactful, please share them!! 
Deep learning models like the Convolutional Neural Network (CNN) have a massive number of parameters; we can actually call these hyper-parameters because they are not optimized inherently in the model. You could grid-search the optimal values for these hyper-parameters, but you’ll need a lot of hardware and time. So, does a true data scientist settle for guessing these essential parameters? One of the best ways to improve your models is to build on the design and architecture of the experts who have done deep research in your domain, often with powerful hardware at their disposal. Graciously, they often open-source the resulting modeling architectures and rationale. Here are a few ways you can improve your fit time and accuracy with pre-trained models: Here’s how to modify dropout and limit weight sizes in Keras with MNIST: Here’s an example of final layer modification in Keras with 14 classes for MNIST: And an example of how to freeze weights in the first five layers: Alternatively, we can set the learning rate to zero for that layer, or use a per-parameter adaptive learning algorithm like Adadelta or Adam. This is somewhat complicated and better implemented in other platforms, like Caffe. It’s often essential to get a visual idea of how your model looks. If you’re working in Keras, abstraction is nice but doesn’t allow you to drill down into sections of your model for deeper analysis. Fortunately, the code below lets us visualize our models directly with Python: This will plot a graph of the model and save it as a png file: plot takes two optional arguments: You can also directly obtain the pydot.Graph object and render it yourself, for example to show it in an IPython notebook: I hope this collection helps with your modeling endeavors! Let me know your best tricks, and connect with me on Twitter and LinkedIn! Data Science Nomad Sharing concepts, ideas, and codes. " Arthur Juliani,9K,6,https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0?source=---------6----------------,Simple Reinforcement Learning with Tensorflow Part 0: Q-Learning with Tables and Neural Networks,"For this tutorial in my Reinforcement Learning series, we are going to be exploring a family of RL algorithms called Q-Learning algorithms. These are a little different than the policy-based algorithms that will be looked at in the following tutorials (Parts 1–3). Instead of starting with a complex and unwieldy deep neural network, we will begin by implementing a simple lookup-table version of the algorithm, and then show how to implement a neural-network equivalent using Tensorflow. Given that we are going back to basics, it may be best to think of this as Part-0 of the series. It will hopefully give an intuition into what is really happening in Q-Learning that we can then build on going forward when we eventually combine the policy gradient and Q-learning approaches to build state-of-the-art RL agents (If you are more interested in Policy Networks, or already have a grasp on Q-Learning, feel free to start the tutorial series here instead). Unlike policy gradient methods, which attempt to learn functions which directly map an observation to an action, Q-Learning attempts to learn the value of being in a given state, and taking a specific action there. 
While both approaches ultimately allow us to take intelligent actions given a situation, the means of getting to that action differ significantly. You may have heard about Deep Q-Networks which can play Atari Games. These are really just larger and more complex implementations of the Q-Learning algorithm we are going to discuss here. For this tutorial we are going to be attempting to solve the FrozenLake environment from the OpenAI gym. For those unfamiliar, the OpenAI gym provides an easy way for people to experiment with their learning agents in an array of provided toy games. The FrozenLake environment consists of a 4x4 grid of blocks, each one either being the start block, the goal block, a safe frozen block, or a dangerous hole. The objective is to have an agent learn to navigate from the start to the goal without moving onto a hole. At any given time the agent can choose to move either up, down, left, or right. The catch is that there is a wind which occasionally blows the agent onto a space they didn’t choose. As such, perfect performance every time is impossible, but learning to avoid the holes and reach the goal is certainly still doable. The reward at every step is 0, except for entering the goal, which provides a reward of 1. Thus, we will need an algorithm that learns long-term expected rewards. This is exactly what Q-Learning is designed to provide. In its simplest implementation, Q-Learning is a table of values for every state (row) and action (column) possible in the environment. Within each cell of the table, we learn a value for how good it is to take a given action within a given state. In the case of the FrozenLake environment, we have 16 possible states (one for each block), and 4 possible actions (the four directions of movement), giving us a 16x4 table of Q-values. We start by initializing the table to be uniform (all zeros), and then as we observe the rewards we obtain for various actions, we update the table accordingly. We make updates to our Q-table using something called the Bellman equation, which states that the expected long-term reward for a given action is equal to the immediate reward from the current action combined with the expected reward from the best future action taken at the following state. In this way, we reuse our own Q-table when estimating how to update our table for future actions! In equation form, the rule looks like this: Q(s,a) = r + γ * max(Q(s’,a’)). This says that the Q-value for a given state (s) and action (a) should represent the current reward (r) plus the maximum discounted (γ) future reward expected according to our own table for the next state (s’) we would end up in. The discount variable allows us to decide how important the possible future rewards are compared to the present reward. By updating in this way, the table slowly begins to obtain accurate measures of the expected future reward for a given action in a given state. Below is a Python walkthrough of the Q-Table algorithm implemented in the FrozenLake environment: (Thanks to Praneet D for finding the optimal hyperparameters for this approach) Now, you may be thinking: tables are great, but they don’t really scale, do they? While it is easy to have a 16x4 table for a simple grid world, the number of possible states in any modern game or real-world environment is nearly infinitely larger. For most interesting problems, tables simply don’t work. We instead need some way to take a description of our state, and produce Q-values for actions without a table: that is where neural networks come in. 
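The embedded notebook for that walkthrough isn’t reproduced in this text, so before moving on to the network version, here is a minimal sketch of the tabular update described above (the hyperparameters and episode count are illustrative choices, not the tuned values credited to Praneet D):

```python
# A minimal tabular Q-learning sketch for FrozenLake (illustrative hyperparameters).
import gym
import numpy as np

env = gym.make("FrozenLake-v0")
Q = np.zeros([env.observation_space.n, env.action_space.n])  # the 16x4 table of Q-values
lr, gamma, num_episodes = 0.8, 0.95, 2000

for episode in range(num_episodes):
    s = env.reset()
    done = False
    while not done:
        # Greedy action plus decaying random noise for exploration
        a = np.argmax(Q[s, :] + np.random.randn(1, env.action_space.n) * (1.0 / (episode + 1)))
        s_next, r, done, _ = env.step(a)
        # Bellman update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s, a] += lr * (r + gamma * np.max(Q[s_next, :]) - Q[s, a])
        s = s_next

print(Q)  # the learned table
```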
By acting as a function approximator, we can take any number of possible states that can be represented as a vector and learn to map them to Q-values. In the case of the FrozenLake example, we will be using a one-layer network which takes the state encoded in a one-hot vector (1x16), and produces a vector of 4 Q-values, one for each action. Such a simple network acts kind of like a glorified table, with the network weights serving as the old cells. The key difference is that we can easily expand the Tensorflow network with added layers, activation functions, and different input types, whereas all that is impossible with a regular table. The method of updating is a little different as well. Instead of directly updating our table, with a network we will be using backpropagation and a loss function. Our loss function will be sum-of-squares loss, where the difference between the current predicted Q-values, and the “target” value is computed and the gradients passed through the network. In this case, our Q-target for the chosen action is the equivalent to the Q-value computed in equation 1 above. Below is the Tensorflow walkthrough of implementing our simple Q-Network: While the network learns to solve the FrozenLake problem, it turns out it doesn’t do so quite as efficiently as the Q-Table. While neural networks allow for greater flexibility, they do so at the cost of stability when it comes to Q-Learning. There are a number of possible extensions to our simple Q-Network which allow for greater performance and more robust learning. Two tricks in particular are referred to as Experience Replay and Freezing Target Networks. Those improvements and other tweaks were the key to getting Atari-playing Deep Q-Networks, and we will be exploring those additions in the future. For more info on the theory behind Q-Learning, see this great post by Tambet Matiisen. I hope this tutorial has been helpful for those curious about how to implement simple Q-Learning algorithms! If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated! If you’d like to follow my work on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on Twitter @awjliani. More from my Simple Reinforcement Learning with Tensorflow series: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @Unity3D & Cognitive Neuroscience PhD student. Exploring frontier technology through the lens of artificial intelligence, data science, and the shape of things to come " SAGAR SHARMA,2.5K,5,https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6?source=---------7----------------,Activation Functions: Neural Networks – Towards Data Science,"What is Activation Function ? Why we use Activation functions with Neural Networks? The Activation Functions can be basically divided into 2 types- As you can see the function is a line or linear.Therefore, the output of the functions will not be confined between any range. Equation : f(x) = x Range : (-infinity to infinity) It doesn’t help with the complexity or various parameters of usual data that is fed to the neural networks. The Nonlinear Activation Functions are the most used activation functions. Nonlinearity helps to makes the graph look something like this It makes it easy for the model to generalize or adapt with variety of data and to differentiate between the output. 
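The point about linear activations can be made concrete with a few lines of numpy; this is a toy sketch with random matrices (not from the original post) showing that stacking layers without a nonlinearity collapses into a single linear map, while adding one does not:

```python
# Toy illustration: two "layers" with no activation are exactly one linear layer.
import numpy as np

rng = np.random.RandomState(0)
W1, W2 = rng.randn(4, 3), rng.randn(2, 4)
x = rng.randn(3)

linear_stack = W2 @ (W1 @ x)            # two stacked linear layers
collapsed = (W2 @ W1) @ x               # the same single linear map
print(np.allclose(linear_stack, collapsed))    # True: the extra depth added nothing

nonlinear_stack = W2 @ np.tanh(W1 @ x)  # a nonlinearity breaks the collapse
print(np.allclose(nonlinear_stack, collapsed))  # False
```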
The main terminology needed to understand nonlinear functions is: The Nonlinear Activation Functions are mainly divided on the basis of their range or curves- 1. Sigmoid or Logistic Activation Function The Sigmoid Function curve looks like an S-shape. The main reason why we use the sigmoid function is that it exists between (0 to 1). Therefore, it is especially used for models where we have to predict a probability as an output. Since the probability of anything exists only in the range of 0 to 1, sigmoid is the right choice. The function is differentiable. That means we can find the slope of the sigmoid curve at any point. The function is monotonic but the function's derivative is not. The logistic sigmoid function can cause a neural network to get stuck during training. The softmax function is a more generalized logistic activation function which is used for multiclass classification. 2. Tanh or hyperbolic tangent Activation Function tanh is also like the logistic sigmoid but better. The range of the tanh function is from (-1 to 1). tanh is also sigmoidal (s-shaped). The advantage is that negative inputs will be mapped strongly negative and zero inputs will be mapped near zero in the tanh graph. The function is differentiable. The function is monotonic while its derivative is not monotonic. The tanh function is mainly used for classification between two classes. 3. ReLU (Rectified Linear Unit) Activation Function The ReLU is the most used activation function in the world right now, since it is used in almost all convolutional neural networks and deep learning models. As you can see, the ReLU is half rectified (from the bottom). f(z) is zero when z is less than zero and f(z) is equal to z when z is above or equal to zero. Range: [ 0 to infinity) The function and its derivative are both monotonic. But the issue is that all the negative values become zero immediately, which decreases the ability of the model to fit or train from the data properly. That means any negative input given to the ReLU activation function turns the value into zero immediately in the graph, which in turn affects the resulting graph by not mapping the negative values appropriately. 4. Leaky ReLU It is an attempt to solve the dying ReLU problem. Can you see the leak? 😆 The leak helps to increase the range of the ReLU function. Usually, the value of a is 0.01 or so. When a is not 0.01 then it is called Randomized ReLU. Therefore the range of the Leaky ReLU is (-infinity to infinity). Both Leaky and Randomized ReLU functions are monotonic in nature. Also, their derivatives are monotonic in nature. I will be posting 2 posts per week so don’t miss the tutorial. So, follow me on Medium, Facebook, Twitter, LinkedIn, Google+, Quora to see similar posts. If you have any comments or questions, write them in the comments. Clap it! Share it! Follow Me! Happy to be helpful. kudos..... 2. Epoch vs Batch Size vs Iterations 3. Train Inception with Custom Images on CPU 4. TensorFlow Image Recognition Python API Tutorial On CPU From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. I am interested in Programming (Python, C++), Arduino, Machine learning :) I'm the editor of Arduino Community on Medium. I also like to write stuff. Sharing concepts, ideas, and codes. 
" Jae Duk Seo,33,6,https://towardsdatascience.com/principal-component-analysis-network-in-tensorflow-with-interactive-code-7be543047704?source=---------8----------------,Principal Component Analysis Network in Tensorflow with Interactive Code,"A natural extension from Principle Component Analysis pooling layer would be making a full neural network out of the layer. I wanted to know if this was even possible as well as how well or worse it performs on MNIST data. Principle Component Analysis (PCA) Pooling Layer For anyone who is not familiar with PCAP please read this blog post first. The basic idea is Pooling layers such as Max or Mean pooling operations performs dimensionality reduction to not only to save computational power but also to act as a regularizer. PCA is a dimensionality reduction technique in which converts correlated variables into a set of values of linearly uncorrelated variables called principal components. And we can take advantage of this operation to do a similar job as max/mean pooling. Network Composed of Majority of Pooling Layers Now I know what you are thinking, it doesn’t make sense to have a network that is only composed of pooling layer while performing classification. And you are completely right! It doesn’t! But I just wanted to try this out for fun. Data Set / Network Architecture Blue Rectangle → PCAP or Max Pooling LayerGreen Rectangle → Convolution Layer to increase channel size + Global Averaging Pooling operation The network itself is very simple, only four pooling layers and one convolution layer to increase the channel size. However, in-order for the dimension to match up we will downsample each images into 16*16 dimension. Hence the Tensors will have a shape of ... [Batch Size,16,16,1] → [Batch Size,8,8,1] → [Batch Size,4,4,1] → [Batch Size,2,2,1] → [Batch Size,1,1,1] → [Batch Size,1,1,10] → [Batch Size,10] And we can perform classification with soft max layer as any other network does. Results: Principle Component Network As seen above, the training accuracy have stagnated at 18 percent accuracy which is horrible LOL. But I suspected that the network didn’t have enough learning capacity from the start and this was best it could do. However I wanted to see how each PCAP layer transforms the image. Top Left Image → Original InputTop Right Image → After First LayerBottom Left Image → After Second LayerBottom Right Image → After Fourth Layer One obvious pattern we can observe is the change of brightens. For example if the top left pixel was white in the second layer this pixel will change to black in the next layer. Currently, I am not 100% sure on why this is happening, but with more study I hope to know exactly why. Results: Max Pooling Network As seen above, when we replace all of the PCAP layers with max pooling operation we can observe that the accuracy on training images stagnated around 14 percent, confirming the fact that the network didn’t have enough learning capacity from the start. Top Left Image → Original InputTop Right Image → After First LayerBottom Left Image → After Second LayerBottom Right Image → After Fourth Layer Contrast to PCAP, with max pooling we can clearly observe that the pixel with most high intensity moves on to the next layer. This is expected since, that is what max pooling does. Interactive Code For Google Colab, you would need a google account to view the codes, also you can’t run read only scripts in Google Colab so make a copy on your play ground. 
Finally, I will never ask for permission to access your files on Google Drive, just FYI. Happy Coding! To access the network with PCAP please click here. To access the network with Max Pooling please click here. Final Words I wasn’t expecting much of this network from the start, but I expected at least 30 percent accuracy on training / testing images LOL. If any errors are found, please email me at jae.duk.seo@gmail.com, and if you wish to see the list of all of my writing please view my website here. Meanwhile follow me on my twitter here, and visit my website, or my Youtube channel for more content. I also implemented Wide Residual Networks, please click here to view the blog post. Reference From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. https://jaedukseo.me | Your everyday Seo, who likes kimchi Sharing concepts, ideas, and codes. " Jae Duk Seo,20,7,https://towardsdatascience.com/multi-stream-rnn-concat-rnn-internal-conv-rnn-lag-2-rnn-in-tensorflow-f4f17189a208?source=---------9----------------,"Multi-Stream RNN, Concat RNN, Internal Conv RNN, Lag 2 RNN in Tensorflow","For the last two weeks I have been dying to implement different kinds of Recurrent Neural Networks (RNN) and finally I have the time to implement all of them. Below is the list of different RNN cases I wanted to try out. Case a: Vanilla Recurrent Neural Network; Case b: Multi-Stream Recurrent Neural Network; Case c: Concatenated Recurrent Neural Network; Case d: Internal Convolutional Recurrent Neural Network; Case e: Lag 2 Recurrent Neural Network. Vanilla Recurrent Neural Network There are in total 5 different cases of RNN I wish to implement. However, in order to fully understand all of the implementations it would be a good idea to have a strong understanding of the vanilla RNN (Case a is the vanilla RNN, so if you understand the code for case a you are good to go.) If anyone wishes to review simple RNNs please visit my old blog post “Only Numpy: Vanilla Recurrent Neural Network Deriving Back propagation Through Time Practice”. Case a: Vanilla Recurrent Neural Network (Results) Red Box → 3 Convolutional Layers; Orange → Global Average Pooling and SoftMax; Green Circle → Hidden Unit at Time 0; Blue Circle → Input at 4 Time Stamps; Black Box → Recurrent Neural Network with 4 Time Stamps. As seen above, the base network is a simple RNN combined with a convolutional neural network for classification. The RNN has 4 time stamps, which means we are going to give the network 4 different inputs, one at each time stamp. And to do that I am going to add some noise to the original image. Blue Line → Train Cost Over Time; Orange Line → Train Accuracy Over Time; Green Line → Test Cost Over Time; Red Line → Test Accuracy Over Time. As seen above our base network already performs well. Now the question is how the other methods perform and whether they are able to regularize better than our base network. Case b: Multi-Stream Recurrent Neural Network (Idea / Results) Red Box → 3 Convolutional Layers; Orange → Global Average Pooling and SoftMax; Green Circle → Hidden Unit at Time 0; Blue Circle → Convolution Input Stream; Yellow Circle → Fully Connected Network Stream; Black Box → Recurrent Neural Network with 4 Time Stamps. The idea behind this RNN is simply to give different representations of the data to the RNN. In our base network we feed the network either the raw image or the image with some noise added. 
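As a rough sketch of what feeding "the raw image or the image with some noise added" at each time stamp could look like, here is one way to build the four inputs (the noise scale is an assumption for illustration, not taken from the post):

```python
# Build 4 time-step inputs for the RNN by adding Gaussian noise to a single image.
import numpy as np

def make_timestep_inputs(image, num_steps=4, noise_std=0.1, seed=0):
    rng = np.random.RandomState(seed)
    steps = [image]                                   # time stamp 0: the raw image
    for _ in range(num_steps - 1):
        steps.append(image + rng.randn(*image.shape) * noise_std)  # noisy variants
    return np.stack(steps)                            # shape: [num_steps, H, W, 1]

image = np.zeros((28, 28, 1), dtype=np.float32)       # placeholder MNIST-sized image
inputs = make_timestep_inputs(image)
print(inputs.shape)                                   # (4, 28, 28, 1)
```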
Red Box → Additional Four CNN/FNN layers to ‘process’ the input; Blue Box → Creating Inputs at each different time stamp. As seen below, our RNN now takes in an input tensor of shape [batch_size, 26, 26, 1], reducing the width and the height by 2. And I was hoping that different representations of the data would act as regularization (similar to data augmentation). Blue Line → Train Cost Over Time; Orange Line → Train Accuracy Over Time; Green Line → Test Cost Over Time; Red Line → Test Accuracy Over Time. As seen above the network did pretty well, and has outperformed our base network by 1 percent on the testing images. Case c: Concatenated Recurrent Neural Network (Idea / Results) Red Box → 3 Convolutional Layers; Orange → Global Average Pooling and SoftMax; Green Circle → Hidden Unit at Time 0; Blue Circle → Input at 4 Time Stamps; Black Box → Recurrent Neural Network with 4 Time Stamps; Black Curved Arrow → Concatenated Input for Each Time Stamp. This approach is very simple: the idea was that at each time stamp different features will be extracted, and it might be useful for the network to have more features over time (for the recurrent layers). Blue Line → Train Cost Over Time; Orange Line → Train Accuracy Over Time; Green Line → Test Cost Over Time; Red Line → Test Accuracy Over Time. Sadly, this was a huge failure. I guess the empty hidden values do not help (one bit) the network perform well. Case d: Internal Convolutional Recurrent Neural Network (Idea/Results) Red Box → 3 Convolutional Layers; Orange → Global Average Pooling and SoftMax; Green Circle → Hidden Unit at Time 0; Blue Circle → Input at 4 Time Stamps; Black Box → Recurrent Neural Network with 4 Time Stamps; Gray Arrow → Performing Internal Convolution before passing onto the next time stamp. As seen above, this network takes in the exact same input as our base network. However this time we are going to perform additional convolution operations on the internal representation of the data. Right Image → Declaring 3 new convolution layers; Left Image (Red Box) → If the current internal layer is not None, we are going to perform an additional convolution operation. I actually had no theoretical reason behind this implementation, I just wanted to see if it works LOL. Blue Line → Train Cost Over Time; Orange Line → Train Accuracy Over Time; Green Line → Test Cost Over Time; Red Line → Test Accuracy Over Time. As seen above the network did a fine job at converging; however, it was not able to outperform our base network. (Sadly). Case e: Lag 2 Recurrent Neural Network (Idea / Results) Red Box → 3 Convolutional Layers; Orange → Global Average Pooling and SoftMax; Green Circle → Hidden Unit at Time 0 (or Lag of 1); Blue Circle → Input at 4 Time Stamps; Black Box → Recurrent Neural Network with 4 Time Stamps; Purple Circle → Hidden State Lag of 2. In a traditional RNN setting we only rely on the most recent previous value to determine the current value. For a while I was thinking that there is no reason for us to limit the look-back time (or lag) to 1. We can extend this idea to lag 3 or lag 4, etc. (just for simplicity I took lag 2). Blue Line → Train Cost Over Time; Orange Line → Train Accuracy Over Time; Green Line → Test Cost Over Time; Red Line → Test Accuracy Over Time. Thankfully the network did better than the base network (but by a very small margin); this type of network would probably be most suitable for time series data. 
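A bare-bones numpy sketch of the lag-2 recurrence just described, where the new hidden state depends on the two previous hidden states rather than only the last one (the shapes and the tanh nonlinearity are assumptions for illustration):

```python
# Lag-2 recurrent update: h_t depends on h_{t-1}, h_{t-2} and the current input x_t.
import numpy as np

hidden, input_dim, T = 8, 4, 4
rng = np.random.RandomState(0)
W1 = rng.randn(hidden, hidden) * 0.1    # weight applied to h_{t-1}
W2 = rng.randn(hidden, hidden) * 0.1    # weight applied to h_{t-2} (the extra lag)
U = rng.randn(hidden, input_dim) * 0.1  # weight applied to the current input

h_prev1 = np.zeros(hidden)  # h_{t-1}, the "hidden unit at time 0"
h_prev2 = np.zeros(hidden)  # h_{t-2}, the lag-2 hidden state
for t in range(T):
    x_t = rng.randn(input_dim)                        # stand-in for the time-step input
    h_t = np.tanh(W1 @ h_prev1 + W2 @ h_prev2 + U @ x_t)
    h_prev2, h_prev1 = h_prev1, h_t                   # shift both lags forward
print(h_prev1.shape)  # (8,)
```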
Interactive Code / Transparency For Google Colab, you need a Google account to view the code; also, you can’t run read-only scripts in Google Colab, so make a copy in your own playground. Finally, I will never ask for permission to access your files on Google Drive, just FYI. Happy Coding! Also, for transparency, I uploaded all of the training logs to my GitHub. To access the code for case a click here, for the logs click here. To access the code for case b click here, for the logs click here. To access the code for case c click here, for the logs click here. To access the code for case d click here, for the logs click here. To access the code for case e click here, for the logs click here. Final Words I have wanted to review RNNs for quite a long time, and finally I got to do it. If any errors are found, please email me at jae.duk.seo@gmail.com, and if you wish to see the list of all of my writing please view my website here. Meanwhile follow me on my twitter here, and visit my website, or my Youtube channel for more content. I also implemented Wide Residual Networks, please click here to view the blog post. Reference From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. https://jaedukseo.me | Your everyday Seo, who likes kimchi Sharing concepts, ideas, and codes. " Wallarm,72,4,https://lab.wallarm.com/tensorflow-dataset-api-for-increasing-training-speed-of-neural-networks-43a3050f2080?source=---------3----------------,TensorFlow Dataset API for increasing training speed of neural networks,"The Wallarm AI engine is the heart of our security solution. Two key parameters of our AI engine efficiency are how fast neural networks can be trained to reflect updated training sets and how much compute power needs to be dedicated to training on an ongoing basis. Many of our machine learning algorithms are written on top of TensorFlow, an open-source dataflow software library originally released by Google. Our average CPU load for the AI engine today is as high as 80%, so we are always looking for ways to speed things up in software. Our latest find is the Dataset API. Dataset is a mid-level TensorFlow API which makes working with data faster and more convenient. In this blog, we will measure just how much faster model training can be with Dataset, compared to the use of feed_dict. For starters, let’s prepare the data that will be used to train the model. Datasets can usually be stored in numpy arrays regardless of the kind of data they contain. That’s why we prepare our whole dataset without TensorFlow and store it in .npz format similar to this: https://github.com/wallarm/researches/blob/a719923f6a2da461deea0e01622d11cbfc8b057b/tf_ds_api/storing_in_npz_format.py#L1-L10 This step helps us avoid unnecessary data processing load on CPU and memory during model training. Now we are ready to train the model. First, let’s load the preprocessed data from disk: https://github.com/wallarm/researches/blob/a719923f6a2da461deea0e01622d11cbfc8b057b/tf_ds_api/load_from_npz.py#L1-L7. Next, the data will be converted from numpy arrays into TensorFlow tensors (the tf.data.Dataset.from_tensor_slices method is used for that) and loaded into TensorFlow. The Dataset.from_tensor_slices method takes placeholders with the same size along the 0th dimension and returns a dataset object. Once the dataset is in TF, you can process it; for example, you can use the .map(f) function to transform the data. But we have already preprocessed our dataset, so all we need to do is apply batching and, maybe, shuffling (see the sketch below).
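A minimal sketch of the pipeline described so far, in TF 1.x style, with shuffling and batching applied at the end (the file name, feature sizes and batch size are illustrative, not the Wallarm code):

```python
# Load preprocessed arrays from .npz, wrap them in tf.data, then shuffle and batch.
import numpy as np
import tensorflow as tf

data = np.load("train.npz")                      # assumed file containing 'x' and 'y' arrays
features_ph = tf.placeholder(tf.float32, shape=[None, 784])
labels_ph = tf.placeholder(tf.int64, shape=[None])

train_ds = (tf.data.Dataset.from_tensor_slices((features_ph, labels_ph))
            .shuffle(buffer_size=10000)
            .batch(128))

iterator = train_ds.make_initializable_iterator()
next_batch = iterator.get_next()

with tf.Session() as sess:
    sess.run(iterator.initializer,
             feed_dict={features_ph: data["x"], labels_ph: data["y"]})
    x_batch, y_batch = sess.run(next_batch)      # first mini-batch of the epoch
```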
Fortunately, the Dataset API already has the needed functions: .batch and .shuffle. Ok, if we shuffle our dataset, how can we use it for production? It’s easy, we simply make another dataset without the data being shuffled. https://github.com/wallarm/researches/blob/a719923f6a2da461deea0e01622d11cbfc8b057b/tf_ds_api/datasets.py#L1-L5 The Dataset API has other good methods for preprocessing data. There is a comprehensive list of methods in the official docs. Next we should extract data from the dataset object step by step for each of the training epochs; tf.data.Iterator is tailor-made for this. TF currently supports four types of iterators. The reinitializable iterator is very useful; all we need to do to get started is create an iterator and initializers for it. iterator.get_next() yields the next elements of our dataset when executed. https://github.com/wallarm/researches/blob/a719923f6a2da461deea0e01622d11cbfc8b057b/tf_ds_api/iterator.py#L1-L8 To demonstrate the viability of using the Dataset API, let’s use the proposed approach for the MNIST dataset and for our corporate data. First, we prepared the data, and after that we ran 1 and 5 epochs with the Dataset API and without it. The model for this MNIST example can be found on GitHub: https://github.com/wallarm/researches/blob/a719923f6a2da461deea0e01622d11cbfc8b057b/tf_ds_api/model.py#L1-L25 Below are the results we obtained on a machine with one Nvidia GTX 1080 and TF 1.8.0. All the code for this experiment is available on GitHub [Link]. MNIST is a very small dataset, so the benefit of the Dataset API isn’t representative. By contrast, the results on a real-life dataset are much more impressive. Thus the Dataset API is very good for increasing your training speed. With no source-code changes, just some modifications in the stack, you can shave 20–30% off the training time. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Adaptive Application Security for DevOps. @NGINX partner. @YCombinator S16 Wallarm is a DevOps-friendly WAF with a hybrid architecture uniquely suited for cloud applications. It applies machine learning to traffic to adaptively generate security rules and verifies the impact of malicious payloads in real time " Maryna Hlaiboroda,5,5,https://blog.heyml.com/%D0%B8%D0%B8-%D0%BF%D1%81%D0%B8%D1%85%D0%BE%D0%BF%D0%B0%D1%82-%D0%B8-%D0%B8%D0%B8-%D0%BE%D0%B1%D0%BC%D0%B0%D0%BD%D1%89%D0%B8%D0%BA-94c6a8e6c63e?source=---------4----------------,AI Psychopath and AI Deceiver – Hey Machine Learning,"A team of researchers from the Massachusetts Institute of Technology (MIT) presented a neural network called Norman, which recognizes images and generates text descriptions for them. Its peculiarity is that the scientists trained the network on death-related captioned images from a Reddit community, so Norman sees horrors in everything. The algorithm is named after Norman Bates, the split-personality killer from the novel “Psycho”. The researchers wanted to demonstrate the importance of the data a model is trained on, as well as its balance. To clearly show the influence of data on the result, the MIT researchers showed Rorschach test images to two neural networks: an AI algorithm trained on ordinary datasets with images of people, cats and birds, and Norman. Where the ordinary network saw a vase with flowers, a flock of birds on a branch or people sitting on a bench in the presented images, the psychopathic network identified a man shot dead, or a person jumping out of a window, hit by a speeding car, or killed by an electric shock. 
The MIT engineers say they created the Norman network as a reminder that the behavior of an AI is the fault not of its algorithms but of the data used to train it. Dave Coplin, former Chief Envisioning Officer at Microsoft, believes that the creation of such an algorithm is an excellent starting point for a public discussion of the problems of artificial intelligence technology, on which society and business are beginning to rely more and more. BBC News Researchers from the University of Toronto, Avishek Bose and Parham Aarabi, have developed a system that retouches portrait photos in such a way that face recognition algorithms fail. The project makes it possible to preserve privacy and fight the disclosure of personal data. For training, the researchers used two neural networks: one recognized faces in photos, while the second retouched the images pixel by pixel and sent them back to the recognition network. The changes that produced the largest number of false detections formed the core of the filter. The researchers also noted that their system was able to fool the Faster R-CNN algorithm created by Facebook. In the future it will make it possible to completely block identification of a user without their consent. In its current version, the technology reduces the accuracy of identifying a person from a photo to 0.5%. The algorithm is part of Avishek Bose's master's thesis. In August 2018 he intends to present the project at the MMSP 2018 workshop in Vancouver. U of T News Google CEO Sundar Pichai stated that the company's engineers will not work on military applications of artificial intelligence. However, the search giant's specialists will continue to work with military and government agencies. The decision was made after a mass protest by company employees against cooperation with the Pentagon. The company had planned to create artificial intelligence for military drones. According to Pichai, as a leader in AI development, Google feels the enormous responsibility placed on the company's shoulders. That is why he announced seven principles that the company will follow in the future. He also noted that the use of AI should be “socially beneficial”, and that its development must include “robust safety measures”. AI algorithms and the data collected for them must remain under human control, their development must meet the highest scientific standards, and the company will strive to “limit the harm” of using such systems. TSN.ua Google engineers have developed AutoAugment, an algorithm that augments training data for computer vision algorithms with images created from existing ones. The algorithm transforms, crops, flips and changes the colors of images, which makes it possible to enlarge the original training set. To build the algorithm, the company's specialists used a reinforcement learning model. As a result, it learned to determine on its own the rules by which an image should be modified to create a unique one without distorting it. AutoAugment can flip images horizontally and vertically, rotate them, change colors and so on. The algorithm can also combine rules and prevent the creation of identical copies. In this way, the system takes into account the specifics of a particular image set. In the case of the house numbers in the SVHN dataset, the algorithm uses geometric transformations of the image as well as color changes. 
For the CIFAR-10 and ImageNet datasets, AutoAugment does not use geometric transformations and does not change colors, since such a rule could create an unrealistic photo. Instead, the algorithm changes the shades in the images while preserving the original color palette. Blog Google AI The University of California, Berkeley has publicly released the BDD100K video archive for teaching cars to drive autonomously on public roads. The archive consists of 100,000 clips of 40 seconds each, at 720p resolution and 30 frames per second. In addition, each file comes with GPS data collected by mobile devices, which can approximately describe the vehicle's trajectory. The clips contain various road situations and weather conditions, filmed in different parts of the US. The archive's frames also capture 85,000 pedestrians, which may be useful to developers of pedestrian detection systems. Analytics Vidhya Want to keep up with current events? Read us on Telegram and Facebook and stay on trend! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. We are a young and talented team and our passion is Machine Learning, Data Science and Artificial Intelligence. http://heyml.com " Amine Aoullay,58,4,https://towardsdatascience.com/how-to-use-noise-to-your-advantage-5301071d9dc3?source=---------6----------------,How to use Noise to your advantage? – Towards Data Science,"For scientists, random fluctuations, or noise, are undesirable. Although typically assumed to degrade performance, noise can sometimes improve information processing in non-linear systems. In this post we'll see some examples where noise can be used to our advantage. Recent works have shown that, by allowing some inaccuracy when training deep neural networks, not only the training performance but also the accuracy of the model can be improved. Neural networks are capable of learning output functions that can change wildly with small changes in input. Adding noise to inputs randomly is like telling the network to not change the output in a ball around your exact input. By limiting the amount of information in a network, we force it to learn compact representations of input features. RL is an area of machine learning that assumes there is an agent situated in an environment. At each step, the agent takes an action, and it receives an observation and reward from the environment. An RL algorithm seeks to maximize the agent’s total reward, given a previously unknown environment, through a learning process that usually involves lots of trial and error. To understand the challenge of exploration in deep RL systems, think about researchers who spend a lot of time in a lab without producing any practical application. Similarly, RL agents can spend a huge amount of resources without converging to anything useful. OpenAI proposes a technique called Parameter-Space-Noise, which injects noise into the policy’s parameters at the beginning of each episode. Earlier approaches focused on what is known as Action-Space-Noise, which introduces noise to change the likelihoods associated with each action the agent might take from one moment to the next. The initial results of the Parameter-Space-Noise model proved to be really promising. The technique helps algorithms explore their environments more effectively, leading to higher scores and more elegant behaviors. More details can be found in the research paper. 
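As a toy illustration of the difference between the two kinds of exploration noise (this is a sketch of the idea, not the OpenAI implementation; the policy and noise scales are made up):

```python
# Action-space noise perturbs each chosen action; parameter-space noise perturbs the
# policy's weights once per episode, so behaviour stays consistent within an episode.
import numpy as np

rng = np.random.RandomState(0)
theta = rng.randn(4, 2)                 # toy linear policy: 4-dim state -> 2 action scores

def act_action_noise(state, sigma=0.3):
    scores = state @ theta + rng.randn(2) * sigma       # noise added at every decision
    return int(np.argmax(scores))

def perturb_policy(sigma=0.1):
    return theta + rng.randn(*theta.shape) * sigma      # noise added once, to the weights

def act_param_noise(state, perturbed_theta):
    return int(np.argmax(state @ perturbed_theta))      # deterministic given the perturbation

state = rng.randn(4)
print(act_action_noise(state))
episode_theta = perturb_policy()        # sampled at the beginning of each episode
print(act_param_noise(state, episode_theta))
```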
The important thing to remember is that adding noise was used as an advantage to boost the exploration performance of reinforcement learning algorithms. Boosting recognition isn’t as simple as throwing more labeled images at these systems. Indeed, manually annotating a large number of images is an expensive and time-consuming process. Facebook researchers and engineers have addressed this by training image recognition networks on large sets of public images with hashtags. Since people often caption their photos with hashtags, these tags can be a good source of training data for models. Facebook developed new approaches that are tailored for doing image recognition experiments using hashtag supervision. This study is described in detail in “Exploring the Limits of Weakly Supervised Pretraining”. On the COCO object-detection challenge, it has been shown that using hashtags for pretraining can boost the average precision of a model by more than 2 percent. Noise should not be our enemy! It isn’t always an unwanted disturbance and can often be used as an advantage and even serve as a valuable research tool. If anyone tries to tell you otherwise, well, just give them the examples presented here... Stay tuned and if you liked this article, please leave a 👏! [1] Weakly-supervised-pretraining: https://research.fb.com/publications/exploring-the-limits-of-weakly-supervised-pretraining/ [2] Better Exploration with Parameter Noise: https://blog.openai.com/better-exploration-with-parameter-noise/ From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. MSc in Machine Learning (MVA) @ ENS Paris-Saclay Sharing concepts, ideas, and codes. " Kelvin Li,56,5,https://medium.com/@kelfun5354/the-complex-language-used-in-back-propagation-88c6e58f676c?source=---------9----------------,The Complex language used in Back Propagation – Kelvin Li – Medium,"I’ve looked all over the internet for explanations of what exactly back propagation is, and everyone either uses complicated mathematical language or complex code to try to explain it. If someone who knows neither wants to know what it is, how will they really grasp it? In this post, I would like to unveil the secrets of the universe with everyone and hopefully I’ll do a good job at it. According to Wikipedia, Backpropagation is a method used in artificial neural networks to calculate a gradient that is needed in the calculation of the weights to be used in the network. Backpropagation is commonly used by the gradient descent optimization algorithm to adjust the weight of neurons by calculating the gradient of the loss function. If you have taken a basic elementary algebra class, you may have heard of the idea of a slope. Some people might think the idea of a slope is very insignificant, but it is actually the game-changing concept that caused all the technological advancement within the last century. To know the slope of something means that you know the rate at which something changes over a period of time. Knowing this gives us the power to manipulate things to our advantage. Now you can think of a gradient as the slope of something in a higher dimension. I won’t go into details but that is the general gist of what a gradient is. Weights are the values that we want to use to adjust the outputs of our functions in each neuron. So say we have an output of 2 and we want to change the 2 into a 1; then we would multiply the 2 by .5 to get the desired result. This means that .5 will be the weight in this case. 
In a way we are weighing down the output to what we want it to be. A neuron is simply just a function. A Neural Network (a bunch of neurons) is simply a bunch of functions. Each neuron also has an activation function that spits out a value for the next neuron to calculate. Think of these functions as how much of a yes or a no an input is. An example would be picture recognition. When you feed the neural network a picture, the node will spit out a number between 0 and 1, where 0 means a strong NO and 1 a strong YES. This process continues between every node until the very end. Whichever node has the highest number, between 0 and 1, would be the decision the machine makes. A loss function is just some function that we use to determine how correct the predicted output is compared to the real output. For example, we input a picture of a cat into the machine but the machine predicts that it’s a dinosaur. Clearly the machine is not doing a very good job. So we need some way to know how correct the machine is compared to the real data, which is where the loss function comes in. Now that we have all the necessary understandings, we can go into the real sauce. Now what I am about to explain to you is going to either confuse the crap out of you or make you feel enlightened. Let’s pretend you are trying to build a door lock opening mechanism. This mechanism involves you pressing a button, which triggers a ball rolling down a platform and knocking over a switch that unlocks the door. Now let’s think about this. There are a few components that we have to keep in mind: the 1st component is you pressing the button, the 2nd is the ball rolling down the platform, and the 3rd is the switch being knocked over. There is actually a lot of physics going on around here but let’s just focus on the ball rolling down the platform. Now when you create this mechanism, you ideally want the door to open in 3 seconds. But you don’t have any tools to measure time or length, so all you can do is create the platform through intuition. You build your first platform, let the ball roll, and realize that it takes 9 seconds for the door to open after pressing the button. So you go back to the platform and make it steeper. You perform the same trial and error over and over again until you get the ideal opening time. This, my friend, is backpropagation. Well, true. But the idea is basically the same. In a Neural Net, we have weights assigned to each neuron. These weights get multiplied by a certain input and modified through some activation function. The result of these activation functions might not always be what we want. What backpropagation does is use some calculus (to be covered in another post) to determine the direction of increase/decrease, aka the gradient (cut less of the platform or cut more of the platform), needed to achieve the best weights (the ideal time for the door to open). It then updates these weights and runs the neural net again (every trial you cut a piece of the platform and test). Eventually we will achieve the best possible weights that satisfy our desired accuracy. In my next post, I will discuss in more depth the math involved in backpropagation. References and Links From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. 
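To make the "cut the platform, test, repeat" loop above concrete, here is a tiny worked example of the same idea on a single weight with a squared loss, using the output-of-2, target-of-1 setup from earlier (the learning rate is an arbitrary choice):

```python
# One neuron, one weight: output = w * x. We want output 1 when x = 2 (so the ideal w is 0.5).
# Loss = (w*x - target)^2, and its gradient with respect to w is 2 * (w*x - target) * x.
w, x, target, lr = 1.0, 2.0, 1.0, 0.05

for step in range(20):
    output = w * x
    grad = 2 * (output - target) * x   # the "direction of increase" of the loss
    w -= lr * grad                     # step the weight against the gradient
print(round(w, 3))                     # close to 0.5, the weight from the earlier example
```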
Getting stuck 24/7 " Arthur Juliani,9K,6,https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0?source=tag_archive---------1----------------,Simple Reinforcement Learning with Tensorflow Part 0: Q-Learning with Tables and Neural Networks,"For this tutorial in my Reinforcement Learning series, we are going to be exploring a family of RL algorithms called Q-Learning algorithms. These are a little different than the policy-based algorithms that will be looked at in the the following tutorials (Parts 1–3). Instead of starting with a complex and unwieldy deep neural network, we will begin by implementing a simple lookup-table version of the algorithm, and then show how to implement a neural-network equivalent using Tensorflow. Given that we are going back to basics, it may be best to think of this as Part-0 of the series. It will hopefully give an intuition into what is really happening in Q-Learning that we can then build on going forward when we eventually combine the policy gradient and Q-learning approaches to build state-of-the-art RL agents (If you are more interested in Policy Networks, or already have a grasp on Q-Learning, feel free to start the tutorial series here instead). Unlike policy gradient methods, which attempt to learn functions which directly map an observation to an action, Q-Learning attempts to learn the value of being in a given state, and taking a specific action there. While both approaches ultimately allow us to take intelligent actions given a situation, the means of getting to that action differ significantly. You may have heard about DeepQ-Networks which can play Atari Games. These are really just larger and more complex implementations of the Q-Learning algorithm we are going to discuss here. For this tutorial we are going to be attempting to solve the FrozenLake environment from the OpenAI gym. For those unfamiliar, the OpenAI gym provides an easy way for people to experiment with their learning agents in an array of provided toy games. The FrozenLake environment consists of a 4x4 grid of blocks, each one either being the start block, the goal block, a safe frozen block, or a dangerous hole. The objective is to have an agent learn to navigate from the start to the goal without moving onto a hole. At any given time the agent can choose to move either up, down, left, or right. The catch is that there is a wind which occasionally blows the agent onto a space they didn’t choose. As such, perfect performance every time is impossible, but learning to avoid the holes and reach the goal are certainly still doable. The reward at every step is 0, except for entering the goal, which provides a reward of 1. Thus, we will need an algorithm that learns long-term expected rewards. This is exactly what Q-Learning is designed to provide. In it’s simplest implementation, Q-Learning is a table of values for every state (row) and action (column) possible in the environment. Within each cell of the table, we learn a value for how good it is to take a given action within a given state. In the case of the FrozenLake environment, we have 16 possible states (one for each block), and 4 possible actions (the four directions of movement), giving us a 16x4 table of Q-values. We start by initializing the table to be uniform (all zeros), and then as we observe the rewards we obtain for various actions, we update the table accordingly. 
We make updates to our Q-table using something called the Bellman equation, which states that the expected long-term reward for a given action is equal to the immediate reward from the current action combined with the expected reward from the best future action taken at the following state. In this way, we reuse our own Q-table when estimating how to update our table for future actions! In equation form, the rule looks like this: This says that the Q-value for a given state (s) and action (a) should represent the current reward (r) plus the maximum discounted (γ) future reward expected according to our own table for the next state (s’) we would end up in. The discount variable allows us to decide how important the possible future rewards are compared to the present reward. By updating in this way, the table slowly begins to obtain accurate measures of the expected future reward for a given action in a given state. Below is a Python walkthrough of the Q-Table algorithm implemented in the FrozenLake environment: (Thanks to Praneet D for finding the optimal hyperparameters for this approach) Now, you may be thinking: tables are great, but they don’t really scale, do they? While it is easy to have a 16x4 table for a simple grid world, the number of possible states in any modern game or real-world environment is nearly infinitely larger. For most interesting problems, tables simply don’t work. We instead need some way to take a description of our state, and produce Q-values for actions without a table: that is where neural networks come in. By acting as a function approximator, we can take any number of possible states that can be represented as a vector and learn to map them to Q-values. In the case of the FrozenLake example, we will be using a one-layer network which takes the state encoded in a one-hot vector (1x16), and produces a vector of 4 Q-values, one for each action. Such a simple network acts kind of like a glorified table, with the network weights serving as the old cells. The key difference is that we can easily expand the Tensorflow network with added layers, activation functions, and different input types, whereas all that is impossible with a regular table. The method of updating is a little different as well. Instead of directly updating our table, with a network we will be using backpropagation and a loss function. Our loss function will be sum-of-squares loss, where the difference between the current predicted Q-values, and the “target” value is computed and the gradients passed through the network. In this case, our Q-target for the chosen action is the equivalent to the Q-value computed in equation 1 above. Below is the Tensorflow walkthrough of implementing our simple Q-Network: While the network learns to solve the FrozenLake problem, it turns out it doesn’t do so quite as efficiently as the Q-Table. While neural networks allow for greater flexibility, they do so at the cost of stability when it comes to Q-Learning. There are a number of possible extensions to our simple Q-Network which allow for greater performance and more robust learning. Two tricks in particular are referred to as Experience Replay and Freezing Target Networks. Those improvements and other tweaks were the key to getting Atari-playing Deep Q-Networks, and we will be exploring those additions in the future. For more info on the theory behind Q-Learning, see this great post by Tambet Matiisen. I hope this tutorial has been helpful for those curious about how to implement simple Q-Learning algorithms! 
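The embedded TensorFlow gist isn’t reproduced in this text; a minimal sketch of the one-layer Q-network described above could look like the following (TF 1.x style; the learning rate and the epsilon-greedy loop details are illustrative):

```python
# One-layer Q-network for FrozenLake: one-hot state (1x16) -> 4 Q-values,
# trained with a sum-of-squares loss against the Bellman target.
import numpy as np
import tensorflow as tf

inputs = tf.placeholder(tf.float32, shape=[1, 16])       # one-hot encoded state
W = tf.Variable(tf.random_uniform([16, 4], 0, 0.01))
q_out = tf.matmul(inputs, W)                              # predicted Q-values
predict = tf.argmax(q_out, 1)

next_q = tf.placeholder(tf.float32, shape=[1, 4])         # target Q-values
loss = tf.reduce_sum(tf.square(next_q - q_out))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

def one_hot(state):
    return np.identity(16)[state:state + 1]

# Inside the training loop (sketch): feed one_hot(s), pick an action (epsilon-greedy),
# step the environment, copy q_out into a target, overwrite the chosen action's entry
# with r + gamma * max(Q(s')), then run train_step with that target as next_q.
```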
If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated! If you’d like to follow my work on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on Twitter @awjliani. More from my Simple Reinforcement Learning with Tensorflow series: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @Unity3D & Cognitive Neuroscience PhD student. Exploring frontier technology through the lens of artificial intelligence, data science, and the shape of things to come " Stefan Kojouharov,14.2K,7,https://becominghuman.ai/cheat-sheets-for-ai-neural-networks-machine-learning-deep-learning-big-data-678c51b4b463?source=tag_archive---------2----------------,"Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Big Data","Over the past few months, I have been collecting AI cheat sheets. From time to time I share them with friends and colleagues and recently I have been getting asked a lot, so I decided to organize and share the entire collection. To make things more interesting and give context, I added descriptions and/or excerpts for each major topic. This is the most complete list and the Big-O is at the very end, enjoy... This machine learning cheat sheet will help you find the right estimator for the job which is the most difficult part. The flowchart will help you check the documentation and rough guide of each estimator that will help you to know more about the problems and how to solve it. Scikit-learn (formerly scikits.learn) is a free software machine learning library for the Python programming language. It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy. In May 2017 Google announced the second-generation of the TPU, as well as the availability of the TPUs in Google Compute Engine.[12] The second-generation TPUs deliver up to 180 teraflops of performance, and when organized into clusters of 64 TPUs provide up to 11.5 petaflops. In 2017, Google’s TensorFlow team decided to support Keras in TensorFlow’s core library. Chollet explained that Keras was conceived to be an interface rather than an end-to-end machine-learning framework. It presents a higher-level, more intuitive set of abstractions that make it easy to configure neural networks regardless of the backend scientific computing library. NumPy targets the CPython reference implementation of Python, which is a non-optimizing bytecode interpreter. Mathematical algorithms written for this version of Python often run much slower than compiled equivalents. NumPy address the slowness problem partly by providing multidimensional arrays and functions and operators that operate efficiently on arrays, requiring rewriting some code, mostly inner loops using NumPy. The name ‘Pandas’ is derived from the term “panel data”, an econometrics term for multidimensional structured data sets. The term “data wrangler” is starting to infiltrate pop culture. In the 2017 movie Kong: Skull Island, one of the characters, played by actor Marc Evan Jackson is introduced as “Steve Woodward, our data wrangler”. 
SciPy builds on the NumPy array object and is part of the NumPy stack which includes tools like Matplotlib, pandas and SymPy, and an expanding set of scientific computing libraries. This NumPy stack has similar users to other applications such as MATLAB, GNU Octave, and Scilab. The NumPy stack is also sometimes referred to as the SciPy stack.[3] matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+. There is also a procedural “pylab” interface based on a state machine (like OpenGL), designed to closely resemble that of MATLAB, though its use is discouraged.[2] SciPy makes use of matplotlib. pyplot is a matplotlib module which provides a MATLAB-like interface.[6] matplotlib is designed to be as usable as MATLAB, with the ability to use Python, with the advantage that it is free. >>> If you like this list, you can let me know here. <<< Stefan is the founder of Chatbot’s Life, a Chatbot media and consulting firm. Chatbot’s Life has grown to over 150k views per month and has become the premium place to learn about Bots & AI online. Chatbot’s Life has also consulted many of the top Bot companies like Swelly, Instavest, OutBrain, NearGroup and a number of Enterprises. Big-O Algorithm Cheat Sheet: http://bigocheatsheet.com/ Bokeh Cheat Sheet: https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Bokeh_Cheat_Sheet.pdf Data Science Cheat Sheet: https://www.datacamp.com/community/tutorials/python-data-science-cheat-sheet-basics Data Wrangling Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf Data Wrangling: https://en.wikipedia.org/wiki/Data_wrangling Ggplot Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf Keras Cheat Sheet: https://www.datacamp.com/community/blog/keras-cheat-sheet#gs.DRKeNMs Keras: https://en.wikipedia.org/wiki/Keras Machine Learning Cheat Sheet: https://ai.icymi.email/new-machinelearning-cheat-sheet-by-emily-barry-abdsc/ Machine Learning Cheat Sheet: https://docs.microsoft.com/en-in/azure/machine-learning/machine-learning-algorithm-cheat-sheet ML Cheat Sheet:: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html Matplotlib Cheat Sheet: https://www.datacamp.com/community/blog/python-matplotlib-cheat-sheet#gs.uEKySpY Matpotlib: https://en.wikipedia.org/wiki/Matplotlib Neural Networks Cheat Sheet: http://www.asimovinstitute.org/neural-network-zoo/ Neural Networks Graph Cheat Sheet: http://www.asimovinstitute.org/blog/ Neural Networks: https://www.quora.com/Where-can-find-a-cheat-sheet-for-neural-network Numpy Cheat Sheet: https://www.datacamp.com/community/blog/python-numpy-cheat-sheet#gs.AK5ZBgE NumPy: https://en.wikipedia.org/wiki/NumPy Pandas Cheat Sheet: https://www.datacamp.com/community/blog/python-pandas-cheat-sheet#gs.oundfxM Pandas: https://en.wikipedia.org/wiki/Pandas_(software) Pandas Cheat Sheet: https://www.datacamp.com/community/blog/pandas-cheat-sheet-python#gs.HPFoRIc Pyspark Cheat Sheet: https://www.datacamp.com/community/blog/pyspark-cheat-sheet-python#gs.L=J1zxQ Scikit Cheat Sheet: https://www.datacamp.com/community/blog/scikit-learn-cheat-sheet Scikit-learn: https://en.wikipedia.org/wiki/Scikit-learn Scikit-learn Cheat Sheet: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html Scipy Cheat Sheet: 
https://www.datacamp.com/community/blog/python-scipy-cheat-sheet#gs.JDSg3OI SciPy: https://en.wikipedia.org/wiki/SciPy TesorFlow Cheat Sheet: https://www.altoros.com/tensorflow-cheat-sheet.html Tensor Flow: https://en.wikipedia.org/wiki/TensorFlow From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder of Chatbots Life. I help Companies Create Great Chatbots & AI Systems and share my Insights along the way. Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity. " Andrej Karpathy,9.2K,7,https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b?source=tag_archive---------3----------------,Yes you should understand backprop – Andrej Karpathy – Medium,"When we offered CS231n (Deep Learning class) at Stanford, we intentionally designed the programming assignments to include explicit calculations involved in backpropagation on the lowest level. The students had to implement the forward and the backward pass of each layer in raw numpy. Inevitably, some students complained on the class message boards: This is seemingly a perfectly sensible appeal - if you’re never going to write backward passes once the class is over, why practice writing them? Are we just torturing the students for our own amusement? Some easy answers could make arguments along the lines of “it’s worth knowing what’s under the hood as an intellectual curiosity”, or perhaps “you might want to improve on the core algorithm later”, but there is a much stronger and practical argument, which I wanted to devote a whole post to: > The problem with Backpropagation is that it is a leaky abstraction. In other words, it is easy to fall into the trap of abstracting away the learning process — believing that you can simply stack arbitrary layers together and backprop will “magically make them work” on your data. So lets look at a few explicit examples where this is not the case in quite unintuitive ways. We’re starting off easy here. At one point it was fashionable to use sigmoid (or tanh) non-linearities in the fully connected layers. The tricky part people might not realize until they think about the backward pass is that if you are sloppy with the weight initialization or data preprocessing these non-linearities can “saturate” and entirely stop learning — your training loss will be flat and refuse to go down. For example, a fully connected layer with sigmoid non-linearity computes (using raw numpy): If your weight matrix W is initialized too large, the output of the matrix multiply could have a very large range (e.g. numbers between -400 and 400), which will make all outputs in the vector z almost binary: either 1 or 0. But if that is the case, z*(1-z), which is local gradient of the sigmoid non-linearity, will in both cases become zero (“vanish”), making the gradient for both x and W be zero. The rest of the backward pass will come out all zero from this point on due to multiplication in the chain rule. Another non-obvious fun fact about sigmoid is that its local gradient (z*(1-z)) achieves a maximum at 0.25, when z = 0.5. That means that every time the gradient signal flows through a sigmoid gate, its magnitude always diminishes by one quarter (or more). If you’re using basic SGD, this would make the lower layers of a network train much slower than the higher ones. 
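The numpy snippet referenced above isn’t included in this text, but a sketch of the forward pass and the local gradient being discussed looks roughly like this (the layer sizes and the deliberately oversized initialization are illustrative):

```python
# Forward pass of a fully connected layer with a sigmoid nonlinearity, plus the
# local gradient z*(1-z) that vanishes when the layer saturates.
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(100)
W_large = rng.randn(50, 100) * 10.0     # sloppy, too-large weight initialization
z = 1.0 / (1.0 + np.exp(-W_large.dot(x)))

local_grad = z * (1 - z)                # sigmoid's local gradient, at most 0.25
print(np.round(z[:5], 3))               # outputs pinned near 0 or 1
print((local_grad < 1e-3).mean())       # for most units the gradient has effectively vanished
```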
TLDR: if you’re using sigmoids or tanh non-linearities in your network and you understand backpropagation you should always be nervous about making sure that the initialization doesn’t cause them to be fully saturated. See a longer explanation in this CS231n lecture video. Another fun non-linearity is the ReLU, which thresholds neurons at zero from below. The forward and backward pass for a fully connected layer that uses ReLU would at the core include: If you stare at this for a while you’ll see that if a neuron gets clamped to zero in the forward pass (i.e. z=0, it doesn’t “fire”), then its weights will get zero gradient. This can lead to what is called the “dead ReLU” problem, where if a ReLU neuron is unfortunately initialized such that it never fires, or if a neuron’s weights ever get knocked off with a large update during training into this regime, then this neuron will remain permanently dead. It’s like permanent, irrecoverable brain damage. Sometimes you can forward the entire training set through a trained network and find that a large fraction (e.g. 40%) of your neurons were zero the entire time. TLDR: If you understand backpropagation and your network has ReLUs, you’re always nervous about dead ReLUs. These are neurons that never turn on for any example in your entire training set, and will remain permanently dead. Neurons can also die during training, usually as a symptom of aggressive learning rates. See a longer explanation in CS231n lecture video. Vanilla RNNs feature another good example of unintuitive effects of backpropagation. I’ll copy paste a slide from CS231n that has a simplified RNN that does not take any input x, and only computes the recurrence on the hidden state (equivalently, the input x could always be zero): This RNN is unrolled for T time steps. When you stare at what the backward pass is doing, you’ll see that the gradient signal going backwards in time through all the hidden states is always being multiplied by the same matrix (the recurrence matrix Whh), interspersed with non-linearity backprop. What happens when you take one number a and start multiplying it by some other number b (i.e. a*b*b*b*b*b*b...)? This sequence either goes to zero if |b| < 1, or explodes to infinity when |b|>1. The same thing happens in the backward pass of an RNN, except b is a matrix and not just a number, so we have to reason about its largest eigenvalue instead. TLDR: If you understand backpropagation and you’re using RNNs you are nervous about having to do gradient clipping, or you prefer to use an LSTM. See a longer explanation in this CS231n lecture video. Lets look at one more — the one that actually inspired this post. Yesterday I was browsing for a Deep Q Learning implementation in TensorFlow (to see how others deal with computing the numpy equivalent of Q[:, a], where a is an integer vector — turns out this trivial operation is not supported in TF). Anyway, I searched “dqn tensorflow”, clicked the first link, and found the core code. Here is an excerpt: If you’re familiar with DQN, you can see that there is the target_q_t, which is just [reward * \gamma \argmax_a Q(s’,a)], and then there is q_acted, which is Q(s,a) of the action that was taken. The authors here subtract the two into variable delta, which they then want to minimize on line 295 with the L2 loss with tf.reduce_mean(tf.square()). So far so good. The problem is on line 291. The authors are trying to be robust to outliers, so if the delta is too large, they clip it with tf.clip_by_value. 
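The excerpt itself is not reproduced here; schematically, and with illustrative names and values rather than the repository's actual code, the pattern being described looks something like this:

```python
import tensorflow as tf

min_delta, max_delta = -1.0, 1.0   # illustrative clipping range

# Stand-in values for a batch: the TD target and the Q-value of the action taken.
target_q_t = tf.constant([3.0, -2.5, 0.7])   # reward + gamma * max_a Q(s', a)
q_acted    = tf.constant([0.5,  0.5, 0.5])   # Q(s, a) for the chosen actions

delta = target_q_t - q_acted

# The pattern under discussion: clip the raw TD error, then minimize its L2 loss.
clipped_delta = tf.clip_by_value(delta, min_delta, max_delta)
loss = tf.reduce_mean(tf.square(clipped_delta))
print(float(loss))
```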
This is well-intentioned and looks sensible from the perspective of the forward pass, but it introduces a major bug if you think about the backward pass. The clip_by_value function has a local gradient of zero outside of the range min_delta to max_delta, so whenever the delta is above min/max_delta, the gradient becomes exactly zero during backprop. The authors are clipping the raw Q delta, when they are likely trying to clip the gradient for added robustness. In that case the correct thing to do is to use the Huber loss in place of tf.square: It’s a bit gross in TensorFlow because all we want to do is clip the gradient if it is above a threshold, but since we can’t meddle with the gradients directly we have to do it in this round-about way of defining the Huber loss. In Torch this would be much more simple. I submitted an issue on the DQN repo and this was promptly fixed. Backpropagation is a leaky abstraction; it is a credit assignment scheme with non-trivial consequences. If you try to ignore how it works under the hood because “TensorFlow automagically makes my networks learn”, you will not be ready to wrestle with the dangers it presents, and you will be much less effective at building and debugging neural networks. The good news is that backpropagation is not that difficult to understand, if presented properly. I have relatively strong feelings on this topic because it seems to me that 95% of backpropagation materials out there present it all wrong, filling pages with mechanical math. Instead, I would recommend the CS231n lecture on backprop which emphasizes intuition (yay for shameless self-advertising). And if you can spare the time, as a bonus, work through the CS231n assignments, which get you to write backprop manually and help you solidify your understanding. That’s it for now! I hope you’ll be much more suspicious of backpropagation going forward and think carefully through what the backward pass is doing. Also, I’m aware that this post has (unintentionally!) turned into several CS231n ads. Apologies for that :) From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Director of AI at Tesla. Previously Research Scientist at OpenAI and PhD student at Stanford. I like to train deep neural nets on large datasets. " Avinash Sharma V,6.9K,10,https://medium.com/the-theory-of-everything/understanding-activation-functions-in-neural-networks-9491262884e0?source=tag_archive---------4----------------,Understanding Activation Functions in Neural Networks,"Recently, a colleague of mine asked me a few questions like “why do we have so many activation functions?”, “why is that one works better than the other?”, ”how do we know which one to use?”, “is it hardcore maths?” and so on. So I thought, why not write an article on it for those who are familiar with neural network only at a basic level and is therefore, wondering about activation functions and their “why-how-mathematics!”. NOTE: This article assumes that you have a basic knowledge of an artificial “neuron”. I would recommend reading up on the basics of neural networks before reading this article for better understanding. So what does an artificial neuron do? Simply put, it calculates a “weighted sum” of its input, adds a bias and then decides whether it should be “fired” or not ( yeah right, an activation function does this, but let’s go with the flow for a moment ). So consider a neuron. Now, the value of Y can be anything ranging from -inf to +inf. 
The neuron really doesn’t know the bounds of the value. So how do we decide whether the neuron should fire or not ( why this firing pattern? Because we learnt it from biology that’s the way brain works and brain is a working testimony of an awesome and intelligent system ). We decided to add “activation functions” for this purpose. To check the Y value produced by a neuron and decide whether outside connections should consider this neuron as “fired” or not. Or rather let’s say — “activated” or not. The first thing that comes to our minds is how about a threshold based activation function? If the value of Y is above a certain value, declare it activated. If it’s less than the threshold, then say it’s not. Hmm great. This could work! Activation function A = “activated” if Y > threshold else not Alternatively, A = 1 if y> threshold, 0 otherwise Well, what we just did is a “step function”, see the below figure. Its output is 1 ( activated) when value > 0 (threshold) and outputs a 0 ( not activated) otherwise. Great. So this makes an activation function for a neuron. No confusions. However, there are certain drawbacks with this. To understand it better, think about the following. Suppose you are creating a binary classifier. Something which should say a “yes” or “no” ( activate or not activate ). A Step function could do that for you! That’s exactly what it does, say a 1 or 0. Now, think about the use case where you would want multiple such neurons to be connected to bring in more classes. Class1, class2, class3 etc. What will happen if more than 1 neuron is “activated”. All neurons will output a 1 ( from step function). Now what would you decide? Which class is it? Hmm hard, complicated. You would want the network to activate only 1 neuron and others should be 0 ( only then would you be able to say it classified properly/identified the class ). Ah! This is harder to train and converge this way. It would have been better if the activation was not binary and it instead would say “50% activated” or “20% activated” and so on. And then if more than 1 neuron activates, you could find which neuron has the “highest activation” and so on ( better than max, a softmax, but let’s leave that for now ). In this case as well, if more than 1 neuron says “100% activated”, the problem still persists.I know! But..since there are intermediate activation values for the output, learning can be smoother and easier ( less wiggly ) and chances of more than 1 neuron being 100% activated is lesser when compared to step function while training ( also depending on what you are training and the data ). Ok, so we want something to give us intermediate ( analog ) activation values rather than saying “activated” or not ( binary ). The first thing that comes to our minds would be Linear function. A = cx A straight line function where activation is proportional to input ( which is the weighted sum from neuron ). This way, it gives a range of activations, so it is not binary activation. We can definitely connect a few neurons together and if more than 1 fires, we could take the max ( or softmax) and decide based on that. So that is ok too. Then what is the problem with this? If you are familiar with gradient descent for training, you would notice that for this function, derivative is a constant. A = cx, derivative with respect to x is c. That means, the gradient has no relationship with X. It is a constant gradient and the descent is going to be on constant gradient. 
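A tiny sketch (with purely illustrative values) makes the contrast between the two activations discussed so far concrete, and shows that the linear unit's gradient does not depend on the input at all:

```python
import numpy as np

def step(y, threshold=0.0):
    # Binary "fired / not fired" activation.
    return (y > threshold).astype(float)

def linear(y, c=2.0):
    # A = c * y : activation proportional to the weighted sum.
    return c * y

y = np.linspace(-3, 3, 7)
print(step(y))     # only 0s and 1s, no notion of "how activated"
print(linear(y))   # graded output, but...

# ...the derivative of the linear activation is the constant c everywhere,
# regardless of how far off the input was.
print(np.full_like(y, 2.0))
```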
If there is an error in prediction, the changes made by back propagation is constant and not depending on the change in input delta(x) !!! This is not that good! ( not always, but bear with me ). There is another problem too. Think about connected layers. Each layer is activated by a linear function. That activation in turn goes into the next level as input and the second layer calculates weighted sum on that input and it in turn, fires based on another linear activation function. No matter how many layers we have, if all are linear in nature, the final activation function of last layer is nothing but just a linear function of the input of first layer! Pause for a bit and think about it. That means these two layers ( or N layers ) can be replaced by a single layer. Ah! We just lost the ability of stacking layers this way. No matter how we stack, the whole network is still equivalent to a single layer with linear activation ( a combination of linear functions in a linear manner is still another linear function ). Let’s move on, shall we? Well, this looks smooth and “step function like”. What are the benefits of this? Think about it for a moment. First things first, it is nonlinear in nature. Combinations of this function are also nonlinear! Great. Now we can stack layers. What about non binary activations? Yes, that too!. It will give an analog activation unlike step function. It has a smooth gradient too. And if you notice, between X values -2 to 2, Y values are very steep. Which means, any small changes in the values of X in that region will cause values of Y to change significantly. Ah, that means this function has a tendency to bring the Y values to either end of the curve. Looks like it’s good for a classifier considering its property? Yes ! It indeed is. It tends to bring the activations to either side of the curve ( above x = 2 and below x = -2 for example). Making clear distinctions on prediction. Another advantage of this activation function is, unlike linear function, the output of the activation function is always going to be in range (0,1) compared to (-inf, inf) of linear function. So we have our activations bound in a range. Nice, it won’t blow up the activations then. This is great. Sigmoid functions are one of the most widely used activation functions today. Then what are the problems with this? If you notice, towards either end of the sigmoid function, the Y values tend to respond very less to changes in X. What does that mean? The gradient at that region is going to be small. It gives rise to a problem of “vanishing gradients”. Hmm. So what happens when the activations reach near the “near-horizontal” part of the curve on either sides? Gradient is small or has vanished ( cannot make significant change because of the extremely small value ). The network refuses to learn further or is drastically slow ( depending on use case and until gradient /computation gets hit by floating point value limits ). There are ways to work around this problem and sigmoid is still very popular in classification problems. Another activation function that is used is the tanh function. Hm. This looks very similar to sigmoid. In fact, it is a scaled sigmoid function! Ok, now this has characteristics similar to sigmoid that we discussed above. It is nonlinear in nature, so great we can stack layers! It is bound to range (-1, 1) so no worries of activations blowing up. One point to mention is that the gradient is stronger for tanh than sigmoid ( derivatives are steeper). 
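A quick numerical check of both claims (steeper gradients for tanh, and saturation at the tails for both functions) might look like this:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1 - s)          # peaks at 0.25 when x = 0

def d_tanh(x):
    return 1 - np.tanh(x) ** 2  # peaks at 1.0 when x = 0

for x in [0.0, 2.0, 5.0]:
    print(x, d_sigmoid(x), d_tanh(x))
# At x = 0 the tanh gradient (1.0) is four times the sigmoid gradient (0.25);
# by x = 5 both gradients have all but vanished.
```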
Deciding between the sigmoid or tanh will depend on your requirement of gradient strength. Like sigmoid, tanh also has the vanishing gradient problem. Tanh is also a very popular and widely used activation function. Later, comes the ReLu function, A(x) = max(0,x) The ReLu function is as shown above. It gives an output x if x is positive and 0 otherwise. At first look this would look like having the same problems of linear function, as it is linear in positive axis. First of all, ReLu is nonlinear in nature. And combinations of ReLu are also non linear! ( in fact it is a good approximator. Any function can be approximated with combinations of ReLu). Great, so this means we can stack layers. It is not bound though. The range of ReLu is [0, inf). This means it can blow up the activation. Another point that I would like to discuss here is the sparsity of the activation. Imagine a big neural network with a lot of neurons. Using a sigmoid or tanh will cause almost all neurons to fire in an analog way ( remember? ). That means almost all activations will be processed to describe the output of a network. In other words the activation is dense. This is costly. We would ideally want a few neurons in the network to not activate and thereby making the activations sparse and efficient. ReLu give us this benefit. Imagine a network with random initialized weights ( or normalised ) and almost 50% of the network yields 0 activation because of the characteristic of ReLu ( output 0 for negative values of x ). This means a fewer neurons are firing ( sparse activation ) and the network is lighter. Woah, nice! ReLu seems to be awesome! Yes it is, but nothing is flawless.. Not even ReLu. Because of the horizontal line in ReLu( for negative X ), the gradient can go towards 0. For activations in that region of ReLu, gradient will be 0 because of which the weights will not get adjusted during descent. That means, those neurons which go into that state will stop responding to variations in error/ input ( simply because gradient is 0, nothing changes ). This is called dying ReLu problem. This problem can cause several neurons to just die and not respond making a substantial part of the network passive. There are variations in ReLu to mitigate this issue by simply making the horizontal line into non-horizontal component . for example y = 0.01x for x<0 will make it a slightly inclined line rather than horizontal line. This is leaky ReLu. There are other variations too. The main idea is to let the gradient be non zero and recover during training eventually. ReLu is less computationally expensive than tanh and sigmoid because it involves simpler mathematical operations. That is a good point to consider when we are designing deep neural nets. Now, which activation functions to use. Does that mean we just use ReLu for everything we do? Or sigmoid or tanh? Well, yes and no. When you know the function you are trying to approximate has certain characteristics, you can choose an activation function which will approximate the function faster leading to faster training process. For example, a sigmoid works well for a classifier ( see the graph of sigmoid, doesn’t it show the properties of an ideal classifier? ) because approximating a classifier function as combinations of sigmoid is easier than maybe ReLu, for example. Which will lead to faster training process and convergence. You can use your own custom functions too!. 
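As a rough illustration of ReLu, the leaky variant just described, and the sparsity argument, consider this small sketch:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def leaky_relu(x, alpha=0.01):
    # A small slope for x < 0 keeps the gradient from being exactly zero there.
    return np.where(x > 0, x, alpha * x)

pre_activations = np.random.randn(10000)

out = relu(pre_activations)
print((out == 0).mean())        # roughly 0.5: about half the units are silent (sparse)

out_leaky = leaky_relu(pre_activations)
print((out_leaky == 0).mean())  # ~0: negative inputs still produce a tiny signal
```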
If you don’t know the nature of the function you are trying to learn, then maybe i would suggest start with ReLu, and then work backwards. ReLu works most of the time as a general approximator! In this article, I tried to describe a few activation functions used commonly. There are other activation functions too, but the general idea remains the same. Research for better activation functions is still ongoing. Hope you got the idea behind activation function, why they are used and how do we decide which one to use. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Musings of an AI, Deep Learning, Mathematics addict " Arthur Juliani,3.5K,8,https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-8-asynchronous-actor-critic-agents-a3c-c88f72a5e9f2?source=tag_archive---------5----------------,Simple Reinforcement Learning with Tensorflow Part 8: Asynchronous Actor-Critic Agents (A3C),"In this article I want to provide a tutorial on implementing the Asynchronous Advantage Actor-Critic (A3C) algorithm in Tensorflow. We will use it to solve a simple challenge in a 3D Doom environment! With the holidays right around the corner, this will be my final post for the year, and I hope it will serve as a culmination of all the previous topics in the series. If you haven’t yet, or are new to Deep Learning and Reinforcement Learning, I suggest checking out the earlier entries in the series before going through this post in order to understand all the building blocks which will be utilized here. If you have been following the series: thank you! I have learned so much about RL in the past year, and am happy to have shared it with everyone through this article series. So what is A3C? The A3C algorithm was released by Google’s DeepMind group earlier this year, and it made a splash by... essentially obsoleting DQN. It was faster, simpler, more robust, and able to achieve much better scores on the standard battery of Deep RL tasks. On top of all that it could work in continuous as well as discrete action spaces. Given this, it has become the go-to Deep RL algorithm for new challenging problems with complex state and action spaces. In fact, OpenAI just released a version of A3C as their “universal starter agent” for working with their new (and very diverse) set of Universe environments. Asynchronous Advantage Actor-Critic is quite a mouthful. Let’s start by unpacking the name, and from there, begin to unpack the mechanics of the algorithm itself. Asynchronous: Unlike DQN, where a single agent represented by a single neural network interacts with a single environment, A3C utilizes multiple incarnations of the above in order to learn more efficiently. In A3C there is a global network, and multiple worker agents which each have their own set of network parameters. Each of these agents interacts with it’s own copy of the environment at the same time as the other agents are interacting with their environments. The reason this works better than having a single agent (beyond the speedup of getting more work done), is that the experience of each agent is independent of the experience of the others. In this way the overall experience available for training becomes more diverse. Actor-Critic: So far this series has focused on value-iteration methods such as Q-learning, or policy-iteration methods such as Policy Gradient. Actor-Critic combines the benefits of both approaches. 
In the case of A3C, our network will estimate both a value function V(s) (how good a certain state is to be in) and a policy π(s) (a set of action probability outputs). These will each be separate fully-connected layers sitting at the top of the network. Critically, the agent uses the value estimate (the critic) to update the policy (the actor) more intelligently than traditional policy gradient methods. Advantage: If we think back to our implementation of Policy Gradient, the update rule used the discounted returns from a set of experiences in order to tell the agent which of its actions were “good” and which were “bad.” The network was then updated in order to encourage and discourage actions appropriately. The insight of using advantage estimates rather than just discounted returns is to allow the agent to determine not just how good its actions were, but how much better they turned out to be than expected. Intuitively, this allows the algorithm to focus on where the network’s predictions were lacking. If you recall from the Dueling Q-Network architecture, the advantage function is as follow: Since we won’t be determining the Q values directly in A3C, we can use the discounted returns (R) as an estimate of Q(s,a) to allow us to generate an estimate of the advantage. In this tutorial, we will go even further, and utilize a slightly different version of advantage estimation with lower variance referred to as Generalized Advantage Estimation. In the process of building this implementation of the A3C algorithm, I used as reference the quality implementations by DennyBritz and OpenAI. Both of which I highly recommend if you’d like to see alternatives to my code here. Each section embedded here is taken out of context for instructional purposes, and won’t run on its own. To view and run the full, functional A3C implementation, see my Github repository. The general outline of the code architecture is: The A3C algorithm begins by constructing the global network. This network will consist of convolutional layers to process spatial dependencies, followed by an LSTM layer to process temporal dependencies, and finally, value and policy output layers. Below is example code for establishing the network graph itself. Next, a set of worker agents, each with their own network and environment are created. Each of these workers are run on a separate processor thread, so there should be no more workers than there are threads on your CPU. ~ From here we go asynchronous ~ Each worker begins by setting its network parameters to those of the global network. We can do this by constructing a Tensorflow op which sets each variable in the local worker network to the equivalent variable value in the global network. Each worker then interacts with its own copy of the environment and collects experience. Each keeps a list of experience tuples (observation, action, reward, done, value) that is constantly added to from interactions with the environment. Once the worker’s experience history is large enough, we use it to determine discounted return and advantage, and use those to calculate value and policy losses. We also calculate an entropy (H) of the policy. This corresponds to the spread of action probabilities. If the policy outputs actions with relatively similar probabilities, then entropy will be high, but if the policy suggests a single action with a large probability then entropy will be low. 
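As a rough numpy sketch (not code from the accompanying repository), the discounted returns, advantage estimates and policy entropy described here can be computed like this; the GAE variant would apply the same discounting to the TD residuals instead of the raw rewards:

```python
import numpy as np

def discount(x, gamma):
    # Discounted cumulative sums, e.g. turning rewards into returns.
    out = np.zeros_like(x, dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(x))):
        running = x[t] + gamma * running
        out[t] = running
    return out

rewards = np.array([0.0, 0.0, 1.0])   # illustrative episode fragment
values  = np.array([0.2, 0.5, 0.9])   # V(s) estimates from the critic
gamma = 0.99

returns = discount(rewards, gamma)    # used as the estimate of Q(s, a)
advantages = returns - values         # A(s, a) ~ R - V(s)

policy = np.array([0.7, 0.2, 0.1])               # action probabilities from the actor
entropy = -np.sum(policy * np.log(policy))       # high when probabilities are spread out
print(returns, advantages, entropy)
```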
We use the entropy as a means of improving exploration, by encouraging the model to be conservative regarding its sureness of the correct action. A worker then uses these losses to obtain gradients with respect to its network parameters. Each of these gradients are typically clipped in order to prevent overly-large parameter updates which can destabilize the policy. A worker then uses the gradients to update the global network parameters. In this way, the global network is constantly being updated by each of the agents, as they interact with their environment. Once a successful update is made to the global network, the whole process repeats! The worker then resets its own network parameters to those of the global network, and the process begins again. To view the full and functional code, see the Github repository here. The robustness of A3C allows us to tackle a new generation of reinforcement learning challenges, one of which is 3D environments! We have come a long way from multi-armed bandits and grid-worlds, and in this tutorial, I have set up the code to allow for playing through the first VizDoom challenge. VizDoom is a system to allow for RL research using the classic Doom game engine. The maintainers of VizDoom recently created a pip package, so installing it is as simple as: pip install vizdoom Once it is installed, we will be using the basic.wad environment, which is provided in the Github repository, and needs to be placed in the working directory. The challenge consists of controlling an avatar from a first person perspective in a single square room. There is a single enemy on the opposite side of the room, which appears in a random location each episode. The agent can only move to the left or right, and fire a gun. The goal is to shoot the enemy as quickly as possible using as few bullets as possible. The agent has 300 time steps per episode to shoot the enemy. Shooting the enemy yields a reward of 1, and each time step as well as each shot yields a small penalty. After about 500 episodes per worker agent, the network learns a policy to quickly solve the challenge. Feel free to adjust parameters such as learning rate, clipping magnitude, update frequency, etc. to attempt to achieve ever greater performance or utilize A3C in your own RL tasks. I hope this tutorial has been helpful to those new to A3C and asynchronous reinforcement learning! Now go forth and build AIs. (There are a lot of moving parts in A3C, so if you discover a bug, or find a better way to do something, please don’t hesitate to bring it up here or in the Github. I am more than happy to incorporate changes and feedback to improve the algorithm.) If you’d like to follow my writing on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on twitter @awjuliani. If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated! More from my Simple Reinforcement Learning with Tensorflow series: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @Unity3D & Cognitive Neuroscience PhD student. 
Exploring frontier technology through the lens of artificial intelligence, data science, and the shape of things to come " Elle O'Brien,2.3K,6,https://towardsdatascience.com/romance-novels-generated-by-artificial-intelligence-1b31d9c872b2?source=tag_archive---------6----------------,"Romance Novels, Generated by Artificial Intelligence","I’ve always been fascinated with romance novels — the kind they sell at the drugstore for a couple of dollars, usually with some attractive, soft-lit couples on the cover. So when I started futzing around with text-generating neural networks a few weeks ago, I developed an urgent curiosity to discover what artificial intelligence could contribute to the ever-popular genre. Maybe one day there will be entire books written by computers. For now, let’s start with titles. I gathered over 20,000 Harlequin Romance novel titles and gave them to a neural network, a type of artificial intelligence that learns the structure of text. It’s powerful enough to string together words in a way that seems almost human. 90% human. The other 10% is all wackiness. I was not disappointed with what came out. I even photoshopped some of my favorites into existence (the author names are synthesized from machine learning, too). Let’s have a look by theme: A common theme in romance novels is pregnancy, and the word “baby” had a strong showing in the titles I trained the neural network on. Naturally, the neural network came up with a lot of baby-themed titles: There’s an unusually high concentration of sheikhs, vikings, and billionaires in the Harlequin world. Likewise, the neural network generated some colorful new bachelor-types: I have so many questions. How is the prince pregnant? What sort of consulting does the count do? Who is Butterfly Earl? And what makes the sheikh’s desires so convenient? Although there are exceptions, most romance novels end in happily-ever-afters. A lot of them even start with an unexpected wedding — a marriage of convenience, or a stipulation of a business contract, or a sham that turns into real love. The neural network seems to have internalized something about matrimony: Doctors and surgeons are common paramours for mistresses headed towards the marriage valley: Christmas is a magical time for surgeons, sheikhs, playboys, dads, consultants, and the women who love them: What or where is Knith? I just like Mission: Christmas... This neural network has never seen the big Montana sky, but it has some questionable ideas about cowboys: The neural network generated some decidedly PG-13 titles: They can’t all live happily ever after. Some of the generated titles sounded like M. Night Shyamalan was a collaborator: How did the word “fear” get in there? It’s possible the network generated it without having “fear” in the training set, but a subset of the Harlequin empire is geared towards paranormal and gothic romance that might have included the word (*Note: I checked, and there was “Veil of Fear” published in 2012). To wrap it up, some of the adorable failures and near-misses generated by the neural network: I hope you’ve enjoyed computer-generated romance novel titles half as much as I have. Maybe someone out there can write about the Virgin Viking, or the Consultant Count, or the Baby Surgeon Seduction. I’d buy it. I built a webscraper in Python (thanks, Beautiful Soup!) that grabbed about 20,000 romance novel titles published under the Harlequin brand off of FictionDB.com. 
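The scraper itself is not shown in the post; a rough sketch of the approach with requests and Beautiful Soup might look like the following, where the URL pattern and the CSS selector are placeholders rather than FictionDB's real page structure:

```python
import requests
from bs4 import BeautifulSoup

titles = []
for page in range(1, 5):
    # Placeholder URL pattern; the real listing pages would differ.
    url = f"https://www.fictiondb.com/publisher/harlequin?page={page}"
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # Placeholder selector for the elements holding book titles.
    for node in soup.select("a.title"):
        titles.append(node.get_text(strip=True))

with open("harlequin_titles.txt", "w") as f:
    f.write("\n".join(titles))
```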
Harlequin is, to me, synonymous with the romance genre, although it comprises only a fraction (albeit a healthy one) of the entire market. I fed this list of book titles into a recurrent neural network, using software I got from GitHub, and waited a few hours for the magic to happen. The model I fit was a 3-layer, 256-node recurrent neural network. I also trained the network on the author list in to create some new pen names. For more about the neural network I used, have a look at the fabulous work of Andrej Karpathy. I discovered that “Surgery by the Sea” is actually a real novel, written by Sheila Douglas and published in 1979! So, this one isn’t an original neural network creation. Because the training set is rather small (only about 1 MB of text data), it’s to be expected that sometimes, the machine will spit out one of the titles it was trained on. One of the more challenging aspects of this project was discerning when that happened, since the real published titles can be more surprising than anything born out of artificial intelligence. For example: “The $4.98 Daddy” and “6'1” Grinch” are both real. In fact, the very first romance novel published by Harlequin was called “The Manatee”. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Computational scientist, software developer, science writer Sharing concepts, ideas, and codes. " Slav Ivanov,2.9K,9,https://blog.slavv.com/picking-a-gpu-for-deep-learning-3d4795c273b9?source=tag_archive---------8----------------,Picking a GPU for Deep Learning – Slav,"Quite a few people have asked me recently about choosing a GPU for Machine Learning. As it stands, success with Deep Learning heavily dependents on having the right hardware to work with. When I was building my personal Deep Learning box, I reviewed all the GPUs on the market. In this article, I’m going to share my insights about choosing the right graphics processor. Also, we’ll go over: Deep Learning (DL) is part of the field of Machine Learning (ML). DL works by approximating a solution to a problem using neural networks. One of the nice properties of about neural networks is that they find patterns in the data (features) by themselves. This is opposed to having to tell your algorithm what to look for, as in the olde times. However, often this means the model starts with a blank state (unless we are transfer learning). To capture the nature of the data from scratch the neural net needs to process a lot of information. There are two ways to do so — with a CPU or a GPU. The main computational module in a computer is the Central Processing Unit (better known as CPU). It is designed to do computation rapidly on a small amount of data. For example, multiplying a few numbers on a CPU is blazingly fast. But it struggles when operating on a large amount of data. E.g., multiplying matrices of tens or hundreds thousand numbers. Behind the scenes, DL is mostly comprised of operations like matrix multiplication. Amusingly, 3D computer games rely on these same operations to render that beautiful landscape you see in Rise of the Tomb Raider. Thus, GPUs were developed to handle lots of parallel computations using thousands of cores. Also, they have a large memory bandwidth to deal with the data for these computations. This makes them the ideal commodity hardware to do DL on. Or at least, until ASICs for Machine Learning like Google’s TPU make their way to market. 
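To get a feel for why this matters, here is a small CPU-only timing sketch of the kind of matrix multiplication that dominates deep learning workloads; running the same operation on a GPU through a framework such as TensorFlow or PyTorch (not shown here) is typically orders of magnitude faster:

```python
import time
import numpy as np

a = np.random.rand(4096, 4096).astype(np.float32)
b = np.random.rand(4096, 4096).astype(np.float32)

start = time.time()
c = a @ b   # one large matrix multiplication
print(f"CPU matmul: {time.time() - start:.2f}s")

# A GPU spreads this same computation across thousands of cores,
# which is exactly the workload deep learning keeps repeating.
```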
For me, the most important reason for picking a powerful graphics processor is saving time while prototyping models. If the networks train faster the feedback time will be shorter. Thus, it would be easier for my brain to connect the dots between the assumptions I had for the model and its results. See Tim Dettmers’ answer to “Why are GPUs well-suited to deep learning?” on Quora for a better explanation. Also for an in-depth, albeit slightly outdated GPUs comparison see his article “Which GPU(s) to Get for Deep Learning”. There are main characteristics of a GPU related to DL are: There are two reasons for having multiple GPUs: you want to train several models at once, or you want to do distributed training of a single model. We’ll go over each one. Training several models at once is a great technique to test different prototypes and hyperparameters. It also shortens your feedback cycle and lets you try out many things at once. Distributed training, or training a single network on several video cards is slowly but surely gaining traction. Nowadays, there are easy to use approaches to this for Tensorflow and Keras (via Horovod), CNTK and PyTorch. The distributed training libraries offer almost linear speed-ups to the number of cards. For example, with 2 GPUs you get 1.8x faster training. PCIe Lanes (Updated): The caveat to using multiple video cards is that you need to be able to feed them with data. For this purpose, each GPU should have 16 PCIe lanes available for data transfer. Tim Dettmers points out that having 8 PCIe lanes per card should only decrease performance by “0–10%” for two GPUs. For a single card, any desktop processor and chipset like Intel i5 7500 and Asus TUF Z270 will use 16 lanes. However, for two GPUs, you can go 8x/8x lanes or get a processor AND a motherboard that support 32 PCIe lanes. 32 lanes are outside the realm of desktop CPUs. An Intel Xeon with a MSI — X99A SLI PLUS will do the job. For 3 or 4 GPUs, go with 8x lanes per card with a Xeon with 24 to 32 PCIe lanes. To have 16 PCIe lanes available for 3 or 4 GPUs, you need a monstrous processor. Something in the class of or AMD ThreadRipper (64 lanes) with a corresponding motherboard. Also, for more GPUs you need a faster processor and hard disk to be able to feed them data quickly enough, so they don’t sit idle. Nvidia has been focusing on Deep Learning for a while now, and the head start is paying off. Their CUDA toolkit is deeply entrenched. It works with all major DL frameworks — Tensoflow, Pytorch, Caffe, CNTK, etc. As of now, none of these work out of the box with OpenCL (CUDA alternative), which runs on AMD GPUs. I hope support for OpenCL comes soon as there are great inexpensive GPUs from AMD on the market. Also, some AMD cards support half-precision computation which doubles their performance and VRAM size. Currently, if you want to do DL and want to avoid major headaches, choose Nvidia. Your GPU needs a computer around it: Hard Disk: First, you need to read the data off the disk. An SSD is recommended here, but an HDD can work as well. CPU: That data might have to be decoded by the CPU (e.g. jpegs). Fortunately, any mid-range modern processor will do just fine. Motherboard: The data passes via the motherboard to reach the GPU. For a single video card, almost any chipset will work. If you are planning on working with multiple graphic cards, read this section. RAM: It is recommended to have 2 gigabytes of memory for every gigabyte of video card RAM. 
Having more certainly helps in some situations, like when you want to keep an entire dataset in memory. Power supply: It should provide enough power for the CPU and the GPUs, plus 100 watts extra. You can get all of this for $500 to $1000. Or even less if you buy a used workstation. Here is performance comparison between all cards. Check the individual card profiles below. Notably, the performance of Titan XP and GTX 1080 Ti is very close despite the huge price gap between them. The price comparison reveals that GTX 1080 Ti, GTX 1070 and GTX 1060 have great value for the compute performance they provide. All the cards are in the same league value-wise, except Titan XP. The king of the hill. When every GB of VRAM matters, this card has more than any other on the (consumer) market. It’s only a recommended buy if you know why you want it. For the price of Titan X, you could get two GTX 1080s, which is a lot of power and 16 GBs of VRAM. This card is what I currently use. It’s a great high-end option, with lots of RAM and high throughput. Very good value. I recommend this GPU if you can afford it. It works great for Computer Vision or Kaggle competitions. Quite capable mid to high-end card. The price was reduced from $700 to $550 when 1080 Ti was introduced. 8 GB is enough for most Computer Vision tasks. People regularly compete on Kaggle with these. The newest card in Nvidia’s lineup. If 1080 is over budget, this will get you the same amount of VRAM (8 GB). Also, 80% of the performance for 80% of the price. Pretty sweet deal. It’s hard to get these nowadays because they are used for cryptocurrency mining. With a considerable amount of VRAM for this price but somewhat slower. If you can get it (or a couple) second-hand at a good price, go for it. It’s quite cheap but 6 GB VRAM is limiting. That’s probably the minimum you want to have if you are doing Computer Vision. It will be okay for NLP and categorical data models. Also available as P106–100 for cryptocurrency mining, but it’s the same card without a display output. The entry-level card which will get you started but not much more. Still, if you are unsure about getting in Deep Learning, this might be a cheap way to get your feet wet. Titan X Pascal It used to be the best consumer GPU Nvidia had to offer. Made obsolete by 1080 Ti, which has the same specs and is 40% cheaper. Tesla GPUsThis includes K40, K80 (which is 2x K40 in one), P100, and others. You might already be using these via Amazon Web Services, Google Cloud Platform, or another cloud provider. In my previous article, I did some benchmarks on GTX 1080 Ti vs. K40. The 1080 performed five times faster than the Tesla card and 2.5x faster than K80. K40 has 12 GB VRAM and K80 a whopping 24 GBs. In theory, the P100 and GTX 1080 Ti should be in the same league performance-wise. However, this cryptocurrency comparison has P100 lagging in every benchmark. It is worth noting that you can do half-precision on P100, effectively doubling the performance and VRAM size. On top of all this, K40 goes for over $2000, K80 for over $3000, and P100 is about $4500. And they get still get eaten alive by a desktop-grade card. Obviously, as it stands, I don’t recommend getting them. All the specs in the world won’t help you if you don’t know what you are looking for. Here are my GPU recommendations depending on your budget: I have over $1000: Get as many GTX 1080 Ti or GTX 1080 as you can. If you have 3 or 4 GPUs running in the same box, beware of issues with feeding them with data. 
Also keep in mind the airflow in the case and the space on the motherboard. I have $700 to $900: GTX 1080 Ti is highly recommended. If you want to go multi-GPU, get 2x GTX 1070 (if you can find them) or 2x GTX 1070 Ti. Kaggle, here I come! I have $400 to $700: Get the GTX 1080 or GTX 1070 Ti. Maybe 2x GTX 1060 if you really want 2 GPUs. However, know that 6 GB per model can be limiting. I have $300 to $400: GTX 1060 will get you started. Unless you can find a used GTX 1070. I have less than $300: Get GTX 1050 Ti or save for GTX 1060 if you are serious about Deep Learning. Deep Learning has the great promise of transforming many areas of our life. Unfortunately, learning to wield this powerful tool, requires good hardware. Hopefully, I’ve given you some clarity on where to start in this quest. Disclosure: The above are affiliate links, to help me pay for, well, more GPUs. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Entrepreneur / Hacker Machine learning, Deep learning and other types of learning. " Datafiniti,3,5,https://blog.datafiniti.co/classifying-websites-with-neural-networks-39123a464055?source=tag_archive---------0----------------,Classifying Websites with Neural Networks – Knowledge from Data: The Datafiniti Blog,"At Datafiniti, we have a strong need for converting unstructured web content into structured data. For example, we’d like to find a page like: and do the following: Both of these are hard things for a computer to do in an automated manner. While it’s easy for you or me to realize that the above web page is selling some jeans, a computer would have a hard time making the distinction from the above page from either of the following web pages: Or Both of these pages share many similarities to the actual product page, but also have many key differences. The real challenge, though, is that if we look at the entire set of possible web pages, those similarities and differences become somewhat blurred, which means hard and fast rules for classifications will fail often. In fact, we can’t even rely on just looking at the underlying HTML, since there are huge variations in how product pages are laid out in HTML. While we could try and develop a complicated set of rules to account for all the conditions that perfectly identify a product page, doing so would be extremely time consuming, and frankly, incredibly boring work. Instead, we can try using a classical technique out of the artificial intelligence handbook: neural networks. Here’s a quick primer on neural networks. Let’s say we want to know whether any particular mushroom is poisonous or not. We’re not entirely sure what determines this, but we do have a record of mushrooms with their diameters and heights, along with which of these mushrooms were poisonous to eat, for sure. In order to see if we could use diameter and heights to determine poisonous-ness, we could set up the following equation: A * (diameter) + B * (height) = 0 or 1 for not-poisonous / poisonous We would then try various combinations of A and B for all possible diameters and heights until we found a combination that correctly determined poisonous-ness for as many mushrooms as possible. Neural networks provide a structure for using the output of one set of input data to adjust A and B to the most likely best values for the next set of input data. By constantly adjusting A and B this way, we can quickly get to the best possible values for them. 
In order to introduce more complex relationships in our data, we can introduce “hidden” layers in this model, which would end up looking something like: For a more detailed explanation of neural networks, you can check out the following links: In our product page classifier algorithm, we setup a neural network with 1 input layer with 27 nodes, 1 hidden layer with 25 nodes, and 1 output layer with 3 output nodes. Our input layer modeled several features, including: Our output layer had the following: Our algorithm for the neural network took the following steps: The ultimate output is two sets of input layers (T1 and T2), that we can use in a matrix equation to predict page type for any given web page. This works like so: So how did we do? In order to determine how successful we were in our predictions, we need to determine how to measure success. In general, we want to measure how many true positive (TP) results as compared to false positives (FP) and false negatives (FN). Conventional measurements for these are: Our implementation had the following results: These scores are just over our training set, of course. The actual scores on real-life data may be a bit lower, but not by much. This is pretty good! We should have an algorithm on our hands that can accurately classify product pages about 90% of the time. Of course, identifying product pages isn’t enough. We also want to pull out the actual structured data! In particular, we’re interested in product name, price, and any unique identifiers (e.g., UPC, EAN, & ISBN). This information would help us fill out our product search. We don’t actually use neural networks for doing this. Neural networks are better-suited toward classification problems, and extracting data from a web page is a different type of problem. Instead, we use a variety of heuristics specific to each attribute we’re trying to extract. For example, for product name, we look at the
relevant HTML
tags, and use a few metrics to determine the best choice. We’ve been able to achieve around a 80% accuracy here. We may go into the actual metrics and methodology for developing them in a separate post! We feel pretty good about our ability to classify and extract product data. The extraction part could be better, but it’s steadily being improved. In the meantime, we’re also working on classifying other types of pages, such as business data, company team pages, event data, and more.As we roll-out these classifiers and data extractors, we’re including each one in our crawl of the entire Internet. This means that we can scan the entire Internet and pull out any available data that exists out there. Exciting stuff! You can connect with us and learn more about our business, people, product, and property APIs and datasets by selecting one of the options below. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Instant Access to Web Data Building the world’s largest database of web data — follow our journey. " "Yingjie Miao ",43,6,https://medium.com/kifi-engineering/from-word2vec-to-doc2vec-an-approach-driven-by-chinese-restaurant-process-93d3602eaa31?source=tag_archive---------0----------------,From word2vec to doc2vec: an approach driven by Chinese restaurant process,"Google’s word2vec project has created lots of interests in the text mining community. It’s a neural network language model that is “both supervised and unsupervised”. Unsupervised in the sense that you only have to provide a big corpus, say English wiki. Supervised in the sense that the model cleverly generates supervised learning tasks from the corpus. How? Two approaches, known as Continuous Bag of Words (CBOW) and Skip-Gram (See Figure 1 in this paper). CBOW forces the neural net to predict current word by surrounding words, and Skip-Gram forces the neural net to predict surrounding words of the current word. Training is essentially a classic back-propagation method with a few optimization and approximation tricks (e.g. hierarchical softmax). Word vectors generated by the neural net have nice semantic and syntactic behaviors. Semantically, “iOS” is close to “Android”. Syntactically, “boys” minus “boy” is close to “girls” minus “girl”. One can checkout more examples here. Although this provides high quality word vectors, there is still no clear way to combine them into a high quality document vector. In this article, we discuss one possible heuristic, inspired by a stochastic process called Chinese Restaurant Process (CRP). Basic idea is to use CRP to drive a clustering process and summing word vectors in the right cluster. Imagine we have an document about chicken recipe. It contains words like “chicken”, “pepper”, “salt”, “cheese”. It also contains words like “use”, “buy”, “definitely”, “my”, “the”. The word2vec model gives us a vector for each word. One could naively sum up every word vector as the doc vector. This clearly introduces lots of noise. A better heuristic is to use a weighted sum, based on other information like idf or Part of Speech (POS) tag. The question is: could we be more selective when adding terms? If this is a chicken recipe document, I shouldn’t even consider words like “definitely”, “use”, “my” in the summation. One can argue that idf based weights can significantly reduce noise of boring words like “the” and “is”. However, for words like “definitely”, “overwhelming”, the idfs are not necessarily small as you would hope. 
It’s natural to think that if we can first group words into clusters, words like “chicken”, “pepper” may stay in one cluster, along with other clusters of “junk” words. If we can identify the “relevant” clusters, and only summing up word vectors from relevant clusters, we should have a good doc vector. This boils down to clustering the words in the document. One can of course use off-the-shelf algorithms like K-means, but most these algorithms require a distance metric. Word2vec behaves nicely by cosine similarity, this doesn’t necessarily mean it behaves as well under Eucledian distance (even after projection to unit sphere, it’s perhaps best to use geodesic distance.) It would be nice if we can directly work with cosine similarity. We have done a quick experiment on clustering words driven by CRP-like stochastic process. It worked surprisingly well — so far. Now let’s explain CRP. Imagine you go to a (Chinese) restaurant. There are already n tables with different number of peoples. There is also an empty table. CRP has a hyperparamter r > 0, which can be regarded as the “imagined” number of people on the empty table. You go to one of the (n+1) tables with probability proportional to existing number of people on the table. (For the empty table, the number is r). If you go to one of the n existing tables, you are done. If you decide to sit down at the empty table, the Chinese restaurant will automatically create a new empty table. In that case, the next customer comes in will choose from (n+2) tables (including the new empty table). Inspired by CRP, we tried the following variations of CRP to include the similarity factor. Common setup is the following: we are given M vectors to be clustered. We maintain two things: cluster sum (not centroid!), and vectors in clusters. We iterate through vectors. For current vector V, suppose we have n clusters already. Now we find the cluster C whose cluster sum is most similar to current vector. Call this score sim(V, C). Variant 1: v creates a new cluster with probability 1/(1 + n). Otherwise v goes to cluster C. Variant 2: If sim(V, C) > 1/(1 + n), goes to cluster C. Otherwise with probability 1/(1+n) it creates a new cluster and with probability n/(1+n) it goes to C. In any of the two variants, if v goes to a cluster, we update cluster sum and cluster membership. There is one distinct difference to traditional CRP: if we don’t go to empty table, we deterministically go to the “most similar” table. In practice, we find these variants create similar results. One difference is that variant 1 tend to have more clusters and smaller clusters, variant 2 tend to have fewer but larger clusters. The examples below are from variant 2. For example, for a chicken recipe document, the clusters look like this: Apparently, the first cluster is most relevant. Now let’s take the cluster sum vector (which is the sum of all vectors from this cluster), and test if it really preserves semantic. Below is a snippet of python console. We trained word vector using the c implementation on a fraction of English Wiki, and read the model file using python library gensim.model.word2vec. c[0] below denotes the cluster 0. Looks like the semantic is preserved well. It’s convincing that we can use this as the doc vector. The recipe document seems easy. Now let’s try something more challenging, like a news article. News articles tend to tell stories, and thus has less concentrated “topic words”. 
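Before looking at that harder case, here is a compact sketch of variant 2 (with r = 1 it reproduces the 1/(1+n) rule above), assuming the vectors are word2vec rows for the words of one document:

```python
import numpy as np

def crp_cluster(vectors, r=1.0, seed=0):
    """Variant-2 rule: join the most similar cluster if the cosine similarity
    beats r/(r+n); otherwise open a new cluster with probability r/(r+n)."""
    rng = np.random.default_rng(seed)
    sums, members = [], []            # running cluster-sum vectors and member indices
    for i, v in enumerate(vectors):
        if not sums:
            sums.append(v.copy())
            members.append([i])
            continue
        # Cosine similarity between v and each cluster-sum vector.
        sims = [np.dot(v, s) / (np.linalg.norm(v) * np.linalg.norm(s) + 1e-12)
                for s in sums]
        best = int(np.argmax(sims))
        n = len(sums)
        threshold = r / (r + n)       # with r = 1 this is the 1/(1+n) rule
        if sims[best] > threshold or rng.random() > threshold:
            sums[best] = sums[best] + v
            members[best].append(i)
        else:
            sums.append(v.copy())
            members.append([i])
    return sums, members

# vectors could be word2vec vectors for one document, e.g. (assuming gensim):
# np.array([model.wv[w] for w in doc_words if w in model.wv])
```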
We tried the clustering on this article, titled “Signals on Radar Puzzle Officials in Hunt for Malaysian Jet”. We got 4 clusters: Again, looks decent. Note that this is a simple 1-pass clustering process and we don’t have to specify number of clusters! Could be very helpful for latency sensitive services. There is still a missing step: how to find out the relevant cluster(s)? We haven’t yet done extensive experiments on this part. A few heuristics to consider: There are other problems to think about: 1) how do we merge clusters? Based on similarity among cluster sum vectors? Or averaging similarity between cluster members? 2) what is the minimal set of words that can reconstruct cluster sum vector (in the sense of cosine similarity)? This could be used as a semantic keyword extraction method. Conclusion: Google’s word2vec provides powerful word vectors. We are interested in using these vectors to generate high quality document vectors in an efficient way. We tried a strategy based on a variant of Chinese Restaurant Process and obtained interesting results. There are some open problems to explore, and we would like to hear what you think. Appendix: python style pseudo-code for similarity driven CRP We wrote this post while working on Kifi — Connecting people with knowledge. Learn more. Originally published at eng.kifi.com on March 17, 2014. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. The Kifi Engineering Blog " Milo Spencer-Harper,2.2K,3,https://medium.com/technology-invention-and-more/how-to-build-a-multi-layered-neural-network-in-python-53ec3d1d326a?source=tag_archive---------1----------------,How to build a multi-layered neural network in Python,"In my last blog post, thanks to an excellent blog post by Andrew Trask, I learned how to build a neural network for the first time. It was super simple. 9 lines of Python code modelling the behaviour of a single neuron. But what if we are faced with a more difficult problem? Can you guess what the ‘?’ should be? The trick is to notice that the third column is irrelevant, but the first two columns exhibit the behaviour of a XOR gate. If either the first column or the second column is 1, then the output is 1. However, if both columns are 0 or both columns are 1, then the output is 0. So the correct answer is 0. However, this would be too much for our single neuron to handle. This is considered a “nonlinear pattern” because there is no direct one-to-one relationship between the inputs and the output. Instead, we must create an additional hidden layer, consisting of four neurons (Layer 1). This layer enables the neural network to think about combinations of inputs. You can see from the diagram that the output of Layer 1 feeds into Layer 2. It is now possible for the neural network to discover correlations between the output of Layer 1 and the output in the training set. As the neural network learns, it will amplify those correlations by adjusting the weights in both layers. In fact, image recognition is very similar. There is no direct relationship between pixels and apples. But there is a direct relationship between combinations of pixels and apples. The process of adding more layers to a neural network, so it can think about combinations, is called “deep learning”. Ok, are we ready for the Python code? First I’ll give you the code and then I’ll explain further. Also available here: https://github.com/miloharper/multi-layer-neural-network This code is an adaptation from my previous neural network. 
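The embedded listing is not reproduced in this excerpt; a condensed sketch in the same spirit (not the repository's exact code) of a two-layer network trained with backpropagation on the training set above might look like this:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Training set from the article: the output follows the first two columns (XOR),
# and the third column is irrelevant.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [0, 1, 0],
              [1, 0, 0], [1, 1, 1], [0, 0, 0]])
y = np.array([[0, 1, 1, 1, 1, 0, 0]]).T

np.random.seed(1)
w1 = 2 * np.random.random((3, 4)) - 1   # layer 1: 3 inputs -> 4 hidden neurons
w2 = 2 * np.random.random((4, 1)) - 1   # layer 2: 4 hidden -> 1 output

for _ in range(60000):
    # Forward pass through both layers.
    l1 = sigmoid(X.dot(w1))
    l2 = sigmoid(l1.dot(w2))
    # Backward pass: propagate the error from layer 2 back to layer 1.
    l2_delta = (y - l2) * l2 * (1 - l2)
    l1_delta = l2_delta.dot(w2.T) * l1 * (1 - l1)
    w2 += l1.T.dot(l2_delta)
    w1 += X.T.dot(l1_delta)

# New situation [1, 1, 0]: the prediction should be close to 0.
print(sigmoid(sigmoid(np.array([1, 1, 0]).dot(w1)).dot(w2)))
```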
So for a more comprehensive explanation, it’s worth looking back at my earlier blog post. What’s different this time, is that there are multiple layers. When the neural network calculates the error in layer 2, it propagates the error backwards to layer 1, adjusting the weights as it goes. This is called “back propagation”. Ok, let’s try running it using the Terminal command: python main.py You should get a result that looks like this: First the neural network assigned herself random weights to her synaptic connections, then she trained herself using the training set. Then she considered a new situation [1, 1, 0] that she hadn’t seen before and predicted 0.0078876. The correct answer is 0. So she was pretty close! You might have noticed that as my neural network has become smarter I’ve inadvertently personified her by using “she” instead of “it”. That’s pretty cool. But the computer is doing lots of matrix multiplication behind the scenes, which is hard to visualise. In my next blog post, I’ll visually represent our neural network with an animated diagram of her neurons and synaptic connections, so we can see her thinking. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Studied Economics at Oxford University. Founder of www.moju.io. Interested in politics and AI. Technology trends and New Invention? Follow this collection to update the latest trend! [UPDATE] As a collection editor, I don’t have any permission to add your articles in the wild. Please submit your article and I will approve. Also, follow this collection, please. " Josh,462,9,https://medium.com/technology-invention-and-more/everything-you-need-to-know-about-artificial-neural-networks-57fac18245a1?source=tag_archive---------3----------------,Everything You Need to Know About Artificial Neural Networks,"The year 2015 was a monumental year in the field of artificial intelligence. Not only are computers learning more and learning faster, but we’re learning more about how to improve their systems. Everything is starting to align, and because of it we’re seeing strides we’ve never thought possible until now. We have programs that can tell stories about pictures. We have cars that are driving themselves. We even have programs that create art. If you want to read more about advancements in 2015, read this article. Here at Josh.ai, with AI technology becoming the core of just about everything we do, we think it’s important to understand some of the common terminology and to get a rough idea of how it all works. A lot of the advances in artificial intelligence are new statistical models, but the overwhelming majority of the advances are in a technology called artificial neural networks (ANN). If you’ve read anything about them before, you’ll have read that these ANNs are a very rough model of how the human brain is structured. Take note that there is a difference between artificial neural networks and neural networks. Though most people drop the artificial for the sake of brevity, the word artificial was prepended to the phrase so that people in computational neurobiology could still use the term neural network to refer to their work. Below is a diagram of actual neurons and synapses in the brain compared to artificial ones. Fear not if the diagram doesn’t come through very clearly. What’s important to understand here is that in our ANNs we have these units of calculation called neurons. These artificial neurons are connected by synapses which are really just weighted values. 
What this means is that given a number, a neuron will perform some sort of calculation (for example the sigmoid function), and then the result of this calculation will be multiplied by a weight as it “travels.” The weighted result can sometimes be the output of your neural network, or as I’ll talk about soon, you can have more neurons configured in layers, which is the basic concept to an idea that we call deep learning. Artificial neural networks are not a new concept. In fact, we didn’t even always call them neural networks and they certainly don’t look the same now as they did at their inception. Back during the 1960s we had what was called a perceptron. Perceptrons were made of McCulloch-Pitts neurons. We even had biased perceptrons, and ultimately people started creating multilayer perceptrons, which is synonymous with the general artificial neural network we hear about now. But wait, if we’ve had neural networks since the 1960s, why are they just now getting huge? It’s a long story, and I encourage you to listen to this podcast episode to listen to the “fathers” of modern ANNs talk about their perspective of the topic. To quickly summarize, there’s a hand full of factors that kept ANNs from becoming more popular. We didn’t have the computer processing power and we didn’t have the data to train them. Using them was frowned upon due to them having a seemingly arbitrary ability to perform well. Each one of these factors is changing. Our computers are getting faster and more powerful, and with the internet, we have all kinds of data being shared for use. You see, I mentioned above that the neurons and synapses perform calculations. The question on your mind should be: “How do they learn what calculations to perform?” Was I right? The answer is that we need to essentially ask them a large amount of questions, and provide them with answers. This is a field called supervised learning. With enough examples of question-answer pairs, the calculations and values stored at each neuron and synapse are slowly adjusted. Usually this is through a process called backpropagation. Imagine you’re walking down a sidewalk and you see a lamp post. You’ve never seen a lamp post before, so you walk right into it and say “ouch.” The next time you see a lamp post you scoot a few inches to the side and keep walking. This time your shoulder hits the lamp post and again you say “ouch.” The third time you see a lamp post, you move all the way over to ensure you don’t hit the lamp post. Except now something terrible has happened — now you’ve walked directly into the path of a mailbox, and you’ve never seen a mailbox before. You walk into it and the whole process happens again. Obviously, this is an oversimplification, but it is effectively what backpropogation does. An artificial neural network is given a multitude of examples and then it tries to get the same answer as the example given. When it is wrong, an error is calculated and the values at each neuron and synapse are propagated backwards through the ANN for the next time. This process takes a LOT of examples. For real world applications, the number of examples can be in the millions. Now that we have an understanding of artificial neural networks and somewhat of an understanding in how they work, there’s another question that should be on your mind. How do we know how many neurons we need to use? And why did you bold the word layers earlier? Layers are just sets of neurons. We have an input layer which is the data we provide to the ANN. 
We have the hidden layers, which is where the magic happens. Lastly, we have the output layer, which is where the finished computations of the network are placed for us to use. Layers themselves are just sets of neurons. In the early days of multilayer perceptrons, we originally thought that having just one input layer, one hidden layer, and one output layer was sufficient. It makes sense, right? Given some numbers, you just need one set of computations, and then you get an output. If your ANN wasn’t calculating the correct value, you just added more neurons to the single hidden layer. Eventually, we learned that in doing this we were really just creating a linear mapping from each input to the output. In other words, we learned that a certain input would always map to a certain output. We had no flexibility and really could only handle inputs we’d seen before. This was by no means what we wanted. Now introduce deep learning, which is when we have more than one hidden layer. This is one of the reasons we have better ANNs now, because we need hundreds of nodes with tens if not more layers. This leads to a massive amount of variables that we need to keep track of at a time. Advances in parallel programming also allow us to run even larger ANNs in batches. Our artificial neural networks are now getting so large that we can no longer run a single epoch, which is an iteration through the entire network, at once. We need to do everything in batches which are just subsets of the entire network, and once we complete an entire epoch, then we apply the backpropagation. Along with now using deep learning, it’s important to know that there are a multitude of different architectures of artificial neural networks. The typical ANN is setup in a way where each neuron is connected to every other neuron in the next layer. These are specifically called feed forward artificial neural networks (even though ANNs are generally all feed forward). We’ve learned that by connecting neurons to other neurons in certain patterns, we can get even better results in specific scenarios. Recurrent Neural Networks (RNN) were created to address the flaw in artificial neural networks that didn’t make decisions based on previous knowledge. A typical ANN had learned to make decisions based on context in training, but once it was making decisions for use, the decisions were made independent of each other. When would we want something like this? Well, think about playing a game of Blackjack. If you were given a 4 and a 5 to start, you know that 2 low cards are out of the deck. Information like this could help you determine whether or not you should hit. RNNs are very useful in natural language processing since prior words or characters are useful in understanding the context of another word. There are plenty of different implementations, but the intention is always the same. We want to retain information. We can achieve this through having bi-directional RNNs, or we can implement a recurrent hidden layer that gets modified with each feedforward. If you want to learn more about RNNs, check out either this tutorial where you implement an RNN in Python or this blog post where uses for an RNN are more thoroughly explained. An honorable mention goes to Memory Networks. The concept is that we need to retain more information than what an RNN or LSTM keeps if we want to understand something like a movie or book where a lot of events might occur that build on each other. 
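To make the idea of retaining information concrete, a single step of a vanilla recurrent layer can be sketched like this; it is a toy illustration rather than any particular library's API, and the dimensions are arbitrary:

import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # the new hidden state mixes the current input with the previous hidden state,
    # so information from earlier steps is carried forward through the sequence
    return np.tanh(x_t.dot(W_xh) + h_prev.dot(W_hh) + b_h)

rng = np.random.default_rng(0)
W_xh, W_hh, b_h = rng.normal(size=(10, 16)), rng.normal(size=(16, 16)), np.zeros(16)
h = np.zeros(16)
for x_t in rng.normal(size=(5, 10)):        # a sequence of 5 ten-dimensional inputs
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)   # h now summarizes everything seen so far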
Convolutional Neural Networks (CNN), sometimes called LeNets (named after Yann LeCun), are artificial neural networks where the connections between layers appear to be somewhat arbitrary. However, the reason for the synapses to be setup the way they are is to help reduce the number of parameters that need to be optimized. This is done by noting a certain symmetry in how the neurons are connected, and so you can essentially “re-use” neurons to have identical copies without necessarily needing the same number of synapses. CNNs are commonly used in working with images thanks to their ability to recognize patterns in surrounding pixels. There’s redundant information contained when you look at each individual pixel compared to its surrounding pixels, and you can actually compress some of this information thanks to their symmetrical properties. Sounds like the perfect situation for a CNN if you ask me. Christopher Olah has a great blog post about understanding CNNs as well as other types of ANNs which you can find here. Another great resource for understanding CNNs is this blog post. The last ANN type that I’m going to talk about is the type called Reinforcement Learning. Reinforcement Learning is a generic term used for the behavior that computers exhibit when trying to maximize a certain reward, which means that it in itself isn’t an artificial neural network architecture. However, you can apply reinforcement learning or genetic algorithms to build an artificial neural network architecture that you might not have thought to use before. A great example and explanation can be found in this video, where YouTube user SethBling creates a reinforcement learning system that builds an artificial neural network architecture that plays a Mario game entirely on its own. Another successful example of reinforcement learning can be seen in this video where the company DeepMind was able to teach a program to master various Atari games. Now you should have a basic understanding of what’s going on with the state of the art work in artificial intelligence. Neural networks are powering just about everything we do, including language translation, animal recognition, picture captioning, text summarization and just about anything else you can think of. You’re sure to hear more about them in the future so it’s good that you understand them now! This post was written by Aaron at Josh.ai. Previously, Aaron worked at Northrop Grumman before joining the Josh team where he works on natural language programming (NLP) and artificial intelligence (AI). Aaron is a skilled YoYo expert, loves video games and music, has been programming since middle school and recently turned 21. Josh.ai is an AI agent for your home. If you’re interested in following Josh and getting early access to the beta, enter your email at https://josh.ai. Like Josh on Facebook — http://facebook.com/joshdotai Follow Josh on Twitter — http://twitter.com/joshdotai From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Technology trends and New Invention? Follow this collection to update the latest trend! [UPDATE] As a collection editor, I don’t have any permission to add your articles in the wild. Please submit your article and I will approve. Also, follow this collection, please. 
" Milo Spencer-Harper,317,6,https://medium.com/deep-learning-101/how-to-create-a-mind-the-secret-of-human-thought-revealed-6211bbdb092a?source=tag_archive---------4----------------,How to create a mind: The secret of human thought revealed,"In my quest to learn about AI, I read ‘How to create a mind: The secret of human thought revealed’ by Ray Kurzweil. It was incredibly exciting and I’m going to share what I’ve learned. If I was going to summarise the book in one sentence, I could do no better than Kurzweil’s own words: Kurzweil argues convincingly that it is both possible and desirable. He goes on to suggest that the algorithm may be simpler than we would expect and that it will be based on the Pattern Recognition Theory of the Mind (PRTM). The human brain is the most incredible thing in the known universe. A three-pound object, it can discover relativity, imagine the universe, create music, build the Taj Mahal and write a book about the brain. However, it also has limitations and this gives us clues as to how it works. Recite the alphabet. Ok. Good. Now recite it backwards. The former was easy, the latter likely impossible. Yet, a computer finds it trivial to reverse a list. This tells us that the human brain can only retrieve information sequentially. Studies have also revealed that when thinking about something, we can only hold around four high level concepts in our brain at a time. That’s why we use tools, such as pen and paper to solve a maths problem, to help us think. So how does the human brain work? Mammals actually have two brains. The old reptilian brain, called the amygdala and the conscious part, called the neocortex. The amygdala is pre-programmed through evolution to seek pleasure and avoid pain. We call this instinct. But what distinguishes mammals from other animals, is that we have also evolved to have a neocortex. Our neocortex rationalises the world around us and makes predictions. It allows us to learn. The two brains are tightly bound and work together. However when reading the book, I wondered if these two brains might also be in conflict. It would explain why the idea of internal struggle is present throughout literature and religion: good vs. evil, social conformity vs. hedonism. What’s slightly more alarming is we may have more minds than that. Our brain is divided into two hemispheres, left and right. Studies of split-brain patients, where the connection between them has been severed, shows that these patients are not necessarily aware that the other mind exists. If one mind moves the right-hand, the other mind will post-rationalise this decision by creating a false memory (a process known as confabulation). This has implications for us all. We may not have the free will which we perceive to have. Our conscious part of the brain, may simply be creating explanations for what the unconscious parts have already done. So how does the neocortex work? We know that it consists of around 30 billion cells, which we call neurons. These neurons are connected together and transmit information using electrical impulses. If the sum of the electrical pulses across multiple inputs to a neuron exceeds a certain threshold, that neuron fires causing the next neuron in the chain to fire, and this goes on continuously. We call these processes thoughts. At first, scientists thought this neural network was such a complicated and tangled web, that it would be impossible to ever understand. 
However, Kurzweil uses the example of the Einstein’s famous equation E = mc^2 to demonstrate that sometimes the solutions to complex problems are surprisingly simple. There are many examples in science, from Newtonian mechanics to thermodynamics, which show that moving up a level of abstraction dramatically simplifies modelling complex systems. Recent innovations in brain imaging techniques have revealed that the neocortex contains modules, each consisting of around 100 neurons, repeating over and over again. There are around 300 million of these modules arranged in a grid. So if we could discover the equations which model this module, repeat it on a computer 300 million times and expose it to sensory input, we could create an intelligent being. But what do these modules do? Kurzweil, who has spent decades researching AI, proposes that these modules are pattern recognisers. When reading this page, one pattern recogniser might be responsible for detecting a horizontal stroke. This module links upward to a module responsible for the letter ‘A’, and if the other relevant stroke modules light up, the ‘A’ module also lights up. The modules ‘A’ , ‘p’, ‘p’ and ‘l’ link to the ‘Apple’ module, which in turn is linked to higher level pattern recognisers, such as thoughts about apples. You don’t actually need to see the ‘e’ because the ‘Apple’ pattern recogniser fires downward, telling the one responsible for the letter ‘e’ that there is a high probability of seeing one. Conversely, inhibitory signals suppress pattern recognisers from firing if a higher level pattern recogniser has detected such an event is unlikely, given the context. We literally see what we expect to see. Kurzweil calls this the ‘Pattern Recogniser Theory of the Mind (PRTM)’. Although it is hard for us to imagine, all of our thoughts and decisions, can be explained by huge numbers of these pattern recognisers hooked together. We organise these thoughts to explain the world in a hierarchal fashion and use words to give meaning to these modules. The world is naturally hierarchal and the brain mirrors this. Leaves are on trees, trees make up a forest, and a forest covers a mountain. Language is closely related to our thoughts, because language directly evolved from and mirrors our brain. This helps to explain why different languages follow remarkably similar structures. It explains why we think using our native language. We use language not only to express ideas to others, but to express ideas within our own mind. What’s interesting, is that when AI researchers have worked independently of neuroscientists, their most successful methods turned out to be equivalent to the human brain’s methods. Thus, the human brain offers us clues for how to create an intelligent nonbiological entity. If we work out the algorithm for a single pattern recogniser, we can repeat it on a computer, creating a neural network. Kurzweil argues that these neural networks could become conscious, like a human mind. Free from biological constraints and benefiting from the exponential growth in computing power, these entities could create even smarter entities, and surpass us in intelligence (this prediction is called technological singularity). I’ll discuss the ethical and social considerations in a future blog post, but for now let’s assume it is desirable. The question then becomes, what is the algorithm for a single pattern recogniser? 
Kurzweil recommends using a mathematical technique called hierarchal hidden Markov models, named after the Russian mathematician Andrey Markov (1856–1922). However, this technique is too technical to be properly explained in Kurzweil’s book. So my next two goals are: (1) To learn as much as I can about hierarchal hidden Markov models. (2) To build a simple neural network written in Python from scratch which can be trained to complete a simple task. In my next blog post, I learn how to build a neural network in 9 lines of Python code. Note: Submissions do not necessarily represent the views of the editors. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Studied Economics at Oxford University. Founder of www.moju.io. Interested in politics and AI. Fundamentals and Latest Developments in #DeepLearning " Karl N.,10,7,https://gab41.lab41.org/taking-keras-to-the-zoo-9a76243152cb?source=tag_archive---------5----------------,Taking Keras to the Zoo – Gab41,"If you follow any of the popular blogs like Google’s research, FastML, Smola’s Adventures in Data Land, or one of the indie-pop ones like Edwin Chen’s blog, you’ve probably also used ModelZoo. Actually, if you’re like our boss, you affectionately call it “The Zoo”. (Actually x 2, if you have interesting blogs that you read, feel free to let us know!) Unfortunately, ModelZoo is only supported in Caffe. Fortunately, we’ve taken a look at the difference between the kernels in Keras, Theano, and Caffe for you, and after reading this blog, you’ll be able to load models from ModelZoo into any of your favorite Python tools. Why this post? Why not just download our Github code? In short, it’s better you figure out how these things work before you use them. That way, you’re better armed to use the latest TensorFlow and Neon toolboxes if you’re prototyping and transitioning your code to Caffe. So, there’s Hinton’s Dropout and then there’s Caffe’s Dropout...and they’re different. You might be wondering, “What’s the big deal?” Well sir, I have a name of a guy for you, and it’s Willy...Mr. Willy Nilly. One thing Willy Nilly likes is the number 4096. Another thing he likes is to introduce regularization (which includes Dropout) arbitrarily, and Bayesian theorists aren’t a fan. Those people try to fit their work into the probabilistic framework, and they’re trying to hold onto what semblance of theoretical bounds exist for neural networks. However, for you as a practitioner, understanding who’s doing what will save you hours of debugging code. We singled out Dropout because the way people have implemented it spans the gamut. There’s actually some history as to this variation, but no one really cared, because optimizing for it has almost universally produced similar results. Much of the discussion stems from how the chain rule is implemented since randomly throwing stuff away is apparently not really a differentiable operation. Passing gradients back (i.e., backpropagation) is a fun thing to do; there’s a “technically right” way to do it, and then there’s what’s works. Back to ModelZoo, where we’d recommend you note the only sentence of any substance in this section, and the sentence is as follows. While Keras and perhaps other packages multiply the gradients by the retention probability at inference time, Caffe does not. That is to say, if you have a dropout level of 0.2, your retention probability is 0.8, and at inference time, Keras will scale the output of your prediction by 0.8. 
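To make the difference concrete, the two conventions look roughly like this in numpy; this is an illustration of the idea, not the actual Keras or Caffe source:

import numpy as np

rng = np.random.default_rng(0)
keep_prob = 0.8                                   # a dropout level of 0.2
activations = rng.normal(size=(4, 4096))
mask = rng.random(activations.shape) < keep_prob  # which units survive this pass

# Convention A: drop units during training as-is, then scale by the retention
# probability at inference time so expected magnitudes line up.
train_out_a = activations * mask
infer_out_a = activations * keep_prob

# Convention B ("inverted" dropout): scale by 1/keep_prob during training
# instead, and leave inference untouched.
train_out_b = activations * mask / keep_prob
infer_out_b = activations

Whichever convention a framework follows, the practical point is the same: if the model you load and the framework you deploy on disagree about where the scaling happens, your outputs will be off by a constant factor.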
So, download the ModelZoo *.caffemodels, but know that deploying them on Caffe will produce non-scaled results, whereas Keras will. Hinton explains the reason why you need to scale, and the intuition is as follows. If you’ve only got a portion of your signal seeping through to the next layer during training, you should scale the expectation of what the energy of your final result should be. Seems like a weird thing to care about, right? The argument that minimizes x is still the same as the argument that minimizes 2x. This turns out to be a problem when you’re passing multiple gradients back and don’t implement your layers uniformly. Caffe works in instances like Siamese Networks or Bilinear Networks, but should you scale your networks on two sides differently, don’t be surprised if you’re getting unexpected results. What does this look like in Francois’s code? Look at the “Dropout” code on Github, or in your installation folder under keras/layers/core.py. If you want to make your own layer for loading in the Dropout module, just comment out the part of the code that does this scaling: You can modify the original code, or you can create your own custom layer. (We’ve opted to keep our installation of Keras clean and just implemented a new class that extended MaskedLayer.) BTW, you should be careful in your use of Dropout. Our experience with them is that they regularize okay, but could contribute to vanishing gradients really quickly. Everyday except for Sunday and some holidays, a select few machine learning professors and some signal processing leaders meet in an undisclosed location in the early hours of the morning. The topic of their discussion is almost universally, “How do we get researchers and deep learning practitioners to code bugs into their programs?” One of the conclusions a while back was that the definition of convolution and dense matrix multiplication (or cross-correlation) should be exactly opposite of each other. That way, when people are building algorithms that call themselves “Convolutional Neural Networks”, no one will know which implementation is actually being used for the convolution portion itself. For those who don’t know, convolutions and sweeping matrix multiplication across an array of data, differ in that convolutions will be flipped before being slid across the array. From Wikipedia, the definition is: On the other hand, if you’re sweeping matrix multiplications across the array of data, you’re essentially doing cross-correlation, which on Wikipedia, looks like: Like we said, the only difference is that darned minus/plus sign, which caused us some headache. We happen to know that Theano and Caffe follow different philosophies. Once again, Caffe doesn’t bother with pleasantries and straight up codes efficient matrix multiplies. To load models from ModelZoo into either Keras and Theano will require the transformation because they strictly follow the definition of convolution. The easy fix is to flip it yourself when you’re loading the weights into your model. For 2D convolution, this looks like: weights=weights[:,:,::-1,::-1] Here, the variable “weights” will be inserted into your model’s parameters. You can set weights by indexing into the model. For example, say I want to set the 9th layer’s weights. I would type: model.layers[9].set_weights(weights) Incidentally, and this is important, when loading any *.caffemodel into Python, you may have to transpose it in order to use it. 
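The flip relationship described above is easy to check directly; the snippet below is a quick sanity check using scipy, and the four-dimensional blob at the end is a hypothetical stand-in for a kernel read out of a *.caffemodel:

import numpy as np
from scipy.signal import convolve2d, correlate2d

image = np.random.randn(5, 5)
kernel = np.random.randn(3, 3)

# convolution with a spatially flipped kernel equals cross-correlation with the original
conv = convolve2d(image, kernel[::-1, ::-1], mode="valid")
corr = correlate2d(image, kernel, mode="valid")
assert np.allclose(conv, corr)

# so a Caffe-style kernel blob (cross-correlation convention) gets flipped before being
# handed to a convolution-convention framework; the blob here is a hypothetical stand-in
# shaped (output channels, input channels, height, width)
caffe_blob = np.random.randn(64, 3, 3, 3)
flipped = caffe_blob[:, :, ::-1, ::-1]

As noted above, depending on the layout the target layer expects, a transpose of the loaded blob may also be required.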
You can quickly find this out by loading it if you get an error, but we thought it worth noting. Alright, alright, we know what you’re really here for; just getting the code and running with it. So, we’ve got some example code that classifies using Keras and the VGG net from the web at our Git (see the link below). But, let’s go through it just a bit. Here’s a step by step account of what you need to do to use the VGG caffe model. And now you have the basics! Go ahead and take a look at our Github for some goodies. Let us know! Originally published at www.lab41.org on December 13, 2015. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Gab41 is Lab41’s blog exploring data science, machine learning, and artificial intelligence. Geek out with us! " Milo Spencer-Harper,42,3,https://medium.com/@miloharper/thanks-so-much-for-your-response-jared-really-glad-to-hear-you-enjoyed-reading-it-9d73caa469ff?source=tag_archive---------6----------------,Thanks so much for your response Jared. Really glad you enjoyed reading it.,"Thanks so much for your response Jared. Really glad you enjoyed reading it. Could you go into more detail about finding the error on layer 1? That’s a really great question! I’ve changed this response quite a bit as I wrote it, because your question helped me improve my own understanding. It sounds like you know quite a lot about neural networks already, however I’m going to explain everything fully for readers who are new to the field. In the article you read, I modelled the neural network using matrices (grids of numbers). That’s the most common method as it is computationally faster and mathematically equivalent, but it hides a lot of the details. For example, line 15 calculates the error in layer 1, but it is hard to visualise what it is doing. To help me learn, I’ve re-written that same code by modelling the layers, neurons and synapses explicitly and have created a video of the neural network learning. I’m going to use this new version of my code to answer your question. For clarity, I’ll describe how I’m going to refer to the layers. The three input neurons are layer 0, the four neurons in the hidden layer are layer 1 and the single output neuron is layer 2. In my code, I chose to associate the synapses with the neuron they flow into. How do I find the error in layer 1? First I calculate the error of the output neuron (layer 2), which is the difference between its output and the output in the training set example. Then I work my way backwards through the neural network. So I look at the incoming synapses into layer 2, and estimate how much each of the neurons in layer 1 were responsible for the error. This is called back propagation. In my new version of the code, the neural network is represented by a class called NeuralNetwork, and it has a method called train(), which is shown below. You can see me calculating the error of the ouput neuron (lines 3 and 4). Then I work backwards through the layers (line 5). Next, I cycle through all the neurons in a layer (line 6) and call each individual neuron’s train() method (line 7). But what does the neuron’s train() method do? Here it is: You can see that I cycle through every incoming synapse into the neuron. The two key things to note are: Let’s consider Line 4 even more carefully, since this is the line which answers your question directly. 
For each neuron in layer 1, its error is equal to the error in the output neuron (layer 2), multiplied by the weight of its synapse into the output neuron, multiplied by the sensitivity of the output neuron to input. The sensitivity of a neuron to input, is described by the gradient of its output function. Since I used the Sigmoid curve as my ouput function, the gradient is the derivative of the Sigmoid curve. As well as using the gradient to calculate the errors, I also used the gradient to adjust the weights, so this method of learning is called gradient descent. If you look back at my old code, which uses matrices you can see that it is mathematically equivalent (unless I made a mistake). With the matrices method, I calculated the error for all the neurons in layer 1 simultaneously. With the new code, I iterated through each neuron separately. I hope that helps answer your question. Also, I’m curious if there is any theory or rule of thumb on how many hidden layers and how many neurons in each layer should be used to solve a problem. Another good question! I’m not sure. I’m pretty new to neural networks. I only started learning about them recently. I did read a book by the AI researcher Ray Kurzweil, which said that an evolutionary approach works better than consulting experts, when selecting the overall parameters for a neural network. Those neural networks which learned the best, would be selected, he would make random mutations to the parameters, and then pit the offspring against one another. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Studied Economics at Oxford University. Founder of www.moju.io. Interested in politics and AI. " Nikolai Savas,50,10,https://medium.com/@savas/craig-using-neural-networks-to-learn-mario-a76036b639ad?source=tag_archive---------7----------------,CrAIg: Using Neural Networks to learn Mario – Nikolai Savas – Medium,"Joe Crozier and I recently came back from YHack, a 36-hour, 1500 person hackathon held by Yale University. This is our second year in a row attending, and for the second time we managed to place in the top 8! Our project, named “crAIg”, is a self-teaching algorithm that learns to play Super Mario Bros. for the Nintendo Entertainment System (NES). It begins with absolutely no knowledge of what Mario is, how to play it, or what winning looks like, and using neuroevolution, slowly builds up a foundation for itself to be able to progress in the game. My focus on this project was the gritty details of the implementation of crAIg’s evolution algorithm, so I figured I’d make a relatively indepth blog post about it. crAIg’s evolution is based on a paper titled Evolving Neural Networks through Augmented Topologies, specifically an algorithm titled “NEAT”. The rest of the blog post is going to cover my implementation of it, hopefully in relatively layman’s terms. Before we jump right into the algorithm, I’m going to lay a foundation for the makeup of crAIg’s brain. His “brain” at any given point playing the game is made up of a collection of “neurons” and “synapses”, alternatively titled nodes and connections/links. Essentially, his brain is a directed graph. Above is the second part of this project, a Node.js server that displays the current state of crAIg’s brain, or what he is “thinking”. Let’s go through it quickly to understand what it’s representing. On the left you see a big grid of squares. This is what the game looks like right now, or what crAIg can “see”. 
He doesn’t know what any of the squares mean, but he knows that an “air” tile is different from a “ground” tile in some way. Each of the squares is actually an input neuron. On the right side you can see the 4 “output neurons”, or the buttons that crAIg can press. You can also see a line going from one of the black squares on the left grid to the “R” neuron, labelled “1”. This is a synapse, and when the input neuron fires on the left, it will send a signal down the synapse and tell crAIg to press the “R” button. In this way, crAIg walks right. As crAIg evolves, more neurons and synapses are created until his brain might look something more like this: In this one I’ll just point out a couple things. First of all, the green square on the left is a goomba. Second, you can see another neuron at the very bottom (labelled 176). This is called a hidden neuron, and represents a neuron that is neither input nor output. They appear in crAIg’s brain for added complexity as he evolves. You can also see that at his time of death (Mario just died to a goomba), he was trying to press the “R” and “B” buttons. While learning Mario is a neat application of neural networks and neuroevolution, it serves mostly as a means to demonstrate the power of these self-evolving neural networks. In reality, the applications for neural networks is endless. While crAIg only learned how to play a simple NES game, the exact same algorithm that was implemented could also be applied to a robot that cleans your house, works in a factory, or even paints beautiful paintings. crAIg is a cool peek into the future where machines no longer need to be programmed to complete specific tasks, but are instead given guidelines and can teach themselves and learn from experience. As the tasks we expect machines to complete become more and more complex, it becomes less possible to “hard code” their tasks in. We need more versatile machines to work for us, and evolving neural networks are a step in that direction. If you’re curious about some history behind the problems encountered by neuroevolution, I highly recommend reading the paper that this algorithm is based off. The first section of the paper covers many different approaches to neuroevolution and their benefits. NEAT is a genetic algorithm that puts every iteration of crAIg’s brain to the test and then selectively breeds them in a very similar way to the evolution of species in nature. The hierarchy is as follows: Synapse/Neuron: Building blocks of crAIg’s brain. Genome: An iteration of crAIg’s brain. Essentially a collection of neurons and synapses. Species: A collection of Genomes. Generation: An iteration of the NEAT algorithm. This is repeated over and over to evolve crAIg. The first step every generation is to calculate the fitness of every individual genome from the previous generation. This involves running the same function on each genome so that NEAT knows how successful each one is. For crAIg, this means running through a Mario level using a particular genome, or “brain”. After running through the level, we determine the “fitness” of the genome by this function: Once the fitness of every genome has been calculated, we can move on to the next portion of the algorithm. This part of the algorithm is probably the least intuitive. The reason for this “adjusted fitness” is to discourage species from growing too big. As the population in a species goes up, their “adjusted fitness” goes down, forcing the genetic algorithm to diversify. 
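crAIg's exact formulas are in its source; the standard NEAT-style fitness sharing that this step is based on looks roughly like this (my own illustrative sketch, not the project's code):

def adjusted_fitness(raw_fitnesses):
    # fitness sharing: divide each genome's raw fitness by the size of its species,
    # so a species' claim on the next generation stops growing just because it is big
    size = len(raw_fitnesses)
    return [f / size for f in raw_fitnesses]

print(adjusted_fitness([10, 9, 8, 7]))  # a species of 4 -> [2.5, 2.25, 2.0, 1.75]
print(adjusted_fitness([9]))            # a species of 1 -> [9.0]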
The proper implementation of this algorithm is relatively intensive, so for crAIg’s implementation we simplified it to the following: The important part here is that each genome now has an adjusted fitness value associated with it. Here’s where the natural selection part comes in! The “Survival of the fittest” portion is all about determining how many genomes survive another generation, as well as how many offspring will be born in the species. The algorithms used here aren’t outlined directly in the paper, so most of these algorithms were created through trial and error. The first step is to determine how many off a species will die to make room for more babies. This is done proportionally to a species’ adjusted fitness: the higher the adjusted fitness, the more die off to make room for babies. The second step is to determine how many children should be born in the species. This is also proportional to the adjusted fitness of the species. By the end of these two functions, the species will have a certain number of genomes left as well as a “baby quota” — the difference between the number of genomes and the populationSize. This algorithm is necessary to allow for species to be left behind. Sometimes a species will go down the completely wrong path, and there’s no point in keeping them around. This algorithm works in a very simple way: If a species is in the bottom __% of the entire generation, it is marked for extinction. If a species is marked for extinction __ times in a row, then all genomes in the species are killed off. Now comes the fun genetics part! Each species should have a certain number of genomes as well as a certain number of allotted spots for new offspring. Those spots now need to be populated. Each empty population spot needs to be filled, but can be filled through either “asexual” or “sexual” reproduction. In other words, offspring can result from either two genomes in the species being merged or from a mutation of a single genome in the species. Before I discuss the process of “merging” two genomes, I’ll first discuss mutations. There are three kinds of mutations that can happen to a genome in NEAT. They are as follows: This involves a re-distribution of all synapse weights in a genome. They can be either completely re-distributed or simply “perturbed”, meaning changed slightly. 2. Mutate Add Synapse Adding a synapse means finding two previously unconnected nodes and connecting them with a synapse. This new synapse is given a random weight. 3. Mutate Add Node This is the trickiest of the mutations. When adding a node, you need to split an already existing synapse into two synapses and add a node in between them. The weight of the original synapse is copied on to the second synapse, while the first synapse is given a weight of 1. One important fact to note is that the first synapse (bright red in the above picture) is not actually deleted, but merely “disabled”. This means that it exists in the genome, but it is marked as inactive. Synapses added in either Mutate Add Node or Mutate Add Synapse are given a unique “id” called a “historical marking”, that is used in the crossover (mating) algorithm. When two genomes “mate” to produce an offspring, there is an algorithm detailed in the NEAT paper that must be followed. The intuition behind it is to match up common ancestor synapses (remember we’ve been keeping their “historical marking”s), then take the mutations that don’t match up and mix and match them to create the child. 
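Sketched in code, the matching-up step looks roughly like this; it is a simplification in the spirit of section 3.2 of the paper rather than crAIg's exact implementation, with synapse genes represented as dicts keyed by their historical markings:

import random

def crossover(parent_a, parent_b, a_is_fitter=True):
    # parent_a, parent_b: dicts mapping historical marking -> synapse gene.
    # Matching genes (same marking in both parents) are inherited at random from
    # either parent; genes that only one parent has come from the fitter parent.
    fitter, other = (parent_a, parent_b) if a_is_fitter else (parent_b, parent_a)
    child = {}
    for marking, gene in fitter.items():
        if marking in other and random.random() < 0.5:
            child[marking] = dict(other[marking])
        else:
            child[marking] = dict(gene)
    return child

# toy genomes: historical marking -> {'weight': ..., 'enabled': ...}
a = {1: {"weight": 0.5, "enabled": True}, 2: {"weight": -1.2, "enabled": True}}
b = {1: {"weight": 0.9, "enabled": True}, 3: {"weight": 0.1, "enabled": False}}
child = crossover(a, b)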
Once a child has been created in this way, it undergoes the mutation process outlined above. I won’t go into too much detail on this algorithm but if you’re curious about it you can find a more detailed explanation of it in section 3.2 of the original paper, or you can see the code I used to implement it here. Once all the babies have been created in every species, we can finally progress to the final stage of the genetic algorithm: Respeciation. Essentially, we first select a “candidate genome” from each species. This genome is now the representative for the species. All genomes that are not selected as candidates are put into a generic pool and re-organized. The re-organization relies on an equation called the “compatibility distance equation”. This equation determines how similar (or different) any two given genomes are. I won’t go into the gritty details of how the equation works, as it is well explain in section 3.3 of the original paper, as well as here in crAIg’s code. If a genome is too different from any of the candidate genomes, it is placed in its own species. Using this process, all of the genomes in the generic pool are re-placed into species. Once this process has completed, the generation is done, and we are ready to re-calculate the fitness of each of the genomes. While creating crAIg meant getting very little sleep at YHack, it was well worth it for a couple reasons. First of all, the NEAT algorithm is a very complex one. Learning how to implement a complex algorithm without losing myself in its complexity was an exercise in code cleanliness, despite being pressed for time because of the hackathon. It was also very interesting to create an algorithm that is mostly based off a paper as opposed to one that I have example code to work with. Often this meant carefully looking into the wording used in the paper to determine whether I should be using a > or a >=, for example. One of the most difficult parts of this project was that I was unable to test as I was programming. I essentially wrote all of the code blind and then was able to test and debug it once it had all been created. This was for a couple reasons, partially because of the time constraints of a hackathon, and partially because the algorithm as a whole has a lot of interlocking parts, meaning they needed to be in a working state to be able to see if the algorithm worked. Overall I’m happy and proud by how Joe and I were able to deal with the stress of creating such a deep and complex project from scratch in a short 36 hour period. Not only did we enjoy ourselves and place well, but we also managed to teach crAIg some cool skills, like jumping over the second pipe in Level 1: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. http://savas.ca/ — niko@savas.ca " Dr Ben Medlock,32,4,https://medium.com/@Ben_Medlock/why-turing-s-legacy-demands-a-smarter-keyboard-9e7324463306?source=tag_archive---------8----------------,Why Turing’s legacy demands a smarter keyboard – Dr Ben Medlock – Medium,"Why Turing’s legacy demands a smarter keyboard When you start a company, you dream of walking in the footsteps of your heroes. For those working in artificial intelligence, the British computer scientist and father of the field Alan Turing always comes to mind. I thought of him when I did my PhD, when I co-founded an AI keyboard company in 2009, and when we pasted his name on a meeting room door in our first real office. As a British tech company, today is a big day for SwiftKey. 
We’ve introduced some of the principles originally conceived of by Turing — artificial neural networks — into our smartphone keyboard for the first time. I want to explain how we managed to do it and how a technology like this, something you may never have heard of before, will help define the smartphone experience of the future. This is my personal take; for the official version check out the SwiftKey blog. Frustration-free typing on a smartphone relies on complex software to automatically fix typos and predict the words you might want to use. SwiftKey has been at the forefront of this area since 2009, and today our software is used across the world on more than half a billion handsets. Soon after we launched the first version of our app in 2010, I started to think about using neural networks to power smartphone typing rather than the more traditional n-gram approach (a sophisticated form of word frequency counting). At the time it seemed little more than theoretical, as mobile hardware wasn’t up to the task. However, three years later, the situation began to look more favorable, and in late 2013, our team started working on the idea in earnest. In order to build a neural network-powered SwiftKey, our engineers were tasked with the enormous challenge of coming up with a solution that would run locally on a smartphone without any perceptible lag. Neural network language models are typically deployed on large servers, requiring huge computational resources. Getting the tech to fit into a handheld mobile device would be no small feat. After many months of trial, error and lots of experimentation, the team realized they might have found an answer with a combination of two approaches. The first was to make use of the graphical processing unit (GPU) on the phone (utilizing the powerful hardware acceleration designed for rendering complex graphical images) but thanks to some clever programming, they were also able to run the same code on the standard processing unit when the GPU wasn’t available. This combo turned out to be the winning ticket. So, back to Turing. In 1948 he published a little-known essay called Intelligent Machinery in which he outlined two forms of computing he felt could ultimately lead to machines exhibiting intelligent behavior. The first was a variant of his highly influential “universal Turing machine”, destined to become the foundation for hardware design in all modern digital computers. The second was an idea he called an “unorganized machine”, a type of computer that would use a network of “artificial neurons” to accept inputs and translate them into predicted outputs. Connecting together many small computing units, each with the ability to receive, modify and pass on basic signals, is inspired by the structure of the human brain. That’s why the appropriation of this concept in software form is called an “artificial neural network”, or a “neural network” for short. The idea is that a collection of artificial neurons are connected together in a specific way (called a “topology”) such that a given set of inputs (what you’ve just typed, for example) can be turned into a useful output (e.g. your most likely next word). The network is then “trained” on millions, or even billions, of data samples and the behavior of the individual neurons is automatically tweaked to achieve the desired overall results. In the last few years, neural network approaches have facilitated great progress on tough problems such as image recognition and speech processing. 
Researchers have also begun to demonstrate advances in non-traditional tasks such as automatically generating whole sentence descriptions of images. Such techniques will allow us to better manage the explosion of uncategorized visual data on the web, and will lead to smarter search engines and aids for the visually impaired, among a host of other applications. The fact that the human brain is so adept at working with language suggests that neural networks, inspired by the brain’s internal structure, are a good bet for the future of smartphone typing. In principle, neural networks also allow us to integrate powerful contextual cues to improve accuracy, for instance a user’s current location and the time of day. These will be stepping stones to more efficient and personal device interactions — the keyboard of the future will provide an experience that feels less like typing and more like working with a close friend or personal assistant. Applying neural networks to real world problems is part of a wider technology movement that’s changing the face of consumer electronics for good. Devices are getting smarter, more useful and more personal. My goal is that SwiftKey contributes to this revolution. We should all be spending less time fixing typos and more time saying what we mean, when it matters. It’s the legacy we owe to Turing. The photograph “Alan Turing” by joncallas is licensed under CC BY 2.0. Technopreneur, @SwiftKey co-founder " Nieves Ábalos,18,7,https://labs.beeva.com/sem%C3%A1ntica-desde-informaci%C3%B3n-desestructurada-90ce87736812?source=tag_archive---------9----------------,Semantics from unstructured information – BEEVA Labs,"Detecting patterns is a core task in the world of Natural Language Processing. Pattern detection lets us classify documents, which has many applications: sentiment analysis, document retrieval, web search, spam filtering, and so on. This classification is done automatically, either in a supervised or an unsupervised way (the latter is also known as document clustering). Among the most classical and widely used (generally supervised) techniques we find Naive Bayes classifiers, decision trees (ID3 or C4.5), tf-idf, Latent Semantic Indexing (LSI) and Support Vector Machines (SVM). Some of the techniques used to extract features are inspired by how human beings learn from simple information and build up to more complex information. We can distinguish between neural networks (some neural network topologies fall under the umbrella of ‘deep learning’) and techniques that do not use such networks to recognize patterns. At BEEVA we have run into the same problem several times: how do we know whether two documents are similar? (and by “similar” we mean that they are about the same thing). Among other things, this would let us categorize documents under the same topic automatically. So, from the outset, we face two challenges: we need to represent the documents in a way that the algorithms we use can understand. Normally, these representations or models are based on matrices of the features each document possesses. To represent text, we can use either local or continuous representation techniques.
Local representations are those in which words are only considered in isolation, and a text is represented as a set of index terms or keywords (n-grams, bag-of-words...). This type of representation does not take the relationships between terms into account. Continuous representations are those in which the context of the words and the relationships between them are taken into account, and texts are represented as matrices, vectors, sets or even nodes (LSA or LSI, LDS, LDA, and distributed or predictive representations using neural networks). For our first challenge, extracting semantics, we are going to try a continuous representation called distributed representations of words. The idea is to learn vector representations of words; in other words, we will have a multidimensional space in which each word is represented as a vector. One of the interesting things about these vectors is that they capture features as relevant as the syntactic and semantic properties of words (Turian et al., 2010). Another is that this learning happens on unlabelled input data, that is, it is unsupervised. These vectors can be used as input to many Natural Language Processing and Machine Learning applications. In fact, for our second challenge we will use these vectors to try to extract topics from documents. To apply this technique we use the word2vec tool (Mikolov et al., Google, 2013), which takes any corpus of texts or documents as input and produces vectors representing the words as output. The architecture word2vec is based on uses neural networks to learn these representations, and it is also possible to obtain vectors that represent sentences, paragraphs or even whole documents (Le and Mikolov, 2014). First, we used the Python implementation of word2vec included in the gensim library. As input for generating the vectors we have two datasets of documents in Spanish: Wikipedia and Yahoo! Answers (from the latter, only the questions that are in Spanish). The process is the following (Figure 1): given the set of texts, a vocabulary is built and word2vec learns the vector representations of the words. The learning algorithms word2vec uses are continuous bag-of-words and continuous skip-gram. Both algorithms learn representations of a word that are useful for predicting other words in the sentence. Since we know the vectors capture many linguistic regularities, we can apply vector operations to extract many interesting properties. For example, if we want to know which words are most similar to a given one, we look for the closest ones using cosine distance or cosine similarity; for instance, with the Wikipedia model, the five words most similar to a given one. We can also get the six words most similar to two given words with both the Wikipedia and the Yahoo models, to see the differences: Another interesting property is vector arithmetic: vector(rey) - vector(hombre) + vector(mujer) gives a vector very close to vector(reina). For example, vector(pareja) - vector(hombre) + vector(novio) gives these vectors as a result:
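In gensim, queries like the ones above look roughly like this (a sketch with a toy corpus; exact parameter names vary between gensim versions):

from gensim.models import Word2Vec

# 'sentences' stands in for the real corpus: an iterable of tokenized documents
# (Wikipedia pages, Yahoo! Answers questions, ...)
sentences = [["el", "rey", "y", "la", "reina"], ["el", "hombre", "y", "la", "mujer"]]
model = Word2Vec(sentences, vector_size=300, window=5, min_count=1, workers=4)
# (older gensim releases call vector_size 'size' and expose most_similar on the model itself)

model.wv.most_similar("rey", topn=5)                      # five words closest to a given one
model.wv.most_similar(positive=["rey", "reina"], topn=6)  # six words closest to two given ones
model.wv.most_similar(positive=["rey", "mujer"], negative=["hombre"], topn=1)  # ~ "reina"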
Having worked with two different datasets, Wikipedia and Yahoo! Answers, we can create two vector spaces that differ slightly in the vocabulary used and the semantics inherent in them. In the Yahoo! space, among the most similar words we find the same word misspelled in different ways; this does not happen with Wikipedia, where the writing is much more correct. In addition, the Yahoo! set contains questions not only in peninsular Spanish but also in Mexican, Argentinian and other Latin American variants, which lets us find similar words across different dialects. As for the time it takes to build our vector space, most of it goes into preprocessing and cleaning the documents. The gensim implementation allows you to tweak the model-building parameters and even use several workers with Cython to speed up training. The quality of these vectors depends on the amount of training data, the size of the vectors and the algorithm chosen for training. To get better results, the models need to be trained on large datasets with sufficient dimensionality. For more details we recommend reading the work of Mikolov and Le. The following table shows roughly how long it takes to train about 500 MB of data, enough to obtain a good vector model. The total time is the time spent preprocessing the data, training, and saving the model for later use. To work with vector representations of documents we used doc2vec, also from gensim. As input data, we treated a document as either a Wikipedia page or a Yahoo question together with its answers. We varied the size of the input file (from 100,000 documents to 258,088) for one worker and a dimensionality of 300, and the training time drops considerably, as the following table shows: The tests we ran to examine the behaviour of the resulting vector space were not as satisfactory as with word2vec. The results for similar words are worse than with word2vec, and when looking for documents similar to a given one, the model does not return anything very meaningful. As an alternative we are looking for other methods that can tell us which documents resemble each other; we will present them in the next post. Word2vec is considered a method inspired by deep learning (I recommend reading this article to clarify the concepts) by some groups of specialists in the field, and not so much ‘deep learning’ as ‘shallow learning’ by others. Either way, building vector spaces that capture the syntactic and semantic properties of words, automatically and without supervision, opens up a whole world of possibilities to explore. These vectors serve as input to many applications such as machine translation, clustering and categorization, and can even feed other deep-learning-based models. Beyond natural language, the same ideas are also being applied to images and speech recognition. Since doc2vec did not convince us, our next step is to apply these vector spaces to extracting topics and categories from documents using techniques common in Natural Language Processing and Machine Learning, such as tf-idf. We will talk about that in a following post.
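For reference, the doc2vec experiment described above looks roughly like this in gensim (again a sketch with toy documents; parameter and attribute names vary between gensim versions):

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# each document (a Wikipedia page, or a Yahoo! question together with its answers)
# becomes a TaggedDocument with a unique tag
docs = [TaggedDocument(words=["receta", "de", "pollo", "al", "curry"], tags=["doc_0"]),
        TaggedDocument(words=["el", "rey", "y", "la", "reina"], tags=["doc_1"])]
model = Doc2Vec(docs, vector_size=300, min_count=1, workers=1, epochs=20)
# (older gensim releases use size/iter and model.docvecs instead of vector_size/epochs and model.dv)

model.dv.most_similar("doc_0")                        # documents most similar to a given one
vec = model.infer_vector(["pollo", "con", "arroz"])   # vector for an unseen document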
The Yahoo! data corpus (L6 - Yahoo! Answers Comprehensive Questions and Answers version 1.0 (multi part)) was obtained thanks to Yahoo! Webscope. To process this data we used the gensim library for Python, which implements word2vec. Main image source: freedigitalphotos.net / kangshutters From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Conversational interfaces expert, indie maker, product manager & entrepreneur. #VoiceFirst, #chatbots, #AI, #NLProc. Creating future concepts at @monoceros_xyz Innovative Knowledge " Arthur Juliani,9K,6,https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0?source=tag_archive---------0----------------,Simple Reinforcement Learning with Tensorflow Part 0: Q-Learning with Tables and Neural Networks,"For this tutorial in my Reinforcement Learning series, we are going to be exploring a family of RL algorithms called Q-Learning algorithms. These are a little different than the policy-based algorithms that will be looked at in the following tutorials (Parts 1–3). Instead of starting with a complex and unwieldy deep neural network, we will begin by implementing a simple lookup-table version of the algorithm, and then show how to implement a neural-network equivalent using Tensorflow. Given that we are going back to basics, it may be best to think of this as Part-0 of the series. It will hopefully give an intuition into what is really happening in Q-Learning that we can then build on going forward when we eventually combine the policy gradient and Q-learning approaches to build state-of-the-art RL agents (If you are more interested in Policy Networks, or already have a grasp on Q-Learning, feel free to start the tutorial series here instead). Unlike policy gradient methods, which attempt to learn functions which directly map an observation to an action, Q-Learning attempts to learn the value of being in a given state, and taking a specific action there. While both approaches ultimately allow us to take intelligent actions given a situation, the means of getting to that action differ significantly. You may have heard about Deep Q-Networks which can play Atari Games. These are really just larger and more complex implementations of the Q-Learning algorithm we are going to discuss here. For this tutorial we are going to be attempting to solve the FrozenLake environment from the OpenAI gym. For those unfamiliar, the OpenAI gym provides an easy way for people to experiment with their learning agents in an array of provided toy games. The FrozenLake environment consists of a 4x4 grid of blocks, each one either being the start block, the goal block, a safe frozen block, or a dangerous hole. The objective is to have an agent learn to navigate from the start to the goal without moving onto a hole. At any given time the agent can choose to move either up, down, left, or right. The catch is that there is a wind which occasionally blows the agent onto a space they didn't choose. As such, perfect performance every time is impossible, but learning to avoid the holes and reach the goal is certainly still doable. The reward at every step is 0, except for entering the goal, which provides a reward of 1. Thus, we will need an algorithm that learns long-term expected rewards. This is exactly what Q-Learning is designed to provide. In its simplest implementation, Q-Learning is a table of values for every state (row) and action (column) possible in the environment.
Within each cell of the table, we learn a value for how good it is to take a given action within a given state. In the case of the FrozenLake environment, we have 16 possible states (one for each block), and 4 possible actions (the four directions of movement), giving us a 16x4 table of Q-values. We start by initializing the table to be uniform (all zeros), and then as we observe the rewards we obtain for various actions, we update the table accordingly. We make updates to our Q-table using something called the Bellman equation, which states that the expected long-term reward for a given action is equal to the immediate reward from the current action combined with the expected reward from the best future action taken at the following state. In this way, we reuse our own Q-table when estimating how to update our table for future actions! In equation form, the rule looks like this: This says that the Q-value for a given state (s) and action (a) should represent the current reward (r) plus the maximum discounted (γ) future reward expected according to our own table for the next state (s’) we would end up in. The discount variable allows us to decide how important the possible future rewards are compared to the present reward. By updating in this way, the table slowly begins to obtain accurate measures of the expected future reward for a given action in a given state. Below is a Python walkthrough of the Q-Table algorithm implemented in the FrozenLake environment: (Thanks to Praneet D for finding the optimal hyperparameters for this approach) Now, you may be thinking: tables are great, but they don’t really scale, do they? While it is easy to have a 16x4 table for a simple grid world, the number of possible states in any modern game or real-world environment is nearly infinitely larger. For most interesting problems, tables simply don’t work. We instead need some way to take a description of our state, and produce Q-values for actions without a table: that is where neural networks come in. By acting as a function approximator, we can take any number of possible states that can be represented as a vector and learn to map them to Q-values. In the case of the FrozenLake example, we will be using a one-layer network which takes the state encoded in a one-hot vector (1x16), and produces a vector of 4 Q-values, one for each action. Such a simple network acts kind of like a glorified table, with the network weights serving as the old cells. The key difference is that we can easily expand the Tensorflow network with added layers, activation functions, and different input types, whereas all that is impossible with a regular table. The method of updating is a little different as well. Instead of directly updating our table, with a network we will be using backpropagation and a loss function. Our loss function will be sum-of-squares loss, where the difference between the current predicted Q-values, and the “target” value is computed and the gradients passed through the network. In this case, our Q-target for the chosen action is the equivalent to the Q-value computed in equation 1 above. Below is the Tensorflow walkthrough of implementing our simple Q-Network: While the network learns to solve the FrozenLake problem, it turns out it doesn’t do so quite as efficiently as the Q-Table. While neural networks allow for greater flexibility, they do so at the cost of stability when it comes to Q-Learning. 
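For reference, the tabular update described above fits in a few lines. This is a minimal sketch rather than the walkthrough notebook embedded in the original post: the hyperparameters are not the tuned values credited to Praneet D, and it assumes the older gym API in which reset() returns the state and step() returns a 4-tuple (the environment id may be FrozenLake-v1 in recent gym releases).

```python
# Minimal tabular Q-learning sketch for FrozenLake (illustrative hyperparameters,
# older gym API assumed: reset() -> state, step() -> (state, reward, done, info)).
import numpy as np
import gym

env = gym.make("FrozenLake-v0")                      # 16 states x 4 actions
Q = np.zeros((env.observation_space.n, env.action_space.n))
lr, gamma, num_episodes = 0.8, 0.95, 2000

for i in range(num_episodes):
    s = env.reset()
    done = False
    while not done:
        # Greedy action plus noise that decays over episodes (simple exploration).
        a = np.argmax(Q[s, :] + np.random.randn(1, env.action_space.n) * (1.0 / (i + 1)))
        s1, r, done, _ = env.step(a)
        # Bellman update: Q(s,a) <- Q(s,a) + lr * (r + gamma * max_a' Q(s1,a') - Q(s,a))
        Q[s, a] += lr * (r + gamma * np.max(Q[s1, :]) - Q[s, a])
        s = s1
```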
There are a number of possible extensions to our simple Q-Network which allow for greater performance and more robust learning. Two tricks in particular are referred to as Experience Replay and Freezing Target Networks. Those improvements and other tweaks were the key to getting Atari-playing Deep Q-Networks, and we will be exploring those additions in the future. For more info on the theory behind Q-Learning, see this great post by Tambet Matiisen. I hope this tutorial has been helpful for those curious about how to implement simple Q-Learning algorithms! If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated! If you’d like to follow my work on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on Twitter @awjliani. More from my Simple Reinforcement Learning with Tensorflow series: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @Unity3D & Cognitive Neuroscience PhD student. Exploring frontier technology through the lens of artificial intelligence, data science, and the shape of things to come " Andrej Karpathy,9.2K,7,https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b?source=tag_archive---------1----------------,Yes you should understand backprop – Andrej Karpathy – Medium,"When we offered CS231n (Deep Learning class) at Stanford, we intentionally designed the programming assignments to include explicit calculations involved in backpropagation on the lowest level. The students had to implement the forward and the backward pass of each layer in raw numpy. Inevitably, some students complained on the class message boards: This is seemingly a perfectly sensible appeal - if you’re never going to write backward passes once the class is over, why practice writing them? Are we just torturing the students for our own amusement? Some easy answers could make arguments along the lines of “it’s worth knowing what’s under the hood as an intellectual curiosity”, or perhaps “you might want to improve on the core algorithm later”, but there is a much stronger and practical argument, which I wanted to devote a whole post to: > The problem with Backpropagation is that it is a leaky abstraction. In other words, it is easy to fall into the trap of abstracting away the learning process — believing that you can simply stack arbitrary layers together and backprop will “magically make them work” on your data. So lets look at a few explicit examples where this is not the case in quite unintuitive ways. We’re starting off easy here. At one point it was fashionable to use sigmoid (or tanh) non-linearities in the fully connected layers. The tricky part people might not realize until they think about the backward pass is that if you are sloppy with the weight initialization or data preprocessing these non-linearities can “saturate” and entirely stop learning — your training loss will be flat and refuse to go down. For example, a fully connected layer with sigmoid non-linearity computes (using raw numpy): If your weight matrix W is initialized too large, the output of the matrix multiply could have a very large range (e.g. numbers between -400 and 400), which will make all outputs in the vector z almost binary: either 1 or 0. 
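As a rough numpy illustration of that saturation effect (the layer sizes and the deliberately oversized initialization are arbitrary choices for the demonstration, not a recommended setup):

```python
# Deliberately bad initialization to show sigmoid saturation in the forward pass.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.RandomState(0)
x = rng.randn(128, 500)                  # a batch of inputs
W = rng.randn(500, 500)                  # too large: pre-activations span a huge range
z = sigmoid(x.dot(W))                    # outputs end up pinned near 0 or 1

print("fraction of saturated units:", np.mean((z < 0.01) | (z > 0.99)))
```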
But if that is the case, z*(1-z), which is local gradient of the sigmoid non-linearity, will in both cases become zero (“vanish”), making the gradient for both x and W be zero. The rest of the backward pass will come out all zero from this point on due to multiplication in the chain rule. Another non-obvious fun fact about sigmoid is that its local gradient (z*(1-z)) achieves a maximum at 0.25, when z = 0.5. That means that every time the gradient signal flows through a sigmoid gate, its magnitude always diminishes by one quarter (or more). If you’re using basic SGD, this would make the lower layers of a network train much slower than the higher ones. TLDR: if you’re using sigmoids or tanh non-linearities in your network and you understand backpropagation you should always be nervous about making sure that the initialization doesn’t cause them to be fully saturated. See a longer explanation in this CS231n lecture video. Another fun non-linearity is the ReLU, which thresholds neurons at zero from below. The forward and backward pass for a fully connected layer that uses ReLU would at the core include: If you stare at this for a while you’ll see that if a neuron gets clamped to zero in the forward pass (i.e. z=0, it doesn’t “fire”), then its weights will get zero gradient. This can lead to what is called the “dead ReLU” problem, where if a ReLU neuron is unfortunately initialized such that it never fires, or if a neuron’s weights ever get knocked off with a large update during training into this regime, then this neuron will remain permanently dead. It’s like permanent, irrecoverable brain damage. Sometimes you can forward the entire training set through a trained network and find that a large fraction (e.g. 40%) of your neurons were zero the entire time. TLDR: If you understand backpropagation and your network has ReLUs, you’re always nervous about dead ReLUs. These are neurons that never turn on for any example in your entire training set, and will remain permanently dead. Neurons can also die during training, usually as a symptom of aggressive learning rates. See a longer explanation in CS231n lecture video. Vanilla RNNs feature another good example of unintuitive effects of backpropagation. I’ll copy paste a slide from CS231n that has a simplified RNN that does not take any input x, and only computes the recurrence on the hidden state (equivalently, the input x could always be zero): This RNN is unrolled for T time steps. When you stare at what the backward pass is doing, you’ll see that the gradient signal going backwards in time through all the hidden states is always being multiplied by the same matrix (the recurrence matrix Whh), interspersed with non-linearity backprop. What happens when you take one number a and start multiplying it by some other number b (i.e. a*b*b*b*b*b*b...)? This sequence either goes to zero if |b| < 1, or explodes to infinity when |b|>1. The same thing happens in the backward pass of an RNN, except b is a matrix and not just a number, so we have to reason about its largest eigenvalue instead. TLDR: If you understand backpropagation and you’re using RNNs you are nervous about having to do gradient clipping, or you prefer to use an LSTM. See a longer explanation in this CS231n lecture video. Lets look at one more — the one that actually inspired this post. 
Yesterday I was browsing for a Deep Q Learning implementation in TensorFlow (to see how others deal with computing the numpy equivalent of Q[:, a], where a is an integer vector — turns out this trivial operation is not supported in TF). Anyway, I searched “dqn tensorflow”, clicked the first link, and found the core code. Here is an excerpt: If you’re familiar with DQN, you can see that there is the target_q_t, which is just [reward * \gamma \argmax_a Q(s’,a)], and then there is q_acted, which is Q(s,a) of the action that was taken. The authors here subtract the two into variable delta, which they then want to minimize on line 295 with the L2 loss with tf.reduce_mean(tf.square()). So far so good. The problem is on line 291. The authors are trying to be robust to outliers, so if the delta is too large, they clip it with tf.clip_by_value. This is well-intentioned and looks sensible from the perspective of the forward pass, but it introduces a major bug if you think about the backward pass. The clip_by_value function has a local gradient of zero outside of the range min_delta to max_delta, so whenever the delta is above min/max_delta, the gradient becomes exactly zero during backprop. The authors are clipping the raw Q delta, when they are likely trying to clip the gradient for added robustness. In that case the correct thing to do is to use the Huber loss in place of tf.square: It’s a bit gross in TensorFlow because all we want to do is clip the gradient if it is above a threshold, but since we can’t meddle with the gradients directly we have to do it in this round-about way of defining the Huber loss. In Torch this would be much more simple. I submitted an issue on the DQN repo and this was promptly fixed. Backpropagation is a leaky abstraction; it is a credit assignment scheme with non-trivial consequences. If you try to ignore how it works under the hood because “TensorFlow automagically makes my networks learn”, you will not be ready to wrestle with the dangers it presents, and you will be much less effective at building and debugging neural networks. The good news is that backpropagation is not that difficult to understand, if presented properly. I have relatively strong feelings on this topic because it seems to me that 95% of backpropagation materials out there present it all wrong, filling pages with mechanical math. Instead, I would recommend the CS231n lecture on backprop which emphasizes intuition (yay for shameless self-advertising). And if you can spare the time, as a bonus, work through the CS231n assignments, which get you to write backprop manually and help you solidify your understanding. That’s it for now! I hope you’ll be much more suspicious of backpropagation going forward and think carefully through what the backward pass is doing. Also, I’m aware that this post has (unintentionally!) turned into several CS231n ads. Apologies for that :) From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Director of AI at Tesla. Previously Research Scientist at OpenAI and PhD student at Stanford. I like to train deep neural nets on large datasets. 
" Arthur Juliani,3.5K,8,https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-8-asynchronous-actor-critic-agents-a3c-c88f72a5e9f2?source=tag_archive---------2----------------,Simple Reinforcement Learning with Tensorflow Part 8: Asynchronous Actor-Critic Agents (A3C),"In this article I want to provide a tutorial on implementing the Asynchronous Advantage Actor-Critic (A3C) algorithm in Tensorflow. We will use it to solve a simple challenge in a 3D Doom environment! With the holidays right around the corner, this will be my final post for the year, and I hope it will serve as a culmination of all the previous topics in the series. If you haven’t yet, or are new to Deep Learning and Reinforcement Learning, I suggest checking out the earlier entries in the series before going through this post in order to understand all the building blocks which will be utilized here. If you have been following the series: thank you! I have learned so much about RL in the past year, and am happy to have shared it with everyone through this article series. So what is A3C? The A3C algorithm was released by Google’s DeepMind group earlier this year, and it made a splash by... essentially obsoleting DQN. It was faster, simpler, more robust, and able to achieve much better scores on the standard battery of Deep RL tasks. On top of all that it could work in continuous as well as discrete action spaces. Given this, it has become the go-to Deep RL algorithm for new challenging problems with complex state and action spaces. In fact, OpenAI just released a version of A3C as their “universal starter agent” for working with their new (and very diverse) set of Universe environments. Asynchronous Advantage Actor-Critic is quite a mouthful. Let’s start by unpacking the name, and from there, begin to unpack the mechanics of the algorithm itself. Asynchronous: Unlike DQN, where a single agent represented by a single neural network interacts with a single environment, A3C utilizes multiple incarnations of the above in order to learn more efficiently. In A3C there is a global network, and multiple worker agents which each have their own set of network parameters. Each of these agents interacts with it’s own copy of the environment at the same time as the other agents are interacting with their environments. The reason this works better than having a single agent (beyond the speedup of getting more work done), is that the experience of each agent is independent of the experience of the others. In this way the overall experience available for training becomes more diverse. Actor-Critic: So far this series has focused on value-iteration methods such as Q-learning, or policy-iteration methods such as Policy Gradient. Actor-Critic combines the benefits of both approaches. In the case of A3C, our network will estimate both a value function V(s) (how good a certain state is to be in) and a policy π(s) (a set of action probability outputs). These will each be separate fully-connected layers sitting at the top of the network. Critically, the agent uses the value estimate (the critic) to update the policy (the actor) more intelligently than traditional policy gradient methods. Advantage: If we think back to our implementation of Policy Gradient, the update rule used the discounted returns from a set of experiences in order to tell the agent which of its actions were “good” and which were “bad.” The network was then updated in order to encourage and discourage actions appropriately. 
The insight of using advantage estimates rather than just discounted returns is to allow the agent to determine not just how good its actions were, but how much better they turned out to be than expected. Intuitively, this allows the algorithm to focus on where the network’s predictions were lacking. If you recall from the Dueling Q-Network architecture, the advantage function is as follow: Since we won’t be determining the Q values directly in A3C, we can use the discounted returns (R) as an estimate of Q(s,a) to allow us to generate an estimate of the advantage. In this tutorial, we will go even further, and utilize a slightly different version of advantage estimation with lower variance referred to as Generalized Advantage Estimation. In the process of building this implementation of the A3C algorithm, I used as reference the quality implementations by DennyBritz and OpenAI. Both of which I highly recommend if you’d like to see alternatives to my code here. Each section embedded here is taken out of context for instructional purposes, and won’t run on its own. To view and run the full, functional A3C implementation, see my Github repository. The general outline of the code architecture is: The A3C algorithm begins by constructing the global network. This network will consist of convolutional layers to process spatial dependencies, followed by an LSTM layer to process temporal dependencies, and finally, value and policy output layers. Below is example code for establishing the network graph itself. Next, a set of worker agents, each with their own network and environment are created. Each of these workers are run on a separate processor thread, so there should be no more workers than there are threads on your CPU. ~ From here we go asynchronous ~ Each worker begins by setting its network parameters to those of the global network. We can do this by constructing a Tensorflow op which sets each variable in the local worker network to the equivalent variable value in the global network. Each worker then interacts with its own copy of the environment and collects experience. Each keeps a list of experience tuples (observation, action, reward, done, value) that is constantly added to from interactions with the environment. Once the worker’s experience history is large enough, we use it to determine discounted return and advantage, and use those to calculate value and policy losses. We also calculate an entropy (H) of the policy. This corresponds to the spread of action probabilities. If the policy outputs actions with relatively similar probabilities, then entropy will be high, but if the policy suggests a single action with a large probability then entropy will be low. We use the entropy as a means of improving exploration, by encouraging the model to be conservative regarding its sureness of the correct action. A worker then uses these losses to obtain gradients with respect to its network parameters. Each of these gradients are typically clipped in order to prevent overly-large parameter updates which can destabilize the policy. A worker then uses the gradients to update the global network parameters. In this way, the global network is constantly being updated by each of the agents, as they interact with their environment. Once a successful update is made to the global network, the whole process repeats! The worker then resets its own network parameters to those of the global network, and the process begins again. To view the full and functional code, see the Github repository here. 
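To make the rollout bookkeeping above concrete, here is a compact numpy sketch of the quantities each worker computes before taking gradients: discounted returns, generalized advantage estimates, and the value, policy and entropy terms. It is a sketch of the computation described in the text, not the TensorFlow code from the linked repository, and the 0.5 and 0.01 weightings are illustrative choices.

```python
# Per-rollout quantities described above: discounted returns, GAE advantages,
# and the value / policy / entropy loss terms. Coefficients are illustrative.
import numpy as np

def discount(x, gamma):
    out, running = np.zeros(len(x)), 0.0
    for t in reversed(range(len(x))):
        running = x[t] + gamma * running
        out[t] = running
    return out

def a3c_losses(rewards, values, bootstrap_value, logprobs_taken, action_probs,
               gamma=0.99, lam=1.0):
    values_plus = np.append(values, bootstrap_value)
    returns = discount(np.append(rewards, bootstrap_value), gamma)[:-1]
    deltas = rewards + gamma * values_plus[1:] - values_plus[:-1]
    advantages = discount(deltas, gamma * lam)          # generalized advantage estimate
    value_loss = 0.5 * np.sum((returns - values) ** 2)
    entropy = -np.sum(action_probs * np.log(action_probs + 1e-8))
    policy_loss = -np.sum(logprobs_taken * advantages) - 0.01 * entropy
    return value_loss, policy_loss
```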
The robustness of A3C allows us to tackle a new generation of reinforcement learning challenges, one of which is 3D environments! We have come a long way from multi-armed bandits and grid-worlds, and in this tutorial, I have set up the code to allow for playing through the first VizDoom challenge. VizDoom is a system to allow for RL research using the classic Doom game engine. The maintainers of VizDoom recently created a pip package, so installing it is as simple as: pip install vizdoom Once it is installed, we will be using the basic.wad environment, which is provided in the Github repository, and needs to be placed in the working directory. The challenge consists of controlling an avatar from a first person perspective in a single square room. There is a single enemy on the opposite side of the room, which appears in a random location each episode. The agent can only move to the left or right, and fire a gun. The goal is to shoot the enemy as quickly as possible using as few bullets as possible. The agent has 300 time steps per episode to shoot the enemy. Shooting the enemy yields a reward of 1, and each time step as well as each shot yields a small penalty. After about 500 episodes per worker agent, the network learns a policy to quickly solve the challenge. Feel free to adjust parameters such as learning rate, clipping magnitude, update frequency, etc. to attempt to achieve ever greater performance or utilize A3C in your own RL tasks. I hope this tutorial has been helpful to those new to A3C and asynchronous reinforcement learning! Now go forth and build AIs. (There are a lot of moving parts in A3C, so if you discover a bug, or find a better way to do something, please don’t hesitate to bring it up here or in the Github. I am more than happy to incorporate changes and feedback to improve the algorithm.) If you’d like to follow my writing on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on twitter @awjuliani. If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated! More from my Simple Reinforcement Learning with Tensorflow series: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @Unity3D & Cognitive Neuroscience PhD student. Exploring frontier technology through the lens of artificial intelligence, data science, and the shape of things to come " Rohan Kapur,1K,30,https://ayearofai.com/rohan-lenny-1-neural-networks-the-backpropagation-algorithm-explained-abf4609d4f9d?source=tag_archive---------3----------------,"Rohan & Lenny #1: Neural Networks & The Backpropagation Algorithm, Explained","In Rohan’s last post, he talked about evaluating and plugging holes in his knowledge of machine learning thus far. The backpropagation algorithm — the process of training a neural network — was a glaring one for both of us in particular. Together, we embarked on mastering backprop through some great online lectures from professors at MIT & Stanford. After attempting a few programming implementations and hand solutions, we felt equipped to write an article for AYOAI — together. Today, we’ll do our best to explain backpropagation and neural networks from the beginning. 
If you have an elementary understanding of differential calculus and perhaps an intuition of what machine learning is, we hope you come out of this blog post with an (acute, but existent nonetheless) understanding of neural networks and how to train them. Let us know if we succeeded! Let’s start off with a quick introduction to the concept of neural networks. Fundamentally, neural networks are nothing more than really good function approximators — you give a trained network an input vector, it performs a series of operations, and it produces an output vector. To train our network to estimate an unknown function, we give it a collection of data points — which we denote the “training set” — that the network will learn from and generalize on to make future inferences. Neural networks are structured as a series of layers, each composed of one or more neurons (as depicted above). Each neuron produces an output, or activation, based on the outputs of the previous layer and a set of weights. When using a neural network to approximate a function, the data is forwarded through the network layer-by-layer until it reaches the final layer. The final layer’s activations are the predictions that the network actually makes. All this probably seems kind of magical, but it actually works. The key is finding the right set of weights for all of the connections to make the right decisions (this happens in a process known as training) — and that’s what most of this post is going to be about. When we’re training the network, it’s often convenient to have some metric of how good or bad we’re doing; we call this metric the cost function. Generally speaking, the cost function looks at the function the network has inferred and uses it to estimate values for the data points in our training set. The discrepancies between the outputs in the estimations and the training set data points are the principle values for our cost function. When training our network, the goal will be to get the value of this cost function as low as possible (we’ll see how to do that in just a bit, but for now, just focus on the intuition of what a cost function is and what it’s good for). Generally speaking, the cost function should be more or less convex, like so: In reality, it’s impossible for any network or cost function to be truly convex. However, as we’ll soon see, local minima may not be a big deal, as long as there is still a general trend for us to follow to get to the bottom. Also, notice that the cost function is parameterized by our network’s weights — we control our loss function by changing the weights. One last thing to keep in mind about the loss function is that it doesn’t just have to capture how correctly your network estimates — it can specify any objective that needs to be optimized. For example, you generally want to penalize larger weights, as they could lead to overfitting. If this is the case, simply adding a regularization term to your cost function that expresses how big your weights will mean that, in the process of training your network, it will look for a solution that has the best estimates possible while preventing overfitting. Now, let’s take a look at how we can actually minimize the cost function during the training process to find a set of weights that work the best for our objective. Now that we’ve developed a metric for “scoring” our network (which we’ll denote as J(W)), we need to find the weights that will make that score as low as possible. 
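Before moving on, here is a concrete example of what such a J(W) might look like: a data term measuring the discrepancy between the network's estimates and the training targets, plus an optional penalty on large weights. The squared-error form and the lambda coefficient are illustrative choices, not the only possibility.

```python
# One possible cost function J(W): squared-error data term plus a weight penalty.
import numpy as np

def cost(predictions, targets, weights, lam=1e-3):
    data_term = 0.5 * np.mean((predictions - targets) ** 2)
    regularization = lam * sum(np.sum(W ** 2) for W in weights)   # discourages overfitting
    return data_term + regularization
```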
If you think back to your pre-calculus days, your first instinct might be to set the derivative of the cost function to zero and solve, which would give us the locations of every minimum/maximum in the function. Unfortunately, there are a few problems with this approach: Especially as the size of networks begins to scale up, solving for the weights directly becomes increasingly infeasible. Instead, we look at a different class of algorithms, called iterative optimization algorithms, that progressively work their way towards the optimal solution. The most basic of these algorithms is gradient descent. Recall that our cost function will be essentially convex, and we want to get as close as possible to the global minimum. Instead of solving for it analytically, gradient descent follows the derivatives to essentially “roll” down the slope until it finds its way to the center. Let’s take the example of a single-weight neural network, whose cost function is depicted below. We start off by initializing our weight randomly, which puts us at the red dot on the diagram above. Taking the derivative, we see the slope at this point is a pretty big positive number. We want to move closer to the center — so naturally, we should take a pretty big step in the opposite direction of the slope. If we repeat the process enough, we soon find ourselves nearly at the bottom of our curve and much closer to the optimal weight configuration for our network. More formally, gradient descent looks something like this: Let’s dissect. Every time we want to update our weights, we subtract the derivative of the cost function w.r.t. the weight itself, scaled by a learning rate , and — that’s it! You’ll see that as it gets closer and closer to the center, the derivative term gets smaller and smaller, converging to zero as it approaches the solution. The same process applies with networks that have tens, hundreds, thousands, or more parameters — compute the gradient of the cost function w.r.t. each of the weights, and update each of your weights accordingly. I do want to say a few more words on the learning rate, because it’s one of the more important hyperparameters (“settings” for your neural network) that you have control over. If the learning rate is too high, it could jump too far in the other direction, and you never get to the minimum you’re searching for. Set it too low, and your network will take ages to find the right weights, or it will get stuck in a local minimum. There’s no “magic number” to use when it comes to a learning rate, and it’s usually best to try several and pick the one that works the best for your individual network and dataset. In practice, many choose to anneal the learning rate over time — it starts out high, because it’s furthest from the solution, and decays as it gets closer. But as it turns out, gradient descent is kind of slow. Really slow, actually. Earlier I used the analogy of the weights “rolling” down the gradient to get to the bottom, but that doesn’t actually make any sense — it should pick up speed as it gets to the bottom, not slow down! Another iterative optimization algorithm, known as momentum, does just that. As the weights begin to “roll” down the slope, they pick up speed. When they get closer to the solution, the momentum that they picked up carries them closer to the optima while gradient descent would simply stop. As a result, training with momentum updates is both faster and can provide better results. 
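In code, the two update rules read as below. This is only a sketch: grad stands for whatever computes the derivative of the cost w.r.t. the weight (backpropagation, later in this post), the hyperparameter values are illustrative, and the momentum rule is the one spelled out in the next paragraph.

```python
# Plain gradient descent step vs. the momentum step described next.
alpha, mu = 0.01, 0.9        # learning rate and momentum decay (illustrative values)

def gradient_descent_step(W, grad):
    return W - alpha * grad                  # step against the slope

def momentum_step(W, V, grad):
    V = mu * V - alpha * grad                # accumulate decayed velocity
    return W + V, V                          # move in the direction of the velocity
```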
Here’s what the update rule looks like for momentum: As we train, we accumulate a “velocity” value V. At each training step, we update V with the gradient at the current position (once again scaled by the learning rate). Also notice that, with each time step, we decay velocity V by a factor mu (usually somewhere around .9), so that over time we lose momentum instead of bouncing around by the minimum forever. We then update our weight in the direction of the velocity, and repeat the process again. Over the first few training iterations, V will grow as our weights “pick up speed” and take successively bigger leaps. As we approach the minimum, our velocity stops accumulating as quickly, and eventually begins to decay, until we’ve essentially reached the minimum. An important thing to note is that we accumulate a velocity independently for each weight — just because one weight is changing particularly clearly doesn’t mean any of the other weights need to be. There are lots of other iterative optimization algorithms that are commonly used with neural networks, but I won’t go into all of them here (if you’re curious, some of the more popular ones include Adagrad and Adam). The basic principle remains the same throughout — gradually update the weights to get them closer to the minimum. But regardless of which optimization algorithm you use, we still need to be able to compute the gradient of the cost function w.r.t. each weight. But our cost function isn’t a simple parabola anymore — it’s a complicated, many-dimensional function with countless local optima that we need to watch out for. That’s where backpropagation comes in. The backpropagation algorithm was a major milestone in machine learning because, before it was discovered, optimization methods were extremely unsatisfactory. One popular method was to perturb (adjust) the weights in a random, uninformed direction (ie. increase or decrease) and see if the performance of the ANN increased. If it did not, one would attempt to either a) go in the other direction b) reduce the perturbation size or c) a combination of both. Another attempt was to use Genetic Algorithms (which became popular in AI at the same time) to evolve a high-performance neural network. In both cases, without (analytically) being informed on the correct direction, results and efficiency were suboptimal. This is where the backpropagation algorithm comes into play. Recall that, for any given supervised machine learning problem, we (aim to) select weights that provide the optimal estimation of a function that models our training data. In other words, we want to find a set of weights W that minimizes on the output of J(W). We discussed the gradient descent algorithm — one where we update each weight by some negative, scalar reduction of the error derivative with respect to that weight. If we do choose to use gradient descent (or almost any other convex optimization algorithm), we need to find said derivatives in numerical form. For other machine learning algorithms like logistic regression or linear regression, computing the derivatives is an elementary application of differentiation. This is because the outputs of these models are just the inputs multiplied by some chosen weights, and at most fed through a single activation function (the sigmoid function in logistic regression). The same, however, cannot be said for neural networks. 
To demonstrate this, here is a diagram of a double-layered neural network: As you can see, each neuron is a function of the previous one connected to it. In other words, if one were to change the value of w1, both “hidden 1” and “hidden 2” (and ultimately the output) neurons would change. Because of this notion of functional dependencies, we can mathematically formulate the output as an extensive composite function: And thus: Here, the output is a composite function of the weights, inputs, and activation function(s). It is important to realize that the hidden units/nodes are simply intermediary computations that, in actuality, can be reduced down to computations of the input layer. If we were to then take the derivative of said function with respect to some arbitrary weight (for example w1), we would iteratively apply the chain rule (which I’m sure you all remember from your calculus classes). The result would look similar to the following: Now, let’s attach a black box to the tail of our neural network. This black box will compute and return the error — using the cost function — from our output: All we’ve done is add another functional dependency; our error is now a function of the output and hence a function of the input, weights, and activation function. If we were to compute the derivative of the error with any arbitrary weight (again, we’ll choose w1), the result would be: Each of these derivatives can be simplified once we choose an activation and error function, such that the entire result would represent a numerical value. At that point, any abstraction has been removed, and the error derivative can be used in gradient descent (as discussed earlier) to iteratively improve upon the weight. We compute the error derivatives w.r.t. every other weight in the network and apply gradient descent in the same way. This is backpropagation — simply the computation of derivatives that are fed to a convex optimization algorithm. We call it “backpropagation” because it almost seems as if we are traversing from the output error to the weights, taking iterative steps using chain the rule until we “reach” our weight. When I first truly understood the backprop algorithm (just a couple of weeks ago), I was taken aback by how simple it was. Sure, the actual arithmetic/computations can be difficult, but this process is handled by our computers. In reality, backpropagation is just a rather tedious (but again, for a generalized implementation, computers will handle this) application of the chain rule. Since neural networks are convoluted multilayer machine learning model structures (at least relative to other ones), each weight “contributes” to the overall error in a more complex manner, and hence the actual derivatives require a lot of effort to produce. However, once we get past the calculus, backpropagation of neural nets is equivalent to typical gradient descent for logistic/linear regression. Thus far, I’ve walked through a very abstract form of backprop for a simple neural network. However, it is unlikely that you will ever use a single-layered ANN in applications. So, now, let’s make our black boxes — the activation and error functions — more concrete such that we can perform backprop on a multilayer neural net. Recall that our error function J(W) will compute the “error” of our neural network based on the output predictions it produces vs. the correct a priori outputs we know in our training set. 
More formally, if we denote our predicted output estimations as vector p, and our actual output as vector a, then we can use: This is just one example of a possible cost function (the log-likelihood is also a popular one), and we use it because of its mathematical convenience (this is a notion one will frequently encounter in machine learning): the squared expression exaggerates poor solutions and ensures each discrepancy is positive. It will soon become clear why we multiply the expression by half. The derivative of the error w.r.t. the output was the first term in the error w.r.t. weight derivative expression we formulated earlier. Let’s now compute it! Our result is simply our predictions take away our actual outputs. Now, let’s move on to the activation function. The activation function used depends on the context of the neural network. If we aren’t in a classification context, ReLU (Rectified Linear Unit, which is zero if input is negative, and the identity function when the input is positive) is commonly used today. If we’re in a classification context (that is, predicting on a discrete state with a probability ie. if an email is spam), we can use the sigmoid or tanh (hyperbolic tangent) function such that we can “squeeze” any value into the range 0 to 1. These are used instead of a typical step function because their “smoothness” properties allows for the derivatives to be non-zero. The derivative of the step function before and after the origin is zero. This will pose issues when we try to update our weights (nothing much will happen!). Now, let’s say we’re in a classification context and we choose to use the sigmoid function, which is of the following equation: As per usual, we’ll compute the derivative using differentiation rules as: EDIT: On the 2nd line, the denominator should be raised to +2, not -2. Thanks to a reader for pointing this out. Sidenote: ReLU activation functions are also commonly used in classification contexts. There are downsides to using the sigmoid function — particularly the “vanishing gradient” problem — which you can read more about here. The sigmoid function is mathematically convenient (there it is again!) because we can represent its derivative in terms of the output of the function. Isn’t that cool‽ We are now in a good place to perform backpropagation on a multilayer neural network. Let me introduce you to the net we are going to work with: This net is still not as complex as one you may use in your programming, but its architecture allows us to nevertheless get a good grasp on backprop. In this net, we have 3 input neurons and one output neuron. There are four layers in total: one input, one output, and two hidden layers. There are 3 neurons in each hidden layer, too (which, by the way, need not be the case). The network is fully connected; there are no missing connections. Each neuron/node (save the inputs, which are usually pre-processed anyways) is an activity; it is the weighted sum of the previous neurons’ activities applied to the sigmoid activation function. To perform backprop by hand, we need to introduce the different variables/states at each point (layer-wise) in the neural network: It is important to note that every variable you see here is a generalization on the entire layer at that point. For example, when I say x_i, I am referring to the input to any input neuron (arbitrary value of i). I chose to place it in the middle of the layer for visibility purposes, but that does not mean that x_i refers to the middle neuron. 
I’ll demonstrate and discuss the implications of this later on. x refers to the input layer, y refers to hidden layer 1, z refers to hidden layer 2, and p refers to the prediction/output layer (which fits in nicely with the notation used in our cost function). If a variable has the subscript i, it means that the variable is the input to the relevant neuron at that layer. If a variable has the subscript j, it means that the variable is the output of the relevant neuron at that layer. For example, x_i refers to any input value we enter into the network. x_j is actually equal to x_i, but this is only because we choose not to use an activation function — or rather, we use the identity activation function — in the input layer’s activities. We only include these two separate variables to retain consistency. y_i is the input to any neuron in the first hidden layer; it is the weighted sum of all previous neurons (each neuron in the input layer multiplied by the corresponding connecting weights). y_j is the output of any neuron at the hidden layer, so it is equal to activation_function(y_i) = sigmoid(y_i) = sigmoid(weighted_sum_of_x_j). We can apply the same logic for z and p. Ultimately, p_j is the sigmoid output of p_i and hence is the output of the entire neural network that we pass to the error/cost function. The weights are organized into three separate variables: W1, W2, and W3. Each W is a matrix (if you are not comfortable with Linear Algebra, think of a 2D array) of all the weights at the given layer. For example, W1 are the weights that connect the input layer to the hidden layer 1. Wlayer_ij refers to any arbitrary, single weight at a given layer. To get an intuition of ij (which is really i, j), Wlayer_i are all the weights that connect arbitrary neuron i at a given layer to the next layer. Wlayer_ij (adding the j component) is the weight that connects arbitrary neuron i at a given layer to an arbitrary neuron j at the next layer. Essentially, Wlayer is a vector of Wlayer_is, which is a vector of real-valued Wlayer_ijs. NOTE: Please note that the i’s and j’s in the weights and other variables are completely different. These indices do not correspond in any way. In fact, for x/y/z/p, i and j do not represent tensor indices at all, they simply represent the input and output of a neuron. Wlayer_ij represents an arbitrary weight at an index in a weight matrix, and x_j/y_j/z_j/p_j represent an arbitrary input/output point of a neuron unit. That last part about weights was tedious! It’s crucial to understand how we’re separating the neural network here, especially the notion of generalizing on an entire layer, before moving forward. To acquire a comprehensive intuition of backpropagation, we’re going to backprop this neural net as discussed before. More specifically, we’re going to find the derivative of the error w.r.t. an arbitrary weight in the input layer (W1_ij). We could find the derivative of the error w.r.t. an arbitrary weight in the first or second hidden layer, but let’s go as far back as we can; the more backprop, the better! So, mathematically, we are trying to obtain (to perform our iterative optimization algorithm with): We can express this graphically/visually, using the same principles as earlier (chain rule), like so: In two layers, we have three red lines pointing in three different directions, instead of just one. This is a reinforcement of (and why it is important to understand) the fact that variable j is just a generalization/represents any point in the layer. 
So, when we differentiate p_i with respect to the layer before that, there are three different weights, as I hope you can see, in W3_ij that contribute to the value p_i. There also happen to be three weights in W3 in total, but this isn’t the case for the layers before; it is only the case because layer p has one neuron — the output — in it. We stop backprop at the input layer and so we just point to the single weight we are looking for. Wonderful! Now let’s work out all this great stuff mathematically. Immediately, we know: We have already established the left hand side, so now we just need to use the chain rule to simplify it further. The derivative of the error w.r.t. the weight can be written as the derivative of the error w.r.t. the output prediction multiplied by the derivative of the output prediction w.r.t. the weight. At this point, we’ve traversed one red line back. We know this because is reducible to a numerical value. Specifically, the derivative of the error w.r.t. the output prediction is: Hence: Going one more layer backwards, we can determine that: In other words, the derivative of the output prediction w.r.t. the weight is the derivative of the output w.r.t. the input to the output layer (p_i) multiplied by the derivative of that value w.r.t. the weight. This represents our second red line. We can solve the first term like so: This corresponds with the derivative of the sigmoid function we solved earlier, which was equal to the output multiplied by one minus the output. In this case, p_j is the output of the sigmoid function. Now, we have: Let’s move on to the third red line(s). This one is interesting because we begin to “spread” out. Since there are multiple different weights that contribute to the value of p_i, we need to take into account their individual “pull” factors into our derivative: If you’re a mathematician, this notation may irk you slightly; sorry if that’s the case! In computer science, we tend to stray from the notion of completely legal mathematical expressions. This is yet again again another reason why it’s key to understand the role of layer generalization; z_j here is not just referring to the middle neuron, it’s referring to an arbitrary neuron. The actual value of j in the summation is not changing (it’s not even an index or a value in the first place), and we don’t really consider it. It’s less of a mathematical expression and more of a statement that we will iterate through each generalized neuron z_j and use it. In other words, we iterate over the derivative terms and sum them up using z_1, z_2, and z_3. Before, we could write p_j as any single value because the output layer just contains one node; there is just one p_j. But we see here that this is no longer the case. We have multiple z_j values, and p_i is functionally dependent on each of these z_j values. So, when we traverse from p_j to the preceding layer, we need to consider each contribution from layer z to p_j separately and add them up to create one total contribution. There’s no upper bound to the summation; we just assume that we start at zero and end at our maximum value for the number of neurons in the layer. Please again note that the same changes are not reflected in W1_ij, where j refers to an entirely different thing. Instead, we’re just stating that we will use the different z_j neurons in layer z. 
Since p_i is a summation of each weight multiplied by each z_j (weighted sum), if we were to take the derivative of p_i with any arbitrary z_j, the result would be the connecting weight since said weight would be the coefficient of the term (derivative of m*x w.r.t. x is just m): W3_ij is loosely defined here. ij still refers to any arbitrary weight — where ij are still separate from the j used in p_i/z_j — but again, as computer scientists and not mathematicians, we need not be pedantic about the legality and intricacy of expressions; we just need an intuition of what the expressions imply/mean. It’s almost a succinct form of psuedo-code! So, even though this defines an arbitrary weight, we know it means the connecting weight. We can also see this from the diagram: when we walk from p_j to an arbitrary z_j, we walk along the connecting weight. So now, we have: At this point, I like to continue playing the “reduction test”. The reduction test states that, if we can further simplify a derivative term, we still have more backprop to do. Since we can’t yet quite put the derivative of z_j w.r.t. W1_ij into a numerical term, let’s keep going (and fast-forward a bit). Using chain rule, we follow the fourth line back to determine that: Since z_j is the sigmoid of z_i, we use the same logic as the previous layer and apply the sigmoid derivative. The derivative of z_i w.r.t. W1_ij, demonstrated by the fifth line(s) back, requires the same idea of “spreading out” and summation of contributions: Briefly, since z_i is the weighted sum of each y_j in y, we sum over the derivatives which, similar to before, simplifies to the relevant connecting weights in the preceding layer (W2 in this case). We’re almost there, let’s go further; there’s still more reduction to do: We have, of course, another sigmoid activation function to deal with. This is the sixth red line. Notice, now, that we have just one line remaining. In fact, our last derivative term here passes (or rather, fails) the reduction test! The last line traverses from the input at y_i to x_j, walking along W1_ij. Wait a second — is this not what we are attempting to backprop to? Yes, it is! Since we are, for the first time, directly deriving y_i w.r.t. the weight W1_ij, we can think of the coefficient of W1_ij as being x_j in our weighted sum (instead of the vice versa as used previously). Hence, the simplification follows: Of course, since each x_j in layer x contributes to the weighted sum y_i, we sum over the effects. And that’s it! We can’t reduce any further from here. Now, let’s tie all these individual expressions together: EDIT: The denominator on the left hand side should say dW1ij instead of “layer”. With no more partial derivative terms left, our work is complete! This gives us the derivative of the error w.r.t. any arbitrary weight in the input layer/W1. That was a lot of work — maybe now we can sympathize with the poor computers! Something you should notice is that values such as p_j, a, z_j, y_j, x_j etc. are the values of the network at the different points. But where do they come from? Actually, we would need to perform a feed-forward of the neural network first and then capture these variables. Our task is to now perform Gradient Descent to train the neural net: We perform gradient descent on each weight in each layer. Notice that the resulting gradient should change each time because the weight itself changes, (and as a result, the performance and output of the entire net should change) even if it’s a small perturbation. 
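Before turning to the training loop, it is worth noting that the whole derivation can be checked numerically. The sketch below uses the same notation (x, y, z, p with sigmoid activations and weights W1, W2, W3, and the squared-error cost) on the 3-3-3-1 architecture from the diagram, assembles dE/dW1 from the chain-rule pieces above, and compares one entry against a finite-difference estimate; the random seed and target value are arbitrary.

```python
# Numerical check of the hand-derived dE/dW1 on the 3-3-3-1 network above.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.RandomState(1)
x_j = rng.randn(3)                              # input layer outputs
W1, W2, W3 = rng.randn(3, 3), rng.randn(3, 3), rng.randn(3, 1)
a = np.array([1.0])                             # known target output

def forward(W1):
    y_j = sigmoid(x_j.dot(W1))                  # hidden layer 1
    z_j = sigmoid(y_j.dot(W2))                  # hidden layer 2
    p_j = sigmoid(z_j.dot(W3))                  # output prediction
    return y_j, z_j, p_j

def cost(W1):
    _, _, p_j = forward(W1)
    return 0.5 * np.sum((p_j - a) ** 2)

# Analytic gradient assembled from the same chain-rule pieces as the derivation.
y_j, z_j, p_j = forward(W1)
dE_dp_i = (p_j - a) * p_j * (1 - p_j)           # through the output sigmoid
dE_dz_i = dE_dp_i.dot(W3.T) * z_j * (1 - z_j)   # spread back across W3
dE_dy_i = dE_dz_i.dot(W2.T) * y_j * (1 - y_j)   # spread back across W2
dE_dW1 = np.outer(x_j, dE_dy_i)                 # coefficient of W1_ij is x_j

# Finite-difference check of a single entry (perturb the weight directly).
eps, i, j = 1e-6, 0, 1
W1_plus, W1_minus = W1.copy(), W1.copy()
W1_plus[i, j] += eps
W1_minus[i, j] -= eps
numerical = (cost(W1_plus) - cost(W1_minus)) / (2 * eps)
print(dE_dW1[i, j], numerical)                  # the two values should agree closely
```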
This means that, at each update, we need to do a feed-forward of the neural net. Not just once before, but once each iteration. These are then the steps to train an entire neural network: It’s important to note that one must not initialize the weights to zero, similar to what may be done in other machine learning algorithms. If weights are initialized to zero, after each update, the outgoing weights of each neuron will be identical, because the gradients will be identical (which can be proved). Because of this, the proceeding hidden units will remain the same value and will continue to follow each other. Ultimately, this means that our training will become extremely constrained (due to the “symmetry”), and we won’t be able to build interesting functions. Also, neural networks may get stuck at local optima (places where the gradient is zero but are not the global minima), so random weight initialization allows one to hopefully have a chance of circumventing that by starting at many different random values. 3. Perform one feed-forward using the training data 4. Perform backpropagation to get the error derivatives w.r.t. each and every weight in the neural network 5. Perform gradient descent to update each weight by the negative scalar reduction (w.r.t. some learning rate alpha) of the respective error derivative. Increment the number of iterations. 6. If we have converged (in reality, though, we just stop when we have reached the number of maximum iterations) training is complete. Else, repeat starting at step 3. If we initialize our weights randomly (and not to zero) and then perform gradient descent with derivatives computed from backpropagation, we should expect to train a neural network in no time! I hope this example brought clarity to how backprop works and the intuition behind it. If you didn’t understand the intricacies of the example but understand and appreciate the concept of backprop as a whole, you’re still in a great place! Next we’ll go ahead and explain backprop code that works on any generalized architecture of a neural network using the ReLU activation function. Now that we’ve developed the math and intuition behind backpropagation, let’s try to implement it. We’ll divide our implementation into three distinct steps: Let’s start off by defining what the API we’re implementing looks like. We’ll define our network as a series of Layer instances that our data passes through — this means that instead of modeling each individual neuron, we group neurons from a single layer together. This makes it a bit easier to reason about larger networks, but also makes the actual computations faster (as we’ll see shortly). Also — we’re going to write the code in Python. Each layer will have the following API: (This isn’t great API design — ideally, we would decouple the backprop and weight update from the rest of the object, so the specific algorithm we use for updating weights isn’t tied to the layer itself. But that’s not the point, so we’ll stick with this design for the purposes of explaining how backpropagation works in a real-life scenario. Also: we’ll be using numpy throughout the implementation. It’s an awesome tool for mathematical operations in Python (especially tensor based ones), but we don’t have the time to get into how it works — if you want a good introduction, here ya’ go.) We can start by implementing the weight initialization. As it turns out, how you initialize your weights is actually kind of a big deal for both network performance and convergence rates. 
Here’s how we’ll initialize our weights: This initializes a weight matrix of the appropriate dimensions with random values sampled from a normal distribution. We then scale it by the square root of 2/self.size_in, giving us a variance of 2/self.size_in (derivation here). And that’s all we need for layer initialization! Let’s move on to implementing our first objective — feed-forward. This is actually pretty simple — a dot product of our input activations with the weight matrix, followed by our activation function, will give us the activations we need. The dot product part should make intuitive sense; if it doesn’t, you should sit down and try to work through it on a piece of paper. This is where the performance gain of grouping neurons into layers comes from: instead of keeping an individual weight vector for each neuron and performing a series of vector dot products, we can just do a single matrix operation (which, thanks to the wonders of modern processors, is significantly faster). In fact, we can compute all of the activations from a layer in just two lines: Simple enough. Let’s move on to backpropagation. This one’s a bit more involved. First, we compute the derivative of the output w.r.t. the weights, then the derivative of the cost w.r.t. the output, and then apply the chain rule to get the derivative of the cost w.r.t. the weights. Let’s start with the first part — the derivative of the output w.r.t. the weights. That should be simple enough; because you’re multiplying the weight by the corresponding input activation, the derivative will just be the corresponding input activation. Except, because we’re using the ReLU activation function, the weights have no effect if the corresponding output is < 0 (it gets capped at zero anyway). This should take care of that hiccup: (More formally, you’re multiplying by the derivative of the activation function, which is 0 when the activation is < 0 and 1 elsewhere.) Let’s take a brief detour to talk about the out_grad parameter that our backward method gets. Let’s say we have a network with two layers: the first has m neurons, and the second has n. Each of the m neurons produces an activation, and each of the n neurons looks at each of the m activations. The out_grad parameter is an m x n matrix describing how each of the m activations affects each of the n neurons it feeds into. Now, we need the derivative of the cost w.r.t. each of the outputs — which is essentially the out_grad parameter we’re given! We just need to sum up each row of the matrix, as per the backpropagation formula. Finally, we end up with something like this: Now, we need to compute the derivative w.r.t. our inputs to pass along to the next layer. We can apply a similar chain rule — the derivative of the output w.r.t. the inputs times the derivative of the cost w.r.t. the outputs. And that’s it for the backpropagation step. The final step is the weight update. Assuming we’re sticking with gradient descent for this example, this can be a simple one-liner: To actually train our network, we take one of our training samples and call forward on each layer consecutively, passing the output of the previous layer as the input of the following layer. We compute dJ, passing that as the out_grad parameter to the last layer’s backward method. We then call backward on each of the layers in reverse order, this time passing the result from the later layer as out_grad to the layer before it. Finally, we call update on each of our layers and repeat.
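The snippets referenced in this section (the initialization, the two-line forward pass, the backward pass, and the one-line update) were embedded as code gists and do not appear in this text-only copy. The sketch below is a minimal reconstruction consistent with the description: the Layer name and the forward/backward/update methods come from the API described above, but the method bodies, the learning rate, the squared-error cost, and the single-sample training loop at the end are my own filling-in. For brevity, out_grad is taken here as an already-summed vector rather than the m x n matrix described in the detour.

import numpy as np

class Layer:
    """A fully connected ReLU layer, grouping all of a layer's neurons into one weight matrix."""
    def __init__(self, size_in, size_out, learning_rate=0.01):
        self.size_in = size_in
        self.size_out = size_out
        self.learning_rate = learning_rate
        # Random normal init, scaled by sqrt(2 / size_in) so the weights have variance 2 / size_in.
        self.weights = np.random.randn(size_out, size_in) * np.sqrt(2.0 / size_in)

    def forward(self, inputs):
        # A single matrix-vector product plus the ReLU activation: the whole layer in two lines.
        self.inputs = inputs
        self.outputs = np.maximum(0.0, self.weights @ inputs)
        return self.outputs

    def backward(self, out_grad):
        # out_grad: derivative of the cost w.r.t. this layer's outputs
        # (already summed over the downstream connections).
        # ReLU derivative: 1 where the output was positive, 0 where it was capped at zero.
        delta = out_grad * (self.outputs > 0)
        # Derivative of the cost w.r.t. the weights: outer product with the input activations.
        self.weight_grad = np.outer(delta, self.inputs)
        # Derivative of the cost w.r.t. the inputs, to pass along to the layer below.
        return self.weights.T @ delta

    def update(self):
        # Plain gradient descent: the "simple one-liner".
        self.weights -= self.learning_rate * self.weight_grad


# One training step on a single sample, using a squared-error cost chosen for this sketch.
layers = [Layer(4, 8), Layer(8, 3)]
x, target = np.random.randn(4), np.array([1.0, 0.0, 0.0])

activation = x
for layer in layers:                      # forward pass through every layer in order
    activation = layer.forward(activation)

grad = activation - target                # dJ for J = 0.5 * ||output - target||^2
for layer in reversed(layers):            # backward pass in reverse order
    grad = layer.backward(grad)

for layer in layers:                      # apply the gradient descent update
    layer.update()

Keeping an entire layer in a single weight matrix is what makes forward a single matrix product, which is exactly the performance point made earlier about grouping neurons into layers.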
There’s one last detail that we should include, which is the concept of a bias (akin to that of a constant term in any given equation). Notice that, with our current implementation, the activation of a neuron is determined solely based on the activations of the previous layer. There’s no bias term that can shift the activation up or down independent of the inputs. A bias term isn’t strictly necessary — in fact, if you train your network as-is, it would probably still work fine. But if you do need a bias term, the code stays almost the same — the only difference is that you need to add a column of 1s to the incoming activations, and update your weight matrix accordingly, so one of your weights gets treated as a bias term. The only other difference is that, when returning cost_wrt_inputs, you can cut out the first row — nobody cares about the gradients associated with the bias term because the previous layer has no say in the activation of the bias neuron. Implementing backpropagation can be kind of tricky, so it’s often a good idea to check your implementation. You can do so by computing the gradient numerically (by literally perturbing the weight and calculating the difference in your cost function) and comparing it to your backpropagation-computed gradient. This gradient check doesn’t need to be run once you’ve verified your implementation, but it could save a lot of time tracking down potential problems with your network. Nowadays, you often don’t even need to implement a neural network on your own, as libraries such as Caffe, Torch, or TensorFlow will have implementations ready to go. That being said, it’s often a good idea to try implementing it on your own to get a better grasp of how everything works under the hood. Intrigued? Looking to learn more about neural networks? Here are some great online classes to get you started: Stanford’s CS231n. Although it’s technically about convolutional neural networks, the class provides an excellent introduction to and survey of neural networks in general. Class videos, notes, and assignments are all posted here, and if you have the patience for it I would strongly recommend walking through the assignments so you can really get to know what you’re learning. MIT 6.034. This class, taught by Prof. Patrick Henry Winston, explores many different algorithms and disciplines in Artificial Intelligence. There’s a great lecture on backprop that I actually used as a stepping stone to getting setup writing this article. I also learned genetic algorithms from Prof. Winston — he’s a great teacher! We hope that, if you visited this article without knowing how the backpropagation algorithm works, you are reading this with an (at least rudimentary) mathematical or conceptual intuition of it. Writing and conveying such a complex algorithm to a supposed beginner has proven to be an extremely difficult task for us, but it’s helped us truly understand what we’ve been learning about. With greater knowledge in a fundamental area of machine learning, we are now excited to take a look at new, interesting algorithms and disciplines in the field. We are looking forward to continue documenting these endeavors together. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. rohankapur.com Our ongoing effort to make the mathematics, science, linguistics, and philosophy of artificial intelligence fun and simple. 
" Per Harald Borgen,1.3K,7,https://medium.com/learning-new-stuff/how-to-learn-neural-networks-758b78f2736e?source=tag_archive---------4----------------,Learning How To Code Neural Networks – Learning New Stuff – Medium,"This is the second post in a series of me trying to learn something new over a short period of time. The first time consisted of learning how to do machine learning in a week. This time I’ve tried to learn neural networks. While I didn’t manage to do it within a week, due to various reasons, I did get a basic understanding of it throughout the summer and autumn of 2015. By basic understanding, I mean that I finally know how to code simple neural networks from scratch on my own. In this post, I’ll give a few explanations and guide you to the resources I’ve used, in case you’re interested in doing this yourself. So what is a neural network? Let’s wait with the network part and start off with one single neuron. The circle below illustrates an artificial neuron. Its input is 5 and its output is 1. The input is the sum of the three synapses connecting to the neuron (the three arrows at the left). At the far left we see two input values plus a bias value. The input values are 1 and 0 (the green numbers), while the bias holds a value of -2 (the brown number). The two inputs are then multiplied by their so called weights, which are 7 and 3 (the blue numbers). Finally we add it up with the bias and end up with a number, in this case: 5 (the red number). This is the input for our artificial neuron. The neuron then performs some kind of computation on this number — in our case the Sigmoid function, and then spits out an output. This happens to be 1, as Sigmoid of 5 equals to 1, if we round the number up (more info on the Sigmoid function follows later). If you connect a network of these neurons together, you have a neural network, which propagates forward — from input output, via neurons which are connected to each other through synapses, like on the image to the left. I can strongly recommend the Welch Labs videos on YouTube for getting a better intuitive explanation of this process. After you’ve seen the Welch Labs videos, its a good idea to spend some time watching Week 4 of the Coursera’s Machine Learning course, which covers neural networks, as it’ll give you more intuition of how they work. The course is fairly mathematical, and its based around Octave, while I prefer Python. Because of this, I did not do the programming exercises. Instead, I used the videos to help me understand what I needed to learn. The first thing I realized I needed to investigate further was the Sigmoid function, as this seemed to be a critical part of many neural networks. I knew a little bit about the function, as it was also covered in Week 3 of the same course. So I went back and watched these videos again. But watching videos won’t get you all the way. To really understand it, I felt I needed to code it from the ground up. So I started to code a logistic regression algorithm from scratch (which happened to use the Sigmoid function). It took a whole day, and it’s probably not a very good implementation of logistic regression. But that doesn’t matter, as I finally understood how it works. Check the code here. You don’t need to perform this entire exercise yourself, as it requires some knowledge about and cost functions and gradient descent, which you might not have at this point. But make sure you understand how the Sigmoid function works. 
Understanding how a neural network works from input to output isn’t that difficult, at least conceptually. More difficult, though, is understanding how the neural network actually learns from looking at a set of data samples. The concept is called backpropagation: adjusting the network’s weights based on how wrong its output was. The weights were the blue numbers on our neuron in the beginning of the article. This process happens backwards, because you start at the end of the network (observe how wrong the network’s ‘guess’ is), and then move backwards through the network, adjusting the weights on the way, until you finally reach the inputs. To calculate this by hand requires some calculus, as it involves getting derivatives with respect to the network’s weights. The Khan Academy calculus courses seem like a good way to start, though I haven’t used them myself, as I took calculus at university. The three best sources I found for understanding backpropagation are these: You should definitely code along while you’re reading the articles, especially the first two. It’ll give you some sample code to look back at when you’re confused in the future. Plus, I can’t really emphasize this enough: The third article is also fantastic, but I’ve used it more as a wiki than a plain tutorial, as it’s actually an entire book. It contains thorough explanations of all the important concepts in neural networks. These articles will also help you understand important concepts such as cost functions and gradient descent, which play equally important roles in neural networks. In some articles and tutorials you’ll actually end up coding small neural networks. As soon as you’re comfortable with that, I recommend going all in on this strategy. It’s both fun and an extremely effective way of learning. One of the articles I also learned a lot from was A Neural Network in 11 Lines Of Python by IAmTrask. It contains an extraordinary amount of compressed knowledge and concepts in just 11 lines. After you’ve coded along with this example, you should do as the article states at the bottom, which is to implement it once again without looking at the tutorial. This forces you to really understand the concepts, and will likely reveal holes in your knowledge, which isn’t fun. However, when you finally manage it, you’ll feel like you’ve just acquired a new superpower. When you’ve done this, you can continue with this Wild ML tutorial, by Denny Britz, which guides you through a slightly more robust neural network. At this point, you could either try to code your own neural network from scratch or start playing around with some of the networks you have coded up already. It’s great fun to find a dataset that interests you and try to make some predictions with your neural nets. To get a hold of a dataset, just visit my side project Datasets.co (← shameless self promotion) and find one you like. Anyway, the point is that you’re now better off experimenting with stuff that interests you rather than following my advice. Personally, I’m currently learning how to use Python libraries that make it easier to code up neural networks, like Theano, Lasagne and nolearn. I’m using these to do challenges on Kaggle, which is both great fun and great learning. Good luck! And don’t forget to press the heart button if you liked the article :) Thanks for reading! My name is Per, I’m a co-founder of Scrimba — a better way to teach and learn code. If you’ve read this far, I’d recommend you check out this demo! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story.
Co-founder of Scrimba, the next-generation platform for teaching and learning code. https://scrimba.com. A publication about improving your technical skills. " Shi Yan,4.4K,7,https://medium.com/mlreview/understanding-lstm-and-its-diagrams-37e2f46f1714?source=tag_archive---------5----------------,Understanding LSTM and its diagrams – ML Review – Medium,"I just want to reiterate what’s said here: I’m not better at explaining LSTM, I want to write this down as a way to remember it myself. I think the above blog post written by Christopher Olah is the best LSTM material you would find. Please visit the original link if you want to learn LSTM. (But I did create some nice diagrams.) Although we don’t know how brain functions yet, we have the feeling that it must have a logic unit and a memory unit. We make decisions by reasoning and by experience. So do computers, we have the logic units, CPUs and GPUs and we also have memories. But when you look at a neural network, it functions like a black box. You feed in some inputs from one side, you receive some outputs from the other side. The decision it makes is mostly based on the current inputs. I think it’s unfair to say that neural network has no memory at all. After all, those learnt weights are some kind of memory of the training data. But this memory is more static. Sometimes we want to remember an input for later use. There are many examples of such a situation, such as the stock market. To make a good investment judgement, we have to at least look at the stock data from a time window. The naive way to let neural network accept a time series data is connecting several neural networks together. Each of the neural networks handles one time step. Instead of feeding the data at each individual time step, you provide data at all time steps within a window, or a context, to the neural network. A lot of times, you need to process data that has periodic patterns. As a silly example, suppose you want to predict christmas tree sales. This is a very seasonal thing and likely to peak only once a year. So a good strategy to predict christmas tree sale is looking at the data from exactly a year back. For this kind of problems, you either need to have a big context to include ancient data points, or you have a good memory. You know what data is valuable to remember for later use and what needs to be forgotten when it is useless. Theoretically the naively connected neural network, so called recurrent neural network, can work. But in practice, it suffers from two problems: vanishing gradient and exploding gradient, which make it unusable. Then later, LSTM (long short term memory) was invented to solve this issue by explicitly introducing a memory unit, called the cell into the network. This is the diagram of a LSTM building block. At a first sight, this looks intimidating. Let’s ignore the internals, but only look at the inputs and outputs of the unit. The network takes three inputs. X_t is the input of the current time step. h_t-1 is the output from the previous LSTM unit and C_t-1 is the “memory” of the previous unit, which I think is the most important input. As for outputs, h_t is the output of the current network. C_t is the memory of the current unit. Therefore, this single unit makes decision by considering the current input, previous output and previous memory. And it generates a new output and alters its memory. The way its internal memory C_t changes is pretty similar to piping water through a pipe. Assuming the memory is water, it flows into a pipe. 
You want to change this memory flow along the way, and this change is controlled by two valves. The first valve is called the forget valve. If you shut it, no old memory will be kept. If you fully open this valve, all old memory will pass through. The second valve is the new memory valve. New memory will come in through a T-shaped joint like above and merge with the old memory. Exactly how much new memory should come in is controlled by the second valve. On the LSTM diagram, the top “pipe” is the memory pipe. The input is the old memory (a vector). The first cross ✖ it passes through is the forget valve. It is actually an element-wise multiplication operation. So if you multiply the old memory C_t-1 with a vector that is close to 0, that means you want to forget most of the old memory. You let all of the old memory pass through if your forget valve equals 1. Then the second operation the memory flow will go through is this + operator. This operator means element-wise summation. It resembles the T-shaped joint pipe. New memory and the old memory are merged by this operation. How much new memory should be added to the old memory is controlled by another valve, the ✖ below the + sign. After these two operations, you have the old memory C_t-1 changed to the new memory C_t. Now let’s look at the valves. The first one is called the forget valve. It is controlled by a simple one-layer neural network. The inputs of this neural network are h_t-1, the output of the previous LSTM block, X_t, the input for the current LSTM block, C_t-1, the memory of the previous block, and finally a bias vector b_0. This neural network has a sigmoid function as its activation, and its output vector is the forget valve, which will be applied to the old memory C_t-1 by element-wise multiplication. The second valve is called the new memory valve. Again, it is a simple one-layer neural network that takes the same inputs as the forget valve. This valve controls how much the new memory should influence the old memory. The new memory itself, however, is generated by another neural network. It is also a one-layer network, but it uses tanh as the activation function. The output of this network will be element-wise multiplied by the new memory valve and added to the old memory to form the new memory. These two ✖ signs are the forget valve and the new memory valve. And finally, we need to generate the output for this LSTM unit. This step has an output valve that is controlled by the new memory, the previous output h_t-1, the input X_t and a bias vector. This valve controls how much new memory should be output to the next LSTM unit. The above diagram is inspired by Christopher’s blog post. But most of the time, you will see a diagram like the one below. The major difference between the two variations is that the following diagram doesn’t treat the memory unit C as an input to the unit. Instead, it treats it as an internal thing, the “Cell”. I like Christopher’s diagram, in that it explicitly shows how this memory C gets passed from the previous unit to the next. But in the following image, you can’t easily see that C_t-1 is actually from the previous unit, or that C_t is part of the output. The second reason I don’t like the following diagram is that the computation you perform within the unit should be ordered, but you can’t see that clearly from the diagram. For example, to calculate the output of this unit, you need to have C_t, the new memory, ready. Therefore, the first step should be evaluating C_t.
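Since the diagrams themselves don't survive in this text-only copy, it may help to see the same computation written as equations. The following is the standard formulation in the spirit of Christopher Olah's post; note that the valves described above also take C_t-1 as one of their inputs (a "peephole" variant), which these equations leave out:

f_t = \sigma\left(W_f [h_{t-1}, x_t] + b_f\right)            % forget valve
i_t = \sigma\left(W_i [h_{t-1}, x_t] + b_i\right)            % new memory valve
\tilde{C}_t = \tanh\left(W_C [h_{t-1}, x_t] + b_C\right)     % the new memory itself
C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t              % forget, then merge (both element-wise)
o_t = \sigma\left(W_o [h_{t-1}, x_t] + b_o\right)            % output valve
h_t = o_t \odot \tanh\left(C_t\right)                        % output of the unit

Reading the equations top to bottom also makes the ordering explicit: C_t must be computed before h_t can be.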
The following diagram tries to represent this “delay” or “order” with dash lines and solid lines (there are errors in this picture). Dash lines means the old memory, which is available at the beginning. Some solid lines means the new memory. Operations require the new memory have to wait until C_t is available. But these two diagrams are essentially the same. Here, I want to use the same symbols and colors of the first diagram to redraw the above diagram: This is the forget gate (valve) that shuts the old memory: This is the new memory valve and the new memory: These are the two valves and the element-wise summation to merge the old memory and the new memory to form C_t (in green, flows back to the big “Cell”): This is the output valve and output of the LSTM unit: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Software engineer & wantrepreneur. Interested in computer graphics, bitcoin and deep learning. Highlights from Machine Learning Research, Projects and Learning Materials. From and For ML Scientists, Engineers an Enthusiasts. " Ross Goodwin,686,23,https://medium.com/artists-and-machine-intelligence/adventures-in-narrated-reality-6516ff395ba3?source=tag_archive---------6----------------,Adventures in Narrated Reality – Artists and Machine Intelligence – Medium,"By Ross Goodwin In May 2015, Stanford PhD student Andrej Karpathy wrote a blog post entitled The Unreasonable Effectiveness of Recurrent Neural Networks and released a code repository called Char-RNN. Both received quite a lot of attention from the machine learning community in the months that followed, spurring commentary and a number of response posts from other researchers. I remember reading these posts early last summer. Initially, I was somewhat underwhelmed—as at least one commentator pointed out, much of the generated text that Karpathy chose to highlight did not seem much better than results one might expect from high order character-level Markov chains. Here is a snippet of Karpathy’s Char-RNN generated Shakespeare: And here is a snippet of generated Shakespeare from a high order character-level Markov chain, via the post linked above: So I was discouraged. And without access to affordable GPUs for training recurrent neural networks, I continued to experiment with Markov chains, generative grammars, template systems, and other ML-free solutions for generating text. In December, New York University was kind enough to grant me access to their High Performance Computing facilities. I began to train my own recurrent neural networks using Karpathy’s code, and I finally discovered the quasi-magical capacities of these machines. Since then, I have been training a collection of recurrent neural network models for my thesis project at NYU, and exploring possibilities for devices that could enable such models to serve as expressive real-time narrators in our everyday lives. At this point, since this is my very first Medium post, perhaps I should introduce myself: my name is Ross Goodwin, I’m a graduate student at NYU ITP in my final semester, and computational creative writing is my personal obsession. Before I began my studies at ITP, I was a political ghostwriter. I graduated from MIT in 2009 with a B.S. degree in Economics, and during my undergraduate years I had worked on Barack Obama’s 2008 Presidential campaign. At the time, I wanted to be a political speechwriter, and my first job after graduation was a Presidential Writer position at the White House. 
In this role, I wrote Presidential Proclamations, which are statements of national days, weeks, and months of things—everything from Thanksgiving and African American History Month to lesser known observances like Safe Boating Week. It was a very strange job, but I thoroughly enjoyed it. I left the White House in 2010 for a position at the U.S. Department of the Treasury, where I worked for two years, mostly putting together briefing binders for then-Secretary Timothy Geithner and Deputy Secretary Neal Wolin in the Department’s front office. I didn’t get many speechwriting opportunities, and pursuing a future in the financial world did not appeal to me, so I left to work as a freelance ghostwriter. This was a rather dark time in my life, as I rapidly found myself writing for a variety of unsavory clients and causes in order to pay my rent every month. In completing these assignments, I began to integrate algorithms into my writing process to improve my productivity. (At the time, I didn’t think about these techniques as algorithmic, but it’s obvious in retrospect.) For example, if I had to write 12 letters, I’d write them in a spreadsheet with a paragraph in each cell. Each letter would exist in a column, and I would write across the rows—first I’d write all the first paragraphs as one group, then all the second paragraphs, then all the thirds, and so on. If I had to write a similar group of letters the next day for the same client, I would use an Excel macro to randomly shuffle the cells, then edit the paragraphs for cohesion and turn the results in as an entirely new batch of letters. Writing this way, I found I could complete an 8-hour day of work in about 2 hours. I used the rest of my time to work on a novel that’s still not finished (but that’s a story for another time). With help from some friends, I turned the technique into a game we called The Diagonalization Argument after Georg Cantor’s 1891 mathematical proof of the same name. In early 2014, a client asked me to write reviews of all the guides available online to learn the Python programming language. One guide stood out above all others, in the sheer number of times I saw users reference it on various online forums and in the countless glowing reviews it had earned across the Internet: Learn Python the Hard Way by Zed Shaw So, to make my reviews better, I decided I might as well try to learn Python. My past attempts at learning to code had failed due to lack of commitment, lack of interest, or lack of a good project to get started. But this time was different somehow—Zed’s guide worked for me, and just like that I found myself completely and hopelessly addicted to programming. As a writer, I gravitated immediately to the broad and expanding world of natural language processing and generation. My first few projects were simple poetry generators. And once I moved to New York City and started ITP, I discovered a local community of likeminded individuals leveraging computation to produce and enhance textual work. I hosted a Code Poetry Slam in November 2014 and began attending Todd Anderson’s monthly WordHack events at Babycastles. In early 2015, I developed and launched word.camera, a web app and set of physical devices that use the Clarifai API to tag images with nouns, ConceptNet to find related words, and a template system to string the results together into descriptive (though often bizarre) prose poems related to the captured photographs. 
The project was about redefining the photographic experience, and it earned more attention than I expected [1,2,3]. In November, I was invited to exhibit this work at IDFA DocLab in Amsterdam. At that point, it became obvious that word.camera (or some extension thereof) would become my ITP thesis project. And while searching for ways to improve its output, I began to experiment with training my own neural networks rather than using those others had trained via APIs. As I mentioned above, I started using NYU’s High Performance Computing facilities in December. This supercomputing cluster includes a staggering array of computational resources — in particular, at least 32 Nvidia Tesla K80 GPUs, each with 24 GB of GPU memory. While GPUs aren’t strictly required to train deep neural networks, the massively parallel processes involved make them all but a necessity for training a larger model that will perform well in a reasonable amount of time. Using two of Andrej Karpathy’s repositories, NeuralTalk2 and Char-RNN respectively, I trained an image captioning model and a number of models for generating text. As a result of having free access to the largest GPUs in the world, I was able to start training very large models right away. NeuralTalk2 uses a convolutional neural network to classify images, then transfers that classification data to a recurrent neural network that generates a brief caption. For my first attempt at training a NeuralTalk2 model, I wanted to do something less traditional than simply captioning images. In my opinion, the idea of machine “image captioning” is problematic because it’s so limited in scope. Fundamentally, a machine that can caption images is a machine that can describe or relate to what it sees in a highly intelligent way. I do understand that image captioning is an important benchmark for machine intelligence. However, I also believe that thinking such a machine’s primary use case will be to replace human image captioning represents a highly restrictive and narrow point of view. So I tried training a model on frames and corresponding captions from every episode of the TV show The X-Files. My idea was to create a model that, if given an image, would generate a plausible line of dialogue from what it saw. Unfortunately, it simply did not work—most likely due to the dialogue for a particular scene bearing no direct relationship to that scene’s imagery. Rather than generating a different line of dialogue for different images, the machine seemed to want to assign the same line to every image indiscriminately. Strangely, these repetitive lines tended to say things like I don’t know, I’m not sure what you want, and I don’t know what to do. (One of my faculty advisors, Patrick Hebron, jokingly suggested this may be a sign of metacognition—needless to say, I was slightly creeped out but excited to continue these explorations.) I tried two other less-than-traditional approaches with NeuralTalk2: training on Reddit image posts and corresponding comments, and training on pictures of recreational drugs and corresponding Erowid experience reports. Both worked better than my X-Files experiment, but neither produced particularly interesting results. So I resigned myself to training a traditional image captioning model using the Microsoft Common Objects in Context (MSCOCO) caption set. 
In terms of objects represented, MSCOCO is far from exhaustive, but it does contain over 120,000 images with 5 captions each, which is more than I could’ve expected to produce on my own from any source. Furthermore, I figured I could always do something less traditional with such a model once trained. I made just one adjustment to Karpathy’s default training parameters: decreased the word-frequency threshold from five to three. By default, NeuralTalk2 ignores any word that appears fewer than five times in the caption corpus it trains on. I guessed that reducing this threshold would result in some extra verbosity in the generated captions, possibly at the expense of accuracy, as a more verbose model might describe details that were not actually present in an image. However, after about five days of training, I had produced a model that exceeded 0.9 CIDEr in tests, which is about as good as Karpathy suggested the model could get in his documentation. As opposed to NeuralTalk2, which is designed to caption images, Karpathy’s Char-RNN employs a character-level LSTM recurrent neural network simply for generating text. A recurrent neural network is fundamentally a linear pattern machine. Given a character (or set of characters) as a seed, a Char-RNN model will predict which character would come next based on what it has learned from its input corpus. By doing this again and again, the model can generate text in the same manner as a Markov chain, though its internal processes are far more sophisticated. LSTM stands for Long Short-Term Memory, which remains a popular architecture for recurrent neural networks. Unlike a no-frills vanilla RNN, an LSTM protects its fragile underlying neural net with “gates” that determine which connections will persist in the machine’s weight matrices. (I’ve been told that others are using something called a GRU, but I have yet to investigate this architecture.) I trained my first text generating LSTM on the same prose corpus I used for word.camera’s literary epitaphs. After about 18 hours, I was getting results like this: This paragraph struck me as highly poetic, compared to what I’d seen in the past from a computer. The language wasn’t entirely sensical, but it certainly conjured imagery and employed relatively solid grammar. Furthermore, it was original. Originality has always been important to me in computer generated text—because what good is a generator if it just plagiarizes your input corpus? This is a major issue with high order Markov chains, but due to its more sophisticated internal mechanisms, the LSTM didn’t seem to have the same tendency. Unfortunately, much of the prose-trained model output that contained less poetic language was also less interesting than the passage above. But given that I could produce poetic language with a prose-trained model, I wondered what results I could get from a poetry-trained model. The output above comes from the first model I trained on poetry. I used the most readily available books I could find, mostly those of poets from the 19th century and earlier whose work had entered the public domain. The consistent line breaks and capitalization schemes were encouraging. But I still wasn’t satisfied with the language—due to the predominant age of the corpus, it seemed too ornate and formal. I wanted more modern-sounding poetic language, and so I knew I had to train a model on modern poetry. I assembled a corpus of all the modern poetry books I could find online. 
It wasn’t nearly as easy as assembling the prior corpus—unfortunately, I can’t go into detail on how I got all the books for fear of being sued. The results were much closer to what I was looking for in terms of language. But they were also inconsistent in quality. At the time, I believed this was because the corpus was too small, so I began to supplement my modern poetry corpus with select prose works to increase its size. It remains likely that this was the case. However, I had not yet discovered the seeding techniques I would later learn can dramatically improve LSTM output. Another idea occurred to me: I could seed a poetic language LSTM model with a generated image caption to make a new, more poetic version of word.camera. Some of the initial results (see: left) were striking. I showed them to one of my mentors, Allison Parrish, who suggested that I find a way to integrate the caption throughout the poetic text, rather than just at the beginning. (I had showed her some longer examples, where the language had strayed quite far from the subject matter of the caption after a few lines.) I thought about how to accomplish this, and settled on a technique of seeding the poetic language LSTM multiple times with the same image caption at different temperatures. Temperature is a parameter, a number between zero and one, that controls the riskiness of a recurrent neural network’s character predictions. A low temperature value will result in text that’s repetitive but highly grammatical. Accordingly, high temperature results will be more innovative and surprising (the model may even invent its own words) while containing more mistakes. By iterating through temperature values with the same seed, the subject matter would remain consistent while the language varied, resulting in longer pieces that seemed more cohesive than anything I’d ever produced with a computer. As I refined the aforementioned technique, I trained more LSTM models, attempting to discover the best training parameters. The performance of a neural network model is measured by its loss, which drops during training and eventually should be as close to zero as possible. A model’s loss is a statistical measurement indicating how well a model can predict the character sequences in its own corpus. During training, there are two loss figures to monitor: the training loss, which is defined by how well the model predicts the part of the corpus it’s actually training on, and the validation loss, which is defined by how well the model predicts an unknown validation sample that was removed from the corpus prior to training. The goal of training a model is to reduce its validation loss as much as possible, because we want a model that accurately predicts unknown character sequences, not just those it’s already seen. To this end, there are a number of parameters to adjust, among which are: The training process largely consists of monitoring the validation loss as it drops across model checkpoints, and monitoring the difference between training loss and validation loss. As Karpathy writes in his Char-RNN documentation: In January, I released my code on GitHub along with a set of trained neural network models: an image captioning model and two poetic language LSTM models. In my GitHub README, I highlighted a few results I felt were particularly strong [1,2,3,4,5]. 
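To make the temperature idea above concrete, here is a small sketch of temperature-scaled sampling (my own illustration, not code from Char-RNN, and the character scores are made up). Low temperatures sharpen the distribution toward the most likely character, while a temperature of 1 keeps much more variety:

import numpy as np

def sample_with_temperature(logits, temperature):
    """Sample an index from unnormalized scores after temperature scaling."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # subtract the max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return probs, np.random.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.2]                        # hypothetical scores for three characters
for t in (0.2, 0.7, 1.0):
    probs, _ = sample_with_temperature(logits, t)
    print(t, probs.round(3))   # t=0.2 is nearly deterministic; t=1.0 spreads probability around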
Unlike prior versions of word.camera that mostly relied on a strong connection between the image and the output, I found that I could still enjoy the result when the image caption was totally incorrect, and there often seemed to be some other accidental (or perhaps slightly-less-than-accidental) element connecting the image to the words. I then shifted my focus to developing a new physical prototype. With the prior version of word.camera, I believed one of the most important parts of the experience was its portability. That’s why I developed a mobile web app first, and why I ensured all the physical prototypes I built were fully portable. For the new version, I started with a physical prototype rather than a mobile web application because developing an app initially seemed infeasible due to computational requirements, though I have since thought of some possible solutions. Since this would be a rapid prototype, I decided to use a very small messenger bag as the case rather than fabricating my own. Also, my research suggested that some of Karpathy’s code may not run on the Raspberry Pi’s ARM architecture, so I needed a slightly larger computer that would require a larger power source. I decided to use an Intel NUC that I powered with a backup battery for a laptop. I mounted an ELP wide angle camera to the strap, alongside a set of controls (a rotary potentiometer and a button) that communicated with the main computer via an Arduino. Originally, I planned to dump the text output to a hacked Kindle, but ultimately decided the tactile nature of thermal printer paper would provide for a superior experience (and allow me to hand out the output on the street like I’d done with prior word.camera models). I found a large format thermal printer model with built-in batteries that uses 4""-wide paper (previous printers I’d used had taken paper half as wide), and I was able to pick up a couple of them on eBay for less than $50 each. Based on a suggestion from my friend Anthony Kesich, I decided to add an “ascii image” of the photo above the text. In February, I was invited to speak at an art and machine learning symposium at Gray Area in San Francisco. In Amsterdam at IDFA in November, I had met Jessica Brillhart, who is a VR director on Google’s Cardboard team. In January, I began to collaborate with her and some other folks at Google on Deep Dream VR experiences with automated poetic voiceover. (If you’re unfamiliar with Deep Dream, check out this blog post from last summer along with the related GitHub repo and Wikipedia article.) We demonstrated these experiences at the event, which was also an auction to sell Deep Dream artwork to benefit the Gray Area Foundation. Mike Tyka, an artist whose Deep Dream work was prominently featured in the auction, had asked me to use my poetic language LSTM to generate titles for his artwork. I had a lot of fun doing this, and I thought the titles came out well—they even earned a brief mention in the WIRED article about the show. During my talk the day after the auction, I demonstrated my prototype. I walked onto the stage wearing my messenger bag, snapped a quick photo before I started speaking, and revealed the output at the end. I would have been more nervous about sharing the machine’s poetic output in front of so many people, but the poetry had already passed what was, in my opinion, a more genuine test of its integrity: a small reading at a library in Brooklyn alongside traditional poets. 
Earlier in February, I was invited to share some work at the Leonard Library in Williamsburg. The theme of the evening’s event was love and romance, so I generated several poems [1,2] from images I considered romantic. My reading was met with overwhelming approval from the other poets at the event, one of whom said that the poem I had generated from the iconic Times Square V-J Day kiss photograph by Alfred Eisenstaedt “messed [him] up” as it seemed to contain a plausible description of a flashback from the man’s perspective. I had been worried because, as I once heard Allison Parrish say, so much commentary about computational creative writing focuses on computers replacing humans—but as anyone who has worked with computers and language knows, that perspective (which Allison summarized as “Now they’re even taking the poet’s job!”) is highly uninformed. When we teach computers to write, the computers don’t replace us any more than pianos replace pianists—in a certain way, they become our pens, and we become more than writers. We become writers of writers. Nietzsche, who was the first philosopher to use a typewriter, famously wrote “Our writing tools are also working on our thoughts,” which media theorist Friedrich Kittler analyzes in his book Gramophone, Film, Typewriter (p. 200): If we employ machine intelligence to augment our writing activities, it’s worth asking how such technology would affect how we think about writing as well as how we think in the general sense. I’m inclined to believe that such a transformation would be positive, as it would enable us to reach beyond our native writing capacities and produce work that might better reflect our wordless internal thoughts and notions. (I hesitate to repeat the piano/pianist analogy for fear of stomping out its impact, but I think it applies here too.) In producing fully automated writing machines, I am only attempting to demonstrate what is possible with a machine alone. In my research, I am ultimately striving to produce devices that allow humans to work in concert with machines to produce written work. My ambition is to augment our creativity, not to replace it. Another ambition of mine is to promote a new framework that I’ve been calling Narrated Reality. We already have Virtual Reality (VR) and Augmented Reality (AR), so it only makes sense to provide another option (NR?)—perhaps one that’s less visual and more about supplementing existing experiences with expressive narration. That way, we can enjoy our experiences while we’re having them, then revisit them later in an augmented format. For my ITP thesis, I had originally planned to produce one general-purpose device that used photographs, GPS coordinates (supplemented with Foursquare locations), and the time to narrate everyday experiences. However, after receiving some sage advice from Taeyoon Choi, I have decided to split that project into three devices: a camera, a compass, and a clock that respectively use image, location, and time to realize Narrated Reality. Along with designing and building those devices, I am in the process of training a library of interchangeable LSTM models in order to experience a variety of options with each device in this new space. After training a number of models on fiction and poetry, I decided to try something different: I trained a model on the Oxford English Dictionary. The result was better than I ever could have anticipated: an automated Balderdash player that could generate plausible definitions for made up words. 
I made a Twitter bot so that people could submit their linguistic inventions, and a Tumblr blog for the complete, unabridged definitions. I was amazed by the machine’s ability to take in and parrot back strings of arbitrary characters it had never seen before, and how it often seemed to understand them in the context of actual words. The fictional definitions it created for real words were also frequently entertaining. My favorite of these was its definition for “love”—although a prior version of the model had defined love as “past tense of leave,” which I found equally amusing. One particularly fascinating discovery I made with this bot concerned the importance of a certain seeding technique that Kyle McDonald taught me. As discussed above, when you generate text with a recurrent neural network, you can provide a seed to get the machine started. For example, if you wanted to know the machine’s feelings on the meaning of life, you might seed your LSTM with the following text: And the machine would logically complete your sentence based on the patterns it had absorbed from its training corpus: However, to get better and more consistent results, it makes sense to prepend the seed with a pre-seed (another paragraph of text) to push the LSTM into a desired state. In practice, it’s good to use a high quality sample of output from the model you’re seeding with length approximately equal to the sequence length (see above) you set during training. This means the seed will now look something like this: And the raw output will look like this (though usually I remove the pre-seed when I present the output): The difference was more than apparent when I began using this technique with the dictionary model. Without the pre-seed, the bot would usually fail to repeat an unknown word within its generated definition. With the pre-seed, it would reliably parrot back whatever gibberish it had received. In the end, the Oxford English Dictionary model trained to a significantly lower final validation loss (< 0.75) than any other model I had trained, or have trained since. One commenter on Hacker News noted: After considering what to do next, I decided to try integrating dictionary definitions into the prose and poetry corpora I had been training before. Additionally, another Stanford PhD student named Justin Johnson released a new and improved version of Karpathy’s Char-RNN, Torch-RNN, which promised to use 7x less memory, which would in turn allow for me to train even larger models than I had been training before on the same GPUs. It took me an evening to get Torch-RNN working on NYU’s supercomputing cluster, but once I had it running I was immediately able to start training models four times as large as those I’d trained on before. My initial models had 20–25 million parameters, and now I was training with 80–85 million, with some extra room to increase batch size and sequence length parameters. The results I got from the first model were stunning—the corpus was about 45% poetry, 45% prose, and 10% dictionary definitions, and the output appeared more prose-like while remaining somewhat cohesive and painting vivid imagery. Next, I decided to train a model on Noam Chomsky’s complete works. Most individuals have not produced enough publicly available text (25–100 MB raw text, or 50–200 novels) to train an LSTM this size. Noam Chomsky is an exception, and the corpus of his writing I was able to assemble weighs in at a hefty 41.2 MB. 
(This project was complicated by the fact that I worked for Noam Chomsky as an undergraduate at MIT, but that’s a story for another time.) Here is a sample of the output from that model: Unfortunately, I’ve had trouble making it say anything interesting about language, as it prefers to rattle on and on about the U.S. and Israel and Palestine. Perhaps I’ll have to train the next model on academic papers alone and see what happens. Most recently, I’ve been training machines on movie screenplays, and getting some interesting results. If you train an LSTM on continuous dialogue, you can ask the model questions and receive plausible responses. I promised myself I wouldn’t write more than 5000 words for this article, and I’ve already passed that threshold. So, rather than attempting some sort of eloquent conclusion, I’ll leave you with this brief video. There’s much more to come in the near future. Stay tuned. Edit 6/9/16: Check out Part II! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. not a poet | new forms & interfaces for written language, narrated reality, &c. AMI is a program at Google that brings together artists and engineers to realize projects using Machine Intelligence. Works are developed together alongside artists’ current practices and shown at galleries, biennials, festivals, or online. " Eric Elliott,947,9,https://medium.com/javascript-scene/how-to-build-a-neuron-exploring-ai-in-javascript-pt-1-c2726f1f02b2?source=tag_archive---------7----------------,How to Build a Neuron: Exploring AI in JavaScript Pt 1,"Years ago, I was working on a project that needed to be adaptive. Essentially, the software needed to learn and get better at a frequently repeated task over time. I’d read about neural networks and some early success people had achieved with them, so I decided to try it out myself. That marked the beginning of a life-long fascination with AI. AI is a really big deal. There are a small handful of technologies that will dramatically change the world over the course of the next 25 years. Three of the biggest disruptors rely deeply on AI: Self driving cars alone will disrupt more than 10 million jobs in America, radically improve transportation and shipping efficiency, and may lead to a huge change in car ownership as we outsource transportation and the pains of car ownership and maintenance to apps like Uber. You’ve probably heard about Google’s self driving cars, but Tesla, Mercedes, BMW and other car manufacturers are also making big bets on self driving technology. Regulations, not technology, are the primary obstacles for drone-based commercial services such as Amazon air, and just a few days ago, the FAA relaxed restrictions on commercial drone flights. It’s still not legal for Amazon to deliver packages to your door with drones, but that will soon change, and when that happens, commerce will never be the same. Of course half a million consumer drone sales over the last holiday season implies that drones are going to change a lot more than commerce. Expect to see a lot more of them hovering obnoxiously in every metro area in the world in the coming years. Augmented and virtual reality will fundamentally transform what it means to be human. As our senses are augmented by virtual constructs mixed seamlessly with the real world, we’ll find new ways to work, new ways to play, and new ways to interact with each other, including AR assisted learning, telepresence, and radical new experiences we haven’t dreamed of, yet. 
All of these technologies require our gadgets to have an awareness of the surrounding environment, and the ability to respond behaviorally to environmental inputs. Self driving cars need to see obstacles and make corrections to avoid them. Drones need to detect collision hazards, wind, and the ground to land on. Room scale VR needs to alert you of the room boundaries so you don’t wander into walls, and AR devices need to detect tables, chairs, and desks, and walls, and allow virtual elements and characters to interact with them. Processing sensory inputs and figuring out what they mean is one of the most important jobs that our brain is responsible for. How does the human brain deal with the complexity of that job? With neurons. Taken alone, a single neuron doesn’t do anything particularly interesting, but when combined together, neural networks are responsible for our ability to recognize the world around us, solve problems, and interact with our environment and the people around us. Neural networks are the mechanism that allows us to use language, build tools, catch balls, type, read this article, remember things, and basically do all the things we consider to be “thinking”. Recently, scientists have been scanning sections of small animal brains on the road to whole brain emulation. For example, a molecular-level model of the 302 neurons in the C. elegans roundworm. The blue brain project is an attempt to do the same thing with a human brain. The research uses microscopes to scan slices of living human brain tissue. It’s an ambitious project that is still in its infancy a decade after it launched, but nobody expects it to be finished tomorrow. We are still a long way from whole brain emulation for anything but the simplest organisms, but eventually, we may be able to emulate a whole human brain on a computer at the molecular level. Before we try to emulate even basic neuron functionality ourselves, we should learn more about how neurons work. A neuron is a cell that collects input signals (electrical potentials) from synaptic terminals (typically from dendrites, but sometimes directly on the cell membrane). When those signals sum past a certain threshold potential at the axon hillock trigger zone, it triggers an output signal, called an action potential. The action potential travels along the output nerve fiber, called an axon. The axon splits into collateral branches which can carry the output signal to different parts of the neural network. Each axon branch terminates by splitting into clusters of tiny terminal branches, which interface with other neurons through synapses. Synapse is the word used to describe the transmission mechanism from one neuron to the next. There are two kinds of synapse receptors on the postsynaptic terminal wall: ion channels and metabolic channels. Ion channels are fast (tens of milliseconds), and can either excite or inhibit the potential in the postsynaptic neuron, by opening channels for positively or negatively charged ions to enter the cell, respectively. In an ionotropic transmission, the neurotransmitter is released from the presynaptic neuron into the synaptic cleft — a tiny gap between the terminals of the presynaptic neuron and the postsynaptic neuron. It binds to receptors on the postsynaptic terminal wall, which causes them to open, allowing electrically charged ions to flow into the postsynaptic cell, causing a change to the cell’s potential. Metabolic channels are slower and more controlled than ion channels. 
In chemical transmissions, the action potential triggers the release of chemical transmitters from the presynaptic terminal into the synaptic cleft. Those chemical transmitters bind to metabolic receptors which do not have ion channels of their own. That binding triggers chemical reactions on the inside of the cell wall to release G-proteins which can open ion channels connected to different receptors. As the G-proteins must first diffuse and rebind to neighboring channels, this process naturally takes longer. The duration of metabolic effect can vary from about 100ms to several minutes, depending on how long it takes for neurotransmitters to be absorbed, released, diffused, or recycled back into the presynaptic terminal. Like ion channels, the signal can be either exciting or inhibitory to the postsynaptic neuron potential. There is also another type of synapse, called an electrical synapse. Unlike the chemical synapses described above, which rely on chemical neurotransmitters and receptors at axon terminals, an electrical synapse connects dendrites from one cell directly to dendrites of another cell by a gap junction, which is a channel that allows ions and other small molecules to pass directly between the cells, effectively creating one large neuron with multiple axons. Cells connected by electrical synapses almost always fire simultaneously. When any connected cell fires, all connected cells fire with it. However, some gap junctions are one way. Among other things, electrical synapses connect cells that control muscle groups such as the heart, where it’s important that all related cells cooperate, creating simultaneous muscle contractions. Different synapses can have different strengths (called weights). A synapse weight can change over time through a process known as synaptic plasticity. It is believed that changes in synapse connection strength is how we form memory. In other words, in order to learn and form memories, our brain literally rewires itself. An increase in synaptic weight is called Long Term Potentiation (LTP). A decrease in synaptic weight is called Long Term Depression (LTD). If the postsynaptic neuron tends to fire a lot when the presynaptic neuron fires, the synaptic weight increases. If the cells don’t tend to fire together often, the connection weakens. In other words: The key to synaptic plasticity is hidden in a pair of 20ms windows: If the presynaptic neuron fires before the postsynaptic neuron within 20ms, the weight increases (LTP). If the presynaptic neuron fires after the postsynaptic neuron within 20ms, the weight decreases (LTD). This process is called spike-timing-dependent plasticity. Spike-timing-dependent plasticity was discovered in the 1990’s and is still being explored, but it is believed that action potential backpropagation from the cell’s axon to the dendrites is involved in the LTP process. During a typical forward-propagating event, glutamate will be released from the presynaptic terminal, which binds to AMPA receptors in the postsynaptic terminal wall, allowing positively charged sodium ions (Na+) into the cell. If a large enough depolarization event occurs inside the cell (perhaps a backpropagation potential from the axon trigger point), electrostatic repulsion will open a magnesium block in NMDA receptors, allowing even more sodium to flood the cell along with calcium (Ca2+). At the same time, potassium (K+) flows out of the cell. These events themselves only last tens of milliseconds, but they have indirect lasting effects. 
An influx of calcium causes extra AMPA receptors to be inserted into the cell membrane, which will allow more sodium ions into the cell during future action potential events from the presynaptic neuron. A similar process works in reverse to trigger LTD. During LTP events, a special class of proteins called growth factors can also form, which can cause new synapses to grow, strengthening the bond between the two cells. The impact of new synapse growth can be permanent, assuming that the neurons continue to fire together frequently. Many artificial neurons act less like neurons and more like transistors with two simple states: on or off. If enough upstream neurons are on rather than off, the neuron is on. Otherwise, it’s off. Other neural nets use input values from -1 to +1. The basic math looks a little like the following: This is a good idea if you want to conserve CPU power so you can emulate a lot more neurons, and we’ve been able to use these basic principles to accomplish very simple pattern recognition tasks, such as optical character recognition (OCR) using pre-trained networks. However, there’s a problem. As I’ve described above, real neurons don’t behave that way. Instead, synapses transmit fluctuating continuous value potentials over time through the soma (cell body) to the axon hillock trigger zone where the sum of the signal may or may not trigger an action potential at any given moment in time. If the potential in the soma remains high, pulses may continue as the cell triggers at high frequency (once every few milliseconds). Lots of variables influence the process, the trigger frequencies, and the pattern of action potential bursts. With the model presented above, how would you determine whether or not triggers occurred within the LTP/LTD windows? What critical element is our basic model missing? Time. But that’s a story for a different article. Stay tuned for part 2. Eric Elliott is the author of “Programming JavaScript Applications” (O’Reilly), and “Learn JavaScript with Eric Elliott”. He has contributed to software experiences for Adobe Systems, Zumba Fitness, The Wall Street Journal, ESPN, BBC, and top recording artists including Usher, Frank Ocean, Metallica, and many more. He spends most of his time in the San Francisco Bay Area with the most beautiful woman in the world. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Make some magic. #JavaScript To submit, DM your proposal to @JS_Cheerleader on Twitter " Dhruv Parthasarathy,665,11,https://medium.com/@dhruvp/how-to-write-a-neural-network-to-play-pong-from-scratch-956b57d4f6e0?source=tag_archive---------8----------------,Write an AI to win at Pong from scratch with Reinforcement Learning,"There’s a huge difference between reading about Reinforcement Learning and actually implementing it. In this post, you’ll implement a Neural Network for Reinforcement Learning and see it learn more and more as it finally becomes good enough to beat the computer in Pong! You can play around with other such Atari games at the OpenAI Gym. By the end of this post, you’ll be able to do the following: The code and the idea are all tightly based on Andrej Karpathy’s blog post. The code in me_pong.py is intended to be a simpler to follow version of pong.py which was written by Dr. Karpathy. To follow along, you’ll need to know the following: If you want a deeper dive into the material at hand, read the blog post on which all of this is based. 
This post is meant to be a simpler introduction to that material. Great! Let’s get started. We are given the following: Can we use these pieces to train our agent to beat the computer? Moreover, can we make our solution generic enough so it can be reused to win in games that aren’t pong? Indeed, we can! Andrej does this by building a Neural Network that takes in each image and outputs a command to our AI to move up or down. We can break this down a bit more into the following steps: Our Neural Network, based heavily on Andrej’s solution, will do the following: Ok now that we’ve described the problem and its solution, let’s get to writing some code! We’re now going to follow the code in me_pong.py. Please keep it open and read along! The code starts here: First, let’s use OpenAI Gym to make a game environment and get our very first image of the game. Next, we set a bunch of parameters based off of Andrej’s blog post. We aren’t going to worry about tuning them but note that you can probably get better performance by doing so. The parameters we will use are: Then, we set counters, initial values, and the initial weights in our Neural Network. Weights are stored in matrices. Layer 1 of our Neural Network is a 200 x 6400 matrix representing the weights for our hidden layer. For layer 1, element w1_ij represents the weight of neuron i for input pixel j in layer 1. Layer 2 is a 200 x 1 matrix representing the weights of the output of the hidden layer on our final output. For layer 2, element w2_i represents the weights we place on the activation of neuron i in the hidden layer. We initialize each layer’s weights with random numbers for now. We divide by the square root of the number of the dimension size to normalize our weights. Next, we set up the initial parameters for RMSProp (a method for updating weights that we will discuss later). Don’t worry too much about understanding what you see below. I’m mainly bringing it up here so we can continue to follow along the main code block. We’ll need to collect a bunch of observations and intermediate values across the episode and use those to compute the gradient at the end based on the result. The below sets up the arrays where we’ll collect all that information. Ok we’re all done with the setup! If you were following, it should look something like this: Phew. Now for the fun part! The crux of our algorithm is going to live in a loop where we continually make a move and then learn based on the results of the move. We’ll put everything in a while block for now but in reality you might set up a break condition to stop the process. The first step to our algorithm is processing the image of the game that OpenAI Gym passed us. We really don’t care about the entire image - just certain details. We do this below: Let’s dive into preprocess_observations to see how we convert the image OpenAI Gym gives us into something we can use to train our Neural Network. The basic steps are: Now that we’ve preprocessed the observations, let’s move on to actually sending the observations through our neural net to generate the probability of telling our AI to move up. Here are the steps we’ll take: How exactly does apply_neural_nets take observations and weights and generate a probability of going up? This is just the forward pass of the Neural Network. Let’s look at the code below for more information: As you can see, it’s not many steps at all! Let’s go step by step: Let’s return to the main algorithm and continue on. 
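For readers who don't have me_pong.py open, the setup and the forward pass described above look roughly like the sketch below. The layer sizes, the 1/sqrt(fan-in) weight scaling, and the sigmoid "probability of up" follow the article; the exact variable and function names, and the Gym environment id, are my reconstruction, so treat them as assumptions rather than the notebook's exact code.

```python
import gym
import numpy as np

env = gym.make("Pong-v0")     # assumes the classic Atari Gym environment id
observation = env.reset()      # first raw 210x160x3 frame of the game

# Hyperparameters in the spirit of the article (treat them as defaults to tune)
num_hidden_layer_neurons = 200
input_dimensions = 80 * 80     # preprocessed 80x80 frame, flattened to 6400
learning_rate = 1e-4
gamma = 0.99                   # reward discount factor
decay_rate = 0.99              # RMSProp decay
batch_size = 10                # episodes per weight update

# Layer 1: 200 x 6400, layer 2: 200 weights on the hidden activations,
# both initialized randomly and scaled by 1/sqrt(fan-in) to normalize them.
weights = {
    "1": np.random.randn(num_hidden_layer_neurons, input_dimensions)
         / np.sqrt(input_dimensions),
    "2": np.random.randn(num_hidden_layer_neurons)
         / np.sqrt(num_hidden_layer_neurons),
}

def relu(vector):
    return np.maximum(vector, 0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def apply_neural_nets(observation_vector, weights):
    """Forward pass: hidden ReLU layer, then a single sigmoid 'probability of up'."""
    hidden_layer_values = relu(np.dot(weights["1"], observation_vector))
    up_probability = sigmoid(np.dot(hidden_layer_values, weights["2"]))
    return hidden_layer_values, up_probability
```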
Now that we have obtained a probability of going up, we need to record the results for later learning and choose an action to tell our AI to implement: We choose an action by flipping an imaginary coin that lands “up” with probability up_probability and down with 1 - up_probability. If it lands up, we tell our AI to go up and if not, we tell it to go down. We also record these intermediate values for later learning. Having done that, we pass the action to OpenAI Gym via env.step(action). Ok we’ve covered the first half of the solution! We know what action to tell our AI to take. If you’ve been following along, your code should look like this: Now that we’ve made our move, it’s time to start learning so we figure out the right weights in our Neural Network! Learning is all about seeing the result of the action (i.e. whether or not we won the round) and changing our weights accordingly. The first step to learning is asking the following question: Mathematically, this is just the derivative of our result with respect to the outputs of our final layer. If L is the value of our result to us and f is the function that gives us the activations of our final layer, this derivative is just ∂L/∂f. In a binary classification context (i.e. we just have to tell the AI one of two actions, up or down), this derivative takes a simple form involving the sigmoid function σ. Read the Attribute Classification section here for more information about how we get this derivative. We simplify this further below: After one action (moving the paddle up or down), we don’t really have an idea of whether or not this was the right action. So we’re going to cheat and treat the action we end up sampling from our probability as the correct action. Our prediction for this round is going to be the probability of going up we calculated. Using that, ∂L/∂f can be computed directly from the sampled action and the predicted probability (one concrete way to write it appears in the sketch below). Awesome! We have the gradient per action. The next step is to figure out how we learn after the end of an episode (i.e. when we or our opponent miss the ball and someone gets a point). We do this by computing the policy gradient of the network at the end of each episode. The intuition here is that if we won the round, we’d like our network to generate more of the actions that led to us winning. Alternatively, if we lose, we’re going to try and generate fewer of these actions. OpenAI Gym provides us the handy done variable to tell us when an episode finishes (i.e. we missed the ball or our opponent missed the ball). When we notice we are done, the first thing we do is compile all our observations and gradient calculations for the episode. This allows us to apply our learnings over all the actions in the episode. Next, we want to learn in such a way that actions taken towards the end of an episode more heavily influence our learning than actions taken at the beginning. This is called discounting. Think about it this way: if you moved up at the first frame of the episode, it probably had very little impact on whether or not you win. However, closer to the end of the episode, your actions probably have a much larger effect as they determine whether or not your paddle reaches the ball and how your paddle hits the ball. We’re going to take this weighting into account by discounting our rewards such that rewards from earlier frames are discounted a lot more than rewards for later frames. After this, we’re going to finally use backpropagation to compute the gradient (i.e. the direction we need to move our weights to improve). 
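Since the per-action derivative and the discounting formula were embedded as images in the original post, here is one concrete way to write them down. The "fake label" trick and the reset-at-each-point behavior follow the description above; the function names and the action ids 2 ("up") and 3 ("down") are assumptions carried over from Karpathy's post.

```python
import numpy as np

def choose_action(up_probability):
    """Flip a biased coin: go up (action 2) with probability up_probability,
    otherwise go down (action 3)."""
    return 2 if np.random.uniform() < up_probability else 3

def gradient_per_action(action, up_probability):
    """Treat the sampled action as if it were the correct label:
    dL/df = fake_label - p(up)."""
    fake_label = 1.0 if action == 2 else 0.0
    return fake_label - up_probability

def discount_rewards(rewards, gamma=0.99):
    """Propagate each point's reward backwards with decay gamma, resetting at
    every scored point, so frames far from the outcome get heavily discounted credit."""
    discounted = np.zeros_like(rewards, dtype=float)
    running_add = 0.0
    for t in reversed(range(len(rewards))):
        if rewards[t] != 0:        # someone scored; restart the running sum
            running_add = 0.0
        running_add = running_add * gamma + rewards[t]
        discounted[t] = running_add
    return discounted

# Earlier frames receive gamma-discounted credit for the final point.
print(discount_rewards(np.array([0, 0, 0, 1.0])))
```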
Let’s dig a bit into how the policy gradient for the episode is computed. This is one of the most important parts of Reinforcement Learning as it’s how our agent figures out how to improve over time. To begin with, if you haven’t already, read this excerpt on backpropagation from Michael Nielsen’s excellent free book on Deep Learning. As you’ll see in that excerpt, there are four fundamental equations of backpropagation, a technique for computing the gradient for our weights. Our goal is to find ∂C/∂w1 (BP4), the derivative of the cost function with respect to the first layer’s weights, and ∂C/∂w2, the derivative of the cost function with respect to the second layer’s weights. These gradients will help us understand what direction to move our weights in for the greatest improvement. To begin with, let’s start with ∂C/∂w2. If a^l2 is the activations of the hidden layer (layer 2), we see that the formula is: Indeed, this is exactly what we do here: Next, we need to calculate ∂C/∂w1. The formula for that is: and we also know that a^l1 is just our observation_values. So all we need now is δ^l2. Once we have that, we can calculate ∂C/∂w1 and return. We do just that below: If you’ve been following along, your function should look like this: With that, we’ve finished backpropagation and computed our gradients! After we have finished batch_size episodes, we finally update our weights for our Neural Network and implement our learnings. To update the weights, we simply apply RMSProp, an algorithm for updating weights described by Sebastian Ruder here. We implement this below: This is the step that tweaks our weights and allows us to get better over time. This is basically it! Putting it all together, it should look like this. You just coded a full Neural Network for playing Pong! Uncomment env.render() and run it for 3–4 days to see it finally beat the computer! You’ll need to do some pickling as done in Andrej Karpathy’s solution to be able to visualize your results when you win. According to the blog post, this algorithm should take around 3 days of training on a MacBook to start beating the computer. Consider tweaking the parameters or using Convolutional Neural Nets to boost the performance further. If you want a further primer on Neural Networks and Reinforcement Learning, there are some great resources to learn more (I work at Udacity as the Director of Machine Learning programs): From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. @dhruvp. VP Eng @Athelas. MIT Math and CS Undergrad ’13. MIT CS Masters ’14. Previously: Director of AI Programs @ Udacity. " Waleed Abdulla,507,12,https://medium.com/@waleedka/traffic-sign-recognition-with-tensorflow-629dffc391a6?source=tag_archive---------9----------------,Traffic Sign Recognition with TensorFlow – Waleed Abdulla – Medium,"This is part 1 of a series about building a deep learning model to recognize traffic signs. It’s intended to be a learning experience, for myself and for anyone else who likes to follow along. There are a lot of resources that cover the theory and math of neural networks, so I’ll focus on the practical aspects instead. I’ll describe my own experience building this model and share the source code and relevant materials. This is suitable for those who know Python and the basics of machine learning already, but want hands-on experience and to practice building a real application. In this part, I’ll talk about image classification and I’ll keep the model as simple as possible. 
In later parts, I’ll cover convolutional networks, data augmentation, and object detection. The source code is available in this Jupyter notebook. I’m using Python 3.5 and TensorFlow 0.12. If you prefer to run the code in Docker, you can use my Docker image that contains many popular deep learning tools. Run it with this command: Note that my project directory is in ~/traffic and I’m mapping it to the /traffic directory in the Docker container. Modify this if you’re using a different directory. My first challenge was finding a good training dataset. Traffic sign recognition is a well studied problem, so I figured I’ll find something online. I started by googling “traffic sign dataset” and found several options. I picked the Belgian Traffic Sign Dataset because it was big enough to train on, and yet small enough to be easy to work with. You can download the dataset from http://btsd.ethz.ch/shareddata/. There are a lot of datasets on that page, but you only need the two files listed under BelgiumTS for Classification (cropped images): After expanding the files, this is my directory structure. Try to match it so you can run the code without having to change the paths: Each of the two directories contain 62 subdirectories, named sequentially from 00000 to 00061. The directory names represent the labels, and the images inside each directory are samples of each label. Or, if you prefer to sound more formal: do Exploratory Data Analysis. It’s tempting to skip this part, but I’ve found that the code I write to examine the data ends up being used a lot throughout the project. I usually do this in Jupyter notebooks and share them with the team. Knowing your data well from the start saves you a lot of time later. The images in this dataset are in an old .ppm format. So old, in fact, that most tools don’t support it. Which meant that I couldn’t casually browse the folders to take a look at the images. Luckily, the Scikit Image library recognizes this format. This code will load the data and return two lists: images and labels. This is a small dataset so I’m loading everything into RAM to keep it simple. For larger datasets, you’d want to load the data in batches. After loading the images into Numpy arrays, I display a sample image of each label. See code in the notebook. This is our dataset: Looks like a good training set. The image quality is great, and there are a variety of angles and lighting conditions. More importantly, the traffic signs occupy most of the area of each image, which allows me to focus on object classification and not have to worry about finding the location of the traffic sign in the image (object detection). I’ll get to object detection in a future post. The first thing I noticed from the samples above is that images are square-ish, but have different aspect ratios. My neural network will take a fixed-size input, so I have some preprocessing to do. I’ll get to that soon, but first let’s pick one label and see more of its images. Here is an example of label 32: It looks like the dataset considers all speed limit signs to be of the same class, regardless of the numbers on them. That’s fine, as long as we know about it beforehand and know what to expect. That’s why understanding your dataset is so important and can save you a lot of pain and confusion later. I’ll leave exploring the other labels to you. Labels 26 and 27 are interesting to check. They also have numbers in red circles, so the model will have to get really good to differentiate between them. 
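The loading code described above (62 label directories, .ppm images read with scikit-image) can be sketched roughly like this. The directory layout and the "directory name equals label" convention come from the article; the function name, the "Training" path, and the int(d) label parsing are my assumptions.

```python
import os
from skimage import io

def load_data(data_dir):
    """Walk the BelgiumTS directory tree: each subdirectory name is a label,
    each .ppm file inside it is a sample image for that label."""
    directories = [d for d in os.listdir(data_dir)
                   if os.path.isdir(os.path.join(data_dir, d))]
    images, labels = [], []
    for d in directories:
        label_dir = os.path.join(data_dir, d)
        file_names = [os.path.join(label_dir, f)
                      for f in os.listdir(label_dir) if f.endswith(".ppm")]
        for f in file_names:
            images.append(io.imread(f))   # scikit-image can read the old .ppm format
            labels.append(int(d))          # e.g. "00032" -> 32
    return images, labels

images, labels = load_data("Training")     # path assumes the layout shown above
print(len(images), "images,", len(set(labels)), "labels")
```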
Most image classification networks expect images of a fixed size, and our first model will do as well. So we need to resize all the images to the same size. But since the images have different aspect ratios, then some of them will be stretched vertically or horizontally. Is that a problem? I think it’s not in this case, because the differences in aspect ratios are not that large. My own criteria is that if a person can recognize the images when they’re stretched then the model should be able to do so as well. What are the sizes of the images anyway? Let’s print a few examples: The sizes seem to hover around 128x128. I could use that size to preserve as much information as possible, but in early development I prefer to use a smaller size because it leads to faster training, which allows me to iterate faster. I experimented with 16x16 and 20x20, but they were too small. I ended up picking 32x32 which is easy to recognize (see below) and reduces the size of the model and training data by a factor of 16 compared to 128x128. I’m also in the habit of printing the min() and max() values often. It’s a simple way to verify the range of the data and catch bugs early. This tells me that the image colors are the standard range of 0–255. We’re getting to the interesting part! Continuing the theme of keeping it simple, I started with the simplest possible model: A one layer network that consists of one neuron per label. This network has 62 neurons and each neuron takes the RGB values of all pixels as input. Effectively, each neuron receives 32*32*3=3072 inputs. This is a fully-connected layer because every neuron connects to every input value. You’re probably familiar with its equation: I start with a simple model because it’s easy to explain, easy to debug, and fast to train. Once this works end to end, expanding on it is much easier than building something complex from the start. TensorFlow encapsulates the architecture of a neural network in an execution graph. The graph consists of operations (Ops for short) such as Add, Multiply, Reshape, ...etc. These ops perform actions on data in tensors (multidimensional arrays). I’ll go through the code to build the graph step by step below, but here is the full code if you prefer to scan it first: First, I create the Graph object. TensorFlow has a default global graph, but I don’t recommend using it. Global variables are bad in general because they make it too easy to introduce bugs. I prefer to create the graph explicitly. Then I define Placeholders for the images and labels. The placeholders are TensorFlow’s way of receiving input from the main program. Notice that I create the placeholders (and all other ops) inside the block of with graph.as_default(). This is so they become part of my graph object rather than the global graph. The shape of the images_ph placeholder is [None, 32, 32, 3]. It stands for [batch size, height, width, channels] (often shortened as NHWC) . The None for batch size means that the batch size is flexible, which means that we can feed different batch sizes to the model without having to change the code. Pay attention to the order of your inputs because some models and frameworks might use a different arrangement, such as NCHW. Next, I define the fully connected layer. Rather than implementing the raw equation, y = xW + b, I use a handy function that does that in one line and also applies the activation function. It expects input as a one-dimensional vector, though. So I flatten the images first. 
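A small sketch of the resizing, the min/max sanity check, and the flattening mentioned above, assuming the images and labels lists from the loading step. The 32x32 target size and the 32*32*3 = 3072 input count come from the article; everything else is routine NumPy.

```python
import numpy as np
from skimage import transform

# Resize every image to 32x32; aspect ratios get slightly stretched, which is fine here.
images32 = [transform.resize(img, (32, 32)) for img in images]

images_a = np.array(images32)
labels_a = np.array(labels)

# Quick sanity checks: shapes and value ranges catch many bugs early.
print("images shape:", images_a.shape)   # (num_samples, 32, 32, 3)
print("min:", images_a.min(), "max:", images_a.max())
# Note: with default settings, resize() also converts uint8 pixels to floats in [0, 1].

# Each 32x32x3 image flattens to 32*32*3 = 3072 inputs per neuron.
print("inputs per image:", int(np.prod(images_a.shape[1:])))
```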
I’m using the ReLU activation function here: It simply converts all negative values to zeros. It’s been shown to work well in classification tasks and trains faster than sigmoid or tanh. For more background, check here and here. The output of the fully connected layer is a logits vector of length 62 (technically, it’s [None, 62] because we’re dealing with a batch of logits vectors). A row in the logits tensor might look like this: [0.3, 0, 0, 1.2, 2.1, .01, 0.4, ....., 0, 0]. The higher the value, the more likely that the image represents that label. Logits are not probabilities, though — They can have any value, and they don’t add up to 1. The actual absolute values of the logits are not important, just their values relative to each other. It’s easy to convert logits to probabilities using the softmax function if needed (it’s not needed here). In this application, we just need the index of the largest value, which corresponds to the id of the label. The argmax op does that. The argmax output will be integers in the range 0 to 61. Choosing the right loss function is an area of research in and of itself, which I won’t delve into it here other than to say that cross-entropy is the most common function for classification tasks. If you’re not familiar with it, there is a really good explanation here and here. Cross-entropy is a measure of difference between two vectors of probabilities. So we need to convert labels and the logits to probability vectors. The function sparse_softmax_cross_entropy_with_logits() simplifies that. It takes the generated logits and the groundtruth labels and does three things: converts the label indexes of shape [None] to logits of shape [None, 62] (one-hot vectors), then it runs softmax to convert both prediction logits and label logits to probabilities, and finally calculates the cross-entropy between the two. This generates a loss vector of shape [None] (1D of length = batch size), which we pass through reduce_mean() to get one single number that represents the loss value. Choosing the optimization algorithm is another decision to make. I usually use the ADAM optimizer because it’s been shown to converge faster than simple gradient descent. This post does a great job comparing different gradient descent optimizers. The last node in the graph is the initialization op, which simply sets the values of all variables to zeros (or to random values or whatever the variables are set to initialize to). Notice that the code above doesn’t execute any of the ops yet. It’s just building the graph and describing its inputs. The variables we defined above, such as init, loss, predicted_labels don’t contain numerical values. They are references to ops that we’ll execute next. This is where we iteratively train the model to minimize the loss function. Before we start training, though, we need to create a Session object. I mentioned the Graph object earlier and how it holds all the Ops of the model. The Session, on the other hand, holds the values of all the variables. If a graph holds the equation y=xW+b then the session holds the actual values of these variables. Usually the first thing to run after starting a session is the initialization op, init, to initialize the variables. Then we start the training loop and run the train op repeatedly. While not necessary, it’s useful to run the loss op as well to print its values and monitor the progress of the training. 
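Pulling the graph-building and training steps above into one place, a sketch in the TensorFlow 1.x-era API (close in spirit to the TF 0.12 style used in the article) might look like the block below. Layer sizes, the loss, the optimizer, and the 201-iteration loop follow the article, but this is a reconstruction rather than the notebook's exact code, and it assumes images_a and labels_a from the resizing step.

```python
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # Placeholders: NHWC image batches and integer labels.
    images_ph = tf.placeholder(tf.float32, [None, 32, 32, 3])
    labels_ph = tf.placeholder(tf.int32, [None])

    # Flatten to [None, 3072], then one fully connected layer of 62 ReLU units.
    images_flat = tf.contrib.layers.flatten(images_ph)
    logits = tf.contrib.layers.fully_connected(images_flat, 62, tf.nn.relu)

    # Predicted label = index of the largest logit.
    predicted_labels = tf.argmax(logits, 1)

    # Cross-entropy between the logits and the sparse labels, averaged over the batch.
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits,
                                                       labels=labels_ph))
    train = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)

    init = tf.global_variables_initializer()

# The Session holds the variable values: run init once, then loop over the train op.
with tf.Session(graph=graph) as session:
    session.run(init)
    for i in range(201):
        _, loss_value = session.run([train, loss],
                                    feed_dict={images_ph: images_a,
                                               labels_ph: labels_a})
        if i % 10 == 0:
            print("Loss:", loss_value)
```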
In case you’re wondering, I set the loop to 201 so that the i % 10 condition is satisfied in the last round and prints the last loss value. The output should look something like this: Now we have a trained model in memory in the Session object. To use it, we call session.run() just like in the training code. The predicted_labels op returns the output of the argmax() function, so that’s what we need to run. Here I classify 10 random images and print both, the predictions and the groundtruth labels for comparison. In the notebook, I include a function to visualize the results as well. It generates something like this: The visualization shows that the model is working , but doesn’t quantify how accurate it is. And you might’ve noticed that it’s classifying the training images, so we don’t know yet if the model generalizes to images that it hasn’t seen before. Next, we calculate a better evaluation metric. To properly measure how the model generalizes to data it hasn’t seen, I do the evaluation on test data that I didn’t use in training. The BelgiumTS dataset makes this easy by providing two separate sets, one for training and one for testing. In the notebook I load the test set, resize the images to 32x32, and then calculate the accuracy. This is the relevant part of the code that calculates the accuracy. The accuracy I get in each run ranges between 0.40 and 0.70 depending on whether the model lands on a local minimum or a global minimum. This is expected when running a simple model like this one. In a future post I’ll talk about ways to improve the consistency of the results. Congratulations! We have a working simple neural network. Given how simple this neural network is, training takes just a minute on my laptop so I didn’t bother saving the trained model. In the next part, I’ll add code to save and load trained models and expand to use multiple layers, convolutional networks, and data augmentation. Stay tuned! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Startups, deep learning, computer vision. " Stefan Kojouharov,14.2K,7,https://becominghuman.ai/cheat-sheets-for-ai-neural-networks-machine-learning-deep-learning-big-data-678c51b4b463?source=tag_archive---------0----------------,"Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Big Data","Over the past few months, I have been collecting AI cheat sheets. From time to time I share them with friends and colleagues and recently I have been getting asked a lot, so I decided to organize and share the entire collection. To make things more interesting and give context, I added descriptions and/or excerpts for each major topic. This is the most complete list and the Big-O is at the very end, enjoy... This machine learning cheat sheet will help you find the right estimator for the job which is the most difficult part. The flowchart will help you check the documentation and rough guide of each estimator that will help you to know more about the problems and how to solve it. Scikit-learn (formerly scikits.learn) is a free software machine learning library for the Python programming language. It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy. 
In May 2017 Google announced the second-generation of the TPU, as well as the availability of the TPUs in Google Compute Engine.[12] The second-generation TPUs deliver up to 180 teraflops of performance, and when organized into clusters of 64 TPUs provide up to 11.5 petaflops. In 2017, Google’s TensorFlow team decided to support Keras in TensorFlow’s core library. Chollet explained that Keras was conceived to be an interface rather than an end-to-end machine-learning framework. It presents a higher-level, more intuitive set of abstractions that make it easy to configure neural networks regardless of the backend scientific computing library. NumPy targets the CPython reference implementation of Python, which is a non-optimizing bytecode interpreter. Mathematical algorithms written for this version of Python often run much slower than compiled equivalents. NumPy address the slowness problem partly by providing multidimensional arrays and functions and operators that operate efficiently on arrays, requiring rewriting some code, mostly inner loops using NumPy. The name ‘Pandas’ is derived from the term “panel data”, an econometrics term for multidimensional structured data sets. The term “data wrangler” is starting to infiltrate pop culture. In the 2017 movie Kong: Skull Island, one of the characters, played by actor Marc Evan Jackson is introduced as “Steve Woodward, our data wrangler”. SciPy builds on the NumPy array object and is part of the NumPy stack which includes tools like Matplotlib, pandas and SymPy, and an expanding set of scientific computing libraries. This NumPy stack has similar users to other applications such as MATLAB, GNU Octave, and Scilab. The NumPy stack is also sometimes referred to as the SciPy stack.[3] matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+. There is also a procedural “pylab” interface based on a state machine (like OpenGL), designed to closely resemble that of MATLAB, though its use is discouraged.[2] SciPy makes use of matplotlib. pyplot is a matplotlib module which provides a MATLAB-like interface.[6] matplotlib is designed to be as usable as MATLAB, with the ability to use Python, with the advantage that it is free. >>> If you like this list, you can let me know here. <<< Stefan is the founder of Chatbot’s Life, a Chatbot media and consulting firm. Chatbot’s Life has grown to over 150k views per month and has become the premium place to learn about Bots & AI online. Chatbot’s Life has also consulted many of the top Bot companies like Swelly, Instavest, OutBrain, NearGroup and a number of Enterprises. 
Big-O Algorithm Cheat Sheet: http://bigocheatsheet.com/ Bokeh Cheat Sheet: https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Bokeh_Cheat_Sheet.pdf Data Science Cheat Sheet: https://www.datacamp.com/community/tutorials/python-data-science-cheat-sheet-basics Data Wrangling Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf Data Wrangling: https://en.wikipedia.org/wiki/Data_wrangling Ggplot Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf Keras Cheat Sheet: https://www.datacamp.com/community/blog/keras-cheat-sheet#gs.DRKeNMs Keras: https://en.wikipedia.org/wiki/Keras Machine Learning Cheat Sheet: https://ai.icymi.email/new-machinelearning-cheat-sheet-by-emily-barry-abdsc/ Machine Learning Cheat Sheet: https://docs.microsoft.com/en-in/azure/machine-learning/machine-learning-algorithm-cheat-sheet ML Cheat Sheet:: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html Matplotlib Cheat Sheet: https://www.datacamp.com/community/blog/python-matplotlib-cheat-sheet#gs.uEKySpY Matpotlib: https://en.wikipedia.org/wiki/Matplotlib Neural Networks Cheat Sheet: http://www.asimovinstitute.org/neural-network-zoo/ Neural Networks Graph Cheat Sheet: http://www.asimovinstitute.org/blog/ Neural Networks: https://www.quora.com/Where-can-find-a-cheat-sheet-for-neural-network Numpy Cheat Sheet: https://www.datacamp.com/community/blog/python-numpy-cheat-sheet#gs.AK5ZBgE NumPy: https://en.wikipedia.org/wiki/NumPy Pandas Cheat Sheet: https://www.datacamp.com/community/blog/python-pandas-cheat-sheet#gs.oundfxM Pandas: https://en.wikipedia.org/wiki/Pandas_(software) Pandas Cheat Sheet: https://www.datacamp.com/community/blog/pandas-cheat-sheet-python#gs.HPFoRIc Pyspark Cheat Sheet: https://www.datacamp.com/community/blog/pyspark-cheat-sheet-python#gs.L=J1zxQ Scikit Cheat Sheet: https://www.datacamp.com/community/blog/scikit-learn-cheat-sheet Scikit-learn: https://en.wikipedia.org/wiki/Scikit-learn Scikit-learn Cheat Sheet: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html Scipy Cheat Sheet: https://www.datacamp.com/community/blog/python-scipy-cheat-sheet#gs.JDSg3OI SciPy: https://en.wikipedia.org/wiki/SciPy TesorFlow Cheat Sheet: https://www.altoros.com/tensorflow-cheat-sheet.html Tensor Flow: https://en.wikipedia.org/wiki/TensorFlow From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder of Chatbots Life. I help Companies Create Great Chatbots & AI Systems and share my Insights along the way. Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity. " Avinash Sharma V,6.9K,10,https://medium.com/the-theory-of-everything/understanding-activation-functions-in-neural-networks-9491262884e0?source=tag_archive---------1----------------,Understanding Activation Functions in Neural Networks,"Recently, a colleague of mine asked me a few questions like “why do we have so many activation functions?”, “why is that one works better than the other?”, ”how do we know which one to use?”, “is it hardcore maths?” and so on. So I thought, why not write an article on it for those who are familiar with neural network only at a basic level and is therefore, wondering about activation functions and their “why-how-mathematics!”. NOTE: This article assumes that you have a basic knowledge of an artificial “neuron”. 
I would recommend reading up on the basics of neural networks before reading this article for better understanding. So what does an artificial neuron do? Simply put, it calculates a “weighted sum” of its input, adds a bias and then decides whether it should be “fired” or not ( yeah right, an activation function does this, but let’s go with the flow for a moment ). So consider a neuron. Now, the value of Y can be anything ranging from -inf to +inf. The neuron really doesn’t know the bounds of the value. So how do we decide whether the neuron should fire or not ( why this firing pattern? Because we learnt it from biology that’s the way brain works and brain is a working testimony of an awesome and intelligent system ). We decided to add “activation functions” for this purpose. To check the Y value produced by a neuron and decide whether outside connections should consider this neuron as “fired” or not. Or rather let’s say — “activated” or not. The first thing that comes to our minds is how about a threshold based activation function? If the value of Y is above a certain value, declare it activated. If it’s less than the threshold, then say it’s not. Hmm great. This could work! Activation function A = “activated” if Y > threshold else not Alternatively, A = 1 if y> threshold, 0 otherwise Well, what we just did is a “step function”, see the below figure. Its output is 1 ( activated) when value > 0 (threshold) and outputs a 0 ( not activated) otherwise. Great. So this makes an activation function for a neuron. No confusions. However, there are certain drawbacks with this. To understand it better, think about the following. Suppose you are creating a binary classifier. Something which should say a “yes” or “no” ( activate or not activate ). A Step function could do that for you! That’s exactly what it does, say a 1 or 0. Now, think about the use case where you would want multiple such neurons to be connected to bring in more classes. Class1, class2, class3 etc. What will happen if more than 1 neuron is “activated”. All neurons will output a 1 ( from step function). Now what would you decide? Which class is it? Hmm hard, complicated. You would want the network to activate only 1 neuron and others should be 0 ( only then would you be able to say it classified properly/identified the class ). Ah! This is harder to train and converge this way. It would have been better if the activation was not binary and it instead would say “50% activated” or “20% activated” and so on. And then if more than 1 neuron activates, you could find which neuron has the “highest activation” and so on ( better than max, a softmax, but let’s leave that for now ). In this case as well, if more than 1 neuron says “100% activated”, the problem still persists.I know! But..since there are intermediate activation values for the output, learning can be smoother and easier ( less wiggly ) and chances of more than 1 neuron being 100% activated is lesser when compared to step function while training ( also depending on what you are training and the data ). Ok, so we want something to give us intermediate ( analog ) activation values rather than saying “activated” or not ( binary ). The first thing that comes to our minds would be Linear function. A = cx A straight line function where activation is proportional to input ( which is the weighted sum from neuron ). This way, it gives a range of activations, so it is not binary activation. 
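As a quick illustration of the two candidates discussed so far, the binary step and the linear activation fit in a couple of lines; the threshold of 0 and the constant c are just example values.

```python
import numpy as np

def step(y, threshold=0.0):
    """Binary step: 'activated' (1) if the weighted sum exceeds the threshold, else 0."""
    return np.where(y > threshold, 1.0, 0.0)

def linear(y, c=1.0):
    """Linear activation A = c*y: analog output, but its derivative is the
    constant c, so the gradient carries no information about the input."""
    return c * y

y = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(step(y))    # [0. 0. 0. 1. 1.]
print(linear(y))  # [-2.  -0.5  0.   0.5  2. ]
```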
We can definitely connect a few neurons together and if more than 1 fires, we could take the max ( or softmax) and decide based on that. So that is ok too. Then what is the problem with this? If you are familiar with gradient descent for training, you would notice that for this function, derivative is a constant. A = cx, derivative with respect to x is c. That means, the gradient has no relationship with X. It is a constant gradient and the descent is going to be on constant gradient. If there is an error in prediction, the changes made by back propagation is constant and not depending on the change in input delta(x) !!! This is not that good! ( not always, but bear with me ). There is another problem too. Think about connected layers. Each layer is activated by a linear function. That activation in turn goes into the next level as input and the second layer calculates weighted sum on that input and it in turn, fires based on another linear activation function. No matter how many layers we have, if all are linear in nature, the final activation function of last layer is nothing but just a linear function of the input of first layer! Pause for a bit and think about it. That means these two layers ( or N layers ) can be replaced by a single layer. Ah! We just lost the ability of stacking layers this way. No matter how we stack, the whole network is still equivalent to a single layer with linear activation ( a combination of linear functions in a linear manner is still another linear function ). Let’s move on, shall we? Well, this looks smooth and “step function like”. What are the benefits of this? Think about it for a moment. First things first, it is nonlinear in nature. Combinations of this function are also nonlinear! Great. Now we can stack layers. What about non binary activations? Yes, that too!. It will give an analog activation unlike step function. It has a smooth gradient too. And if you notice, between X values -2 to 2, Y values are very steep. Which means, any small changes in the values of X in that region will cause values of Y to change significantly. Ah, that means this function has a tendency to bring the Y values to either end of the curve. Looks like it’s good for a classifier considering its property? Yes ! It indeed is. It tends to bring the activations to either side of the curve ( above x = 2 and below x = -2 for example). Making clear distinctions on prediction. Another advantage of this activation function is, unlike linear function, the output of the activation function is always going to be in range (0,1) compared to (-inf, inf) of linear function. So we have our activations bound in a range. Nice, it won’t blow up the activations then. This is great. Sigmoid functions are one of the most widely used activation functions today. Then what are the problems with this? If you notice, towards either end of the sigmoid function, the Y values tend to respond very less to changes in X. What does that mean? The gradient at that region is going to be small. It gives rise to a problem of “vanishing gradients”. Hmm. So what happens when the activations reach near the “near-horizontal” part of the curve on either sides? Gradient is small or has vanished ( cannot make significant change because of the extremely small value ). The network refuses to learn further or is drastically slow ( depending on use case and until gradient /computation gets hit by floating point value limits ). 
There are ways to work around this problem and sigmoid is still very popular in classification problems. Another activation function that is used is the tanh function. Hm. This looks very similar to sigmoid. In fact, it is a scaled sigmoid function! Ok, now this has characteristics similar to sigmoid that we discussed above. It is nonlinear in nature, so great we can stack layers! It is bound to range (-1, 1) so no worries of activations blowing up. One point to mention is that the gradient is stronger for tanh than sigmoid ( derivatives are steeper). Deciding between the sigmoid or tanh will depend on your requirement of gradient strength. Like sigmoid, tanh also has the vanishing gradient problem. Tanh is also a very popular and widely used activation function. Later, comes the ReLu function, A(x) = max(0,x) The ReLu function is as shown above. It gives an output x if x is positive and 0 otherwise. At first look this would look like having the same problems of linear function, as it is linear in positive axis. First of all, ReLu is nonlinear in nature. And combinations of ReLu are also non linear! ( in fact it is a good approximator. Any function can be approximated with combinations of ReLu). Great, so this means we can stack layers. It is not bound though. The range of ReLu is [0, inf). This means it can blow up the activation. Another point that I would like to discuss here is the sparsity of the activation. Imagine a big neural network with a lot of neurons. Using a sigmoid or tanh will cause almost all neurons to fire in an analog way ( remember? ). That means almost all activations will be processed to describe the output of a network. In other words the activation is dense. This is costly. We would ideally want a few neurons in the network to not activate and thereby making the activations sparse and efficient. ReLu give us this benefit. Imagine a network with random initialized weights ( or normalised ) and almost 50% of the network yields 0 activation because of the characteristic of ReLu ( output 0 for negative values of x ). This means a fewer neurons are firing ( sparse activation ) and the network is lighter. Woah, nice! ReLu seems to be awesome! Yes it is, but nothing is flawless.. Not even ReLu. Because of the horizontal line in ReLu( for negative X ), the gradient can go towards 0. For activations in that region of ReLu, gradient will be 0 because of which the weights will not get adjusted during descent. That means, those neurons which go into that state will stop responding to variations in error/ input ( simply because gradient is 0, nothing changes ). This is called dying ReLu problem. This problem can cause several neurons to just die and not respond making a substantial part of the network passive. There are variations in ReLu to mitigate this issue by simply making the horizontal line into non-horizontal component . for example y = 0.01x for x<0 will make it a slightly inclined line rather than horizontal line. This is leaky ReLu. There are other variations too. The main idea is to let the gradient be non zero and recover during training eventually. ReLu is less computationally expensive than tanh and sigmoid because it involves simpler mathematical operations. That is a good point to consider when we are designing deep neural nets. Now, which activation functions to use. Does that mean we just use ReLu for everything we do? Or sigmoid or tanh? Well, yes and no. 
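To keep the comparison concrete before we talk about choosing between them, here are the nonlinear activations from the discussion above in a few lines of NumPy. The 0.01 slope for leaky ReLU is the example value from the text; the rest follows the standard definitions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))        # output in (0, 1), saturates at both ends

def tanh(x):
    return np.tanh(x)                       # scaled sigmoid, output in (-1, 1)

def relu(x):
    return np.maximum(0.0, x)               # A(x) = max(0, x); sparse, unbounded above

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)    # small slope for x < 0 avoids "dying ReLU"

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
for fn in (sigmoid, tanh, relu, leaky_relu):
    print(fn.__name__, np.round(fn(x), 3))
```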
When you know the function you are trying to approximate has certain characteristics, you can choose an activation function which will approximate the function faster leading to faster training process. For example, a sigmoid works well for a classifier ( see the graph of sigmoid, doesn’t it show the properties of an ideal classifier? ) because approximating a classifier function as combinations of sigmoid is easier than maybe ReLu, for example. Which will lead to faster training process and convergence. You can use your own custom functions too!. If you don’t know the nature of the function you are trying to learn, then maybe i would suggest start with ReLu, and then work backwards. ReLu works most of the time as a general approximator! In this article, I tried to describe a few activation functions used commonly. There are other activation functions too, but the general idea remains the same. Research for better activation functions is still ongoing. Hope you got the idea behind activation function, why they are used and how do we decide which one to use. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Musings of an AI, Deep Learning, Mathematics addict " Elle O'Brien,2.3K,6,https://towardsdatascience.com/romance-novels-generated-by-artificial-intelligence-1b31d9c872b2?source=tag_archive---------2----------------,"Romance Novels, Generated by Artificial Intelligence","I’ve always been fascinated with romance novels — the kind they sell at the drugstore for a couple of dollars, usually with some attractive, soft-lit couples on the cover. So when I started futzing around with text-generating neural networks a few weeks ago, I developed an urgent curiosity to discover what artificial intelligence could contribute to the ever-popular genre. Maybe one day there will be entire books written by computers. For now, let’s start with titles. I gathered over 20,000 Harlequin Romance novel titles and gave them to a neural network, a type of artificial intelligence that learns the structure of text. It’s powerful enough to string together words in a way that seems almost human. 90% human. The other 10% is all wackiness. I was not disappointed with what came out. I even photoshopped some of my favorites into existence (the author names are synthesized from machine learning, too). Let’s have a look by theme: A common theme in romance novels is pregnancy, and the word “baby” had a strong showing in the titles I trained the neural network on. Naturally, the neural network came up with a lot of baby-themed titles: There’s an unusually high concentration of sheikhs, vikings, and billionaires in the Harlequin world. Likewise, the neural network generated some colorful new bachelor-types: I have so many questions. How is the prince pregnant? What sort of consulting does the count do? Who is Butterfly Earl? And what makes the sheikh’s desires so convenient? Although there are exceptions, most romance novels end in happily-ever-afters. A lot of them even start with an unexpected wedding — a marriage of convenience, or a stipulation of a business contract, or a sham that turns into real love. The neural network seems to have internalized something about matrimony: Doctors and surgeons are common paramours for mistresses headed towards the marriage valley: Christmas is a magical time for surgeons, sheikhs, playboys, dads, consultants, and the women who love them: What or where is Knith? I just like Mission: Christmas... 
This neural network has never seen the big Montana sky, but it has some questionable ideas about cowboys: The neural network generated some decidedly PG-13 titles: They can’t all live happily ever after. Some of the generated titles sounded like M. Night Shyamalan was a collaborator: How did the word “fear” get in there? It’s possible the network generated it without having “fear” in the training set, but a subset of the Harlequin empire is geared towards paranormal and gothic romance that might have included the word (*Note: I checked, and there was “Veil of Fear” published in 2012). To wrap it up, some of the adorable failures and near-misses generated by the neural network: I hope you’ve enjoyed computer-generated romance novel titles half as much as I have. Maybe someone out there can write about the Virgin Viking, or the Consultant Count, or the Baby Surgeon Seduction. I’d buy it. I built a webscraper in Python (thanks, Beautiful Soup!) that grabbed about 20,000 romance novel titles published under the Harlequin brand off of FictionDB.com. Harlequin is, to me, synonymous with the romance genre, although it comprises only a fraction (albeit a healthy one) of the entire market. I fed this list of book titles into a recurrent neural network, using software I got from GitHub, and waited a few hours for the magic to happen. The model I fit was a 3-layer, 256-node recurrent neural network. I also trained the network on the author list in to create some new pen names. For more about the neural network I used, have a look at the fabulous work of Andrej Karpathy. I discovered that “Surgery by the Sea” is actually a real novel, written by Sheila Douglas and published in 1979! So, this one isn’t an original neural network creation. Because the training set is rather small (only about 1 MB of text data), it’s to be expected that sometimes, the machine will spit out one of the titles it was trained on. One of the more challenging aspects of this project was discerning when that happened, since the real published titles can be more surprising than anything born out of artificial intelligence. For example: “The $4.98 Daddy” and “6'1” Grinch” are both real. In fact, the very first romance novel published by Harlequin was called “The Manatee”. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Computational scientist, software developer, science writer Sharing concepts, ideas, and codes. " Slav Ivanov,4.4K,10,https://blog.slavv.com/37-reasons-why-your-neural-network-is-not-working-4020854bd607?source=tag_archive---------3----------------,37 Reasons why your Neural Network is not working – Slav,"The network had been training for the last 12 hours. It all looked good: the gradients were flowing and the loss was decreasing. But then came the predictions: all zeroes, all background, nothing detected. “What did I do wrong?” — I asked my computer, who didn’t answer. Where do you start checking if your model is outputting garbage (for example predicting the mean of all outputs, or it has really poor accuracy)? A network might not be training for a number of reasons. Over the course of many debugging sessions, I would often find myself doing the same checks. I’ve compiled my experience along with the best ideas around in this handy list. I hope they would be of use to you, too. A lot of things can go wrong. But some of them are more likely to be broken than others. 
I usually start with this short list as an emergency first response: If the steps above don’t do it, start going down the following big list and verify things one by one. Check if the input data you are feeding the network makes sense. For example, I’ve more than once mixed the width and the height of an image. Sometimes, I would feed all zeroes by mistake. Or I would use the same batch over and over. So print/display a couple of batches of input and target output and make sure they are OK. Try passing random numbers instead of actual data and see if the error behaves the same way. If it does, it’s a sure sign that your net is turning data into garbage at some point. Try debugging layer by layer /op by op/ and see where things go wrong. Your data might be fine but the code that passes the input to the net might be broken. Print the input of the first layer before any operations and check it. Check if a few input samples have the correct labels. Also make sure shuffling input samples works the same way for output labels. Maybe the non-random part of the relationship between the input and output is too small compared to the random part (one could argue that stock prices are like this). I.e. the input are not sufficiently related to the output. There isn’t an universal way to detect this as it depends on the nature of the data. This happened to me once when I scraped an image dataset off a food site. There were so many bad labels that the network couldn’t learn. Check a bunch of input samples manually and see if labels seem off. The cutoff point is up for debate, as this paper got above 50% accuracy on MNIST using 50% corrupted labels. If your dataset hasn’t been shuffled and has a particular order to it (ordered by label) this could negatively impact the learning. Shuffle your dataset to avoid this. Make sure you are shuffling input and labels together. Are there a 1000 class A images for every class B image? Then you might need to balance your loss function or try other class imbalance approaches. If you are training a net from scratch (i.e. not finetuning), you probably need lots of data. For image classification, people say you need a 1000 images per class or more. This can happen in a sorted dataset (i.e. the first 10k samples contain the same class). Easily fixable by shuffling the dataset. This paper points out that having a very large batch can reduce the generalization ability of the model. Thanks to @hengcherkeng for this one: Did you standardize your input to have zero mean and unit variance? Augmentation has a regularizing effect. Too much of this combined with other forms of regularization (weight L2, dropout, etc.) can cause the net to underfit. If you are using a pretrained model, make sure you are using the same normalization and preprocessing as the model was when training. For example, should an image pixel be in the range [0, 1], [-1, 1] or [0, 255]? CS231n points out a common pitfall: Also, check for different preprocessing in each sample or batch. This will help with finding where the issue is. For example, if the target output is an object class and coordinates, try limiting the prediction to object class only. Again from the excellent CS231n: Initialize with small parameters, without regularization. For example, if we have 10 classes, at chance means we will get the correct class 10% of the time, and the Softmax loss is the negative log probability of the correct class so: -ln(0.1) = 2.302. 
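A couple of the checks above are easy to automate. The sketch below standardizes inputs to zero mean and unit variance and compares the initial loss of an untrained classifier against the "chance" value of -ln(1/num_classes) from the CS231n advice; the tolerance and the dummy data are arbitrary assumptions.

```python
import numpy as np

def standardize(x, eps=1e-8):
    """Zero mean, unit variance per feature; most nets train better this way."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + eps)

def check_initial_loss(initial_loss, num_classes, tolerance=0.1):
    """With small random weights and no regularization, the softmax loss should
    start near -ln(1/num_classes), e.g. about 2.302 for 10 classes."""
    expected = -np.log(1.0 / num_classes)
    if abs(initial_loss - expected) > tolerance:
        print(f"Suspicious initial loss {initial_loss:.3f} "
              f"(expected about {expected:.3f})")
    else:
        print("Initial loss looks sane.")

x = np.random.rand(256, 32) * 255.0          # raw pixel-like inputs
x_std = standardize(x)
print(x_std.mean().round(3), x_std.std().round(3))   # ~0.0 and ~1.0
check_initial_loss(initial_loss=2.35, num_classes=10)
```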
After this, try increasing the regularization strength which should increase the loss. If you implemented your own loss function, check it for bugs and add unit tests. Often, my loss would be slightly incorrect and hurt the performance of the network in a subtle way. If you are using a loss function provided by your framework, make sure you are passing to it what it expects. For example, in PyTorch I would mix up the NLLLoss and CrossEntropyLoss as the former requires a softmax input and the latter doesn’t. If your loss is composed of several smaller loss functions, make sure their magnitude relative to each is correct. This might involve testing different combinations of loss weights. Sometimes the loss is not the best predictor of whether your network is training properly. If you can, use other metrics like accuracy. Did you implement any of the layers in the network yourself? Check and double-check to make sure they are working as intended. Check if you unintentionally disabled gradient updates for some layers/variables that should be learnable. Maybe the expressive power of your network is not enough to capture the target function. Try adding more layers or more hidden units in fully connected layers. If your input looks like (k, H, W) = (64, 64, 64) it’s easy to miss errors related to wrong dimensions. Use weird numbers for input dimensions (for example, different prime numbers for each dimension) and check how they propagate through the network. If you implemented Gradient Descent by hand, gradient checking makes sure that your backpropagation works like it should. More info: 1 2 3. Overfit a small subset of the data and make sure it works. For example, train with just 1 or 2 examples and see if your network can learn to differentiate these. Move on to more samples per class. If unsure, use Xavier or He initialization. Also, your initialization might be leading you to a bad local minimum, so try a different initialization and see if it helps. Maybe you using a particularly bad set of hyperparameters. If feasible, try a grid search. Too much regularization can cause the network to underfit badly. Reduce regularization such as dropout, batch norm, weight/bias L2 regularization, etc. In the excellent “Practical Deep Learning for coders” course, Jeremy Howard advises getting rid of underfitting first. This means you overfit the training data sufficiently, and only then addressing overfitting. Maybe your network needs more time to train before it starts making meaningful predictions. If your loss is steadily decreasing, let it train some more. Some frameworks have layers like Batch Norm, Dropout, and other layers behave differently during training and testing. Switching to the appropriate mode might help your network to predict properly. Your choice of optimizer shouldn’t prevent your network from training unless you have selected particularly bad hyperparameters. However, the proper optimizer for a task can be helpful in getting the most training in the shortest amount of time. The paper which describes the algorithm you are using should specify the optimizer. If not, I tend to use Adam or plain SGD with momentum. Check this excellent post by Sebastian Ruder to learn more about gradient descent optimizers. A low learning rate will cause your model to converge very slowly. A high learning rate will quickly decrease the loss in the beginning but might have a hard time finding a good solution. Play around with your current learning rate by multiplying it by 0.1 or 10. 
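The "overfit a small subset" check is simple enough to keep as a snippet. Here is a minimal PyTorch sketch, with the layer sizes, batch size, and step count as arbitrary assumptions: a tiny model trained on a tiny batch should drive the loss close to zero, and it also shows that nn.CrossEntropyLoss expects raw logits (nn.NLLLoss would expect log-probabilities instead), plus the train()/eval() mode switch mentioned above.

```python
import torch
import torch.nn as nn

# A tiny model and a tiny batch: if this cannot reach near-zero loss,
# something upstream (data, loss, gradients) is broken.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
criterion = nn.CrossEntropyLoss()   # takes raw logits and applies softmax internally
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 20)              # 8 samples, 20 features
y = torch.randint(0, 3, (8,))       # 3 classes

model.train()                       # train() vs eval() changes Dropout/BatchNorm behavior
for step in range(500):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

print("final loss on the tiny batch:", loss.item())  # should be close to 0
```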
Getting a NaN (Non-a-Number) is a much bigger issue when training RNNs (from what I hear). Some approaches to fix it: Did I miss anything? Is anything wrong? Let me know by leaving a reply below. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Entrepreneur / Hacker Machine learning, Deep learning and other types of learning. " Slav Ivanov,2.9K,9,https://blog.slavv.com/picking-a-gpu-for-deep-learning-3d4795c273b9?source=tag_archive---------4----------------,Picking a GPU for Deep Learning – Slav,"Quite a few people have asked me recently about choosing a GPU for Machine Learning. As it stands, success with Deep Learning heavily dependents on having the right hardware to work with. When I was building my personal Deep Learning box, I reviewed all the GPUs on the market. In this article, I’m going to share my insights about choosing the right graphics processor. Also, we’ll go over: Deep Learning (DL) is part of the field of Machine Learning (ML). DL works by approximating a solution to a problem using neural networks. One of the nice properties of about neural networks is that they find patterns in the data (features) by themselves. This is opposed to having to tell your algorithm what to look for, as in the olde times. However, often this means the model starts with a blank state (unless we are transfer learning). To capture the nature of the data from scratch the neural net needs to process a lot of information. There are two ways to do so — with a CPU or a GPU. The main computational module in a computer is the Central Processing Unit (better known as CPU). It is designed to do computation rapidly on a small amount of data. For example, multiplying a few numbers on a CPU is blazingly fast. But it struggles when operating on a large amount of data. E.g., multiplying matrices of tens or hundreds thousand numbers. Behind the scenes, DL is mostly comprised of operations like matrix multiplication. Amusingly, 3D computer games rely on these same operations to render that beautiful landscape you see in Rise of the Tomb Raider. Thus, GPUs were developed to handle lots of parallel computations using thousands of cores. Also, they have a large memory bandwidth to deal with the data for these computations. This makes them the ideal commodity hardware to do DL on. Or at least, until ASICs for Machine Learning like Google’s TPU make their way to market. For me, the most important reason for picking a powerful graphics processor is saving time while prototyping models. If the networks train faster the feedback time will be shorter. Thus, it would be easier for my brain to connect the dots between the assumptions I had for the model and its results. See Tim Dettmers’ answer to “Why are GPUs well-suited to deep learning?” on Quora for a better explanation. Also for an in-depth, albeit slightly outdated GPUs comparison see his article “Which GPU(s) to Get for Deep Learning”. There are main characteristics of a GPU related to DL are: There are two reasons for having multiple GPUs: you want to train several models at once, or you want to do distributed training of a single model. We’ll go over each one. Training several models at once is a great technique to test different prototypes and hyperparameters. It also shortens your feedback cycle and lets you try out many things at once. Distributed training, or training a single network on several video cards is slowly but surely gaining traction. 
Nowadays, there are easy to use approaches to this for Tensorflow and Keras (via Horovod), CNTK and PyTorch. The distributed training libraries offer almost linear speed-ups to the number of cards. For example, with 2 GPUs you get 1.8x faster training. PCIe Lanes (Updated): The caveat to using multiple video cards is that you need to be able to feed them with data. For this purpose, each GPU should have 16 PCIe lanes available for data transfer. Tim Dettmers points out that having 8 PCIe lanes per card should only decrease performance by “0–10%” for two GPUs. For a single card, any desktop processor and chipset like Intel i5 7500 and Asus TUF Z270 will use 16 lanes. However, for two GPUs, you can go 8x/8x lanes or get a processor AND a motherboard that support 32 PCIe lanes. 32 lanes are outside the realm of desktop CPUs. An Intel Xeon with a MSI — X99A SLI PLUS will do the job. For 3 or 4 GPUs, go with 8x lanes per card with a Xeon with 24 to 32 PCIe lanes. To have 16 PCIe lanes available for 3 or 4 GPUs, you need a monstrous processor. Something in the class of or AMD ThreadRipper (64 lanes) with a corresponding motherboard. Also, for more GPUs you need a faster processor and hard disk to be able to feed them data quickly enough, so they don’t sit idle. Nvidia has been focusing on Deep Learning for a while now, and the head start is paying off. Their CUDA toolkit is deeply entrenched. It works with all major DL frameworks — Tensoflow, Pytorch, Caffe, CNTK, etc. As of now, none of these work out of the box with OpenCL (CUDA alternative), which runs on AMD GPUs. I hope support for OpenCL comes soon as there are great inexpensive GPUs from AMD on the market. Also, some AMD cards support half-precision computation which doubles their performance and VRAM size. Currently, if you want to do DL and want to avoid major headaches, choose Nvidia. Your GPU needs a computer around it: Hard Disk: First, you need to read the data off the disk. An SSD is recommended here, but an HDD can work as well. CPU: That data might have to be decoded by the CPU (e.g. jpegs). Fortunately, any mid-range modern processor will do just fine. Motherboard: The data passes via the motherboard to reach the GPU. For a single video card, almost any chipset will work. If you are planning on working with multiple graphic cards, read this section. RAM: It is recommended to have 2 gigabytes of memory for every gigabyte of video card RAM. Having more certainly helps in some situations, like when you want to keep an entire dataset in memory. Power supply: It should provide enough power for the CPU and the GPUs, plus 100 watts extra. You can get all of this for $500 to $1000. Or even less if you buy a used workstation. Here is performance comparison between all cards. Check the individual card profiles below. Notably, the performance of Titan XP and GTX 1080 Ti is very close despite the huge price gap between them. The price comparison reveals that GTX 1080 Ti, GTX 1070 and GTX 1060 have great value for the compute performance they provide. All the cards are in the same league value-wise, except Titan XP. The king of the hill. When every GB of VRAM matters, this card has more than any other on the (consumer) market. It’s only a recommended buy if you know why you want it. For the price of Titan X, you could get two GTX 1080s, which is a lot of power and 16 GBs of VRAM. This card is what I currently use. It’s a great high-end option, with lots of RAM and high throughput. Very good value. I recommend this GPU if you can afford it. 
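As a minimal illustration of multi-GPU training (not the Horovod setup mentioned above; `MyModel` and `loader` are placeholders), PyTorch's built-in DataParallel replicates a model across the visible cards and splits each batch between them:

```python
import torch
import torch.nn as nn

model = MyModel()                                    # placeholder network
if torch.cuda.device_count() > 1:
    print(f"Splitting batches across {torch.cuda.device_count()} GPUs")
    model = nn.DataParallel(model)                   # replicate model, split batches
model = model.cuda()

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for images, labels in loader:                        # placeholder DataLoader
    images, labels = images.cuda(), labels.cuda()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```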
It works great for Computer Vision or Kaggle competitions. Quite capable mid to high-end card. The price was reduced from $700 to $550 when 1080 Ti was introduced. 8 GB is enough for most Computer Vision tasks. People regularly compete on Kaggle with these. The newest card in Nvidia’s lineup. If 1080 is over budget, this will get you the same amount of VRAM (8 GB). Also, 80% of the performance for 80% of the price. Pretty sweet deal. It’s hard to get these nowadays because they are used for cryptocurrency mining. With a considerable amount of VRAM for this price but somewhat slower. If you can get it (or a couple) second-hand at a good price, go for it. It’s quite cheap but 6 GB VRAM is limiting. That’s probably the minimum you want to have if you are doing Computer Vision. It will be okay for NLP and categorical data models. Also available as P106–100 for cryptocurrency mining, but it’s the same card without a display output. The entry-level card which will get you started but not much more. Still, if you are unsure about getting in Deep Learning, this might be a cheap way to get your feet wet. Titan X Pascal It used to be the best consumer GPU Nvidia had to offer. Made obsolete by 1080 Ti, which has the same specs and is 40% cheaper. Tesla GPUsThis includes K40, K80 (which is 2x K40 in one), P100, and others. You might already be using these via Amazon Web Services, Google Cloud Platform, or another cloud provider. In my previous article, I did some benchmarks on GTX 1080 Ti vs. K40. The 1080 performed five times faster than the Tesla card and 2.5x faster than K80. K40 has 12 GB VRAM and K80 a whopping 24 GBs. In theory, the P100 and GTX 1080 Ti should be in the same league performance-wise. However, this cryptocurrency comparison has P100 lagging in every benchmark. It is worth noting that you can do half-precision on P100, effectively doubling the performance and VRAM size. On top of all this, K40 goes for over $2000, K80 for over $3000, and P100 is about $4500. And they get still get eaten alive by a desktop-grade card. Obviously, as it stands, I don’t recommend getting them. All the specs in the world won’t help you if you don’t know what you are looking for. Here are my GPU recommendations depending on your budget: I have over $1000: Get as many GTX 1080 Ti or GTX 1080 as you can. If you have 3 or 4 GPUs running in the same box, beware of issues with feeding them with data. Also keep in mind the airflow in the case and the space on the motherboard. I have $700 to $900: GTX 1080 Ti is highly recommended. If you want to go multi-GPU, get 2x GTX 1070 (if you can find them) or 2x GTX 1070 Ti. Kaggle, here I come! I have $400 to $700: Get the GTX 1080 or GTX 1070 Ti. Maybe 2x GTX 1060 if you really want 2 GPUs. However, know that 6 GB per model can be limiting. I have $300 to $400: GTX 1060 will get you started. Unless you can find a used GTX 1070. I have less than $300: Get GTX 1050 Ti or save for GTX 1060 if you are serious about Deep Learning. Deep Learning has the great promise of transforming many areas of our life. Unfortunately, learning to wield this powerful tool, requires good hardware. Hopefully, I’ve given you some clarity on where to start in this quest. Disclosure: The above are affiliate links, to help me pay for, well, more GPUs. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Entrepreneur / Hacker Machine learning, Deep learning and other types of learning. 
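If you are unsure which card (and how much VRAM) your machine actually exposes to your framework, a quick check like the following, assuming PyTorch, can save some guesswork:

```python
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA-capable GPU found; training will fall back to the CPU.")
```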
" gk_,1.8K,6,https://machinelearnings.co/text-classification-using-neural-networks-f5cd7b8765c6?source=tag_archive---------5----------------,Text Classification using Neural Networks – Machine Learnings,"Understanding how chatbots work is important. A fundamental piece of machinery inside a chat-bot is the text classifier. Let’s look at the inner workings of an artificial neural network (ANN) for text classification. We’ll use 2 layers of neurons (1 hidden layer) and a “bag of words” approach to organizing our training data. Text classification comes in 3 flavors: pattern matching, algorithms, neural nets. While the algorithmic approach using Multinomial Naive Bayes is surprisingly effective, it suffers from 3 fundamental flaws: As with its ‘Naive’ counterpart, this classifier isn’t attempting to understand the meaning of a sentence, it’s trying to classify it. In fact so called “AI chat-bots” do not understand language, but that’s another story. Let’s examine our text classifier one section at a time. We will take the following steps: The code is here, we’re using iPython notebook which is a super productive way of working on data science projects. The code syntax is Python. We begin by importing our natural language toolkit. We need a way to reliably tokenize sentences into words and a way to stem words. And our training data, 12 sentences belonging to 3 classes (‘intents’). We can now organize our data structures for documents, classes and words. Notice that each word is stemmed and lower-cased. Stemming helps the machine equate words like “have” and “having”. We don’t care about case. Our training data is transformed into “bag of words” for each sentence. The above step is a classic in text classification: each training sentence is reduced to an array of 0’s and 1’s against the array of unique words in the corpus. is stemmed: then transformed to input: a 1 for each word in the bag (the ? is ignored) and output: the first class Note that a sentence could be given multiple classes, or none. Make sure the above makes sense and play with the code until you grok it. Next we have our core functions for our 2-layer neural network. If you are new to artificial neural networks, here is how they work. We use numpy because we want our matrix multiplication to be fast. We use a sigmoid function to normalize values and its derivative to measure the error rate. Iterating and adjusting until our error rate is acceptably low. Also below we implement our bag-of-words function, transforming an input sentence into an array of 0’s and 1’s. This matches precisely with our transform for training data, always crucial to get this right. And now we code our neural network training function to create synaptic weights. Don’t get too excited, this is mostly matrix multiplication — from middle-school math class. We are now ready to build our neural network model, we will save this as a json structure to represent our synaptic weights. You should experiment with different ‘alpha’ (gradient descent parameter) and see how it affects the error rate. This parameter helps our error adjustment find the lowest error rate: synapse_0 += alpha * synapse_0_weight_update We use 20 neurons in our hidden layer, you can adjust this easily. These parameters will vary depending on the dimensions and shape of your training data, tune them down to ~10^-3 as a reasonable error rate. The synapse.json file contains all of our synaptic weights, this is our model. 
This classify() function is all that’s needed for the classification once synapse weights have been calculated: ~15 lines of code. The catch: if there’s a change to the training data our model will need to be re-calculated. For a very large dataset this could take a non-insignificant amount of time. We can now generate the probability of a sentence belonging to one (or more) of our classes. This is super fast because it’s dot-product calculation in our previously defined think() function. Experiment with other sentences and different probabilities, you can then add training data and improve/expand the model. Notice the solid predictions with scant training data. Some sentences will produce multiple predictions (above a threshold). You will need to establish the right threshold level for your application. Not all text classification scenarios are the same: some predictive situations require more confidence than others. The last classification shows some internal details: Notice the bag-of-words (bow) for the sentence, 2 words matched our corpus. The neural-net also learns from the 0’s, the non-matching words. A low-probability classification is easily shown by providing a sentence where ‘a’ (common word) is the only match, for example: Here you have a fundamental piece of machinery for building a chat-bot, capable of handling a large # of classes (‘intents’) and suitable for classes with limited or extensive training data (‘patterns’). Adding one or more responses to an intent is trivial. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Philosopher, Entrepreneur, Investor Understand how machine learning and artificial intelligence will change your work & life. " nafrondel,1.7K,5,https://medium.com/@nafrondel/you-requested-someone-with-a-degree-in-this-holds-up-hand-d4bf18e96ff?source=tag_archive---------6----------------,You requested someone with a degree in this? *Holds up hand*,"You requested someone with a degree in this? *Holds up hand* So there are two main schools of Artificial Intelligence — Symbolic and non-symbolic. Symbolic says the best way to make AI is to make an expert AI — e.g. if you want a doctor AI, you feed it medical text books and it answers questions by looking it up in the text book. Non-symbolic says the best way to make AI is to decide that computers are better at understanding in computer, so give the information to the AI and let it turn that in to something it understands. As a bit of an apt aside — consider the Chinese room thought experiment. Imagine you put someone in a room with shelves full of books. The books are filled with symbols and look up tables and the person inside is told “You will be given a sheet of paper with symbols on. Use the books in the room to look up the symbols to write in reply.” Then a person outside the room posts messages in to the room in Mandarin and gets messages back in Mandarin. The person inside the room doesn’t understand Mandarin, the knowledge is all in the books, but to the person outside the room it looks like they understand Mandarin. That is how symbolic AI works. It has no inate knowledge of the subject mater, it just follows instructions. Even if some if those instructions are to update the books. Non-symbolic AI says that it’d be better if the AI wrote the books itself. So looking back at the Chinese Room, this is like teaching the person in the room Mandarin, and the books are their study notes. 
The trouble is, teaching someone Mandarin takes time and effort as we’re starting with a blank slate here. But consider that it takes decades to teach a child their first language, yet it takes only a little more effort to teach them a second language. So back to the AI — once we teach it one language, we want it to be like the child. We want it to be easy for it to learn a second language. This is where Artificial Neural Networks come in. These are our blank slate children. They’re made up of three parts: Inputs, neurones, outputs. The neurones are where the magic happens — they’re modelled on brains. They’re a blob of neurones that can connect up to one another or cut links so they can join one bit of the brain up to another and let a signal go from one place to another. This is what joins the input up to the output. And in the pavlovian way, when something good happens, the brain remembers by strengthening the link between neurones. But just like a baby, these start out pretty much random so all you get out is baby babble. But we don’t want baby babble, we have to teach it how to get from dog to chien, not dog to goobababaa. When teaching the ANN, you give it an input, and if the output is wrong, give it a tap on the nose and the neurones remember “whatever we just did was wrong, don’t do it again” by decreasing the value it has on the links between the neurones that led to the wrong answer and of it gets it right, give it a rub on the head and it does the opposite, it increases the numbers, meaning it’ll be more likely to take that path next time. This means that over time, it’ll join up the input Dog to the output Chien. So how does this explain the article? Well. ANNs work in both directions, we can give it outputs and it’ll give us back inputs by following the path of neurones back in the opposite direction. So by teaching it Dog means Chien, it also knows Chien could mean Dog. That also means we can teach it that Perro means Dog when we’re speaking Spanish. So when we teach it, the fastest way for it to go from Perro to Dog is to follow the same path that took Chien to Dog. Meaning over time it will pull the neurones linking Chien and Dog closer to Perro as well, which links Perro to Chien as well. This three way link in the middle of Perro, Dog and Chien is the language the google AI is creating for itself. Backing up a bit to our imaginary child learning a new language, when they learn their first language (e.g. English), they don’t write an English dictionary in their head, they hear the words and map them to an idea that the words represent. This is why people frequently misquote films, they remember what the quote meant, not what the words were. So when the child learns a second language, they hear Chien as being French, but map it to the idea of dog. Then when they hear Perro they hear it as Spanish but map that to the idea of dog too. This means the child only has to learn about the idea of a dog once, but can then link that idea up to many languages or synonyms for dog. And this is what the Google AI is doing. Instead of thinking if dog=chien, and chien=perro, perro must = dog, it thinks dog=0x3b chien =0x3b perro=0x3b. Where 0x3b is the idea of dog, meaning it can then turn 0x3b in to whichever language you ask for. Tl;Dr: It wasn’t big news because Artificial Neural Networks have been doing this since they were invented in the 40s. And the entire non-symbolic branch of AI is all about having computers invent their own language to understand and learn things. P.S. 
It really is smart enough to warrant that excitement! Most people have no idea how much they rely on AI. From the relatively simple AI that runs their washing machine, to the AI that reads the address hand written on mail and then figures out the best way to deliver it. These are real everyday machines making decisions for us. Even your computer mouse has AI in it to determine what you wanted to point at rather than what you actually pointed at (on a 1080p screen, there are 2 million points you could click on, it’s not by accident that it’s pretty easy to pick the correct one). Mobile phones constantly run AI to decide which phone tower to connect to, while the backbone of the internet is a huge interconnected AI deciding the fastest way to get data from one computer to another. Thinking, decision making AI is in our hands, beneath our feet, in our cars and almost every electronic device we have. The robots have already taken over ;) From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. " Neelabh Pant,2K,11,https://blog.statsbot.co/time-series-prediction-using-recurrent-neural-networks-lstms-807fa6ca7f?source=tag_archive---------7----------------,A Guide For Time Series Prediction Using Recurrent Neural Networks (LSTMs),"The Statsbot team has already published the article about using time series analysis for anomaly detection. Today, we’d like to discuss time series prediction with a long short-term memory model (LSTMs). We asked a data scientist, Neelabh Pant, to tell you about his experience of forecasting exchange rates using recurrent neural networks. As an Indian guy living in the US, I have a constant flow of money from home to me and vice versa. If the USD is stronger in the market, then the Indian rupee (INR) goes down, hence, a person from India buys a dollar for more rupees. If the dollar is weaker, you spend less rupees to buy the same dollar. If one can predict how much a dollar will cost tomorrow, then this can guide one’s decision making and can be very important in minimizing risks and maximizing returns. Looking at the strengths of a neural network, especially a recurrent neural network, I came up with the idea of predicting the exchange rate between the USD and the INR. There are a lot of methods of forecasting exchange rates such as: In this article, we’ll tell you how to predict the future exchange rate behavior using time series analysis and by making use of machine learning with time series. Let us begin by talking about sequence problems. The simplest machine learning problem involving a sequence is a one to one problem. In this case, we have one data input or tensor to the model and the model generates a prediction with the given input. Linear regression, classification, and even image classification with convolutional network fall into this category. We can extend this formulation to allow for the model to make use of the pass values of the input and the output. It is known as the one to many problem. The one to many problem starts like the one to one problem where we have an input to the model and the model generates one output. However, the output of the model is now fed back to the model as a new input. The model now can generate a new output and we can continue like this indefinitely. You can now see why these are known as recurrent neural networks. A recurrent neural network deals with sequence problems because their connections form a directed cycle. 
In other words, they can retain state from one iteration to the next by using their own output as input for the next step. In programming terms this is like running a fixed program with certain inputs and some internal variables. The simplest recurrent neural network can be viewed as a fully connected neural network if we unroll the time axes. In this univariate case only two weights are involved. The weight multiplying the current input xt, which is u, and the weight multiplying the previous output yt-1, which is w. This formula is like the exponential weighted moving average (EWMA) by making its pass values of the output with the current values of the input. One can build a deep recurrent neural network by simply stacking units to one another. A simple recurrent neural network works well only for a short-term memory. We will see that it suffers from a fundamental problem if we have a longer time dependency. As we have talked about, a simple recurrent network suffers from a fundamental problem of not being able to capture long-term dependencies in a sequence. This is a problem because we want our RNNs to analyze text and answer questions, which involves keeping track of long sequences of words. In late ’90s, LSTM was proposed by Sepp Hochreiter and Jurgen Schmidhuber, which is relatively insensitive to gap length over alternatives RNNs, hidden markov models, and other sequence learning methods in numerous applications. This model is organized in cells which include several operations. LSTM has an internal state variable, which is passed from one cell to another and modified by Operation Gates. 1. Forget Gate It is a sigmoid layer that takes the output at t-1 and the current input at time t and concatenates them into a single tensor and applies a linear transformation followed by a sigmoid. Because of the sigmoid, the output of this gate is between 0 and 1. This number is multiplied with the internal state and that is why the gate is called a forget gate. If ft=0 then the previous internal state is completely forgotten, while if ft=1 it will be passed through unaltered. 2. Input Gate The input gate takes the previous output and the new input and passes them through another sigmoid layer. This gate returns a value between 0 and 1. The value of the input gate is multiplied with the output of the candidate layer. This layer applies a hyperbolic tangent to the mix of input and previous output, returning a candidate vector to be added to the internal state. The internal state is updated with this rule: .The previous state is multiplied by the forget gate and then added to the fraction of the new candidate allowed by the output gate. 3. Output Gate This gate controls how much of the internal state is passed to the output and it works in a similar way to the other gates. These three gates described above have independent weights and biases, hence the network will learn how much of the past output to keep, how much of the current input to keep, and how much of the internal state to send out to the output. In a recurrent neural network, you not only give the network the data, but also the state of the network one moment before. For example, if I say “Hey! Something crazy happened to me when I was driving” there is a part of your brain that is flipping a switch that’s saying “Oh, this is a story Neelabh is telling me. It is a story where the main character is Neelabh and something happened on the road.” Now, you carry a little part of that one sentence I just told you. 
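Written out as a minimal numpy sketch (random placeholder weights), the three gates combine as follows; note that the internal state update is c_t = f_t * c_prev + i_t * c_tilde, that is, the forget gate scales the old state and the input gate scales the new candidate:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

hidden, n_in = 4, 3
rng = np.random.default_rng(0)
W_f, W_i, W_c, W_o = (rng.normal(size=(hidden, hidden + n_in)) for _ in range(4))
b_f = b_i = b_c = b_o = np.zeros(hidden)

def lstm_step(x_t, h_prev, c_prev):
    z = np.concatenate([h_prev, x_t])      # previous output concatenated with input
    f_t = sigmoid(W_f @ z + b_f)           # forget gate: how much old state to keep
    i_t = sigmoid(W_i @ z + b_i)           # input gate: how much candidate to add
    c_tilde = np.tanh(W_c @ z + b_c)       # candidate values
    c_t = f_t * c_prev + i_t * c_tilde     # internal state update
    o_t = sigmoid(W_o @ z + b_o)           # output gate: how much state to expose
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

h, c = np.zeros(hidden), np.zeros(hidden)
for x in rng.normal(size=(5, n_in)):       # run the cell over a short sequence
    h, c = lstm_step(x, h, c)
print(h)
```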
As you listen to all my other sentences you have to keep a bit of information from all past sentences around in order to understand the entire story. Another example is video processing, where you would again need a recurrent neural network. What happens in the current frame is heavily dependent upon what was in the last frame of the movie most of the time. Over a period of time, a recurrent neural network tries to learn what to keep and how much to keep from the past, and how much information to keep from the present state, which makes it so powerful as compared to a simple feed forward neural network. I was impressed with the strengths of a recurrent neural network and decided to use them to predict the exchange rate between the USD and the INR. The dataset used in this project is the exchange rate data between January 2, 1980 and August 10, 2017. Later, I’ll give you a link to download this dataset and experiment with it. The dataset displays the value of $1 in rupees. We have a total of 13,730 records starting from January 2, 1980 to August 10, 2017. Over the period, the price to buy $1 in rupees has been rising. One can see that there was a huge dip in the American economy during 2007–2008, which was hugely caused by the great recession during that period. It was a period of general economic decline observed in world markets during the late 2000s and early 2010s. This period was not very good for the world’s developed economies, particularly in North America and Europe (including Russia), which fell into a definitive recession. Many of the newer developed economies suffered far less impact, particularly China and India, whose economies grew substantially during this period. Now, to train the machine we need to divide the dataset into test and training sets. It is very important when you do time series to split train and test with respect to a certain date. So, you don’t want your test data to come before your training data. In our experiment, we will define a date, say January 1, 2010, as our split date. The training data is the data between January 2, 1980 and December 31, 2009, which are about 11,000 training data points. The test dataset is between January 1, 2010 and August 10, 2017, which are about 2,700 points. The next thing to do is normalize the dataset. You only need to fit and transform your training data and just transform your test data. The reason you do that is you don’t want to assume that you know the scale of your test data. Normalizing or transforming the data means that the new scale variables will be between zero and one. A fully Connected Model is a simple neural network model which is built as a simple regression model that will take one input and will spit out one output. This basically takes the price from the previous day and forecasts the price of the next day. As a loss function, we use mean squared error and stochastic gradient descent as an optimizer, which after enough numbers of epochs will try to look for a good local optimum. Below is the summary of the fully connected layer. After training this model for 200 epochs or early_callbacks (whichever came first), the model tries to learn the pattern and the behavior of the data. Since we split the data into training and testing sets we can now predict the value of testing data and compare them with the ground truth. As you can see, the model is not good. It essentially is repeating the previous values and there is a slight shift. 
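A sketch of those preparation steps plus the fully connected baseline (pandas, scikit-learn and Keras assumed; the file name and column names are hypothetical): split by a fixed date, scale using statistics from the training period only, and predict the next day's rate from the previous day's:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense

df = pd.read_csv("usd_inr.csv", parse_dates=["date"]).sort_values("date")
split_date = pd.Timestamp("2010-01-01")
train = df.loc[df["date"] < split_date, "rate"].values.reshape(-1, 1)
test  = df.loc[df["date"] >= split_date, "rate"].values.reshape(-1, 1)

# Fit the scaler on training data only; we must not assume we know
# the scale of the test data.
scaler = MinMaxScaler(feature_range=(0, 1))
train_s, test_s = scaler.fit_transform(train), scaler.transform(test)

# X is today's (scaled) rate, y is tomorrow's.
X_train, y_train = train_s[:-1], train_s[1:]
X_test,  y_test  = test_s[:-1],  test_s[1:]

# Simple regression baseline: one input, one output, MSE loss, SGD optimizer.
baseline = Sequential([Dense(1, input_dim=1)])
baseline.compile(loss="mean_squared_error", optimizer="sgd")
baseline.fit(X_train, y_train, epochs=200, batch_size=32,
             validation_split=0.1, verbose=0)
print("test MSE:", baseline.evaluate(X_test, y_test, verbose=0))
```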
The fully connected model is not able to predict the future from the single previous value. Let us now try using a recurrent neural network and see how well it does. The recurrent model we have used is a one layer sequential model. We used 6 LSTM nodes in the layer to which we gave input of shape (1,1), which is one input given to the network with one value. The last layer is a dense layer where the loss is mean squared error with stochastic gradient descent as an optimizer. We train this model for 200 epochs with early_stopping callback. The summary of the model is shown above. This model has learned to reproduce the yearly shape of the data and doesn’t have the lag it used to have with a simple feed forward neural network. It is still underestimating some observations by certain amounts and there is definitely room for improvement in this model. There can be a lot of changes to be made in this model to make it better. One can always try to change the configuration by changing the optimizer. Another important change I see is by using the Sliding Time Window method, which comes from the field of stream data management system. This approach comes from the idea that only the most recent data are important. One can show the model data from a year and try to make a prediction for the first day of the next year. Sliding time window methods are very useful in terms of fetching important patterns in the dataset that are highly dependent on the past bulk of observations. Try to make changes to this model as you like and see how the model reacts to those changes. I made the dataset available on my github account under deep learning in python repository. Feel free to download the dataset and play with it. I personally follow some of my favorite data scientists like Kirill Eremenko, Jose Portilla, Dan Van Boxel (better known as Dan Does Data), and many more. Most of them are available on different podcast stations where they talk about different current subjects like RNN, Convolutional Neural Networks, LSTM, and even the most recent technology, Neural Turing Machine. Try to keep up with the news of different artificial intelligence conferences. By the way, if you are interested, then Kirill Eremenko is coming to San Diego this November with his amazing team to give talks on Machine Learning, Neural Networks, and Data Science. LSTM models are powerful enough to learn the most important past behaviors and understand whether or not those past behaviors are important features in making future predictions. There are several applications where LSTMs are highly used. Applications like speech recognition, music composition, handwriting recognition, and even in my current research of human mobility and travel predictions. According to me, LSTM is like a model which has its own memory and which can behave like an intelligent human in making decisions. Thank you again and happy machine learning! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. I love Data Science. Let’s build some intelligent bots together! ;) Data stories on machine learning and analytics. From Statsbot’s makers. " Eugenio Culurciello,2.2K,15,https://towardsdatascience.com/neural-network-architectures-156e5bad51ba?source=tag_archive---------8----------------,Neural Network Architectures – Towards Data Science,"Deep neural networks and Deep Learning are powerful and popular algorithms. And a lot of their success lays in the careful design of the neural network architecture. 
I wanted to revisit the history of neural network design in the last few years and in the context of Deep Learning. For a more in-depth analysis and comparison of all the networks reported here, please see our recent article. One representative figure from this article is here: Reporting top-1 one-crop accuracy versus amount of operations required for a single forward pass in multiple popular neural network architectures. It is the year 1994, and this is one of the very first convolutional neural networks, and what propelled the field of Deep Learning. This pioneering work by Yann LeCun was named LeNet5 after many previous successful iterations since the year 1988! The LeNet5 architecture was fundamental, in particular the insight that image features are distributed across the entire image, and convolutions with learnable parameters are an effective way to extract similar features at multiple location with few parameters. At the time there was no GPU to help training, and even CPUs were slow. Therefore being able to save parameters and computation was a key advantage. This is in contrast to using each pixel as a separate input of a large multi-layer neural network. LeNet5 explained that those should not be used in the first layer, because images are highly spatially correlated, and using individual pixel of the image as separate input features would not take advantage of these correlations. LeNet5 features can be summarized as: In overall this network was the origin of much of the recent architectures, and a true inspiration for many people in the field. In the years from 1998 to 2010 neural network were in incubation. Most people did not notice their increasing power, while many other researchers slowly progressed. More and more data was available because of the rise of cell-phone cameras and cheap digital cameras. And computing power was on the rise, CPUs were becoming faster, and GPUs became a general-purpose computing tool. Both of these trends made neural network progress, albeit at a slow rate. Both data and computing power made the tasks that neural networks tackled more and more interesting. And then it became clear... In 2010 Dan Claudiu Ciresan and Jurgen Schmidhuber published one of the very fist implementations of GPU Neural nets. This implementation had both forward and backward implemented on a a NVIDIA GTX 280 graphic processor of an up to 9 layers neural network. In 2012, Alex Krizhevsky released AlexNet which was a deeper and much wider version of the LeNet and won by a large margin the difficult ImageNet competition. AlexNet scaled the insights of LeNet into a much larger neural network that could be used to learn much more complex objects and object hierarchies. The contribution of this work were: At the time GPU offered a much larger number of cores than CPUs, and allowed 10x faster training time, which in turn allowed to use larger datasets and also bigger images. The success of AlexNet started a small revolution. Convolutional neural network were now the workhorse of Deep Learning, which became the new name for “large neural networks that can now solve useful tasks”. In December 2013 the NYU lab from Yann LeCun came up with Overfeat, which is a derivative of AlexNet. The article also proposed learning bounding boxes, which later gave rise to many other papers on the same topic. I believe it is better to learn to segment objects rather than learn artificial bounding boxes. 
The VGG networks from Oxford were the first to use much smaller 3×3 filters in each convolutional layers and also combined them as a sequence of convolutions. This seems to be contrary to the principles of LeNet, where large convolutions were used to capture similar features in an image. Instead of the 9×9 or 11×11 filters of AlexNet, filters started to become smaller, too dangerously close to the infamous 1×1 convolutions that LeNet wanted to avoid, at least on the first layers of the network. But the great advantage of VGG was the insight that multiple 3×3 convolution in sequence can emulate the effect of larger receptive fields, for examples 5×5 and 7×7. These ideas will be also used in more recent network architectures as Inception and ResNet. The VGG networks uses multiple 3x3 convolutional layers to represent complex features. Notice blocks 3, 4, 5 of VGG-E: 256×256 and 512×512 3×3 filters are used multiple times in sequence to extract more complex features and the combination of such features. This is effectively like having large 512×512 classifiers with 3 layers, which are convolutional! This obviously amounts to a massive number of parameters, and also learning power. But training of these network was difficult, and had to be split into smaller networks with layers added one by one. All this because of the lack of strong ways to regularize the model, or to somehow restrict the massive search space promoted by the large amount of parameters. VGG used large feature sizes in many layers and thus inference was quite costly at run-time. Reducing the number of features, as done in Inception bottlenecks, will save some of the computational cost. Network-in-network (NiN) had the great and simple insight of using 1x1 convolutions to provide more combinational power to the features of a convolutional layers. The NiN architecture used spatial MLP layers after each convolution, in order to better combine features before another layer. Again one can think the 1x1 convolutions are against the original principles of LeNet, but really they instead help to combine convolutional features in a better way, which is not possible by simply stacking more convolutional layers. This is different from using raw pixels as input to the next layer. Here 1×1 convolution are used to spatially combine features across features maps after convolution, so they effectively use very few parameters, shared across all pixels of these features! The power of MLP can greatly increase the effectiveness of individual convolutional features by combining them into more complex groups. This idea will be later used in most recent architectures as ResNet and Inception and derivatives. NiN also used an average pooling layer as part of the last classifier, another practice that will become common. This was done to average the response of the network to multiple are of the input image before classification. Christian Szegedy from Google begun a quest aimed at reducing the computational burden of deep neural networks, and devised the GoogLeNet the first Inception architecture. By now, Fall 2014, deep learning models were becoming extermely useful in categorizing the content of images and video frames. Most skeptics had given in that Deep Learning and neural nets came back to stay this time. Given the usefulness of these techniques, the internet giants like Google were very interested in efficient and large deployments of architectures on their server farms. 
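A small PyTorch sketch of both ideas: two stacked 3x3 convolutions cover a 5x5 receptive field with fewer parameters than a single 5x5 convolution, while a 1x1 convolution recombines features across channels at each position:

```python
import torch
import torch.nn as nn

vgg_style_block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
)   # effective receptive field of 5x5

nin_mixer = nn.Conv2d(64, 64, kernel_size=1)   # per-pixel feature recombination

# Parameter comparison against a single 5x5 convolution of the same width.
single_5x5 = nn.Conv2d(64, 64, kernel_size=5, padding=2)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(vgg_style_block), "vs", count(single_5x5))   # ~74k vs ~102k

x = torch.randn(1, 64, 32, 32)
print(vgg_style_block(x).shape, nin_mixer(x).shape)
```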
Christian thought a lot about ways to reduce the computational burden of deep neural nets while obtaining state-of-art performance (on ImageNet, for example). Or be able to keep the computational cost the same, while offering improved performance. He and his team came up with the Inception module: which at a first glance is basically the parallel combination of 1×1, 3×3, and 5×5 convolutional filters. But the great insight of the inception module was the use of 1×1 convolutional blocks (NiN) to reduce the number of features before the expensive parallel blocks. This is commonly referred as “bottleneck”. This deserves its own section to explain: see “bottleneck layer” section below. GoogLeNet used a stem without inception modules as initial layers, and an average pooling plus softmax classifier similar to NiN. This classifier is also extremely low number of operations, compared to the ones of AlexNet and VGG. This also contributed to a very efficient network design. Inspired by NiN, the bottleneck layer of Inception was reducing the number of features, and thus operations, at each layer, so the inference time could be kept low. Before passing data to the expensive convolution modules, the number of features was reduce by, say, 4 times. This led to large savings in computational cost, and the success of this architecture. Let’s examine this in detail. Let’s say you have 256 features coming in, and 256 coming out, and let’s say the Inception layer only performs 3x3 convolutions. That is 256x256 x 3x3 convolutions that have to be performed (589,000s multiply-accumulate, or MAC operations). That may be more than the computational budget we have, say, to run this layer in 0.5 milli-seconds on a Google Server. Instead of doing this, we decide to reduce the number of features that will have to be convolved, say to 64 or 256/4. In this case, we first perform 256 -> 64 1×1 convolutions, then 64 convolution on all Inception branches, and then we use again a 1x1 convolution from 64 -> 256 features back again. The operations are now: For a total of about 70,000 versus the almost 600,000 we had before. Almost 10x less operations! And although we are doing less operations, we are not losing generality in this layer. In fact the bottleneck layers have been proven to perform at state-of-art on the ImageNet dataset, for example, and will be also used in later architectures such as ResNet. The reason for the success is that the input features are correlated, and thus redundancy can be removed by combining them appropriately with the 1x1 convolutions. Then, after convolution with a smaller number of features, they can be expanded again into meaningful combination for the next layer. Christian and his team are very efficient researchers. In February 2015 Batch-normalized Inception was introduced as Inception V2. Batch-normalization computes the mean and standard-deviation of all feature maps at the output of a layer, and normalizes their responses with these values. This corresponds to “whitening” the data, and thus making all the neural maps have responses in the same range, and with zero mean. This helps training as the next layer does not have to learn offsets in the input data, and can focus on how to best combine features. In December 2015 they released a new version of the Inception modules and the corresponding architecture This article better explains the original GoogLeNet architecture, giving a lot more detail on the design choices. 
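The bottleneck arithmetic above, written out per output position (the spatial dimensions are common to both variants and are left out):

```python
direct = 256 * 256 * 3 * 3        # plain 3x3 convolution on 256 features
print(direct)                     # 589,824  ("~589,000 MACs")

bottleneck = (
    256 * 64 * 1 * 1 +            # 1x1: reduce 256 -> 64 features
    64 * 64 * 3 * 3 +             # 3x3 on the reduced features
    64 * 256 * 1 * 1              # 1x1: expand 64 -> 256 features
)
print(bottleneck)                 # 69,632   ("about 70,000")
print(direct / bottleneck)        # roughly 8.5x fewer operations
```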
A list of the original ideas are: Inception still uses a pooling layer plus softmax as final classifier. The revolution then came in December 2015, at about the same time as Inception v3. ResNet have a simple ideas: feed the output of two successive convolutional layer AND also bypass the input to the next layers! This is similar to older ideas like this one. But here they bypass TWO layers and are applied to large scales. Bypassing after 2 layers is a key intuition, as bypassing a single layer did not give much improvements. By 2 layers can be thought as a small classifier, or a Network-In-Network! This is also the very first time that a network of > hundred, even 1000 layers was trained. ResNet with a large number of layers started to use a bottleneck layer similar to the Inception bottleneck: This layer reduces the number of features at each layer by first using a 1x1 convolution with a smaller output (usually 1/4 of the input), and then a 3x3 layer, and then again a 1x1 convolution to a larger number of features. Like in the case of Inception modules, this allows to keep the computation low, while providing rich combination of features. See “bottleneck layer” section after “GoogLeNet and Inception”. ResNet uses a fairly simple initial layers at the input (stem): a 7x7 conv layer followed with a pool of 2. Contrast this to more complex and less intuitive stems as in Inception V3, V4. ResNet also uses a pooling layer plus softmax as final classifier. Additional insights about the ResNet architecture are appearing every day: And Christian and team are at it again with a new version of Inception. The Inception module after the stem is rather similar to Inception V3: They also combined the Inception module with the ResNet module: This time though the solution is, in my opinion, less elegant and more complex, but also full of less transparent heuristics. It is hard to understand the choices and it is also hard for the authors to justify them. In this regard the prize for a clean and simple network that can be easily understood and modified now goes to ResNet. SqueezeNet has been recently released. It is a re-hash of many concepts from ResNet and Inception, and show that after all, a better design of architecture will deliver small network sizes and parameters without needing complex compression algorithms. Our team set up to combine all the features of the recent architectures into a very efficient and light-weight network that uses very few parameters and computation to achieve state-of-the-art results. This network architecture is dubbed ENet, and was designed by Adam Paszke. We have used it to perform pixel-wise labeling and scene-parsing. Here are some videos of ENet in action. These videos are not part of the training dataset. The technical report on ENet is available here. ENet is a encoder plus decoder network. The encoder is a regular CNN design for categorization, while the decoder is a upsampling network designed to propagate the categories back into the original image size for segmentation. This worked used only neural networks, and no other algorithm to perform image segmentation. As you can see in this figure ENet has the highest accuracy per parameter used of any neural network out there! ENet was designed to use the minimum number of resources possible from the start. As such it achieves such a small footprint that both encoder and decoder network together only occupies 0.7 MB with fp16 precision. 
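A PyTorch sketch of such a bottleneck residual block (the batch norm placement here is a common convention, not a claim about the exact paper architecture): reduce to a quarter of the features with a 1x1 convolution, convolve 3x3 on the reduced features, expand back with another 1x1, and add the bypassed input:

```python
import torch
import torch.nn as nn

class BottleneckBlock(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        mid = channels // reduction
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)   # the skip connection bypasses the block

x = torch.randn(1, 256, 28, 28)
print(BottleneckBlock(256)(x).shape)          # torch.Size([1, 256, 28, 28])
```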
Even at this small size, ENet is similar or above other pure neural network solutions in accuracy of segmentation. A systematic evaluation of CNN modules has been presented. The found out that is advantageous to use: • use ELU non-linearity without batchnorm or ReLU with it. • apply a learned colorspace transformation of RGB. • use the linear learning rate decay policy. • use a sum of the average and max pooling layers. • use mini-batch size around 128 or 256. If this is too big for your GPU, decrease the learning rate proportionally to the batch size. • use fully-connected layers as convolutional and average the predictions for the final decision. • when investing in increasing training set size, check if a plateau has not been reach. • cleanliness of the data is more important then the size. • if you cannot increase the input image size, reduce the stride in the con- sequent layers, it has roughly the same effect. • if your network has a complex and highly optimized architecture, like e.g. GoogLeNet, be careful with modifications. Xception improves on the inception module and architecture with a simple and more elegant architecture that is as effective as ResNet and Inception V4. The Xception module is presented here: This network can be anyone’s favorite given the simplicity and elegance of the architecture, presented here: The architecture has 36 convolutional stages, making it close in similarity to a ResNet-34. But the model and code is as simple as ResNet and much more comprehensible than Inception V4. A Torch7 implementation of this network is available here An implementation in Keras/TF is availble here. It is interesting to note that the recent Xception architecture was also inspired by our work on separable convolutional filters. A new MobileNets architecture is also available since April 2017. This architecture uses separable convolutions to reduce the number of parameters. The separate convolution is the same as Xception above. Now the claim of the paper is that there is a great reduction in parameters — about 1/2 in case of FaceNet, as reported in the paper. Here is the complete model architecture: Unfortunately, we have tested this network in actual application and found it to be abysmally slow on a batch of 1 on a Titan Xp GPU. Look at a comparison here of inference time per image: Clearly this is not a contender in fast inference! It may reduce the parameters and size of network on disk, but is not usable. FractalNet uses a recursive architecture, that was not tested on ImageNet, and is a derivative or the more general ResNet. We believe that crafting neural network architectures is of paramount importance for the progress of the Deep Learning field. Our group highly recommends reading carefully and understanding all the papers in this post. But one could now wonder why we have to spend so much time in crafting architectures, and why instead we do not use data to tell us what to use, and how to combine modules. This would be nice, but now it is work in progress. Some initial interesting results are here. Note also that here we mostly talked about architectures for computer vision. Similarly neural network architectures developed in other areas, and it is interesting to study the evolution of architectures for all other tasks also. If you are interested in a comparison of neural network architecture and computational performance, see our recent paper. 
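A PyTorch sketch of the separable convolution idea behind Xception and MobileNets discussed above: a depthwise 3x3 convolution (one filter per input channel) followed by a 1x1 pointwise convolution, compared against a standard 3x3 convolution of the same width:

```python
import torch
import torch.nn as nn

in_ch, out_ch = 128, 256

separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),  # depthwise
    nn.Conv2d(in_ch, out_ch, kernel_size=1),                          # pointwise
)
standard = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(separable), "vs", count(standard))   # ~34k vs ~295k parameters

x = torch.randn(1, in_ch, 56, 56)
print(separable(x).shape, standard(x).shape)     # same output shape
```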
This post was inspired by discussions with Abhishek Chaurasia, Adam Paszke, Sangpil Kim, Alfredo Canziani and others in our e-Lab at Purdue University. I have almost 20 years of experience in neural networks in both hardware and software (a rare combination). See about me here: Medium, webpage, Scholar, LinkedIn, and more... If you found this article useful, please consider a donation to support more tutorials and blogs. Any contribution can make a difference! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. I dream and build new technology Sharing concepts, ideas, and codes. " Gary Marcus,1.3K,27,https://medium.com/@GaryMarcus/in-defense-of-skepticism-about-deep-learning-6e8bfd5ae0f1?source=tag_archive---------0----------------,In defense of skepticism about deep learning – Gary Marcus – Medium,"In a recent appraisal of deep learning (Marcus, 2018) I outlined ten challenges for deep learning, and suggested that deep learning by itself, although useful, was unlikely to lead on its own to artificial general intelligence. I suggested instead the deep learning be viewed “not as a universal solvent, but simply as one tool among many.” In place of pure deep learning, I called for hybrid models, that would incorporate not just supervised forms of deep learning, but also other techniques as well, such as symbol-manipulation, and unsupervised learning (itself possibly reconceptualized). I also urged the community to consider incorporating more innate structure into AI systems. Within a few days, thousands of people had weighed in over Twitter, some enthusiastic (“e.g, the best discussion of #DeepLearning and #AI I’ve read in many years”), some not (“Thoughtful... But mostly wrong nevertheless”). Because I think clarity around these issues is so important, I’ve compiled a list of fourteen commonly-asked queries. Where does unsupervised learning fit in? Why didn’t I say more nice things about deep learning? What gives me the right to talk about this stuff in the first place? What’s up with asking a neural network to generalize from even numbers to odd numbers? (Hint: that’s the most important one). And lots more. I haven’t addressed literally every question I have seen, but I have tried to be representative. 1. What is general intelligence? Thomas Dietterich, an eminent professor of machine learning, and my most thorough and explicit critic thus far, gave a nice answer that I am very comfortable with: 2. Marcus wasn’t very nice to deep learning. He should have said more nice things about all of its vast accomplishments. And he minimizes others. Dietterich, mentioned above, made both of these points, writing: On the first part of that, true, I could have said more positive things. But it’s not like I didn’t say any. Or even like I forgot to mention Dietterich’s best example; I mentioned it on the first page: More generally, later in the article I cited a couple of great texts and excellent blogs that have pointers to numerous examples. A lot of them though, would not really count as AGI, which was the main focus of my paper. (Google Translate, for example, is extremely impressive, but it’s not general; it can’t, for example, answer questions about what it has translated, the way a human translator could.) The second part is more substantive. Is 1,000 categories really very finite? Well, yes, compared to the flexibility of cognition. 
Cognitive scientists generally place the number of atomic concepts known by an individual as being on the order of 50,000, and we can easily compose those into a vastly greater number of complex thoughts. Pets and fish are probably counted in those 50,000; pet fish, which is something different, probably isn’t counted. And I can easily entertain the concept of “a pet fish that is suffering from Ick”, or note that “it is always disappointing to buy a pet fish only to discover that it was infected with Ick” (an experience that I had as a child and evidently still resent). How many ideas like that I can express? It’s a lot more than 1,000. I am not precisely sure how many visual categories a person can recognize, but suspect the math is roughly similar. Try google images on “pet fish”, and you do ok; try it on “pet fish wearing goggles” and you mostly find dogs wearing goggles, with a false alarm rate of over 80%. Machines win over nonexpert humans on distinguishing similar dog breeds, but people win, by a wide margin, on interpreting complex scenes, like what would happen to a skydiver who was wearing a backpack rather than a parachute. In focusing on 1,000 category chunks the machine learning field is, in my view, doing itself a disservice, trading a short-term feeling of success for a denial of harder, more open-ended problems (like scene and sentence comprehension) that must eventually be addressed. Compared to the essentially infinite range of sentences and scenes we can see and comprehend, 1000 of anything really is small. [See also Note 2 at bottom] 3. Marcus says deep learning is useless, but it’s great for many things Of course it is useful; I never said otherwise, only that (a) in its current supervised form, deep learning might be approaching its limits and (b) that those limits would stop short from full artificial general intelligence — unless, maybe, we started incorporating a bunch of other stuff like symbol-manipulation and innateness. The core of my conclusion was this: 4. “One thing that I don’t understand. — @GaryMarcus says that DL is not good for hierarchical structures. But in @ylecun nature review paper [says that] that DL is particularly suited for exploiting such hierarchies.” This is an astute question, from Ram Shankar, and I should have been a LOT clearer about the answer: there are many different types of hierarchy one could think about. Deep learning is really good, probably the best ever, at the sort of feature-wise hierarchy LeCun talked about, which I typically refer to as hierarchical feature detection; you build lines out of pixels, letters out of lines, words out of letters and so forth. Kurzweil and Hawkins have emphasized this sort of thing, too, and it really goes back to Hubel and Wiesel (1959)in neuroscience experiments and to Fukushima. (Fukushima, Miyake, & Ito, 1983) in AI. Fukushima, in his Neocognitron model, hand-wired his hierarchy of successively more abstract features; LeCun and many others after showed that (at least in some cases) you don’t have to hand engineer them. But you don’t have to keep track of the subcomponents you encounter along the way; the top-level system need not explicitly encode the structure of the overall output in terms of which parts were seen along the way; this is part of why a deep learning system can be fooled into thinking a pattern of a black and yellow stripes is a school bus. (Nguyen, Yosinski, & Clune, 2014). 
That stripe pattern is strongly correlated with activation of the school bus output units, which is in turn correlated with a bunch of lower-level features, but in a typical image-recognition deep network, there is no fully-realized representation of a school bus as being made up of wheels, a chassis, windows, etc. Virtually the whole spoofing literature can be thought of in these terms. [Note 3] The structural sense of hierarchy which I was discussing was different, and focused around systems that can make explicit reference to the parts of larger wholes. The classic illustration would be Chomsky’s sense of hierarchy, in which a sentence is composed of increasingly complex grammatical units (e.g., using a novel phrase like the man who mistook his hamburger for a hot dog with a larger sentence like The actress insisted that she would not be outdone by the man who mistook his hamburger for a hot dog). I don’t think deep learning does well here (e.g., in discerning the relation between the actress, the man, and the misidentified hot dog), though attempts have certainly been made. Even in vision, the problem is not entirely licked; Hinton’s recent capsule work (Sabour, Frosst, & Hinton, 2017), for example, is an attempt to build in more robust part-whole directions for image recognition, by using more structured networks. I see this as a good trend, and one potential way to begin to address the spoofing problem, but also as a reflection of trouble with the standard deep learning approach. 5. “It’s weird to discuss deep learning in [the] context of general AI. General AI is not the goal of deep learning!” Best twitter response to this came from University of Quebec professor Daniel Lemire: “Oh! Come on! Hinton, Bengio... are openly going for a model of human intelligence.” Second prize goes to a math PhD at Google, Jeremy Kun, who countered the dubious claim that “General AI is not the goal of deep learning” with “If that’s true, then deep learning experts sure let everyone believe it is without correcting them.” Andrew Ng’s recent Harvard Business Review article, which I cited, implies that deep learning can do anything a person can do in a second. Thomas Dietterich’s tweet that said in part “it is hard to argue that there are limits to DL”. Jeremy Howard worried that the idea that deep learning is overhyped might itself be overhyped, and then suggested that every known limit had been countered. DeepMind’s recent AlphaGo paper [See Note 4] is positioned somewhat similarly, with Silver et al (Silver et al., 2017) enthusiastically reporting that: In that paper’s concluding discussion, not one of the 10 challenges to deep learning that I reviewed was mentioned. (As I will discuss in a paper coming out soon, it’s not actually a pure deep learning system, but that’s a story for another day.) The main reason people keep benchmarking their AI systems against humans is precisely because AGI is the goal. 6. What Marcus said is a problem with supervised learning, not deep learning. Yann LeCun presented a version of this, in a comment on my Facebook page: The part about my allegedly not recognizing LeCun’s recent work is, well, odd. It’s true that I couldn’t find a good summary article to cite (when I asked LeCun, he told me by email that there wasn’t one yet) but I did mention his interest explicitly: I also noted that: My conclusion was positive, too. 
Although I expressed reservations about current approaches to building unsupervised systems, I ended optimistically: What LeCun’s remark does get right is that many of the problems I addressed are a general problem with supervised learning, not something unique to deep learning; I could have been more clear about this. Many other supervised learning techniques face similar challenges, such as problems in generalization and dependence on massive data sets; relatively little of what I said is unique to deep learning. In my focus on assessing deep learning at the five year resurgence mark, I neglected to say that. But it doesn’t really help deep learning that other supervised learning techniques are in the same boat. If someone could come up with a truly impressive way of using deep learning in an unsupervised way, a reassessment might be required. But I don’t see that unsupervised learning, at least as it currently pursued, particularly remedies the challenges I raised, e.g., with respect to reasoning, hierarchical representations, transfer, robustness, and interpretability. It’s simply a promissory note. [Note 5] As Portland State and Santa Fe Institute Professor Melanie Mitchell’s put it in a thus far unanswered tweet: I would, too. In the meantime, I see no principled reason to believe that unsupervised learning can solve the problems I raise, unless we add in more abstract, symbolic representations, first. 7. Deep learning is not just convolutional networks [of the sort Marcus critiqued], it’s “essentially a new style of programming — ”differentiable programming” — and the field is trying to work out the reusable constructs in this style. We have some: convolution, pooling, LSTM, GAN, VAE, memory units, routing units, etc” — Tom Dietterich This seemed (in the context of Dietterich’s longer series of tweets) to have been proposed as a criticism, but I am puzzled by that, as I am a fan of differentiable programming and said so. Perhaps the point was that deep learning can be taken in a broader way. In any event, I would not equate deep learning and differentiable programming (e.g., approaches that I cited like neural Turing machines and neural programming). Deep learning is a component of many differentiable systems. But such systems also build in exactly the sort of elements drawn from symbol-manipulation that I am and have been urging the field to integrate (Marcus, 2001; Marcus, Marblestone, & Dean, 2014a; Marcus, Marblestone, & Dean, 2014b), including memory units and operations over variables, and other systems like routing units stressed in the more recent two essays. If integrating all this stuff into deep learning is what gets us to AGI, my conclusion, quoted below, will have turned out to be dead on: 8. Now vs the future. Maybe deep learning doesn’t work now, but it’s offspring will get us to AGI. Possibly. I do think that deep learning might play an important role in getting us to AGI, if some key things (many not yet discovered) are added in first. But what we add matters, and whether it is reasonable to call some future system an instance of deep learning per se, or more sensible to call the ultimate system “a such-and-such that uses deep learning”, depends on where deep learning fits into the ultimate solution. Maybe, for example, in truly adequate natural language understanding systems, symbol-manipulation will play an equally large role as deep learning, or an even larger one. Part of the issue here is of course terminological. 
A very good friend recently asked me, why can’t we just call anything that includes deep learning, deep learning, even if it includes symbol-manipulation? Some enhancement to deep learning ought to work. To which I respond: why not call anything that includes symbol-manipulation, symbol-manipulation, even if it includes deep learning? Gradient-based optimization should get its due, but so should symbol-manipulation, which as yet is the only known tool for systematically representing and achieving high-level abstraction, bedrock to virtually all of the world’s complex computer systems, from spreadsheets to programming environments to operating systems. Eventually, I conjecture, credit will also be due to the inevitable marriage between the two, hybrid systems that bring together the two great ideas of 20th century AI, symbol-processing and neural networks, both initially developed in the 1950s. Other new tools yet to be invented may be critical as well. To a true acolyte of deep learning, anything is deep learning, no matter what it’s incorporating, and no matter how different it might be from current techniques. (Viva Imperialism!) If you replaced every transistor in a classic symbolic microprocessor with a neuron, but kept the chip’s logic entirely unchanged, a true deep learning acolyte would still declare victory. But we won’t understand the principles driving (eventual) success if we lump everything together. [Note 6] 9. No machine can extrapolate. It’s not fair to expect a neural network to generalize from even numbers to odd numbers. Here’s a function, expressed over binary digits. f(110) = 011; f(100) = 001; f(010) = 010. What’s f(111)? If you are an ordinary human, you are probably going to guess 111. If you are neural network of the sort I discussed, you probably won’t. If you have been told many times that hidden layers in neural networks “abstract functions”, you should be a little bit surprised by this. If you are a human, you might think of the function as something like “reversal”, easily expressed in a line of computer code. If you are a neural network of a certain sort, it’s very hard to learn the abstraction of reversal in a way that extends from evens in that context to odds. But is that impossible? Certainly not if you have a prior notion of an integer. Try another, this time in decimal: f(4) = 8; f(6) = 12. What’s f(5)? None of my human readers would care that questions happens to require you to extrapolate from even numbers to odds; a lot of neural networks would be flummoxed. Sure, the function is undetermined by the sparse number of examples, like all functions, but it is interesting and important that most people would (amid the infinite range of a priori possible inductions), would alight on f(5)=10. And just as interesting that most standard multilayer perceptrons, representing the numbers as binary digits, wouldn’t. That’s telling us something, but many people in the neural network community, François Chollet being one very salient exception, don’t want to listen. Importantly, recognizing that a rule applies to any integer is roughly the same kind of generalization that allows one to recognize that a novel noun that can be used in one context can be used in a huge variety of other contexts. From the first time I hear the word blicket used as an object, I can guess that it will fit into a wide range of frames, like I thought I saw a blicket, I had a close encounter with a blicket, and exceptionally large blickets frighten me, etc. 
And I can both generate and interpret such sentences, without specific further training. It doesn’t matter whether blicket is or is not similar in (for example) phonology to other words I have heard, nor whether I pile on the adjectives or use the word as a subject or an object. If most machine learning [ML] paradigms have a problem with this, we should have a problem with most ML paradigms. Am I being “fair”? Well, yes, and no. It’s true that I am asking neural networks to do something that violates their assumptions. A neural network advocate might, for example, say, “hey wait a minute, in your reversal example, there are three dimensions in your input space, representing the left binary digit, the middle binary digit, and the rightmost binary digit. The rightmost binary digit has only been a zero in the training; there is no way a network can know what to do when you get to a one in that position.” Vincent Lostanlen, a postdoc at Cornell, for example, said as much, and Dietterich made essentially the same point, more concisely. But although both are right about why odds-and-evens are (in this context) hard for deep learning, they are both wrong about the larger issues, for three reasons. First, it can’t be that people can’t extrapolate. You just did, in two different examples, at the top of this section. Paraphrasing Chico Marx: who are you going to believe, me or your own eyes? To someone immersed deeply — perhaps too deeply — in contemporary machine learning, my odds-and-evens problem seems unfair because a certain dimension (the one which contains the value of 1 in the rightmost digit) hasn’t been illustrated in the training regime. But when you, a human, look at my examples above, you will not be stymied by this particular gap in the training data. You won’t even notice it, because your attention is on higher-level regularities. People routinely extrapolate in exactly the fashion that I have been describing, like recognizing string reversal from the three training examples I gave above. In a technical sense, that is extrapolation, and you just did it. In The Algebraic Mind I referred to this specific kind of extrapolation as generalizing universally quantified one-to-one mappings outside of a space of training examples. As a field we desperately need a solution to this challenge, if we are ever to catch up to human learning — even if it means shaking up our assumptions. Now, it might reasonably be objected that it’s not a fair fight: humans manifestly depend on prior knowledge when they generalize such mappings. (In some sense, Dietterich proposed this objection later in his tweet stream.) True enough. But in a way, that’s the point: neural networks of a certain sort don’t have a good way of incorporating the right sort of prior knowledge in the first place. It is precisely because those networks don’t have a way of incorporating prior knowledge like “many generalizations hold for all elements of unbounded classes” or “odd numbers leave a remainder of one when divided by two” that neural networks that lack operations over variables fail. What is missing is the right sort of prior knowledge that would allow neural networks to acquire and represent universally quantified one-to-one mappings. Standard neural networks can’t represent such mappings, except in certain limited ways. (Convolution is a way of building in one particular such mapping, prior to learning).
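To make the point concrete, here is a minimal sketch (my own illustration, not code from Marcus) of the two extrapolations above. A small multilayer perceptron fit on the three “reversal” examples never sees a 1 in the rightmost input bit, so it has no reason to put a 1 in the leftmost output bit of f(111); a model that represents the rule as an operation over a variable, here just a linear fit over integers, recovers f(5) = 10 without trouble. The sklearn model choice and hyperparameters are my assumptions.

```python
# Illustration only (not from the article): the two extrapolation examples above.
import numpy as np
from sklearn.neural_network import MLPRegressor

# 1) Bitwise "reversal": in all three training cases the rightmost input bit is 0,
#    so the leftmost target bit is always 0 as well.
X = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
Y = np.array([[0, 1, 1], [0, 0, 1], [0, 1, 0]], dtype=float)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, Y)
print(mlp.predict([[1, 1, 1]]))  # a person answers 111; the net rarely puts a 1 first

# 2) f(4) = 8, f(6) = 12: a rule expressed over a variable n (here a plain linear
#    fit) extrapolates to the odd case immediately.
slope, intercept = np.polyfit([4, 6], [8, 12], deg=1)
print(slope * 5 + intercept)  # -> 10.0
```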
Second, saying that no current system (deep learning or otherwise) can extrapolate in the way that I have described is no excuse; once again other architectures may be in the choppy water, but that doesn’t mean we shouldn’t be trying to swim to shore. If we want to get to AGI, we have to solve the problem. (Put differently: yes, one could certainly hack together solutions to get deep learning to solve my specific number series problems, by, for example, playing games with the input encoding schemes; the real question, if we want to get to AGI, is how to have a system learn the sort of generalizations I am describing in a general way.) Third, the claim that no current system can extrapolate turns out to be, well, false; there are already ML systems that can extrapolate at least some functions of exactly the sort I described, and you probably own one: Microsoft Excel, its Flash Fill function in particular (Gulwani, 2011). Powered by a very different approach to machine learning, it can do certain kinds of extrapolation, albeit in a narrow context, by the bushel, e.g., try typing the (decimal) digits 1, 11, 21 in a series of rows and see if the system can extrapolate via Flash Fill to the eleventh item in the sequence (101). Spoiler alert, it can, in exactly the same way as you probably would, even though there were no positive examples in the training dimension of the hundreds digit. The systems learns from examples the function you want and extrapolates it. Piece of cake. Can any deep learning system do that with three training examples, even with a range of experience on other small counting functions, like 1, 3, 5, .... and 2, 4, 6 ....? Well maybe, but only the ones that are likely do so are likely to be hybrids that build in operations over variables, which are quite different from the sort of typical convolutional neural networks that most people associate with deep learning. Putting all this very differently, one crude way to think about where we are with most ML systems that we have today [Note 7] is that they just aren’t designed to think “outside the box”; they are designed to be awesome interpolators inside the box. That’s fine for some purposes, but not others. Humans are better at thinking outside boxes than contemporary AI; I don’t think anyone can seriously doubt that. But that kind of extrapolation, that Microsoft can do in a narrow context, but that no machine can do with human-like breadth, is precisely what machine learning engineers really ought to be working on, if they want to get to AGI. 10. Everybody in the field already knew this. There is nothing new here. Well, certainly not everybody; as noted, there were many critics who think we still don’t know the limits of deep learning, and others who believe that there might be some, but none yet discovered. That said, I never said that any of my points was entirely new; for virtually all, I cited other scholars, who had independently reached similar conclusions. 11. Marcus failed to cite X. Definitely true; the literature review was incomplete. One favorite among the papers I failed to cite is Shanahan’s Deep Symbolic Reinforcement (Garnelo, Arulkumaran, & Shanahan, 2016); I also can’t believe I forgot Richardson and Domingos’ (2006) Markov Logic Networks. I also wish I had cited Evans and Edward Grefenstette (2017), a great paper from DeepMind. And Smolensky’s tensor calculus work (Smolensky et al., 2016). 
And work on inductive programming in various forms (Gulwani et al., 2015) and probabilistic programming, too, by Noah Goodman (Goodman, Mansinghka, Roy, Bonawitz, & Tenenbaum, 2012) All seek to bring rules and networks close to together. And older stuff by pioneers like Jordan Pollack (Smolensky et al., 2016). And Forbus and Gentner’s (Falkenhainer, Forbus, & Gentner, 1989) and Hofstadter and Mitchell’s (1994) work on analogy; and many others. I am sure there is a lot more I could and should have cited. Overall, I tried to be representative rather than fully comprehensive, but I still could have done better. #chagrin. 12. Marcus has no standing in the field; he isn’t a practitioner; he is just a critic. Hesitant to raise this one, but it came up in all kinds of different responses, even from the mouths of certain well-known professionals. As Ram Shankar noted, “As a community, we must circumscribe our criticism to science and merit based arguments.” What really matters is not my credentials (which I believe do in fact qualify me to write) but the validity of the arguments. Either my arguments are correct, or they are not. [Still, for those who are curious, I supply an optional mini-history of some of my relevant credentials in Note 8 at the end.] 13. Re: hierarchy, what about Socher’s tree-RNNs? I have written to him, in hopes of having a better understanding of its current status. I’ve also privately pushed several other teams towards trying out tasks like Lake and Baroni (2017) presented. Pengfei et al (2017) offers some interesting discussion. 14. You could have been more critical of deep learning. Nobody quite said that, not in exactly those words, but a few came close, generally privately. One colleague for example pointed out that there may be some serious errors of future forecasting around The same colleague added Another colleague, ML researcher and author Pedro Domingos, pointed out still other shortcomings of current deep learning methods that I didn’t mention: Like other flexible supervised learning methods, deep learning systems can be unstable in the sense that slightly changing the training data may result in large changes in the resulting model. As Domingos notes, there’s no guarantee this sort of rise and decline won’t repeat itself. Neural networks have risen and fallen several times before, all the way back to Rosenblatt’s first Perceptron in 1957. We shouldn’t mistake cyclical enthusiasm for a complete solution to intelligence, which still seems (to me, anyway) to be decades away. If we want to reach AGI, we owe it to ourselves to be as keenly aware of challenges we face as we are of our successes. 2. There are other problems too in relying on these 1,000 image sets. For example, in reading a draft of this paper, Melanie Mitchell pointed me to important recent work by Loghmani and colleague (2017) on assessing how deep learning does in the real world. Quoting from the abstract, the paper “analyzes the transferability of deep representations from Web images to robotic data [in the wild]. Despite the promising results obtained with [representations developed from Web image], the experiments demonstrate that object classification with real-life robotic data is far from being solved.” 3. And that literature is growing fast. 
In late December there was a paper about fooling deep nets into mistaking a pair of skiers for a dog [https://arxiv.org/pdf/1712.09665.pdf] and another on a general-purpose tool for building real-world adversarial patches: https://arxiv.org/pdf/1712.09665.pdf. (See also https://arxiv.org/abs/1801.00634.) It’s frightening to think how vulnerable deep learning can be real-world contexts. And for that matter consider Filip Pieknewski’s blog on why photo-trained deep learning systems have trouble transferring what they have learned to line drawings, https://blog.piekniewski.info/2016/12/29/can-a-deep-net-see-a-cat/. Vision is not as solved as many people seem to think. 4. As I will explain in the forthcoming paper, AlphaGo is not actually a pure [deep] reinforcement learning system, although the quoted passage presented it as such. It’s really more of a hybrid, with important components that are driven by symbol-manipulating algorithms, along with a well engineered deep-learning component. 5. AlphaZero, by the way, isn’t unsupervised, it’s self-supervised, using self-play and simulation as a way of generating supervised data; I will have a lot more to say about that system in a forthcoming paper. 6. Consider, for example Google Search, and how one might understand it. Google has recently added in a deep learning algorithm, RankBrain, to the wide array of algorithms it uses for search. And Google Search certainly takes in data and knowledge and processes them hierarchically (which according to Maher Ibrahim is all you need to count as being deep learning). But, realistically, deep learning is just one cue among many; the knowledge graph component, for example, is based instead primarily on classical AI notions of traversing ontologies. By any reasonable measure Google Search is a hybrid, with deep learning as just one strand among many. Calling Google Search as a whole. “a deep learning system” would be grossly misleading, akin to relabeling carpentry “screwdrivery”, just because screwdrivers happen to be involved. 7. Important exceptions include inductive logic programming, inductive function programming (the brains behind Microsoft’s Flash Fill) and neural programming. All are making some progress here; some of these even include deep learning, but they also all include structured representations and operations over variables among their primitive operations; that’s all I am asking for. 8. My AI experiments begin in adolescence, with, among other thing, a Latin-English translator that I coded in the programming language Logo. In graduate school, studying with Steven Pinker, I explored the relation between language acquisition, symbolic rules, and neural networks. (I also owe a debt to my undergraduate mentor Neil Stillings.) The child language data I gathered (Marcus et al., 1992) for my dissertation have been cited hundreds of times, and were the most frequently-modeled data in the 90’s debate about neural networks and how children learned language. In the late 1990’s I discovered some specific, replicable problems with multilayer perceptrons, (Marcus, 1998b; Marcus, 1998a)); based on those observation, I designed a widely-cited experiment. published in Science (Marcus, Vijayan, Bandi Rao, & Vishton, 1999), that showed that young infants could extract algebraic rules, contra Jeff Elman’s (1990) then popular neural network. 
All of this culminated in a 2001 MIT Press book (Marcus, 2001), which lobbied for a variety of representational primitives, some of which have begun to pop up in recent neural networks; in particular that the use of operations over variables in the new field of differentiable programming (Daniluk, Rocktäschel, Welbl, & Riedel, 2017; Graves et al., 2016) owes something to the position outlined in that book. There was a strong emphasis on having memory records, as well, which can be seen in the memory networks being developed e.g., at Facebook (Bordes, Usunier, Chopra, & Weston, 2015).) The next decade saw me work on other problems including innateness (Marcus, 2004) (which I will discuss at length in the forthcoming piece about AlphaGo) and evolution (Marcus, 2004; Marcus, 2008), I eventually returned to AI and cognitive modeling, publishing a 2014 article on cortical computation in Science (Marcus, Marblestone, & Dean, 2014) that also anticipates some of what is now happening in differentiable programming. More recently, I took a leave from academia to found and lead a machine learning company in 2014; by any reasonable measure that company was successful, acquired by Uber roughly two years after founding. As co-founder and CEO I put together a team of some of the very best machine learning talent in the world, including Zoubin Ghahramani, Jeff Clune, Noah Goodman, Ken Stanley and Jason Yosinski, and played a pivotal role in developing our core intellectual property and shaping our intellectual mission. (A patent is pending, co-written by Zoubin Ghahramani and myself.) Although much of what we did there remains confidential, now owned by Uber, and not by me, I can say that a large part of our efforts were addressed towards integrating deep learning with our own techniques, which gave me a great deal of familiarity with joys and tribulations of Tensorflow and vanishing (and exploding) gradients. We aimed for state-of-the-art results (sometimes successfully, sometimes not) with sparse data, using hybridized deep learning systems on a daily basis. Bordes, A., Usunier, N., Chopra, S., & Weston, J. (2015). Large-scale Simple Question Answering with Memory Networks. arXiv. Daniluk, M., Rocktäschel, T., Welbl, J., & Riedel, S. (2017). Frustratingly Short Attention Spans in Neural Language Modeling. arXiv. Elman, J. L. (1990). Finding structure in time. Cognitive science, 14(2)(2), 179–211. Evans, R., & Grefenstette, E. (2017). Learning Explanatory Rules from Noisy Data. arXiv, cs.NE. Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989). The structure-mapping engine: Algorithm and examples. Artificial intelligence, 41(1)(1), 1–63. Fukushima, K., Miyake, S., & Ito, T. (1983). Neocognitron: A neural network model for a mechanism of visual pattern recognition. IEEE Transactions on Systems, Man, and Cybernetics, 5, 826–834. Garnelo, M., Arulkumaran, K., & Shanahan, M. (2016). Towards Deep Symbolic Reinforcement Learning. arXiv, cs.AI. Goodman, N., Mansinghka, V., Roy, D. M., Bonawitz, K., & Tenenbaum, J. B. (2012). Church: a language for generative models. arXiv preprint arXiv:1206.3255. Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A. et al. (2016). Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626)(7626), 471–476. Gulwani, S. (2011). Automating string processing in spreadsheets using input-output examples. dl.acm.org, 46(1)(1), 317–330. Gulwani, S., Hernández-Orallo, J., Kitzelmann, E., Muggleton, S. 
H., Schmid, U., & Zorn, B. (2015). Inductive programming meets the real world. Communications of the ACM, 58(11), 90–99. Hofstadter, D. R., & Mitchell, M. (1994). The copycat project: A model of mental fluidity and analogy-making. Advances in Connectionist and Neural Computation Theory, 2, 31–112. Hosseini, H., Xiao, B., Jaiswal, M., & Poovendran, R. (2017). On the Limitation of Convolutional Neural Networks in Recognizing Negative Images. arXiv, cs.CV. Hubel, D. H., & Wiesel, T. N. (1959). Receptive fields of single neurones in the cat’s striate cortex. The Journal of Physiology, 148(3), 574–591. Lake, B. M., & Baroni, M. (2017). Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks. arXiv. Loghmani, M. R., Caputo, B., & Vincze, M. (2017). Recognizing Objects In-the-wild: Where Do We Stand? arXiv, cs.RO. Marcus, G. F. (1998a). Rethinking eliminative connectionism. Cognitive Psychology, 37(3), 243–282. Marcus, G. F. (1998b). Can connectionism save constructivism? Cognition, 66(2), 153–182. Marcus, G. F. (2001). The Algebraic Mind: Integrating Connectionism and Cognitive Science. Cambridge, Mass.: MIT Press. Marcus, G. F. (2004). The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought. Basic Books. Marcus, G. F. (2008). Kluge: The Haphazard Construction of the Human Mind. Boston: Houghton Mifflin. Marcus, G. (2018). Deep Learning: A Critical Appraisal. arXiv. Marcus, G. F., Marblestone, A., & Dean, T. (2014a). The atoms of neural computation. Science, 346(6209), 551–552. Marcus, G. F., Marblestone, A. H., & Dean, T. L. (2014b). Frequently Asked Questions for: The Atoms of Neural Computation. bioRxiv (arXiv), q-bio.NC. Marcus, G. F., Pinker, S., Ullman, M., Hollander, M., Rosen, T. J., & Xu, F. (1992). Overregularization in language acquisition. Monographs of the Society for Research in Child Development, 57(4), 1–182. Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Rule learning by seven-month-old infants. Science, 283(5398), 77–80. Nguyen, A., Yosinski, J., & Clune, J. (2014). Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images. arXiv, cs.CV. Pengfei, L., Xipeng, Q., & Xuanjing, H. (2017). Dynamic Compositional Neural Networks over Tree Structure. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17). Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. arXiv, cs.LG. Richardson, M., & Domingos, P. (2006). Markov logic networks. Machine Learning, 62(1), 107–136. Sabour, S., Frosst, N., & Hinton, G. E. (2017). Dynamic Routing Between Capsules. arXiv, cs.CV. Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A. et al. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354–359. Smolensky, P., Lee, M., He, X., Yih, W.-t., Gao, J., & Deng, L. (2016). Basic Reasoning with Tensor Product Representations. arXiv, cs.AI. CEO & Founder, Geometric Intelligence (acquired by Uber). Professor of Psychology and Neural Science, NYU. Freelancer for The New Yorker & New York Times. 
" Sarthak Jain,3.9K,10,https://medium.com/nanonets/how-to-easily-detect-objects-with-deep-learning-on-raspberrypi-225f29635c74?source=tag_archive---------1----------------,How to easily Detect Objects with Deep Learning on Raspberry Pi,"Disclaimer: I’m building nanonets.com to help build ML with less data and no hardware The raspberry pi is a neat piece of hardware that has captured the hearts of a generation with ~15M devices sold, with hackers building even cooler projects on it. Given the popularity of Deep Learning and the Raspberry Pi Camera we thought it would be nice if we could detect any object using Deep Learning on the Pi. Now you will be able to detect a photobomber in your selfie, someone entering Harambe’s cage, where someone kept the Sriracha or an Amazon delivery guy entering your house. 20M years of evolution have made human vision fairly evolved. The human brain has 30% of it’s Neurons work on processing vision (as compared with 8 percent for touch and just 3 percent for hearing). Humans have two major advantages when compared with machines. One is stereoscopic vision, the second is an almost infinite supply of training data (an infant of 5 years has had approximately 2.7B Images sampled at 30fps). To mimic human level performance scientists broke down the visual perception task into four different categories. Object detection has been good enough for a variety of applications (even though image segmentation is a much more precise result, it suffers from the complexity of creating training data. It typically takes a human annotator 12x more time to segment an image than draw bounding boxes; this is more anecdotal and lacks a source). Also, after detecting objects, it is separately possible to segment the object from the bounding box. Object detection is of significant practical importance and has been used across a variety of industries. Some of the examples are mentioned below: Object Detection can be used to answer a variety of questions. These are the broad categories: There are a variety of models/architectures that are used for object detection. Each with trade-offs between speed, size, and accuracy. We picked one of the most popular ones: YOLO (You only look once). and have shown how it works below in under 20 lines of code (if you ignore the comments). Note: This is pseudo code, not intended to be a working example. It has a black box which is the CNN part of it which is fairly standard and shown in the image below. You can read the full paper here: https://pjreddie.com/media/files/papers/yolo_1.pdf For this task, you probably need a few 100 Images per Object. Try to capture data as close to the data you’re going to finally make predictions on. Draw bounding boxes on the images. You can use a tool like labelImg. You will typically need a few people who will be working on annotating your images. This is a fairly intensive and time consuming task. You can read more about this at medium.com/nanonets/nanonets-how-to-use-deep-learning-when-you-have-limited-data-f68c0b512cab. You need a pretrained model so you can reduce the amount of data required to train. Without it, you might need a few 100k images to train the model. You can find a bunch of pretrained models here The process of training a model is unnecessarily difficult to simplify the process we created a docker image would make it easy to train. 
To start training the model you can run the Docker image; it has a run.sh script that can be called with the following parameters. You can find more details at: To train a model you need to select the right hyperparameters. Finding the right parameters: the art of “Deep Learning” involves a little bit of trial and error to figure out which parameters get the highest accuracy for your model. There is some level of black magic associated with this, along with a little bit of theory. This is a great resource for finding the right parameters. Quantize the model (make it smaller to fit on a small device like the Raspberry Pi or a mobile phone). Small devices like mobile phones and the Raspberry Pi have very little memory and computation power. Training neural networks is done by applying many tiny nudges to the weights, and these small increments typically need floating-point precision to work (though there are research efforts to use quantized representations here too). Taking a pre-trained model and running inference is very different. One of the magical qualities of deep neural networks is that they tend to cope very well with high levels of noise in their inputs. Why quantize? Neural network models can take up a lot of space on disk, with the original AlexNet being over 200 MB in float format, for example. Almost all of that size is taken up by the weights for the neural connections, since there are often many millions of these in a single model. The nodes and weights of a neural network are originally stored as 32-bit floating-point numbers. The simplest motivation for quantization is to shrink file sizes by storing the min and max for each layer, and then compressing each float value to an eight-bit integer. The size of the files is reduced by 75%. Code for quantization: You need the Raspberry Pi camera live and working; then capture a new image. For instructions on how to install it, check out this link. Download the model: once you’re done training the model you can download it onto your Pi. To export the model run: Then download the model onto the Raspberry Pi. Install TensorFlow on the Raspberry Pi; depending on your device you might need to change the installation a little. Run the model to predict on the new image. The Raspberry Pi has constraints on both memory and compute (a version of TensorFlow compatible with the Raspberry Pi GPU is still not available). Therefore, it is important to benchmark how much time each of the models takes to make a prediction on a new image. We have removed the need to annotate images; we have expert annotators who will annotate your images for you. We automatically train the best model for you; to achieve this we run a battery of models with different parameters and select the best one for your data. NanoNets is entirely in the cloud and runs without using any of your hardware, which makes it much easier to use. Since devices like the Raspberry Pi and mobile phones were not built to run complex, compute-heavy tasks, you can outsource the workload to our cloud, which does all of the compute for you. Get your free API key from http://app.nanonets.com/user/api_key Collect the images of the object you want to detect. You can annotate them either using our web UI (https://app.nanonets.com/ObjectAnnotation/?appId=YOUR_MODEL_ID) or use an open-source tool like labelImg. Once you have the dataset ready in two folders, images (image files) and annotations (annotations for the image files), start uploading the dataset.
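Stepping back to the quantization step described above: the original post’s quantization commands are not reproduced here, but as a rough, hedged equivalent (my assumption, not necessarily the tool the author used), TensorFlow Lite’s post-training quantization converts a saved model’s 32-bit float weights to eight-bit integers, cutting the file size by roughly 75%. The paths below are placeholders.

```python
# Hedged sketch of post-training weight quantization with TensorFlow Lite.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables 8-bit weight quantization
tflite_model = converter.convert()

with open("detector_quantized.tflite", "wb") as f:
    f.write(tflite_model)  # typically ~4x smaller than the float model
```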
Once the Images have been uploaded, begin training the Model The model takes ~2 hours to train. You will get an email once the model is trained. In the meanwhile you check the state of the model Once the model is trained. You can make predictions using the model From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder & CEO @ NanoNets.com NanoNets: Machine Learning API " Favio Vázquez,3.3K,14,https://towardsdatascience.com/a-weird-introduction-to-deep-learning-7828803693b0?source=tag_archive---------2----------------,A “weird” introduction to Deep Learning – Towards Data Science,"There are amazing introductions, courses and blog posts on Deep Learning. I will name some of them in the resources sections, but this is a different kind of introduction. But why weird? Maybe because it won’t follow the “normal” structure of a Deep Learning post, where you start with the math, then go into the papers, the implementation and then to applications. It will be more close to the post I did before about “My journey into Deep Learning”, I think telling a story can be much more helpful than just throwing information and formulas everywhere. So let’s begin. NOTE: There’s a companion webinar to this article. Find it here: Sometimes is important to have a written backup of your thoughts. I tend to talk a lot, and be present in several presentations and conference, and this is my way of contributing with a little knowledge to everyone. Deep Learning (DL)is such an important field for Data Science, AI, Technology and our lives right now, and it deserves all of the attention is getting. Please don’t say that deep learning is just adding a layer to a neural net, and that’s it, magic! Nope. I’m hoping that after reading this you have a different perspective of what DL is. I just created this timeline based on several papers and other timelines with the purpose of everyone seeing that Deep Learning is much more than just Neural Networks. There has been really theoretical advances, software and hardware improvements that were necessary for us to get to this day. If you want it just ping me and I’ll send it to you. (Find my contact in the end of the article). Deep Learning has been around for quite a while now. So why it became so relevant so fast the last 5–7 years? As I said before, until the late 2000s, we were still missing a reliable way to train very deep neural networks. Nowadays, with the development of several simple but important theoretical and algorithmic improvements, the advances in hardware (mostly GPUs, now TPUs), and the exponential generation and accumulation of data, DL came naturally to fit this missing spot to transform the way we do machine learning. Deep Learning is an active field of research too, nothing is settle or closed, we are still searching for the best models, topology of the networks, best ways to optimize their hyperparameters and more. Is very hard, as any other active field on science, to keep up to date with the investigation, but it’s not impossible. A side note on topology and machine learning (Deep Learning with Topological Signatures by Hofer et al.): Luckily for us, there are lots of people helping understand and digest all of this information through courses like the Andrew Ng one, blog posts and much more. This for me is weird, or uncommon because normally you have to wait for sometime (sometime years) to be able to digest difficult and advance information in papers or research journals. 
Of course, most areas of science are now also really fast at getting from a paper to a blog post that tells you what you need to know, but in my opinion DL has a different feel. We are working with something that is very exciting; most people in the field are saying that the latest ideas in deep learning papers (specifically new topologies and configurations for NNs, or algorithms to improve their usage) are the best ideas in Machine Learning in decades (remember that DL is inside of ML). I’ve used the word learning a lot in this article so far. But what is learning? In the context of Machine Learning, the word “learning” describes an automatic search process for better representations of the data you are analyzing and studying (please keep this in mind: it is not about making a computer literally learn). This is a very important word for this field, REP-RE-SEN-TA-TION. Don’t forget about it. What is a representation? It’s a way to look at data. Let me give you an example: let’s say I tell you I want you to draw a line that separates the blue circles from the green triangles for this plot: So, if you want to use a line, this is what the author says: This is impossible if we remember the concept of a line: So is the case lost? Actually no. If we find a way of representing this data differently, we can draw a straight line to separate the types of data. This is something that math taught us hundreds of years ago. In this case what we need is a coordinate transformation, so we can plot or represent this data in a way that lets us draw this line. If we look at the polar coordinate transformation, we have the solution: And that’s it: now we can draw a line: So, in this simple example we found and chose the transformation to get a better representation by hand. But if we create a system, a program that can search for different representations (in this case a coordinate change), and then find a way of calculating the percentage of categories being classified correctly with this new approach, in that moment we are doing Machine Learning. This is something very important to have in mind: deep learning is representation learning, using different kinds of neural networks and optimizing the hyperparameters of the net to get (learn) the best representation for our data. This wouldn’t be possible without the amazing breakthroughs that led us to the current state of Deep Learning. Here I name some of them: 1. Idea: Back-propagation. Learning representations by back-propagating errors by David E. Rumelhart, Geoffrey E. Hinton & Ronald J. Williams. A theoretical framework for Back-Propagation by Yann LeCun. 2. Idea: Better initialization of the parameters of the nets. Something to remember: the initialization strategy should be selected according to the activation function used (next). 3. Idea: Better activation functions. This means better ways of approximating the functions, leading to a faster training process. 4. Idea: Dropout. Better ways of preventing overfitting and more. Dropout: A Simple Way to Prevent Neural Networks from Overfitting, a great paper by Srivastava, Hinton and others. 5. Idea: Convolutional Neural Nets (CNNs). Gradient-based learning applied to document recognition by LeCun and others, and ImageNet Classification with Deep Convolutional Neural Networks by Krizhevsky and others. 6. Idea: Residual Nets (ResNets). 7. Idea: Region-Based CNNs. Used for object detection and more. 8. Idea: Recurrent Neural Networks (RNNs) and LSTMs. BTW: It was shown by Liao and Poggio (2016) that ResNets == RNNs, arXiv:1604.03640v1. 9.
Idea: Generative Adversarial Networks (GANs). 10. Idea: Capsule Networks. And there are many others but I think those are really important theoretical and algorithmic breakthroughs that are changing the world, and that gave momentum for the DL revolution. It’s not easy to get started but I’ll try my best to guide you through this process. Check out this resources, but remember, this is not only watching videos and reading papers, it’s about understanding, programming, coding, failing and then making it happen. -1. Learn Python and R ;) 0. Andrew Ng and Coursera (you know, he doesn’t need an intro): Siraj Raval: He’s amazing. He has the power to explain hard concepts in a fun and easy way. Follow him on his YouTube channel. Specifically this playlists: — The Math of Intelligence: — Intro to Deep Learning: 3. François Chollet’s book: Deep Learning with Python (and R): 3. IBM Cognitive Class: 5. DataCamp: Deep Learning is one of the most important tools and theories a Data Scientist should learn. We are so lucky to see amazing people creating both research, software, tools and hardware specific for DL tasks. DL is computationally expensive, and even though there’s been advances in theory, software and hardware, we need the developments in Big Data and Distributed Machine Learning to improve performance and efficiency. Great people and companies are making amazing efforts to join the distributed frameworks (Spark) and DL libraries (TF and Keras). Here’s an overview: 2. Elephas: Distributed DL with Keras & PySpark: 3. Yahoo! Inc.: TensorFlowOnSpark: 4. CERN Distributed Keras (Keras + Spark) : 5. Qubole (tutorial Keras + Spark): 6. Intel Corporation: BigDL (Distributed Deep Learning Library for Apache Spark) 7. TensorFlow and Spark on Google Cloud: As I’ve said before one of the most important moments for this field was the creation and open sourced of TensorFlow. TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The things you are seeing in the image above are tensor manipulations working with the Riemann Tensor in General Relativity. Tensors, defined mathematically, are simply arrays of numbers, or functions, that transform according to certain rules under a change of coordinates. But in the scope of Machine Learning and Deep Learning a tensor is a generalization of vectors and matrices to potentially higher dimensions. Internally, TensorFlow represents tensors as n-dimensional arrays of base datatypes. We use heavily tensors all the time in DL, but you don’t need to be an expert in them to use it. You may need to understand a little bit about them so here I list some good resources: After you check that out, the breakthroughs I mentioned before and the programming frameworks like TensorFlow or Keras (for more on Keras go here), now I think you have an idea of what you need to understand and work with Deep Learning. But what have we achieved so far with DL? To name a few (from François Chollet book on DL): And much more. Here’s a list of 30 great and funny applications of DL: Thinking about the future of Deep Learning (for programming or building applications), I’ll repeat what I said in other posts. I really think GUIs and AutoML are the near future of getting things done with Deep Learning. 
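Going back to the circles-and-triangles example earlier in this piece, here is a minimal sketch (my own, with made-up data on two rings) of what “a better representation” buys you: in Cartesian coordinates no straight line separates the two classes, but after re-representing each point by its polar radius a single threshold does.

```python
# Illustration of finding a better representation by hand: a polar change of coordinates.
import numpy as np

rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 2.0 * np.pi, 200)
radii = np.concatenate([rng.normal(1.0, 0.1, 100),    # "blue circles": inner ring
                        rng.normal(3.0, 0.1, 100)])   # "green triangles": outer ring
labels = np.concatenate([np.zeros(100), np.ones(100)])

x, y = radii * np.cos(angles), radii * np.sin(angles)  # Cartesian: no separating line

r = np.sqrt(x ** 2 + y ** 2)                           # new representation: polar radius
print("accuracy with the polar representation:", ((r > 2.0) == labels).mean())  # ~1.0
```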
Don’t get me wrong, I love coding, but I think the amount of code we will be writing next years will decay. We cannot spend so many hours worldwide programming the same stuff over and over again, so I think these two features (GUIs and AutoML) will help Data Scientist on getting more productive and solving more problems. On of the best free platforms for doing these tasks in a simple GUI is Deep Cognition. Their simple drag & drop interface helps you design deep learning models with ease. Deep Learning Studio can automatically design a deep learning model for your custom dataset thanks to their advance AutoML feature with nearly one click. Here you can learn more about them: Take a look at the prices :O, it’s freeeee :) I mean, it’s amazing how fast the development in the area is right now, that we can have simple GUIs to interact with all the hard and interesting concepts I talked about in this post. One of the things I like about that platform is that you can still code, interact with TensorFlow, Keras, Caffe, MXNet an much more with the command line or their Notebook without installing anything. You have both the notebook and the CLI! I take my hat off to them and their contribution to society. Other interesting applications of deep learning that you can try for free or for little cost are (some of them are on private betas): Thanks for reading this weird introduction to Deep Learning. I hope it helped you getting started in this amazing area, or maybe just discover something new. If you have questions just add me on LinkedIn and we’ll chat there: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Data scientist, physicist and computer engineer. Love sharing ideas, thoughts and contributing to Open Source in Machine Learning and Deep Learning ;). Sharing concepts, ideas, and codes. " Oleksandr Savsunenko,5.5K,4,https://hackernoon.com/the-new-neural-internet-is-coming-dda85b876adf?source=tag_archive---------3----------------,The New Neural Internet is Coming – Hacker Noon,"How it all began / The Landscape Think of the typical and well-studied neural networks (such as image classifier) as a left hemisphere of the neural network technology. With this in mind, it is easy to understand what is Generative Adversarial Network. It is a kind of right hemisphere — the one that is claimed to be responsible for creativity. The Generative Adversarial Networks (GANs) are the first step of neural networks technology learning creativity. Typical GAN is a neural network trained to generate images on the certain topic using an image dataset and some random noise as a seed. Up until now images created by GANs were of low quality and limited in resolution. Recent advances by NVIDIA showed that it is within a reach to generate photorealistic images in high-resolution and they published the technology itself in open-access. There is a plethora of GANs types of various complexity, architectures, and strange acronyms. We are mostly interested here in conditional GANs and variational autoencoders. Conditional GANs are capable of not just mimicking the broad type of images as “bedroom”, “face”, “dog” but also dive into more specific categories. For example, the Text2Image network is capable of translation textual image description into the image itself. By varying random seed that is concatenated to the “meanings” vector we are able to produce an infinite number of birds image, matching description. Let’s just close your eyes and see the world in 2 years. 
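To make the “random seed concatenated to the meanings vector” idea above concrete, here is a minimal sketch (mine, with arbitrary layer sizes; not the Text2Image architecture) of a conditional generator: keep the description input fixed, vary the noise input, and you get an endless stream of distinct images matching the same description.

```python
# Hedged sketch of a conditional generator; all sizes are placeholders.
import tensorflow as tf

noise = tf.keras.layers.Input(shape=(100,))      # random seed
meaning = tf.keras.layers.Input(shape=(128,))    # embedded text description
h = tf.keras.layers.Concatenate()([noise, meaning])
h = tf.keras.layers.Dense(256, activation="relu")(h)
h = tf.keras.layers.Dense(64 * 64 * 3, activation="tanh")(h)
image = tf.keras.layers.Reshape((64, 64, 3))(h)

generator = tf.keras.Model(inputs=[noise, meaning], outputs=image)
generator.summary()  # fixing "meaning" and varying "noise" yields endless variants
```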
Companies like NVIDIA will push GAN technology to an industry-ready level, just as they did with celebrity face generation. This means that a GAN will be able to generate any image on demand, on the fly, based on (for example) a textual description. This will render obsolete a number of photography- and design-related industries. Here’s how this will work. Again, the network is able to generate an infinite number of images by varying the random seed. And here’s the scary part. Such a network can receive not only a description of the target object it needs to generate, but also a vector describing you — the ad consumer. This vector can include a very deep description of your personality, web browsing history, recent transactions, and geolocation, so the GAN will generate a one-time, unique ad that fits you perfectly. CTR is going sky high. By measuring your reactions the network will adapt and make ads targeting you more and more precisely, hitting your soft spots. So, at the end of the day, we are going to see fully personalized content everywhere on the Internet. Everyone will see fully custom versions of all content, adapted to the consumer based on their lifestyle, opinions, and history. We all witnessed the rise of this bubble pattern after the latest US elections, and it’s going to get worse. GANs will be able to target content precisely to you with no limitations of medium — starting from image ads and going up to complex machine-generated opinions, threads, and publications. This will create a constant feedback loop, improving based on your interactions. And different GANs are going to compete with each other: a kind of fully automated war of psychological manipulation, with humanity as the battlefield. The driving force behind this trend is extremely simple: profits. And this is not a scary doomsday scenario; this is actually happening today. I have no idea. But surely we need a few things: broad public discussion about this technology’s inevitable arrival, and a backup plan to stop it. So, it’s better to start thinking now about how we can fight this process and benefit from it at the same time. We are not there yet due to some technical limitations. Up until recently, images generated by GANs were of bad quality and easily spotted as fake. NVIDIA showed that it is actually doable to generate extremely realistic 1024x1024 faces. To move things forward we would need faster and bigger GPUs, more theoretical studies of GANs, smarter hacks around GAN training, more labeled datasets, etc. Please notice: we don’t need new power sources, quantum processors (though they can help), general AI, or some other purely theoretical new cool things to reach this point. All we need is within reach of a few years, and big corporations likely already have these kinds of resources available. Also, we will need smarter neural networks. I am definitely watching for progress in the capsules approach by Hinton et al. And of course, we will be the first to implement this in super-resolution technology, which should heavily benefit from GAN progress. Let me know what you think. Machine learning engineer, doer/maker/dreamer, father. 
" Max Pechyonkin,3.4K,8,https://towardsdatascience.com/stochastic-weight-averaging-a-new-way-to-get-state-of-the-art-results-in-deep-learning-c639ccf36a?source=tag_archive---------4----------------,Stochastic Weight Averaging — a New Way to Get State of the Art Results in Deep Learning,"In this article, I will discuss two interesting recent papers that provide an easy way to improve performance of any given neural network by using a smart way to ensemble. They are: Additional prerequisite reading that will make context of this post much more easy to understand: Traditional ensembling combines several different models and makes them predict on the same input. Then some way of averaging is used to determine the final prediction of the ensemble. It can be simple voting, an average or even another model that learns to predict correct value or label based on the inputs of models in the ensemble. Ridge regression is one particular way of combining several predictions which is used by Kaggle-winning machine learning practitioners. When applied in deep learning, ensembling can be used to combine predictions of several neural networks to produce one final prediction. Usually it is a good idea to use neural networks of different architectures in an ensemble, because they will likely make mistakes on different training samples and therefore the benefit of ensembling will be larger. However, you can also ensemble models with the same architecture and it will give surprisingly good results. One very cool trick exploiting this approach was proposed in the snapshot ensembling paper. The authors take weights snapshot while training the same network and then after training create an ensemble of nets with the same architecture but different weights. This allows to improve test performance, and it is a very cheap way too because you just train one model once, just saving weights from time to time. You can refer to this awesome post for more details. If you aren’t yet using cyclical learning rates, then you definitely should, as it becomes the standard state-of-the art training technique that is very simple, not computationally heavy and provides significant gains at almost no additional cost. All of the examples above are ensembles in the model space, because they combine several models and then use models’ predictions to produce the final prediction. In the paper that I am discussing in this post, however, the authors propose to use a novel ensembling in the weights space. This method produces an ensemble by combining weights of the same network at different stages of training and then uses this model with combined weights to make predictions. There are 2 benefits from this approach: Let’s see how it works. But first we need to understand some important facts about loss surfaces and generalizable solutions. The first important insight is that a trained network is a point in multidimensional weight space. For a given architecture, each distinct combination of network weights produces a separate model. Since there are infinitely many combinations of weights for any given architecture, there will be infinitely many solutions. The goal of training of a neural network is to find a particular solution (point in the weight space) that will provide low value of the loss function both on training and testing data sets. During training, by changing weights, training algorithm changes the network and travel in the weight space. 
The gradient descent algorithm travels on a loss surface in this weight space, where the elevation of the surface is given by the value of the loss function. It is very hard to visualize and understand the geometry of multidimensional weight space. At the same time, it is very important to understand it, because stochastic gradient descent essentially traverses a loss surface in this highly multidimensional space during training and tries to find a good solution: a "point" on the loss surface where the loss value is low. It is known that such surfaces have many local optima. But it turns out that not all of them are equally good. One metric that can distinguish a good solution from a bad one is its flatness. The idea is that the training data set and the testing data set produce similar, but not exactly the same, loss surfaces. You can imagine that the test surface is shifted a bit relative to the train surface. For a narrow solution, during test time, a point that gave low loss can have a large loss because of this shift. This means that this "narrow" solution did not generalize well: training loss is low, while testing loss is large. On the other hand, for a "wide" and flat solution, this shift leads to the training and testing loss being close to each other. I explained the difference between narrow and wide solutions because the new method that is the focus of this post leads to nice, wide solutions. Initially, SGD will make a big jump in the weight space. Then, as the learning rate gets smaller due to cosine annealing, SGD will converge to some local solution and the algorithm will take a "snapshot" of the model by adding it to the ensemble. Then the rate is reset to a high value again and SGD takes a large jump again before converging to some different local solution. Cycle length in the snapshot ensembling approach is 20 to 40 epochs. The idea of long learning rate cycles is to be able to find sufficiently different models in the weight space. If the models are too similar, then the predictions of the separate networks in the ensemble will be too close and the benefit of ensembling will be negligible. Snapshot ensembling works really well and improves model performance, but Fast Geometric Ensembling works even better. Fast geometric ensembling is very similar to snapshot ensembling, but it has some distinguishing features. It uses a piecewise-linear cyclical learning rate schedule instead of a cosine one. Secondly, the cycle length in FGE is much shorter: only 2 to 4 epochs per cycle. At first glance, such a short cycle seems wrong, because the models at the end of each cycle will be close to each other and ensembling them should therefore give no benefit. However, as the authors discovered, because there exist connected paths of low loss between sufficiently different models, it is possible to travel along those paths in small steps, and the models encountered along the way will be different enough to allow ensembling them with good results. Thus, FGE shows improvement compared to snapshot ensembles, and it takes smaller steps to find the models (which makes it faster to train). To benefit from either snapshot ensembling or FGE, one needs to store multiple models and then make predictions with all of them before averaging for the final prediction. Thus, for the additional performance of the ensemble, one needs to pay with a higher amount of computation. So there is no free lunch there. Or is there? This is where the new paper on stochastic weight averaging comes in.
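To preview the idea before the details below, here is a minimal sketch that combines a cyclical learning rate with a running average of the weights collected at the end of each cycle; the model, cycle length, and learning-rate bounds are placeholder assumptions, not values from the papers:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                       # placeholder model being trained
swa_model = nn.Linear(10, 2)                   # running average of the weights
swa_model.load_state_dict(model.state_dict())
opt = torch.optim.SGD(model.parameters(), lr=0.05)

cycle_len, n_cycles, lr_max, lr_min = 4, 5, 0.05, 0.001
n_averaged = 1

for cycle in range(n_cycles):
    for epoch in range(cycle_len):
        # Piecewise-linear cyclical schedule: high at the start of each cycle, low at the end.
        lr = lr_max - (lr_max - lr_min) * epoch / (cycle_len - 1)
        for group in opt.param_groups:
            group["lr"] = lr
        # ... one epoch of training on your data would go here ...

    # End of cycle: fold the current weights into the running average,
    # i.e. new_avg = (n * old_avg + current) / (n + 1).
    with torch.no_grad():
        for w_avg, w in zip(swa_model.parameters(), model.parameters()):
            w_avg.mul_(n_averaged / (n_averaged + 1)).add_(w / (n_averaged + 1))
    n_averaged += 1

# swa_model now holds the averaged weights and is the single model used for prediction.
```

Only the model being trained and the running-average copy are kept in memory, which is the point developed next.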
Stochastic weight averaging closely approximates fast geometric ensembling, but at a fraction of the computational cost. SWA can be applied to any architecture and data set, and it shows good results in all of them. The paper suggests that SWA leads to wider minima, the benefits of which I discussed above. SWA is not an ensemble in the classical sense. At the end of training you get one model, but its performance beats snapshot ensembles and approaches FGE. The intuition for SWA comes from the empirical observation that the local minima found at the end of each learning rate cycle tend to accumulate at the border of areas of the loss surface where the loss value is low (points W1, W2 and W3 are at the border of the red low-loss area in the left panel of the figure above). By taking the average of several such points, it is possible to achieve a wide, generalizable solution with even lower loss (Wswa in the left panel of the figure above). Here is how it works. Instead of an ensemble of many models, you only need two models: At the end of each learning rate cycle, the current weights of the second model are used to update the weights of the running-average model by taking a weighted mean between the old running-average weights and the new set of weights from the second model (formula provided in the figure on the left). By following this approach, you only need to train one model and store only two models in memory during training. For prediction, you only need the running-average model, and predicting with it is much faster than using the ensembles described above, where you use many models to predict and then average the results. The authors of the paper provide their own implementation in PyTorch. Also, SWA is implemented in the awesome fast.ai library that everyone should be using. And if you haven't yet seen their course, then follow the links. You can follow me on Twitter. Let's also connect on LinkedIn. " Daniel Simmons,3.4K,8,https://itnext.io/you-can-build-a-neural-network-in-javascript-even-if-you-dont-really-understand-neural-networks-e63e12713a3?source=tag_archive---------5----------------,You can build a neural network in JavaScript even if you don’t really understand neural networks,"(Skip this part if you just want to get on with it...) I should really start by admitting that I'm no expert in neural networks or machine learning. To be perfectly honest, most of it still completely baffles me. But hopefully that's encouraging to any fellow non-experts who might be reading this, eager to get their feet wet in M.L. Machine learning was one of those things that would come up from time to time and I'd think to myself "yeah, that would be pretty cool... but I'm not sure that I want to spend the next few months learning linear algebra and calculus." Like a lot of developers, however, I'm pretty handy with JavaScript and would occasionally look for examples of machine learning implemented in JS, only to find heaps of articles and StackOverflow posts about how JS is a terrible language for M.L., which, admittedly, it is. Then I'd get distracted and move on, figuring that they were right and I should just get back to validating form inputs and waiting for CSS grid to take off. But then I found Brain.js and I was blown away. Where had this been hiding?!
The documentation was well written and easy to follow, and within about 30 minutes of getting started I'd set up and trained a neural network. In fact, if you want to just skip this whole article and just read the readme on GitHub, be my guest. It's really great. That said, what follows is not an in-depth tutorial about neural networks that delves into hidden layers, activation functions, or how to use Tensorflow. Instead, this is a dead-simple, beginner-level explanation of how to implement Brain.js that goes a bit beyond the documentation. Here's a general outline of what we'll be doing: If you'd prefer to just download a working version of this project rather than follow along with the article, then you can clone the GitHub repository here. Create a new directory and plop a good ol' index.html boilerplate file in there. Then create three JS files: brain.js, training-data.js, and scripts.js (or whatever generic term you use for your default JS file) and, of course, import all of these at the bottom of your index.html file. Easy enough so far. Now go here to get the source code for Brain.js. Copy & paste the whole thing into your empty brain.js file, hit save, and bam: 2 out of 4 files are finished. Next is the fun part: deciding what your machine will learn. There are countless practical problems that you can solve with something like this; sentiment analysis or image classification, for example. I happen to think that applications of M.L. that process text as input are particularly interesting, because you can find training data virtually everywhere and they have a huge variety of potential use cases, so the example that we'll be using here will be one that deals with classifying text: We'll be determining whether a tweet was written by Donald Trump or Kim Kardashian. Ok, so this might not be the most useful application. But Twitter is a treasure trove of machine learning fodder and, useless though it may be, our tweet-author-identifier will nevertheless illustrate a pretty powerful point. Once it's been trained, our neural network will be able to look at a tweet that it has never seen before and then be able to determine whether it was written by Donald Trump or by Kim Kardashian, just by recognizing patterns in the things they write. In order to do that, we'll need to feed it as much training data as we can bear to copy / paste into our training-data.js file, and then we can see if we can identify ourselves some tweet authors. Now all that's left to do is set up Brain.js in our scripts.js file and feed it some training data in our training-data.js file. But before we do any of that, let's start with a 30,000-foot view of how all of this will work. Setting up Brain.js is extremely easy, so we won't spend too much time on that, but there are a few details about how it's going to expect its input data to be formatted that we should go over first. Let's start by looking at the setup example that's included in the documentation (which I've slightly modified here) that illustrates all this pretty well: First of all, the example above is actually a working A.I. (it looks at a given color and tells you whether black text or white text would be more legible on it). Which hopefully illustrates how easy Brain.js is to use. Just instantiate it, train it, and run it. That's it. I mean, if you inlined the training data that would be 3 lines of code. Pretty cool. Now let's talk about training data for a minute.
There are two important things to note in the above example other than the overall input: {}, output: {} format of the training data. First, the data do not need to all be the same length. As you can see on line 11 above, only an R and a B value get passed, whereas the other two inputs pass an R, G, and B value. Also, even though the example above shows the input as objects, it's worth mentioning that you could also use arrays. I mention this largely because we'll be passing arrays of varying length in our project. Second, those are not valid RGB values. Every one of them would come out as black if you were to actually use it. That's because input values have to be between 0 and 1 in order for Brain.js to work with them. So, in the above example, each color had to be processed (probably just fed through a function that divides it by 255, the max value for RGB) in order to make it work. And we'll be doing the same thing. So if we want our neural network to accept tweets (i.e. strings) as input, we'll need to run them through a similar function (called encode() below) that will turn every character in a string into a value between 0 and 1 and store it in an array. Fortunately, JavaScript has a native method for converting any character into a character code, called charCodeAt(). So we'll use that and divide the outcome by the max value for extended ASCII characters, 255 (we're using extended ASCII just in case we encounter any fringe cases like é or 1⁄2), which will ensure that we get a value <1. Also, we'll be storing our training data as plain text, not as the encoded data that we'll ultimately be feeding into our A.I. - you'll thank me for this later. So we'll need another function (called processTrainingData() below) that will apply the previously mentioned encoding function to our training data, selectively converting the text into encoded characters, and returning an array of training data that will play nicely with Brain.js. So here's what all of that code will look like (this goes into your 'scripts.js' file): Something that you'll notice here that wasn't present in the example from the documentation shown earlier (other than the two helper functions that we've already gone over) is on line 20 in the train() function, which saves the trained neural network to a global variable called trainedNet. This prevents us from having to re-train our neural network every time we use it. Once the network is trained and saved to the variable, we can just call it like a function and pass in our encoded input (as shown on line 25 in the execute() function) to use our A.I. Alright, so now your index.html, brain.js, and scripts.js files are finished. Now all we need is to put something into training-data.js and we'll be ready to go. Last but not least, our training data. Like I mentioned, we're storing all our tweets as text and encoding them into numeric values on the fly, which will make your life a whole lot easier when you actually need to copy / paste training data. No formatting necessary. Just paste in the text and add a new row. Add that to your 'training-data.js' file and you're done! Note: although the above example only shows 3 samples from each person, I used 10 of each; I just didn't want this sample to take up too much space.
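The project itself is written in JavaScript (see the linked repo for the real helpers); purely to illustrate the encoding scheme just described, here is the same idea sketched in Python. The function names mirror the article's encode() and processTrainingData(), but the bodies and the sample records are my own illustrative assumptions:

```python
def encode(text):
    """Turn each character into a value between 0 and 1 by dividing its
    character code by 255, mirroring charCodeAt(i) / 255 in the article."""
    return [ord(ch) / 255 for ch in text]

def process_training_data(raw_samples):
    """Convert {'text', 'author'} records into the {'input', 'output'} pairs
    that Brain.js-style training expects, encoding the text on the fly."""
    return [
        {"input": encode(s["text"]), "output": {s["author"]: 1}}
        for s in raw_samples
    ]

# Placeholder rows standing in for the copy/pasted tweets in training-data.js.
samples = [
    {"text": "example tweet text one", "author": "trump"},
    {"text": "example tweet text two", "author": "kardashian"},
]
print(process_training_data(samples)[0]["output"])  # {'trump': 1}
```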
Of course, your neural network's accuracy will increase in proportion to the amount of training data that you give it, so feel free to use more or less than me and see how it affects your outcomes. Now, to run your newly trained neural network, just throw an extra line at the bottom of your 'scripts.js' file that calls the execute() function and passes in a tweet from Trump or Kardashian; make sure to console.log it, because we haven't built a UI. Here's a tweet from Kim Kardashian that was not in my training data (i.e. the network has never encountered this tweet before): Then pull up your index.html page on localhost, check the console, aaand... There it is! The network correctly identified a tweet that it had never seen before as originating from Kim Kardashian, with a certainty of 86%. Now let's try it again with a Trump tweet: And the result... Again, a never-before-seen tweet. And again, correctly identified! This time with 97% certainty. Now you have a neural network that can be trained on any text that you want! You could easily adapt this to identify the sentiment of an email or your company's online reviews, identify spam, classify blog posts, determine whether a message is urgent or not, or any of a thousand different applications. And as useless as our tweet identifier is, it still illustrates a really interesting point: that a neural network like this can perform tasks as nuanced as identifying someone based on the way they write. So even if you don't go out and create an innovative or useful tool that's powered by machine learning, this is still a great bit of experience to have in your developer tool belt. You never know when it might come in handy or even open up new opportunities down the road. Once again, all of this is available in a GitHub repo here: " Eugenio Culurciello,2.8K,13,https://towardsdatascience.com/artificial-intelligence-ai-in-2018-and-beyond-e06f05167f9c?source=tag_archive---------6----------------,"Artificial Intelligence, AI in 2018 and beyond – Towards Data Science","These are my opinions on where deep neural networks and machine learning are headed in the larger field of artificial intelligence, and how we can get more and more sophisticated machines that can help us in our daily routines. Please note that these are not predictions or forecasts, but rather a detailed analysis of the trajectory of the fields, the trends, and the technical needs we have to achieve useful artificial intelligence. Not all machine learning targets artificial intelligence, and there is low-hanging fruit, which we will also examine here. The goal of the field is to achieve human and super-human abilities in machines that can help us in everyday life. Autonomous vehicles, smart homes, artificial assistants, and security cameras are a first target. Home cooking and cleaning robots are a second target, together with surveillance drones and robots. Another one is assistants on mobile devices or always-on assistants. Another is full-time companion assistants that can hear and see what we experience in our lives. One ultimate goal is a fully autonomous synthetic entity that can behave at or beyond human-level performance in everyday tasks. See more about these goals here, and here, and here.
Software is defined here as neural network architectures trained with an optimization algorithm to solve a specific task. Today, neural networks are the de facto tool for learning to solve tasks that involve supervised learning to categorize from a large dataset. But this is not artificial intelligence, which requires acting in the real world, often learning without supervision and from experiences never seen before, often combining previous knowledge in disparate circumstances to solve the current challenge. Neural network architectures — when the field boomed, a few years back, we often said it had the advantage of learning the parameters of an algorithm automatically from data, and as such was superior to hand-crafted features. But we conveniently forgot to mention one little detail... the neural network architecture that is at the foundation of training to solve a specific task is not learned from data! In fact, it is still designed by hand. Hand-crafted from experience, and it is currently one of the major limitations of the field. There is research in this direction: here and here (for example), but much more is needed. Neural network architectures are the fundamental core of learning algorithms. Even if our learning algorithms are capable of mastering a new task, if the neural network architecture is not correct, they will not be able to. The problem with learning neural network architectures from data is that it currently takes too long to experiment with multiple architectures on a large dataset. One has to try training multiple architectures from scratch and see which one works best. Well, this is exactly the time-consuming trial-and-error procedure we are using today! We ought to overcome this limitation and put more brain-power on this very important issue. Unsupervised learning — we cannot always be there for our neural networks, guiding them at every step of their lives and every experience. We cannot afford to correct them at every instance and provide feedback on their performance. We have our lives to live! But that is exactly what we do today with supervised neural networks: we offer help at every instance to make them perform correctly. Instead, humans learn from just a handful of examples, and can self-correct and learn more complex data in a continuous fashion. We have talked about unsupervised learning extensively here. Predictive neural networks — a major limitation of current neural networks is that they do not possess one of the most important features of human brains: their predictive power. One major theory about how the human brain works is that it is constantly making predictions: predictive coding. If you think about it, we experience it every day, as when you lift an object that you thought was light but turned out to be heavy. It surprises you, because as you approached to pick it up, you had predicted how it was going to affect you and your body, or your environment overall. Prediction allows us not only to understand the world, but also to know when we do not, and when we should learn. In fact, we save information about things we do not know and that surprise us, so that next time they will not! And cognitive abilities are clearly linked to our attention mechanism in the brain: our innate ability to forgo 99.9% of our sensory inputs, only to focus on the data that matters for our survival: where is the threat, and where do we run to avoid it. Or, in the modern world, where is my cell phone as we walk out the door in a rush.
Building predictive neural networks is at the core of interacting with the real world and acting in a complex environment. As such, this is the core network for any work in reinforcement learning. See more below. We have talked extensively about the topic of predictive neural networks, and were one of the pioneering groups to study and create them. For more details on predictive neural networks, see here, and here, and here. Limitations of current neural networks — we have talked before about the limitations of neural networks as they are today: they cannot predict, they cannot reason about content, and they have temporal instabilities. We need a new kind of neural network, which you can read about here. Neural Network Capsules are one approach to solving the limitations of current neural networks. We reviewed them here. We argue here that Capsules have to be extended with a few additional features: Continuous learning — this is important because neural networks need to keep learning new data points continuously throughout their lives. Current neural networks are not able to learn new data without being re-trained from scratch at every instance. Neural networks need to be able to self-assess the need for new training and to recognize what they already know. This is also needed to perform in real life and for reinforcement learning tasks, where we want to teach machines to do new tasks without forgetting older ones. For more detail, see this excellent blog post by Vincenzo Lomonaco. Transfer learning — or, how do we have these algorithms learn on their own by watching videos, just like we do when we want to learn how to cook something new? That is an ability that requires all the components listed above, and it is also important for reinforcement learning. Then you could really train your machine to do what you want by just giving it an example, the same way we humans do every day! Reinforcement learning — this is the holy grail of deep neural network research: teach machines how to learn to act in an environment, the real world! This requires self-learning, continuous learning, predictive power, and a lot more we do not know. There is much work in the field of reinforcement learning, but to the author it is really only scratching the surface of the problem, still millions of miles away from it. We already talked about this here. Reinforcement learning is often referred to as the "cherry on the cake", meaning that it is just minor training on top of a plastic synthetic brain. But how can we get a "generic" brain that then solves all problems easily? It is a chicken-and-egg problem! Today, to solve reinforcement learning problems one by one, we use standard neural networks: Both these components are obvious solutions to the problem, and currently they are clearly wrong, but that is what everyone uses because they are some of the available building blocks. As such, the results are unimpressive: yes, we can learn to play video games from scratch and master fully observable games like chess and Go, but I do not need to tell you that this is nothing compared to solving problems in a complex world. Imagine an AI that can play Horizon Zero Dawn better than humans... I want to see that! But this is what we want: machines that can operate like us. Our proposal for reinforcement learning work is detailed here. It uses a predictive neural network that can operate continuously and an associative memory to store recent experiences. No more recurrent neural networks — recurrent neural networks (RNNs) have their days numbered.
RNNs are particularly bad at parallelizing for training and are slow even on special custom machines, due to their very high memory bandwidth usage: as such, they are memory-bandwidth-bound rather than computation-bound (see here for more details). Attention-based neural networks are more efficient and faster to train and deploy, and they suffer much less from scalability issues in training and deployment. Attention in neural networks has the potential to really revolutionize a lot of architectures, yet it has not been as widely recognized as it should be. The combination of associative memories and attention is at the heart of the next wave of neural network advancements. Attention has already been shown to be able to learn sequences as well as RNNs, with up to 100x less computation! Who can ignore that? We recognize that attention-based neural networks are going to slowly supplant RNN-based speech recognition, and also find their way into reinforcement learning architectures and AI in general. Localization of information in categorization neural networks — we have talked extensively here about how we can localize and detect key points in images and video. This is practically a solved problem that will be embedded in future neural network architectures. Hardware for deep learning is at the core of progress. Let us not forget that the rapid expansion of deep learning in 2008–2012 and in recent years is mainly due to hardware: And we have talked about hardware extensively before. But we need to give you a recent update! The last 1–2 years have seen a boom in the area of machine learning hardware, and in particular in hardware targeting deep neural networks. We have significant experience here, and we are FWDNXT, the makers of SnowFlake, a deep neural network accelerator. There are several companies working in this space: NVIDIA (obviously), Intel, Nervana, Movidius, Bitmain, Cambricon, Cerebras, DeePhi, Google, Graphcore, Groq, Huawei, ARM, Wave Computing. All are developing custom high-performance micro-chips that will be able to train and run deep neural networks. The key is to provide the lowest power and the highest measured performance while computing recent useful neural network operations, not raw theoretical operations per second, as many claim to do. But few people in the field understand how hardware can really change machine learning, neural networks and AI in general. And few understand what is important in micro-chips and how to develop them. Here is our list: About neuromorphic neural network hardware, please see here. We talked briefly about applications in the Goals section above, but we really need to go into details here. How are AI and neural networks going to get into our daily lives? Here is our list: I have almost 20 years of experience in neural networks in both hardware and software (a rare combination). See about me here: Medium, webpage, Scholar, LinkedIn, and more... If you found this article useful, please consider a donation to support more tutorials and blogs. Any contribution can make a difference! For interesting additional reading, please see:
" Devin Soni,5.8K,4,https://towardsdatascience.com/spiking-neural-networks-the-next-generation-of-machine-learning-84e167f4eb2b?source=tag_archive---------7----------------,"Spiking Neural Networks, the Next Generation of Machine Learning","Everyone who has been remotely tuned in to recent progress in machine learning has heard of the current 2nd generation of artificial neural networks used for machine learning. These are generally fully connected, take in continuous values, and output continuous values. Although they have allowed us to make breakthrough progress in many fields, they are biologically inaccurate and do not actually mimic the mechanisms of our brain's neurons. The 3rd generation of neural networks, spiking neural networks, aims to bridge the gap between neuroscience and machine learning, using biologically realistic models of neurons to carry out computation. A spiking neural network (SNN) is fundamentally different from the neural networks that the machine learning community knows. SNNs operate using spikes, which are discrete events that take place at points in time, rather than continuous values. The occurrence of a spike is determined by differential equations that represent various biological processes, the most important of which is the membrane potential of the neuron. Essentially, once a neuron reaches a certain potential, it spikes, and the potential of that neuron is reset. The most common model for this is the leaky integrate-and-fire (LIF) model. Additionally, SNNs are often sparsely connected and take advantage of specialized network topologies. At first glance, this may seem like a step backwards. We have moved from continuous outputs to binary ones, and these spike trains are not very interpretable. However, spike trains offer us an enhanced ability to process spatio-temporal data, or in other words, real-world sensory data. The spatial aspect refers to the fact that neurons are only connected to neurons local to them, so they inherently process chunks of the input separately (similar to how a CNN would, using a filter). The temporal aspect refers to the fact that spike trains occur over time, so what we lose in binary encoding, we gain in the temporal information of the spikes. This allows us to naturally process temporal data without the extra complexity that RNNs add. It has been proven, in fact, that spiking neurons are fundamentally more powerful computational units than traditional artificial neurons. Given that these SNNs are, in theory, more powerful than 2nd generation networks, it is natural to wonder why we do not see widespread use of them. The main issue currently standing in the way of practical use of SNNs is training. Although we have unsupervised biological learning methods such as Hebbian learning and STDP, there are no known effective supervised training methods for SNNs that offer higher performance than 2nd generation networks. Since spike trains are not differentiable, we cannot train SNNs using gradient descent without losing the precise temporal information in the spike trains. Therefore, in order to properly use SNNs for real-world tasks, we would need to develop an effective supervised learning method. This is a very difficult task, as doing so would involve determining how the human brain actually learns, given the biological realism in these networks. Another issue, one that we are much closer to solving, is that simulating SNNs on normal hardware is very computationally intensive, since it requires simulating differential equations.
However, neuromorphic hardware such as IBM's TrueNorth aims to solve this by simulating neurons using specialized hardware that can take advantage of the discrete and sparse nature of neuronal spiking behavior. The future of SNNs therefore remains unclear. On one hand, they are the natural successor of our current neural networks, but on the other, they are quite far from being practical tools for most tasks. There are some current real-world applications of SNNs in real-time image and audio processing, but the literature on practical applications remains sparse. Most papers on SNNs are either theoretical, or show performance below that of a simple fully connected 2nd generation network. However, there are many teams working on developing SNN supervised learning rules, and I remain optimistic for the future of SNNs. Make sure you give this post 50 claps and my blog a follow if you enjoyed this post and want to see more. " Carlos E. Perez,3.9K,7,https://medium.com/intuitionmachine/neurons-are-more-complex-than-what-we-have-imagined-b3dd00a1dcd3?source=tag_archive---------8----------------,Surprise! Neurons are Now More Complex than We Thought!!,"One of the biggest misconceptions around is the idea that Deep Learning (DL) or Artificial Neural Networks (ANN) mimic biological neurons. At best, an ANN mimics a cartoonish version of a 1957 model of a neuron. Anyone claiming Deep Learning is biologically inspired is doing so for marketing purposes, or has never bothered to read the biological literature. Neurons in Deep Learning are essentially mathematical functions that compute a similarity measure between their inputs and their internal weights. The closer the match, the more likely an action is performed (i.e. the output is not sent to zero). There are exceptions to this model (see: autoregressive networks); however, it is general enough to include the perceptron, convolutional networks and RNNs. Neurons are very different from DL constructs. They don't maintain continuous signals but rather exhibit spiking, or event-driven, behavior. So, when you hear about "neuromorphic" hardware, these are systems inspired by "integrate and spike" neurons. These kinds of systems, at best, get a lot of press (see: IBM TrueNorth), but have never been shown to be effective. However, there has been some research work that has shown some progress (see: https://arxiv.org/abs/1802.02627v1). If you ask me, if you truly want to build biologically inspired cognition, then you should at the very least explore systems that are not continuous like DL. Biological systems, by nature, will use the least amount of energy to survive. DL systems, in stark contrast, are power hungry. That's because DL is a brute-force method of achieving cognition. We know it works; we just don't know how to scale it down. Jeff Hawkins of Numenta has always lamented that a more biologically inspired approach is needed. So, in his research on building cognitive machinery, he has architected systems that try to more closely mirror the structure of the neocortex. Numenta's model of a neuron is considerably more elaborate than the Deep Learning model of a neuron, as you can see in this graphic: The team at Numenta is betting on this approach in the hopes of creating something that is more capable than Deep Learning. It hasn't been proved to be anywhere near successful.
They've been doing this long enough that the odds of them succeeding are diminishing over time. By contrast, Deep Learning (despite its cartoon model of a neuron) has been shown to be unexpectedly effective in performing all kinds of mind-boggling feats of cognition. Deep Learning is doing something extraordinarily right; we just don't know exactly what that is! Unfortunately, we now have to throw a new monkey wrench into all this research. New experiments on the nature of neurons have revealed that biological neurons are even more complex than we had imagined them to be: In short, there is a lot more going on inside a single neuron than the simple idea of integrate-and-fire. Neurons may not be pure functions dependent on a single parameter (i.e. a weight); rather, they may be stateful machines. Alternatively, perhaps the weight is not even single-valued, and instead requires complex values or maybe higher dimensions. This is all behavior that research has yet to explore, and thus we have little understanding of it to date. If you think this throws a monkey wrench into our understanding, there's an even newer discovery that reveals even greater complexity: What this research reveals is that there is a mechanism for neurons to communicate with each other by sending packages of RNA code. To clarify, these are packages of instructions and not packages of data. There is a profound difference between sending code and sending data. This implies that the behavior of one neuron can change the behavior of another neuron; not through observation, but rather through injection of behavior. This code-exchange mechanism hints at the validity of my earlier conjecture: "Are biological brains made of only discrete logic?" Experimental evidence reveals a new reality. Even at the smallest unit of our cognition, there is a kind of conversational cognition going on between individual neurons that modifies each other's behavior. Thus, not only are neurons machines with state, they are also machines with an instruction set and a way to send code to each other. I'm sorry, but this is just another level of complexity. There are two obvious ramifications of these experimental discoveries. The first is that our estimates of the computational capabilities of the human brain are likely to be at least an order of magnitude off. The second is that research will begin in earnest to explore DL architectures with more complex internal node (or neuron) structures. If we were to make the rough argument that a single neuron performs a single operation, the total capacity of the human brain would be measured at 38 peta operations per second. If we were then to assume a DL model in which these operations are equal to floating point operations, then a 38 petaflops system would be equivalent in capability. The top-ranked supercomputer, China's Sunway TaihuLight, is estimated at 125 petaflops. However, if the new results reveal 10x more computation, then the number should be 380 petaflops, and we perhaps have breathing room until 2019. What is obvious, however, is that biological brains actually perform much more cognition with less computation. The second consequence is that it's now time to get back to the drawing board and begin to explore more complex kinds of neurons. The more complex kinds we've seen to date are the ones derived from the LSTM. Here is the result of a brute-force architecture search for LSTM-like neurons: It's not clear why these more complex LSTMs are more effective.
Only the architecture search algorithm knows, but it can't explain itself. There is a newly released paper that explores more complex hand-engineered LSTMs and reveals measurable improvements over standard LSTMs: In summary, a research plan that explores more complex kinds of neurons may bear promising fruit. This is not unlike the research that explores the use of complex values in neural networks. In those complex-valued networks, performance improvements are noticed only in RNNs. This should indicate that these internal neuron complexities may be necessary for capabilities beyond simple perception. I suspect that these complexities are necessary for the kind of advanced cognition that seems to evade current Deep Learning systems: robustness to adversarial features, learning to forget, learning what to ignore, learning abstraction, and recognizing contextual switching. I predict that in the near future we shall see more aggressive research in this area. After all, nature is already unequivocally telling us that neurons are individually more complex, and therefore our own neuron models may also need to be more complex. Perhaps we need something as complicated as a Grassmann algebra to make progress. ;-) " Nityesh Agarwal,2.4K,13,https://towardsdatascience.com/wth-does-a-neural-network-even-learn-a-newcomers-dilemma-bd8d1bbbed89?source=tag_archive---------9----------------,"WTH does a neural network even learn??" — a newcomer’s dilemma,"I believe we all have that psychologist/philosopher in our brains who likes to ponder how thinking happens. There. A simple, clear bird's-eye view of what neural networks learn: they learn "increasingly more complex concepts". Doesn't that feel familiar? Isn't that how we learn anything at all? For instance, let's consider how we, as kids, probably learnt to recognise objects and animals. See? So, neural networks learn like we do! It almost eases the mind to believe that we have this intangible sort of.. man-made "thing" that is analogous to the mind itself! It is especially appealing to someone who has just begun his or her Deep Learning journey. But NO. A neural network's learning is NOT ANALOGOUS to our own. Almost all the credible guides and 'starter packs' on the subject of deep learning come with a warning, something along the lines of: ...and that's where all the confusion begins! I think this is mostly because of the way in which most of the tutorials and beginner-level books approach the subject. Let's see how Michael Nielsen describes what the hidden neurons are doing in his book, Neural Networks and Deep Learning: He, like many others, uses the analogy between neural networks and the human mind to try to explain neural networks. The way lines and edges make loops, which then help in recognising some digits, is what we would think of doing. Many other tutorials try to use a similar analogy to explain what it means to build a hierarchy of knowledge. I have to say that because of this analogy, I understand neural nets better. But it is one of those paradoxes: the very analogy that makes a difficult concept intelligible to the masses can also create an illusion of knowledge among them. Readers need to understand that it is just an analogy. Nothing more, nothing less.
They need to understand that every simple analogy needs to be followed by more rigorous, seemingly difficult explanations. Now don't get me wrong. I am deeply thankful to Michael Nielsen for writing this book. It is one of the best books on the subject out there. He is careful in mentioning that this is "just for the sake of argument". But I took it to mean this: maybe the network won't use the same exact pieces. Maybe it will figure out some other pieces and join them in some other way to recognise the digits. But the essence will be the same. Right? I mean, each of those pieces has to be some kind of an edge or a line or some loopy structure. After all, it doesn't seem like there are other possibilities if you want to build a hierarchical structure to solve the problem of recognising digits. As I gained a better intuition about neural networks and how they work, I understood that this view is obviously wrong. It hit me... Let's consider loops. Being able to identify a loop is essential for us humans to write digits: an 8 is two loops joined end to end, a 9 is a loop with a tail under it, and a 6 is a loop with a tail up top. But when it comes to recognising digits in an image, features like loops seem difficult and infeasible for a neural network (remember, I'm talking about vanilla neural networks, or MLPs, here). I know it's just a lot of "hand-wavy" reasoning, but I think it is enough to be convincing. Probably the edges and all the other hand-engineered features will face similar problems. ...and there's the dilemma! I had no clue about the answer, or how to find it, until 3blue1brown released a set of videos about neural networks. It was Grant Sanderson's take on explaining the subject to newcomers. Maybe even he felt that there were some missing pieces in the explanations by other people and that he could address them in his tutorials. And boy, did he! Grant Sanderson of 3blue1brown, who uses a structure with 2 hidden layers, says: The very loops and edges that we ruled out above. They were not looking for loops or edges or anything even remotely close! They were looking for... well, something inexplicable... some strange patterns that could be confused for random noise! I found those weight matrix images (in the above screenshot) really fascinating. I thought of them as a Lego puzzle. The weight matrix images were like the elementary Lego blocks, and my task was to figure out a way to arrange them together so that I could create all 10 digits. This idea was inspired by the excerpt of Neural Networks and Deep Learning that I posted above. There we saw how we could assemble a 0 using hand-made features like edges and curves. So I thought that, maybe, we could do the same with the features that the neural network actually found good. All I needed was those weight matrix images that were used in 3blue1brown's video. Now, the problem was that Grant had put only 7 images in the video. So I was gonna have to generate them on my own and create my very own set of Lego blocks! I imported the code used in Michael Nielsen's book into a Jupyter notebook. Then I extended the Network class in there to include methods that would help me visualise the weight matrices. One pixel for every connection in the network. One image for each neuron, showing how much it 'likes' (colour: blue) or 'dislikes' (colour: red) the previous-layer neurons.
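A minimal sketch of that visualisation idea, assuming a Nielsen-style Network whose first weight matrix has shape (n_hidden, 784); the function name and plotting choices here are mine, not the book's:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_hidden_weights(weights, n_cols=6):
    """Show each hidden neuron's 784 incoming weights as a 28x28 heat map.

    `weights` is assumed to be an (n_hidden, 784) array, e.g. net.weights[0]
    from a Nielsen-style Network trained on MNIST. Blue means a positive
    ("liked") connection, red a negative ("disliked") one.
    """
    n_hidden = weights.shape[0]
    n_rows = int(np.ceil(n_hidden / n_cols))
    fig, axes = plt.subplots(n_rows, n_cols, figsize=(2 * n_cols, 2 * n_rows))
    for i, ax in enumerate(axes.flat):
        ax.axis("off")
        if i < n_hidden:
            ax.imshow(weights[i].reshape(28, 28), cmap="bwr_r")  # blue positive, red negative
    plt.show()

# Random weights standing in for a trained network's first layer:
plot_hidden_weights(np.random.randn(30, 784))
```

Passing a trained net's first weight matrix (30x784 for the network described next) in place of the random array produces the kind of collage discussed below.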
So, if I were to look at the image belonging to one of the neurons in the hidden layer, it would be like a heat map showing one feature, one basic Lego block that will be used to recognise digits. Blue pixels would represent connections that it "likes" whereas red ones would represent the connections that it "dislikes". I trained a neural network that had: Notice that we will have 30 different types of basic Lego blocks for our Lego puzzle here, because that's the size of our hidden layer. And... here's what they look like! These are the features that we were looking for! The ones that are better than loops and edges, according to the network. And here's how it classifies all 10 digits: And guess what? None of them make any sense!! None of the features seem to capture any isolated, distinguishable feature in the input image. All of them could be mistaken for just randomly shaped blobs at randomly chosen places. I mean, just look at how it identifies a '0': This is the weight matrix image for the output neuron that recognises '0's: To be clear, the pixels in this image represent the weights connecting the hidden layer to the output neuron that recognises '0's. We shall take only a handful of the most useful features for each digit into account. To do that, we can visually select the most intense blue pixels and the most intense red pixels. Here, the blue ones should give us the most useful features and the red ones should give us the most dreaded ones (think of it as the neuron saying, "The image will absolutely *not* match this prototype if it is a 0"). Indices of the three most intense blue pixels: 3, 6, 26. Indices of the three most intense red pixels: 5, 18, 22. Matrices 6 and 26 seem to capture something like a blue boundary of sorts surrounding inner red pixels: exactly what could actually help in identifying a '0'. But what about matrix 3? It does not capture any feature we can even explain in words. The same goes for matrix 18. Why would the neuron not like it? It seems quite similar to matrix 3. And let's not even go into the weird blue 'S' in 22. Nonsensical, see! Let's do it for '1': Indices of the three most intense blue pixels: 0, 11, 16. Indices of the top two most intense red pixels: 7, 20. I have no words for this one! I won't even try to comment. In what world can THOSE be used to identify 1's!? Now, the much anticipated '8' (how will it represent the 2 loops in it??): Top 3 most intense blue pixels: 1, 6, 14. Top 3 most intense red pixels: 7, 24, 27. Nope, this isn't any good either. There seem to be no loops like we were expecting it to have. But there is another interesting thing to notice here: a majority of the pixels in the output-layer neuron image (the one above the collage) are red. It seems like the network has figured out a way to recognise 8s using features that it does not like! So, NO. I couldn't put digits together using those features as Lego blocks. I failed real bad at the task. But to be fair to myself, those features weren't so much Lego-blocky either! Here's why: So, there it is. Neural networks can be said to learn like us if you consider the way they build hierarchies of features just like we do. But when you see the features themselves, they are nothing like what we would use. The networks give you almost no explanation for the features that they learn. Neural networks are good function approximators. When we build and train one, we mostly just care about its accuracy: on what percentage of the test samples does it give correct results?
This works incredibly well for a lot of purposes because modern neural nets can have remarkably high accuracies: upward of 98% is not uncommon (meaning that the chances of failure are just 1 in 100!). But here's the catch: when they are wrong, there's no easy way to understand the reason why. They can't be "debugged" in the traditional sense. For example, here's an embarrassing incident that happened with Google because of this: Understanding what neural networks learn is a subject of great importance. It is crucial to unleashing the true power of deep learning. It will help us in several ways. A few weeks ago, The New York Times Magazine ran a story about how neural networks were trained to predict the death of cancer patients with remarkable accuracy. Here's what the writer, an oncologist, said: I think I can strongly relate to this because of my little project. :-) During the little project that I described earlier, I stumbled upon a few other results that I found really cool and worth sharing. So here they are. Smaller networks: I wanted to see how small I could make the hidden layer while still getting a considerable accuracy across my test set. It turns out that with 10 neurons, the network was able to classify 9343 out of 10000 test images correctly. That's 93.43% accuracy at classifying images that it has never seen, with just 10 hidden neurons. Just 10 different types of Lego blocks to recognise 10 digits!! I find this incredibly fascinating. Of course, these weights don't make much sense either! In case you are curious, I tried it with 5 neurons too and got an accuracy of 86.65%; with 4 neurons, 83.73%; below that it dropped very steeply: 3 neurons, 58.75%; 2 neurons, 22.80%. Weight initialisation + regularisation makes a LOT of difference: just regularising your network and using good initialisations for the weights can have a huge effect on what your network learns. Let me demonstrate. I used the same network architecture, meaning the same number of layers and the same number of neurons in each layer. I then trained 2 Network objects: one without regularisation and using the same old np.random.randn(), whereas in the other one I used regularisation along with np.random.randn()/sqrt(n). This is what I observed: Yeah! I was shocked too! (Note: I have shown the weight matrices associated with different index neurons in the above collage. This is because, due to different initialisations, even the ones at the same index learn different features. So I chose the ones that appear to make the effect most striking.) To know more about weight initialisation techniques in neural networks, I recommend that you start here. If you want to discuss this article, or any other project that you have in mind, or really anything AI, please feel free to comment below or drop me a message on LinkedIn, Facebook or Twitter. I have learnt a lot more about deep learning since I did the project in this article (like completing the Deep Learning Specialisation at Coursera! 😄). Don't hesitate to reach out if you think I could be of any help. Thank you for reading! 😄 You can follow me on Twitter: https://twitter.com/nityeshaga; I won't spam your feed. 😉 Originally published on the Zeolearn blog. "