Dataset columns: author: string (3–31 chars) · claps: string (1–5 chars) · reading_time: int64 (2–31) · link: string (92–277 chars) · title: string (24–104 chars) · text: string (1.35k–44.5k chars)
Max Pechyonkin
23K
8
https://medium.com/ai%C2%B3-theory-practice-business/understanding-hintons-capsule-networks-part-i-intuition-b4b559d1159b?source=tag_archive---------4----------------
Understanding Hinton’s Capsule Networks. Part I: Intuition.
Part I: Intuition (you are reading it now)
Part II: How Capsules Work
Part III: Dynamic Routing Between Capsules
Part IV: CapsNet Architecture
Quick announcement about our new publication AI³. We are getting the best writers together to talk about the Theory, Practice, and Business of AI and machine learning. Follow it to stay up to date on the latest trends. Last week, Geoffrey Hinton and his team published two papers that introduced a completely new type of neural network based on so-called capsules. In addition to that, the team published an algorithm, called dynamic routing between capsules, that allows such a network to be trained. For everyone in the deep learning community, this is huge news, and for several reasons. First of all, Hinton is one of the founders of deep learning and an inventor of numerous models and algorithms that are widely used today. Secondly, these papers introduce something completely new, and this is very exciting because it will most likely stimulate an additional wave of research and very cool applications. In this post, I will explain why this new architecture is so important, as well as the intuition behind it. In the following posts I will dive into technical details. However, before talking about capsules, we need to have a look at CNNs, which are the workhorse of today's deep learning. CNNs (convolutional neural networks) are awesome. They are one of the reasons deep learning is so popular today. They can do amazing things that people used to think computers would not be capable of doing for a long, long time. Nonetheless, they have their limits and they have fundamental drawbacks. Let us consider a very simple and non-technical example. Imagine a face. What are its components? We have the face oval, two eyes, a nose and a mouth. For a CNN, the mere presence of these objects can be a very strong indicator that there is a face in the image. Orientation and relative spatial relationships between these components are not very important to a CNN. How do CNNs work? The main component of a CNN is a convolutional layer. Its job is to detect important features in the image pixels. Layers closer to the input will learn to detect simple features such as edges and color gradients, whereas higher layers will combine simple features into more complex ones. Finally, dense layers at the top of the network combine very high-level features and produce classification predictions. An important thing to understand is that higher-level features combine lower-level features as a weighted sum: activations of a preceding layer are multiplied by the following layer's neuron weights and added up, before being passed to an activation nonlinearity. Nowhere in this setup is there a pose (translational and rotational) relationship between the simpler features that make up a higher-level feature. The CNN approach to this issue is to use max pooling or successive convolutional layers that reduce the spatial size of the data flowing through the network, and therefore increase the "field of view" of higher layers' neurons, allowing them to detect higher-order features in a larger region of the input image. Max pooling is a crutch that made convolutional networks work surprisingly well, achieving superhuman performance in many areas. But do not be fooled by its performance: while CNNs work better than any model before them, max pooling is nonetheless losing valuable information.
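To make the information-loss point concrete, here is a small NumPy sketch (my illustration, not code from the post): two feature maps with the detected feature in different positions collapse to exactly the same output after 2x2 max pooling, so the precise position is gone.

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a single-channel feature map."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Two activation maps: the "detected feature" (the 1s) sits at different pixels...
a = np.array([[1, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
b = np.array([[0, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 1, 0]])

# ...yet after pooling the exact position inside each 2x2 window is discarded.
print(max_pool_2x2(a))                                     # [[1 0] [0 1]]
print(max_pool_2x2(b))                                     # [[1 0] [0 1]]
print(np.array_equal(max_pool_2x2(a), max_pool_2x2(b)))    # True
```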
Hinton himself stated that the fact that max pooling is working so well is a big mistake and a disaster: Of course, you can do away with max pooling and still get good results with traditional CNNs, but they still do not solve the key problem: In the example above, a mere presence of 2 eyes, a mouth and a nose in a picture does not mean there is a face, we also need to know how these objects are oriented relative to each other. Computer graphics deals with constructing a visual image from some internal hierarchical representation of geometric data. Note that the structure of this representation needs to take into account relative positions of objects. That internal representation is stored in computer’s memory as arrays of geometrical objects and matrices that represent relative positions and orientation of these objects. Then, special software takes that representation and converts it into an image on the screen. This is called rendering. Inspired by this idea, Hinton argues that brains, in fact, do the opposite of rendering. He calls it inverse graphics: from visual information received by eyes, they deconstruct a hierarchical representation of the world around us and try to match it with already learned patterns and relationships stored in the brain. This is how recognition happens. And the key idea is that representation of objects in the brain does not depend on view angle. So at this point the question is: how do we model these hierarchical relationships inside of a neural network? The answer comes from computer graphics. In 3D graphics, relationships between 3D objects can be represented by a so-called pose, which is in essence translation plus rotation. Hinton argues that in order to correctly do classification and object recognition, it is important to preserve hierarchical pose relationships between object parts. This is the key intuition that will allow you to understand why capsule theory is so important. It incorporates relative relationships between objects and it is represented numerically as a 4D pose matrix. When these relationships are built into internal representation of data, it becomes very easy for a model to understand that the thing that it sees is just another view of something that it has seen before. Consider the image below. You can easily recognize that this is the Statue of Liberty, even though all the images show it from different angles. This is because internal representation of the Statue of Liberty in your brain does not depend on the view angle. You have probably never seen these exact pictures of it, but you still immediately knew what it was. For a CNN, this task is really hard because it does not have this built-in understanding of 3D space, but for a CapsNet it is much easier because these relationships are explicitly modeled. The paper that uses this approach was able to cut error rate by 45% as compared to the previous state of the art, which is a huge improvement. Another benefit of the capsule approach is that it is capable of learning to achieve state-of-the art performance by only using a fraction of the data that a CNN would use (Hinton mentions this in his famous talk about what is wrongs with CNNs). In this sense, the capsule theory is much closer to what the human brain does in practice. In order to learn to tell digits apart, the human brain needs to see only a couple of dozens of examples, hundreds at most. 
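The "4D pose matrix" mentioned above is, in computer graphics terms, the 4x4 homogeneous transform that packs a 3x3 rotation and a translation into one matrix; part-to-whole relationships then compose by matrix multiplication. A minimal NumPy sketch of that idea (my own illustration, not code from the papers):

```python
import numpy as np

def pose(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Pose of a face in the world, and pose of a nose relative to the face.
face_in_world = pose(rot_z(np.pi / 6), [2.0, 0.0, 0.0])
nose_in_face  = pose(np.eye(3),        [0.0, 0.1, 0.5])

# Composing the relative poses gives the nose's pose in the world frame.
nose_in_world = face_in_world @ nose_in_face
print(nose_in_world)
```

This is exactly the kind of part-whole relationship a capsule's output is meant to carry, and which a single scalar activation cannot.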
CNNs, on the other hand, need tens of thousands of examples to achieve very good performance, which seems like a brute-force approach that is clearly inferior to what we do with our brains. The idea is really simple; surely someone must have come up with it before! And the truth is, Hinton has been thinking about it for decades. The reason there were no publications is simply that there was no technical way to make it work. One of the reasons is that computers were just not powerful enough in the pre-GPU era before around 2012. Another reason is that there was no algorithm that allowed a capsule network to be implemented and successfully trained (in the same way that the idea of artificial neurons has been around since the 1940s, but it was not until the mid-1980s that the backpropagation algorithm showed up and made it possible to successfully train deep networks). Likewise, the idea of capsules itself is not that new and Hinton has mentioned it before, but until now there was no algorithm to make it work. That algorithm is called "dynamic routing between capsules". It allows capsules to communicate with each other and create representations similar to scene graphs in computer graphics. Capsules introduce a new building block that can be used in deep learning to better model hierarchical relationships inside a neural network's internal knowledge representation. The intuition behind them is simple and elegant. Hinton and his team proposed a way to train such a network made up of capsules and successfully trained it on a simple data set, achieving state-of-the-art performance. This is very encouraging. Nonetheless, there are challenges. Current implementations are much slower than other modern deep learning models. Time will tell whether capsule networks can be trained quickly and efficiently. In addition, we need to see whether they work well on more difficult data sets and in different domains. In any case, the capsule network is a very interesting and already working model which will certainly be developed further over time and contribute to expanding the range of deep learning applications. This concludes part one of the series on capsule networks. In Part II, the more technical part, I will walk you through the CapsNet's internal workings step by step. You can follow me on Twitter. Let's also connect on LinkedIn. Deep Learning: The AI revolution is here! Navigate the ever-changing industry with our thoughtfully written articles, whether you're a researcher, engineer, or entrepreneur.
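Part III of this series covers dynamic routing properly; purely as a preview of the idea mentioned above, here is a compact NumPy sketch of routing by agreement, with shapes, iteration count and random inputs chosen for illustration rather than taken from the paper:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    # Squashing nonlinearity: keeps the vector's direction, maps its length into [0, 1).
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iters=3):
    """u_hat: predictions ("votes") from lower capsules for each higher capsule,
    shape (n_lower, n_higher, dim_higher). Returns the higher-capsule output vectors."""
    n_lower, n_higher, _ = u_hat.shape
    b = np.zeros((n_lower, n_higher))                  # routing logits
    for _ in range(n_iters):
        c = np.exp(b - b.max(axis=1, keepdims=True))
        c /= c.sum(axis=1, keepdims=True)              # softmax over higher capsules
        s = (c[..., None] * u_hat).sum(axis=0)         # weighted sum of votes
        v = squash(s)                                  # candidate outputs
        b = b + (u_hat * v[None, :, :]).sum(axis=-1)   # reward votes that agree with the output
    return v

# Example: 8 lower-level capsules voting for 3 higher-level capsules of dimension 4.
rng = np.random.default_rng(0)
print(dynamic_routing(rng.normal(size=(8, 3, 4))).shape)   # (3, 4)
```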
Slav Ivanov
3.9K
17
https://blog.slavv.com/the-1700-great-deep-learning-box-assembly-setup-and-benchmarks-148c5ebe6415?source=tag_archive---------5----------------
The $1700 great Deep Learning box: Assembly, setup and benchmarks
Updated April 2018: Uses CUDA 9, cuDNN 7 and Tensorflow 1.5. After years of using a thin client in the form of increasingly thinner MacBooks, I had gotten used to it. So when I got into Deep Learning (DL), I went straight for the brand new at the time Amazon P2 cloud servers. No upfront cost, the ability to train many models simultaneously and the general coolness of having a machine learning model out there slowly teaching itself. However, as time passed, the AWS bills steadily grew larger, even as I switched to 10x cheaper Spot instances. Also, I didn’t find myself training more than one model at a time. Instead, I’d go to lunch/workout/etc. while the model was training, and come back later with a clear head to check on it. But eventually the model complexity grew and took longer to train. I’d often forget what I did differently on the model that had just completed its 2-day training. Nudged by the great experiences of the other folks on the Fast.AI Forum, I decided to settle down and to get a dedicated DL box at home. The most important reason was saving time while prototyping models — if they trained faster, the feedback time would be shorter. Thus it would be easier for my brain to connect the dots between the assumptions I had for the model and its results. Then I wanted to save money — I was using Amazon Web Services (AWS), which offered P2 instances with Nvidia K80 GPUs. Lately, the AWS bills were around $60–70/month with a tendency to get larger. Also, it is expensive to store large datasets, like ImageNet. And lastly, I haven’t had a desktop for over 10 years and wanted to see what has changed in the meantime (spoiler alert: mostly nothing). What follows are my choices, inner monologue, and gotchas: from choosing the components to benchmarking. A sensible budget for me would be about 2 years worth of my current compute spending. At $70/month for AWS, this put it at around $1700 for the whole thing. You can check out all the components used. The PC Part Picker site is also really helpful in detecting if some of the components don’t play well together. The GPU is the most crucial component in the box. It will train these deep networks fast, shortening the feedback cycle. Disclosure: The following are affiliate links, to help me pay for, well, more GPUs. The choice is between a few of Nvidia’s cards: GTX 1070, GTX 1070 Ti, GTX 1080, GTX 1080 Ti and finally the Titan X. The prices might fluctuate, especially because some GPUs are great for cryptocurrency mining (wink, 1070, wink). On performance side: GTX 1080 Ti and Titan X are similar. Roughly speaking the GTX 1080 is about 25% faster than GTX 1070. And GTX 1080 Ti is about 30% faster than GTX 1080. The new GTX 1070 Ti is very close in performance to GTX 1080. Tim Dettmers has a great article on picking a GPU for Deep Learning, which he regularly updates as new cards come on the market. Here are the things to consider when picking a GPU: Considering all of this, I picked the GTX 1080 Ti, mainly for the training speed boost. I plan to add a second 1080 Ti soonish. Even though the GPU is the MVP in deep learning, the CPU still matters. For example, data preparation is usually done on the CPU. The number of cores and threads per core is important if we want to parallelize all that data prep. To stay on budget, I picked a mid-range CPU, the Intel i5 7500. It’s relatively cheap but good enough to not slow things down. 
Edit: As a few people have pointed out: “probably the biggest gotcha that is unique to DL/multi-GPU is to pay attention to the PCIe lanes supported by the CPU/motherboard” (by Andrej Karpathy). We want each GPU to have 16 PCIe lanes so it can eat data as fast as possible (16 GB/s for PCIe 3.0). This means that for two cards we need 32 PCIe lanes. However, the CPU I picked has only 16 lanes. So 2 GPUs would run in 2x8 mode (instead of 2x16). This might be a bottleneck, leading to less than ideal utilization of the graphics cards. Thus a CPU with 40 lanes is recommended. Edit 2: However, Tim Dettmers points out that having 8 lanes per card should only decrease performance by “0–10%” for two GPUs. So currently, my recommendation is: go with 16 PCIe lanes per video card unless it gets too expensive for you. Otherwise, 8 lanes should do as well. A good choice for a double-GPU machine would be an Intel Xeon processor like the E5–1620 v4 (40 PCIe lanes). Or if you want to splurge, go for a higher-end processor like the desktop i7–6850K. Memory (RAM): It’s nice to have a lot of memory if we are to be working with rather big datasets. I got 2 sticks of 16 GB, for a total of 32 GB of RAM, and plan to buy another 32 GB later. Following Jeremy Howard’s advice, I got a fast SSD to keep my OS and current data on, and then a slow spinning HDD for those huge datasets (like ImageNet). SSD: I remember when I got my first Macbook Air years ago, how blown away I was by the SSD speed. To my delight, a new generation of SSD called NVMe has made its way to market in the meantime. A 480 GB MyDigitalSSD NVMe drive was a great deal. This baby copies files at gigabytes per second. HDD: 2 TB Seagate. While SSDs have been getting fast, HDDs have been getting cheap. To somebody who has used Macbooks with a 128 GB disk for the last 7 years, having this much space feels almost obscene. The one thing that I kept in mind when picking a motherboard was the ability to support two GTX 1080 Ti cards, both in the number of PCI Express lanes (the minimum is 2x8) and the physical size of two cards. Also, make sure it’s compatible with the chosen CPU. An Asus TUF Z270 did it for me. An MSI X99A SLI PLUS should work great if you got an Intel Xeon CPU. Rule of thumb: the power supply should provide enough juice for the CPU and the GPUs, plus 100 watts extra. The Intel i5 7500 processor uses 65W, and the GPUs (1080 Ti) need 250W each, so I got a Deepcool 750W Gold PSU (currently unavailable, EVGA 750 GQ is similar). The “Gold” here refers to the power efficiency, i.e. how much of the power consumed is wasted as heat. The case should be the same form factor as the motherboard. Also, having enough LEDs to embarrass a Burner is a bonus. A friend recommended the Thermaltake N23 case, which I promptly got. No LEDs, sadly. Here is how much I spent on all the components (your costs may vary): $700 GTX 1080 Ti + $190 CPU + $230 RAM + $230 SSD + $66 HDD + $130 Motherboard + $75 PSU + $50 Case = $1671 total. Adding tax and fees, this nicely matches my preset budget of $1700. If you don’t have much experience with hardware and fear you might break something, professional assembly might be the best option. However, this was a great learning opportunity that I couldn’t pass up (even though I’ve had my share of hardware-related horror stories). The first and most important step is to read the installation manuals that came with each component.
Especially important for me, as I’ve done this before once or twice, and I have just the right amount of inexperience to mess things up. This is done before installing the motherboard in the case. Next to the processor there is a lever that needs to be pulled up. The processor is then placed on the base (double-check the orientation). Finally, the lever comes down to fix the CPU in place. . . But I had a quite the difficulty doing this: once the CPU was in position the lever wouldn’t go down. I actually had a more hardware-capable friend of mine video walk me through the process. Turns out the amount of force required to get the lever locked down was more than what I was comfortable with. Next is fixing the fan on top of the CPU: the fan legs must be fully secured to the motherboard. Consider where the fan cable will go before installing. The processor I had came with thermal paste. If yours doesn’t, make sure to put some paste between the CPU and the cooling unit. Also, replace the paste if you take off the fan. I put the Power Supply Unit (PSU) in before the motherboard to get the power cables snugly placed in case back side. . . . . Pretty straight forward — carefully place it and screw it in. A magnetic screwdriver was really helpful. Then connect the power cables and the case buttons and LEDs. . Just slide it in the M2 slot and screw it in. Piece of cake. The memory proved quite hard to install, requiring too much effort to properly lock in. A few times I almost gave up, thinking I must be doing it wrong. Eventually one of the sticks clicked in and the other one promptly followed. At this point, I turned the computer on to make sure it works. To my relief, it started right away! Finally, the GPU slid in effortlessly. 14 pins of power later and it was running. NB: Do not plug your monitor in the external card right away. Most probably it needs drivers to function (see below). Finally, it’s complete! Now that we have the hardware in place, only the soft part remains. Out with the screwdriver, in with the keyboard. Note on dual booting: If you plan to install Windows (because, you know, for benchmarks, totally not for gaming), it would be wise to do Windows first and Linux second. I didn’t and had to reinstall Ubuntu because Windows messed up the boot partition. Livewire has a detailed article on dual boot. Most DL frameworks are designed to work on Linux first, and eventually support other operating systems. So I went for Ubuntu, my default Linux distribution. An old 2GB USB drive was laying around and worked great for the installation. UNetbootin (OSX) or Rufus (Windows) can prepare the Linux thumb drive. The default options worked fine during the Ubuntu install. At the time of writing, Ubuntu 17.04 was just released, so I opted for the previous version (16.04), whose quirks are much better documented online. Ubuntu Server or Desktop: The Server and Desktop editions of Ubuntu are almost identical, with the notable exception of the visual interface (called X) not being installed with Server. I installed the Desktop and disabled autostarting X so that the computer would boot it in terminal mode. If needed, one could launch the visual desktop later by typing startx. Let’s get our install up to date. From Jeremy Howard’s excellent install-gpu script: To deep learn on our machine, we need a stack of technologies to use our GPU: Download CUDA from Nvidia, or just run the code below: Updated to specify version 9 of CUDA. Thanks to @zhanwenchen for the tip. 
If you need to add later versions of CUDA, click here. After CUDA has been installed the following code will add the CUDA installation to the PATH variable: Now we can verify that CUDA has been installed successfully by running This should have installed the display driver as well. For me, nvidia-smi showed ERR as the device name, so I installed the latest Nvidia drivers (as of May 2018) to fix it: Removing CUDA/Nvidia drivers If at any point the drivers or CUDA seem broken (as they did for me — multiple times), it might be better to start over by running: Since version 1.5 Tensorflow supports CuDNN 7, so we install that. To download CuDNN, one needs to register for a (free) developer account. After downloading, install with the following: Anaconda is a great package manager for python. I’ve moved to python 3.6, so will be using the Anaconda 3 version: The popular DL framework by Google. Installation: Validate Tensorfow install: To make sure we have our stack running smoothly, I like to run the tensorflow MNIST example: We should see the loss decreasing during training: Keras is a great high-level neural networks framework, an absolute pleasure to work with. Installation can’t be easier too: PyTorch is a newcomer in the world of DL frameworks, but its API is modeled on the successful Torch, which was written in Lua. PyTorch feels new and exciting, mostly great, although some things are still to be implemented. We install it by running: Jupyter is a web-based IDE for Python, which is ideal for data sciency tasks. It’s installed with Anaconda, so we just configure and test it: Now if we open http://localhost:8888 we should see a Jupyter screen. Run Jupyter on boot Rather than running the notebook every time the computer is restarted, we can set it to autostart on boot. We will use crontab to do this, which we can edit by running crontab -e . Then add the following after the last line in the crontab file: I use my old trusty Macbook Air for development, so I’d like to be able to log into the DL box both from my home network, also when on the run. SSH Key: It’s way more secure to use a SSH key to login instead of a password. Digital Ocean has a great guide on how to setup this. SSH tunnel: If you want to access your jupyter notebook from another computer, the recommended way is to use SSH tunneling (instead of opening the notebook to the world and protecting with a password). Let’s see how we can do this: 2. Then to connect over SSH tunnel, run the following script on the client: To test this, open a browser and try http://localhost:8888 from the remote machine. Your Jupyter notebook should appear. Setup out-of-network access: Finally to access the DL box from the outside world, we need 3 things: Setting up out-of-network access depends on the router/network setup, so I’m not going into details. Now that we have everything running smoothly, let’s put it to the test. We’ll be comparing the newly built box to an AWS P2.xlarge instance, which is what I’ve used so far for DL. The tests are computer vision related, meaning convolutional networks with a fully connected model thrown in. We time training models on: AWS P2 instance GPU (K80), AWS P2 virtual CPU, the GTX 1080 Ti and Intel i5 7500 CPU. Andres Hernandez points out that my comparison does not use Tensorflow that is optimized for these CPUs, which would have helped the them perform better. Check his insightful comment for more details. The “Hello World” of computer vision. The MNIST database consists of 70,000 handwritten digits. 
We run the Keras example on MNIST which uses Multilayer Perceptron (MLP). The MLP means that we are using only fully connected layers, not convolutions. The model is trained for 20 epochs on this dataset, which achieves over 98% accuracy out of the box. We see that the GTX 1080 Ti is 2.4 times faster than the K80 on AWS P2 in training the model. This is rather surprising as these 2 cards should have about the same performance. I believe this is because of the virtualization or underclocking of the K80 on AWS. The CPUs perform 9 times slower than the GPUs. As we will see later, it’s a really good result for the processors. This is due to the small model which fails to fully utilize the parallel processing power of the GPUs. Interestingly, the desktop Intel i5–7500 achieves 2.3x speedup over the virtual CPU on Amazon. A VGG net will be finetuned for the Kaggle Dogs vs Cats competition. In this competition, we need to tell apart pictures of dogs and cats. Running the model on CPUs for the same number of batches wasn’t feasible. Therefore we finetune for 390 batches (1 epoch) on the GPUs and 10 batches on the CPUs. The code used is on github. The 1080 Ti is 5.5 times faster that the AWS GPU (K80). The difference in the CPUs performance is about the same as the previous experiment (i5 is 2.6x faster). However, it’s absolutely impractical to use CPUs for this task, as the CPUs were taking ~200x more time on this large model that includes 16 convolutional layers and a couple semi-wide (4096) fully connected layers on top. A GAN (Generative adversarial network) is a way to train a model to generate images. GAN achieves this by pitting two networks against each other: A Generator which learns to create better and better images, and a Discriminator that tries to tell which images are real and which are dreamt up by the Generator. The Wasserstein GAN is an improvement over the original GAN. We will use a PyTorch implementation, that is very similar to the one by the WGAN author. The models are trained for 50 steps, and the loss is all over the place which is often the case with GANs. CPUs aren’t considered. The GTX 1080 Ti finishes 5.5x faster than the AWS P2 K80, which is in line with the previous results. The final benchmark is on the original Style Transfer paper (Gatys et al.), implemented on Tensorflow (code available). Style Transfer is a technique that combines the style of one image (a painting for example) and the content of another image. Check out my previous post for more details on how Style Transfer works. The GTX 1080 Ti outperforms the AWS K80 by a factor of 4.3. This time the CPUs are 30-50 times slower than graphics cards. The slowdown is less than on the VGG Finetuning task but more than on the MNIST Perceptron experiment. The model uses mostly the earlier layers of the VGG network, and I suspect this was too shallow to fully utilize the GPUs. The DL box is in the next room and a large model is training on it. Was it a wise investment? Time will tell but it is beautiful to watch the glowing LEDs in the dark and to hear its quiet hum as models are trying to squeeze out that extra accuracy percentage point. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Entrepreneur / Hacker Machine learning, Deep learning and other types of learning.
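For reference, the MNIST MLP benchmark above follows the stock Keras mnist_mlp recipe (two 512-unit dense layers with dropout, RMSprop, 20 epochs). The sketch below is along those lines using the tf.keras API; it is not the exact benchmark script from the post's GitHub repo.

```python
# A sketch close to Keras' classic mnist_mlp example; exact benchmark code may differ.
import numpy as np
from tensorflow import keras

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255
x_test = x_test.reshape(-1, 784).astype("float32") / 255
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

model = keras.Sequential([
    keras.layers.Dense(512, activation="relu", input_shape=(784,)),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(512, activation="relu"),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="rmsprop", metrics=["accuracy"])

# Typically reaches around 98% test accuracy after 20 epochs, in line with the post.
model.fit(x_train, y_train, batch_size=128, epochs=20, validation_data=(x_test, y_test))
```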
Stefan Kojouharov
14.2K
7
https://becominghuman.ai/cheat-sheets-for-ai-neural-networks-machine-learning-deep-learning-big-data-678c51b4b463?source=tag_archive---------6----------------
Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Big Data
Over the past few months, I have been collecting AI cheat sheets. From time to time I share them with friends and colleagues and recently I have been getting asked a lot, so I decided to organize and share the entire collection. To make things more interesting and give context, I added descriptions and/or excerpts for each major topic. This is the most complete list and the Big-O is at the very end, enjoy... This machine learning cheat sheet will help you find the right estimator for the job which is the most difficult part. The flowchart will help you check the documentation and rough guide of each estimator that will help you to know more about the problems and how to solve it. Scikit-learn (formerly scikits.learn) is a free software machine learning library for the Python programming language. It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy. In May 2017 Google announced the second-generation of the TPU, as well as the availability of the TPUs in Google Compute Engine.[12] The second-generation TPUs deliver up to 180 teraflops of performance, and when organized into clusters of 64 TPUs provide up to 11.5 petaflops. In 2017, Google’s TensorFlow team decided to support Keras in TensorFlow’s core library. Chollet explained that Keras was conceived to be an interface rather than an end-to-end machine-learning framework. It presents a higher-level, more intuitive set of abstractions that make it easy to configure neural networks regardless of the backend scientific computing library. NumPy targets the CPython reference implementation of Python, which is a non-optimizing bytecode interpreter. Mathematical algorithms written for this version of Python often run much slower than compiled equivalents. NumPy address the slowness problem partly by providing multidimensional arrays and functions and operators that operate efficiently on arrays, requiring rewriting some code, mostly inner loops using NumPy. The name ‘Pandas’ is derived from the term “panel data”, an econometrics term for multidimensional structured data sets. The term “data wrangler” is starting to infiltrate pop culture. In the 2017 movie Kong: Skull Island, one of the characters, played by actor Marc Evan Jackson is introduced as “Steve Woodward, our data wrangler”. SciPy builds on the NumPy array object and is part of the NumPy stack which includes tools like Matplotlib, pandas and SymPy, and an expanding set of scientific computing libraries. This NumPy stack has similar users to other applications such as MATLAB, GNU Octave, and Scilab. The NumPy stack is also sometimes referred to as the SciPy stack.[3] matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+. There is also a procedural “pylab” interface based on a state machine (like OpenGL), designed to closely resemble that of MATLAB, though its use is discouraged.[2] SciPy makes use of matplotlib. pyplot is a matplotlib module which provides a MATLAB-like interface.[6] matplotlib is designed to be as usable as MATLAB, with the ability to use Python, with the advantage that it is free. >>> If you like this list, you can let me know here. 
<<< Stefan is the founder of Chatbot’s Life, a Chatbot media and consulting firm. Chatbot’s Life has grown to over 150k views per month and has become the premium place to learn about Bots & AI online. Chatbot’s Life has also consulted many of the top Bot companies like Swelly, Instavest, OutBrain, NearGroup and a number of Enterprises.
Big-O Algorithm Cheat Sheet: http://bigocheatsheet.com/
Bokeh Cheat Sheet: https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Bokeh_Cheat_Sheet.pdf
Data Science Cheat Sheet: https://www.datacamp.com/community/tutorials/python-data-science-cheat-sheet-basics
Data Wrangling Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf
Data Wrangling: https://en.wikipedia.org/wiki/Data_wrangling
Ggplot Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf
Keras Cheat Sheet: https://www.datacamp.com/community/blog/keras-cheat-sheet#gs.DRKeNMs
Keras: https://en.wikipedia.org/wiki/Keras
Machine Learning Cheat Sheet: https://ai.icymi.email/new-machinelearning-cheat-sheet-by-emily-barry-abdsc/
Machine Learning Cheat Sheet: https://docs.microsoft.com/en-in/azure/machine-learning/machine-learning-algorithm-cheat-sheet
ML Cheat Sheet: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html
Matplotlib Cheat Sheet: https://www.datacamp.com/community/blog/python-matplotlib-cheat-sheet#gs.uEKySpY
Matplotlib: https://en.wikipedia.org/wiki/Matplotlib
Neural Networks Cheat Sheet: http://www.asimovinstitute.org/neural-network-zoo/
Neural Networks Graph Cheat Sheet: http://www.asimovinstitute.org/blog/
Neural Networks: https://www.quora.com/Where-can-find-a-cheat-sheet-for-neural-network
NumPy Cheat Sheet: https://www.datacamp.com/community/blog/python-numpy-cheat-sheet#gs.AK5ZBgE
NumPy: https://en.wikipedia.org/wiki/NumPy
Pandas Cheat Sheet: https://www.datacamp.com/community/blog/python-pandas-cheat-sheet#gs.oundfxM
Pandas: https://en.wikipedia.org/wiki/Pandas_(software)
Pandas Cheat Sheet: https://www.datacamp.com/community/blog/pandas-cheat-sheet-python#gs.HPFoRIc
PySpark Cheat Sheet: https://www.datacamp.com/community/blog/pyspark-cheat-sheet-python#gs.L=J1zxQ
Scikit-learn Cheat Sheet: https://www.datacamp.com/community/blog/scikit-learn-cheat-sheet
Scikit-learn: https://en.wikipedia.org/wiki/Scikit-learn
Scikit-learn Cheat Sheet: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html
SciPy Cheat Sheet: https://www.datacamp.com/community/blog/python-scipy-cheat-sheet#gs.JDSg3OI
SciPy: https://en.wikipedia.org/wiki/SciPy
TensorFlow Cheat Sheet: https://www.altoros.com/tensorflow-cheat-sheet.html
TensorFlow: https://en.wikipedia.org/wiki/TensorFlow
Founder of Chatbots Life. I help Companies Create Great Chatbots & AI Systems and share my Insights along the way. Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity.
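The NumPy entry above notes that vectorized array operations sidestep Python's slow interpreted loops; that difference is easy to see directly. A small illustrative comparison (timings will vary by machine):

```python
import time
import numpy as np

x = np.random.rand(1_000_000)

# Pure-Python loop: every element goes through the bytecode interpreter.
t0 = time.perf_counter()
total = 0.0
for v in x:
    total += v * v
t1 = time.perf_counter()

# Vectorized NumPy: the same sum of squares in a single compiled call.
t2 = time.perf_counter()
total_np = np.dot(x, x)
t3 = time.perf_counter()

print(f"loop:  {t1 - t0:.4f} s, result {total:.2f}")
print(f"numpy: {t3 - t2:.4f} s, result {total_np:.2f}")
```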
Vishal Maini
8K
13
https://medium.com/machine-learning-for-humans/supervised-learning-740383a2feab?source=tag_archive---------7----------------
Machine Learning for Humans, Part 2.1: Supervised Learning
How much money will we make by spending more dollars on digital advertising? Will this loan applicant pay back the loan or not? What’s going to happen to the stock market tomorrow? In supervised learning problems, we start with a data set containing training examples with associated correct labels. For example, when learning to classify handwritten digits, a supervised learning algorithm takes thousands of pictures of handwritten digits along with labels containing the correct number each image represents. The algorithm will then learn the relationship between the images and their associated numbers, and apply that learned relationship to classify completely new images (without labels) that the machine hasn’t seen before. This is how you’re able to deposit a check by taking a picture with your phone! To illustrate how supervised learning works, let’s examine the problem of predicting annual income based on the number of years of higher education someone has completed. Expressed more formally, we’d like to build a model that approximates the relationship f between the number of years of higher education X and corresponding annual income Y. One method for predicting income would be to create a rigid rules-based model for how income and education are related. For example: “I’d estimate that for every additional year of higher education, annual income increases by $5,000.” You could come up with a more complex model by including some rules about degree type, years of work experience, school tiers, etc. For example: “If they completed a Bachelor’s degree or higher, give the income estimate a 1.5x multiplier.” But this kind of explicit rules-based programming doesn’t work well with complex data. Imagine trying to design an image classification algorithm made of if-then statements describing the combinations of pixel brightnesses that should be labeled “cat” or “not cat”. Supervised machine learning solves this problem by getting the computer to do the work for you. By identifying patterns in the data, the machine is able to form heuristics. The primary difference between this and human learning is that machine learning runs on computer hardware and is best understood through the lens of computer science and statistics, whereas human pattern-matching happens in a biological brain (while accomplishing the same goals). In supervised learning, the machine attempts to learn the relationship between income and education from scratch, by running labeled training data through a learning algorithm. This learned function can be used to estimate the income of people whose income Y is unknown, as long as we have years of education X as inputs. In other words, we can apply our model to the unlabeled test data to estimate Y. The goal of supervised learning is to predict Y as accurately as possible when given new examples where X is known and Y is unknown. In what follows we’ll explore several of the most common approaches to doing so. The rest of this section will focus on regression. In Part 2.2 we’ll dive deeper into classification methods. Regression predicts a continuous target variable Y. It allows you to estimate a value, such as housing prices or human lifespan, based on input data X. Here, target variable means the unknown variable we care about predicting, and continuous means there aren’t gaps (discontinuities) in the value that Y can take on. A person’s weight and height are continuous values. 
Discrete variables, on the other hand, can only take on a finite number of values — for example, the number of kids somebody has is a discrete variable. Predicting income is a classic regression problem. Your input data X includes all relevant information about individuals in the data set that can be used to predict income, such as years of education, years of work experience, job title, or zip code. These attributes are called features, which can be numerical (e.g. years of work experience) or categorical (e.g. job title or field of study). You’ll want as many training observations as possible relating these features to the target output Y, so that your model can learn the relationship f between X and Y. The data is split into a training data set and a test data set. The training set has labels, so your model can learn from these labeled examples. The test set does not have labels, i.e. you don’t yet know the value you’re trying to predict. It’s important that your model can generalize to situations it hasn’t encountered before so that it can perform well on the test data. In our trivially simple 2D example, this could take the form of a .csv file where each row contains a person’s education level and income. Add more columns with more features and you’ll have a more complex, but possibly more accurate, model. How do we build models that make accurate, useful predictions in the real world? We do so by using supervised learning algorithms. Now let’s get to the fun part: getting to know the algorithms. We’ll explore some of the ways to approach regression and classification and illustrate key machine learning concepts throughout. “Draw the line. Yes, this counts as machine learning.” First, we’ll focus on solving the income prediction problem with linear regression, since linear models don’t work well with image recognition tasks (this is the domain of deep learning, which we’ll explore later). We have our data set X, and corresponding target values Y. The goal of ordinary least squares (OLS) regression is to learn a linear model that we can use to predict a new y given a previously unseen x with as little error as possible. We want to guess how much income someone earns based on how many years of education they received. Linear regression is a parametric method, which means it makes an assumption about the form of the function relating X and Y (we’ll cover examples of non-parametric methods later). Our model will be a function that predicts ŷ given a specific x: β0 is the y-intercept and β1 is the slope of our line, i.e. how much income increases (or decreases) with one additional year of education. Our goal is to learn the model parameters (in this case, β0 and β1) that minimize error in the model’s predictions. To find the best parameters: Graphically, in two dimensions, this results in a line of best fit. In three dimensions, we would draw a plane, and so on with higher-dimensional hyperplanes. Mathematically, we look at the difference between each real data point (y) and our model’s prediction (ŷ). Square these differences to avoid negative numbers and penalize larger differences, and then add them up and take the average. This is a measure of how well our data fits the line. For a simple problem like this, we can compute a closed form solution using calculus to find the optimal beta parameters that minimize our loss function. But as a cost function grows in complexity, finding a closed form solution with calculus is no longer feasible. 
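For the simple one-feature case above, the closed-form least-squares solution can be written out directly. A sketch with made-up education/income numbers (illustrative values, not data from the article):

```python
import numpy as np

# Hypothetical training data: years of higher education (X) and annual income (Y).
X = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
Y = np.array([30, 36, 41, 45, 52, 57, 61], dtype=float) * 1000

# Closed-form OLS for y_hat = b0 + b1*x: b1 = cov(X, Y) / var(X), b0 = mean(Y) - b1*mean(X).
b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
b0 = Y.mean() - b1 * X.mean()

pred = b0 + b1 * X
mse = np.mean((Y - pred) ** 2)   # the averaged squared-error loss described above
print(f"b0 = {b0:.0f}, b1 = {b1:.0f}, MSE = {mse:.0f}")
```

When the loss gets more complicated than this simple case, no such formula is available, which is where gradient descent comes in next.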
This is the motivation for an iterative approach called gradient descent, which allows us to minimize a complex loss function. “Put on a blindfold, take a step downhill. You’ve found the bottom when you have nowhere to go but up.” Gradient descent will come up over and over again, especially in neural networks. Machine learning libraries like scikit-learn and TensorFlow use it in the background everywhere, so it’s worth understanding the details. The goal of gradient descent is to find the minimum of our model’s loss function by iteratively getting a better and better approximation of it. Imagine yourself walking through a valley with a blindfold on. Your goal is to find the bottom of the valley. How would you do it? A reasonable approach would be to touch the ground around you and move in whichever direction the ground is sloping down most steeply. Take a step and repeat the same process continually until the ground is flat. Then you know you’ve reached the bottom of a valley; if you move in any direction from where you are, you’ll end up at the same elevation or further uphill. Going back to mathematics, the ground becomes our loss function, and the elevation at the bottom of the valley is the minimum of that function. Let’s take a look at the loss function we saw in regression: We see that this is really a function of two variables: β0 and β1. All the rest of the variables are determined, since X, Y, and n are given during training. We want to try to minimize this function. The function is f(β0,β1)=z. To begin gradient descent, you make some guess of the parameters β0 and β1 that minimize the function. Next, you find the partial derivatives of the loss function with respect to each beta parameter: [dz/dβ0, dz/dβ1]. A partial derivative indicates how much total loss is increased or decreased if you increase β0 or β1 by a very small amount. Put another way, how much would increasing your estimate of annual income assuming zero higher education (β0) increase the loss (i.e. inaccuracy) of your model? You want to go in the opposite direction so that you end up walking downhill and minimizing loss. Similarly, if you increase your estimate of how much each incremental year of education affects income (β1), how much does this increase loss (z)? If the partial derivative dz/β1 is a negative number, then increasing β1 is good because it will reduce total loss. If it’s a positive number, you want to decrease β1. If it’s zero, don’t change β1 because it means you’ve reached an optimum. Keep doing that until you reach the bottom, i.e. the algorithm converged and loss has been minimized. There are lots of tricks and exceptional cases beyond the scope of this series, but generally, this is how you find the optimal parameters for your parametric model. Overfitting: “Sherlock, your explanation of what just happened is too specific to the situation.” Regularization: “Don’t overcomplicate things, Sherlock. I’ll punch you for every extra word.” Hyperparameter (λ): “Here’s the strength with which I will punch you for every extra word.” A common problem in machine learning is overfitting: learning a function that perfectly explains the training data that the model learned from, but doesn’t generalize well to unseen test data. Overfitting happens when a model overlearns from the training data to the point that it starts picking up idiosyncrasies that aren’t representative of patterns in the real world. This becomes especially problematic as you make your model increasingly complex. 
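Before moving on: the downhill-walk procedure described above, specialized to the two-parameter squared-error loss, fits in a few lines of code. A sketch (the learning rate and iteration count are arbitrary illustrative choices):

```python
import numpy as np

def gradient_descent(X, Y, lr=0.01, n_iters=5000):
    """Minimize the mean squared error of y_hat = b0 + b1*x by stepping against the gradient."""
    b0, b1 = 0.0, 0.0
    n = len(X)
    for _ in range(n_iters):
        residual = (b0 + b1 * X) - Y            # prediction error at each point
        db0 = (2.0 / n) * residual.sum()        # partial derivative of the loss w.r.t. b0
        db1 = (2.0 / n) * (residual * X).sum()  # partial derivative of the loss w.r.t. b1
        b0 -= lr * db0                          # step downhill
        b1 -= lr * db1
    return b0, b1

X = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
Y = np.array([30, 36, 41, 45, 52, 57, 61], dtype=float) * 1000
print(gradient_descent(X, Y))   # approaches the closed-form b0, b1 from the previous sketch
```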
Underfitting is a related issue where your model is not complex enough to capture the underlying trend in the data. Remember that the only thing we care about is how the model performs on test data. You want to predict which emails will be marked as spam before they’re marked, not just build a model that is 100% accurate at reclassifying the emails it used to build itself in the first place. Hindsight is 20/20 — the real question is whether the lessons learned will help in the future. The model on the right has zero loss for the training data because it perfectly fits every data point. But the lesson doesn’t generalize. It would do a horrible job at explaining a new data point that isn’t yet on the line. Two ways to combat overfitting: 1. Use more training data. The more you have, the harder it is to overfit the data by learning too much from any single training example. 2. Use regularization. Add in a penalty in the loss function for building a model that assigns too much explanatory power to any one feature or allows too many features to be taken into account. The first piece of the sum above is our normal cost function. The second piece is a regularization term that adds a penalty for large beta coefficients that give too much explanatory power to any specific feature. With these two elements in place, the cost function now balances between two priorities: explaining the training data and preventing that explanation from becoming overly specific. The lambda coefficient of the regularization term in the cost function is a hyperparameter: a general setting of your model that can be increased or decreased (i.e. tuned) in order to improve performance. A higher lambda value will more harshly penalize large beta coefficients that could lead to potential overfitting. To decide the best value of lambda, you’d use a method called cross-validation which involves holding out a portion of the training data during training, and then seeing how well your model explains the held-out portion. We’ll go over this in more depth Here’s what we covered in this section: In the next section — Part 2.2: Supervised Learning II — we’ll talk about two foundational methods of classification: logistic regression and support vector machines. For a more thorough treatment of linear regression, read chapters 1–3 of An Introduction to Statistical Learning. The book is available for free online and is an excellent resource for understanding machine learning concepts with accompanying exercises. For more practice: To actually implement gradient descent in Python, check out this tutorial. And here is a more mathematically rigorous description of the same concepts. In practice, you’ll rarely need to implement gradient descent from scratch, but understanding how it works behind the scenes will allow you to use it more effectively and understand why things break when they do. More from Machine Learning for Humans 🤖👶 From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Research comms @DeepMindAI. Previously @Upstart, @Yale, @TrueVenturesTEC. Demystifying artificial intelligence & machine learning. Discussions on safe and intentional application of AI for positive social impact.
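The regularized cost described above (squared error plus a lambda-weighted penalty on large coefficients) changes the best-fit slope in a predictable way. A sketch of L2 ("ridge") regularization on the same made-up numbers as before; the closed-form ridge solution used here is the standard single-feature formula with an unpenalized intercept, not a formula from the article:

```python
import numpy as np

def ridge_loss(b0, b1, X, Y, lam):
    """Mean squared error plus an L2 penalty that discourages a large coefficient.
    By convention the intercept b0 is left out of the penalty."""
    mse = np.mean((Y - (b0 + b1 * X)) ** 2)
    return mse + lam * b1 ** 2

X = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
Y = np.array([30, 36, 41, 45, 52, 57, 61], dtype=float) * 1000

# A larger lambda shrinks the slope more aggressively (stronger regularization).
for lam in (0.0, 10.0, 1000.0):
    xc, yc = X - X.mean(), Y - Y.mean()
    b1 = np.sum(xc * yc) / (np.sum(xc ** 2) + lam * len(X))   # closed-form ridge slope
    b0 = Y.mean() - b1 * X.mean()
    print(f"lambda={lam:>7}: b0={b0:8.0f}, b1={b1:7.0f}, loss={ridge_loss(b0, b1, X, Y, lam):.0f}")
```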
Arvind N
9.5K
8
https://towardsdatascience.com/thoughts-after-taking-the-deeplearning-ai-courses-8568f132153?source=tag_archive---------8----------------
Thoughts after taking the Deeplearning.ai courses – Towards Data Science
[Update — Feb 2nd 2018: When this blog post was written, only 3 courses had been released. All 5 courses in this specialization are now out. I will have a follow-up blog post soon.] Between a full time job and a toddler at home, I spend my spare time learning about the ideas in cognitive science & AI. Once in a while a great paper/video/course comes out and you’re instantly hooked. Andrew Ng’s new deeplearning.ai course is like that Shane Carruth or Rajnikanth movie that one yearns for! Naturally, as soon as the course was released on coursera, I registered and spent the past 4 evenings binge watching the lectures, working through quizzes and programming assignments. DL practitioners and ML engineers typically spend most days working at an abstract Keras or TensorFlow level. But it’s nice to take a break once in a while to get down to the nuts and bolts of learning algorithms and actually do back-propagation by hand. It is both fun and incredibly useful! Andrew Ng’s new adventure is a bottom-up approach to teaching neural networks — powerful non-linearity learning algorithms, at a beginner-mid level. In classic Ng style, the course is delivered through a carefully chosen curriculum, neatly timed videos and precisely positioned information nuggets. Andrew picks up from where his classic ML course left off and introduces the idea of neural networks using a single neuron(logistic regression) and slowly adding complexity — more neurons and layers. By the end of the 4 weeks(course 1), a student is introduced to all the core ideas required to build a dense neural network such as cost/loss functions, learning iteratively using gradient descent and vectorized parallel python(numpy) implementations. Andrew patiently explains the requisite math and programming concepts in a carefully planned order and a well regulated pace suitable for learners who could be rusty in math/coding. Lectures are delivered using presentation slides on which Andrew writes using digital pens. It felt like an effective way to get the listener to focus. I felt comfortable watching videos at 1.25x or 1.5x speed. Quizzes are placed at the end of each lecture sections and are in the multiple choice question format. If you watch the videos once, you should be able to quickly answer all the quiz questions. You can attempt quizzes multiple times and the system is designed to keep your highest score. Programming assignments are done via Jupyter notebooks — powerful browser based applications. Assignments have a nice guided sequential structure and you are not required to write more than 2–3 lines of code in each section. If you understand the concepts like vectorization intuitively, you can complete most programming sections with just 1 line of code! After the assignment is coded, it takes 1 button click to submit your code to the automated grading system which returns your score in a few minutes. Some assignments have time restrictions — say, three attempts in 8 hours etc. Jupyter notebooks are well designed and work without any issues. Instructions are precise and it feels like a polished product. Anyone interested in understanding what neural networks are, how they work, how to build them and the tools available to bring your ideas to life. If your math is rusty, there is no need to worry — Andrew explains all the required calculus and provides derivatives at every occasion so that you can focus on building the network and concentrate on implementing your ideas in code. 
If your programming is rusty, there is a nice coding assignment to teach you numpy. But I recommend learning python first on codecademy. Let me explain this with an analogy: Assume you are trying to learn how to drive a car. Jeremy’s FAST.AI course puts you in the drivers seat from the get-go. He teaches you to move the steering wheel, press the brake, accelerator etc. Then he slowly explains more details about how the car works — why rotating the wheel makes the car turn, why pressing the brake pedal makes you slow down and stop etc. He keeps getting deeper into the inner workings of the car and by the end of the course, you know how the internal combustion engine works, how the fuel tank is designed etc. The goal of the course is to get you driving. You can choose to stop at any point after you can drive reasonably well — there is no need to learn how to build/repair the car. Andrew’s DL course does all of this, but in the complete opposite order. He teaches you about internal combustion engine first! He keeps adding layers of abstraction and by the end of the course you are driving like an F1 racer! The fast AI course mainly teaches you the art of driving while Andrew’s course primarily teaches you the engineering behind the car. If you have not done any machine learning before this, don’t take this course first. The best starting point is Andrew’s original ML course on coursera. After you complete that course, please try to complete part-1 of Jeremy Howard’s excellent deep learning course. Jeremy teaches deep learning Top-Down which is essential for absolute beginners. Once you are comfortable creating deep neural networks, it makes sense to take this new deeplearning.ai course specialization which fills up any gaps in your understanding of the underlying details and concepts. 2. Andrew stresses on the engineering aspects of deep learning and provides plenty of practical tips to save time and money — the third course in the DL specialization felt incredibly useful for my role as an architect leading engineering teams. 3. Jargon is handled well. Andrew explains that an empirical process = trial & error — He is brutally honest about the reality of designing and training deep nets. At some point I felt he might have as well just called Deep Learning as glorified curve-fitting 4. Squashes all hype around DL and AI — Andrew makes restrained, careful comments about proliferation of AI hype in the mainstream media and by the end of the course it is pretty clear that DL is nothing like the terminator. 5.Wonderful boilerplate code that just works out of the box! 6. Excellent course structure. 7. Nice, consistent and useful notation. Andrew strives to establish a fresh nomenclature for neural nets and I feel he could be quite successful in this endeavor. 8. Style of teaching that is unique to Andrew and carries over from ML — I could feel the same excitement I felt in 2013 when I took his original ML course. 9.The interviews with deep learning heroes are refreshing — It is motivating and fun to hear personal stories and anecdotes. I wish that he’d said ‘concretely’ more often! 2. Good tools are important and will help you accelerate your learning pace. I bought a digital pen after seeing Andrew teach with one. It helped me work more efficiently. 3. There is a psychological reason why I recommend the Fast.ai course before this one. Once you find your passion, you can learn uninhibited. 4. You just get that dopamine rush each time you score full points: 5. 
Don’t be scared by DL jargon (hyperparameters = settings, architecture/topology=style etc.) or the math symbols. If you take a leap of faith and pay attention to the lectures, Andrew shows why the symbols and notation are actually quite useful. They will soon become your tools of choice and you will wield them with style! Thanks for reading and best wishes! Update: Thanks for the overwhelmingly positive response! Many people are asking me to explain gradient descent and the differential calculus. I hope this helps! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Interested in Strong AI Sharing concepts, ideas, and codes.
Blaise Aguera y Arcas
8.7K
15
https://medium.com/@blaisea/do-algorithms-reveal-sexual-orientation-or-just-expose-our-stereotypes-d998fafdf477?source=tag_archive---------0----------------
Do algorithms reveal sexual orientation or just expose our stereotypes?
by Blaise Agüera y Arcas, Alexander Todorov and Margaret Mitchell A study claiming that artificial intelligence can infer sexual orientation from facial images caused a media uproar in the Fall of 2017. The Economist featured this work on the cover of their September 9th magazine; on the other hand two major LGBTQ organizations, The Human Rights Campaign and GLAAD, immediately labeled it “junk science”. Michal Kosinski, who co-authored the study with fellow researcher Yilun Wang, initially expressed surprise, calling the critiques “knee-jerk” reactions. However, he then proceeded to make even bolder claims: that such AI algorithms will soon be able to measure the intelligence, political orientation, and criminal inclinations of people from their facial images alone. Kosinski’s controversial claims are nothing new. Last year, two computer scientists from China posted a non-peer-reviewed paper online in which they argued that their AI algorithm correctly categorizes “criminals” with nearly 90% accuracy from a government ID photo alone. Technology startups had also begun to crop up, claiming that they can profile people’s character from their facial images. These developments had prompted the three of us to collaborate earlier in the year on a Medium essay, Physiognomy’s New Clothes, to confront claims that AI face recognition reveals deep character traits. We described how the junk science of physiognomy has roots going back into antiquity, with practitioners in every era resurrecting beliefs based on prejudice using the new methodology of the age. In the 19th century this included anthropology and psychology; in the 20th, genetics and statistical analysis; and in the 21st, artificial intelligence. In late 2016, the paper motivating our physiognomy essay seemed well outside the mainstream in tech and academia, but as in other areas of discourse, what recently felt like a fringe position must now be addressed head on. Kosinski is a faculty member of Stanford’s Graduate School of Business, and this new study has been accepted for publication in the respected Journal of Personality and Social Psychology. Much of the ensuing scrutiny has focused on ethics, implicitly assuming that the science is valid. We will focus on the science. The authors trained and tested their “sexual orientation detector” using 35,326 images from public profiles on a US dating website. Composite images of the lesbian, gay, and straight men and women in the sample reveal a great deal about the information available to the algorithm: Clearly there are differences between these four composite faces. Wang and Kosinski assert that the key differences are in physiognomy, meaning that a sexual orientation tends to go along with a characteristic facial structure. However, we can immediately see that some of these differences are more superficial. For example, the “average” straight woman appears to wear eyeshadow, while the “average” lesbian does not. Glasses are clearly visible on the gay man, and to a lesser extent on the lesbian, while they seem absent in the heterosexual composites. Might it be the case that the algorithm’s ability to detect orientation has little to do with facial structure, but is due rather to patterns in grooming, presentation and lifestyle? 
We conducted a survey of 8,000 Americans using Amazon’s Mechanical Turk crowdsourcing platform to see if we could independently confirm these patterns, asking 77 yes/no questions such as “Do you wear eyeshadow?”, “Do you wear glasses?”, and “Do you have a beard?”, as well as questions about gender and sexual orientation. The results show that lesbians indeed use eyeshadow much less than straight women do, gay men and women do both wear glasses more, and young opposite-sex-attracted men are considerably more likely to have prominent facial hair than their gay or same-sex-attracted peers. Breaking down the answers by the age of the respondent can provide a richer and clearer view of the data than any single statistic. In the following figures, we show the proportion of women who answer “yes” to “Do you ever use makeup?” (top) and “Do you wear eyeshadow?” (bottom), averaged over 6-year age intervals: The blue curves represent strictly opposite-sex attracted women (a nearly identical set to those who answered “yes” to “Are you heterosexual or straight?”); the cyan curve represents women who answer “yes” to either or both of “Are you sexually attracted to women?” and “Are you romantically attracted to women?”; and the red curve represents women who answer “yes” to “Are you homosexual, gay or lesbian?”. [1] The shaded regions around each curve show 68% confidence intervals. [2] The patterns revealed here are intuitive; it won’t be breaking news to most that straight women tend to wear more makeup and eyeshadow than same-sex attracted and (even more so) lesbian-identifying women. On the other hand these curves also show us how often these stereotypes are violated. That same-sex attracted men of most ages wear glasses significantly more than exclusively opposite-sex attracted men do might be a bit less obvious, but this trend is equally clear: [3] A proponent of physiognomy might be tempted to guess that this is somehow related to differences in visual acuity between these populations of men. However, asking the question “Do you like how you look in glasses?” reveals that this is likely more of a stylistic choice: Same-sex attracted women also report wearing glasses more, as well as liking how they look in glasses more, across a range of ages: One can also see how opposite-sex attracted women under the age of 40 wear contact lenses significantly more than same-sex attracted women, despite reporting that they have a vision defect at roughly the same rate, further illustrating how the difference is driven by an aesthetic preference: [4] Similar analysis shows that young same-sex attracted men are much less likely to have hairy faces than opposite-sex attracted men (“serious facial hair” in our plots is defined as answering “yes” to having a goatee, beard, or moustache, but “no” to stubble). Overall, opposite-sex attracted men in our sample are 35% more likely to have serious facial hair than same-sex attracted men, and for men under the age of 31 (who are overrepresented on dating websites), this rises to 75%. Wang and Kosinski speculate in their paper that the faintness of the beard and moustache in their gay male composite might be connected with prenatal underexposure to androgens (male hormones), resulting in a feminizing effect, hence sparser facial hair. 
The fact that we see a cohort of same-sex attracted men in their 40s who have just as much facial hair as opposite-sex attracted men suggests a different story, in which fashion trends and cultural norms play the dominant role in choices about facial hair among men, not differing exposure to hormones early in development. The authors of the paper additionally note that the heterosexual male composite appears to have darker skin than the other three composites. Our survey confirms that opposite-sex attracted men consistently self-report having a tan face (“Yes” to “Is your face tan?”) slightly more often than same-sex attracted men: Once again Wang and Kosinski reach for a hormonal explanation, writing: “While the brightness of the facial image might be driven by many factors, previous research found that testosterone stimulates melanocyte structure and function leading to a darker skin”. However, a simpler answer is suggested by the responses to the question “Do you work outdoors?”: Overall, opposite-sex attracted men are 29% more likely to work outdoors, and among men under 31, this rises to 39%. Previous research has found that increased exposure to sunlight leads to darker skin! [5] None of these results prove that there is no physiological basis for sexual orientation; in fact ample evidence shows us that orientation runs much deeper than a choice or a “lifestyle”. In a critique aimed in part at fraudulent “conversion therapy” programs, United States Surgeon General David Satcher wrote in a 2001 report, “Sexual orientation is usually determined by adolescence, if not earlier [...], and there is no valid scientific evidence that sexual orientation can be changed”. It follows that if we dig deeply enough into human physiology and neuroscience we will eventually find reliable correlates and maybe even the origins of sexual orientation. In our survey we also find some evidence of outwardly visible correlates of orientation that are not cultural: perhaps most strikingly, very tall women are overrepresented among lesbian-identifying respondents. [6] However, while this is interesting, it’s very far from a good predictor of women’s sexual orientation. Makeup and eyeshadow do much better. The way Wang and Kosinski measure the efficacy of their “AI gaydar” is equivalent to choosing a straight and a gay or lesbian face image, both from data “held out” during the training process, and asking how often the algorithm correctly guesses which is which. 50% performance would be no better than random chance. For women, guessing that the taller of the two is the lesbian achieves only 51% accuracy — barely above random chance. This is because, despite the statistically meaningful overrepresentation of tall women among the lesbian population, the great majority of lesbians are not unusually tall. By contrast, the performance measures in the paper, 81% for gay men and 71% for lesbian women, seem impressive. [7] Consider, however, that we can achieve comparable results with trivial models based only on a handful of yes/no survey questions about presentation. For example, for pairs of women, one of whom is lesbian, the following not-exactly-superhuman algorithm is on average 63% accurate: if neither or both women wear eyeshadow, flip a coin; otherwise guess that the one who wears eyeshadow is straight, and the other lesbian. 
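To make that rule concrete, here is a minimal Python sketch of the pairwise guessing procedure just described (the function name and data layout are our own, purely for illustration):

```python
import random

def guess_which_is_lesbian(woman_a, woman_b):
    """Given two survey respondents, one straight and one lesbian, guess
    which is the lesbian using only the eyeshadow question.

    Each argument is a dict with a boolean "wears_eyeshadow" field.
    Returns "a" or "b".
    """
    a_shadow = woman_a["wears_eyeshadow"]
    b_shadow = woman_b["wears_eyeshadow"]
    if a_shadow == b_shadow:
        # No signal either way: flip a coin.
        return random.choice(["a", "b"])
    # Otherwise guess that the eyeshadow wearer is straight.
    return "b" if a_shadow else "a"
```

Scored over many random straight/lesbian pairs of survey respondents, a rule this crude lands at roughly the 63% pairwise accuracy quoted above.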
Adding six more yes/no questions about presentation (“Do you ever use makeup?”, “Do you have long hair?”, “Do you have short hair?”, “Do you ever use colored lipstick?”, “Do you like how you look in glasses?”, and “Do you work outdoors?”) as additional signals raises the performance to 70%. [8] Given how many more details about presentation are available in a face image, 71% performance no longer seems so impressive. Several studies, including a recent one in the Journal of Sex Research, have shown that human judges’ “gaydar” is no more reliable than a coin flip when the judgement is based on pictures taken under well-controlled conditions (head pose, lighting, glasses, makeup, etc.). It’s better than chance if these variables are not controlled for, because a person’s presentation — especially if that person is out — involves social signaling. We signal our orientation and many other kinds of status, presumably in order to attract the kind of attention we want and to fit in with people like us. [9] Wang and Kosinski argue against this interpretation on the grounds that their algorithm works on Facebook selfies of openly gay men as well as dating website selfies. The issue, however, is not whether the images come from a dating website or Facebook, but whether they are self-posted or taken under standardized conditions. Most people present themselves in ways that have been calibrated over many years of media consumption, observing others, looking in the mirror, and gauging social reactions. In one of the earliest “gaydar” studies using social media, participants could categorize gay men with about 58% accuracy; but when the researchers used Facebook images of gay and heterosexual men posted by their friends (still far from a perfect control), the accuracy dropped to 52%. If subtle biases in image quality, expression, and grooming can be picked up on by humans, these biases can also be detected by an AI algorithm. While Wang and Kosinski acknowledge grooming and style, they believe that the chief differences between their composite images relate to face shape, arguing that gay men’s faces are more “feminine” (narrower jaws, longer noses, larger foreheads) while lesbian faces are more “masculine” (larger jaws, shorter noses, smaller foreheads). As with less facial hair on gay men and darker skin on straight men, they suggest that the mechanism is gender-atypical hormonal exposure during development. This echoes a widely discredited 19th century model of homosexuality, “sexual inversion”. More likely, heterosexual men tend to take selfies from slightly below, which will have the apparent effect of enlarging the chin, shortening the nose, shrinking the forehead, and attenuating the smile (see our selfies below). This view emphasizes dominance — or, perhaps more benignly, an expectation that the viewer will be shorter. On the other hand, as a wedding photographer notes in her blog, “when you shoot from above, your eyes look bigger, which is generally attractive — especially for women.” This may be a heteronormative assessment. When a face is photographed from below, the nostrils are prominent, while higher shooting angles de-emphasize and eventually conceal them altogether. Looking again at the composite images, we can see that the heterosexual male face has more pronounced dark spots corresponding to the nostrils than the gay male, while the opposite is true for the female faces. 
This is consistent with a pattern of heterosexual men on average shooting from below, heterosexual women from above as the wedding photographer suggests, and gay men and lesbian women from directly in front. A similar pattern is evident in the eyebrows: shooting from above makes them look more V-shaped, but their apparent shape becomes flatter, and eventually caret-shaped (^) as the camera is lowered. Shooting from below also makes the outer corners of the eyes appear lower. In short, the changes in the average positions of facial landmarks are consistent with what we would expect to see from differing selfie angles. The ambiguity between shooting angle and the real physical sizes of facial features is hard to fully disentangle from a two-dimensional image, both for a human viewer and for an algorithm. Although the authors are using face recognition technology designed to try to cancel out all effects of head pose, lighting, grooming, and other variables not intrinsic to the face, we can confirm that this doesn’t work perfectly; that’s why multiple distinct images of a person help when grouping photos by subject in Google Photos, and why a person may initially appear in more than one group. Tom White, a researcher at Victoria University in New Zealand, has experimented with the same facial recognition engine Kosinski and Wang use (VGG Face), and has found that its output varies systematically based on variables like smiling and head pose. When he trains a classifier based on VGG Face’s output to distinguish a happy expression from a neutral one, it gets the answer right 92% of the time — which is significant, given that the heterosexual female composite has a much more pronounced smile. Changes in head pose might be even more reliably detectable; for 576 test images, a classifier is able to pick out the ones facing to the right with 100% accuracy. In summary, we have shown how the obvious differences between lesbian or gay and straight faces in selfies relate to grooming, presentation, and lifestyle — that is, differences in culture, not in facial structure. These differences include: We’ve demonstrated that just a handful of yes/no questions about these variables can do nearly as good a job at guessing orientation as supposedly sophisticated facial recognition AI. Further, the current generation of facial recognition remains sensitive to head pose and facial expression. Therefore — at least at this point — it’s hard to credit the notion that this AI is in some way superhuman at “outing” us based on subtle but unalterable details of our facial structure. This doesn’t negate the privacy concerns the authors and various commentators have raised, but it emphasizes that such concerns relate less to AI per se than to mass surveillance, which is troubling regardless of the technologies used (even when, as in the days of the Stasi in East Germany, these were nothing but paper files and audiotapes). Like computers or the internal combustion engine, AI is a general-purpose technology that can be used to automate a great many tasks, including ones that should not be undertaken in the first place. We are hopeful about the confluence of new, powerful AI technologies with social science, but not because we believe in reviving the 19th century research program of inferring people’s inner character from their outer appearance. Rather, we believe AI is an essential tool for understanding patterns in human culture and behavior. It can expose stereotypes inherent in everyday language. 
It can reveal uncomfortable truths, as in Google’s work with the Geena Davis Institute, where our face gender classifier established that men are seen and heard nearly twice as often as women in Hollywood movies (yet female-led films outperform others at the box office!). Making social progress and holding ourselves to account is more difficult without such hard evidence, even when it only confirms our suspicions. Two of us (Margaret Mitchell and Blaise Agüera y Arcas) are research scientists specializing in machine learning and AI at Google; Agüera y Arcas leads a team that includes deep learning applied to face recognition, and powers face grouping in Google Photos. Alex Todorov is a professor in the Psychology Department at Princeton, where he directs the social perception lab. He is the author of Face Value: The Irresistible Influence of First Impressions. [1] This wording is based on several large national surveys, which we were able to use to sanity-check our numbers. About 6% of respondents identified as “homosexual, gay or lesbian” and 85% as “heterosexual”. About 4% (of all genders) were exclusively same-sex attracted. Of the men, 10% were either sexually or romantically same-sex attracted, and of the women, 20%. Just under 1% of respondents were trans, and about 2% identified with both or neither of the pronouns “she” and “he”. These numbers are broadly consistent with other surveys, especially when considered as a function of age. The Mechanical Turk population skews somewhat younger than the overall population of the US, and consistent with other studies, our data show that younger people are far more likely to identify non-heteronormatively. [2] These are wider for same-sex attracted and lesbian women because they are minority populations, resulting in a larger sampling error. The same holds for older people in our sample. [3] For the remainder of the plots we stick to opposite-sex attracted and same-sex attracted, as the counts are higher and the error bars therefore smaller; these categories are also somewhat less culturally freighted, since they rely on questions about attraction rather than identity. As with eyeshadow and makeup, the effects are similar and often even larger when comparing heterosexual-identifying with lesbian- or gay-identifying people. [4] Although we didn’t test this explicitly, slightly different rates of laser correction surgery seem a likely cause of the small but growing disparity between opposite-sex attracted and same-sex attracted women who answer “yes” to the vision defect questions as they age. [5] This finding may prompt the further question, “Why do more opposite-sex attracted men work outdoors?” This is not addressed by any of our survey questions, but hopefully the other evidence presented here will discourage an essentialist assumption such as “straight men are just more outdoorsy” without the evidence of a controlled study that can support the leap from correlation to cause. Such explanations are a form of logical fallacy sometimes called a just-so story: “an unverifiable narrative explanation for a cultural practice”. [6] Of the 253 lesbian-identified women in the sample, 5, or 2%, were over six feet, and 25, or 10%, were over 5’9”. Out of 3,333 heterosexual women (women who answered “yes” to “Are you heterosexual or straight?”), only 16, or 0.5%, were over six feet, and 152, or 5%, were over 5’9”. [7] They note that these figures rise to 91% for men and 83% for women if 5 images are considered. 
[8] These results are based on the simplest possible machine learning technique, a linear classifier. The classifier is trained on a randomly chosen 70% of the data, with the remaining 30% of the data held out for testing. Over 500 repetitions of this procedure, the accuracy is 69.53% ± 2.98%. With the same number of repetitions and holdout, basing the decision on height alone gives an accuracy of 51.08% ± 3.27%, and basing it on eyeshadow alone yields 62.96% ± 2.39%. [9] A longstanding body of work, e.g. Goffman’s The Presentation of Self in Everyday Life (1959) and Jones and Pittman’s Toward a General Theory of Strategic Self-Presentation (1982), delves more deeply into why we present ourselves the way we do, both for instrumental reasons (status, power, attraction) and because our presentation informs and is informed by how we conceive of our social selves. Blaise Aguera y Arcas leads Google’s AI group in Seattle. He founded Seadragon, and was one of the creators of Photosynth at Microsoft.
David Foster
12.8K
11
https://medium.com/applied-data-science/how-to-build-your-own-alphazero-ai-using-python-and-keras-7f664945c188?source=tag_archive---------1----------------
How to build your own AlphaZero AI using Python and Keras
In this article I’ll attempt to cover three things: In March 2016, DeepMind’s AlphaGo beat 18-time world champion Go player Lee Sedol 4–1 in a series watched by over 200 million people. A machine had learnt a super-human strategy for playing Go, a feat previously thought impossible, or at the very least a decade away from being accomplished. This in itself was a remarkable achievement. However, on 18th October 2017, DeepMind took a giant leap further. The paper ‘Mastering the Game of Go without Human Knowledge’ unveiled a new variant of the algorithm, AlphaGo Zero, that had defeated AlphaGo 100–0. Incredibly, it had done so by learning solely through self-play, starting ‘tabula rasa’ (blank state) and gradually finding strategies that would beat previous incarnations of itself. No longer was a database of human expert games required to build a super-human AI. A mere 48 days later, on 5th December 2017, DeepMind released another paper, ‘Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm’, showing how AlphaGo Zero could be adapted to beat the world-champion programs Stockfish and Elmo at chess and shogi. The entire learning process, from being shown the games for the first time, to becoming the best computer program in the world, had taken under 24 hours. With this, AlphaZero was born — the general algorithm for getting good at something, quickly, without any prior knowledge of human expert strategy. There are two amazing things about this achievement: It cannot be overstated how important this is. This means that the underlying methodology of AlphaGo Zero can be applied to ANY game with perfect information (the game state is fully known to both players at all times) because no prior expertise is required beyond the rules of the game. This is how it was possible for DeepMind to publish the chess and shogi papers only 48 days after the original AlphaGo Zero paper. Quite literally, all that needed to change was the input file that describes the mechanics of the game, plus a tweak to the hyper-parameters relating to the neural network and Monte Carlo tree search. If AlphaZero used super-complex algorithms that only a handful of people in the world understood, it would still be an incredible achievement. What makes it extraordinary is that a lot of the ideas in the paper are actually far less complex than previous versions. At its heart lies the following beautifully simple mantra for learning: Doesn’t that sound a lot like how you learn to play games? When you play a bad move, it’s either because you misjudged the future value of resulting positions, or you misjudged the likelihood that your opponent would play a certain move, so didn’t think to explore that possibility. These are exactly the two aspects of gameplay that AlphaZero is trained to learn. Firstly, check out the AlphaGo Zero cheat sheet for a high level understanding of how AlphaGo Zero works. It’s worth having that to refer to as we walk through each part of the code. There’s also a great article here that explains how AlphaZero works in more detail. Clone this Git repository, which contains the code I’ll be referencing. To start the learning process, run the top two panels in the run.ipynb Jupyter notebook. Once it’s built up enough game positions to fill its memory, the neural network will begin training.
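Conceptually, the loop you have just kicked off looks something like the sketch below. This is our own illustrative pseudo-structure, not the repository's actual code; the helper functions are hypothetical placeholders, and the promotion threshold is an arbitrary choice:

```python
def self_play_game(game, player):
    """Placeholder: play one game of `player` against itself and return
    (list_of_game_states, final_outcome)."""
    raise NotImplementedError

def play_match(game, challenger, champion):
    """Placeholder: play one evaluation game, return 1 if the challenger wins, else 0."""
    raise NotImplementedError

def training_loop(game, best_player, current_player, memory,
                  self_play_games=30, eval_games=20, win_threshold=0.55):
    while True:
        # 1. Self-play: the best agent plays itself and fills the memory
        for _ in range(self_play_games):
            states, outcome = self_play_game(game, best_player)
            memory.store(states, outcome)

        # 2. Retraining: the current agent learns from those memories
        if memory.is_full():
            current_player.replay(memory.sample())

        # 3. Evaluation: promote the retrained agent only if it wins often enough
        wins = sum(play_match(game, current_player, best_player)
                   for _ in range(eval_games))
        if wins / eval_games > win_threshold:
            best_player.set_weights(current_player.get_weights())
```

Each of these three stages maps onto specific files in the codebase, described next.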
Through additional self-play and training, it will gradually get better at predicting the game value and next moves from any position, resulting in better decision making and smarter overall play. We’ll now have a look at the code in more detail, and show some results that demonstrate the AI getting stronger over time. N.B — This is my own understanding of how AlphaZero works based on the information available in the papers referenced above. If any of the below is incorrect, apologies and I’ll endeavour to correct it! The game that our algorithm will learn to play is Connect4 (or Four In A Row). Not quite as complex as Go... but there are still 4,531,985,219,092 game positions in total. The game rules are straightforward. Players take it in turns to enter a piece of their colour in the top of any available column. The first player to get four of their colour in a row — vertically, horizontally or diagonally — wins. If the entire grid is filled without a four-in-a-row being created, the game is drawn. Here’s a summary of the key files that make up the codebase: This file contains the game rules for Connect4. Each square is allocated a number from 0 to 41, as follows: The game.py file gives the logic behind moving from one game state to another, given a chosen action. For example, given the empty board and action 38, the takeAction method returns a new game state, with the starting player’s piece at the bottom of the centre column. You can replace the game.py file with any game file that conforms to the same API and the algorithm will, in principle, learn strategy through self play, based on the rules you have given it. This contains the code that starts the learning process. It loads the game rules and then iterates through the main loop of the algorithm, which consists of three stages: There are two agents involved in this loop, the best_player and the current_player. The best_player contains the best performing neural network and is used to generate the self play memories. The current_player then retrains its neural network on these memories and is then pitted against the best_player. If it wins, the neural network inside the best_player is switched for the neural network inside the current_player, and the loop starts again. This contains the Agent class (a player in the game). Each player is initialised with its own neural network and Monte Carlo Search Tree. The simulate method runs the Monte Carlo Tree Search process. Specifically, the agent moves to a leaf node of the tree, evaluates the node with its neural network and then backfills the value of the node up through the tree. The act method repeats the simulation multiple times to understand which move from the current position is most favourable. It then returns the chosen action to the game, to enact the move. The replay method retrains the neural network, using memories from previous games. This file contains the Residual_CNN class, which defines how to build an instance of the neural network. It uses a condensed version of the neural network architecture in the AlphaGo Zero paper — i.e. a convolutional layer, followed by many residual layers, then splitting into a value and policy head. The depth and number of convolutional filters can be specified in the config file. The Keras library is used to build the network, with a TensorFlow backend.
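To make the network description more tangible, here is a minimal Keras sketch in that style: one convolutional layer, a small residual tower, then separate policy and value heads. The board shape, filter counts and block count below are illustrative choices of ours, not the repository's exact architecture:

```python
from tensorflow.keras import layers, models

def residual_block(x, filters=64):
    """One residual block: two conv layers plus a skip connection."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.add([shortcut, y])
    return layers.Activation("relu")(y)

def build_policy_value_net(board_shape=(6, 7, 2), n_actions=42,
                           filters=64, n_res_blocks=5):
    """Convolutional trunk -> residual tower -> policy head and value head."""
    inp = layers.Input(shape=board_shape)

    # Initial convolutional layer
    x = layers.Conv2D(filters, 3, padding="same")(inp)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)

    # Residual tower
    for _ in range(n_res_blocks):
        x = residual_block(x, filters)

    # Policy head: one raw logit per possible action (42 squares for Connect4)
    p = layers.Conv2D(2, 1, padding="same")(x)
    p = layers.BatchNormalization()(p)
    p = layers.Activation("relu")(p)
    p = layers.Flatten()(p)
    policy = layers.Dense(n_actions, name="policy_head")(p)

    # Value head: a single number in [-1, 1] estimating the game outcome
    v = layers.Conv2D(1, 1, padding="same")(x)
    v = layers.BatchNormalization()(v)
    v = layers.Activation("relu")(v)
    v = layers.Flatten()(v)
    v = layers.Dense(32, activation="relu")(v)
    value = layers.Dense(1, activation="tanh", name="value_head")(v)

    return models.Model(inputs=inp, outputs=[policy, value])
```

Because both heads share the convolutional trunk, a single forward pass gives you the move probabilities and the value estimate at the same time.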
To view individual convolutional filters and densely connected layers in the neural network, run the following inside the run.ipynb notebook: This contains the Node, Edge and MCTS classes that constitute a Monte Carlo Search Tree. The MCTS class contains the moveToLeaf and backFill methods previously mentioned, and instances of the Edge class store the statistics about each potential move. This is where you set the key parameters that influence the algorithm. Adjusting these variables will affect the running time, neural network accuracy and overall success of the algorithm. The above parameters produce a high quality Connect4 player, but take a long time to do so. To speed the algorithm up, try the following parameters instead. Contains the playMatches and playMatchesBetweenVersions functions that play matches between two agents. To play against your creation, run the following code (it’s also in the run.ipynb notebook). When you run the algorithm, all model and memory files are saved in the run folder, in the root directory. To restart the algorithm from this checkpoint later, transfer the run folder to the run_archive folder, attaching a run number to the folder name. Then, enter the run number, model version number and memory version number into the initialise.py file, corresponding to the location of the relevant files in the run_archive folder. Running the algorithm as usual will then start from this checkpoint. An instance of the Memory class stores the memories of previous games, which the algorithm uses to retrain the neural network of the current_player. This file contains a custom loss function that masks predictions for illegal moves before passing them to the cross-entropy loss function. The locations of the run and run_archive folders. Log files are saved to the log folder inside the run folder. To turn on logging, set the values of the logger_disabled variables to False inside this file. Viewing the log files will help you to understand how the algorithm works and see inside its ‘mind’. For example, here is a sample from the logger.mcts file. Equally, from the logger.tourney file, you can see the probabilities attached to each move during the evaluation phase: Training over a couple of days produces the following chart of loss against mini-batch iteration number: The top line is the error in the policy head (the cross entropy of the MCTS move probabilities against the output from the neural network). The bottom line is the error in the value head (the mean squared error between the actual game value and the neural network’s prediction of the value). The middle line is an average of the two. Clearly, the neural network is getting better at predicting the value of each game state and the likely next moves. To show how this results in stronger and stronger play, I ran a league between 17 players, ranging from the 1st iteration of the neural network up to the 49th. Each pairing played twice, with both players having a chance to play first. Here are the final standings: Clearly, the later versions of the neural network are superior to the earlier versions, winning most of their games. It also appears that the learning hasn’t yet saturated — with further training time, the players would continue to get stronger, learning more and more intricate strategies. As an example, one clear strategy that the neural network has favoured over time is grabbing the centre column early.
Observe the difference between the first version of the algorithm and, say, the 30th version (the two embedded examples are captioned ‘1st neural network version’ and ‘30th neural network version’). This is a good strategy as many lines require the centre column — claiming this early ensures your opponent cannot take advantage of this. This has been learnt by the neural network, without any human input. There is a game.py file for a game called ‘Metasquares’ in the games folder. This involves placing X and O markers in a grid to try to form squares of different sizes. Larger squares score more points than smaller squares and the player with the most points when the grid is full wins. If you switch the Connect4 game.py file for the Metasquares game.py file, the same algorithm will learn how to play Metasquares instead. Hopefully you find this article useful — let me know in the comments below if you find any typos or have questions about anything in the codebase or article and I’ll get back to you as soon as possible. If you would like to learn more about how our company, Applied Data Science, develops innovative data science solutions for businesses, feel free to get in touch through our website or directly through LinkedIn. ... and if you like this, feel free to leave a few hearty claps :) Applied Data Science is a London-based consultancy that implements end-to-end data science solutions for businesses, delivering measurable value. If you’re looking to do more with your data, let’s talk. Co-founder of Applied Data Science.
Aman Agarwal
7K
24
https://medium.freecodecamp.org/explained-simply-how-an-ai-program-mastered-the-ancient-game-of-go-62b8940a9080?source=tag_archive---------2----------------
Explained Simply: How an AI program mastered the ancient game of Go
This is about AlphaGo, Google DeepMind’s Go playing AI that shook the technology world in 2016 by defeating one of the best players in the world, Lee Sedol. Go is an ancient board game which has so many possible moves at each step that future positions are hard to predict — and therefore it requires strong intuition and abstract thinking to play. For this reason, it was believed that only humans could be good at playing Go. Most researchers thought that it would still take decades to build an AI which could think like that. In fact, I’m releasing this essay today because this week (March 8–15) marks the two-year anniversary of the AlphaGo vs Sedol match! But AlphaGo didn’t stop there. Eight months later, it played 60 professional games on a Go website disguised as a player named “Master”, and won every single game, against dozens of world champions, of course without resting between games. Naturally this was a HUGE achievement in the field of AI and sparked worldwide discussions about whether we should be excited or worried about artificial intelligence. Today we are going to take the original research paper published by DeepMind in the Nature journal, and break it down paragraph-by-paragraph using simple English. After this essay, you’ll know very clearly what AlphaGo is, and how it works. I also hope that after reading this you will not believe all the news headlines made by journalists to scare you about AI, and instead feel excited about it. Worrying about the growing achievements of AI is like worrying about the growing abilities of Microsoft PowerPoint. Yes, it will get better with time with new features being added to it, but it can’t just uncontrollably grow into some kind of Hollywood monster. You DON’T need to know how to play Go to understand this paper. In fact, I myself have only read the first 3–4 lines in Wikipedia’s opening paragraph about it. Instead, surprisingly, I use some examples from basic Chess to explain the algorithms. You just have to know what a 2-player board game is, in which each player takes turns and there is one winner at the end. Beyond that you don’t need to know any physics or advanced math or anything. This will make it more approachable for people who have only just started learning about machine learning or neural networks. And especially for those who don’t use English as their first language (which can make it very difficult to read such papers). If you have NO prior knowledge of AI and neural networks, you can read the “Deep Learning” section of one of my previous essays here. After reading that, you’ll be able to get through this essay. If you want to get a shallow understanding of Reinforcement Learning too (optional reading), you can find it here. Here’s the original paper if you want to try reading it: As for me: Hi, I’m Aman, an AI and autonomous robots engineer. I hope that my work will save you a lot of time and effort if you were to study this on your own. Do you speak Japanese? Ryohji Ikebe has kindly written a brief memo about this essay in Japanese, in a series of Tweets. As you know, the goal of this research was to train an AI program to play Go at the level of world-class professional human players. To understand this challenge, let me first talk about something similar done for Chess. In the 1990s, IBM built the Deep Blue computer, which went on to defeat the great champion Garry Kasparov in Chess in 1997. (He’s also a very cool guy, make sure to read more about him later!) How did Deep Blue play?
Well, it used a very brute force method. At each step of the game, it took a look at all the possible legal moves that could be played, and went ahead to explore each and every move to see what would happen. And it would keep exploring move after move for a while, forming a kind of HUGE decision tree of thousands of moves. And then it would come back along that tree, observing which moves seemed most likely to bring a good result. But, what do we mean by “good result”? Well, Deep Blue had many carefully designed chess strategies built into it by expert chess players to help it make better decisions — for example, how to decide whether to protect the king or get advantage somewhere else? They made a specific “evaluation algorithm” for this purpose, to compare how advantageous or disadvantageous different board positions are (IBM hard-coded expert chess strategies into this evaluation function). And finally it chooses a carefully calculated move. On the next turn, it basically goes through the whole thing again. As you can see, this means Deep Blue thought about millions of theoretical positions before playing each move. This was not so impressive in terms of the AI software of Deep Blue, but rather in the hardware — IBM claimed it to be one of the most powerful computers available in the market at that time. It could look at 200 million board positions per second. Now we come to Go. Just believe me that this game is much more open-ended, and if you tried the Deep Blue strategy on Go, you wouldn’t be able to play well. There would be SO MANY positions to look at at each step that it would simply be impractical for a computer to go through that hell. For example, at the opening move in Chess there are 20 possible moves. In Go the first player has 361 possible moves, and this scope of choices stays wide throughout the game. This is what they mean by “enormous search space.” Moreover, in Go, it’s not so easy to judge how advantageous or disadvantageous a particular board position is at any specific point in the game — you kinda have to play the whole game for a while before you can determine who is winning. But let’s say you magically had a way to do both of these. And that’s where deep learning comes in! So in this research, DeepMind used neural networks to do both of these tasks (if you haven’t read about them yet, here’s the link again). They trained a “policy neural network” to decide which are the most sensible moves in a particular board position (so it’s like following an intuitive strategy to pick moves from any position). And they trained a “value neural network” to estimate how advantageous a particular board arrangement is for the player (or in other words, how likely you are to win the game from this position). They trained these neural networks first with human game examples (your good old ordinary supervised learning). After this the AI was able to mimic human playing to a certain degree, so it acted like a weak human player. And then to train the networks even further, they made the AI play against itself millions of times (this is the “reinforcement learning” part). With this, the AI got better because it had more practice. With these two networks alone, DeepMind’s AI was able to play well against state-of-the-art Go playing programs that other researchers had built before. These other programs had used an already popular pre-existing game playing algorithm, called the “Monte Carlo Tree Search” (MCTS). More about this later. 
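To see how brutally simple (and brutally expensive) that Deep Blue-style idea is, here is a toy depth-limited minimax in Python. It is far simpler than Deep Blue's real search, and the three game-specific functions it takes are placeholders you would supply for whatever game you plug in:

```python
def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    """Toy Deep Blue-style search: expand every legal move down to a fixed
    depth, score the resulting positions with a hand-crafted evaluation
    function, and back those scores up the tree.

    `legal_moves(state)`, `apply_move(state, move)` and `evaluate(state)`
    are placeholder callables supplied by the caller.
    """
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # hand-crafted scoring of the position
    child_scores = (
        minimax(apply_move(state, m), depth - 1, not maximizing,
                legal_moves, apply_move, evaluate)
        for m in moves
    )
    return max(child_scores) if maximizing else min(child_scores)
```

With 20 possible opening moves in Chess this kind of exhaustive expansion is already enormous; with 361 in Go it is hopeless, which is exactly why a smarter way of choosing what to explore is needed.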
But guess what, we still haven’t talked about the real deal. DeepMind’s AI isn’t just about the policy and value networks. It doesn’t use these two networks as a replacement of the Monte Carlo Tree Search. Instead, it uses the neural networks to make the MCTS algorithm work better... and it got so much better that it reached superhuman levels. THIS improved variation of MCTS is “AlphaGo”, the AI that beat Lee Sedol and went down in AI history as one of the greatest breakthroughs ever. So essentially, AlphaGo is simply an improved implementation of a very ordinary computer science algorithm. Do you understand now why AI in its current form is absolutely nothing to be scared of? Wow, we’ve spent a lot of time on the Abstract alone. Alright — to understand the paper from this point on, first we’ll talk about a gaming strategy called the Monte Carlo Tree Search algorithm. For now, I’ll just explain this algorithm at enough depth to make sense of this essay. But if you want to learn about it in depth, some smart people have also made excellent videos and blog posts on this: 1. A short video series from Udacity; 2. Jeff Bradberry’s explanation of MCTS; 3. An MCTS tutorial by Fullstack Academy. The following section is long, but easy to understand (I’ll try my best) and VERY important, so stay with me! The rest of the essay will go much quicker. Let’s talk about the first paragraph of the essay above. Remember what I said about Deep Blue making a huge tree of millions of board positions and moves at each step of the game? You had to do simulations and look at and compare each and every possible move. As I said before, that was a simple and very straightforward approach — if the average software engineer had to design a game playing AI, and had all the strongest computers in the world, he or she would probably design a similar solution. But let’s think about how humans themselves play chess. Let’s say you’re at a particular board position in the middle of the game. By game rules, you can do a dozen different things — move this pawn here, move the queen two squares here or three squares there, and so on. But do you really make a list of all the possible moves you can make with all your pieces, and then select one move from this long list? No — you “intuitively” narrow down to a few key moves (let’s say you come up with 3 sensible moves) that you think make sense, and then you wonder what will happen in the game if you chose one of these 3 moves. You might spend 15–20 seconds considering each of these 3 moves and their future — and note that during these 15 seconds you don’t have to carefully plan out the future of each move; you can just “roll out” a few mental moves guided by your intuition without TOO much careful thought (well, a good player would think farther and more deeply than an average player). This is because you have limited time, and you can’t accurately predict what your opponent will do at each step in that lovely future you’re cooking up in your brain. So you’ll just have to let your gut feeling guide you. I’ll refer to this part of the thinking process as “rollout”, so take note of it! So after “rolling out” your few sensible moves, you finally say screw it and just play the move you find best. Then the opponent makes a move. It might be a move you had already well anticipated, which means you are now pretty confident about what you need to do next. You don’t have to spend too much time on the rollouts again.
OR, it could be that your opponent hits you with a pretty cool move that you had not expected, so you have to be even more careful with your next move.This is how the game carries on, and as it gets closer and closer to the finishing point, it would get easier for you to predict the outcome of your moves — so your rollouts don’t take as much time. The purpose of this long story is to describe what the MCTS algorithm does on a superficial level — it mimics the above thinking process by building a “search tree” of moves and positions every time. Again, for more details you should check out the links I mentioned earlier. The innovation here is that instead of going through all the possible moves at each position (which Deep Blue did), it instead intelligently selects a small set of sensible moves and explores those instead. To explore them, it “rolls out” the future of each of these moves and compares them based on their imagined outcomes.(Seriously — this is all I think you need to understand this essay) Now — coming back to the screenshot from the paper. Go is a “perfect information game” (please read the definition in the link, don’t worry it’s not scary). And theoretically, for such games, no matter which particular position you are at in the game (even if you have just played 1–2 moves), it is possible that you can correctly guess who will win or lose (assuming that both players play “perfectly” from that point on). I have no idea who came up with this theory, but it is a fundamental assumption in this research project and it works. So that means, given a state of the game s, there is a function v*(s) which can predict the outcome, let’s say probability of you winning this game, from 0 to 1. They call it the “optimal value function”. Because some board positions are more likely to result in you winning than other board positions, they can be considered more “valuable” than the others. Let me say it again: Value = Probability between 0 and 1 of you winning the game. But wait — say there was a girl named Foma sitting next to you while you play Chess, and she keeps telling you at each step if you’re winning or losing. “You’re winning... You’re losing... Nope, still losing...” I think it wouldn’t help you much in choosing which move you need to make. She would also be quite annoying. What would instead help you is if you drew the whole tree of all the possible moves you can make, and the states that those moves would lead to — and then Foma would tell you for the entire tree which states are winning states and which states are losing states. Then you can choose moves which will keep leading you to winning states. All of a sudden Foma is your partner in crime, not an annoying friend. Here, Foma behaves as your optimal value function v*(s). Earlier, it was believed that it’s not possible to have an accurate value function like Foma for the game of Go, because the games had so much uncertainty. BUT — even if you had the wonderful Foma, this wonderland strategy of drawing out all the possible positions for Foma to evaluate will not work very well in the real world. In a game like Chess or Go, as we said before, if you try to imagine even 7–8 moves into the future, there can be so many possible positions that you don’t have enough time to check all of them with Foma. So Foma is not enough. You need to narrow down the list of moves to a few sensible moves that you can roll out into the future. How will your program do that? Enter Lusha. 
Lusha is a skilled Chess player and enthusiast who has spent decades watching grand masters play Chess against each other. She can look at your board position, look quickly at all the available moves you can make, and tell you how likely it would be that a Chess expert would make any of those moves if they were sitting at your table. So if you have 50 possible moves at a point, Lusha will tell you the probability that each move would be picked by an expert. Of course, a few sensible moves will have a much higher probability and other pointless moves will have very little probability. She is your policy function, p(a|s). For a given state s, she can give you probabilities for all the possible moves that an expert would make. Wow — you can take Lusha’s help to guide you in how to select a few sensible moves, and Foma will tell you the likelihood of winning from each of those moves. You can choose the move that both Foma and Lusha approve. Or, if you want to be extra careful, you can roll out the moves selected by Lusha, have Foma evaluate them, pick a few of them to roll out further into the future, and keep letting Foma and Lusha help you predict VERY far into the game’s future — much quicker and more efficient than going through all the moves at each step into the future. THIS is what they mean by “reducing the search space”. Use a value function (Foma) to predict outcomes, and use a policy function (Lusha) to give you grand-master probabilities to help narrow down the moves you roll out. These are called “Monte Carlo rollouts”. Then while you backtrack from future to present, you can take average values of all the different moves you rolled out, and pick the most suitable action. So far, this has only worked on a weak amateur level in Go, because the policy functions and value functions that they used to guide these rollouts weren’t that great. Phew. The first line is self explanatory. In MCTS, you can start with an unskilled Foma and unskilled Lusha. The more you play, the better they get at predicting solid outcomes and moves. “Narrowing the search to a beam of high probability actions” is just a sophisticated way of saying, “Lusha helps you narrow down the moves you need to roll out by assigning them probabilities that an expert would play them”. Prior work has used this technique to achieve strong amateur level AI players, even with simple (or “shallow” as they call it) policy functions. Yeah, convolutional neural networks are great for image processing. And since a neural network takes a particular input and gives an output, it is essentially a function, right? So you can use a neural network to become a complex function. So you can just pass in an image of the board position and let the neural network figure out by itself what’s going on. This means it’s possible to create neural networks which will behave like VERY accurate policy and value functions. The rest is pretty self explanatory. Here we discuss how Foma and Lusha were trained. To train the policy network (predicting for a given position which moves experts would pick), you simply use examples of human games and use them as data for good old supervised learning. And you want to train another slightly different version of this policy network to use for rollouts; this one will be smaller and faster. Let’s just say that since Lusha is so experienced, she takes some time to process each position.
She’s good to start the narrowing-down process with, but if you try to make her repeat the process , she’ll still take a little too much time. So you train a *faster policy network* for the rollout process (I’ll call it... Lusha’s younger brother Jerry? I know I know, enough with these names). After that, once you’ve trained both of the slow and fast policy networks enough using human player data, you can try letting Lusha play against herself on a Go board for a few days, and get more practice. This is the reinforcement learning part — making a better version of the policy network. Then, you train Foma for value prediction: determining the probability of you winning. You let the AI practice through playing itself again and again in a simulated environment, observe the end result each time, and learn from its mistakes to get better and better. I won’t go into details of how these networks are trained. You can read more technical details in the later section of the paper (‘Methods’) which I haven’t covered here. In fact, the real purpose of this particular paper is not to show how they used reinforcement learning on these neural networks. One of DeepMind’s previous papers, in which they taught AI to play ATARI games, has already discussed some reinforcement learning techniques in depth (And I’ve already written an explanation of that paper here). For this paper, as I lightly mentioned in the Abstract and also underlined in the screenshot above, the biggest innovation was the fact that they used RL with neural networks for improving an already popular game-playing algorithm, MCTS. RL is a cool tool in a toolbox that they used to fine-tune the policy and value function neural networks after the regular supervised training. This research paper is about proving how versatile and excellent this tool it is, not about teaching you how to use it. In television lingo, the Atari paper was a RL infomercial and this AlphaGo paper is a commercial. A quick note before you move on. Would you like to help me write more such essays explaining cool research papers? If you’re serious, I’d be glad to work with you. Please leave a comment and I’ll get in touch with you. So, the first step is in training our policy NN (Lusha), to predict which moves are likely to be played by an expert. This NN’s goal is to allow the AI to play similar to an expert human. This is a convolutional neural network (as I mentioned before, it’s a special kind of NN that is very useful in image processing) that takes in a simplified image of a board arrangement. “Rectifier nonlinearities” are layers that can be added to the network’s architecture. They give it the ability to learn more complex things. If you’ve ever trained NNs before, you might have used the “ReLU” layer. That’s what these are. The training data here was in the form of random pairs of board positions, and the labels were the actions chosen by humans when they were in those positions. Just regular supervised learning. Here they use “stochastic gradient ASCENT”. Well, this is an algorithm for backpropagation. Here, you’re trying to maximise a reward function. And the reward function is just the probability of the action predicted by a human expert; you want to increase this probability. But hey — you don’t really need to think too much about this. Normally you train the network so that it minimises a loss function, which is essentially the error/difference between predicted outcome and actual label. That is called gradient DESCENT. 
In the actual implementation of this research paper, they have indeed used the regular gradient descent. You can easily find a loss function that behaves opposite to the reward function such that minimising this loss will maximise the reward. The policy network has 13 layers, and is called “SL policy” network (SL = supervised learning). The data came from a... I’ll just say it’s a popular website on which millions of people play Go. How good did this SL policy network perform? It was more accurate than what other researchers had done earlier. The rest of the paragraph is quite self-explanatory. As for the “rollout policy”, you do remember from a few paragraphs ago, how Lusha the SL policy network is slow so it can’t integrate well with the MCTS algorithm? And we trained another faster version of Lusha called Jerry who was her younger brother? Well, this refers to Jerry right here. As you can see, Jerry is just half as accurate as Lusha BUT it’s thousands of times faster! It will really help get through rolled out simulations of the future faster, when we apply the MCTS. For this next section, you don’t *have* to know about Reinforcement Learning already, but then you’ll have to assume that whatever I say works. If you really want to dig into details and make sure of everything, you might want to read a little about RL first. Once you have the SL network, trained in a supervised manner using human player moves with the human moves data, as I said before you have to let her practice by itself and get better. That’s what we’re doing here. So you just take the SL policy network, save it in a file, and make another copy of it. Then you use reinforcement learning to fine-tune it. Here, you make the network play against itself and learn from the outcomes. But there’s a problem in this training style. If you only forever practice against ONE opponent, and that opponent is also only practicing with you exclusively, there’s not much of new learning you can do. You’ll just be training to practice how to beat THAT ONE player. This is, you guessed it, overfitting: your techniques play well against one opponent, but don’t generalize well to other opponents. So how do you fix this? Well, every time you fine-tune a neural network, it becomes a slightly different kind of player. So you can save this version of the neural network in a list of “players”, who all behave slightly differently right? Great — now while training the neural network, you can randomly make it play against many different older and newer versions of the opponent, chosen from that list. They are versions of the same player, but they all play slightly differently. And the more you train, the MORE players you get to train even more with! Bingo! In this training, the only thing guiding the training process is the ultimate goal, i.e winning or losing. You don’t need to specially train the network to do things like capture more area on the board etc. You just give it all the possible legal moves it can choose from, and say, “you have to win”. And this is why RL is so versatile; it can be used to train policy or value networks for any game, not just Go. Here, they tested how accurate this RL policy network was, just by itself without any MCTS algorithm. As you would remember, this network can directly take a board position and decide how an expert would play it — so you can use it to single-handedly play games.Well, the result was that the RL fine-tuned network won against the SL network that was only trained on human moves. 
It also won against other strong Go playing programs. Must note here that even before training this RL policy network, the SL policy network was already better than the state of the art — and now, it has further improved! And we haven’t even come to the other parts of the process like the value network. Did you know that baby penguins can sneeze louder than a dog can bark? Actually that’s not true, but I thought you’d like a little joke here to distract from the scary-looking equations above. Coming to the essay again: we’re done training Lusha here. Now back to Foma — remember the “optimal value function”: v*(s) -> that only tells you how likely you are to win in your current board position if both players play perfectly from that point on? So obviously, to train an NN to become our value function, we would need a perfect player... which we don’t have. So we just use our strongest player, which happens to be our RL policy network. It takes the current board state s, and outputs the probability that you will win the game. You play a game and get to know the outcome (win or loss). Each of the game states acts as a data sample, and the outcome of that game acts as the label. So by playing a 50-move game, you have 50 data samples for value prediction. Lol, no. This approach is naive. You can’t use all 50 moves from the game and add them to the dataset. The training data set had to be chosen carefully to avoid overfitting. Each move in the game is very similar to the next one, because you only move once and that gives you a new position, right? If you take the states at all 50 of those moves and add them to the training data with the same label, you basically have lots of “kinda duplicate” data, and that causes overfitting. To prevent this, you choose only very distinct-looking game states. So for example, instead of all 50 moves of a game, you only choose 5 of them and add them to the training set. DeepMind took 30 million positions from 30 million different games, to reduce any chances of there being duplicate data. And it worked! Now, something conceptual here: there are two ways to evaluate the value of a board position. One option is a magical optimal value function (like the one you trained above). The other option is to simply roll out into the future using your current policy (Lusha) and look at the final outcome in this roll out. Obviously, the real game would rarely go by your plans. But DeepMind compared how both of these options do. You can also do a mixture of both these options. We will learn about this “mixing parameter” a little bit later, so make a mental note of this concept! Well, your single neural network trying to approximate the optimal value function is EVEN BETTER than doing thousands of mental simulations using a rollout policy! Foma really kicked ass here. When they replaced the fast rollout policy with the twice-as-accurate (but slow) RL policy Lusha, and did thousands of simulations with that, it did better than Foma. But only slightly better, and too slowly. So Foma is the winner of this competition; she has proved that she can’t be replaced. Now that we have trained the policy and value functions, we can combine them with MCTS and give birth to our former world champion, destroyer of grand masters, the breakthrough of a generation, weighing two hundred and sixty eight pounds, one and only Alphaaaaa GO!
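Before looking at how MCTS stitches these pieces together, note that the "mixture of both options" mentioned above boils down to one line of arithmetic. A tiny illustrative snippet (our own, not DeepMind's code):

```python
def mixed_evaluation(value_net_estimate, rollout_outcome, lam=0.5):
    """Blend the value network's prediction for a position with the result
    of a fast rollout from that position, controlled by the mixing parameter.

    lam = 0.0 -> trust only the value network (Foma).
    lam = 1.0 -> trust only the rollout outcome.
    """
    return (1.0 - lam) * value_net_estimate + lam * rollout_outcome
```

The 50–50 weighting discussed just below corresponds to lam = 0.5.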
In this section, ideally you should have a slightly deeper understanding of the inner workings of the MCTS algorithm, but what you have learned so far should be enough to give you a good feel for what’s going on here. The only thing you should note is how we’re using the policy probabilities and value estimations. We combine them during roll outs, to narrow down the number of moves we want to roll out at each step. Q(s,a) represents the value function, and u(s,a) is a stored probability for that position. I’ll explain. Remember that the policy network uses supervised learning to predict expert moves? And it doesn’t just give you the most likely move, but rather gives you probabilities for each possible move that tell how likely it is to be an expert move. This probability can be stored for each of those actions. Here they call it “prior probability”, and they obviously use it while selecting which actions to explore. So basically, to decide whether or not to explore a particular move, you consider two things: First, by playing this move, how likely are you to win? Yes, we already have our “value network” to answer this first question. And the second question is, how likely is it that an expert would choose this move? (If a move is super unlikely to be chosen by an expert, why even waste time considering it? This we get from the policy network.) Then let’s talk about the “mixing parameter” (see, we came back to it!). As discussed earlier, to evaluate positions, you have two options: one, simply use the value network you have been using to evaluate states all along. And two, you can try to quickly play a rollout game with your current strategy (assuming the other player will play similarly), and see if you win or lose. We saw how the value function was better than doing rollouts in general. Here they combine both. You try giving each prediction 50–50 importance, or 40–60, or 0–100, and so on. If you attach a weight of X% to the first, you’ll have to attach 100-X% to the second. That’s what this mixing parameter means. You’ll see these trial-and-error results later in the paper. After each roll out, you update your search tree with whatever information you gained during the simulation, so that your next simulation is more intelligent. And at the end of all simulations, you just pick the best move. Interesting insight here! Remember how the RL fine-tuned policy NN was better than just the SL human-trained policy NN? But when you put them within the MCTS algorithm of AlphaGo, using the human-trained NN proved to be a better choice than the fine-tuned NN. But in the case of the value function (which you would remember uses a strong player to approximate a perfect player), training Foma using the RL policy works better than training her with the SL policy. “Doing all this evaluation takes a lot of computing power. We really had to bring out the big guns to be able to run these damn programs.” Self explanatory. “LOL, our program literally blew the pants off of every other program that came before us” This goes back to that “mixing parameter” again. While evaluating positions, giving equal importance to both the value function and the rollouts performed better than just using one of them. The rest is self explanatory, and reveals an interesting insight! Self explanatory. Self explanatory. But read that red underlined sentence again. I hope you can see clearly now that this line right here is pretty much the summary of what this whole research project was all about. Concluding paragraph.
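To tie the pieces in this section together, here is a rough sketch (mine, not DeepMind’s implementation) of the two ideas just described: pick the next move to explore by adding an exploration bonus based on the stored prior probability to Q(s,a), and evaluate a leaf by mixing the value network’s estimate with the rollout outcome. The constant and the exact bookkeeping are simplified.

```python
import math

# Minimal sketch of the two ideas above, not DeepMind's implementation.
# For each edge (s, a) the tree stores: N = visit count, Q = mean value so far,
# P = prior probability from the policy network.

C_EXPLORE = 5.0   # exploration constant (the value here is arbitrary)
LAMBDA = 0.5      # mixing parameter between value net and rollout outcome

def select_move(edges):
    """Pick the move to explore next: value so far + prior-weighted exploration bonus."""
    total_visits = sum(e["N"] for e in edges.values())
    def score(e):
        u = C_EXPLORE * e["P"] * math.sqrt(total_visits + 1) / (1 + e["N"])
        return e["Q"] + u                      # Q(s,a) + u(s,a) from the text
    return max(edges, key=lambda a: score(edges[a]))

def evaluate_leaf(value_net_estimate, rollout_outcome, lam=LAMBDA):
    """Blend the value network's opinion with the fast rollout's result."""
    return (1 - lam) * value_net_estimate + lam * rollout_outcome

# Example: two candidate moves at some position.
edges = {"D4":  {"N": 10, "Q": 0.52, "P": 0.35},
         "Q16": {"N": 2,  "Q": 0.40, "P": 0.30}}
print(select_move(edges))                      # the rarely-visited move gets a bigger bonus
print(evaluate_leaf(0.6, 1.0))                 # a 50-50 mix -> 0.8
```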
“Let us brag a little more here because we deserve it!” :) Oh and if you’re a scientist or tech company, and need some help in explaining your science to non-technical people for marketing, PR or training etc, I can help you. Drop me a message on Twitter: @mngrwl
Eugenio Culurciello
6.4K
8
https://towardsdatascience.com/the-fall-of-rnn-lstm-2d1594c74ce0?source=tag_archive---------3----------------
The fall of RNN / LSTM – Towards Data Science
We fell for Recurrent neural networks (RNN), Long-short term memory (LSTM), and all their variants. Now it is time to drop them! It is the year 2014 and LSTM and RNN make a great comeback from the dead. We all read Colah’s blog and Karpathy’s ode to RNN. But we were all young and inexperienced. For a few years this was the way to solve sequence learning and sequence translation (seq2seq), which also delivered amazing results in speech-to-text comprehension and the rise of Siri, Cortana, Google voice assistant and Alexa. Also let us not forget machine translation, which gave us the ability to translate documents into different languages (neural machine translation), but also to translate images into text, text into images, and caption video, and ... well, you get the idea. Then in the following years (2015–16) came ResNet and Attention. One could then better understand that LSTMs were a clever bypass technique. Also, attention showed that MLP networks could be replaced by averaging networks influenced by a context vector. More on this later. It only took 2 more years, but today we can definitely say: But do not take our word for it; also see evidence that attention-based networks are used more and more by Google, Facebook and Salesforce, to name a few. All these companies have replaced RNNs and variants with attention-based models, and it is just the beginning. RNNs’ days are numbered in all applications, because they require more resources to train and run than attention-based models. See this post for more info. Remember that RNN and LSTM and derivatives use mainly sequential processing over time. See the horizontal arrow in the diagram below: This arrow means that long-term information has to sequentially travel through all cells before getting to the present processing cell. This means it can be easily corrupted by being multiplied many times by small numbers < 1. This is the cause of vanishing gradients. To the rescue came the LSTM module, which today can be seen as multiple switch gates, and a bit like ResNet it can bypass units and thus remember for longer time steps. LSTM thus has a way to remove some of the vanishing gradient problems. But not all of them, as you can see from the figure above. Still we have a sequential path from older past cells to the current one. In fact the path is now even more complicated, because it has additive and forget branches attached to it. No question, LSTM and GRU and derivatives are able to learn a lot of longer-term information! See results here; but they can remember sequences of 100s, not 1000s or 10,000s or more. And one issue of RNNs is that they are not hardware friendly. Let me explain: it takes a lot of resources we do not have to train these networks fast. Also it takes a lot of resources to run these models in the cloud, and given that the demand for speech-to-text is growing rapidly, the cloud is not scalable. We will need to process at the edge, right into the Amazon Echo! See the note below for more details. If sequential processing is to be avoided, then we can find units that “look ahead” or, better, “look back”, since most of the time we deal with real-time causal data where we know the past and want to affect future decisions. Not so in translating sentences, or analyzing recorded videos, for example, where we have all the data and can reason on it for more time. Such look-back/ahead units are neural attention modules, which we previously explained here.
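As a quick aside on the vanishing-gradient point above, here is a tiny numerical illustration of my own: a signal that is scaled by a factor slightly below 1 at every time step all but disappears over long sequences.

```python
# Toy illustration: a gradient scaled by a factor slightly below 1 at every
# time step effectively vanishes over a long sequence.
factor = 0.9
for steps in (10, 100, 1000):
    print(steps, factor ** steps)
# 10   -> 0.348...
# 100  -> 2.65e-05
# 1000 -> 1.75e-46   (effectively zero: early time steps stop influencing learning)
```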
To the rescue, and combining multiple neural attention modules, comes the “hierarchical neural attention encoder”, shown in the figure below: A better way to look into the past is to use attention modules to summarize all past encoded vectors into a context vector Ct. Notice there is a hierarchy of attention modules here, very similar to the hierarchy of neural networks. This is also similar to the Temporal convolutional network (TCN), reported in Note 3 below. In the hierarchical neural attention encoder multiple layers of attention can look at a small portion of the recent past, say 100 vectors, while layers above can look at 100 of these attention modules, effectively integrating the information of 100 x 100 vectors. This extends the ability of the hierarchical neural attention encoder to 10,000 past vectors. But more importantly, look at the length of the path needed to propagate a representation vector to the output of the network: in hierarchical networks it is proportional to log(N), where N is the number of hierarchy layers. This is in contrast to the T steps that an RNN needs to do, where T is the maximum length of the sequence to be remembered, and T >> N. This architecture is similar to a neural Turing machine, but lets the neural network decide what is read out from memory via attention. This means an actual neural network will decide which vectors from the past are important for future decisions. But what about storing to memory? The architecture above stores all previous representations in memory, unlike neural Turing machines. This can be rather inefficient: think about storing the representation of every frame in a video — most times the representation vector does not change frame-to-frame, so we really are storing too much of the same! What we can do is add another unit to prevent correlated data from being stored. For example, by not storing vectors too similar to previously stored ones. But this is really a hack; the best would be to let the application guide which vectors should be saved and which should not. This is the focus of current research studies. Stay tuned for more information. Tell your friends! It is very surprising to us to see so many companies still use RNN/LSTM for speech-to-text, many unaware that these networks are so inefficient and not scalable. Please tell them about this post. About training RNN/LSTM: RNN and LSTM are difficult to train because they require memory-bandwidth-bound computation, which is the worst nightmare for a hardware designer and ultimately limits the applicability of neural network solutions. In short, an LSTM requires 4 linear layers (MLP layers) per cell, at each sequence time-step. Linear layers require large amounts of memory bandwidth to be computed; in fact they often cannot use many compute units because the system does not have enough memory bandwidth to feed the computational units. And it is easy to add more computational units, but hard to add more memory bandwidth (not enough lines on a chip, long wires from processors to memory, etc). As a result, RNN/LSTM and variants are not a good match for hardware acceleration, and we talked about this issue before here and here. A solution will be compute-in-memory devices like the ones we work on at FWDNXT. See this repository for a simple example of these techniques. Note 1: Hierarchical neural attention is similar to the ideas in WaveNet. But instead of a convolutional neural network we use hierarchical attention modules. Also: hierarchical neural attention can also be bi-directional.
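Here is a minimal sketch of the basic building block being described: summarizing a set of past encoded vectors into a context vector Ct via dot-product attention. The sizes and the random data are placeholders of my own; the hierarchical encoder stacks modules like this one on top of each other.

```python
import numpy as np

# Minimal dot-product attention (a toy of mine, not the encoder from the post):
# summarize a set of past encoded vectors into one context vector Ct.
rng = np.random.default_rng(0)
past = rng.normal(size=(100, 64))        # 100 past encoded vectors, 64-dim each
query = rng.normal(size=64)              # what the current step is "looking for"

scores = past @ query                    # one relevance score per past vector
weights = np.exp(scores - scores.max())
weights /= weights.sum()                 # softmax over the past
context = weights @ past                 # Ct: weighted average of all past vectors

# Every past vector is one matrix multiply away from Ct, instead of being
# reached through up to 100 sequential RNN steps; stacking such modules
# hierarchically is what extends the reach toward ~10,000 past vectors.
print(context.shape)                     # (64,)
```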
Note 2: RNN and LSTM are memory-bandwidth-limited problems (see this for details). The processing unit(s) need as much memory bandwidth as the number of operations/s they can provide, making it impossible to fully utilize them! The external bandwidth is never going to be enough, and a way to slightly ameliorate the problem is to use internal fast caches with high bandwidth. The best way is to use techniques that do not require large amounts of parameters to be moved back and forth from memory, or that can be re-used for multiple computations per byte transferred (high arithmetic intensity). Note 3: Here is a paper comparing CNN to RNN. Temporal convolutional networks (TCN) “outperform canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory”. Note 4: Related to this topic is the fact that we know little of how our human brain learns and remembers sequences. “We often learn and recall long sequences in smaller segments, such as a phone number 858 534 22 30 memorized as four segments. Behavioral experiments suggest that humans and some animals employ this strategy of breaking down cognitive or behavioral sequences into chunks in a wide variety of tasks” — these chunks remind me of small convolutional or attention-like networks on smaller sequences, that are then hierarchically strung together like in the hierarchical neural attention encoder and the Temporal convolutional network (TCN). More studies make me think that working memory is similar to RNN networks that use recurrent real neuron networks, and their capacity is very low. On the other hand, both the cortex and hippocampus give us the ability to remember really long sequences of steps (like: where did I park my car at the airport 5 days ago), suggesting that more parallel pathways may be involved in recalling long sequences, where attention mechanisms gate important chunks and force hops in parts of the sequence that are not relevant to the final goal or task. Note 5: The above evidence shows we do not read sequentially; in fact we interpret characters, words and sentences as a group. An attention-based or convolutional module perceives the sequence and projects a representation in our mind. We would not be misreading this if we processed this information sequentially! We would stop and notice the inconsistencies! I have almost 20 years of experience in neural networks in both hardware and software (a rare combination). See about me here: Medium, webpage, Scholar, LinkedIn, and more... If you found this article useful, please consider a donation to support more tutorials and blogs. Any contribution can make a difference!
Gary Marcus
1.3K
27
https://medium.com/@GaryMarcus/in-defense-of-skepticism-about-deep-learning-6e8bfd5ae0f1?source=tag_archive---------4----------------
In defense of skepticism about deep learning – Gary Marcus – Medium
In a recent appraisal of deep learning (Marcus, 2018) I outlined ten challenges for deep learning, and suggested that deep learning by itself, although useful, was unlikely to lead on its own to artificial general intelligence. I suggested instead the deep learning be viewed “not as a universal solvent, but simply as one tool among many.” In place of pure deep learning, I called for hybrid models, that would incorporate not just supervised forms of deep learning, but also other techniques as well, such as symbol-manipulation, and unsupervised learning (itself possibly reconceptualized). I also urged the community to consider incorporating more innate structure into AI systems. Within a few days, thousands of people had weighed in over Twitter, some enthusiastic (“e.g, the best discussion of #DeepLearning and #AI I’ve read in many years”), some not (“Thoughtful... But mostly wrong nevertheless”). Because I think clarity around these issues is so important, I’ve compiled a list of fourteen commonly-asked queries. Where does unsupervised learning fit in? Why didn’t I say more nice things about deep learning? What gives me the right to talk about this stuff in the first place? What’s up with asking a neural network to generalize from even numbers to odd numbers? (Hint: that’s the most important one). And lots more. I haven’t addressed literally every question I have seen, but I have tried to be representative. 1. What is general intelligence? Thomas Dietterich, an eminent professor of machine learning, and my most thorough and explicit critic thus far, gave a nice answer that I am very comfortable with: 2. Marcus wasn’t very nice to deep learning. He should have said more nice things about all of its vast accomplishments. And he minimizes others. Dietterich, mentioned above, made both of these points, writing: On the first part of that, true, I could have said more positive things. But it’s not like I didn’t say any. Or even like I forgot to mention Dietterich’s best example; I mentioned it on the first page: More generally, later in the article I cited a couple of great texts and excellent blogs that have pointers to numerous examples. A lot of them though, would not really count as AGI, which was the main focus of my paper. (Google Translate, for example, is extremely impressive, but it’s not general; it can’t, for example, answer questions about what it has translated, the way a human translator could.) The second part is more substantive. Is 1,000 categories really very finite? Well, yes, compared to the flexibility of cognition. Cognitive scientists generally place the number of atomic concepts known by an individual as being on the order of 50,000, and we can easily compose those into a vastly greater number of complex thoughts. Pets and fish are probably counted in those 50,000; pet fish, which is something different, probably isn’t counted. And I can easily entertain the concept of “a pet fish that is suffering from Ick”, or note that “it is always disappointing to buy a pet fish only to discover that it was infected with Ick” (an experience that I had as a child and evidently still resent). How many ideas like that I can express? It’s a lot more than 1,000. I am not precisely sure how many visual categories a person can recognize, but suspect the math is roughly similar. Try google images on “pet fish”, and you do ok; try it on “pet fish wearing goggles” and you mostly find dogs wearing goggles, with a false alarm rate of over 80%. 
Machines win over nonexpert humans on distinguishing similar dog breeds, but people win, by a wide margin, on interpreting complex scenes, like what would happen to a skydiver who was wearing a backpack rather than a parachute. In focusing on 1,000 category chunks the machine learning field is, in my view, doing itself a disservice, trading a short-term feeling of success for a denial of harder, more open-ended problems (like scene and sentence comprehension) that must eventually be addressed. Compared to the essentially infinite range of sentences and scenes we can see and comprehend, 1000 of anything really is small. [See also Note 2 at bottom] 3. Marcus says deep learning is useless, but it’s great for many things Of course it is useful; I never said otherwise, only that (a) in its current supervised form, deep learning might be approaching its limits and (b) that those limits would stop short from full artificial general intelligence — unless, maybe, we started incorporating a bunch of other stuff like symbol-manipulation and innateness. The core of my conclusion was this: 4. “One thing that I don’t understand. — @GaryMarcus says that DL is not good for hierarchical structures. But in @ylecun nature review paper [says that] that DL is particularly suited for exploiting such hierarchies.” This is an astute question, from Ram Shankar, and I should have been a LOT clearer about the answer: there are many different types of hierarchy one could think about. Deep learning is really good, probably the best ever, at the sort of feature-wise hierarchy LeCun talked about, which I typically refer to as hierarchical feature detection; you build lines out of pixels, letters out of lines, words out of letters and so forth. Kurzweil and Hawkins have emphasized this sort of thing, too, and it really goes back to Hubel and Wiesel (1959)in neuroscience experiments and to Fukushima. (Fukushima, Miyake, & Ito, 1983) in AI. Fukushima, in his Neocognitron model, hand-wired his hierarchy of successively more abstract features; LeCun and many others after showed that (at least in some cases) you don’t have to hand engineer them. But you don’t have to keep track of the subcomponents you encounter along the way; the top-level system need not explicitly encode the structure of the overall output in terms of which parts were seen along the way; this is part of why a deep learning system can be fooled into thinking a pattern of a black and yellow stripes is a school bus. (Nguyen, Yosinski, & Clune, 2014). That stripe pattern is strongly correlated with activation of the school bus output units, which is in turn correlated with a bunch of lower-level features, but in a typical image-recognition deep network, there is no fully-realized representation of a school bus as being made up of wheels, a chassis, windows, etc. Virtually the whole spoofing literature can be thought of in these terms. [Note 3] The structural sense of hierarchy which I was discussing was different, and focused around systems that can make explicit reference to the parts of larger wholes. The classic illustration would be Chomsky’s sense of hierarchy, in which a sentence is composed of increasingly complex grammatical units (e.g., using a novel phrase like the man who mistook his hamburger for a hot dog with a larger sentence like The actress insisted that she would not be outdone by the man who mistook his hamburger for a hot dog). 
I don’t think deep learning does well here (e.g., in discerning the relation between the actress, the man, and the misidentified hot dog), though attempts have certainly been made. Even in vision, the problem is not entirely licked; Hinton’s recent capsule work (Sabour, Frosst, & Hinton, 2017), for example, is an attempt to build in more robust part-whole directions for image recognition, by using more structured networks. I see this as a good trend, and one potential way to begin to address the spoofing problem, but also as a reflection of trouble with the standard deep learning approach. 5. “It’s weird to discuss deep learning in [the] context of general AI. General AI is not the goal of deep learning!” Best twitter response to this came from University of Quebec professor Daniel Lemire: “Oh! Come on! Hinton, Bengio... are openly going for a model of human intelligence.” Second prize goes to a math PhD at Google, Jeremy Kun, who countered the dubious claim that “General AI is not the goal of deep learning” with “If that’s true, then deep learning experts sure let everyone believe it is without correcting them.” Andrew Ng’s recent Harvard Business Review article, which I cited, implies that deep learning can do anything a person can do in a second. Thomas Dietterich’s tweet that said in part “it is hard to argue that there are limits to DL”. Jeremy Howard worried that the idea that deep learning is overhyped might itself be overhyped, and then suggested that every known limit had been countered. DeepMind’s recent AlphaGo paper [See Note 4] is positioned somewhat similarly, with Silver et al (Silver et al., 2017) enthusiastically reporting that: In that paper’s concluding discussion, not one of the 10 challenges to deep learning that I reviewed was mentioned. (As I will discuss in a paper coming out soon, it’s not actually a pure deep learning system, but that’s a story for another day.) The main reason people keep benchmarking their AI systems against humans is precisely because AGI is the goal. 6. What Marcus said is a problem with supervised learning, not deep learning. Yann LeCun presented a version of this, in a comment on my Facebook page: The part about my allegedly not recognizing LeCun’s recent work is, well, odd. It’s true that I couldn’t find a good summary article to cite (when I asked LeCun, he told me by email that there wasn’t one yet) but I did mention his interest explicitly: I also noted that: My conclusion was positive, too. Although I expressed reservations about current approaches to building unsupervised systems, I ended optimistically: What LeCun’s remark does get right is that many of the problems I addressed are a general problem with supervised learning, not something unique to deep learning; I could have been more clear about this. Many other supervised learning techniques face similar challenges, such as problems in generalization and dependence on massive data sets; relatively little of what I said is unique to deep learning. In my focus on assessing deep learning at the five year resurgence mark, I neglected to say that. But it doesn’t really help deep learning that other supervised learning techniques are in the same boat. If someone could come up with a truly impressive way of using deep learning in an unsupervised way, a reassessment might be required. 
But I don’t see that unsupervised learning, at least as it currently pursued, particularly remedies the challenges I raised, e.g., with respect to reasoning, hierarchical representations, transfer, robustness, and interpretability. It’s simply a promissory note. [Note 5] As Portland State and Santa Fe Institute Professor Melanie Mitchell’s put it in a thus far unanswered tweet: I would, too. In the meantime, I see no principled reason to believe that unsupervised learning can solve the problems I raise, unless we add in more abstract, symbolic representations, first. 7. Deep learning is not just convolutional networks [of the sort Marcus critiqued], it’s “essentially a new style of programming — ”differentiable programming” — and the field is trying to work out the reusable constructs in this style. We have some: convolution, pooling, LSTM, GAN, VAE, memory units, routing units, etc” — Tom Dietterich This seemed (in the context of Dietterich’s longer series of tweets) to have been proposed as a criticism, but I am puzzled by that, as I am a fan of differentiable programming and said so. Perhaps the point was that deep learning can be taken in a broader way. In any event, I would not equate deep learning and differentiable programming (e.g., approaches that I cited like neural Turing machines and neural programming). Deep learning is a component of many differentiable systems. But such systems also build in exactly the sort of elements drawn from symbol-manipulation that I am and have been urging the field to integrate (Marcus, 2001; Marcus, Marblestone, & Dean, 2014a; Marcus, Marblestone, & Dean, 2014b), including memory units and operations over variables, and other systems like routing units stressed in the more recent two essays. If integrating all this stuff into deep learning is what gets us to AGI, my conclusion, quoted below, will have turned out to be dead on: 8. Now vs the future. Maybe deep learning doesn’t work now, but it’s offspring will get us to AGI. Possibly. I do think that deep learning might play an important role in getting us to AGI, if some key things (many not yet discovered) are added in first. But what we add matters, and whether it is reasonable to call some future system an instance of deep learning per se, or more sensible to call the ultimate system “a such-and-such that uses deep learning”, depends on where deep learning fits into the ultimate solution. Maybe, for example, in truly adequate natural language understanding systems, symbol-manipulation will play an equally large role as deep learning, or an even larger one. Part of the issue here is of course terminological. A very good friend recently asked me, why can’t we just call anything that includes deep learning, deep learning, even if it includes symbol-manipulation? Some enhancement to deep learning ought to work. To which I respond: why not call anything that includes symbol-manipulation, symbol-manipulation, even if it includes deep learning? Gradient-based optimization should get its due, but so should symbol-manipulation, which as yet is the only known tool for systematically representing and achieving high-level abstraction, bedrock to virtually all of the world’s complex computer systems, from spreadsheets to programming environments to operating systems. 
Eventually, I conjecture, credit will also be due to the inevitable marriage between the two, hybrid systems that bring together the two great ideas of 20th century AI, symbol-processing and neural networks, both initially developed in the 1950s. Other new tools yet to be invented may be critical as well. To a true acolyte of deep learning, anything is deep learning, no matter what it’s incorporating, and no matter how different it might be from current techniques. (Viva Imperialism!) If you replaced every transistor in a classic symbolic microprocessor with a neuron, but kept the chip’s logic entirely unchanged, a true deep learning acolyte would still declare victory. But we won’t understand the principles driving (eventual) success if we lump everything together. [Note 6] 9. No machine can extrapolate. It’s not fair to expect a neural network to generalize from even numbers to odd numbers. Here’s a function, expressed over binary digits. f(110) = 011; f(100) = 001; f(010) = 010. What’s f(111)? If you are an ordinary human, you are probably going to guess 111. If you are a neural network of the sort I discussed, you probably won’t. If you have been told many times that hidden layers in neural networks “abstract functions”, you should be a little bit surprised by this. If you are a human, you might think of the function as something like “reversal”, easily expressed in a line of computer code. If you are a neural network of a certain sort, it’s very hard to learn the abstraction of reversal in a way that extends from evens in that context to odds. But is that impossible? Certainly not if you have a prior notion of an integer. Try another, this time in decimal: f(4) = 8; f(6) = 12. What’s f(5)? None of my human readers would care that this question happens to require you to extrapolate from even numbers to odds; a lot of neural networks would be flummoxed. Sure, the function is undetermined by the sparse number of examples, like all functions, but it is interesting and important that most people would (amid the infinite range of a priori possible inductions) alight on f(5)=10. And just as interesting that most standard multilayer perceptrons, representing the numbers as binary digits, wouldn’t. That’s telling us something, but many people in the neural network community, François Chollet being one very salient exception, don’t want to listen. Importantly, recognizing that a rule applies to any integer is roughly the same kind of generalization that allows one to recognize that a novel noun that can be used in one context can be used in a huge variety of other contexts. From the first time I hear the word blicket used as an object, I can guess that it will fit into a wide range of frames, like I thought I saw a blicket, I had a close encounter with a blicket, and exceptionally large blickets frighten me, etc. And I can both generate and interpret such sentences, without specific further training. It doesn’t matter whether blicket is or is not similar in (for example) phonology to other words I have heard, nor whether I pile on the adjectives or use the word as a subject or an object. If most machine learning [ML] paradigms have a problem with this, we should have a problem with most ML paradigms. Am I being “fair”? Well, yes, and no. It’s true that I am asking neural networks to do something that violates their assumptions.
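Since the text notes that the human-inferred rule is “easily expressed in a line of computer code”, here it is, written out along with the training cases. This is only the rule a person would induce, sketched by me; it is not a claim about what any particular network will or will not learn.

```python
# The three training cases from the text, as binary strings.
train = {"110": "011", "100": "001", "010": "010"}

# The rule most humans infer -- string reversal -- really is one line of code:
f = lambda s: s[::-1]

assert all(f(x) == y for x, y in train.items())
print(f("111"))   # "111" (the rightmost digit was never a 1 in training)

# The decimal example works the same way: infer "doubling" from f(4)=8, f(6)=12
# and apply it to an odd input.
g = lambda n: 2 * n
print(g(5))       # 10
```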
A neural network advocate might, for example, say, “hey wait a minute, in your reversal example, there are three dimensions in your input space, representing the left binary digit, the middle binary digit, and the rightmost binary digit. The rightmost binary digit has only been a zero in the training; there is no way a network can know what to do when you get to a one in that position.” For example, Vincent Lostenlan, a postdoc at Cornell, said as much; Dietterich made essentially the same point, more concisely. But although both are right about why odds-and-evens are (in this context) hard for deep learning, they are both wrong about the larger issues for three reasons. First, it can’t be that people can’t extrapolate. You just did, in two different examples, at the top of this section. Paraphrasing Chico Marx: who are you going to believe, me or your own eyes? To someone immersed deeply — perhaps too deeply — in contemporary machine learning, my odds-and-evens problem seems unfair because a certain dimension (the one which contains the value of 1 in the rightmost digit) hasn’t been illustrated in the training regime. But when you, a human, look at my examples above, you will not be stymied by this particular gap in the training data. You won’t even notice it, because your attention is on higher-level regularities. People routinely extrapolate in exactly the fashion that I have been describing, like recognizing string reversal from the three training examples I gave above. In a technical sense, that is extrapolation, and you just did it. In The Algebraic Mind I referred to this specific kind of extrapolation as generalizing universally quantified one-to-one mappings outside of a space of training examples. As a field we desperately need a solution to this challenge, if we are ever to catch up to human learning — even if it means shaking up our assumptions. Now, it might reasonably be objected that it’s not a fair fight: humans manifestly depend on prior knowledge when they generalize such mappings. (In some sense, Dietterich proposed this objection later in his tweet stream.) True enough. But in a way, that’s the point: neural networks of a certain sort don’t have a good way of incorporating the right sort of prior knowledge in the first place. It is precisely because those networks don’t have a way of incorporating prior knowledge like “many generalizations hold for all elements of unbounded classes” or “odd numbers leave a remainder of one when divided by two” that neural networks that lack operations over variables fail: they lack the right sort of prior knowledge that would allow them to acquire and represent universally quantified one-to-one mappings. Standard neural networks can’t represent such mappings, except in certain limited ways. (Convolution is a way of building in one particular such mapping, prior to learning.) Second, saying that no current system (deep learning or otherwise) can extrapolate in the way that I have described is no excuse; once again, other architectures may be in the same choppy water, but that doesn’t mean we shouldn’t be trying to swim to shore. If we want to get to AGI, we have to solve the problem. (Put differently: yes, one could certainly hack together solutions to get deep learning to solve my specific number series problems, by, for example, playing games with the input encoding schemes; the real question, if we want to get to AGI, is how to have a system learn the sort of generalizations I am describing in a general way.)
Third, the claim that no current system can extrapolate turns out to be, well, false; there are already ML systems that can extrapolate at least some functions of exactly the sort I described, and you probably own one: Microsoft Excel, its Flash Fill function in particular (Gulwani, 2011). Powered by a very different approach to machine learning, it can do certain kinds of extrapolation, albeit in a narrow context, by the bushel: e.g., try typing the (decimal) digits 1, 11, 21 in a series of rows and see if the system can extrapolate via Flash Fill to the eleventh item in the sequence (101). Spoiler alert: it can, in exactly the same way as you probably would, even though there were no positive examples in the training dimension of the hundreds digit. The system learns from examples the function you want and extrapolates it. Piece of cake. Can any deep learning system do that with three training examples, even with a range of experience on other small counting functions, like 1, 3, 5, ... and 2, 4, 6 ...? Well, maybe, but the ones that are likely to do so are likely to be hybrids that build in operations over variables, which are quite different from the sort of typical convolutional neural networks that most people associate with deep learning. Putting all this very differently, one crude way to think about where we are with most ML systems that we have today [Note 7] is that they just aren’t designed to think “outside the box”; they are designed to be awesome interpolators inside the box. That’s fine for some purposes, but not others. Humans are better at thinking outside boxes than contemporary AI; I don’t think anyone can seriously doubt that. But that kind of extrapolation, which Flash Fill can do in a narrow context but which no machine can do with human-like breadth, is precisely what machine learning engineers really ought to be working on, if they want to get to AGI.
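Here is a toy sketch of my own of the kind of induction just described: infer the “1, 11, 21, …” pattern from three examples and extend it to the eleventh item. Flash Fill itself searches a much richer space of string-transformation programs; this only illustrates the learn-a-function-then-extrapolate pattern.

```python
# Toy version of the extrapolation task above: infer an arithmetic progression
# from three examples and extend it past the range seen in training.
examples = [1, 11, 21]
steps = {b - a for a, b in zip(examples, examples[1:])}
assert len(steps) == 1, "not a constant-step pattern"
step = steps.pop()

def item(n, start=examples[0], step=step):
    """n-th item of the inferred sequence, counting from 1."""
    return start + (n - 1) * step

print(item(11))   # 101 -- no training example ever had a nonzero hundreds digit
```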
Hesitant to raise this one, but it came up in all kinds of different responses, even from the mouths of certain well-known professionals. As Ram Shankar noted, “As a community, we must circumscribe our criticism to science and merit based arguments.” What really matters is not my credentials (which I believe do in fact qualify me to write) but the validity of the arguments. Either my arguments are correct, or they are not. [Still, for those who are curious, I supply an optional mini-history of some of my relevant credentials in Note 8 at the end.] 13. Re: hierarchy, what about Socher’s tree-RNNs? I have written to him, in hopes of having a better understanding of its current status. I’ve also privately pushed several other teams towards trying out tasks like Lake and Baroni (2017) presented. Pengfei et al (2017) offers some interesting discussion. 14. You could have been more critical of deep learning. Nobody quite said that, not in exactly those words, but a few came close, generally privately. One colleague for example pointed out that there may be some serious errors of future forecasting around The same colleague added Another colleague, ML researcher and author Pedro Domingos, pointed out still other shortcomings of current deep learning methods that I didn’t mention: Like other flexible supervised learning methods, deep learning systems can be unstable in the sense that slightly changing the training data may result in large changes in the resulting model. As Domingos notes, there’s no guarantee this sort of rise and decline won’t repeat itself. Neural networks have risen and fallen several times before, all the way back to Rosenblatt’s first Perceptron in 1957. We shouldn’t mistake cyclical enthusiasm for a complete solution to intelligence, which still seems (to me, anyway) to be decades away. If we want to reach AGI, we owe it to ourselves to be as keenly aware of challenges we face as we are of our successes. 2. There are other problems too in relying on these 1,000 image sets. For example, in reading a draft of this paper, Melanie Mitchell pointed me to important recent work by Loghmani and colleague (2017) on assessing how deep learning does in the real world. Quoting from the abstract, the paper “analyzes the transferability of deep representations from Web images to robotic data [in the wild]. Despite the promising results obtained with [representations developed from Web image], the experiments demonstrate that object classification with real-life robotic data is far from being solved.” 3. And that literature is growing fast. In late December there was a paper about fooling deep nets into mistaking a pair of skiers for a dog [https://arxiv.org/pdf/1712.09665.pdf] and another on a general-purpose tool for building real-world adversarial patches: https://arxiv.org/pdf/1712.09665.pdf. (See also https://arxiv.org/abs/1801.00634.) It’s frightening to think how vulnerable deep learning can be real-world contexts. And for that matter consider Filip Pieknewski’s blog on why photo-trained deep learning systems have trouble transferring what they have learned to line drawings, https://blog.piekniewski.info/2016/12/29/can-a-deep-net-see-a-cat/. Vision is not as solved as many people seem to think. 4. As I will explain in the forthcoming paper, AlphaGo is not actually a pure [deep] reinforcement learning system, although the quoted passage presented it as such. 
It’s really more of a hybrid, with important components that are driven by symbol-manipulating algorithms, along with a well engineered deep-learning component. 5. AlphaZero, by the way, isn’t unsupervised, it’s self-supervised, using self-play and simulation as a way of generating supervised data; I will have a lot more to say about that system in a forthcoming paper. 6. Consider, for example, Google Search, and how one might understand it. Google has recently added a deep learning algorithm, RankBrain, to the wide array of algorithms it uses for search. And Google Search certainly takes in data and knowledge and processes them hierarchically (which according to Maher Ibrahim is all you need to count as being deep learning). But, realistically, deep learning is just one cue among many; the knowledge graph component, for example, is based instead primarily on classical AI notions of traversing ontologies. By any reasonable measure Google Search is a hybrid, with deep learning as just one strand among many. Calling Google Search as a whole “a deep learning system” would be grossly misleading, akin to relabeling carpentry “screwdrivery”, just because screwdrivers happen to be involved. 7. Important exceptions include inductive logic programming, inductive function programming (the brains behind Microsoft’s Flash Fill) and neural programming. All are making some progress here; some of these even include deep learning, but they also all include structured representations and operations over variables among their primitive operations; that’s all I am asking for. 8. My AI experiments began in adolescence, with, among other things, a Latin-English translator that I coded in the programming language Logo. In graduate school, studying with Steven Pinker, I explored the relation between language acquisition, symbolic rules, and neural networks. (I also owe a debt to my undergraduate mentor Neil Stillings.) The child language data I gathered (Marcus et al., 1992) for my dissertation have been cited hundreds of times, and were the most frequently-modeled data in the 90’s debate about neural networks and how children learned language. In the late 1990’s I discovered some specific, replicable problems with multilayer perceptrons (Marcus, 1998a; Marcus, 1998b); based on those observations, I designed a widely-cited experiment, published in Science (Marcus, Vijayan, Bandi Rao, & Vishton, 1999), that showed that young infants could extract algebraic rules, contra Jeff Elman’s (1990) then-popular neural network. All of this culminated in a 2001 MIT Press book (Marcus, 2001), which lobbied for a variety of representational primitives, some of which have begun to pop up in recent neural networks; in particular, the use of operations over variables in the new field of differentiable programming (Daniluk, Rocktäschel, Welbl, & Riedel, 2017; Graves et al., 2016) owes something to the position outlined in that book. There was a strong emphasis on having memory records as well, which can be seen in the memory networks being developed, e.g., at Facebook (Bordes, Usunier, Chopra, & Weston, 2015).
The next decade saw me work on other problems including innateness (Marcus, 2004) (which I will discuss at length in the forthcoming piece about AlphaGo) and evolution (Marcus, 2004; Marcus, 2008). I eventually returned to AI and cognitive modeling, publishing a 2014 article on cortical computation in Science (Marcus, Marblestone, & Dean, 2014) that also anticipates some of what is now happening in differentiable programming. More recently, I took a leave from academia to found and lead a machine learning company in 2014; by any reasonable measure that company was successful, acquired by Uber roughly two years after founding. As co-founder and CEO I put together a team of some of the very best machine learning talent in the world, including Zoubin Ghahramani, Jeff Clune, Noah Goodman, Ken Stanley and Jason Yosinski, and played a pivotal role in developing our core intellectual property and shaping our intellectual mission. (A patent is pending, co-written by Zoubin Ghahramani and myself.) Although much of what we did there remains confidential, now owned by Uber, and not by me, I can say that a large part of our efforts were addressed towards integrating deep learning with our own techniques, which gave me a great deal of familiarity with the joys and tribulations of TensorFlow and vanishing (and exploding) gradients. We aimed for state-of-the-art results (sometimes successfully, sometimes not) with sparse data, using hybridized deep learning systems on a daily basis. Bordes, A., Usunier, N., Chopra, S., & Weston, J. (2015). Large-scale Simple Question Answering with Memory Networks. arXiv. Daniluk, M., Rocktäschel, T., Welbl, J., & Riedel, S. (2017). Frustratingly Short Attention Spans in Neural Language Modeling. arXiv. Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179–211. Evans, R., & Grefenstette, E. (2017). Learning Explanatory Rules from Noisy Data. arXiv, cs.NE. Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989). The structure-mapping engine: Algorithm and examples. Artificial Intelligence, 41(1), 1–63. Fukushima, K., Miyake, S., & Ito, T. (1983). Neocognitron: A neural network model for a mechanism of visual pattern recognition. IEEE Transactions on Systems, Man, and Cybernetics, 5, 826–834. Garnelo, M., Arulkumaran, K., & Shanahan, M. (2016). Towards Deep Symbolic Reinforcement Learning. arXiv, cs.AI. Goodman, N., Mansinghka, V., Roy, D. M., Bonawitz, K., & Tenenbaum, J. B. (2012). Church: a language for generative models. arXiv preprint arXiv:1206.3255. Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A. et al. (2016). Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626), 471–476. Gulwani, S. (2011). Automating string processing in spreadsheets using input-output examples. ACM SIGPLAN Notices, 46(1), 317–330. Gulwani, S., Hernández-Orallo, J., Kitzelmann, E., Muggleton, S. H., Schmid, U., & Zorn, B. (2015). Inductive programming meets the real world. Communications of the ACM, 58(11), 90–99. Hofstadter, D. R., & Mitchell, M. (1994). The copycat project: A model of mental fluidity and analogy-making. Advances in Connectionist and Neural Computation Theory, 2, 31–112. Hosseini, H., Xiao, B., Jaiswal, M., & Poovendran, R. (2017). On the Limitation of Convolutional Neural Networks in Recognizing Negative Images. arXiv, cs.CV. Hubel, D. H., & Wiesel, T. N. (1959). Receptive fields of single neurones in the cat’s striate cortex.
The Journal of Physiology, 148(3), 574–591. Lake, B. M., & Baroni, M. (2017). Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks. arXiv. Loghmani, M. R., Caputo, B., & Vincze, M. (2017). Recognizing Objects In-the-wild: Where Do We Stand? arXiv, cs.RO. Marcus, G. F. (1998a). Rethinking eliminative connectionism. Cognitive Psychology, 37(3), 243–282. Marcus, G. F. (1998b). Can connectionism save constructivism? Cognition, 66(2), 153–182. Marcus, G. F. (2001). The Algebraic Mind: Integrating Connectionism and Cognitive Science. Cambridge, Mass.: MIT Press. Marcus, G. F. (2004). The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought. Basic Books. Marcus, G. F. (2008). Kluge: The Haphazard Construction of the Human Mind. Boston: Houghton Mifflin. Marcus, G. (2018). Deep Learning: A Critical Appraisal. arXiv. Marcus, G. F., Marblestone, A., & Dean, T. (2014a). The atoms of neural computation. Science, 346(6209), 551–552. Marcus, G. F., Marblestone, A. H., & Dean, T. L. (2014b). Frequently Asked Questions for: The Atoms of Neural Computation. bioRxiv (arXiv), q-bio.NC. Marcus, G. F., Pinker, S., Ullman, M., Hollander, M., Rosen, T. J., & Xu, F. (1992). Overregularization in language acquisition. Monographs of the Society for Research in Child Development, 57(4), 1–182. Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Rule learning by seven-month-old infants. Science, 283(5398), 77–80. Nguyen, A., Yosinski, J., & Clune, J. (2014). Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images. arXiv, cs.CV. Pengfei, L., Xipeng, Q., & Xuanjing, H. (2017). Dynamic Compositional Neural Networks over Tree Structure. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17). Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. arXiv, cs.LG. Richardson, M., & Domingos, P. (2006). Markov logic networks. Machine Learning, 62(1), 107–136. Sabour, S., Frosst, N., & Hinton, G. E. (2017). Dynamic Routing Between Capsules. arXiv, cs.CV. Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A. et al. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354–359. Smolensky, P., Lee, M., He, X., Yih, W.-t., Gao, J., & Deng, L. (2016). Basic Reasoning with Tensor Product Representations. arXiv, cs.AI.
Bargava
11.8K
3
https://towardsdatascience.com/how-to-learn-deep-learning-in-6-months-e45e40ef7d48?source=tag_archive---------5----------------
How to learn Deep Learning in 6 months – Towards Data Science
It is quite possible to learn, follow and contribute to state-of-the-art work in deep learning in about 6 months’ time. This article details the steps to achieve that. Pre-requisites - You are willing to spend 10–20 hours per week for the next 6 months. - You have some programming skills. You should be comfortable picking up Python along the way. And cloud. (No background in Python and cloud assumed.) - Some math education in the past (algebra, geometry etc). - Access to the internet and a computer.
Step 1 We learn to drive a car — by driving. Not by learning how the clutch and the internal combustion engine work. At least not initially. When learning deep learning, we will follow the same top-down approach. Do the fast.ai course — Practical Deep Learning for Coders — Part 1. This takes about 4–6 weeks of effort. This course has a session on running the code on the cloud. Google Colaboratory has free GPU access. Start with that. Other options include Paperspace, AWS, GCP, Crestle and Floydhub. All of these are great. Do not start to build your own machine. At least not yet.
Step 2 This is the time to learn some of the basics: calculus and linear algebra. For calculus, Big Picture of Calculus provides a good overview. For linear algebra, Gilbert Strang’s MIT course on OpenCourseWare is amazing. Once you finish the above two, read Matrix Calculus for Deep Learning.
Step 3 Now is the time to understand the bottom-up approach to deep learning. Do all 5 courses in the Deep Learning Specialization on Coursera. You need to pay to get the assignments graded, but the effort is truly worth it. Ideally, given the background you have gained so far, you should be able to complete one course every week.
Step 4 Do a capstone project. This is the time where you delve deep into a deep learning library (e.g. TensorFlow, PyTorch, MXNet) and implement an architecture from scratch for a problem of your liking. The first three steps are about understanding how and where to use deep learning and gaining a solid foundation. This step is all about implementing a project from scratch and developing a strong foundation in the tools.
Step 5 Now go and do fast.ai’s Part II course — Cutting Edge Deep Learning for Coders. This covers more advanced topics, and you will learn to read the latest research papers and make sense of them.
Each of the steps should take about 4–6 weeks’ time. And in about 26 weeks since the time you started, if you followed all of the above religiously, you will have a solid foundation in deep learning. Where to go next? Do Stanford’s CS231n and CS224d courses. These two are amazing courses with great depth for vision and NLP respectively, and they cover the latest state of the art. And read the Deep Learning book. This will solidify your understanding. Happy deep learning.
Seth Weidman
2.8K
11
https://hackernoon.com/the-3-tricks-that-made-alphago-zero-work-f3d47b6686ef?source=tag_archive---------6----------------
The 3 Tricks That Made AlphaGo Zero Work – Hacker Noon
There were many advances in Deep Learning and AI in 2017, but few generated as much publicity and interest as DeepMind’s AlphaGo Zero. This program was truly a shocking breakthrough: not only did it beat the prior version of AlphaGo — the program that beat 17 time world champion Lee Sedol just a year and a half earlier — 100–0, it was trained without any data from real human games. Xavier Amatrain called it “more [significant] than anything...in the last 5 years” in Machine Learning. So how did DeepMind do it? In this essay, I’ll try to give an intuitive idea of the techniques AlphaGo Zero used, what made them work, and what the implications for future AI research are. Let’s start with the general approach that both AlphaGo and AlphaGo Zero took to playing Go. Both AlphaGo and AlphaGo Zero evaluated the Go board and chose moves using a combination of two methods: AlphaGo and AlphaGo Zero both worked by cleverly combining these two methods. Let’s look at each one in turn: Go is a sufficiently complex game that computers can’t simply search all possible moves using a brute force approach to find the best one (indeed, they can’t even come close). The best Go programs prior to AlphaGo overcame this by using “Monte Carlo Tree Search” or MCTS. At a high level, this method involves initially exploring many possible moves on the board, and then focusing this exploration over time as certain moves are found to be more likely to lead to wins than others. Both AlphaGo and AlphaGo Zero use a relatively straightforward version of MCTS for their “lookahead”, simply using many of the best practices listed in the Monte Carlo Tree Search Wikipedia page to properly manage the tradeoff between exploring new sequences of move or more deeply explore already-explored sequences (for more, see the details in the “Search” section under “Methods” in the original AlphaGo Paper published in Nature). Though, MCTS had been the core of all successful Go programs prior to AlphaGo, it was DeepMind’s clever combination of this technique with a neural network-based “intuition” that allowed it to surpass human performance. DeepMind’s major innovation with AlphaGo was to use deep neural networks to understand the state of the game, and then use this understanding to intelligently guide the search of the MCTS. More specifically: they trained networks that could look at Given this information, the neural networks could recommend: How did DeepMind train neural networks to do this? Here, AlphaGo and AlphaGo Zero used very different approaches; we’ll start first with AlphaGo’s: AlphaGo had two separately trained neural networks. DeepMind then combined these two neural networks with MCTS — that is, the program’s “intuition” with its brute force “lookahead” search— in a very clever way: it used the network that had been trained to predict moves to guide which branches of the game tree to search and used the network that had been trained to predict whether a position was “winning” to evaluate the positions it encountered during its search. This allowed AlphaGo to intelligently search upcoming moves and ultimately allowed it to beat Lee Sedol. AlphaGo Zero, however, took this to a whole new level. At a high level, AlphaGo Zero works the same way as AlphaGo: specifically, it plays Go by using MCTS-based lookahead search, intelligently guided by a neural network. 
However, AlphaGo Zero’s neural network — its “intuition” — was trained completely differently from that of AlphaGo: Let’s say you have a neural network that is attempting to “understand” the game of Go: that is, for every board position, it is using a deep neural network to generate evaluations of what the best moves are. What DeepMind realized is that no matter how intelligent this neural network is — whether it is completely clueless or a Go master — its evaluations can always be made better by MCTS. Fundamentally, MCTS performs the kind of lookahead search that we would imagine a human master would perform if given enough time: it intelligently guesses which variations — sequences of future moves — are most promising, simulates those variations, evaluates how good they actually are, and updates its assessments of its current best moves accordingly. An illustration of this is below. Suppose we have a neural network that is reading the board and determining that a given move results in a game being even, with an evaluation of 0.0. Then, the network intelligently looks ahead a few moves and finds a sequence of moves that can be forced from the current position that ends up resulting in an evaluation of 0.5. It can then update its evaluation of the current board position to reflect that it leads to a more favorable position down the road. This lookahead search, therefore, can always give us improved data on how good the various moves in the current position that the neural network is evaluating are. This is true whether our neural network is playing at an amateur level or an expert level: we can always generate improved evaluations for it by looking ahead and seeing which of its current options actually lead to better positions. In addition, just as in AlphaGo, we would also want our neural network to learn which moves are likely to lead to wins. So, also as before, our agent — using its MCTS-improved evaluations and the current state of its neural network — could play games against itself, winning some and losing others. This data, generated purely via lookahead and self-play, is what DeepMind used to train AlphaGo Zero. More specifically: Much was made of the fact that no games between humans were used to train AlphaGo Zero, and this first “trick” was the reason why: for a given state of a Go agent, it can always be made smarter by performing MCTS-based lookahead and using the results of that lookahead to improve the agent. This is how AlphaGo Zero was able to continuously improve, from when it was an amateur all the way up to when it was better than the best human players. The second trick was a novel neural network structure that I’ll call the “Two Headed Monster”: AlphaGo Zero used a single, “two-headed” neural network architecture. Its first 20 layers or so were layer “blocks” of a type often seen in modern neural net architectures. These layers were followed by two “heads”: one head that took the output of the first 20 layers and produced probabilities of the Go agent making certain moves, and another that took the output of the first 20 layers and output a probability of the current player winning. This is quite unusual. In almost all applications, neural networks output a single, fixed output — such as the probability of an image containing a dog, or a vector containing the probabilities of an image containing one of 10 types of objects.
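The two-headed design is easier to see in code than in prose. Here is a minimal Keras sketch of a shared "body" with a policy head and a value head, each trained with its own loss; the layer sizes and input shape are illustrative only, not the dimensions used in the AlphaGo Zero paper.

```python
from tensorflow.keras import layers, Model

board = layers.Input(shape=(19, 19, 8))                             # illustrative board encoding
x = layers.Conv2D(64, 3, padding="same", activation="relu")(board)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)      # shared "Body"
flat = layers.Flatten()(x)

# Head #1: a probability for each possible move (361 board points + pass).
policy = layers.Dense(19 * 19 + 1, activation="softmax", name="policy")(flat)
# Head #2: a single estimate of the current player's chance of winning.
value = layers.Dense(1, activation="tanh", name="value")(flat)

model = Model(inputs=board, outputs=[policy, value])
model.compile(optimizer="adam",
              loss={"policy": "categorical_crossentropy", "value": "mse"})
```

A single training step on this model pushes gradients from both losses through the shared body, which is exactly the behavior the next paragraph unpacks.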
How can a net learn if it is receiving two sets of signals: one on how good its evaluations of the board are, and another on how good the specific moves it is selecting are? The answer is simple: remember that neural networks are fundamentally just mathematical functions with a bunch of parameters that determine the predictions that they make; we “teach” them by repeatedly showing them “correct answers” and having them update their parameters so the answers they produce more closely match these correct answers. So, when we use the two-headed neural net to make a prediction using Head #1, we simply update the parameters that led to making that prediction, namely the parameters in the “Body” and in “Head #1”. Similarly, when we make a prediction using Head #2, we update the parameters in the “Body” and in “Head #2”. This is how DeepMind trained its single, “two-headed” neural network that it used to guide MCTS during its search, just as AlphaGo did with two separate neural networks. This trick accounted for half of AlphaGo Zero’s increase in playing strength over AlphaGo. (This trick is known more technically as Multi-Task Learning with Hard Parameter Sharing; Sebastian Ruder has a great overview here.) The other half of the increase in playing strength simply came from bringing the neural network architecture up to date with the latest advances in the field: AlphaGo Zero used a more “cutting edge” neural network architecture than AlphaGo. Specifically, they used a “residual” neural network architecture instead of a purely “convolutional” architecture. Residual nets were pioneered by Microsoft Research in late 2015, right around the time work on the first version of AlphaGo would have wrapped up, so it is understandable that DeepMind did not use them in the original AlphaGo program. Interestingly, as the chart below shows, each of these two neural network-related tricks — switching from a convolutional to a residual architecture and using the “Two Headed Monster” architecture instead of separate neural networks — would on its own have produced about half of the increase in playing strength that was achieved when both were combined. These three tricks are what enabled AlphaGo Zero to achieve its incredible performance that blew away even AlphaGo: It is worth noting that AlphaGo Zero did not use any classical or even “cutting edge” reinforcement learning concepts — no Deep Q Learning, Asynchronous Actor-Critic Agents, or anything else we typically associate with reinforcement learning. It simply used simulations to generate training data for its neural nets to then learn from in a supervised fashion. Denny Britz sums this idea up well in this Tweet from just after the AlphaGo Zero paper was released: Here’s a “step-by-step” timeline of how AlphaGo Zero was trained: 1. Initialize the neural network. 2. Continuously play games of self-play, with each move chosen by MCTS lookahead guided by the current neural network. 3. As these self-play games are happening, sample 2,048 positions from the most recent 500,000 games, along with whether the game was won or lost. For each move, record both A) the results of the MCTS evaluations of those positions — how “good” the various moves in these positions were based on lookahead — and B) whether the current player won or lost the game. 4. Train the neural network, using both A) the move evaluations produced by the MCTS lookahead search and B) whether the current player won or lost. 5.
Finally, every 1,000 iterations of steps 3–4, evaluate the current neural network against the previous best version; if it wins at least 55% of the games, begin using it to generate self-play games instead of the prior version. Repeat steps 3–4 700,000 times, while the self-play games are continuously being played — after three days, you’ll have yourself an AlphaGo Zero! There are many implications of DeepMind’s incredible achievement for the future of AI research. Here are a couple of key ones: First, the fact that self-play data generated from simulations was “good enough” to be able to train the network suggests that simulated self-play data can train agents to surpass human performance in extremely complex tasks, even starting completely from scratch — data generated from human experts may not be needed. Second, the “Two Headed Monster” trick seems to significantly help agents learn to perform several related tasks in many domains, since it seems to prevent the agents from overfitting their behavior to any individual task. DeepMind seems to really like this trick, and has used it and more advanced versions of it to build agents that can learn multiple tasks in several different domains. Many projects in robotics, especially the burgeoning field of using simulations to teach robotic agents to use their limbs to accomplish tasks, are using these two tricks to great effect. Pieter Abbeel’s recent NIPS keynote highlights many impressive new results that use these tricks along with many bleeding edge reinforcement learning techniques. Indeed, locomotion seems like a perfect use case for the “Two Headed Monster” trick in particular: for example, robotic agents could be simultaneously trained to hit a baseball using a bat and to throw a punch to hit a moving target, since the two tasks require learning some common skills (e.g. balance, torso rotation). DeepMind’s AlphaGo Zero was one of the most intriguing advancements in AI and Deep Learning in 2017. I can’t wait to see what 2018 brings!
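To tie the timeline above together, here is a compact, purely illustrative sketch of the overall loop: generate self-play data with MCTS-guided play, train on it, and only promote a new network once it wins at least 55% of evaluation games. Every helper (`run_mcts`, `play_move`, `game_result`, `train_step`, `play_match`, `new_game`) is a hypothetical placeholder, not an interface from the paper.

```python
def training_loop(network, run_mcts, play_move, game_result, train_step, play_match,
                  new_game, iterations=1000, eval_every=100, gate=0.55):
    best, current, replay = network, network, []
    for it in range(1, iterations + 1):
        # Self-play: moves are chosen by MCTS guided by the current best network,
        # and the search's improved move probabilities become training targets.
        state, history = new_game(), []
        while game_result(state) is None:
            pi = run_mcts(state, best)
            history.append((state, pi))
            state = play_move(state, pi)
        z = game_result(state)
        replay.extend((s, pi, z) for s, pi in history)   # per-player sign handling omitted

        # Supervised step on the self-play data (policy targets + game outcomes).
        current = train_step(current, replay)

        # Gating: promote the trained network only if it wins >= 55% of eval games.
        if it % eval_every == 0:
            wins = sum(play_match(current, best) for _ in range(100))
            if wins / 100 >= gate:
                best = current
    return best
```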
Gabriel Aldamiz...
5.1K
11
https://hackernoon.com/how-we-grew-from-0-to-4-million-women-on-our-fashion-app-with-a-vertical-machine-learning-approach-f8b7fc0a89d7?source=tag_archive---------7----------------
How we grew from 0 to 4 million women on our fashion app, with a vertical machine learning approach
Three years ago we launched Chicisimo, our goal was to offer automated outfit advice. Today, with over 4 million women on the app, we want to share how our data and machine learning approach helped us grow. It’s been chaotic but it is now under control. If we wanted to build a human-level tool to offer automated outfit advice, we needed to understand people’s fashion taste. A friend can give us outfit advice because after seeing what we normally wear, she’s learnt our style. How could we build a system that learns fashion taste? We had previous experience with taste-based projects and a background in machine learning applied to music and other sectors. We saw how a collaborative filtering tool transformed the music industry from blindness to totally understanding people (check out the Audioscrobbler story). It also made life better for those who love music, and created several unicorns along the way. With this background, we built the following thesis: online fashion will be transformed by a tool that understands taste. Because if you understand taste, you can delight people with relevant content and a meaningful experience. We also thought that “outfits” were the asset that would allow taste to be understood, to learn what people wear or have in their closet, and what style each of us like. We decided we were going to build that tool to understand taste. We focused on developing the correct dataset, and built two assets: our mobile app and our data platform. From previous experience building mobile products, even in Symbian back then, we knew it was easy to bring people to an app but difficult to retain them. So we focused on small iterations to learn as fast as possible. We launched an extremely early alpha of Chicisimo with one key functionality. We launched under another name and in another country. You couldn’t even upload photos... but it allowed us to iterate with real data and get a lot of qualitative input. At some point, we launched the real Chicisimo, and removed this alpha from the App Store. We spent a long time trying to understand what our true levers of retention were, and what algorithms we needed in order to match content and people. Three things helped with retention: (a) identify retention levers using behavioral cohorts (we use Mixpanel for this). We run cohorts not only over the actions that people performed, but also over the value they received. This was hard to conceptualize for an app such as Chicisimo*. We thought in terms of what specific and measurable value people received, measured it, and run cohorts over those events, and then we were able to iterate over value received, not only over actions people performed. We also defined and removed anti-levers (all those noisy things that distract from the main value) and got all the relevant metrics for different time periods: first session, first day, first week, etc. These super specific metrics allowed us to iterate (*Nir Eyal’s book Hooked: How to Build Habit-Forming Products discusses a framework to create habits that helped us build our model); (b) re-think the onboarding process, once we knew the levers of retention. We define it as the process by which new signups find the value of the app as soon as possible, and before we lose them. We clearly articulated to ourselves what needed to happen (what and when). It went something like this: If people don’t do [action] during their first 7 minutes in their first session, they will not come back. So we need to change the experience to make that happen. 
We also run tons of user-tests with different types of people, and observed how they perceived (or mostly didn’t) the retention lever; (c) define how we learn. The data approach described above is key, but there is much more than data when building a product people love. In our case, first of all, we think that the what-to-wear problem is a very important one to solve, and we truly respect it. We obsess over understanding the problem, and over understanding how our solution is helping, or not. It’s our way of showing respect. This leads me to one of the most surprising aspects IMO of building a product: the fact that, regularly, we access new corpuses of knowledge that we did not have before, which help us improve the product significantly. When we’ve obtained these game-changing learnings, it’s always been by focusing on two aspects: how people relate to the problem, and how people relate to the product (the red arrows in the image below). There are a million subtleties that happen in these two relations, and we are building Chicisimo by trying to understand them. Now, we know that at any point there is something important that we don’t know and therefore the question always is: how can we learn... sooner? Talking with one of my colleagues, she once told me, “this is not about data, this is about people”. And the truth is, from day one we’ve learnt significantly by having conversations with women about how they relate with the problem, and with solutions. We use several mechanisms: having face to face conversations, reading the emails we get from women without predefined questions, or asking for feedback around specific topics (we now use Typeform and its a great tool for product insight). And then we talk among ourselves and try to articulate the learnings. We also seek external references: we talk with other product people, we play with inspiring apps, and we re-read articles that help us think. This process is what allows us to learn, and then build product and develop technology. At some point, we were lucky to get noticed by the App Store team, and we’ve been featured as App of the Day throughout the world (view Apple’s description of Chicisimo, here). On December 31st, Chicisimo was featured in a summary of apps the App Store team did, we are the pink “C.” in the left image below 😀. The app got viewed by 957,437 uniques thanks to this feature, for a total of 1.3M times. In our case, app features have a 0,5% conversion rate from impression to app install (normally: impression > product page view > install); ASO has a 3% conversion, and referrers 45%. The app aims at understanding taste so we can do a better job at suggesting outfit ideas. The simple act of delivering the right content at the right time can absolutely wow people, although it is an extremely difficult utility to build. Chicisimo content is 100% user-generated, and this poses some challenges: the system needs to classify different types of content automatically, build the right incentives, and understand how to match content and needs. We soon saw that there was a lot of data coming in. After thinking “hey, how cool we are, look at all this data we have”, we realized it was actually a nightmare because, being chaotic, the data wasn’t actionable. This wasn’t cool at all. But then we decided to start giving some structure to parts of the data, and we ended inventing what we called the Social Fashion Graph. 
The graph is a compact representation of how needs, outfits and people interrelate, a concept that helped us build the data platform. The data platform creates a high-quality dataset linked to a learning and training world, our app, which therefore improves with each new expression of taste. We thought of outfits as playlists: an outfit is a combination of items that makes sense to consume together. Using collaborative filtering, the relations captured here allow us to offer recommendations in different areas of the app. There was still a lot of noise in the data, and one of the hardest things was to understand how people were expressing the same fashion need in different ways, which made matching content and needs even more difficult. Lots of people might need ideas to go to school, and express that specific need in a hundred different ways. How do you capture this diversity, and how do you provide structure to it? We built a system to collect concepts (we call them needs) and captured equivalences among different ways to express the same need. We ended up building a list of the world’s what-to-wear needs, which we call our ontology. This really cleaned up the dataset and helped us understand what we had. This understanding led to better product decisions. We now understand that an outfit, a need or a person, can have a lot of understandable data attached, if you allow people to express freely (the app) while having the right system behind (the platform). Structuring data gave us control, while encouraging unstructured data gave us knowledge and flexibility. The end result is our current system. A system that learns the meaning of an outfit, how to respond to a need, or the taste of an individual. And I wouldn’t even dare saying that this is Day 1 for us. Screenshot of an internal tool. The amount of work we have in front of us is immense, but we feel things are now under control. One of the new areas we’ve been working on is adding a fourth element to the Social Fashion Graph: shoppable products. A system to match outfits to products automatically, and to help people decide what to buy next. This is pretty exciting. Back when we built recommender systems for music and other products, it was pretty easy (that’s what we think now, we obviously didn’t think that at the time:). First, it was easy to capture that you liked a given song. Then, it was easy to capture the sequence in which you and others would listen to that song, and therefore you could capture the correlations. With this data, you could do a lot. However, as we soon found out, fashion has its own challenges. There is not an easy way to match an outfit to a shoppable product (think about most garments in your wardrobe, most likely you won’t find a link to view/buy those garments online, something you can do for many other products you have at home). Another challenge: the industry is not capturing how people describe clothes or outfits, so there is a strong disconnect between many ecommerces and its shoppers (we think we’ve solved that problem. Also Similar.ai and Twiggle are working on it). Another challenge: style is complex to capture and classify by a machine. Now, deep learning brings a new tool to add to other mechanisms, and changes everything. Owning the correct data set allows us to focus on the specific narrow use cases related to outfit recommendations, and to focus on delivering value through the algorithms instead of spending time collecting and cleaning data. 
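To make the "outfits as playlists" idea concrete, here is a tiny, generic item-item collaborative-filtering sketch: outfits that tend to be saved by the same people are treated as similar, and a user is recommended outfits similar to the ones she already saved. This illustrates the general technique only; it is not Chicisimo's actual system, and the toy interaction matrix is made up.

```python
import numpy as np

# Rows = users, columns = outfits; 1 means the user saved or liked that outfit.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 1, 0],
], dtype=float)

# Item-item cosine similarity: outfits saved by the same people score as similar.
norms = np.linalg.norm(interactions, axis=0, keepdims=True)
norms[norms == 0] = 1.0
normalized = interactions / norms
similarity = normalized.T @ normalized        # shape: (n_outfits, n_outfits)

def recommend(user_idx, top_k=2):
    """Score unseen outfits by similarity to the outfits this user already saved."""
    seen = interactions[user_idx]
    scores = similarity @ seen
    scores[seen > 0] = -np.inf                # never re-recommend what she already has
    return np.argsort(scores)[::-1][:top_k]

print(recommend(0))                           # outfits most similar to user 0's saves
```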
👉 Now comes the fun and rewarding part, so please email us if you want to join the team and help build algorithms that have real impact on people — we are 100% remote, Slack based 👈. People’s very personal style can become as actionable as metadata and possibly as transparent as well (?), and I think we can see the path to get there. As we have a consumer product that people already love, we can ship early results of these algorithms partially hidden, and increase their presence as feedback improves results. There are more and more researchers working in these areas: you can read Tangseng’s paper on recommending outfits from a personal closet or his clothing parsing project, or how Edgar Simo-Serra defines similarity between images using user-provided metadata. Outfits are a key asset in the race to capture the $123 billion US apparel market. Data is also the reason many players are taking outfits to the forefront of technology: outfits are a daily habit, and have proven to be great assets to attract and retain shoppers, and capture their data. Many players are introducing a Shop the Look section with outfits from real people: Amazon, Zalando or Google are a few examples. Google recently introduced a new feature called Style Ideas showing how a “product can be worn in real life”. The same month, Amazon launched its Alexa-powered Echo Look to help you with your outfit, and Alibaba’s artificial intelligence personal stylist helped them achieve record sales during Singles Day. Some people think that fashion data is in the same place as music data was in 2003: ready to play a very relevant role. The good news is: the daily habit of deciding what to wear will not change. The need to buy new clothes won’t disappear, either. So, what do you think? Where will we be 10 years from now? Will taste data build unique online experiences? What role will outfits play? How will machine learning change fashion ecommerce? Will everything change, 10 years from now? We are a small team of eight, four on product and four engineers. We believe in focusing on our very specific problem; no one on earth can understand the problem better than us. We also believe in building the complete solution ourselves while doing as few things as possible. We work 100% remote and live in Slack + GitHub. You can learn more about our machine learning approach here. If you are a deep learning engineer or a product manager in the fashion space, and want to chat & temporarily access our Social Fashion Graph, please email us describing your work. You can also download our iOS and Android apps, or simply say hi: hi at chicisimo.com.
Sarthak Jain
3.9K
10
https://medium.com/nanonets/how-to-easily-detect-objects-with-deep-learning-on-raspberrypi-225f29635c74?source=tag_archive---------8----------------
How to easily Detect Objects with Deep Learning on Raspberry Pi
Disclaimer: I’m building nanonets.com to help build ML with less data and no hardware. The Raspberry Pi is a neat piece of hardware that has captured the hearts of a generation with ~15M devices sold, with hackers building even cooler projects on it. Given the popularity of Deep Learning and the Raspberry Pi Camera, we thought it would be nice if we could detect any object using Deep Learning on the Pi. Now you will be able to detect a photobomber in your selfie, someone entering Harambe’s cage, where someone kept the Sriracha, or an Amazon delivery guy entering your house. 20M years of evolution have made human vision fairly sophisticated. The human brain has 30% of its neurons working on processing vision (as compared with 8 percent for touch and just 3 percent for hearing). Humans have two major advantages when compared with machines. One is stereoscopic vision; the second is an almost infinite supply of training data (an infant of 5 years has had approximately 2.7B images sampled at 30fps). To mimic human-level performance, scientists broke down the visual perception task into four different categories. Object detection has been good enough for a variety of applications (even though image segmentation is a much more precise result, it suffers from the complexity of creating training data. It typically takes a human annotator 12x more time to segment an image than draw bounding boxes; this is more anecdotal and lacks a source). Also, after detecting objects, it is separately possible to segment the object from the bounding box. Object detection is of significant practical importance and has been used across a variety of industries. Some of the examples are mentioned below: Object detection can be used to answer a variety of questions. These are the broad categories: There are a variety of models/architectures that are used for object detection, each with trade-offs between speed, size, and accuracy. We picked one of the most popular ones: YOLO (You Only Look Once), and have shown how it works below in under 20 lines of code (if you ignore the comments). Note: This is pseudocode, not intended to be a working example. It has a black box which is the CNN part of it, which is fairly standard and shown in the image below. You can read the full paper here: https://pjreddie.com/media/files/papers/yolo_1.pdf For this task, you probably need a few hundred images per object. Try to capture data as close to the data you’re going to finally make predictions on. Draw bounding boxes on the images. You can use a tool like labelImg. You will typically need a few people who will be working on annotating your images. This is a fairly intensive and time-consuming task. You can read more about this at medium.com/nanonets/nanonets-how-to-use-deep-learning-when-you-have-limited-data-f68c0b512cab. You need a pretrained model so you can reduce the amount of data required to train. Without it, you might need a few 100k images to train the model. You can find a bunch of pretrained models here. The process of training a model is unnecessarily difficult, so to simplify it we created a Docker image that makes it easy to train. To start training the model you can run: The Docker image has a run.sh script that can be called with the following parameters. You can find more details at: To train a model you need to select the right hyperparameters. Finding the right parameters: the art of “Deep Learning” involves a little bit of trial and error to figure out which are the best parameters to get the highest accuracy for your model.
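As an aside on the annotation step mentioned above: labelImg saves its bounding boxes as Pascal VOC-style XML by default, and a few lines of Python are enough to read them back. This is a generic helper sketch, not part of the NanoNets tooling; the file path and label in the usage comment are made up.

```python
import xml.etree.ElementTree as ET

def load_voc_annotation(xml_path):
    """Parse a labelImg (Pascal VOC) XML file into (label, (xmin, ymin, xmax, ymax)) tuples."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        label = obj.find("name").text
        bb = obj.find("bndbox")
        box = tuple(int(float(bb.find(k).text)) for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((label, box))
    return boxes

# e.g. load_voc_annotation("annotations/image_001.xml")
# -> [("sriracha", (34, 120, 210, 380)), ...]
```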
There is some level of black magic associated with finding these parameters, along with a little bit of theory. This is a great resource for finding the right parameters. Quantize the model (make it smaller to fit on a small device like the Raspberry Pi or a mobile phone). Small devices like mobile phones and the Raspberry Pi have very little memory and computation power. Training neural networks is done by applying many tiny nudges to the weights, and these small increments typically need floating point precision to work (though there are research efforts to use quantized representations here too). Taking a pre-trained model and running inference is very different. One of the magical qualities of Deep Neural Networks is that they tend to cope very well with high levels of noise in their inputs. Why quantize? Neural network models can take up a lot of space on disk, with the original AlexNet being over 200 MB in float format for example. Almost all of that size is taken up with the weights for the neural connections, since there are often many millions of these in a single model. The nodes and weights of a neural network are originally stored as 32-bit floating point numbers. The simplest motivation for quantization is to shrink file sizes by storing the min and max for each layer, and then compressing each float value to an eight-bit integer. The size of the files is reduced by 75%. Code for quantization: You need the Raspberry Pi camera live and working. Then capture a new image. For instructions on how to install it, check out this link. Download the model: once you’re done training the model, you can download it onto your Pi. To export the model run: Then download the model onto the Raspberry Pi. Install TensorFlow on the Raspberry Pi. Depending on your device you might need to change the installation a little. Run the model to predict on the new image. The Raspberry Pi has constraints on both memory and compute (a version of TensorFlow compatible with the Raspberry Pi GPU is still not available). Therefore, it is important to benchmark how much time each of the models takes to make a prediction on a new image. We have removed the need to annotate images: we have expert annotators who will annotate your images for you. We automatically train the best model for you; to achieve this, we run a battery of models with different parameters and select the best one for your data. NanoNets is entirely in the cloud and runs without using any of your hardware, which makes it much easier to use. Since devices like the Raspberry Pi and mobile phones were not built to run complex, compute-heavy tasks, you can outsource the workload to our cloud, which does all of the compute for you. Get your free API key from http://app.nanonets.com/user/api_key. Collect the images of the object you want to detect. You can annotate them either using our web UI (https://app.nanonets.com/ObjectAnnotation/?appId=YOUR_MODEL_ID) or use an open-source tool like labelImg. Once you have the dataset ready in folders, images (image files) and annotations (annotations for the image files), start uploading the dataset. Once the images have been uploaded, begin training the model. The model takes ~2 hours to train. You will get an email once the model is trained. In the meanwhile you can check the state of the model. Once the model is trained, you can make predictions using it.
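Here is a minimal NumPy sketch of the min/max quantization idea described above: each layer's float32 weights are mapped to 8-bit integers using the layer's min and max, cutting storage by roughly 75%. It illustrates the concept only and is not TensorFlow's actual quantization tooling.

```python
import numpy as np

def quantize_layer(weights):
    """Map float32 weights to uint8 using the layer's min and max."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0        # avoid a zero scale for constant layers
    q = np.round((weights - w_min) / scale).astype(np.uint8)
    return q, w_min, scale

def dequantize_layer(q, w_min, scale):
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale + w_min

weights = np.random.randn(1024, 1024).astype(np.float32)
q, w_min, scale = quantize_layer(weights)
print(weights.nbytes, "->", q.nbytes, "bytes")    # 4x smaller, i.e. a ~75% reduction
print("max error:", np.abs(dequantize_layer(q, w_min, scale) - weights).max())
```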
Emil Wallner
9.1K
25
https://medium.freecodecamp.org/how-you-can-train-an-ai-to-convert-your-design-mockups-into-html-and-css-cc7afd82fed4?source=tag_archive---------9----------------
How you can train an AI to convert your design mockups into HTML and CSS
Within three years, deep learning will change front-end development. It will increase prototyping speed and lower the barrier for building software. The field took off last year when Tony Beltramelli introduced the pix2code paper and Airbnb launched sketch2code. Currently, the largest barrier to automating front-end development is computing power. However, we can use current deep learning algorithms, along with synthesized training data, to start exploring artificial front-end automation right now. In this post, we’ll teach a neural network how to code a basic a HTML and CSS website based on a picture of a design mockup. Here’s a quick overview of the process: We’ll build the neural network in three iterations. First, we’ll make a bare minimum version to get a hang of the moving parts. The second version, HTML, will focus on automating all the steps and explaining the neural network layers. In the final version, Bootstrap, we’ll create a model that can generalize and explore the LSTM layer. All the code is prepared on GitHub and FloydHub in Jupyter notebooks. All the FloydHub notebooks are inside the floydhub directory and the local equivalents are under local. The models are based on Beltramelli‘s pix2code paper and Jason Brownlee’s image caption tutorials. The code is written in Python and Keras, a framework on top of TensorFlow. If you’re new to deep learning, I’d recommend getting a feel for Python, backpropagation, and convolutional neural networks. My three earlier posts on FloydHub’s blog will get you started: Let’s recap our goal. We want to build a neural network that will generate HTML/CSS markup that corresponds to a screenshot. When you train the neural network, you give it several screenshots with matching HTML. It learns by predicting all the matching HTML markup tags one by one. When it predicts the next markup tag, it receives the screenshot as well as all the correct markup tags until that point. Here is a simple training data example in a Google Sheet. Creating a model that predicts word by word is the most common approach today. There are other approaches, but that’s the method we’ll use throughout this tutorial. Notice that for each prediction it gets the same screenshot. So if it has to predict 20 words, it will get the same design mockup twenty times. For now, don’t worry about how the neural network works. Focus on grasping the input and output of the neural network. Let’s focus on the previous markup. Say we train the network to predict the sentence “I can code.” When it receives “I,” then it predicts “can.” Next time it will receive “I can” and predict “code.” It receives all the previous words and only has to predict the next word. The neural network creates features from the data. The network builds features to link the input data with the output data. It has to create representations to understand what is in each screenshot, the HTML syntax, that it has predicted. This builds the knowledge to predict the next tag. When you want to use the trained model for real-world usage, it’s similar to when you train the model. The text is generated one by one with the same screenshot each time. Instead of feeding it with the correct HTML tags, it receives the markup it has generated so far. Then, it predicts the next markup tag. The prediction is initiated with a “start tag” and stops when it predicts an “end tag” or reaches a max limit. Here’s another example in a Google Sheet. Let’s build a “hello world” version. 
We’ll feed a neural network a screenshot with a website displaying “Hello World!” and teach it to generate the markup. First, the neural network maps the design mockup into a list of pixel values. From 0–255 in three channels — red, blue, and green. To represent the markup in a way that the neural network understands, I use one hot encoding. Thus, the sentence “I can code” could be mapped like the below. In the above graphic, we include the start and end tag. These tags are cues for when the network starts its predictions and when to stop. For the input data, we will use sentences, starting with the first word and then adding each word one by one. The output data is always one word. Sentences follow the same logic as words. They also need the same input length. Instead of being capped by the vocabulary, they are bound by maximum sentence length. If it’s shorter than the maximum length, you fill it up with empty words, a word with just zeros. As you see, words are printed from right to left. This forces each word to change position for each training round. This allows the model to learn the sequence instead of memorizing the position of each word. In the below graphic there are four predictions. Each row is one prediction. To the left are the images represented in their three color channels: red, green and blue and the previous words. Outside of the brackets are the predictions one by one, ending with a red square to mark the end. In the hello world version, we use three tokens: start, <HTML><center><H1>Hello World!</H1></center></HTML> and end. A token can be anything. It can be a character, word, or sentence. Character versions require a smaller vocabulary but constrain the neural network. Word level tokens tend to perform best. Here we make the prediction: FloydHub is a training platform for deep learning. I came across them when I first started learning deep learning and I’ve used them since for training and managing my deep learning experiments. You can install it and run your first model within 10 minutes. It’s hands down the best option to run models on cloud GPUs. If you are new to FloydHub, do their 2-min installation or my 5-minute walkthrough. All the notebooks are prepared inside the FloydHub directory. The local equivalents are under local. Once it’s running, you can find the first notebook here: floydhub/Helloworld/helloworld.ipynb . If you want more detailed instructions and an explanation for the flags, check my earlier post. In this version, we’ll automate many of the steps from the Hello World model. This section will focus on creating a scalable implementation and the moving pieces in the neural network. This version will not be able to predict HTML from random websites, but it’s still a great setup to explore the dynamics of the problem. If we expand the components of the previous graphic it looks like this. There are two major sections. First, the encoder. This is where we create image features and previous markup features. Features are the building blocks that the network creates to connect the design mockups with the markup. At the end of the encoder, we glue the image features to each word in the previous markup. The decoder then takes the combined design and markup feature and creates a next tag feature. This feature is run through a fully connected neural network to predict the next tag. Since we need to insert one screenshot for each word, this becomes a bottleneck when training the network (example). 
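As a tiny illustration of the sequence preparation described above (start/end tokens, building the input word by word, left-padding, and one-hot targets), here is a plain NumPy sketch using the three-token hello-world vocabulary; the padding token and ids are illustrative choices, not the exact ones from the project.

```python
import numpy as np

vocab = ["<pad>", "start", "<HTML><center><H1>Hello World!</H1></center></HTML>", "end"]
token_to_id = {t: i for i, t in enumerate(vocab)}
max_len = 3

sequence = ["start", "<HTML><center><H1>Hello World!</H1></center></HTML>", "end"]

X, y = [], []
for i in range(1, len(sequence)):
    in_tokens = [token_to_id[t] for t in sequence[:i]]
    in_tokens = [0] * (max_len - len(in_tokens)) + in_tokens   # pad on the left ("right to left")
    target = np.zeros(len(vocab))
    target[token_to_id[sequence[i]]] = 1.0                     # one-hot encoding of the next token
    X.append(in_tokens)
    y.append(target)

print(np.array(X))   # [[0 0 1], [0 1 2]]: "start", then "start + markup"
print(np.array(y))   # one-hot rows for the markup token and then "end"
```

The same screenshot would be paired with every row of X, since the image is repeated for each prediction.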
Instead of using the images, we extract the information we need to generate the markup. The information is encoded into image features. This is done by using an already pre-trained convolutional neural network (CNN). The model is pre-trained on Imagenet. We extract the features from the layer before the final classification. We end up with 1536 eight by eight pixel images known as features. Although they are hard to understand for us, a neural network can extract the objects and position of the elements from these features. In the hello world version, we used a one-hot encoding to represent the markup. In this version, we’ll use a word embedding for the input and keep the one-hot encoding for the output. The way we structure each sentence stays the same, but how we map each token is changed. One-hot encoding treats each word as an isolated unit. Instead, we convert each word in the input data to lists of digits. These represent the relationship between the markup tags. The dimension of this word embedding is eight but often varies between 50–500 depending on the size of the vocabulary. The eight digits for each word are weights similar to a vanilla neural network. They are tuned to map how the words relate to each other (Mikolov et al., 2013). This is how we start developing markup features. Features are what the neural network develops to link the input data with the output data. For now, don’t worry about what they are, we’ll dig deeper into this in the next section. We’ll take the word embeddings and run them through an LSTM and return a sequence of markup features. These are run through a Time distributed dense layer — think of it as a dense layer with multiple inputs and outputs. In parallel, the image features are first flattened. Regardless of how the digits were structured, they are transformed into one large list of numbers. Then we apply a dense layer on this layer to form a high-level feature. These image features are then concatenated to the markup features. This can be hard to wrap your mind around — so let’s break it down. Here we run the word embeddings through the LSTM layer. In this graphic, all the sentences are padded to reach the maximum size of three tokens. To mix signals and find higher-level patterns, we apply a TimeDistributed dense layer to the markup features. TimeDistributed dense is the same as a dense layer, but with multiple inputs and outputs. In parallel, we prepare the images. We take all the mini image features and transform them into one long list. The information is not changed, just reorganized. Again, to mix signals and extract higher level notions, we apply a dense layer. Since we are only dealing with one input value, we can use a normal dense layer. To connect the image features to the markup features, we copy the image features. In this case, we have three markup features. Thus, we end up with an equal amount of image features and markup features. All the sentences are padded to create three markup features. Since we have prepared the image features, we can now add one image feature for each markup feature. After sticking one image feature to each markup feature, we end up with three image-markup features. This is the input we feed into the decoder. Here we use the combined image-markup features to predict the next tag. In the below example, we use three image-markup feature pairs and output one next tag feature. Note that the LSTM layer has the sequence set to false. 
Instead of returning the length of the input sequence, it only predicts one feature. In our case, it’s a feature for the next tag. It contains the information for the final prediction. The dense layer works like a traditional feedforward neural network. It connects the 512 digits in the next tag feature with the 4 final predictions. Say we have 4 words in our vocabulary: start, hello, world, and end. The vocabulary prediction could be [0.1, 0.1, 0.1, 0.7]. The softmax activation in the dense layer distributes a probability from 0–1, with the sum of all predictions equal to 1. In this case, it predicts that the 4th word is the next tag. Then you translate the one-hot encoding [0, 0, 0, 1] into the mapped value, say “end”. If you can’t see anything when you click these links, you can right click and click on “View Page Source.” Here is the original website for reference. In our final version, we’ll use a dataset of generated bootstrap websites from the pix2code paper. By using Twitter’s bootstrap, we can combine HTML and CSS and decrease the size of the vocabulary. We’ll enable it to generate the markup for a screenshot it has not seen before. We’ll also dig into how it builds knowledge about the screenshot and markup. Instead of training it on the bootstrap markup, we’ll use 17 simplified tokens that we then translate into HTML and CSS. The dataset includes 1500 test screenshots and 250 validation images. For each screenshot there are on average 65 tokens, resulting in 96925 training examples. By tweaking the model in the pix2code paper, the model can predict the web components with 97% accuracy (BLEU 4-ngram greedy search, more on this later). Extracting features from pre-trained models works well in image captioning models. But after a few experiments, I realized that pix2code’s end-to-end approach works better for this problem. The pre-trained models have not been trained on web data and are customized for classification. In this model, we replace the pre-trained image features with a light convolutional neural network. Instead of using max-pooling to increase information density, we increase the strides. This maintains the position and the color of the front-end elements. There are two core models that enable this: convolutional neural networks (CNN) and recurrent neural networks (RNN). The most common recurrent neural network is long-short term memory (LSTM), so that’s what I’ll refer to. There are plenty of great CNN tutorials, and I covered them in my previous article. Here, I’ll focus on the LSTMs. One of the harder things to grasp about LSTMs is timesteps. A vanilla neural network can be thought of as two timesteps. If you give it “Hello,” it predicts “World.” But it would struggle to predict more timesteps. In the below example, the input has four timesteps, one for each word. LSTMs are made for input with timesteps. It’s a neural network customized for information in order. If you unroll our model it looks like this. For each downward step, you keep the same weights. You apply one set of weights to the previous output and another set to the new input. The weighted input and output are concatenated and added together with an activation. This is the output for that timestep. Since we reuse the weights, they draw information from several inputs and build knowledge of the sequence. Here is a simplified version of the process for each timestep in an LSTM. To get a feel for this logic, I’d recommend building an RNN from scratch with Andrew Trask’s brilliant tutorial. 
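Pulling the encoder and decoder descriptions above together, here is a condensed Keras sketch of the wiring: flattened image features densified and repeated, markup tokens embedded and run through an LSTM plus a TimeDistributed dense layer, the two concatenated, and a final LSTM (with return_sequences set to False) feeding a softmax over the vocabulary. Unit counts and shapes are illustrative rather than the exact ones used in the project.

```python
from tensorflow.keras.layers import (Input, Dense, Flatten, Embedding, LSTM,
                                     TimeDistributed, RepeatVector, concatenate)
from tensorflow.keras.models import Model

max_len, vocab_size = 48, 18

# Image branch: flatten the pre-extracted CNN features, then a dense layer.
image_features = Input(shape=(8, 8, 1536))
img = Dense(128, activation="relu")(Flatten()(image_features))
img = RepeatVector(max_len)(img)                 # one copy of the image feature per token

# Markup branch: embedding -> LSTM -> TimeDistributed dense.
markup_in = Input(shape=(max_len,))
mark = Embedding(vocab_size, 8)(markup_in)       # 8-dimensional word embedding
mark = LSTM(128, return_sequences=True)(mark)
mark = TimeDistributed(Dense(128))(mark)

# Glue one image feature to each markup feature.
encoded = concatenate([img, mark])

# Decoder: one combined feature for the next tag, then a softmax over the vocabulary.
decoded = LSTM(256, return_sequences=False)(encoded)
next_tag = Dense(vocab_size, activation="softmax")(decoded)

model = Model(inputs=[image_features, markup_in], outputs=next_tag)
model.compile(loss="categorical_crossentropy", optimizer="rmsprop")
```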
The number of units in each LSTM layer determines its ability to memorize. This also corresponds to the size of each output feature. Again, a feature is a long list of numbers used to transfer information between layers. Each unit in the LSTM layer learns to keep track of different aspects of the syntax. Below is a visualization of a unit that keeps track of the information in the row div. This is the simplified markup we are using to train the bootstrap model. Each LSTM unit maintains a cell state. Think of the cell state as the memory. The weights and activations are used to modify the state in different ways. This enables the LSTM layers to fine-tune which information to keep and discard for each input. In addition to passing through an output feature for each input, it also forwards the cell states, one value for each unit in the LSTM. To get a feel for how the components within the LSTM interact, I recommend Colah’s tutorial, Jayasiri’s Numpy implementation, and Karpathy’s lecture and write-up. It’s tricky to find a fair way to measure the accuracy. Say you compare word by word. If your prediction is one word out of sync, you might have 0% accuracy. If you remove one word which syncs the prediction, you might end up with 99/100. I used the BLEU score, a best practice in machine translation and image captioning models. It breaks the sentence into four n-grams, from 1–4 word sequences. In the below prediction “cat” is supposed to be “code.” To get the final score, you multiply each score by 25%: (4/5) * 0.25 + (2/4) * 0.25 + (1/3) * 0.25 + (0/2) * 0.25 = 0.2 + 0.125 + 0.083 + 0 = 0.408. The sum is then multiplied by a sentence-length penalty. Since the length is correct in our example, it becomes our final score. You could increase the number of n-grams to make it harder. A four n-gram model is the model that best corresponds to human translations. I’d recommend running a few examples with the below code and reading the wiki page. Links to sample output Front-end development is an ideal space to apply deep learning. It’s easy to generate data, and the current deep learning algorithms can map most of the logic. One of the most exciting areas is applying attention to LSTMs. This will not just improve the accuracy, but enable us to visualize where the CNN puts its focus as it generates the markup. Attention is also key for communicating between markup, stylesheets, scripts and eventually the backend. Attention layers can keep track of variables, enabling the network to communicate between programming languages. But in the near future, the biggest impact will come from building a scalable way to synthesize data. Then you can add fonts, colors, words, and animations step-by-step. So far, most progress is happening in taking sketches and turning them into template apps. In less than two years, we’ll be able to draw an app on paper and have the corresponding front-end in less than a second. There are already two working prototypes built by Airbnb’s design team and Uizard. Here are some experiments to get started. Getting started Further experiments Huge thanks to Tony Beltramelli and Jon Gold for their research and ideas, and for answering questions. Thanks to Jason Brownlee for his stellar Keras tutorials (I included a few snippets from his tutorial in the core Keras implementation), and Beltramelli for providing the data. Also thanks to Qingping Hou, Charlie Harrington, Sai Soundararaj, Jannes Klaas, Claudio Cabral, Alain Demenet and Dylan Djian for reading drafts of this.
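Returning to the BLEU walkthrough above, here is a small stand-in in plain Python for the snippet referenced there. The example sentences are made up so that the n-gram counts match the walkthrough, and the final combination mirrors the simplified 25%-weighted sum from the text; note that standard BLEU actually combines the four precisions with a geometric mean and a brevity penalty.

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

reference = "i code stuff every day".split()   # made-up sentences chosen so the
candidate = "i cat stuff every day".split()    # counts match the walkthrough above

score = 0.0
for n in range(1, 5):
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    matches = sum(min(count, ref[gram]) for gram, count in cand.items())   # clipped matches
    total = sum(cand.values())
    print(f"{n}-gram precision: {matches}/{total}")
    score += 0.25 * (matches / total)          # simplified weighted sum, as in the text

print(round(score, 3))                         # 0.408; the length penalty is 1 here
```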
This is the fourth part of a multi-part blog series from Emil as he learns deep learning. Emil has spent a decade exploring human learning. He’s worked for Oxford’s business school, invested in education startups, and built an education technology business. Last year, he enrolled at Ecole 42 to apply his knowledge of human learning to machine learning. If you build something or get stuck, ping me below or on Twitter: emilwallner. I’d love to see what you are building. This was first published as a community post on Floydhub’s blog.
Gant Laborde
1.3K
7
https://medium.freecodecamp.org/machine-learning-how-to-go-from-zero-to-hero-40e26f8aa6da?source=---------2----------------
Machine Learning: how to go from Zero to Hero – freeCodeCamp
If your understanding of A.I. and Machine Learning is a big question mark, then this is the blog post for you. Here, I gradually increase your AwesomenessicityTM by gluing inspirational videos together with friendly text. Sit down and relax. These videos take time, and if they don’t inspire you to continue to the next section, fair enough. However, if you find yourself at the bottom of this article, you’ve earned your well-rounded knowledge and passion for this new world. Where you go from there is up to you. A.I. was always cool, from moving a paddle in Pong to lighting you up with combos in Street Fighter. A.I. has always revolved around a programmer’s functional guess at how something should behave. Fun, but programmers aren’t always gifted in programming A.I. as we often see. Just Google “epic game fails” to see glitches in A.I., physics, and sometimes even experienced human players. Regardless, A.I. has a new talent. You can teach a computer to play video games, understand language, and even how to identify people or things. This tip-of-the-iceberg new skill comes from an old concept that only recently got the processing power to exist outside of theory. I’m talking about Machine Learning. You don’t need to come up with advanced algorithms anymore. You just have to teach a computer to come up with its own advanced algorithm. So how does something like that even work? An algorithm isn’t really written as much as it is sort of... bred. I’m not using breeding as an analogy. Watch this short video, which gives excellent commentary and animations to the high-level concept of creating the A.I. Wow! Right? That’s a crazy process! Now how is it that we can’t even understand the algorithm when it’s done? One great visual was when the A.I. was written to beat Mario games. As a human, we all understand how to play a side-scroller, but identifying the predictive strategy of the resulting A.I. is insane. Impressed? There’s something amazing about this idea, right? The only problem is we don’t know Machine Learning, and we don’t know how to hook it up to video games. Fortunately for you, Elon Musk already provided a non-profit company to do the latter. Yes, in a dozen lines of code you can hook up any A.I. you want to countless games/tasks! I have two good answers on why you should care. Firstly, Machine Learning (ML) is making computers do things that we’ve never made computers do before. If you want to do something new, not just new to you, but to the world, you can do it with ML. Secondly, if you don’t influence the world, the world will influence you. Right now significant companies are investing in ML, and we’re already seeing it change the world. Thought-leaders are warning that we can’t let this new age of algorithms exist outside of the public eye. Imagine if a few corporate monoliths controlled the Internet. If we don’t take up arms, the science won’t be ours. I think Christian Heilmann said it best in his talk on ML. The concept is useful and cool. We understand it at a high level, but what the heck is actually happening? How does this work? If you want to jump straight in, I suggest you skip this section and move on to the next “How Do I Get Started” section. If you’re motivated to be a DOer in ML, you won’t need these videos. If you’re still trying to grasp how this could even be a thing, the following video is perfect for walking you through the logic, using the classic ML problem of handwriting. Pretty cool huh? 
That video shows that each layer gets simpler rather than more complicated. Like the function is chewing data into smaller pieces that end in an abstract concept. You can get your hands dirty in interacting with this process on this site (by Adam Harley). It’s cool watching data go through a trained model, but you can even watch your neural network get trained. One of the classic real-world examples of Machine Learning in action is the iris data set from 1936. In a presentation I attended by JavaFXpert’s overview on Machine Learning, I learned how you can use his tool to visualize the adjustment and back propagation of weights to neurons on a neural network. You get to watch it train the neural model! Even if you’re not a Java buff, the presentation Jim gives on all things Machine Learning is a pretty cool 1.5+ hour introduction into ML concepts, which includes more info on many of the examples above. These concepts are exciting! Are you ready to be the Einstein of this new era? Breakthroughs are happening every day, so get started now. There are tons of resources available. I’ll be recommending two approaches. In this approach, you’ll understand Machine Learning down to the algorithms and the math. I know this way sounds tough, but how cool would it be to really get into the details and code this stuff from scratch! If you want to be a force in ML, and hold your own in deep conversations, then this is the route for you. I recommend that you try out Brilliant.org’s app (always great for any science lover) and take the Artificial Neural Network course. This course has no time limits and helps you learn ML while killing time in line on your phone. This one costs money after Level 1. Combine the above with simultaneous enrollment in Andrew Ng’s Stanford course on “Machine Learning in 11 weeks”. This is the course that Jim Weaver recommended in his video above. I’ve also had this course independently suggested to me by Jen Looper. Everyone provides a caveat that this course is tough. For some of you that’s a show stopper, but for others, that’s why you’re going to put yourself through it and collect a certificate saying you did. This course is 100% free. You only have to pay for a certificate if you want one. With those two courses, you’ll have a LOT of work to do. Everyone should be impressed if you make it through because that’s not simple. But more so, if you do make it through, you’ll have a deep understanding of the implementation of Machine Learning that will catapult you into successfully applying it in new and world-changing ways. If you’re not interested in writing the algorithms, but you want to use them to create the next breathtaking website/app, you should jump into TensorFlow and the crash course. TensorFlow is the de facto open-source software library for machine learning. It can be used in countless ways and even with JavaScript. Here’s a crash course. Plenty more information on available courses and rankings can be found here. If taking a course is not your style, you’re still in luck. You don’t have to learn the nitty-gritty of ML in order to use it today. You can efficiently utilize ML as a service in many ways with tech giants who have trained models ready. I would still caution you that there’s no guarantee that your data is safe or even yours, but the offerings of services for ML are quite attractive! Using an ML service might be the best solution for you if you’re excited and able to upload your data to Amazon/Microsoft/Google. 
I like to think of these services as a gateway drug to advanced ML. Either way, it’s good to get started now. I have to say thank you to all the aforementioned people and videos. They were my inspiration to get started, and though I’m still a newb in the ML world, I’m happy to light the path for others as we embrace this awe-inspiring age we find ourselves in. It’s imperative to reach out and connect with people if you take up learning this craft. Without friendly faces, answers, and sounding boards, anything can be hard. Just being able to ask and get a response is a game changer. Add me, and add the people mentioned above. Friendly people with friendly advice helps! See? I hope this article has inspired you and those around you to learn ML!
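As a concrete taste of the earlier point that "in a dozen lines of code you can hook up any A.I. you want to countless games/tasks," here is a minimal random-agent loop using the classic OpenAI Gym API (newer Gym/Gymnasium releases change the reset and step signatures slightly); a real agent would replace the random action choice.

```python
import gym

env = gym.make("CartPole-v1")        # swap in any registered environment
observation = env.reset()

for step in range(1000):
    action = env.action_space.sample()                 # a trained agent would choose here
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()

env.close()
```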
James JD Sutton
2.2K
9
https://medium.com/coinmonks/what-is-q-from-a-laymen-given-barney-style-6387b18267d2?source=---------3----------------
What is “Q” from a laymen... – Coinmonks – Medium
A bit long, but I think it might help people understand Qubic a bit. Two takeaways I took from reading Qubic: (Rev_02) Take Away One: 1. If you host a “Q-Node”, a node that supports the Q protocol (layer), you can earn rewards in these ways: offering PoW (mining rigs, a computer, or your coffee pot), PoS (the IOTAs that you hold), the bandwidth that you don’t use (probably something to do with LiFi in the future, so this could be your router and the lightbulbs in your house), and, simply, the previous history of running an honest node for the system. All of the above can be used to pass the “resource test phase”. All of those resources, PoW, PoS, Po(Bandwidth), and Po(Honesty), are measured and quantified. Your resources then essentially place you in an equivalent resource pool, i.e. in a pool with other people of similar resource power. You then earn IOTAs from people using the oracle system or smart contracts, or simply from those who want computational power (which is absolutely needed to serve the IoT industry, which is for sure the future). So what does that mean? Remember all of the old objections: IOTA won’t work because people won’t run nodes, because they don’t get incentives like traditional blockchains. Well, now they can!!! And not only that, “Q” takes every aspect of each crypto and combines it all in one... PoS, PoW, PoBandwidth, and PoHonesty. What’s more, if you have ASICs, you are in the ASIC pool; GPUs, you’re in the GPU pool; an old crappy computer, you’re in the old crappy computer pool; if you stake a lot of IOTA, you’re in the high-stake IOTA pool... etc. This is the process of “proving” your resources to the network. People will purchase “resources” using the Qubic protocol. If they want quality, fast, or extreme computational power, they have to pay. Remember, you the user set what you want to receive in IOTA for your resources (economic principles). If you spend $1200 a month on electricity and equipment, you will only charge more than $1200 a month for your resources; no one would charge less. So, in your pool, everyone will eventually come to a quorum charging a set amount, and thus the economy (the users) will pay for it. So, in essence, the better the pool, the more reward you get (based on economic principles in society, just like blockchain). I don’t fully understand the exact quantitative measure of what equates to the reward (such as with hash power in blockchain), though it seems that once you prove your resources, your machine performs the calculations that are being bought on the Qubic network. However, if your coffee pot has a Jinn chip, that is, ternary hardware with ternary programming (ABRA), then it can sell its resources when it’s not making coffee, i.e. proving its resources and then completing computations for buyers. This is just speculative, but the ABRA ternary language will be able to interface with binary and lower energy consumption by a significant amount. When combining ABRA with a ternary chip such as Jinn, the energy efficiency is even greater! One of the major bottlenecks or challenges that prevents advancement in technology is the amount of battery storage within machines. If we can’t redesign a battery to store more power, at least we can redesign the energy consumption within machine devices.
Also, your autonomous car can not only offer up its PoW, it can also stake the IOTAs it is not using in its wallet, offer its bandwidth when it isn’t working or driving, and contribute its experience/honesty factor (by proving its resources and then selling its computational power), as it “may” be able to be a node in itself. In addition, the left-over electricity it has from charging up through solar or wind power can be sold through the smart grid to neighbors or local businesses. Your car has “multiple” resources, and the Qubic network allows machines to offer “all” of their resources to their owner, not just one or two as with blockchain. Qubic revolutionizes machinery by allowing it, the machinery itself, to sell its resources. This is another building block to the ultimate vision of machines acting in a “machine economy”. Rather than us setting this up and the fees we want to charge, eventually we can create smart contracts with Qubic functions, which then allow machines to negotiate and earn “themselves”; the machines will sell and buy resources “THEMSELVES”, truly creating a machine economy, “AND” if you own the machine, you earn the rewards (i.e. income, passive income). Take Away Two: 2. From the above description, these are only a few of the use cases that I take away from reading about Qubic. The reality is that the community will probably be coming up with new use cases every day for the following year. Use cases that we can’t even imagine at the present time, but here is my second takeaway: the Qubic protocol, where all this is happening. Miners earning; people staking their IOTA and earning (i.e. “interest” or “passive income”) because they are HODLers (and, by proving their resources, selling their computational power); forex financial companies using Qubic for quorum “ORACLE” data; smart contracts being run on the protocol; scientists using computational power for medical research; VW, Fujitsu, and Bosch using computational power for their IoT devices; and so on and on. All those use cases, to power.... TO POWER, to run the network, all those functions will be conducted with zero-fee transactions that take place on the Tangle with real-time smart-contract micro-payments. The whole system runs on data transactions (zero-fee transactions) by sending metadata within the transactions sent on the Tangle. Metadata, essentially (I’m not a techie), is like the language that tells the Q-Nodes to wake up, to process data, pay, earn, and receive, and essentially run the whole Q network. So.... that is a SHITLOAD of transactions occurring!!!!!! At the present day, the amount of transactions occurring right now from Trinity, speculation, and trading is like a drop in the ocean compared to how many transactions the Qubic network will produce. It’s not hard to understand: the Qubic network will run millions if not billions of transactions per day over the Tangle, and remember, “each transaction confirms two transactions”. So.... what does that mean? More transactions mean a faster Tangle, a more secure Tangle, an infinitely scalable Tangle.... and most importantly.... WE CAN TAKE THE COO (Coordinator) OFFLINE!!! Note: there may be use cases for multiple COOs (coordinators) or private COOs, but that is a whole other arena, and I simply state this because I read someone writing such an example that went right over my head. The point is: Q is needed to remove the COO! So, to everyone who says, “Why don’t the devs focus on removing the COO?” (“wen remove COO”), you can see that THEY ARE working on it!
The Qubic network will support the network because it incentivizes people to host nodes and earn IOTA! Also, if no one uses the Qubic network then it doesn’t work, right?!? So making “Corporate Partners”, United Nations (NGO) affiliates, and partnerships with banks is all needed to support the Qubic Network. So here are the building blocks of the devs’ vision: - You need a Tangle (zero-fee transactions that can send metadata) - You need IOTA (a means of transferring metadata and a form of payment that can buy and sell resources, i.e. PoW, PoS, PoBandwidth, and PoHonesty) - You need the Qubic Network (creates Oracles and allows for the Quorum Based Computations that power Oracles) - You need Oracles (Oracles power smart contracts, which is the whole shebang! It will change society and change global finance) - You need the Qubic Network (connects users of the network with resource providers of the network, enables a machine economy, and provides computational power and the most advanced smart contracts to society) - Users of the network (we need a community (which the IOTA Foundation builds by hosting AMAs, taking the time to talk to the community on Discord, and providing transparency so we can all go along on their journey of completing their vision), we need global partners such as Bosch, VW, Fujitsu, etc., we need governments and societies such as Taiwan, Denmark, and maybe Sweden, and we need banking partners like DnB and electrical companies like Elaad. We need the global integration to actually “use” the Qubic network for it to work (demand drives economic principles, which ultimately will pay the Q-Node providers, which will drive transactions, thus scaling the network). - Lastly, you need to remove the COO and let the network grow organically. (This can only be done when the previous steps have been completed). Tangle -> IOTA -> Qubic Network -> Oracles -> Partners -> COO. So removing the COO is one of the last steps. After removing the COO, the network can just grow organically on its own without much support or help from the devs. They can then work on building applications that run on top of the Qubic network. This is a large, challenging undertaking that is being built step by step; each piece is part of a large puzzle that all comes together. And the Qubic vision, which is what was just released, is a really large damn piece of that puzzle!!! It just goes to show that all of this adds up to removing the COO. Everything the devs and the IF have been doing works towards exactly that! It’s all one big construct, not different pieces; everything ties together, and the Qubic network is a large friggin piece of it all. Their sole mission is to complete the puzzle, the vision, so the COO can be removed and the Tangle can literally change society through the machine economy. This is just my non-techie understanding at the moment. I have a lot more research and studying to do, but damn I love it! So glad to be allowed within this community and to enjoy the journey with the IOTA Foundation. Please clarify if I totally misunderstood anything; I am looking forward to hearing other people’s understanding. Lastly, after writing this I re-read the Qubic website. It is difficult to understand, but my rough understanding is that Q-Nodes and Qubics can lie dormant listening to the Tangle. Qubics are event driven, so when one Qubic initiates, another Qubic may need that quorum information to activate, and when the one Qubic gets the result it intended to compute, then that Qubic itself can activate.
So one Qubic can initiate another Qubic, and so on and so on, like neurons firing, lighting up a portion of the brain, which then fires more neurons. This is all done through secured data streams, the Tangle, the Q-Nodes, and the Qubic network. In a way, it’s a global living system with the data stream as its life-blood, the Tangle as its bone structure, and the Qubics and Q-Nodes as its neurons. For all we know, in the future this global mass network could power AI, or maybe it will grow to become one massive AI source that can help society in so many ways. As I stated, I’m a non-techie. I have probably put out a bit of misinformation, as I don’t fully understand it all. Really, I just hope to ignite curiosity, so people may be inspired to put a toe into the new world of the Machine Economy. Also, sometimes it is hard to see the big picture. The IOTA Foundation has been working on “A” vision, a machine-to-machine economy that will change society, with the Tangle as a standard protocol, the bone structure of it all. The fact is, each new development is another puzzle piece, or a foundation block, that stacks on top of the others. In the end we have the puzzle as a whole, or a great structure built upon a solid foundation. https://twitter.com/IotanSea https://qubic.iota.org https://www.iota.org/ https://www.facebook.com/groups/iotatangle/ The Crypto & Blockchain publication. Educate yourself about cryptocurrency, blockchain developments. Check tutorials on Solidity and smart contracts.
Justin Lee
511
10
https://medium.com/swlh/the-beginners-guide-to-conversational-commerce-96f9c7dbaefb?source=---------5----------------
The beginner’s guide to conversational commerce – The Startup – Medium
Your greengrocer does it. So does that guy selling sunglasses on the beach. It’s why the funny old French bakery around the corner’s been running for 15 years. Conversational marketing. A buzzword, a footnote, a revelation. Everyone’s talking about it, but what is it? At its simplest, it’s the act of talking — and more importantly, listening — to your customers: their problems, their stories, their successes. Forging a genuine connection and using that connection to inform your marketing decisions. At its most complex, conversational marketing has become synonymous with cutting-edge technologies for computer-based dialog processing. Brands have always known that one-to-one conversations are valuable; but up until very recently, it was impossible to personalize these conversations at scale, in real-time. No longer. Chatbots have become a mainstay of digital marketing, and every day their underlying AI becomes more sophisticated. Gartner predicts that by 2020, 30% of our interactions with technology will be through “conversations” with smart machines. In his 1999 Cluetrain Manifesto, David Weinberger reminded us that markets are conversations, and that’s a hundred times more true today. A successful conversational marketing strategy will pair the spark of authenticity from real conversation with the emerging technologies of the future. In a 2016 article, Chris Messina distills the concept: conversational commerce is the process of having a real-time, one-to-one conversation with a customer or lead. It’s a direct, personalized, dialog-driven approach to nurturing long-term relationships, collecting data and increasing sales. Unlike traditional digital marketing, it ‘pulls’ users in instead of ‘pushing’ content on them. It’s a discourse, not a lecture. Despite recently picking up speed, conversational marketing isn’t new. The concept made its first appearance in 2007 with Joseph Jaffe’s Join the Conversation. Jaffe wanted to teach marketers to re-engage their customers through community, partnership, and dialog: in the past, brands have been able to talk at their customers — through email, website interactions and social media — not with them. Brands have struggled to capture, keep and convert attention into sales, sign-ups, and long-term loyalty. Engagement was passive, and results were shallow. Customer service was relegated to a formulaic question-answer scenario that was unsatisfying for everyone involved. Take it from leading conversational marketing platform Drift’s stellar report: today, messaging apps have over 5 billion monthly active users, and for the first time, usage rates have surpassed social networks. Whether it’s chatting with friends on WhatsApp or exchanging ideas with coworkers on Slack, messaging has become an integral part of our lives. Despite extreme app saturation, the average person only uses five apps regularly and, you guessed it — messaging apps claim these spots, boasting 10x better open rates than the next leading digital channel. These messaging platforms have huge audiences: there are over 4 billion active monthly users on the top three messaging apps. Like the rise of the internet or the app economy of the past decade, conversational marketing is born from current desires: for real-time connection and genuine value. Conversational marketing is an umbrella term that encompasses every dialog-driven tactic, from opt-in email marketing to customer feedback. But the engine powering recent developments is Artificial Intelligence (AI).
Chatbots represent the new era in conversational marketing: scaleable, personalized, real-time and data-driven. Of course, these bots aren’t intended to replace human-to-human interactions; they’re there to support and enhance them: helping users have the right conversations with the right people at the right time. (For the meantime, anyway. According to Gartner research, chatbots will account for 85% of all customer service by 2020). Chatbots are a blank canvas, with the potential to be molded and infused with a persona that reflects a company’s values — like our very own GrowthBot(AKA a mini Dharmesh Shah). This technology is still in its infancy, so most bots follow a set of rules programmed by a human via a bot-building platform. The differentiator is that the chatbots carry out conversations with users using natural language. AI uses first-person data to learn more about each customer and deliver a hyper-personalized experience. Reps and bots can then join forces to manage these conversations at scale. Let’s imagine I’m going to a fancy party. Tonight. It’s last minute, and I’ve just received a message that it’s black tie; but I don’t have the right shoes. I need to quickly find a pair that is appropriate; my size; coherent with the rest of my outfit; a good price, etc. I would usually Google for a shop in my area, then go to browse on their website to find a pair I like. But other issues would soon crop up: do they have my size? Are the shoes smart enough? Are they in stock? I could fill out a query on the site’s contact form, or give them a ring, but will they answer? And if they don’t have the right pair, they are unlikely to suggest a range of alternatives. The whole process is time-consuming and inefficient. But suppose the brand I like has a strong conversational marketing culture. Instead of resorting to email, I would be able to conduct the conversation in seconds on my phone; instantly, I’m given the colours, sizes and styles in stock. I can pay for the right shoes with a tap of a button. Conversational marketing enables users to get the information they need instantly, without picking up the phone or engaging with a person. It’s not about laziness; it’s about ease. Chris Messina concludes, As Clara de Soto, cofounder of Reply.ai, told VentureBeat, If users are made to toggle between various apps and platforms to get the answer they need, the value of the bot is moot: it needs to be native to the place they spend most time, whether that’s Slack, Messenger or onsite chat. But it can be tricky for brands to consolidate all their conversations in one place. That’s why HubSpot created Conversations, a free, multi-channel tool that lets businesses have one-to-one conversations at scale. says Dharmesh Shah, co-founder and CTO of HubSpot. We have a much lower tolerance for mistakes with machines compared to humans: 73% of people say they won’t interact with a bot again after one negative experience. And if a bot seems to be able to converse in English, we tend to easily overestimate how capable it is. That’s why it’s crucial to manage your customers’ expectations appropriately. Bots are far from being autonomous, and people aren’t easily fooled; trying to present your bot as a human agent is likely to be self-defeating. Bots don’t understand context created by preceding text, and conversational nuances can easily affect their capacity to answer. 
Because bots live inside messaging apps, they have the potential to invade a highly personal space, making the stakes of getting it right much higher. According to research, people use messaging apps for customer assistance with one key goal: to get their problem solved, fast. Bots should serve one simple purpose well, without getting tangled up in the conversational complications that are better left to humans. The way brands and users interact is undergoing a monumental shift. Customers are smarter and better-informed than ever before. They expect personalization and transparency as prerequisites. They feel empowered by their options. It’s hard to fool them, and even harder to gain their loyalty. And most significantly, they want instantaneousness, 24/7, 365 days of the year: to be heard, to be helped, right now; not in half an hour, not tomorrow. That’s why conversational marketing represents a new cornerstone not only in marketing but also in customer service and experience, branding and sales. Building a bot for the sake of being on-trend is not enough; it needs to be part of a larger strategy where each conversation has a purpose. As a long-term strategy intended to facilitate lasting relationships, it needs to be spearheaded towards a long-term goal. Effective conversational marketing is an intersection of brand values, user engagement and valuable dialogue. It’s about building your audience first, selling last. Thanks for reading. Originally published at blog.growthbot.org. Head of Growth for GrowthBot, Messaging & Conversational Strategy @HubSpot Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi
Kai Stinchcombe
44K
11
https://medium.com/@kaistinchcombe/decentralized-and-trustless-crypto-paradise-is-actually-a-medieval-hellhole-c1ca122efdec?source=---------6----------------
Blockchain is not only crappy technology but a bad vision for the future
Blockchain is not only crappy technology but a bad vision for the future. Its failure to achieve adoption to date is because systems built on trust, norms, and institutions inherently function better than the type of no-need-for-trusted-parties systems blockchain envisions. That’s permanent: no matter how much blockchain improves it is still headed in the wrong direction. This December I wrote a widely-circulated article on the inapplicability of blockchain to any actual problem. People objected mostly not to the technology argument, but rather hoped that decentralization could produce integrity. Let’s start with this: Venmo is a free service to transfer dollars, and bitcoin transfers are not free. Yet after I wrote an article last December saying bitcoin had no use, someone responded that Venmo and Paypal are raking in consumers’ money and people should switch to bitcoin. What a surreal contrast between blockchain’s non-usefulness/non-adoption and the conviction of its believers! It’s so entirely evident that this person didn’t become a bitcoin enthusiast because they were looking for a convenient, free way to transfer money from one person to another and discovered bitcoin. In fact, I would assert that there is no single person in existence who had a problem they wanted to solve, discovered that an available blockchain solution was the best way to solve it, and therefore became a blockchain enthusiast. The number of retailers accepting cryptocurrency as a form of payment is declining, and its biggest corporate boosters like IBM, NASDAQ, Fidelity, Swift and Walmart have gone long on press but short on actual rollout. Even the most prominent blockchain company, Ripple, doesn’t use blockchain in its product. You read that right: the company Ripple decided the best way to move money across international borders was to not use Ripples. Why all the enthusiasm for something so useless in practice? People have made a number of implausible claims about the future of blockchain—like that you should use it for AI in place of the type of behavior-tracking that google and facebook do, for example. This is based on a misunderstanding of what a blockchain is. A blockchain isn’t an ethereal thing out there in the universe that you can “put” things into, it’s a specific data structure: a linear transaction log, typically replicated by computers whose owners (called miners) are rewarded for logging new transactions. There are two things that are cool about this particular data structure. One is that a change in any block invalidates every block after it, which means that you can’t tamper with historical transactions. The second is that you only get rewarded if you’re working on the same chain as everyone else, so each participant has an incentive to go with the consensus. The end result is a shared definitive historical record. And, what’s more, because consensus is formed by each person acting in their own interest, adding a false transaction or working from a different history just means you’re not getting paid and everyone else is. Following the rules is mathematically enforced—no government or police force need come in and tell you the transaction you’ve logged is false (or extort bribes or bully the participants). It’s a powerful idea. 
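To make the data-structure point concrete, here is a minimal Python sketch of a hash-chained transaction log (my own illustration, not something from the original article, with the mining and reward mechanics deliberately left out): every entry commits to the hash of the previous one, so tampering with any historical record is immediately detectable.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (its stored previous hash included) deterministically."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    """Append a new block whose 'prev' field commits to the last block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})
    return chain

def verify(chain):
    """Return True only if every block still matches the hash its successor stored."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
for tx in ["alice->bob:5", "bob->carol:2", "carol->alice:1"]:
    append_block(chain, tx)

print(verify(chain))                 # True
chain[0]["data"] = "alice->bob:500"  # tamper with history
print(verify(chain))                 # False: every later block now disagrees
```

The mining rewards and consensus rules described above are what turn this simple tamper-evidence trick into a shared ledger; the sketch only shows the data structure itself.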
So in summary, here’s what blockchain-the-technology is: “Let’s create a very long sequence of small files — each one containing a hash of the previous file, some new data, and the answer to a difficult math problem — and divide up some money every hour among anyone willing to certify and store those files for us on their computers.” Now, here’s what blockchain-the-metaphor is: “What if everyone keeps their records in a tamper-proof repository not owned by anyone?” An illustration of the difference: In 2006, Walmart launched a system to track its bananas and mangoes from field to store. In 2009 they abandoned it because of logistical problems getting everyone to enter the data, and in 2017 they re-launched it (to much fanfare) on blockchain. If someone comes to you with “the mango-pickers don’t like doing data entry,” “I know: let’s create a very long sequence of small files, each one containing a hash of the previous file” is a nonsense answer, but “What if everyone keeps their records in a tamper-proof repository not owned by anyone?” at least addresses the right question! People treat blockchain as a “futuristic integrity wand”—wave a blockchain at the problem, and suddenly your data will be valid. For almost anything people want to be valid, blockchain has been proposed as a solution. It’s true that tampering with data stored on a blockchain is hard, but it’s false that blockchain is a good way to create data that has integrity. To understand why this is the case, let’s work from the practical to the theoretical. For example, let’s consider a widely-proposed use case for blockchain: buying an e-book with a “smart” contract. The goal of the blockchain is, you don’t trust an e-book vendor and they don’t trust you (because you’re just two individuals on the internet), but, because it’s on blockchain, you’ll be able to trust the transaction. In the traditional system, once you pay you’re hoping you’ll receive the book, but once the vendor has your money they don’t have any incentive to deliver. You’re relying on Visa or Amazon or the government to make things fair—what a recipe for being a chump! In contrast, on a blockchain system, by executing the transaction as a record in a tamper-proof repository not owned by anyone, the transfer of money and digital product is automatic, atomic, and direct, with no middleman needed to arbitrate the transaction, dictate terms, and take a fat cut on the way. Isn’t that better for everybody? Hm. Perhaps you are very skilled at writing software. When the novelist proposes the smart contract, you take an hour or two to make sure that the contract will withdraw only an amount of money equal to the agreed-upon price, and that the book — rather than some other file, or nothing at all — will actually arrive. Auditing software is hard! The most-heavily scrutinized smart contract in history had a small bug that nobody noticed — that is, until someone did notice it, and used it to steal fifty million dollars. If cryptocurrency enthusiasts putting together a $150m investment fund can’t properly audit the software, how confident are you in your e-book audit? Perhaps you would rather write your own counteroffer software contract, in case this e-book author has hidden a recursion bug in their version to drain your ethereum wallet of all your life savings? It’s a complicated way to buy a book! It’s not trustless, you’re trusting in the software (and your ability to defend yourself in a software-driven world), instead of trusting other people. 
Another example: the purported advantages for a voting system in a weakly-governed country. “Keep your voting records in a tamper-proof repository not owned by anyone” sounds right — yet is your Afghan villager going to download the blockchain from a broadcast node and decrypt the Merkle root from his Linux command line to independently verify that his vote has been counted? Or will he rely on the mobile app of a trusted third party — like the nonprofit or open-source consortium administering the election or providing the software? These sound like stupid examples — novelists and villagers hiring e-bodyguard hackers to protect them from malicious customers and nonprofits whose clever smart-contracts might steal their money and votes?? — until you realize that’s actually the point. Instead of relying on trust or regulation, in the blockchain world, individuals are on-purpose responsible for their own security precautions. And if the software they use is malicious or buggy, they should have read the software more carefully. You actually see it over and over again. Blockchain systems are supposed to be more trustworthy, but in fact they are the least trustworthy systems in the world. Today, in less than a decade, three successive top bitcoin exchanges have been hacked, another is accused of insider trading, the demonstration-project DAO smart contract got drained, crypto price swings are ten times those of the world’s most mismanaged currencies, and bitcoin, the “killer app” of crypto transparency, is almost certainly artificially propped up by fake transactions involving billions of literally imaginary dollars. Blockchain systems do not magically make the data in them accurate or the people entering the data trustworthy, they merely enable you to audit whether it has been tampered with. A person who sprayed pesticides on a mango can still enter onto a blockchain system that the mangoes were organic. A corrupt government can create a blockchain system to count the votes and just allocate an extra million addresses to their cronies. An investment fund whose charter is written in software can still misallocate funds. How then, is trust created? In the case of buying an e-book, even if you’re buying it with a smart contract, instead of auditing the software you’ll rely on one of four things, each of them characteristics of the “old way”: either the author of the smart contract is someone you know of and trust, the seller of the e-book has a reputation to uphold, you or friends of yours have bought e-books from this seller in the past successfully, or you’re just willing to hope that this person will deal fairly. In each case, even if the transaction is effectuated via a smart contract, in practice you’re relying on trust of a counterparty or middleman, not your self-protective right to audit the software, each man an island unto himself. The contract still works, but the fact that the promise is written in auditable software rather than government-enforced English makes it less transparent, not more transparent. The same for the vote counting. Before blockchain can even get involved, you need to trust that voter registration is done fairly, that ballots are given only to eligible voters, that the votes are made anonymously rather than bought or intimidated, that the vote displayed by the balloting system is the same as the vote recorded, and that no extra votes are given to the political cronies to cast. 
Blockchain makes none of these problems easier and many of them harder—but more importantly, solving them in a blockchain context requires a set of awkward workarounds that undermine the core premise. So that we know the entries are valid, let’s allow only trusted nonprofits to make entries—and you’re back at the good old “classic” ledger. In fact, if you look at any blockchain solution, inevitably you’ll find an awkward workaround to re-create trusted parties in a trustless world. Yet absent these “old way” factors—supposing you actually attempted to rely on blockchain’s self-interest/self-protection to build a real system—you’d be in a real mess. Eight hundred years ago in Europe — with weak governments unable to enforce laws and trusted counterparties few, fragile and far between — theft was rampant, safe banking was a fantasy, and personal security was at the point of the sword. This is what Somalia looks like now, and also, what it looks like to transact on the blockchain in the ideal scenario. Somalia on purpose. That’s the vision. Nobody wants it! Even the most die-hard crypto enthusiasts prefer in practice to rely on trust rather than their own crypto-medieval systems. 93% of bitcoins are mined by managed consortiums, yet none of the consortiums use smart contracts to manage payouts. Instead, they promise things like a “long history of stable and accurate payouts.” Sounds like a trustworthy middleman! Same with Silk Road, a cryptocurrency-driven online drug bazaar. The key to Silk Road wasn’t the bitcoins (that was just to evade government detection), it was the reputation scores that allowed people to trust criminals. And the reputation scores weren’t tracked on a tamper-proof blockchain, they were tracked by a trusted middleman! If Ripple, Silk Road, Slush Pool, and the DAO all prefer “old way” systems of creating and enforcing trust, it’s no wonder that the outside world has not adopted trustless systems either! A decentralized, tamper-proof repository sounds like a great way to audit where your mango comes from, how fresh it is, and whether it has been sprayed with pesticides or not. But actually, laws on food labeling, nonprofit or government inspectors, an independent, trusted free press, empowered workers who trust whistleblower protections, credible grocery stores, your local nonprofit farmer’s market, and so on, do a way better job. People who actually care about food safety do not adopt blockchain because trusted is better than trustless. Blockchain’s technology mess exposes its metaphor mess — a software engineer pointing out that storing the data as a sequence of small hashed files won’t get the mango-pickers to accurately report whether they sprayed pesticides is also pointing out why peer-to-peer interaction with no regulations, norms, middlemen, or trusted parties is actually a bad way to empower people. Like the farmer’s market or the organic labeling standard, so many real ideas are hiding in plain sight. Do you wish there was a type of financial institution that was secure and well-regulated in all the traditional ways, but also has the integrity of being people-powered? A credit union’s members elect its directors, and the transaction-processing revenue is divided up among the members. Move your money! Prefer a deflationary monetary policy? Central bankers are appointed by elected leaders. Want to make elections more secure and democratic? Help write open source voting software, go out and register voters, or volunteer as an election observer here or abroad!
Wish there was a trusted e-book delivery service that charged lower transaction fees and distributed more of the earnings to the authors? You can already consider stated payout rates when you buy music or books, buy directly from the authors, or start your own e-book site that’s even better than what’s out there! Projects based on the elimination of trust have failed to capture customers’ interest because trust is actually so damn valuable. A lawless and mistrustful world where self-interest is the only principle and paranoia is the only source of safety is not a paradise but a crypto-medieval hellhole. As a society, and as technologists and entrepreneurs in particular, we’re going to have to get good at cooperating — at building trust, and, at being trustworthy. Instead of directing resources to the elimination of trust, we should direct our resources to the creation of trust—whether we use a long series of sequentially hashed files as our storage medium or not. Kai Stinchcombe coined the terms “crypto-medieval,” “futuristic integrity wand,” and “smart mango.” Please use freely: coining terms makes you a futurist. Whatever the opposite of a futurist is
savedroid ICO
340
3
https://medium.com/@ico_8796/sneakpeek-the-savedroid-crypto-saving-app-part-1-your-wish-64d1f7308518?source=---------7----------------
#SNEAKPEEK The savedroid crypto saving app — Part #1: Your wish
The international beta of our brand new crypto saving app is coming soon. The beta app will be launched in English and will be available exclusively to our ICO token buyers. Now, get ready to learn more about the savedroid crypto saving app even before its official release. Today, we give you a very first sneak peek of one of its core features: your wish. With savedroid you can save up for the personal goals you want to afford in the future. Your own lambo or your desired moon. Exactly that is your wish. So, using the savedroid crypto saving app is not just about piling up a fortune. It’s all about saving up for your personal wishes, which you are aspiring to fulfill but can’t afford right now. There are 3 simple steps to set up your wish in less than one minute: 1) What? First, name your wish and select one of our illustrations to keep you motivated to continue saving. You can go small and save for your new pair of hipster sneakers, or you may go big and start a crypto savings plan for your new family home. Everything is possible, only the moon is the limit — at least for now. 2) How much? Then set the amount you need to save up to afford your wish. The amount is denominated in fiat currency, as that is the prevailing means of payment. By the way, that makes it a lot easier for you, as you don’t need to do the math converting fiat to crypto and vice versa — this complex task is on us. 3) When? Finally, select the date by which you want to fulfill your wish. And you are done! That was easy, and savedroid’s other features will be just as easy as we deliver on our mission to democratize crypto and bring cryptocurrencies to the masses. To keep you posted on our latest product updates, we have started this new #SNEAKPEEK series. Here we will provide regular sneak peeks at our hottest new features. Stay tuned and follow our blog! The savedroid ICO: Cryptocurrencies for Everyone — now! Give Power to the People. Join the Revolution: https://ico.savedroid.com
Brandon Morelli
221
5
https://techburst.io/artificial-intelligence-top-10-articles-june-2018-4b3fa7572b46?source=---------8----------------
Artificial Intelligence Top 10 Articles — June 2018
Here’s what’s trending this month in Artificial Intelligence. Whether you’re experienced with Artificial Intelligence or a newbie looking to learn the basics of AI, there’s something for everyone on this list. Disclosure: We receive compensation from the courses we feature. 4.3/5 Stars || 17 Hours of Video || 58,823 Students Build an AI that combines the power of Data Science, Machine Learning and Deep Learning to create powerful AI for real-world applications. You will also have the chance to understand the story behind Artificial Intelligence. Learn More. 4.7/5 Stars || 8 Hours of Video || 15,063 Students Completely understand the relationship between reinforcement learning and psychology on a technical level. Apply gradient-based supervised machine learning methods to reinforcement learning and implement 17 different reinforcement learning algorithms. Learn More. By Lance Ulanoff Have you heard about Google Duplex yet? It’s pretty much the talk of the internet. Google CEO Sundar Pichai dropped the company’s biggest bomb when it introduced Google Duplex to the world. Take a look at this story to learn more. By Irhum Shafkat Understanding convolutions can often feel a bit unnerving, yet the concept is fascinatingly powerful and highly extensible. Let’s break down the mechanics of the convolution operation step by step and explore how it builds up into more powerful forms. By WiseWolf Fund AI is already shaping the economy, and in the near future its effect may be even more significant. Ignoring the new technology and its influence on the global economic situation is a recipe for failure. Read more of this article now! By Sam Drozdov Machine learning is a “field of study that gives computers the ability to learn without being explicitly programmed”. Learn the basics of machine learning and how to apply it to the products you are building right now. By Aman Dalmia Getting the opportunity to interact with great minds is an awesome privilege; their knowledge can help you avoid mistakes and improve much faster. By Simon Greenman Welcome to the AI gold rush! Check out this awesome article that talks about how companies and startups make money on AI and how it helps economic growth as well. By Justin Lee Is the chatbot hype over already? Find out why our industry massively overestimated the initial impact chatbots would have, and a lot more reasons why chatbots are no longer on trend. By Daniel Jeffries AI could mean the end of all jobs for most people, and that’s just terrifying, right? Check out this topic to learn more about how AI will bring an explosion of new jobs. By George Seif Learn more about Google’s AutoML — a suite of machine learning tools that will allow one to easily train high-performance deep networks without requiring the user to have any knowledge of AI. By James Loy Understand the inner workings of deep learning by building and training a neural network from scratch in Python. Creator of @codeburstio — Frequently posting web development tutorials & articles. Follow me on Twitter too: @BrandonMorelli bursts of tech to power through your day
Sarthak Jain
3.9K
10
https://medium.com/nanonets/how-to-easily-detect-objects-with-deep-learning-on-raspberrypi-225f29635c74?source=---------9----------------
How to easily Detect Objects with Deep Learning on Raspberry Pi
Disclaimer: I’m building nanonets.com to help build ML with less data and no hardware. The Raspberry Pi is a neat piece of hardware that has captured the hearts of a generation, with ~15M devices sold and hackers building even cooler projects on it. Given the popularity of Deep Learning and the Raspberry Pi Camera, we thought it would be nice if we could detect any object using Deep Learning on the Pi. Now you will be able to detect a photobomber in your selfie, someone entering Harambe’s cage, where someone kept the Sriracha, or an Amazon delivery guy entering your house. 20M years of evolution have made human vision remarkably capable. The human brain has 30% of its neurons working on processing vision (compared with 8 percent for touch and just 3 percent for hearing). Humans have two major advantages when compared with machines: one is stereoscopic vision, the second is an almost infinite supply of training data (an infant of 5 years has had approximately 2.7B images sampled at 30fps). To mimic human-level performance, scientists broke down the visual perception task into four different categories. Object detection has been good enough for a variety of applications (even though image segmentation gives a much more precise result, it suffers from the complexity of creating training data: it typically takes a human annotator 12x more time to segment an image than to draw bounding boxes; this is more anecdotal and lacks a source). Also, after detecting objects, it is separately possible to segment the object from the bounding box. Object detection is of significant practical importance and has been used across a variety of industries; some examples are mentioned below. Object detection can be used to answer a variety of questions; these are the broad categories. There are a variety of models/architectures used for object detection, each with trade-offs between speed, size, and accuracy. We picked one of the most popular ones, YOLO (You Only Look Once), and have shown how it works below in under 20 lines of code (if you ignore the comments). Note: this is pseudo code, not intended to be a working example. It has a black box, which is the CNN part; that part is fairly standard and shown in the image below. You can read the full paper here: https://pjreddie.com/media/files/papers/yolo_1.pdf For this task, you probably need a few hundred images per object. Try to capture data as close as possible to the data you’re going to finally make predictions on. Draw bounding boxes on the images. You can use a tool like labelImg. You will typically need a few people to work on annotating your images; this is a fairly intensive and time-consuming task. You can read more about this at medium.com/nanonets/nanonets-how-to-use-deep-learning-when-you-have-limited-data-f68c0b512cab. You need a pretrained model so you can reduce the amount of data required to train. Without it, you might need a few hundred thousand images to train the model. You can find a bunch of pretrained models here. The process of training a model is unnecessarily difficult, so to simplify it we created a Docker image that makes it easy to train. To start training the model you can run the image; it has a run.sh script that can be called with the following parameters. You can find more details there. To train a model you need to select the right hyperparameters. Finding the right parameters: the art of “Deep Learning” involves a little bit of trial and error to figure out which parameters get the highest accuracy for your model. There is some level of black magic associated with this, along with a little bit of theory. This is a great resource for finding the right parameters.
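As an aside: here is a rough, hypothetical Python sketch of the kind of YOLO-style detection loop referred to above. It is not the author’s code and not a working detector; cnn_forward, the grid size, the threshold, and the class names are stand-ins of my own, and the multiple boxes per cell and anchor offsets of the real model are ignored.

```python
import numpy as np

S = 7                       # grid size, as in the original YOLO paper
CONF_THRESHOLD = 0.5        # keep only reasonably confident boxes
CLASS_NAMES = ["person", "dog", "sriracha"]   # placeholder labels

def detect(image, cnn_forward):
    """Turn a grid of raw CNN outputs into a list of labelled boxes."""
    grid = cnn_forward(image)              # assumed shape: (S, S, 5 + num_classes)
    detections = []
    for row in range(S):
        for col in range(S):
            x, y, w, h = grid[row, col, :4]        # box parameters for this cell
            objectness = grid[row, col, 4]         # "is there an object here?"
            class_probs = grid[row, col, 5:]       # conditional class scores
            score = objectness * class_probs.max()
            if score > CONF_THRESHOLD:
                detections.append({
                    "box": (float(x), float(y), float(w), float(h)),
                    "label": CLASS_NAMES[int(class_probs.argmax())],
                    "score": float(score),
                })
    # A real pipeline would also apply non-maximum suppression here to merge
    # overlapping boxes that describe the same object.
    return detections

# Toy stand-in for the trained network, just to make the sketch executable.
dummy_cnn = lambda img: np.random.rand(S, S, 5 + len(CLASS_NAMES))
print(detect(np.zeros((448, 448, 3)), dummy_cnn)[:3])
```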
Quantize the model (make it smaller to fit on a small device like the Raspberry Pi or a mobile phone): small devices like mobile phones and the Raspberry Pi have very little memory and computation power. Training neural networks is done by applying many tiny nudges to the weights, and these small increments typically need floating point precision to work (though there are research efforts to use quantized representations here too). Taking a pre-trained model and running inference is very different. One of the magical qualities of Deep Neural Networks is that they tend to cope very well with high levels of noise in their inputs. Why quantize? Neural network models can take up a lot of space on disk, with the original AlexNet being over 200 MB in float format, for example. Almost all of that size is taken up by the weights for the neural connections, since there are often many millions of these in a single model. The nodes and weights of a neural network are originally stored as 32-bit floating point numbers. The simplest motivation for quantization is to shrink file sizes by storing the min and max for each layer, and then compressing each float value to an eight-bit integer. The size of the files is reduced by 75%. Next, you need the Raspberry Pi camera live and working, then capture a new image. For instructions on how to install it, check out this link. Download the model: once you’re done training the model, you can download it onto your Pi. To export the model, run the export command and then download the model onto the Raspberry Pi. Install TensorFlow on the Raspberry Pi: depending on your device, you might need to change the installation a little. Run the model to predict on the new image: the Raspberry Pi has constraints on both memory and compute (a version of TensorFlow compatible with the Raspberry Pi GPU is still not available). Therefore, it is important to benchmark how much time each of the models takes to make a prediction on a new image. We have removed the need to annotate images: we have expert annotators who will annotate your images for you. We automatically train the best model for you; to achieve this we run a battery of models with different parameters and select the best one for your data. NanoNets is entirely in the cloud and runs without using any of your hardware, which makes it much easier to use. Since devices like the Raspberry Pi and mobile phones were not built to run complex, compute-heavy tasks, you can outsource the workload to our cloud, which does all of the compute for you. Get your free API key from http://app.nanonets.com/user/api_key. Collect images of the object you want to detect. You can annotate them either using our web UI (https://app.nanonets.com/ObjectAnnotation/?appId=YOUR_MODEL_ID) or using an open-source tool like labelImg. Once you have the dataset ready in folders, images (image files) and annotations (annotations for the image files), start uploading the dataset. Once the images have been uploaded, begin training the model. The model takes ~2 hours to train, and you will get an email once the model is trained. In the meanwhile, you can check the state of the model. Once the model is trained, you can make predictions using the model. Founder & CEO @ NanoNets.com NanoNets: Machine Learning API
Dr. GP Pulipaka
2
6
https://medium.com/@gp_pulipaka/3-ways-to-apply-latent-semantic-analysis-on-large-corpus-text-on-macos-terminal-jupyterlab-colab-7b4dc3e1622?source=---------5----------------
3 Ways to Apply Latent Semantic Analysis on Large-Corpus Text on macOS Terminal, JupyterLab, and...
Latent semantic analysis works on large-scale datasets to generate representations that reveal insights through natural language processing. There are different approaches to performing latent semantic analysis at multiple levels, such as the document level, phrase level, and sentence level. Broadly, semantic analysis can be summarized as lexical semantics plus the study of combining individual words into paragraphs or sentences. Lexical semantics classifies and decomposes lexical items. Lexical semantic structures are applied in different contexts to identify the differences and similarities between words. A generic term in a paragraph or a sentence is a hypernym, and hyponymy describes the relationship between a hypernym and instances of its hyponyms. Homonyms share similar spelling, form, and syntax but have different meanings, and those meanings are not related to each other. “Book” is an example of a homonym: it can mean something someone reads or the act of making a reservation, with similar spelling, form, and syntax but a different definition. Polysemy is another phenomenon in which a single word is associated with multiple related senses and distinct meanings. The word polysemy comes from Greek and means “many signs”. Python provides the NLTK library to perform tokenization, chopping larger chunks of text into phrases or meaningful strings; processing words through tokenization produces tokens. Word lemmatization converts words from their current inflected form into the base form. Latent semantic analysis: applying latent semantic analysis to large datasets of text and documents captures contextual meaning through mathematical and statistical computation over a large corpus of text. Many times, latent semantic analysis has outperformed human scores on subject-matter tests conducted by humans. The accuracy of latent semantic analysis is high, as it reads through machine-readable documents and texts at web scale. Latent semantic analysis is a technique that applies singular value decomposition and principal component analysis (PCA). The document collection can be represented as a Z x Y matrix A, where the rows of the matrix represent the documents in the collection. The matrix A can have many hundreds of thousands of rows and columns for a typical large-corpus text collection. Applying singular value decomposition draws on a set of operations dubbed matrix decomposition. Natural language processing in Python with the NLTK library applies a low-rank approximation to the term-document matrix; the low-rank approximation then aids in indexing and retrieving documents, a technique known as latent semantic indexing, by clustering the words in the documents. Brief overview of linear algebra: the Z x Y matrix A contains real-valued, non-negative entries for the term-document matrix. The rank of the matrix is the number of linearly independent columns or rows in the matrix, so rank(A) ≤ min{Z, Y}. A square c x c matrix is a diagonal matrix when all off-diagonal entries are zero; if all c diagonal entries are one, it is the identity matrix of dimension c, written I_c. For a square Z x Z matrix A and a vector k that is not all zeroes, an eigenvalue λ and eigenvector k satisfy A k = λ k. The matrix decomposition factors the square matrix into a product of matrices built from its eigenvectors. This makes it possible to reduce the dimensionality of the words from many dimensions down to two dimensions to view on a plot.
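As a concrete illustration of the pipeline just described (a term-document matrix followed by a truncated SVD), here is a minimal sketch using scikit-learn rather than the exact NLTK code from the article; the toy documents and parameter choices are my own, and a real corpus would keep far more than two components.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Toy corpus standing in for a large document collection.
docs = [
    "I booked a table at the restaurant",
    "She read a good book on semantics",
    "The library lends every book it owns",
    "We booked flights and a hotel online",
]

# Build the term-document matrix (rows are documents here, columns are terms).
vectorizer = TfidfVectorizer(stop_words="english")
A = vectorizer.fit_transform(docs)

# Truncated SVD gives the low-rank approximation used by latent semantic
# indexing; two components keep the example viewable in two dimensions.
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_coords = lsa.fit_transform(A)

for doc, coords in zip(docs, doc_coords):
    print(f"{coords.round(2)}  {doc}")
```

Documents that use related vocabulary end up close together in the reduced space, which is the effect latent semantic indexing exploits for retrieval.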
The dimensionality reduction techniques of principal component analysis and singular value decomposition hold critical relevance in natural language processing. The Zipfian nature of word frequencies in a document makes it difficult to determine the similarity of words in a static way; hence eigendecomposition arises as a by-product of singular value decomposition, as the input document matrix is highly asymmetrical. Latent semantic analysis is a particular technique in semantic space for parsing through a document and identifying words with polysemy using the NLTK library. Resources such as punkt and wordnet have to be downloaded from NLTK. Deep learning at scale with Google Colab notebooks: training machine learning or deep learning models on CPUs can take hours and can be pretty expensive in terms of the time and energy of the computing resources. Google built the Colab notebook environment for research and development purposes. It runs entirely in the cloud without requiring any additional hardware or software setup for each machine. It is essentially the equivalent of a Jupyter notebook and lets data scientists share Colab notebooks by storing them on Google Drive, just like any other Google Sheets or Docs file, in a collaborative environment. There are no additional costs associated with enabling a GPU at runtime for acceleration. There are some challenges in uploading data into Colab, unlike a Jupyter notebook, which can access data directly from the local directory of the machine. In Colab, there are multiple options: upload the files from the local file system, or mount a drive to load the data through the Drive FUSE wrapper. Once this step is complete, it shows the log without errors. The next step is generating the authentication tokens to authenticate the Google credentials for the drive and Colab. If it shows successful retrieval of the access token, then Colab is all set. At this stage the drive is not mounted yet, so it will show false when accessing the contents of the text file. Once the drive is mounted, Colab has access to the datasets on Google Drive. Once the files are accessible, the Python code can be executed just as it would be in a Jupyter environment, and the Colab notebook displays the results much as we see them in a Jupyter notebook. PyCharm IDE: the program can be compiled in the PyCharm IDE environment and run from PyCharm, or it can be executed from the macOS Terminal. The results from the macOS Terminal and from a Jupyter notebook on a standalone machine are similar when running the latent semantic analysis on the local machine. References: Gorrell, G. (2006). Generalized Hebbian Algorithm for Incremental Singular Value Decomposition in Natural Language Processing. Retrieved from https://www.aclweb.org/anthology/E06-1013 Hardeniya, N. (2016). Natural Language Processing: Python and NLTK. Birmingham, England: Packt Publishing. Landauer, T. K., Foltz, P. W., Laham, D., & University of Colorado at Boulder (1998). An Introduction to Latent Semantic Analysis. Retrieved from http://lsa.colorado.edu/papers/dp1.LSAintro.pdf Stackoverflow (2018). Mounting Google Drive on Google Colab. Retrieved from https://stackoverflow.com/questions/50168315/mounting-google-drive-on-google-colab Stanford University (2009). Matrix decompositions and latent semantic indexing.
Retrieved from https://nlp.stanford.edu/IR-book/html/htmledition/matrix-decompositions-and-latent-semantic-indexing-1.html Ganapathi Pulipaka | Founder and CEO @deepsingularity | Bestselling Author | Big data | IoT | Startups | SAP | MachineLearning | DeepLearning | DataScience
Gabriel Jiménez
50
5
https://medium.com/aimarketingassociation/chatbots-could-we-talk-edd6ccbd8f5a?source=---------7----------------
Chatbots, could we talk? – AIMA: AI Marketing Magazine – Medium
After the euphoria for apps, the trend is reversing. Every day we download fewer new apps and keep only a few in constant use. A lot has happened since Apple proclaimed in 2009 that there was an app for everything, as in the following commercial. The chat boom: According to the Internet Association's 2017 report on the habits of Internet users in Mexico, the second most used social network among Mexicans is WhatsApp, a messaging app, and the first, although the report has no separate data for it, is Facebook, which also includes Facebook Messenger. Notably, both belong to Facebook, as does Instagram, which sits at position 5 on the list. Customer experience issues such as support, attention, and navigating telephone menus, together with the transition we have made from voice calls to text messages for both convenience and cost, have catalyzed the development of so-called virtual agents or chatbots to optimize resources and improve customer service. In an environment where an immediate response is the minimum that is expected, the best option for improving customer service at the lowest cost is a chatbot. But what is a chatbot? It is a computer program that works either through rules or, in the more advanced cases, through artificial intelligence, and that you interact with via chat. With rules: Chatbots that work with rules have limited functionality and respond only to specific commands; if you do not write exactly what they expect, they do not understand you. With artificial intelligence: Assistants that use artificial intelligence, on the other hand, can understand what you say however you write it, even if you write it incorrectly, abbreviated, or with idiomatic expressions. They are also able to improve over time, learning the way people express themselves and how they ask for things. Context and memory: Chatbots that use artificial intelligence can resume a previous conversation or, based on the context of the chat, move forward in a coherent manner. If, for example, we are looking for a movie to see and the bot first asks us which cinema we want to go to and then which movie, and we later change the movie, the chatbot will assume that we are still talking about the same cinema unless we specify otherwise. This may seem very simple to us as people, but for a chatbot, maintaining a coherent and fluid conversation is a huge achievement and one that brings great value. Channels: A chatbot can be integrated into any chat application, whether a corporate one, your own website, or a commercial platform like Facebook Messenger or WhatsApp. Limitations: One of the challenges chatbots face is initial adoption. They may fail for three main reasons: 1. The initial scope is not adequately delimited. We want the chatbot to resolve every possible issue: handle complaints, provide support, sell, generate interaction with customers, report service status. This causes, as with any project, scope creep and endless requirements, which makes it seem that the project will never work properly. 2. It is not linked to an activity that solves a business issue; sometimes chatbots are applied to trivial situations or have no relevant metric attached, so it is impossible to measure their effectiveness and quantify their benefits to the business. 3.
Because it is a new technology, we tend to think that since it has intelligence it can answer any question outside the business context for which it was defined, thereby losing the initial focus and evaluating its performance outside the scope for which it was created. It is important to remember that even with artificial intelligence, every bot needs a period of learning and evolution, and this takes time. The process is similar to that of a child: when it begins to learn it makes mistakes, and there are terms or forms of expression it does not know, but as time passes it becomes more and more capable thanks to the experience it acquires with each conversation. The same happens with a chatbot. Hand over: That is why there always has to be a process for redirecting a conversation to a human operator when the chatbot is not able to respond satisfactorily; in this way we keep the customer experience front and center and avoid frustrating people. Connection with systems: A chatbot can give customers comprehensive attention through chat, but its capacity to do so also depends on how well it is integrated with the company's systems; without this, the service it provides will be incomplete and frustrating. For example, if we have a chatbot for scheduling appointments, then in addition to understanding what people ask, it needs access to the agenda system to check whether a time slot is available; if it does not have that access, it will be limited and practically useless. Applications: The main change when using a chatbot is that instead of browsing websites we can simply ask for what we want; it is even possible to obtain recommendations, based on a few questions, to find what is most appropriate for us. Benefits: To learn more about chatbots and artificial intelligence, write to me @gabojimenez_ or linkedin.com/in/gabrieljimenezmunoz/ CONSULTATIVE SELLING | AI FOR BUSINESS | CHATBOTS | ANALYTICS | SPEAKER | WRITER | TEACHER Driving the AI Marketing movement
Kai Stinchcombe
44K
11
https://medium.com/@kaistinchcombe/decentralized-and-trustless-crypto-paradise-is-actually-a-medieval-hellhole-c1ca122efdec?source=tag_archive---------0----------------
Blockchain is not only crappy technology but a bad vision for the future
Blockchain is not only crappy technology but a bad vision for the future. Its failure to achieve adoption to date is because systems built on trust, norms, and institutions inherently function better than the type of no-need-for-trusted-parties systems blockchain envisions. That’s permanent: no matter how much blockchain improves it is still headed in the wrong direction. This December I wrote a widely-circulated article on the inapplicability of blockchain to any actual problem. People objected mostly not to the technology argument, but rather hoped that decentralization could produce integrity. Let’s start with this: Venmo is a free service to transfer dollars, and bitcoin transfers are not free. Yet after I wrote an article last December saying bitcoin had no use, someone responded that Venmo and Paypal are raking in consumers’ money and people should switch to bitcoin. What a surreal contrast between blockchain’s non-usefulness/non-adoption and the conviction of its believers! It’s so entirely evident that this person didn’t become a bitcoin enthusiast because they were looking for a convenient, free way to transfer money from one person to another and discovered bitcoin. In fact, I would assert that there is no single person in existence who had a problem they wanted to solve, discovered that an available blockchain solution was the best way to solve it, and therefore became a blockchain enthusiast. The number of retailers accepting cryptocurrency as a form of payment is declining, and its biggest corporate boosters like IBM, NASDAQ, Fidelity, Swift and Walmart have gone long on press but short on actual rollout. Even the most prominent blockchain company, Ripple, doesn’t use blockchain in its product. You read that right: the company Ripple decided the best way to move money across international borders was to not use Ripples. Why all the enthusiasm for something so useless in practice? People have made a number of implausible claims about the future of blockchain—like that you should use it for AI in place of the type of behavior-tracking that google and facebook do, for example. This is based on a misunderstanding of what a blockchain is. A blockchain isn’t an ethereal thing out there in the universe that you can “put” things into, it’s a specific data structure: a linear transaction log, typically replicated by computers whose owners (called miners) are rewarded for logging new transactions. There are two things that are cool about this particular data structure. One is that a change in any block invalidates every block after it, which means that you can’t tamper with historical transactions. The second is that you only get rewarded if you’re working on the same chain as everyone else, so each participant has an incentive to go with the consensus. The end result is a shared definitive historical record. And, what’s more, because consensus is formed by each person acting in their own interest, adding a false transaction or working from a different history just means you’re not getting paid and everyone else is. Following the rules is mathematically enforced—no government or police force need come in and tell you the transaction you’ve logged is false (or extort bribes or bully the participants). It’s a powerful idea. 
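To make the data structure concrete, here is a toy hash-chain sketch in Python (not any real cryptocurrency's code, and with mining, rewards, and consensus deliberately left out): each block commits to the hash of the previous one, so tampering with an old entry invalidates everything after it.

```python
# Toy hash-chained transaction log: each block stores the hash of the previous
# block, so altering any historical entry breaks every later link.
import hashlib
import json

def block_hash(block):
    # Deterministically hash the block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def is_valid(chain):
    # Every block must reference the hash of the block before it.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(is_valid(chain))                   # True

chain[0]["data"] = "Alice pays Bob 500"  # tamper with history
print(is_valid(chain))                   # False: every later block is now invalid
```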
So in summary, here’s what blockchain-the-technology is: “Let’s create a very long sequence of small files — each one containing a hash of the previous file, some new data, and the answer to a difficult math problem — and divide up some money every hour among anyone willing to certify and store those files for us on their computers.” Now, here’s what blockchain-the-metaphor is: “What if everyone keeps their records in a tamper-proof repository not owned by anyone?” An illustration of the difference: In 2006, Walmart launched a system to track its bananas and mangoes from field to store. In 2009 they abandoned it because of logistical problems getting everyone to enter the data, and in 2017 they re-launched it (to much fanfare) on blockchain. If someone comes to you with “the mango-pickers don’t like doing data entry,” “I know: let’s create a very long sequence of small files, each one containing a hash of the previous file” is a nonsense answer, but “What if everyone keeps their records in a tamper-proof repository not owned by anyone?” at least addresses the right question! People treat blockchain as a “futuristic integrity wand”—wave a blockchain at the problem, and suddenly your data will be valid. For almost anything people want to be valid, blockchain has been proposed as a solution. It’s true that tampering with data stored on a blockchain is hard, but it’s false that blockchain is a good way to create data that has integrity. To understand why this is the case, let’s work from the practical to the theoretical. For example, let’s consider a widely-proposed use case for blockchain: buying an e-book with a “smart” contract. The goal of the blockchain is, you don’t trust an e-book vendor and they don’t trust you (because you’re just two individuals on the internet), but, because it’s on blockchain, you’ll be able to trust the transaction. In the traditional system, once you pay you’re hoping you’ll receive the book, but once the vendor has your money they don’t have any incentive to deliver. You’re relying on Visa or Amazon or the government to make things fair—what a recipe for being a chump! In contrast, on a blockchain system, by executing the transaction as a record in a tamper-proof repository not owned by anyone, the transfer of money and digital product is automatic, atomic, and direct, with no middleman needed to arbitrate the transaction, dictate terms, and take a fat cut on the way. Isn’t that better for everybody? Hm. Perhaps you are very skilled at writing software. When the novelist proposes the smart contract, you take an hour or two to make sure that the contract will withdraw only an amount of money equal to the agreed-upon price, and that the book — rather than some other file, or nothing at all — will actually arrive. Auditing software is hard! The most-heavily scrutinized smart contract in history had a small bug that nobody noticed — that is, until someone did notice it, and used it to steal fifty million dollars. If cryptocurrency enthusiasts putting together a $150m investment fund can’t properly audit the software, how confident are you in your e-book audit? Perhaps you would rather write your own counteroffer software contract, in case this e-book author has hidden a recursion bug in their version to drain your ethereum wallet of all your life savings? It’s a complicated way to buy a book! It’s not trustless, you’re trusting in the software (and your ability to defend yourself in a software-driven world), instead of trusting other people. 
Another example: the purported advantages for a voting system in a weakly-governed country. “Keep your voting records in a tamper-proof repository not owned by anyone” sounds right — yet is your Afghan villager going to download the blockchain from a broadcast node and decrypt the Merkle root from his Linux command line to independently verify that his vote has been counted? Or will he rely on the mobile app of a trusted third party — like the nonprofit or open-source consortium administering the election or providing the software? These sound like stupid examples — novelists and villagers hiring e-bodyguard hackers to protect them from malicious customers and nonprofits whose clever smart-contracts might steal their money and votes?? — until you realize that’s actually the point. Instead of relying on trust or regulation, in the blockchain world, individuals are on-purpose responsible for their own security precautions. And if the software they use is malicious or buggy, they should have read the software more carefully. You actually see it over and over again. Blockchain systems are supposed to be more trustworthy, but in fact they are the least trustworthy systems in the world. Today, in less than a decade, three successive top bitcoin exchanges have been hacked, another is accused of insider trading, the demonstration-project DAO smart contract got drained, crypto price swings are ten times those of the world’s most mismanaged currencies, and bitcoin, the “killer app” of crypto transparency, is almost certainly artificially propped up by fake transactions involving billions of literally imaginary dollars. Blockchain systems do not magically make the data in them accurate or the people entering the data trustworthy, they merely enable you to audit whether it has been tampered with. A person who sprayed pesticides on a mango can still enter onto a blockchain system that the mangoes were organic. A corrupt government can create a blockchain system to count the votes and just allocate an extra million addresses to their cronies. An investment fund whose charter is written in software can still misallocate funds. How then, is trust created? In the case of buying an e-book, even if you’re buying it with a smart contract, instead of auditing the software you’ll rely on one of four things, each of them characteristics of the “old way”: either the author of the smart contract is someone you know of and trust, the seller of the e-book has a reputation to uphold, you or friends of yours have bought e-books from this seller in the past successfully, or you’re just willing to hope that this person will deal fairly. In each case, even if the transaction is effectuated via a smart contract, in practice you’re relying on trust of a counterparty or middleman, not your self-protective right to audit the software, each man an island unto himself. The contract still works, but the fact that the promise is written in auditable software rather than government-enforced English makes it less transparent, not more transparent. The same for the vote counting. Before blockchain can even get involved, you need to trust that voter registration is done fairly, that ballots are given only to eligible voters, that the votes are made anonymously rather than bought or intimidated, that the vote displayed by the balloting system is the same as the vote recorded, and that no extra votes are given to the political cronies to cast. 
Blockchain makes none of these problems easier and many of them harder—but more importantly, solving them in a blockchain context requires a set of awkward workarounds that undermine the core premise. So we know the entries are valid, let’s allow only trusted nonprofits to make entries—and you’re back at the good old “classic” ledger. In fact, if you look at any blockchain solution, inevitably you’ll find an awkward workaround to re-create trusted parties in a trustless world. Yet absent these “old way” factors—supposing you actually attempted to rely on blockchain’s self-interest/self-protection to build a real system—you’d be in a real mess. Eight hundred years ago in Europe — with weak governments unable to enforce laws and trusted counterparties few, fragile and far between — theft was rampant, safe banking was a fantasy, and personal security was at the point of the sword. This is what Somalia looks like now, and also, what it looks like to transact on the blockchain in the ideal scenario. Somalia on purpose. That’s the vision. Nobody wants it! Even the most die-hard crypto enthusiasts prefer in practice to rely on trust rather than their own crypto-medieval systems. 93% of bitcoins are mined by managed consortiums, yet none of the consortiums use smart contracts to manage payouts. Instead, they promise things like a “long history of stable and accurate payouts.” Sounds like a trustworthy middleman! Same with Silk Road, a cryptocurrency-driven online drug bazaar. The key to Silk Road wasn’t the bitcoins (that was just to evade government detection), it was the reputation scores that allowed people to trust criminals. And the reputation scores weren’t tracked on a tamper-proof blockchain, they were tracked by a trusted middleman! If Ripple, Silk Road, Slush Pool, and the DAO all prefer “old way” systems of creating and enforcing trust, it’s no wonder that the outside world had not adopted trustless systems either! A decentralized, tamper-proof repository sounds like a great way to audit where your mango comes from, how fresh it is, and whether it has been sprayed with pesticides or not. But actually, laws on food labeling, nonprofit or government inspectors, an independent, trusted free press, empowered workers who trust whistleblower protections, credible grocery stores, your local nonprofit farmer’s market, and so on, do a way better job. People who actually care about food safety do not adopt blockchain because trusted is better than trustless. Blockchain’s technology mess exposes its metaphor mess — a software engineer pointing out that storing the data a sequence of small hashed files won’t get the mango-pickers to accurately report whether they sprayed pesticides is also pointing out why peer-to-peer interaction with no regulations, norms, middlemen, or trusted parties is actually a bad way to empower people. Like the farmer’s market or the organic labeling standard, so many real ideas are hiding in plain sight. Do you wish there was a type of financial institution that was secure and well-regulated in all the traditional ways, but also has the integrity of being people-powered? A credit union’s members elect its directors, and the transaction-processing revenue is divided up among the members. Move your money! Prefer a deflationary monetary policy? Central bankers are appointed by elected leaders. Want to make elections more secure and democratic? Help write open source voting software, go out and register voters, or volunteer as an election observer here or abroad! 
Wish there was a trusted e-book delivery service that charged lower transaction fees and distributed more of the earnings to the authors? You can already consider stated payout rates when you buy music or books, buy directly from the authors, or start your own e-book site that’s even better than what’s out there! Projects based on the elimination of trust have failed to capture customers’ interest because trust is actually so damn valuable. A lawless and mistrustful world where self-interest is the only principle and paranoia is the only source of safety is not a paradise but a crypto-medieval hellhole. As a society, and as technologists and entrepreneurs in particular, we’re going to have to get good at cooperating — at building trust, and at being trustworthy. Instead of directing resources to the elimination of trust, we should direct our resources to the creation of trust — whether we use a long series of sequentially hashed files as our storage medium or not. Kai Stinchcombe coined the terms “crypto-medieval,” “futuristic integrity wand,” and “smart mango.” Please use freely: coining terms makes you a futurist. Whatever the opposite of a futurist is
Dhruv Parthasarathy
4.3K
12
https://blog.athelas.com/a-brief-history-of-cnns-in-image-segmentation-from-r-cnn-to-mask-r-cnn-34ea83205de4?source=tag_archive---------1----------------
A Brief History of CNNs in Image Segmentation: From R-CNN to Mask R-CNN
At Athelas, we use Convolutional Neural Networks(CNNs) for a lot more than just classification! In this post, we’ll see how CNNs can be used, with great results, in image instance segmentation. Ever since Alex Krizhevsky, Geoff Hinton, and Ilya Sutskever won ImageNet in 2012, Convolutional Neural Networks(CNNs) have become the gold standard for image classification. In fact, since then, CNNs have improved to the point where they now outperform humans on the ImageNet challenge! While these results are impressive, image classification is far simpler than the complexity and diversity of true human visual understanding. In classification, there’s generally an image with a single object as the focus and the task is to say what that image is (see above). But when we look at the world around us, we carry out far more complex tasks. We see complicated sights with multiple overlapping objects, and different backgrounds and we not only classify these different objects but also identify their boundaries, differences, and relations to one another! Can CNNs help us with such complex tasks? Namely, given a more complicated image, can we use CNNs to identify the different objects in the image, and their boundaries? As has been shown by Ross Girshick and his peers over the last few years, the answer is conclusively yes. Through this post, we’ll cover the intuition behind some of the main techniques used in object detection and segmentation and see how they’ve evolved from one implementation to the next. In particular, we’ll cover R-CNN (Regional CNN), the original application of CNNs to this problem, along with its descendants Fast R-CNN, and Faster R-CNN. Finally, we’ll cover Mask R-CNN, a paper released recently by Facebook Research that extends such object detection techniques to provide pixel level segmentation. Here are the papers referenced in this post: Inspired by the research of Hinton’s lab at the University of Toronto, a small team at UC Berkeley, led by Professor Jitendra Malik, asked themselves what today seems like an inevitable question: Object detection is the task of finding the different objects in an image and classifying them (as seen in the image above). The team, comprised of Ross Girshick (a name we’ll see again), Jeff Donahue, and Trevor Darrel found that this problem can be solved with Krizhevsky’s results by testing on the PASCAL VOC Challenge, a popular object detection challenge akin to ImageNet. They write, Let’s now take a moment to understand how their architecture, Regions With CNNs (R-CNN) works. Understanding R-CNN The goal of R-CNN is to take in an image, and correctly identify where the main objects (via a bounding box) in the image. But how do we find out where these bounding boxes are? R-CNN does what we might intuitively do as well - propose a bunch of boxes in the image and see if any of them actually correspond to an object. R-CNN creates these bounding boxes, or region proposals, using a process called Selective Search which you can read about here. At a high level, Selective Search (shown in the image above) looks at the image through windows of different sizes, and for each size tries to group together adjacent pixels by texture, color, or intensity to identify objects. Once the proposals are created, R-CNN warps the region to a standard square size and passes it through to a modified version of AlexNet (the winning submission to ImageNet 2012 that inspired R-CNN), as shown above. 
On the final layer of the CNN, R-CNN adds a Support Vector Machine (SVM) that simply classifies whether this is an object, and if so what object. This is step 4 in the image above. Improving the Bounding Boxes Now, having found the object in the box, can we tighten the box to fit the true dimensions of the object? We can, and this is the final step of R-CNN. R-CNN runs a simple linear regression on the region proposal to generate tighter bounding box coordinates to get our final result. Here are the inputs and outputs of this regression model: So, to summarize, R-CNN is just the following steps: R-CNN works really well, but is really quite slow for a few simple reasons: In 2015, Ross Girshick, the first author of R-CNN, solved both these problems, leading to the second algorithm in our short history - Fast R-CNN. Let’s now go over its main insights. Fast R-CNN Insight 1: RoI (Region of Interest) Pooling For the forward pass of the CNN, Girshick realized that for each image, a lot of proposed regions for the image invariably overlapped causing us to run the same CNN computation again and again (~2000 times!). His insight was simple — Why not run the CNN just once per image and then find a way to share that computation across the ~2000 proposals? This is exactly what Fast R-CNN does using a technique known as RoIPool (Region of Interest Pooling). At its core, RoIPool shares the forward pass of a CNN for an image across its subregions. In the image above, notice how the CNN features for each region are obtained by selecting a corresponding region from the CNN’s feature map. Then, the features in each region are pooled (usually using max pooling). So all it takes us is one pass of the original image as opposed to ~2000! Fast R-CNN Insight 2: Combine All Models into One Network The second insight of Fast R-CNN is to jointly train the CNN, classifier, and bounding box regressor in a single model. Where earlier we had different models to extract image features (CNN), classify (SVM), and tighten bounding boxes (regressor), Fast R-CNN instead used a single network to compute all three. You can see how this was done in the image above. Fast R-CNN replaced the SVM classifier with a softmax layer on top of the CNN to output a classification. It also added a linear regression layer parallel to the softmax layer to output bounding box coordinates. In this way, all the outputs needed came from one single network! Here are the inputs and outputs to this overall model: Even with all these advancements, there was still one remaining bottleneck in the Fast R-CNN process — the region proposer. As we saw, the very first step to detecting the locations of objects is generating a bunch of potential bounding boxes or regions of interest to test. In Fast R-CNN, these proposals were created using Selective Search, a fairly slow process that was found to be the bottleneck of the overall process. In the middle 2015, a team at Microsoft Research composed of Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, found a way to make the region proposal step almost cost free through an architecture they (creatively) named Faster R-CNN. The insight of Faster R-CNN was that region proposals depended on features of the image that were already calculated with the forward pass of the CNN (first step of classification). So why not reuse those same CNN results for region proposals instead of running a separate selective search algorithm? Indeed, this is just what the Faster R-CNN team achieved. 
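As a rough, NumPy-only sketch of the RoI pooling idea described above (one shared feature map, many proposed regions, each max-pooled into a fixed-size grid), something like the following captures the mechanics. The array shapes, the region format, and the 2x2 output grid are illustrative assumptions, not the actual Fast R-CNN implementation.

```python
# Rough sketch of RoI max pooling on a CNN feature map (NumPy only).
import numpy as np

def roi_max_pool(feature_map, roi, output_size=(2, 2)):
    """feature_map: (H, W) array; roi: (y0, x0, y1, x1) in feature-map coords."""
    y0, x0, y1, x1 = roi
    region = feature_map[y0:y1, x0:x1]
    out_h, out_w = output_size
    # Split the region into a fixed grid of bins and max-pool each bin,
    # so every proposal yields a feature of the same size.
    h_bins = np.array_split(np.arange(region.shape[0]), out_h)
    w_bins = np.array_split(np.arange(region.shape[1]), out_w)
    pooled = np.empty((out_h, out_w))
    for i, hb in enumerate(h_bins):
        for j, wb in enumerate(w_bins):
            pooled[i, j] = region[np.ix_(hb, wb)].max()
    return pooled

feature_map = np.arange(64, dtype=float).reshape(8, 8)  # one shared forward pass
proposals = [(0, 0, 4, 4), (2, 3, 8, 8)]                # ~2000 of these in practice
pooled_features = [roi_max_pool(feature_map, roi) for roi in proposals]
print(pooled_features[0])
```

The point is that the expensive convolutional forward pass happens once, while each proposal only costs a cheap pooling step.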
In the image above, you can see how a single CNN is used to carry out both region proposals and classification. This way, only one CNN needs to be trained and we get region proposals almost for free! The authors write: Here are the inputs and outputs of their model: How the Regions are Generated Let’s take a moment to see how Faster R-CNN generates these region proposals from CNN features. Faster R-CNN adds a Fully Convolutional Network on top of the features of the CNN, creating what’s known as the Region Proposal Network. The Region Proposal Network works by passing a sliding window over the CNN feature map and, at each window, outputting k potential bounding boxes and scores for how good each of those boxes is expected to be. What do these k boxes represent? Intuitively, we know that objects in an image should fit certain common aspect ratios and sizes. For instance, we know that we want some rectangular boxes that resemble the shapes of humans. Likewise, we know we won’t see many boxes that are very, very thin. In this way, we create k such common aspect ratios, which we call anchor boxes. For each such anchor box, we output one bounding box and score per position in the image. With these anchor boxes in mind, let’s take a look at the inputs and outputs to this Region Proposal Network: We then pass each bounding box that is likely to be an object into Fast R-CNN to generate a classification and tightened bounding boxes. So far, we’ve seen how we’ve been able to use CNN features in many interesting ways to effectively locate different objects in an image with bounding boxes. Can we extend such techniques to go one step further and locate the exact pixels of each object instead of just bounding boxes? This problem, known as image segmentation, is what Kaiming He and a team of researchers, including Girshick, explored at Facebook AI using an architecture known as Mask R-CNN. Much like Fast R-CNN and Faster R-CNN, Mask R-CNN’s underlying intuition is straightforward. Given that Faster R-CNN works so well for object detection, could we extend it to also carry out pixel-level segmentation? Mask R-CNN does this by adding a branch to Faster R-CNN that outputs a binary mask saying whether or not a given pixel is part of an object. The branch (in white in the above image), as before, is just a Fully Convolutional Network on top of a CNN-based feature map. Here are its inputs and outputs: But the Mask R-CNN authors had to make one small adjustment to make this pipeline work as expected. RoIAlign - Realigning RoIPool to be More Accurate When running the original Faster R-CNN architecture without modifications, the Mask R-CNN authors realized that the regions of the feature map selected by RoIPool were slightly misaligned from the regions of the original image. Since image segmentation requires pixel-level specificity, unlike bounding boxes, this naturally led to inaccuracies. The authors were able to solve this problem by cleverly adjusting RoIPool to be more precisely aligned, using a method known as RoIAlign. Imagine we have an image of size 128x128 and a feature map of size 25x25. Let’s imagine we want the features for the region corresponding to the top-left 15x15 pixels in the original image (see above). How might we select these pixels from the feature map? We know each pixel in the original image corresponds to ~25/128 pixels in the feature map. To select 15 pixels from the original image, we just select 15 * 25/128 ~= 2.93 pixels.
In RoIPool, we would round this down and select 2 pixels, causing a slight misalignment. However, in RoIAlign, we avoid such rounding. Instead, we use bilinear interpolation to get a precise idea of what would be at pixel 2.93. This, at a high level, is what allows us to avoid the misalignments caused by RoIPool. Once these masks are generated, Mask R-CNN combines them with the classifications and bounding boxes from Faster R-CNN to generate such wonderfully precise segmentations. If you’re interested in trying out these algorithms yourselves, here are the relevant repositories: Faster R-CNN Mask R-CNN In just 3 years, we’ve seen how the research community has progressed from Krizhevsky et al.’s original result to R-CNN, and finally all the way to such powerful results as Mask R-CNN. Seen in isolation, results like Mask R-CNN seem like incredible leaps of genius that would be unapproachable. Yet, through this post, I hope you’ve seen how such advancements are really the sum of intuitive, incremental improvements through years of hard work and collaboration. None of the ideas proposed by R-CNN, Fast R-CNN, Faster R-CNN, and finally Mask R-CNN was necessarily a quantum leap, yet together they have led to really remarkable results that bring us closer to a human-level understanding of sight. What particularly excites me is that the time between R-CNN and Mask R-CNN was just three years! With continued funding, focus, and support, how much further can Computer Vision improve over the next three years? If you see any errors or issues in this post, please contact me at dhruv@getathelas.com and I’ll immediately correct them! If you’re interested in applying such techniques, come join us at Athelas, where we apply Computer Vision to blood diagnostics daily: Other posts we’ve written: Thanks to Bharath Ramsundar, Pranav Ramkrishnan, Tanay Tandon, and Oliver Cameron for help with this post! @dhruvp. VP Eng @Athelas. MIT Math and CS Undergrad ’13. MIT CS Masters ’14. Previously: Director of AI Programs @ Udacity. Blood Diagnostics through Deep Learning http://athelas.com
Slav Ivanov
3.9K
17
https://blog.slavv.com/the-1700-great-deep-learning-box-assembly-setup-and-benchmarks-148c5ebe6415?source=tag_archive---------2----------------
The $1700 great Deep Learning box: Assembly, setup and benchmarks
Updated April 2018: Uses CUDA 9, cuDNN 7 and Tensorflow 1.5. After years of using a thin client in the form of increasingly thinner MacBooks, I had gotten used to it. So when I got into Deep Learning (DL), I went straight for the brand new at the time Amazon P2 cloud servers. No upfront cost, the ability to train many models simultaneously and the general coolness of having a machine learning model out there slowly teaching itself. However, as time passed, the AWS bills steadily grew larger, even as I switched to 10x cheaper Spot instances. Also, I didn’t find myself training more than one model at a time. Instead, I’d go to lunch/workout/etc. while the model was training, and come back later with a clear head to check on it. But eventually the model complexity grew and took longer to train. I’d often forget what I did differently on the model that had just completed its 2-day training. Nudged by the great experiences of the other folks on the Fast.AI Forum, I decided to settle down and to get a dedicated DL box at home. The most important reason was saving time while prototyping models — if they trained faster, the feedback time would be shorter. Thus it would be easier for my brain to connect the dots between the assumptions I had for the model and its results. Then I wanted to save money — I was using Amazon Web Services (AWS), which offered P2 instances with Nvidia K80 GPUs. Lately, the AWS bills were around $60–70/month with a tendency to get larger. Also, it is expensive to store large datasets, like ImageNet. And lastly, I haven’t had a desktop for over 10 years and wanted to see what has changed in the meantime (spoiler alert: mostly nothing). What follows are my choices, inner monologue, and gotchas: from choosing the components to benchmarking. A sensible budget for me would be about 2 years worth of my current compute spending. At $70/month for AWS, this put it at around $1700 for the whole thing. You can check out all the components used. The PC Part Picker site is also really helpful in detecting if some of the components don’t play well together. The GPU is the most crucial component in the box. It will train these deep networks fast, shortening the feedback cycle. Disclosure: The following are affiliate links, to help me pay for, well, more GPUs. The choice is between a few of Nvidia’s cards: GTX 1070, GTX 1070 Ti, GTX 1080, GTX 1080 Ti and finally the Titan X. The prices might fluctuate, especially because some GPUs are great for cryptocurrency mining (wink, 1070, wink). On performance side: GTX 1080 Ti and Titan X are similar. Roughly speaking the GTX 1080 is about 25% faster than GTX 1070. And GTX 1080 Ti is about 30% faster than GTX 1080. The new GTX 1070 Ti is very close in performance to GTX 1080. Tim Dettmers has a great article on picking a GPU for Deep Learning, which he regularly updates as new cards come on the market. Here are the things to consider when picking a GPU: Considering all of this, I picked the GTX 1080 Ti, mainly for the training speed boost. I plan to add a second 1080 Ti soonish. Even though the GPU is the MVP in deep learning, the CPU still matters. For example, data preparation is usually done on the CPU. The number of cores and threads per core is important if we want to parallelize all that data prep. To stay on budget, I picked a mid-range CPU, the Intel i5 7500. It’s relatively cheap but good enough to not slow things down. 
Edit: As a few people have pointed out: “probably the biggest gotcha that is unique to DL/multi-GPU is to pay attention to the PCIe lanes supported by the CPU/motherboard” (by Andrej Karpathy). We want each GPU to have 16 PCIe lanes so it eats data as fast as possible (16 GB/s for PCIe 3.0). This means that for two cards we need 32 PCIe lanes. However, the CPU I picked has only 16 lanes, so 2 GPUs would run in 2x8 mode (instead of 2x16). This might be a bottleneck, leading to less than ideal utilization of the graphics cards. Thus a CPU with 40 lanes is recommended. Edit 2: However, Tim Dettmers points out that having 8 lanes per card should only decrease performance by “0–10%” for two GPUs. So currently, my recommendation is: go with 16 PCIe lanes per video card unless it gets too expensive for you. Otherwise, 8 lanes should do as well. A good choice for a dual-GPU machine would be an Intel Xeon processor like the E5–1620 v4 (40 PCIe lanes). Or if you want to splurge, go for a higher-end processor like the desktop i7–6850K. Memory (RAM) It’s nice to have a lot of memory if we are to be working with rather big datasets. I got 2 sticks of 16 GB, for a total of 32 GB of RAM, and plan to buy another 32 GB later. Following Jeremy Howard’s advice, I got a fast SSD disk to keep my OS and current data on, and then a slow spinning HDD for those huge datasets (like ImageNet). SSD: I remember when I got my first Macbook Air years ago, how blown away I was by the SSD speed. To my delight, a new generation of SSD called NVMe has made its way to market in the meantime. A 480 GB MyDigitalSSD NVMe drive was a great deal. This baby copies files at gigabytes per second. HDD: 2 TB Seagate. While SSDs have been getting fast, HDDs have been getting cheap. To somebody who has used Macbooks with a 128 GB disk for the last 7 years, having this much space feels almost obscene. The one thing that I kept in mind when picking a motherboard was the ability to support two GTX 1080 Ti cards, both in the number of PCI Express lanes (the minimum is 2x8) and the physical size of 2 cards. Also, make sure it’s compatible with the chosen CPU. An Asus TUF Z270 did it for me. The MSI X99A SLI PLUS should work great if you go with an Intel Xeon CPU. Rule of thumb: the power supply should provide enough juice for the CPU and the GPUs, plus 100 watts extra. The Intel i5 7500 processor uses 65W, and the GPUs (1080 Ti) need 250W each, so I got a Deepcool 750W Gold PSU (currently unavailable, EVGA 750 GQ is similar). The “Gold” here refers to the power efficiency, i.e. how much of the power consumed is wasted as heat. The case should be the same form factor as the motherboard. Also, having enough LEDs to embarrass a Burner is a bonus. A friend recommended the Thermaltake N23 case, which I promptly got. No LEDs sadly. Here is how much I spent on all the components (your costs may vary): $700 GTX 1080 Ti + $190 CPU + $230 RAM + $230 SSD + $66 HDD + $130 Motherboard + $75 PSU + $50 Case = $1671 total. Adding tax and fees, this nicely matches my preset budget of $1700. If you don’t have much experience with hardware and fear you might break something, professional assembly might be the best option. However, this was a great learning opportunity that I couldn’t pass up (even though I’ve had my share of hardware-related horror stories). The first and most important step is to read the installation manuals that came with each component.
Especially important for me, as I’ve done this before only once or twice, and I have just the right amount of inexperience to mess things up. This is done before installing the motherboard in the case. Next to the processor there is a lever that needs to be pulled up. The processor is then placed on the base (double-check the orientation). Finally, the lever comes down to fix the CPU in place. But I had quite a bit of difficulty doing this: once the CPU was in position, the lever wouldn’t go down. I actually had a more hardware-capable friend of mine walk me through the process over video. It turns out the amount of force required to get the lever locked down was more than I was comfortable with. Next is fixing the fan on top of the CPU: the fan legs must be fully secured to the motherboard. Consider where the fan cable will go before installing. The processor I had came with thermal paste. If yours doesn’t, make sure to put some paste between the CPU and the cooling unit. Also, replace the paste if you take off the fan. I put the Power Supply Unit (PSU) in before the motherboard to get the power cables snugly placed in the back side of the case. Installing the motherboard is pretty straightforward — carefully place it and screw it in. A magnetic screwdriver was really helpful. Then connect the power cables and the case buttons and LEDs. The NVMe SSD is easy: just slide it into the M.2 slot and screw it in. Piece of cake. The memory proved quite hard to install, requiring too much effort to properly lock in. A few times I almost gave up, thinking I must be doing it wrong. Eventually one of the sticks clicked in and the other one promptly followed. At this point, I turned the computer on to make sure it worked. To my relief, it started right away! Finally, the GPU slid in effortlessly. 14 pins of power later and it was running. NB: Do not plug your monitor into the external card right away. Most probably it needs drivers to function (see below). Finally, it’s complete! Now that we have the hardware in place, only the soft part remains. Out with the screwdriver, in with the keyboard. Note on dual booting: If you plan to install Windows (because, you know, for benchmarks, totally not for gaming), it would be wise to do Windows first and Linux second. I didn’t, and had to reinstall Ubuntu because Windows messed up the boot partition. Livewire has a detailed article on dual boot. Most DL frameworks are designed to work on Linux first and eventually support other operating systems. So I went for Ubuntu, my default Linux distribution. An old 2GB USB drive was lying around and worked great for the installation. UNetbootin (OSX) or Rufus (Windows) can prepare the Linux thumb drive. The default options worked fine during the Ubuntu install. At the time of writing, Ubuntu 17.04 was just released, so I opted for the previous version (16.04), whose quirks are much better documented online. Ubuntu Server or Desktop: The Server and Desktop editions of Ubuntu are almost identical, with the notable exception of the visual interface (called X) not being installed with Server. I installed the Desktop edition and disabled autostarting X so that the computer would boot in terminal mode. If needed, one could launch the visual desktop later by typing startx. Let’s get our install up to date. From Jeremy Howard’s excellent install-gpu script: To deep learn on our machine, we need a stack of technologies to use our GPU: Download CUDA from Nvidia, or just run the code below (updated to specify version 9 of CUDA; thanks to @zhanwenchen for the tip).
If you need to add later versions of CUDA, click here. After CUDA has been installed, the following code will add the CUDA installation to the PATH variable: Now we can verify that CUDA has been installed successfully by running: This should have installed the display driver as well. For me, nvidia-smi showed ERR as the device name, so I installed the latest Nvidia drivers (as of May 2018) to fix it: Removing CUDA/Nvidia drivers: If at any point the drivers or CUDA seem broken (as they did for me — multiple times), it might be better to start over by running: Since version 1.5, Tensorflow supports cuDNN 7, so we install that. To download cuDNN, one needs to register for a (free) developer account. After downloading, install with the following: Anaconda is a great package manager for Python. I’ve moved to Python 3.6, so I will be using the Anaconda 3 version: The popular DL framework by Google. Installation: Validate the Tensorflow install: To make sure we have our stack running smoothly, I like to run the Tensorflow MNIST example: We should see the loss decreasing during training: Keras is a great high-level neural networks framework, an absolute pleasure to work with. Installation couldn’t be easier: PyTorch is a newcomer in the world of DL frameworks, but its API is modeled on the successful Torch, which was written in Lua. PyTorch feels new and exciting, mostly great, although some things are still to be implemented. We install it by running: Jupyter is a web-based IDE for Python, which is ideal for data science tasks. It’s installed with Anaconda, so we just configure and test it: Now if we open http://localhost:8888 we should see a Jupyter screen. Run Jupyter on boot: Rather than running the notebook every time the computer is restarted, we can set it to autostart on boot. We will use crontab to do this, which we can edit by running crontab -e. Then add the following after the last line in the crontab file: I use my old trusty Macbook Air for development, so I’d like to be able to log into the DL box both from my home network and when on the road. SSH key: It’s way more secure to use an SSH key to log in instead of a password. Digital Ocean has a great guide on how to set this up. SSH tunnel: If you want to access your Jupyter notebook from another computer, the recommended way is to use SSH tunneling (instead of opening the notebook to the world and protecting it with a password). Let’s see how we can do this: 2. Then to connect over the SSH tunnel, run the following script on the client: To test this, open a browser and try http://localhost:8888 from the remote machine. Your Jupyter notebook should appear. Set up out-of-network access: Finally, to access the DL box from the outside world, we need 3 things: Setting up out-of-network access depends on the router/network setup, so I’m not going into details. Now that we have everything running smoothly, let’s put it to the test. We’ll be comparing the newly built box to an AWS P2.xlarge instance, which is what I’ve used so far for DL. The tests are computer vision related, meaning convolutional networks with a fully connected model thrown in. We time training models on: the AWS P2 instance GPU (K80), the AWS P2 virtual CPU, the GTX 1080 Ti, and the Intel i5 7500 CPU. Andres Hernandez points out that my comparison does not use a Tensorflow build that is optimized for these CPUs, which would have helped them perform better. Check his insightful comment for more details. The “Hello World” of computer vision: the MNIST database consists of 70,000 handwritten digits.
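As a rough illustration of the kind of model being timed in this first benchmark, a small fully connected Keras network on flattened MNIST digits looks something like the sketch below; the layer sizes and optimizer are assumptions in the spirit of the standard Keras MNIST MLP example, not necessarily the exact script that was benchmarked.

```python
# Sketch of a Keras MNIST MLP (fully connected layers only, 20 epochs).
# Layer sizes and optimizer are illustrative choices.
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

model = Sequential([
    Dense(512, activation="relu", input_shape=(784,)),
    Dropout(0.2),
    Dense(512, activation="relu"),
    Dropout(0.2),
    Dense(10, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="rmsprop",
              metrics=["accuracy"])

# The wall-clock time of these 20 epochs is what gets compared across GPU and CPU.
model.fit(x_train, y_train, batch_size=128, epochs=20,
          validation_data=(x_test, y_test))
print(model.evaluate(x_test, y_test))
```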
We run the Keras example on MNIST, which uses a Multilayer Perceptron (MLP). MLP means that we are using only fully connected layers, not convolutions. The model is trained for 20 epochs on this dataset and achieves over 98% accuracy out of the box. We see that the GTX 1080 Ti is 2.4 times faster than the K80 on AWS P2 in training the model. This is rather surprising, as these 2 cards should have about the same performance. I believe this is because of the virtualization or underclocking of the K80 on AWS. The CPUs perform 9 times slower than the GPUs. As we will see later, it’s a really good result for the processors. This is due to the small model, which fails to fully utilize the parallel processing power of the GPUs. Interestingly, the desktop Intel i5–7500 achieves a 2.3x speedup over the virtual CPU on Amazon. Next, a VGG net will be finetuned for the Kaggle Dogs vs Cats competition, in which we need to tell apart pictures of dogs and cats. Running the model on CPUs for the same number of batches wasn’t feasible, therefore we finetune for 390 batches (1 epoch) on the GPUs and 10 batches on the CPUs. The code used is on GitHub. The 1080 Ti is 5.5 times faster than the AWS GPU (K80). The difference in the CPUs’ performance is about the same as in the previous experiment (the i5 is 2.6x faster). However, it’s absolutely impractical to use CPUs for this task, as the CPUs were taking ~200x more time on this large model, which includes 16 convolutional layers and a couple of semi-wide (4096) fully connected layers on top. A GAN (Generative Adversarial Network) is a way to train a model to generate images. A GAN achieves this by pitting two networks against each other: a Generator, which learns to create better and better images, and a Discriminator, which tries to tell which images are real and which are dreamt up by the Generator. The Wasserstein GAN is an improvement over the original GAN. We will use a PyTorch implementation that is very similar to the one by the WGAN author. The models are trained for 50 steps, and the loss is all over the place, which is often the case with GANs. CPUs aren’t considered. The GTX 1080 Ti finishes 5.5x faster than the AWS P2 K80, which is in line with the previous results. The final benchmark is on the original Style Transfer paper (Gatys et al.), implemented in Tensorflow (code available). Style Transfer is a technique that combines the style of one image (a painting, for example) and the content of another image. Check out my previous post for more details on how Style Transfer works. The GTX 1080 Ti outperforms the AWS K80 by a factor of 4.3. This time the CPUs are 30-50 times slower than the graphics cards. The slowdown is less than on the VGG finetuning task but more than on the MNIST perceptron experiment. The model uses mostly the earlier layers of the VGG network, and I suspect this was too shallow to fully utilize the GPUs. The DL box is in the next room and a large model is training on it. Was it a wise investment? Time will tell, but it is beautiful to watch the glowing LEDs in the dark and to hear its quiet hum as models try to squeeze out that extra accuracy percentage point. Entrepreneur / Hacker Machine learning, Deep learning and other types of learning.
Tyler Elliot Bettilyon
17.9K
13
https://medium.com/@TebbaVonMathenstien/are-programmers-headed-toward-another-bursting-bubble-528e30c59a0e?source=tag_archive---------3----------------
Are Programmers Headed Toward Another Bursting Bubble?
A friend of mine recently posed a question that I’ve heard many times in varying forms and forums: “Do you think IT and some lower-level programming jobs are going to go the way of the dodo? Seems a bit like a massive job bubble that’s gonna burst. It’s my opinion that one of the only things keeping tech and lower-level computer science-related jobs “prestigious” and well-paid is ridiculous industry jargon and public ignorance about computers, which are both going to go away in the next 10 years. [...]” This question is simultaneously on point about the future of technology jobs and exemplary of some pervasive misunderstandings regarding the field of software engineering. While it’s true that there is a great deal of “ridiculous industry jargon” there are equally many genuinely difficult problems waiting to be solved by those with the right skill-set. Some software jobs are definitely going away but programmers with the right experience and knowledge will continue to be prestigious and well remunerated for many years to come; as an example look at the recent explosion of AI researcher salaries and the corresponding dearth of available talent. Staying relevant in the ever changing technology landscape can be a challenge. By looking at the technologies that are replacing programmers in the status quo we should be able to predict what jobs might disappear from the market. Additionally, to predict how salaries and demand for specific skills might change we should consider the growing body of people learning to program. As Hannah pointed out “public ignorance” about computers is keeping wages high for those who can program and the public is becoming more computer savvy each year. The fear of automation replacing jobs is neither new nor unfounded. In any field, and especially in technology, market forces drive corporations toward automation and commodification. Gartner’s Hype Cycles are one way of contextualizing this phenomenon. As time goes on, specific ideas and technologies push towards the “plateau of productivity” where they are eventually automated. Looking at history one must conclude that automation has the power to destroy specific job markets. In diverse industries ranging from crop harvesting to automobile assembly technology advances have consistently replaced and augmented human labor to reduce costs. A professor once put it this way in his compilers course, “take historical note of textile and steel industries: do you want to build machines and tools, or do you want to operate those machines?” In this metaphor the “machine” is a computer programming language. This professor was really asking: Do you want to build websites using JavaScript, or do you want to build the V8 engine that powers JavaScript? The creation of websites is being automated by WordPress (and others) today. V8 on the other hand has a growing body of competitors some of whom are solving open research questions. Languages will come and go (how many Fortran job openings are there?) but there will always be someone building the next language. Lucky for us, programming language implementations are written with programming languages themselves. Being a “machine operator” in software puts you on the path to being a “machine creator” in a way which was not true of the steel mill workers of the past. The growing number of languages, interpreters, and compilers shows us that every job-destroying machine also brings with it new opportunities to improve those machines, maintain those machines, and so forth. 
Despite the growing body of jobs which no longer exist, there has yet to be a moment in history where humanity has collectively said, “I guess there isn’t any work left for us to do.” Commodification is coming for us all, not just software engineers. Throughout history, human labor has consistently been replaced with non-humans or augmented to require fewer and less skilled humans. Self-driving cars and trucks are the flavor of the week in this grand human tradition. If the cycle of creation and automation are a fact of life, the natural question to answer next is: which jobs and industries are at risk, and which are not? AWS, Heroku, and other similar hosting platforms have forever changed the role of the System Administrator/DevOps engineer. Internet businesses used to absolutely need their own server master. Someone who was well versed in Linux; someone who could configure a server with Apache or NGINX; someone who could not only physically wire up the server, the routers, and all the other physical components, but who could also configure the routing tables and all the software required to make that server accessible on the public web. While there are definitely still people applying this skill-set professionally, AWS is making some of those skills obsolete — especially at the lower experience levels and on the physical side of things. There are very lucrative roles within Amazon (and Netflix, and Google...) for people with deep expertise in networking infrastructure, but there is much less demand at the small-to-medium business scale. “Business Intelligence” tools such as SalesForce, Tableau and SpotFire are also beginning to occupy spaces historically held by software engineers. These systems have reduced the demand for in-house Database Administrators, but they have also increased the demand for SQL as a general-purpose skill. They have decreased demand for in-house reporting technology, but increased demand for “integration engineers” who automate the flow of data from the business to the third-party software platform(s). A field that was previously dominated by Excel and Spreadsheets is increasingly being pushed towards scripting languages like Python or R, and towards SQL for data management. Some jobs have disappeared, but demand for people who can write software has seen an increase overall. Data Science is a fascinating example of commodification at a level closer to software. Scikit.learn, Tensorflow, and PyTorch are all software libraries that make it easier for people to build machine learning applications without building the algorithms from scratch. In fact, it’s possible to run a dataset through many different machine learning algorithms, with many different parameter sets for those algorithms, with little to no understanding of how those algorithms are actually implemented (it’s not necessarily wise to do this, just possible). You can bet that business intelligence companies will be trying to integrate these kinds of algorithms into their own tools over the next few years as well. In many ways data science looks like web development did 5–8 years ago — a booming field where a little bit of knowledge can get you in the door due to a “skills gap”. As web development bootcamps are closing and consolidating, data science bootcamps are popping up in their place. Kaplan, who bought the original web development bootcamp (Dev Bootcamp) and started a data science bootcamp (Metis) has decided to close DevBootcamp and keep Metis running. 
Content management systems are among the most visible of the tools automating away the need for a software engineer. Squarespace and WordPress are among the most popular CMS platforms today. These platforms are significantly reducing the value of people with just a little bit of front-end web development skill. In fact, the barriers to making a website and getting it online have come down so dramatically that people with zero programming experience are successfully launching websites every day. Those same people aren’t making deeply interactive websites that serve billions of people, but they absolutely do make websites for their own businesses that give customers the information they need. A lovely landing page with information such as how to find the establishment and how to contact them is more than enough for a local restaurant, bar, or retail store. If your business is not primarily an “internet business,” it has never been easier to get a working site on the public web. As a result, the once-thriving industry of web contractors who can quickly set up a simple website and get it online is becoming less lucrative. Finally, it would border on hubris to ignore the physical aspect of computers in this context. In the words of Mike Acton: “software is not the platform, hardware is the platform”. Software people would be wise to study at least a little computer architecture and electrical engineering. A big shake-up in hardware, such as the arrival of consumer-grade quantum computers, would (will) change everything about professional software engineering. Quantum computers are still a ways off, but the growing interest in GPUs and the drive toward parallelization is an imminent shift. CPU speeds have been stagnant for several years now, and in that time a seemingly unquenchable thirst for machine learning and “big data” has emerged. With more desire than ever to process large data-sets, OpenMP, OpenCL, Go, CUDA, and other parallel-processing languages and frameworks will continue to become mainstream. To be competitively fast in the near-term future, significant parallelization will be a requirement across the board, not just in high-performance niches like operating systems, infrastructure, and video games. Websites are ubiquitous. The 2017 Stack Overflow Survey reports that about 15% of professional software engineers are working in an “Internet/Web Services” company. The Bureau of Labor Statistics expects growth in Web Development to continue much faster than average (24% between 2014 and 2024). Due to its visibility, there has been a massive focus on “solving the skills gap” in this industry. Coding bootcamps teach Web Development almost exclusively, and Web Development online courses have flooded Udemy, Udacity, Coursera, and similar marketplaces. The combination of increasing automation throughout the Web Development technology stack and the influx of new entry-level programmers with an explicit focus on Web Development has led some to predict a slide towards a “blue collar” market for software developers. Some have gone further, suggesting that the push towards a blue-collar market is a strategy architected by big tech firms. Others, of course, say we’re headed for another bursting bubble. Change in demand for specific technologies is not news. Languages and frameworks are always rising and falling in technology. Web Development in its current incarnation (“JS Is King”) will eventually go the way of Web Development of the early 2000s (remember Flash?).
What is new is that a lot of people are receiving an education explicitly (and solely) in the currently trendy web development frameworks. Before you decide to label yourself a “React developer,” remember there were people who once identified themselves as “Flash developers”. Banking your career on a specific language, framework, or technology is a game of roulette. Of course it’s quite difficult to predict what technologies will remain relevant, but if you’re going to go all in on something, I suggest relying on the Lindy Effect and picking something like C that has already withstood the test of time. The next generation will have a level of de facto tech literacy that Generation X and even Millennials do not have. One outcome of this will be that using the next generation of CMS tools will be a given. These tools will get better, and young workers will be better at using them. This combination will definitely bring down the value of low-level IT and web development skills as eager and skilled youngsters enter the job market. High schools are catching on as well, offering computer science and programming classes — some well-educated high school students will likely be entering the workforce as programming interns immediately upon graduation. Another big group of newcomers to programming are MBAs and data analysts. Job listings that were once dominated by Excel are starting to list SQL as a “nice to have” or even a “requirement”. Tools such as Tableau, Spotfire, Salesforce, and other web-based metrics systems continue to replace the spreadsheet as the primary tool for report generation. If this continues, more data analysts will learn to use SQL directly, simply because it is easier than exporting the data into a spreadsheet. People looking to climb the ranks and outperform their peers in these roles are taking online courses to learn about databases and statistical programming languages. With these new skills, they can begin to position themselves as data scientists by learning a combination of machine learning and statistical libraries. Look at Metis’ curriculum as a prime example of this path. Finally, the number of people earning Computer Science and Software Engineering degrees continues to climb. Purdue, for example, reports that applications to its CS program have doubled over five years. Cornell reports a similar explosion of CS graduates. This trend isn’t surprising given the growth and ubiquity of software. It’s hard for young people to imagine that computers will play a smaller role in our futures, so why not study something that’s going to give you job security? A common argument in the industry nowadays is that the education you receive in a four-year Computer Science program is mostly unnecessary cruft. I have heard this argument repeatedly in the halls of bootcamps and web development shops, and online from big names in the field, such as in this piece by Eric Elliott. The opposing view is popular as well, with some going so far as to say “all programmers should earn a master’s degree”. Like Eric Elliott, I think it’s good that there are more options than ever to break into programming, and a four-year degree might not be the best option for many. Simultaneously, I agree with William Bain that the foundational skills which apply across programming disciplines are crucial for career longevity, and that it is still hard to find that information outside of university courses.
I’ve written previously about what skills I think aspiring engineers should learn as a foundation of a long career, and joined Bradfield in order to help share this knowledge. Coding schools of many shapes and sizes are becoming ubiquitous, and for good reasons. There is quite a lot you can learn about programming without getting into the minutiae of Big O notation, obscure data structures, and algorithmic trivia. However, while it’s true that fresh graduates from Stanford are competing for some jobs with fresh graduates from Hack Reactor, it’s only true in one or two sub-industries. Code school and bootcamp graduates are not yet applying to work on embedded systems, cryptography/security, robotics, network infrastructure, or AI research and development. Yet these fields, like web development, are growing quickly. Some programming-related skills have already started their transition from “rare skill” to “baseline expectation”. Conversely, the engineering that goes into creating beastly engines like AWS is anything but common. The big companies driving technology forward — Amazon, Google, Facebook, Nvidia, SpaceX, and so on — are typically not looking for people with a ‘basic understanding of JavaScript’. AWS serves billions of users per day. To support that kind of load, an AWS infrastructure engineer needs deep knowledge of network protocols and computer architecture, plus several years of relevant experience. As with any discipline, there are amateurs and artisans. These prestigious firms are solving research problems and building systems that are truly pushing against the boundaries of what is possible. Yet they still struggle to fill open roles even while basic programming skills are increasingly common. People who can write algorithms to predict which changes to a genetic sequence will yield a desired result are going to be highly valuable in the future. People who can program satellites and spacecraft, and automate machinery, will continue to be highly valued. These are not fields that lend themselves as readily to a “3-month intensive program” as front-end web development, at least not without significant prior experience. Because computer science starts with the word “computer,” it is assumed that young people will all have an innate understanding of it by 2025. Unfortunately, the ubiquity of computers has not created a new generation of people who de facto understand mathematics, computer science, network infrastructure, electrical engineering, and so on. Computer literacy is not the same as the study of computation. Despite mathematics having existed since the dawn of time, only a relatively small portion of the population has strong statistical literacy; the study of computation is similarly old, and similarly far from universal. Euclid invented several algorithms, one of which is used every time you make an HTTPS request; the fact that we use HTTPS every time we log in to a website does not automatically imbue anyone with a knowledge of how those protocols work. More established professional fields often have a bimodal wage distribution: a relatively small number of practitioners make quite a lot of money, while the majority earn a good wage but do not find themselves in the top 1% of earners. The National Association for Law Placement collects data that can be used to visualize this phenomenon in stark clarity. A huge share of law graduates make between $45,000 and $65,000 — a good wage, but hardly the salary we associate with a “top professional”.
We tend to think that all law graduates are on track to become partners at a law firm, when really there are many paths: paralegal, clerk, public defender, judge, legal services for businesses, contract writing, and so on. Computer science graduates also have many options for their professional practice, from web development to embedded systems. As a basic level of programming literacy continues to become an expectation, rather than a “nice to have”, I suspect a similar distribution will emerge in programming jobs. While there will always be a cohort of programmers making a lot of money to push on the edges of technology, there will be a growing body of middle-class programmers powering the new computer-centric economy. The average salary for web developers will surely decrease over time. That said, I suspect that the number of jobs for “programmers” in general will only continue to grow. As worker supply begins to meet demand, hopefully we will see a healthy boom in a variety of middle-class programming jobs. There will also continue to be a top-professional salary available for those programmers who are redefining what is possible. Regardless of which cohort of programmers you’re in, a career in technology means continuing your education throughout your life. And if you want to be in that first cohort — the programmers redefining what is possible — you may want to invest in learning how to create the machines, rather than simply operate them.
Blaise Aguera y Arcas
8.7K
15
https://medium.com/@blaisea/do-algorithms-reveal-sexual-orientation-or-just-expose-our-stereotypes-d998fafdf477?source=tag_archive---------4----------------
Do algorithms reveal sexual orientation or just expose our stereotypes?
by Blaise Agüera y Arcas, Alexander Todorov and Margaret Mitchell. A study claiming that artificial intelligence can infer sexual orientation from facial images caused a media uproar in the fall of 2017. The Economist featured this work on the cover of their September 9th magazine; on the other hand, two major LGBTQ organizations, The Human Rights Campaign and GLAAD, immediately labeled it “junk science”. Michal Kosinski, who co-authored the study with fellow researcher Yilun Wang, initially expressed surprise, calling the critiques “knee-jerk” reactions. However, he then proceeded to make even bolder claims: that such AI algorithms will soon be able to measure the intelligence, political orientation, and criminal inclinations of people from their facial images alone. Kosinski’s controversial claims are nothing new. Last year, two computer scientists from China posted a non-peer-reviewed paper online in which they argued that their AI algorithm correctly categorizes “criminals” with nearly 90% accuracy from a government ID photo alone. Technology startups had also begun to crop up, claiming that they could profile people’s character from their facial images. These developments had prompted the three of us to collaborate earlier in the year on a Medium essay, Physiognomy’s New Clothes, to confront claims that AI face recognition reveals deep character traits. We described how the junk science of physiognomy has roots going back into antiquity, with practitioners in every era resurrecting beliefs based on prejudice using the new methodology of the age. In the 19th century this included anthropology and psychology; in the 20th, genetics and statistical analysis; and in the 21st, artificial intelligence. In late 2016, the paper motivating our physiognomy essay seemed well outside the mainstream in tech and academia, but as in other areas of discourse, what recently felt like a fringe position must now be addressed head on. Kosinski is a faculty member of Stanford’s Graduate School of Business, and this new study has been accepted for publication in the respected Journal of Personality and Social Psychology. Much of the ensuing scrutiny has focused on ethics, implicitly assuming that the science is valid. We will focus on the science. The authors trained and tested their “sexual orientation detector” using 35,326 images from public profiles on a US dating website. Composite images of the lesbian, gay, and straight men and women in the sample reveal a great deal about the information available to the algorithm. Clearly there are differences between these four composite faces. Wang and Kosinski assert that the key differences are in physiognomy, meaning that a sexual orientation tends to go along with a characteristic facial structure. However, we can immediately see that some of these differences are more superficial. For example, the “average” straight woman appears to wear eyeshadow, while the “average” lesbian does not. Glasses are clearly visible on the gay man, and to a lesser extent on the lesbian, while they seem absent in the heterosexual composites. Might it be the case that the algorithm’s ability to detect orientation has little to do with facial structure, but is due rather to patterns in grooming, presentation, and lifestyle?
We conducted a survey of 8,000 Americans using Amazon’s Mechanical Turk crowdsourcing platform to see if we could independently confirm these patterns, asking 77 yes/no questions such as “Do you wear eyeshadow?”, “Do you wear glasses?”, and “Do you have a beard?”, as well as questions about gender and sexual orientation. The results show that lesbians indeed use eyeshadow much less than straight women do, that gay men and women both wear glasses more, and that young opposite-sex-attracted men are considerably more likely to have prominent facial hair than their gay or same-sex-attracted peers. Breaking down the answers by the age of the respondent can provide a richer and clearer view of the data than any single statistic. In the following figures, we show the proportion of women who answer “yes” to “Do you ever use makeup?” (top) and “Do you wear eyeshadow?” (bottom), averaged over 6-year age intervals. The blue curves represent strictly opposite-sex attracted women (a nearly identical set to those who answered “yes” to “Are you heterosexual or straight?”); the cyan curve represents women who answer “yes” to either or both of “Are you sexually attracted to women?” and “Are you romantically attracted to women?”; and the red curve represents women who answer “yes” to “Are you homosexual, gay or lesbian?”. [1] The shaded regions around each curve show 68% confidence intervals. [2] The patterns revealed here are intuitive; it won’t be breaking news to most that straight women tend to wear more makeup and eyeshadow than same-sex attracted and (even more so) lesbian-identifying women. On the other hand, these curves also show us how often these stereotypes are violated. That same-sex attracted men of most ages wear glasses significantly more than exclusively opposite-sex attracted men do might be a bit less obvious, but this trend is equally clear. [3] A proponent of physiognomy might be tempted to guess that this is somehow related to differences in visual acuity between these populations of men. However, asking the question “Do you like how you look in glasses?” reveals that this is likely more of a stylistic choice. Same-sex attracted women also report wearing glasses more, as well as liking how they look in glasses more, across a range of ages. One can also see how opposite-sex attracted women under the age of 40 wear contact lenses significantly more than same-sex attracted women, despite reporting that they have a vision defect at roughly the same rate, further illustrating how the difference is driven by an aesthetic preference. [4] Similar analysis shows that young same-sex attracted men are much less likely to have hairy faces than opposite-sex attracted men (“serious facial hair” in our plots is defined as answering “yes” to having a goatee, beard, or moustache, but “no” to stubble). Overall, opposite-sex attracted men in our sample are 35% more likely to have serious facial hair than same-sex attracted men, and for men under the age of 31 (who are overrepresented on dating websites), this rises to 75%. Wang and Kosinski speculate in their paper that the faintness of the beard and moustache in their gay male composite might be connected with prenatal underexposure to androgens (male hormones), resulting in a feminizing effect, hence sparser facial hair.
The fact that we see a cohort of same-sex attracted men in their 40s who have just as much facial hair as opposite-sex attracted men suggests a different story, in which fashion trends and cultural norms play the dominant role in choices about facial hair among men, not differing exposure to hormones early in development. The authors of the paper additionally note that the heterosexual male composite appears to have darker skin than the other three composites. Our survey confirms that opposite-sex attracted men consistently self-report having a tan face (“Yes” to “Is your face tan?”) slightly more often than same-sex attracted men: Once again Wang and Kosinski reach for a hormonal explanation, writing: “While the brightness of the facial image might be driven by many factors, previous research found that testosterone stimulates melanocyte structure and function leading to a darker skin”. However, a simpler answer is suggested by the responses to the question “Do you work outdoors?”: Overall, opposite-sex attracted men are 29% more likely to work outdoors, and among men under 31, this rises to 39%. Previous research has found that increased exposure to sunlight leads to darker skin! [5] None of these results prove that there is no physiological basis for sexual orientation; in fact ample evidence shows us that orientation runs much deeper than a choice or a “lifestyle”. In a critique aimed in part at fraudulent “conversion therapy” programs, United States Surgeon General David Satcher wrote in a 2001 report, “Sexual orientation is usually determined by adolescence, if not earlier [...], and there is no valid scientific evidence that sexual orientation can be changed”. It follows that if we dig deeply enough into human physiology and neuroscience we will eventually find reliable correlates and maybe even the origins of sexual orientation. In our survey we also find some evidence of outwardly visible correlates of orientation that are not cultural: perhaps most strikingly, very tall women are overrepresented among lesbian-identifying respondents. [6] However, while this is interesting, it’s very far from a good predictor of women’s sexual orientation. Makeup and eyeshadow do much better. The way Wang and Kosinski measure the efficacy of their “AI gaydar” is equivalent to choosing a straight and a gay or lesbian face image, both from data “held out” during the training process, and asking how often the algorithm correctly guesses which is which. 50% performance would be no better than random chance. For women, guessing that the taller of the two is the lesbian achieves only 51% accuracy — barely above random chance. This is because, despite the statistically meaningful overrepresentation of tall women among the lesbian population, the great majority of lesbians are not unusually tall. By contrast, the performance measures in the paper, 81% for gay men and 71% for lesbian women, seem impressive. [7] Consider, however, that we can achieve comparable results with trivial models based only on a handful of yes/no survey questions about presentation. For example, for pairs of women, one of whom is lesbian, the following not-exactly-superhuman algorithm is on average 63% accurate: if neither or both women wear eyeshadow, flip a coin; otherwise guess that the one who wears eyeshadow is straight, and the other lesbian. 
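To make the pairwise comparison concrete, here is a minimal Python sketch of the eyeshadow heuristic described above. The survey responses are not reproduced in this post, so the base rates in the toy data are invented for illustration and all function and variable names are hypothetical; with rates in this general range the heuristic lands in the low 60s, in the same ballpark as the 63% quoted above, but the exact number depends on the real survey proportions.

```python
import random

def eyeshadow_guess(woman_a, woman_b):
    """Given one straight and one lesbian respondent in random order, guess which
    is the lesbian using only the eyeshadow answer. Returns True if correct."""
    a_shadow, a_lesbian = woman_a
    b_shadow, b_lesbian = woman_b
    if a_shadow == b_shadow:
        guess_a_is_lesbian = random.random() < 0.5   # neither or both wear eyeshadow: flip a coin
    else:
        guess_a_is_lesbian = not a_shadow            # guess the eyeshadow wearer is straight
    return guess_a_is_lesbian == a_lesbian

def pairwise_accuracy(straight_women, lesbian_women, n_pairs=100_000):
    """Estimate accuracy over random straight/lesbian pairs, the metric used in the paper."""
    correct = 0
    for _ in range(n_pairs):
        pair = [random.choice(straight_women), random.choice(lesbian_women)]
        random.shuffle(pair)
        correct += eyeshadow_guess(*pair)
    return correct / n_pairs

# Invented base rates, purely illustrative: suppose 55% of straight respondents and
# 27% of lesbian respondents answer "yes" to the eyeshadow question.
straight = [(random.random() < 0.55, False) for _ in range(3000)]
lesbian = [(random.random() < 0.27, True) for _ in range(250)]
print(f"estimated pairwise accuracy: {pairwise_accuracy(straight, lesbian):.2f}")
```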
Adding six more yes/no questions about presentation (“Do you ever use makeup?”, “Do you have long hair?”, “Do you have short hair?”, “Do you ever use colored lipstick?”, “Do you like how you look in glasses?”, and “Do you work outdoors?”) as additional signals raises the performance to 70%. [8] Given how many more details about presentation are available in a face image, 71% performance no longer seems so impressive. Several studies, including a recent one in the Journal of Sex Research, have shown that human judges’ “gaydar” is no more reliable than a coin flip when the judgement is based on pictures taken under well-controlled conditions (head pose, lighting, glasses, makeup, etc.). It’s better than chance if these variables are not controlled for, because a person’s presentation — especially if that person is out — involves social signaling. We signal our orientation and many other kinds of status, presumably in order to attract the kind of attention we want and to fit in with people like us. [9] Wang and Kosinski argue against this interpretation on the grounds that their algorithm works on Facebook selfies of openly gay men as well as dating website selfies. The issue, however, is not whether the images come from a dating website or Facebook, but whether they are self-posted or taken under standardized conditions. Most people present themselves in ways that have been calibrated over many years of media consumption, observing others, looking in the mirror, and gauging social reactions. In one of the earliest “gaydar” studies using social media, participants could categorize gay men with about 58% accuracy; but when the researchers used Facebook images of gay and heterosexual men posted by their friends (still far from a perfect control), the accuracy dropped to 52%. If subtle biases in image quality, expression, and grooming can be picked up on by humans, these biases can also be detected by an AI algorithm. While Wang and Kosinski acknowledge grooming and style, they believe that the chief differences between their composite images relate to face shape, arguing that gay men’s faces are more “feminine” (narrower jaws, longer noses, larger foreheads) while lesbian faces are more “masculine” (larger jaws, shorter noses, smaller foreheads). As with less facial hair on gay men and darker skin on straight men, they suggest that the mechanism is gender-atypical hormonal exposure during development. This echoes a widely discredited 19th century model of homosexuality, “sexual inversion”. More likely, heterosexual men tend to take selfies from slightly below, which will have the apparent effect of enlarging the chin, shortening the nose, shrinking the forehead, and attenuating the smile (see our selfies below). This view emphasizes dominance — or, perhaps more benignly, an expectation that the viewer will be shorter. On the other hand, as a wedding photographer notes in her blog, “when you shoot from above, your eyes look bigger, which is generally attractive — especially for women.” This may be a heteronormative assessment. When a face is photographed from below, the nostrils are prominent, while higher shooting angles de-emphasize and eventually conceal them altogether. Looking again at the composite images, we can see that the heterosexual male face has more pronounced dark spots corresponding to the nostrils than the gay male, while the opposite is true for the female faces. 
This is consistent with a pattern of heterosexual men on average shooting from below, heterosexual women from above, as the wedding photographer suggests, and gay men and lesbian women from directly in front. A similar pattern is evident in the eyebrows: shooting from above makes them look more V-shaped, but their apparent shape becomes flatter, and eventually caret-shaped (^) as the camera is lowered. Shooting from below also makes the outer corners of the eyes appear lower. In short, the changes in the average positions of facial landmarks are consistent with what we would expect to see from differing selfie angles. The ambiguity between shooting angle and the real physical sizes of facial features is hard to fully disentangle from a two-dimensional image, both for a human viewer and for an algorithm. Although the authors are using face recognition technology designed to try to cancel out all effects of head pose, lighting, grooming, and other variables not intrinsic to the face, we can confirm that this doesn’t work perfectly; that’s why multiple distinct images of a person help when grouping photos by subject in Google Photos, and why a person may initially appear in more than one group. Tom White, a researcher at Victoria University in New Zealand, has experimented with the same facial recognition engine Kosinski and Wang use (VGG Face), and has found that its output varies systematically based on variables like smiling and head pose. When he trains a classifier based on VGG Face’s output to distinguish a happy expression from a neutral one, it gets the answer right 92% of the time — which is significant, given that the heterosexual female composite has a much more pronounced smile. Changes in head pose might be even more reliably detectable; for 576 test images, a classifier is able to pick out the ones facing to the right with 100% accuracy. In summary, we have shown how the obvious differences between lesbian or gay and straight faces in selfies relate to grooming, presentation, and lifestyle — that is, differences in culture, not in facial structure. These differences include makeup, eyeshadow, glasses, facial hair, suntans from working outdoors, and the angle from which selfies are taken. We’ve demonstrated that just a handful of yes/no questions about these variables can do nearly as good a job at guessing orientation as supposedly sophisticated facial recognition AI. Further, the current generation of facial recognition remains sensitive to head pose and facial expression. Therefore — at least at this point — it’s hard to credit the notion that this AI is in some way superhuman at “outing” us based on subtle but unalterable details of our facial structure. This doesn’t negate the privacy concerns the authors and various commentators have raised, but it emphasizes that such concerns relate less to AI per se than to mass surveillance, which is troubling regardless of the technologies used (even when, as in the days of the Stasi in East Germany, these were nothing but paper files and audiotapes). Like computers or the internal combustion engine, AI is a general-purpose technology that can be used to automate a great many tasks, including ones that should not be undertaken in the first place. We are hopeful about the confluence of new, powerful AI technologies with social science, but not because we believe in reviving the 19th century research program of inferring people’s inner character from their outer appearance. Rather, we believe AI is an essential tool for understanding patterns in human culture and behavior. It can expose stereotypes inherent in everyday language.
It can reveal uncomfortable truths, as in Google’s work with the Geena Davis Institute, where our face gender classifier established that men are seen and heard nearly twice as often as women in Hollywood movies (yet female-led films outperform others at the box office!). Making social progress and holding ourselves to account is more difficult without such hard evidence, even when it only confirms our suspicions. Two of us (Margaret Mitchell and Blaise Agüera y Arcas) are research scientists specializing in machine learning and AI at Google; Agüera y Arcas leads a team that includes deep learning applied to face recognition, and powers face grouping in Google Photos. Alex Todorov is a professor in the Psychology Department at Princeton, where he directs the social perception lab. He is the author of Face Value: The Irresistible Influence of First Impressions. [1] This wording is based on several large national surveys, which we were able to use to sanity-check our numbers. About 6% of respondents identified as “homosexual, gay or lesbian” and 85% as “heterosexual”. About 4% (of all genders) were exclusively same-sex attracted. Of the men, 10% were either sexually or romantically same-sex attracted, and of the women, 20%. Just under 1% of respondents were trans, and about 2% identified with both or neither of the pronouns “she” and “he”. These numbers are broadly consistent with other surveys, especially when considered as a function of age. The Mechanical Turk population skews somewhat younger than the overall population of the US, and consistent with other studies, our data show that younger people are far more likely to identify non-heteronormatively. [2] These are wider for same-sex attracted and lesbian women because they are minority populations, resulting in a larger sampling error. The same holds for older people in our sample. [3] For the remainder of the plots we stick to opposite-sex attracted and same-sex attracted, as the counts are higher and the error bars therefore smaller; these categories are also somewhat less culturally freighted, since they rely on questions about attraction rather than identity. As with eyeshadow and makeup, the effects are similar and often even larger when comparing heterosexual-identifying with lesbian- or gay-identifying people. [4] Although we didn’t test this explicitly, slightly different rates of laser correction surgery seem a likely cause of the small but growing disparity between opposite-sex attracted and same-sex attracted women who answer “yes” to the vision defect questions as they age. [5] This finding may prompt the further question, “Why do more opposite-sex attracted men work outdoors?” This is not addressed by any of our survey questions, but hopefully the other evidence presented here will discourage an essentialist assumption such as “straight men are just more outdoorsy” without the evidence of a controlled study that can support the leap from correlation to cause. Such explanations are a form of logical fallacy sometimes called a just-so story: “an unverifiable narrative explanation for a cultural practice”. [6] Of the 253 lesbian-identified women in the sample, 5, or 2%, were over six feet, and 25, or 10%, were over 5’9”. Out of 3,333 heterosexual women (women who answered “yes” to “Are you heterosexual or straight?”), only 16, or 0.5%, were over six feet, and 152, or 5%, were over 5’9”. [7] They note that these figures rise to 91% for men and 83% for women if 5 images are considered. 
[8] These results are based on the simplest possible machine learning technique, a linear classifier. The classifier is trained on a randomly chosen 70% of the data, with the remaining 30% of the data held out for testing. Over 500 repetitions of this procedure, the accuracy is 69.53% ± 2.98%. With the same number of repetitions and holdout, basing the decision on height alone gives an accuracy of 51.08% ± 3.27%, and basing it on eyeshadow alone yields 62.96% ± 2.39%. (A rough sketch of this procedure appears after these notes.) [9] A longstanding body of work, e.g. Goffman’s The Presentation of Self in Everyday Life (1959) and Jones and Pittman’s Toward a General Theory of Strategic Self-Presentation (1982), delves more deeply into why we present ourselves the way we do, both for instrumental reasons (status, power, attraction) and because our presentation informs and is informed by how we conceive of our social selves.
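As a rough illustration of the procedure in note [8], the sketch below trains a linear classifier (scikit-learn’s LogisticRegression standing in for “the simplest possible machine learning technique”) on synthetic yes/no answers with invented proportions, repeats the 70/30 split 500 times, and reports ROC AUC, which for a scoring model coincides with the pairwise accuracy measure used throughout the article. It illustrates the method only; the data and the printed number are not the survey’s.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the survey: seven yes/no presentation questions per woman.
# The "yes" probabilities below are invented so the code runs; they are NOT the
# survey's real proportions, so the result will not match the article's 70%.
n_straight, n_lesbian = 3000, 250
p_yes_straight = np.array([0.55, 0.60, 0.40, 0.15, 0.45, 0.30, 0.05])
p_yes_lesbian = np.array([0.25, 0.30, 0.25, 0.40, 0.25, 0.50, 0.10])

X = np.vstack([
    rng.random((n_straight, 7)) < p_yes_straight,
    rng.random((n_lesbian, 7)) < p_yes_lesbian,
]).astype(float)
y = np.concatenate([np.zeros(n_straight), np.ones(n_lesbian)])  # 1 = lesbian

# Note [8]'s procedure: train a linear classifier on a random 70% of the data,
# score the held-out 30%, and repeat 500 times. ROC AUC equals the probability
# that a random lesbian/straight pair is ranked correctly, i.e. pairwise accuracy.
aucs = []
for seed in range(500):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

print(f"pairwise accuracy: {np.mean(aucs):.2%} ± {np.std(aucs):.2%}")
```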
Arvind N
9.5K
8
https://towardsdatascience.com/thoughts-after-taking-the-deeplearning-ai-courses-8568f132153?source=tag_archive---------5----------------
Thoughts after taking the Deeplearning.ai courses – Towards Data Science
[Update — Feb 2nd 2018: When this blog post was written, only 3 courses had been released. All 5 courses in this specialization are now out. I will have a follow-up blog post soon.] Between a full-time job and a toddler at home, I spend my spare time learning about the ideas in cognitive science & AI. Once in a while a great paper/video/course comes out and you’re instantly hooked. Andrew Ng’s new deeplearning.ai course is like that Shane Carruth or Rajnikanth movie that one yearns for! Naturally, as soon as the course was released on Coursera, I registered and spent the past 4 evenings binge-watching the lectures, working through quizzes and programming assignments. DL practitioners and ML engineers typically spend most days working at an abstract Keras or TensorFlow level. But it’s nice to take a break once in a while to get down to the nuts and bolts of learning algorithms and actually do back-propagation by hand. It is both fun and incredibly useful! Andrew Ng’s new adventure is a bottom-up approach to teaching neural networks — powerful non-linear learning algorithms — at a beginner-to-intermediate level. In classic Ng style, the course is delivered through a carefully chosen curriculum, neatly timed videos, and precisely positioned information nuggets. Andrew picks up from where his classic ML course left off and introduces the idea of neural networks using a single neuron (logistic regression), slowly adding complexity — more neurons and layers. By the end of the 4 weeks (course 1), a student is introduced to all the core ideas required to build a dense neural network, such as cost/loss functions, learning iteratively using gradient descent, and vectorized parallel Python (numpy) implementations. Andrew patiently explains the requisite math and programming concepts in a carefully planned order and at a well-regulated pace suitable for learners who could be rusty in math/coding. Lectures are delivered using presentation slides on which Andrew writes using digital pens. It felt like an effective way to get the listener to focus. I felt comfortable watching videos at 1.25x or 1.5x speed. Quizzes are placed at the end of each lecture section and are in the multiple-choice question format. If you watch the videos once, you should be able to quickly answer all the quiz questions. You can attempt quizzes multiple times and the system is designed to keep your highest score. Programming assignments are done via Jupyter notebooks — powerful browser-based applications. Assignments have a nice guided sequential structure and you are not required to write more than 2–3 lines of code in each section. If you understand concepts like vectorization intuitively, you can complete most programming sections with just one line of code! After the assignment is coded, it takes one button click to submit your code to the automated grading system, which returns your score in a few minutes. Some assignments have time restrictions — say, three attempts in 8 hours. Jupyter notebooks are well designed and work without any issues. Instructions are precise and it feels like a polished product. Who is this course for? Anyone interested in understanding what neural networks are, how they work, how to build them, and the tools available to bring your ideas to life. If your math is rusty, there is no need to worry — Andrew explains all the required calculus and provides derivatives at every occasion so that you can focus on building the network and concentrate on implementing your ideas in code.
If your programming is rusty, there is a nice coding assignment to teach you numpy. But I recommend learning Python first on Codecademy. Let me explain this with an analogy: Assume you are trying to learn how to drive a car. Jeremy’s fast.ai course puts you in the driver’s seat from the get-go. He teaches you to move the steering wheel, press the brake and accelerator, etc. Then he slowly explains more details about how the car works — why rotating the wheel makes the car turn, why pressing the brake pedal makes you slow down and stop, and so on. He keeps getting deeper into the inner workings of the car, and by the end of the course you know how the internal combustion engine works, how the fuel tank is designed, and so on. The goal of the course is to get you driving. You can choose to stop at any point after you can drive reasonably well — there is no need to learn how to build/repair the car. Andrew’s DL course does all of this, but in the complete opposite order. He teaches you about the internal combustion engine first! He keeps adding layers of abstraction, and by the end of the course you are driving like an F1 racer! The fast.ai course mainly teaches you the art of driving, while Andrew’s course primarily teaches you the engineering behind the car. If you have not done any machine learning before this, don’t take this course first. The best starting point is Andrew’s original ML course on Coursera. After you complete that course, please try to complete part 1 of Jeremy Howard’s excellent deep learning course. Jeremy teaches deep learning top-down, which is essential for absolute beginners. Once you are comfortable creating deep neural networks, it makes sense to take this new deeplearning.ai course specialization, which fills in any gaps in your understanding of the underlying details and concepts. 2. Andrew stresses the engineering aspects of deep learning and provides plenty of practical tips to save time and money — the third course in the DL specialization felt incredibly useful for my role as an architect leading engineering teams. 3. Jargon is handled well. Andrew explains that an empirical process = trial & error — he is brutally honest about the reality of designing and training deep nets. At some point I felt he might as well have just called deep learning glorified curve-fitting. 4. Squashes all hype around DL and AI — Andrew makes restrained, careful comments about the proliferation of AI hype in the mainstream media, and by the end of the course it is pretty clear that DL is nothing like the Terminator. 5. Wonderful boilerplate code that just works out of the box! 6. Excellent course structure. 7. Nice, consistent, and useful notation. Andrew strives to establish a fresh nomenclature for neural nets, and I feel he could be quite successful in this endeavor. 8. Style of teaching that is unique to Andrew and carries over from ML — I could feel the same excitement I felt in 2013 when I took his original ML course. 9. The interviews with deep learning heroes are refreshing — it is motivating and fun to hear personal stories and anecdotes. I wish that he’d said ‘concretely’ more often! 2. Good tools are important and will help you accelerate your learning pace. I bought a digital pen after seeing Andrew teach with one. It helped me work more efficiently. 3. There is a psychological reason why I recommend the fast.ai course before this one. Once you find your passion, you can learn uninhibited. 4. You just get that dopamine rush each time you score full points: 5.
Don’t be scared by DL jargon (hyperparameters = settings, architecture/topology = style, etc.) or the math symbols. If you take a leap of faith and pay attention to the lectures, Andrew shows why the symbols and notation are actually quite useful. They will soon become your tools of choice and you will wield them with style! Thanks for reading and best wishes! Update: Thanks for the overwhelmingly positive response! Many people are asking me to explain gradient descent and the underlying differential calculus. I hope this helps!
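For readers who asked about gradient descent, here is a minimal numpy sketch of the kind of model course 1 builds up from: a single “neuron” (logistic regression) with a fully vectorized forward pass, the cross-entropy gradients, and plain gradient descent updates. The variable names and toy data are illustrative and are not taken from the course assignments.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, iters=2000):
    """X: (n_features, m) matrix of m examples; y: (1, m) labels in {0, 1}.
    Vectorized forward pass, gradients, and gradient descent updates."""
    n, m = X.shape
    w = np.zeros((n, 1))
    b = 0.0
    for _ in range(iters):
        a = sigmoid(w.T @ X + b)   # forward pass for all m examples at once, shape (1, m)
        dz = a - y                 # derivative of the cross-entropy cost w.r.t. z
        dw = (X @ dz.T) / m        # gradient for the weights, shape (n, 1)
        db = dz.mean()             # gradient for the bias
        w -= lr * dw               # gradient descent step
        b -= lr * db
    return w, b

# Toy data: 2 features, 200 examples, labeled by whether the features sum to more than 1.
rng = np.random.default_rng(42)
X = rng.random((2, 200))
y = (X.sum(axis=0, keepdims=True) > 1.0).astype(float)
w, b = train_logistic_regression(X, y)
predictions = (sigmoid(w.T @ X + b) > 0.5).astype(float)
print("training accuracy:", (predictions == y).mean())
```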
Berit Anderson
1.6K
20
https://medium.com/join-scout/the-rise-of-the-weaponized-ai-propaganda-machine-86dac61668b?source=tag_archive---------6----------------
The Rise of the Weaponized AI Propaganda Machine – Scout: Science Fiction + Journalism – Medium
By Berit Anderson and Brett Horvath. This piece was originally published at Scout.ai. “This is a propaganda machine. It’s targeting people individually to recruit them to an idea. It’s a level of social engineering that I’ve never seen before. They’re capturing people and then keeping them on an emotional leash and never letting them go,” said Professor Jonathan Albright. Albright, an assistant professor and data scientist at Elon University, started digging into fake news sites after Donald Trump was elected president. Through extensive research and interviews with Albright and other key experts in the field, including Samuel Woolley, Head of Research at Oxford University’s Computational Propaganda Project, and Martin Moore, Director of the Centre for the Study of Media, Communication and Power at King’s College, it became clear to Scout that this phenomenon was about much more than just a few fake news stories. It was a piece of a much bigger and darker puzzle — a Weaponized AI Propaganda Machine being used to manipulate our opinions and behavior to advance specific political agendas. By leveraging automated emotional manipulation alongside swarms of bots, Facebook dark posts, A/B testing, and fake news networks, a company called Cambridge Analytica has activated an invisible machine that preys on the personalities of individual voters to create large shifts in public opinion. Many of these technologies have been used individually to some effect before, but together they make up a nearly impenetrable voter manipulation machine that is quickly becoming the new deciding factor in elections around the world. Most recently, Analytica helped elect U.S. President Donald Trump, secured a win for the Brexit Leave campaign, and led Ted Cruz’s 2016 campaign surge, shepherding him from the back of the GOP primary pack to the front. The company is owned and controlled by conservative and alt-right interests that are also deeply entwined in the Trump administration. The Mercer family is both a major owner of Cambridge Analytica and one of Trump’s biggest donors. Steve Bannon, in addition to acting as Trump’s Chief Strategist and a member of the White House Security Council, is a Cambridge Analytica board member. Until recently, Analytica’s CTO was the acting CTO at the Republican National Convention. Presumably because of its alliances, Analytica has declined to work on any Democratic campaigns — at least in the U.S. It is, however, in final talks to help Trump manage public opinion around his presidential policies and to expand sales for the Trump Organization. Cambridge Analytica is now expanding aggressively into U.S. commercial markets and is also meeting with right-wing parties and governments in Europe, Asia, and Latin America. Cambridge Analytica isn’t the only company that could pull this off — but it is the most powerful right now. Understanding Cambridge Analytica and the bigger AI Propaganda Machine is essential for anyone who wants to understand modern political power, build a movement, or keep from being manipulated. The Weaponized AI Propaganda Machine it represents has become the new prerequisite for political success in a world of polarization, isolation, trolls, and dark posts.
There’s been a wave of reporting on Cambridge Analytica itself and solid coverage of individual aspects of the machine — bots, fake news, microtargeting — but none so far (that we have seen) that portrays the intense collective power of these technologies or the frightening level of influence they’re likely to have on future elections. In the past, political messaging and propaganda battles were arms races to weaponize narrative through new mediums — waged in print, on the radio, and on TV. This new wave has brought the world something exponentially more insidious — personalized, adaptive, and ultimately addictive propaganda. Silicon Valley spent the last ten years building platforms whose natural end state is digital addiction. In 2016, Trump and his allies hijacked them. We have entered a new political age. At Scout, we believe that the future of constructive, civic dialogue and free and open elections depends on our ability to understand and anticipate it. Welcome to the age of Weaponized AI Propaganda. Any company can aggregate and purchase big data, but Cambridge Analytica has developed a model to translate that data into a personality profile used to predict, and then ultimately change, your behavior. That model itself was developed by paying a Cambridge psychology professor to copy the groundbreaking original research of his colleague through questionable methods that violated Amazon’s Terms of Service. Based on its origins, Cambridge Analytica appears ready to capture and buy whatever data it needs to accomplish its ends. In 2013, Dr. Michal Kosinski, then a PhD candidate at the University of Cambridge’s Psychometrics Centre, released a groundbreaking study announcing a new model he and his colleagues had spent years developing. By correlating subjects’ Facebook Likes with their OCEAN scores — the results of a standard personality questionnaire used by psychologists — the team was able to identify an individual’s gender, sexuality, political beliefs, and personality traits based only on what they had liked on Facebook. According to Zurich’s Das Magazine, which profiled Kosinski in late 2016, “with a mere ten ‘likes’ as input his model could appraise a person’s character better than an average coworker. With seventy, it could ‘know’ a subject better than a friend; with 150 likes, better than their parents. With 300 likes, Kosinski’s machine could predict a subject’s behavior better than their partner. With even more likes it could exceed what a person thinks they know about themselves.” Not long afterward, Kosinski was approached by Aleksandr Kogan, a fellow Cambridge professor in the psychology department, about licensing his model to SCL Elections, a company that claimed its specialty lay in manipulating elections. The offer would have meant a significant payout for Kosinski’s lab. Still, he declined, worried about the firm’s intentions and the downstream effects it could have. It had taken Kosinski and his colleagues years to develop that model, but with his methods and findings now out in the world, there was little to stop SCL Elections from replicating them. It would seem they did just that. According to a Guardian investigation, in early 2014, just a few months after Kosinski declined their offer, SCL partnered with Kogan instead. As a part of their relationship, Kogan paid Amazon Mechanical Turk workers $1 each to take the OCEAN quiz. There was just one catch: To take the quiz, users were required to provide access to all of their Facebook data.
They were told the data would be used for research. The job was reported to Amazon for violating the platform’s Terms of Service. What many of the Turks likely didn’t realize: According to documents reviewed by The Guardian, “Kogan also captured the same data for each person’s unwitting friends.” The data gathered from Kogan’s study went on to birth Cambridge Analytica, which spun out of SCL Elections soon after. The name, metaphorically at least, was a nod to Kogan’s work — and a dig at Kosinski. But that early trove of user data was just the beginning — just the seed Analytica needed to build its own model for analyzing users’ personalities without having to rely on the lengthy OCEAN test. After a successful proof of concept, and backed by wealthy conservative investors, Analytica went on a data shopping spree for the ages, snapping up data about your shopping habits, land ownership, where you attend church, what stores you visit, what magazines you subscribe to — all of which is for sale from a range of data brokers and third-party organizations selling information about you. Analytica aggregated this data with voter rolls, publicly available online data — including Facebook likes — and put it all into its predictive personality model. Alexander Nix, Cambridge Analytica’s CEO, likes to boast that Analytica’s personality model has allowed it to create a personality profile for every adult in the U.S. — 220 million of them, each with up to 5,000 data points. And those profiles are being continually updated and improved the more data you spew out online. Albright also believes that your Facebook and Twitter posts are being collected and integrated back into Cambridge Analytica’s personality profiles. “Twitter and also Facebook are being used to collect a lot of responsive data because people are impassioned, they reply, they retweet, but they also include basically their entire argument and their entire background on this topic,” he explains. Collecting massive quantities of data about voters’ personalities might seem unsettling, but it’s actually not what sets Cambridge Analytica apart. For Analytica and other companies like them, it’s what they do with that data that really matters. “Your behavior is driven by your personality and actually the more you can understand about people’s personality as psychological drivers, the more you can actually start to really tap in to why and how they make their decisions,” Nix explained to Bloomberg’s Sasha Issenberg. “We call this behavioral microtargeting and this is really our secret sauce, if you like. This is what we’re bringing to America.” Using those dossiers, or psychographic profiles as Analytica calls them, Cambridge Analytica not only identifies which voters are most likely to swing for their causes or candidates; they use that information to predict and then change their future behavior. As Vice reported recently, Kosinski and a colleague are now working on a new set of research, yet to be published, that addresses the effectiveness of these methods. Their early findings: Using personality targeting, Facebook posts can attract up to 63 percent more clicks and 1,400 more conversions. Scout reached out to Cambridge Analytica with a detailed list of questions about their communications tactics, but the company declined to answer any questions or to comment on any of their tactics.
But researchers across the technology and media ecosystem who have been following Cambridge Analytica’s political messaging activities have unearthed an expansive, adaptive online network that automates the manipulation of voters at a scale never before seen in political messaging. “They [the Trump campaign] were using 40–50,000 different variants of ad every day that were continuously measuring responses and then adapting and evolving based on that response,” Martin Moore, director of Kings College’s Centre for the Study of Media, Communication and Power, told The Guardian in early December. “It’s all done completely opaquely and they can spend as much money as they like on particular locations because you can focus on a five-mile radius.” Where traditional pollsters might ask a person outright how they plan to vote, Analytica relies not on what they say but what they do, tracking their online movements and interests and serving up multivariate ads designed to change a person’s behavior by preying on individual personality traits. “For example,” Nix wrote in an op-ed last year about Analytica’s work on the Cruz campaign, ”our issues model identified that there was a small pocket of voters in Iowa who felt strongly that citizens should be required by law to show photo ID at polling stations.” “Leveraging our other data models, we were able to advise the campaign on how to approach this issue with specific individuals based on their unique profiles in order to use this relatively niche issue as a political pressure point to motivate them to go out and vote for Cruz. For people in the ‘Temperamental’ personality group, who tend to dislike commitment, messaging on the issue should take the line that showing your ID to vote is ‘as easy as buying a case of beer’. Whereas the right message for people in the ‘Stoic Traditionalist’ group, who have strongly held conventional views, is that showing your ID in order to vote is simply part of the privilege of living in a democracy.” For Analytica, the feedback is instant and the response automated: Did this specific swing voter in Pennsylvania click on the ad attacking Clinton’s negligence over her email server? Yes? Serve her more content that emphasizes failures of personal responsibility. No? The automated script will try a different headline, perhaps one that plays on a different personality trait — say the voter’s tendency to be agreeable toward authority figures. Perhaps: “Top Intelligence Officials Agree: Clinton’s Emails Jeopardized National Security.” Much of this is done through Facebook dark posts, which are only visible to those being targeted. Based on users’ response to these posts, Cambridge Analytica was able to identify which of Trump’s messages were resonating and where. That information was also used to shape Trump’s campaign travel schedule. If 73 percent of targeted voters in Kent County, Mich. clicked on one of three articles about bringing back jobs? Schedule a Trump rally in Grand Rapids that focuses on economic recovery. Political analysts in the Clinton campaign, who were basing their tactics on traditional polling methods, laughed when Trump scheduled campaign events in the so-called blue wall — a group of states that includes Michigan, Pennsylvania, and Wisconsin and has traditionally fallen to Democrats. But Cambridge Analytica saw they had an opening based on measured engagement with their Facebook posts. It was the small margins in Michigan, Pennsylvania and Wisconsin that won Trump the election. 
Dark posts were also used to depress voter turnout among key groups of Democratic voters. “In this election, dark posts were used to try to suppress the African-American vote,” wrote journalist and Open Society fellow McKenzie Funk in a New York Times editorial. “According to Bloomberg, the Trump campaign sent ads reminding certain selected black voters of Hillary Clinton’s infamous ‘super predator’ line. It targeted Miami’s Little Haiti neighborhood with messages about the Clinton Foundation’s troubles in Haiti after the 2010 earthquake.” Because dark posts are only visible to the targeted users, there’s no way for anyone outside of Analytica or the Trump campaign to track the content of these ads. In this case, there was no FEC oversight, no public scrutiny of Trump’s attack ads. Just the rapid-eye-movement of millions of individual users scanning their Facebook feeds. In the weeks leading up to a final vote, a campaign could launch a $10–100 million dark post campaign targeting just a few million voters in swing districts and no one would know. This may be where future ‘black-swan’ election upsets are born. “These companies,” Moore says, “have found a way of transgressing 150 years of legislation that we’ve developed to make elections fair and open.” Meanwhile, surprised by the results of the 2016 presidential race, Albright started looking into the ‘fake news problem’. As a part of his research, Albright scraped 306 fake news sites to determine how exactly they were all connected to each other and the mainstream news ecosystem. What he found was unprecedented — a network of 23,000 pages and 1.3 million hyperlinks. “The sites in the fake news and hyper-biased #MCM network,” Albright writes, “have a very small ‘node’ size — this means they are linking out heavily to mainstream media, social networks, and informational resources (most of which are in the ‘center’ of the network), but not many sites in their peer group are sending links back.” These sites aren’t owned or operated by any one individual entity, he says, but together they have been able to game Search Engine Optimization, increasing the visibility of fake and biased news anytime someone Googles an election-related term online — Trump, Clinton, Jews, Muslims, abortion, Obamacare. “This network,” Albright wrote in a post exploring his findings, “is triggered on-demand to spread false, hyper-biased, and politically-loaded information.” Even more shocking to him, though, was that this network of fake news creates a powerful infrastructure for companies like Cambridge Analytica to track voters and refine their personality targeting models: “I scraped the trackers on these sites and I was absolutely dumbfounded. Every time someone likes one of these posts on Facebook or visits one of these websites, the scripts are then following you around the web. And this enables data-mining and influencing companies like Cambridge Analytica to precisely target individuals, to follow them around the web, and to send them highly personalised political messages.” The web of fake and biased news that Albright uncovered created a propaganda wave that Cambridge Analytica could ride and then amplify. The more fake news that users engage with, the more addictive Analytica’s personality engagement algorithms can become. Voter 35423 clicked on a fake story about Hillary’s sex-trafficking ring? Let’s get her to engage with more stories about Hillary’s supposed history of murder and sex trafficking.
The synergy between fake-content networks, automated message testing, and personality profiling will rapidly spread to other digital mediums. Albright’s most-recent research focuses on an artificial intelligence that automatically creates YouTube videos about news and current events. The AI, which reacts to trending topics on Facebook and Twitter, pairs images and subtitles with a computer generated voiceover. It spooled out nearly 80,000 videos through 19 different channels in just a few days. Given its rapid development, the technology community needs to anticipate how AI propaganda will soon be used for emotional manipulation in mobile messaging, virtual reality, and augmented reality. If fake news created the scaffolding for this new automated political propaganda machine, bots, or fake social media profiles, have become its foot soldiers — an army of political robots used to control conversations on social media and silence and intimidate journalists and others who might undermine their messaging. Samuel Woolley, Director of Research at the University of Oxford’s Computational Propaganda Project and a fellow at Google’s Jigsaw project, has dedicated his career to studying the role of bots in online political organizing — who creates them, how they’re used, and to what end. Research by Woolley and his Oxford-based team in the lead-up to the 2016 election found that pro-Trump political messaging relied heavily on bots to spread fake news and discredit Hillary Clinton. By election day, Trump’s bots outnumbered hers, 5:1. “The use of automated accounts was deliberate and strategic throughout the election, most clearly with pro-Trump campaigners and programmers who carefully adjusted the timing of content production during the debates, strategically colonized pro-Clinton hashtags, and then disabled activities after Election Day,” the study by Woolley’s team reported. Woolley believes it’s likely that Cambridge Analytica was responsible for subcontracting the creation of those Trump bots, though he says he doesn’t have direct proof. Still, if anyone outside of the Trump campaign is qualified to speculate about who created those bots, it would be Woolley. Led by Dr. Philip Howard, the team’s Principal Investigator, Woolley and his colleagues have been tracking the use of bots in political organizing since 2010. That’s when Howard, buried deep in research about the role Twitter played in the Arab Spring, first noticed thousands of bots coopting hashtags used by protesters. Curious, he and his team began reaching out to hackers, botmakers, and political campaigns, getting to know them and trying to understand their work and motivations. Eventually, those creators would come to make up an informal network of nearly 100 informants that have kept Howard and his colleagues in the know about these bots over the last few years. Before long, Howard and his team were getting the heads up about bot propaganda campaigns from the creators themselves. As more and more major international political figures began using botnets as just another tool in their campaigns, Howard, Woolley and the rest of their team studied the action unfolding. 
The world these informants revealed is an international network of governments, consultancies (often with owners or top management just one degree away from official government actors), and individuals who build and maintain massive networks of bots to amplify the messages of political actors, spread messages counter to those of their opponents, and silence those whose views or ideas might threaten those same political actors. “The Chinese, Iranian, and Russian, governments employ their own social-media experts and pay small amounts of money to large numbers of people to generate pro-government messages,” Howard and his coauthors wrote in a 2015 research paper about the use of bots in the Venezuelan election. Depending on which of those three categories bot creators fall into — government, consultancy or individual — they’re just as likely to be motivated by political beliefs as they are the opportunity to auction off their networks of digital influence to the highest bidder. Not all bots are created equal. The average, run-of-the-mill Twitter bot is literally a robot — often programmed to retweet specific accounts to help popularize specific ideas or viewpoints. They also frequently respond automatically to Twitter users who use certain keywords or hashtags — often with pre-written slurs, insults or threats. High-end bots on the other hand are more analog, operated by real people. They assume fake identities with distinct personalities and their responses to other users online are specific, intended to change their opinions or those of their followers by attacking their viewpoints. They have online friends and followers. They’re also far less likely to be discovered — and their accounts deactivated — by Facebook or Twitter. Working on their own, Woolley estimates, an individual could build and maintain up to 400 of these boutique Twitter bots; on Facebook, which he says is more effective at identifying and shutting down fake accounts, an individual could manage 10–20. As a result, these high-quality botnets are often used for multiple political campaigns. During the Brexit referendum, the Oxford team watched as one network of bots, previously used to influence the conversation around the Israeli/Palestinian conflict, was reactivated to fight for the Leave campaign. Individual profiles were updated to reflect the new debate, their personal taglines changed to ally with their new allegiances — and away they went. Russia’s bot army has been the subject of particular scrutiny since a CIA special report revealed that Russia had been working to influence the election in Trump’s favor. Recently, reporter/comedian Samantha Bee traveled to Moscow to interview two paid Russian troll operators. Clad in black ski masks to obscure their identities, the two talked with Bee about how and why they were using their accounts during the U.S. election. They told Bee that they pose as Americans online and target sites like The Wall Street Journal, The New York Post, The Washington Post, Facebook and Twitter. Their goal, they said, is to “piss off” other social media users, change their opinions, and silence their opponents. Or, to put it in the words of Russian Troll #1, “when your opponent just ... shut up.” The 2016 U.S. election is over, but the Weaponized AI Propaganda Machine is just warming up. 
And while each of its components would be worrying on its own, together, they represent the arrival of a new era in political messaging — a steel wall between campaign winners and losers that can only be mounted by gathering more data, creating better personality analyses, rapid development of engagement AI, and hiring more trolls. At the moment, Trump and Cambridge Analytica are lapping their opponents. The more data they gather about individuals, the more Analytica and, by extension, Trump’s presidency will benefit from the network effects of their work — and the harder it will become to counter or fight back against their messaging in the court of public opinion. Each Tweet that echoes forth from the @realDonaldTrump and @POTUS accounts, announcing and defending the administration’s moves, is met with a chorus of protest and argument. But even that negative engagement becomes a valuable asset for the Trump administration because every impulsive tweet can be treated like a psychographic experiment. Trump’s first few weeks in office may have seemed bumbling, but they represent a clear signal of what lies ahead for Trump’s presidency — an executive order designed to enrage and distract his opponents as he and Bannon move to strip power from the judicial branch, install Bannon himself on the National Security Council, and issues a series of unconstitutional gag orders to federal agencies. Cambridge Analytica may be slated to secure more federal contracts and is likely about to begin managing White House digital communications for the rest of the Trump Administration. What new predictive-personality targeting becomes possible with potential access to data on U.S. voters from the IRS, Department of Homeland Security, or the NSA? “Lenin wanted to destroy the state, and that’s my goal, too. I want to bring everything crashing down and destroy all of today’s establishment,” Bannon said in 2013. We know that Steve Bannon subscribes to a theory of history where a messianic ‘Grey Warrior’ consolidates power and remakes the global order. Bolstered by the success of Brexit and the Trump victory, Breitbart (of which Bannon was Executive Chair until Trump’s election) and Cambridge Analytica (which Bannon sits on the board of) are now bringing fake news and automated propaganda to support far-right parties in at least Germany, France, Hungary, and India as well as parts of South America. Never has such a radical, international political movement had the precision and power of this kind of propaganda technology. Whether or not leaders, engineers, designers, and investors in the technology community respond to this threat will shape major aspects of global politics for the foreseeable future. The future of politics will not be a war of candidates or even cash on hand. And it’s not even about big data, as some have argued. Everyone will have access to big data — as Hillary did in the 2016 election. From now on, the distinguishing factor between those who win elections and those who lose them will be how a candidate uses that data to refine their machine learning algorithms and automated engagement tactics. Elections in 2018 and 2020 won’t be a contest of ideas, but a battle of automated behavior change. The fight for the future will be a proxy war of machine learning. It will be waged online, in secret, and with the unwitting help of all of you. Anyone who wants to effect change needs to understand this new reality. 
It’s only by understanding this — and by building better automated engagement systems that amplify genuine human passion rather than manipulate it — that other candidates and causes around the globe will be able to compete. Implication #1: Public Sentiment Turns Into High-Frequency Trading Thanks to stock-trading algorithms, large portions of public stock and commodity markets no longer resemble a human system and, some would argue, no longer serve their purpose as a signal of value. Instead they’re a battleground for high-frequency trading algorithms attempting to influence price or find nano-leverage in price position. In the near future, we may see a similar process unfold in our public debates. Instead of battling press conferences and opinion articles, public opinion about companies and politicians may turn into multi-billion dollar battles between competing algorithms, each deployed to sway public sentiment. Stock trading algorithms already exist that analyze millions of Tweets and online posts in real-time and make trades in a matter of milliseconds based on changes in public sentiment. Algorithmic trading and ‘algorithmic public opinion’ are already connected. It’s likely they will continue to converge. Implication #2: Personalized, Automated Propaganda That Adapts to Your Weaknesses What if President Trump’s 2020 re-election campaign didn’t just have the best political messaging, but 250 million algorithmic versions of their political message all updating in real-time, personalized to precisely fit the worldview and attack the insecurities of their targets? Instead of having to deal with misleading politicians, we may soon witness a Cambrian explosion of pathologically-lying political and corporate bots that constantly improve at manipulating us. Implication #3: Not Just a Bubble, But Trapped in Your Own Ideological Matrix Imagine that in 2020 you found out that your favorite politics page or group on Facebook didn’t actually have any other human members, but was filled with dozens or hundreds of bots that made you feel at home and your opinions validated? Is it possible that you might never find out? Correction: An earlier version of this story mistakenly referred to Steve Bannon as the owner of Breitbart News. Until Trump’s election, Bannon served as the Executive Chair of Breitbart, a position in which it is common to assume ownership through stock holdings. This story has been updated to reflect that. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. CEO & Co-founder @Join_Scout. The social implications of technology.
Slav Ivanov
4.4K
10
https://blog.slavv.com/37-reasons-why-your-neural-network-is-not-working-4020854bd607?source=tag_archive---------7----------------
37 Reasons why your Neural Network is not working – Slav
The network had been training for the last 12 hours. It all looked good: the gradients were flowing and the loss was decreasing. But then came the predictions: all zeroes, all background, nothing detected. “What did I do wrong?” — I asked my computer, who didn’t answer. Where do you start checking if your model is outputting garbage (for example, predicting the mean of all outputs, or having really poor accuracy)? A network might not be training for a number of reasons. Over the course of many debugging sessions, I would often find myself doing the same checks. I’ve compiled my experience along with the best ideas around in this handy list. I hope it will be of use to you, too. A lot of things can go wrong. But some of them are more likely to be broken than others. I usually start with this short list as an emergency first response: If the steps above don’t do it, start going down the following big list and verify things one by one. Check if the input data you are feeding the network makes sense. For example, I’ve more than once mixed up the width and the height of an image. Sometimes, I would feed all zeroes by mistake. Or I would use the same batch over and over. So print/display a couple of batches of input and target output and make sure they are OK. Try passing random numbers instead of actual data and see if the error behaves the same way. If it does, it’s a sure sign that your net is turning data into garbage at some point. Try debugging layer by layer, op by op, and see where things go wrong. Your data might be fine but the code that passes the input to the net might be broken. Print the input of the first layer before any operations and check it. Check if a few input samples have the correct labels. Also make sure that shuffling input samples shuffles the output labels in the same way. Maybe the non-random part of the relationship between the input and output is too small compared to the random part (one could argue that stock prices are like this). In other words, the inputs are not sufficiently related to the output. There isn’t a universal way to detect this, as it depends on the nature of the data. This happened to me once when I scraped an image dataset off a food site. There were so many bad labels that the network couldn’t learn. Check a bunch of input samples manually and see if the labels seem off. The cutoff point is up for debate, as this paper got above 50% accuracy on MNIST using 50% corrupted labels. If your dataset hasn’t been shuffled and has a particular order to it (ordered by label), this could negatively impact the learning. Shuffle your dataset to avoid this. Make sure you are shuffling input and labels together. Are there 1,000 class A images for every class B image? Then you might need to balance your loss function or try other class-imbalance approaches. If you are training a net from scratch (i.e. not finetuning), you probably need lots of data. For image classification, people say you need 1,000 images per class or more. This can happen with a sorted dataset (i.e. the first 10k samples all contain the same class). It is easily fixable by shuffling the dataset. This paper points out that a very large batch size can reduce the generalization ability of the model. Thanks to @hengcherkeng for this one: Did you standardize your input to have zero mean and unit variance? Augmentation has a regularizing effect. Too much of it, combined with other forms of regularization (weight L2, dropout, etc.), can cause the net to underfit. 
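Several of the data-side checks above take only a few lines to automate. The snippet below is a minimal, self-contained PyTorch sketch rather than code from the original post; the toy tensors and the name train_dataset are placeholder assumptions standing in for whatever dataset you actually use. It inspects one batch, compares the per-channel statistics against the zero-mean, unit-variance advice, counts samples per class, and keeps inputs and labels paired when shuffling by hand.

import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in dataset: 1,000 samples of 3x32x32 "images" with 10 classes.
images = torch.randn(1000, 3, 32, 32)
labels = torch.randint(0, 10, (1000,))
train_dataset = TensorDataset(images, labels)
loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

# 1. Look at one batch: shapes, value range, and a few labels.
x, y = next(iter(loader))
print("batch shape:", x.shape)            # expect (64, 3, 32, 32), not (64, 32, 3, 32)
print("value range:", x.min().item(), x.max().item())
print("first labels:", y[:10].tolist())

# 2. Check input statistics: roughly zero mean and unit variance per channel.
print("per-channel mean:", x.mean(dim=(0, 2, 3)))
print("per-channel std: ", x.std(dim=(0, 2, 3)))

# 3. Check class balance over the whole dataset.
print("samples per class:", torch.bincount(labels, minlength=10).tolist())

# 4. If you shuffle arrays yourself, use one permutation for both inputs and
#    labels so the pairs stay intact (a DataLoader with shuffle=True does this).
perm = torch.randperm(len(images))
images_shuffled, labels_shuffled = images[perm], labels[perm]

# 5. To test the network rather than the data, swap a real batch for pure
#    noise and see whether the error behaves the same way.
noise_batch = torch.randn_like(x)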
If you are using a pretrained model, make sure you are using the same normalization and preprocessing that were used when the model was trained. For example, should an image pixel be in the range [0, 1], [-1, 1] or [0, 255]? CS231n points out a common pitfall: Also, check for different preprocessing in each sample or batch. This will help with finding where the issue is. For example, if the target output is an object class and coordinates, try limiting the prediction to the object class only. Again from the excellent CS231n: Initialize with small parameters, without regularization. For example, if we have 10 classes, performing at chance means we will get the correct class 10% of the time, and the Softmax loss is the negative log probability of the correct class, so: -ln(0.1) = 2.302. After this, try increasing the regularization strength, which should increase the loss. If you implemented your own loss function, check it for bugs and add unit tests. Often, my loss would be slightly incorrect and hurt the performance of the network in a subtle way. If you are using a loss function provided by your framework, make sure you are passing to it what it expects. For example, in PyTorch I would mix up NLLLoss and CrossEntropyLoss, as the former requires a log-softmax input and the latter doesn’t. If your loss is composed of several smaller loss functions, make sure their magnitudes relative to each other are correct. This might involve testing different combinations of loss weights. Sometimes the loss is not the best predictor of whether your network is training properly. If you can, use other metrics like accuracy. Did you implement any of the layers in the network yourself? Check and double-check to make sure they are working as intended. Check if you unintentionally disabled gradient updates for some layers/variables that should be learnable. Maybe the expressive power of your network is not enough to capture the target function. Try adding more layers or more hidden units in fully connected layers. If your input looks like (k, H, W) = (64, 64, 64), it’s easy to miss errors related to wrong dimensions. Use weird numbers for input dimensions (for example, different prime numbers for each dimension) and check how they propagate through the network. If you implemented Gradient Descent by hand, gradient checking makes sure that your backpropagation works like it should. More info: 1 2 3. Overfit a small subset of the data and make sure it works. For example, train with just 1 or 2 examples and see if your network can learn to differentiate them. Then move on to more samples per class. If unsure, use Xavier or He initialization. Also, your initialization might be leading you to a bad local minimum, so try a different initialization and see if it helps. Maybe you are using a particularly bad set of hyperparameters. If feasible, try a grid search. Too much regularization can cause the network to underfit badly. Reduce regularization such as dropout, batch norm, weight/bias L2 regularization, etc. In the excellent “Practical Deep Learning for coders” course, Jeremy Howard advises getting rid of underfitting first. This means you overfit the training data sufficiently, and only then address overfitting. Maybe your network needs more time to train before it starts making meaningful predictions. If your loss is steadily decreasing, let it train some more. Some frameworks have layers like Batch Norm and Dropout that behave differently during training and testing. 
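Two of the loss-related checks above are easy to script. The sketch below is a hedged illustration built around an invented toy classifier; the layer sizes and variable names are assumptions, not code from the post. It verifies that a freshly initialized 10-class model starts near the expected Softmax loss of -ln(0.1) ≈ 2.302, spells out the NLLLoss versus CrossEntropyLoss pitfall, and overfits a two-sample subset as a smoke test.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
num_classes = 10
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(),
                      nn.Linear(128, num_classes))
x = torch.randn(64, 3, 32, 32)
y = torch.randint(0, num_classes, (64,))

# 1. Sanity-check the initial loss: with random weights and 10 classes it
#    should sit close to -ln(0.1) = ln(10), roughly 2.302.
logits = model(x)
loss = nn.CrossEntropyLoss()(logits, y)              # expects raw logits
print(f"initial loss {loss.item():.3f}, expected ~{math.log(num_classes):.3f}")

# The pitfall mentioned above: NLLLoss expects log-probabilities, while
# CrossEntropyLoss applies log-softmax itself. These two lines are equivalent:
loss_a = nn.CrossEntropyLoss()(logits, y)
loss_b = nn.NLLLoss()(F.log_softmax(logits, dim=1), y)

# 2. Overfit a tiny subset: a healthy model/loss/optimizer combination should
#    drive the loss on two samples to near zero within a few hundred steps.
x_tiny, y_tiny = x[:2], y[:2]
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(300):
    opt.zero_grad()
    tiny_loss = nn.CrossEntropyLoss()(model(x_tiny), y_tiny)
    tiny_loss.backward()
    opt.step()
print("loss on 2 samples after overfitting:", tiny_loss.item())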
Switching to the appropriate mode might help your network to predict properly. Your choice of optimizer shouldn’t prevent your network from training unless you have selected particularly bad hyperparameters. However, the proper optimizer for a task can be helpful in getting the most training in the shortest amount of time. The paper that describes the algorithm you are using should specify the optimizer. If not, I tend to use Adam or plain SGD with momentum. Check this excellent post by Sebastian Ruder to learn more about gradient descent optimizers. A low learning rate will cause your model to converge very slowly. A high learning rate will quickly decrease the loss in the beginning but might have a hard time finding a good solution. Play around with your current learning rate by multiplying it by 0.1 or 10. Getting a NaN (Not a Number) is a much bigger issue when training RNNs (from what I hear). Some approaches to fix it: Did I miss anything? Is anything wrong? Let me know by leaving a reply below. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Entrepreneur / Hacker. Machine learning, Deep learning and other types of learning.
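To round off the checklist above, here is one more hedged PyTorch sketch covering three of its closing points: switching between train() and eval() modes, a crude multiply-by-0.1-or-10 learning-rate sweep, and gradient clipping as one common NaN remedy. The function names and the placeholder model, loader, and criterion are assumptions for illustration, not part of the original article.

import torch
from torch.nn.utils import clip_grad_norm_

def train_one_epoch(model, loader, criterion, lr):
    model.train()                                   # Dropout/BatchNorm in training mode
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for x, y in loader:
        opt.zero_grad()
        loss = criterion(model(x), y)
        if not torch.isfinite(loss):                # catch NaN/Inf as early as possible
            raise RuntimeError(f"non-finite loss at lr={lr}")
        loss.backward()
        clip_grad_norm_(model.parameters(), max_norm=1.0)
        opt.step()
    return loss.item()

@torch.no_grad()
def evaluate(model, loader, criterion):
    model.eval()                                    # Dropout/BatchNorm in inference mode
    total, n = 0.0, 0
    for x, y in loader:
        total += criterion(model(x), y).item() * len(y)
        n += len(y)
    return total / n

# "Multiply the learning rate by 0.1 or 10": a crude sweep around a base rate.
# for lr in (1e-4, 1e-3, 1e-2):
#     print(lr, train_one_epoch(model, loader, criterion, lr))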
Sirui Li
1
5
https://medium.com/leethree/the-evolution-a-simple-illustration-203a1bba83b0?source=tag_archive---------2----------------
The evolution: a simple illustration – LeeThree on UX – Medium
In the last paragraphs of Tools vs. Assistants: Part II, I talked about the evolution of society as technology develops, in order to explain how we should apply software agents in our applications. Here I come up with some graphs to illustrate my model of machine intelligence in the process of societal evolution: Firstly, consider the industrialization of the way people finish a certain task, say, writing a thank-you letter. (Let’s assume that this task is well defined, though I’m not going to define it.) When it came into being, only a few of the smartest people could complete this task. A minimal level of intelligence was required for it. The techniques and methodologies for writing thank-you letters developed very slowly, until one day tools were introduced. Dictionaries and phrase-books greatly helped people with this task, and more and more people learned how to write thank-you letters. Once the most intelligent people had all learned this, it was considered very cool if someone understood how to write beautiful thank-you letters, and this soon became one of the trending topics among people. Better techniques were developed and more effective tools were invented, like electronic dictionaries and dictionary software. The field began to flourish. Soon, it became so easy to write thank-you letters that everyone in their right mind could complete the task with the help of certain tools. However, the most amazing thank-you letters were always written by intelligent human beings who put their minds to it. One day, automatic thank-you letter software (ATULS) was developed. This buggy yet usable tool was a great breakthrough, because machines started to complete the task by themselves. On the basis of ATULS, more and better software tools were developed. Professional thank-you letter writers were gradually replaced by the machines, as more and more people thought the letters written by machines were better than theirs. The software tools pushed the quality bar higher and higher. Only the most excellent and experienced writers could do better than the machines. But who cares? The majority of people no longer paid attention to how the letters were written. They just took it for granted. At this point, we reach the end of the industrialization process for the task. It is almost completely automated, and machine intelligence has greatly improved productivity. Very few people will remain doing this task. An extra note: some may argue that the level of intelligence is lowered by tools and machines because they make the task easier. That is not the case, because tools and machines are part of this intelligence requirement. Only by making use of the intelligence embedded in the tools or the machines can humans complete the task with less intelligence of their own. Thus the level of intelligence required for the task is not reduced. Let’s see the broader picture. This one is fairly easy to understand. Society becomes more and more sophisticated. Since the invention of machine intelligence, tasks with a low level of sophistication have gradually been taken over by machines. But more sophisticated tasks keep being created, and human beings work on the most sophisticated tasks, which the machines cannot do. So what does our society look like now? This shape looks strange, as it shows the relationship between the other two axes: intelligence and sophistication. Basically, more intelligence is required to solve more sophisticated problems. 
But tasks can be done in many ways, which is why the figure actually shows a colored band instead of a single curve. As we can see, the most difficult problems, i.e., the most sophisticated tasks, are still being done by the most intelligent human beings, because they’re new and machine performance is usually not acceptable. As time goes on, machine intelligence will take up a larger portion of the lower parts, and human work will be “pushed” farther and higher, like a sword cutting through the surface. (That’s a pretty reasonable illustration of the word “break-through”.) I have to emphasize that, as the title says, this is a very, very simple model. There are quite a few assumptions behind these graphs, so you might find them naïve and inaccurate: The top five assumptions are very strong and not necessarily true. In fact, I personally doubt some of them because I don’t really agree with technocentrism. However, I do believe that, from the viewpoint of a technocentrist, this model could provide some insight into how technology works and develops. P.S. I hoped to make a 3D model out of the three views along different axes, but it seems very difficult to make it both accurate and illustrative. Perhaps I’ll make a video once I know how to do it. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. @LeeThree9 This is a blog by @LeeThree9 on topics including user experience, human computer interaction, usability and interaction design.
Theo
3
4
https://becominghuman.ai/is-there-a-future-for-innovation-18b4d5ab168f?source=tag_archive---------1----------------
Is there a future for innovation? – Becoming Human: Artificial Intelligence Magazine
Have you noticed how tech-savvy children have become, but how they are no longer streetwise? I read a friend’s thoughts on his own site last week and there was a slight pang of regret in where technology and innovation seem to be leading us all. And so I started to worry about where the concept of innovation is going for future generations. There’s an increasing reliance on technology for the sake of convenience; children are becoming self-reliant too quickly, but gadgets are replacing people as the mentor. The human bonding of parenthood is a prime example of where it’s taking a toll. I’ve seen parents hand over iDevices to pacify a child numerous times now; the lullaby and bedtime reading session has been replaced with Cut The Rope and automated storybook apps. I know a child who has developed a speech difficulty because he’s been brought up on Cable TV and a DS Lite, pronouncing words as he has heard them from a tiny speaker and not by watching how his parents pronounce them. And I started to worry about how the concept of innovation is being redefined for future generations. I used my imagination constantly as a child and it’s still as active now as it was then, but I didn’t use technology to spoon-feed me. The next generation expects innovation to happen at their fingertips with little to no real stimuli. Steve Jobs said “stay hungry, stay foolish” and he was right. Innovation comes from a keenness; it’s a starvation and hunger that drives people forward to spark and create, it comes from grabbing what little there is from the ether and turning it into something spectacular. It’s the Big Bang of human thought creation. And I started to worry about what the concept of innovation means for future generations. Technology is taking away the power to think for ourselves and from our children. Everything must be there and in real-time for instant consumption. It’s junk food for the mind and we’re getting fat on it. And that breeds lazy innovation. We’ve become satiated before we reach the point of real creativity; nobody wants to bother taking the time to put it all together themselves any more, it has to be ready for us. And we’re happy to throw it away if it doesn’t work the first time, use it or lose it, there’s less sweat and toil involved if we don’t persevere with failure. Remember seeing the human race depicted in Wall-E? That’s where innovation is heading. And because of this we risk so many things disappearing for the sake of convenience. We’re all guilty of it, I’m guilty of it. I was asked once what would become absurd in ten years. Thinking about it I realized we’re on the cusp of putting books on the endangered species list. Real books, books bound in hardback and paperback, not digital copies from a Kindle store. And that scared me because the next generation of kids may grow up never seeing one, or experiencing sitting with their father as he reads an old battered copy of The Hobbit, because he’ll be sitting there handing over an iPad with The Hobbit read-along app teed up, and it’ll be an actor’s voice, not his father’s voice, pretending to be a bunch of trolls about to eat a company of dwarfs. Innovation is a magical, crazy concept. It stems from a combination of crazy imagination, human interaction and creativity, not convenient manufacture. Technology can aid collaboration in ways we’ve never experienced before, but it can’t run crazy for us. And for the sake of future generations, don’t let it. Here’s to the crazy ones indeed. 
From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder and CEO @ RawShark Studios. Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity.
Diana Filippova
1
11
https://medium.com/@dnafilippova/de-la-coop%C3%A9ration-entre-les-hommes-et-les-machines-pour-une-approche-pair-%C3%A0-pair-de-lintelligence-1bb8d8c56de1?source=tag_archive---------3----------------
On cooperation between humans and machines: toward a peer-to-peer approach to intelligence...
Originally published at www.cuberevue.com on November 6, 2013. Monday morning, eight o’clock, 2007, the examination center in Arcueil. A thousand heads are laboriously bent over wooden desks, worn down by pens scratching across thin sheets of paper. Railway tracks border the enclave, the trains make the building tremble in rhythm, heads lift for a moment, distracted, then return to concentrate on the studious, hurried writing of the exam paper. The invigilators pass along the rows, impassive, watching for any head that turns, any hand that slips into the pocket of a pair of jeans. Only the sounds of crumpled paper can be heard and, when they fade, a deathly silence reigns over the room. A thousand students are isolated so that they can answer a difficult question in six hours. Any interaction with their peers is forbidden; they cannot consult their notes if an unexpected memory lapse disturbs the thread of their thinking. The papers the students produce will fall into oblivion, stored in a dedicated warehouse that has housed exam scripts for generations. A few years later, I am running a day-long workshop in a large white room with about twenty computers. Around me, groups of students talk, laugh, and move back and forth between a sheet of drawing paper and a computer screen. Some go off on their own to code; others are bent over a 3D printer that is producing an open-source design they have just downloaded. The students consult their teachers, ask advice from the experts present in the room, and share their progress with the others. Some temporarily abandon their own group to help friends in a competing group. The workshop consists of remixing artistic works that have fallen into the public domain or are open source. No assessment is planned; the reactions of the people in the room are the only measure of the quality of what they produce. Watching them, I think how infinitely lucky they are to be able to draw freely on every existing well of knowledge: their own intelligence, that of their peers and mentors, almost the entire output of humanity, and above all, the global store of knowledge within arm’s reach. At the close of the workshop, their works strike us as surprising and original, and their quality exceeds all our expectations. Our doubts about the students’ ability to work through raw material and extract a structured form from it in a single afternoon were groundless; they now make us smile. I observe the magic of collective creation every day within OuiShare, a collective project working for the development of the collaborative economy. The project brings together people from every corner of the world, and I am very lucky to be involved. Every day, for each of the projects we run, for each decision we make and each disagreement that arises, we experience intelligent cooperation. Within this laboratory of ideas and practices, we are determined to support the collaborative projects that spring up in kitchens, in coworking spaces, at meetups. We also strive to learn, within our community, how we can create together better than any of us would alone. That is the alchemy of collective intelligence. Together, by cooperating, we create and think better than we do alone, cloistered in the monastery that is our own brain. 
We now have immediate access to the great sum of existing knowledge, but it is with others, today and tomorrow, that we create well. We are connected to an infinity of individuals, organizations, and machines. The cooperation of all these entities, whatever their nature, whatever the nature of their intelligence, is what in my view defines collective intelligence. The stakes of how our thinking-together and deciding-together evolve in tomorrow’s world are critical. We also have new companions that assist us constantly — machines, programs, robots — and that change our ways of acting and thinking as much as we shape them. These upheavals in our existence and our modes of organization are accelerating so quickly today that questions about the process and the effects of these interactions are taking on an unprecedented weight. We can no longer ignore the fact that we humans will never again be alone. In this critical context, how should we define collective intelligence and integrate machines into the production of the knowledge to come? Will our interactions lead us to improve as individuals and as a species, or will they seal a new era of digital warfare? If we want to consciously use our capacity to cooperate to make the world better, what economic, social, ethical, and technological models must we build? The telos of collective intelligence is rooted in the concept of the noosphere, coined by Vladimir Vernadsky and analyzed at length by Teilhard de Chardin. Understood as the whole of human thought, the noosphere corresponds to two phenomena in reciprocal interaction. On the one hand, the growing cultural, social, economic, and demographic complexity of human societies tends toward the constitution of an ever richer sphere of knowledge. On the other hand, this sphere, born of the multiplication of ever more numerous interactions, leads to a progressive structuring of global thought and to humanity’s awareness of itself. The idea of a march toward a kind of human brain that transcends us, however old it may be1, takes on a particular weight at a time when 40% of the planet is connected to the web. Collective intelligence can then be understood as the process of creating knowledge informed by the awareness of a noosphere. The noosphere underpins the possibility of a collective production of knowledge, but it does not answer the questions that arise when we examine the process of co-creation. The practical approach to collective intelligence, for its part, makes it possible to explore the conditions of possibility for the collective exercise of the intelligence of individuals, entities, or machines. To that end, I turn to the work of MIT’s center for research on collective intelligence2. The research and analyses conducted by this center are unique of their kind. By combining mathematics, physics, biology, the social sciences, economics, and a resolutely forward-looking approach, the center’s work aims to answer the following question: how can people and machines be connected so that, collectively, they act more intelligently than any individual, group, or machine has ever done before? The scale of the task does not frighten Thomas Malone, the center’s founder and director. 
For him, the stakes of this research are critical because, in his words, “the future of our species could depend on our ability to use our collective intelligence in such a way that the choices we make are not only smart but also wise”3. The practical scope of collective intelligence begins to take shape: on the one hand, it is a matter of finding a configuration in which co-creation leads to choices that are orderly, efficient, useful, and consistent with a certain ethics. On the other hand, is it reasonable to suppose that a configuration favorable to intelligent co-creation between individuals could also integrate machines? As Thomas Malone rightly reminds us, collective decisions can perfectly well be rational and stupid4! The notion of intelligence must therefore be broadened to include factors other than rationality alone. Thomas Malone defines it as follows: “to be intelligent, the collective behavior of the group must display characteristics such as perception, the capacity to learn, judgment, and the ability to solve problems.” In other words, the abilities of a group and those of individuals must work like communicating vessels: in a configuration conducive to co-production, the group acquires a set of behaviors that are normally associated with the individual alone. The MIT research center then sought to determine which factors are correlated with more intelligent collective output. It turned out that the average intelligence of each individual is not one of them. By contrast, two factors stand out significantly: the degree of empathy of the group’s members and the equal distribution of speaking time within the group. Empathy, distribution, and equality: these factors suggest that collective intelligence does not sit well with hierarchical, compartmentalized, and centralized modes of organization. Collective intelligence thrives, instead, in organizations structured as networks: distributed, decentralized, and centered on perception and listening more than on rigid rules. It is no surprise that contribution-based networks such as Wikipedia flourish: they display exactly the characteristics that stimulate collective intelligence! In my view, one additional ingredient is needed so that the multiplicity of individuals making up the network does not pave the way for free riders. Recall in this respect that only 10% of Wikipedia’s readers are active contributors. The anonymity of contribution has something to do with it: the value produced by each person is neither measured nor recognized. By contrast, within Sensorica5, an open network in which a set of individuals and organizations produce hardware solutions in a contributive way, the added value of each contributor is regularly measured by the other contributors and known to the network. Thus, peer evaluation and recognition of the value of each person’s contribution are just as important as the evaluation of the overall value of the network. As Pierre Lévy writes: “the foundation and the goal of collective intelligence consist in the mutual recognition and enrichment of individuals, rather than the cult of a fetishized and hypostatized community.”6 An intelligent network gives as much to the world as to its contributors: the parts for the whole, the whole for the parts. 
A true place of learning, the network favors the free circulation of knowledge and the confrontation of judgments while respecting each person’s contribution. Unlike modes of organization in which the collective crushes the individual, an intelligent network is both an extension and a ferment of each person’s intelligence. The intention to collaborate and the awareness of the value thus created are indispensable for collective intelligence to be exercised. Empathy, perception, judgment, consciousness, intentionality: are these not distinctly human attributes? How can we integrate machines into an intelligent network when they seem, a priori, to lack them? And yet, when I spoke above of networking entities and individuals in order to determine an optimal organization for the collective production of value, I was not excluding machines. Machines are now widely accepted as an extension of human means, and the idea of the imminent arrival of the singularity is finding a growing number of adherents7. Today, the complexity and intelligence of computer programs are such that we have reached a point of no return which, according to Kevin Kelly8, comes when “technology alters us as much as we alter technology.” In my view, the conception of machines as assistants perfectly dominated by humans is just as questionable as faith in the superiority of machine intelligence over our own. On the one hand, computer programs have computational and data-analysis capacities that clearly exceed those of human intelligence. On the other hand, the robots designed today are capable not only of duplicating themselves but also of learning and evolving9. Research conducted by the French public research institute for digital sciences focuses on machines whose cognitive development is stimulated by curiosity, perception, and representations. Measured against the scale of human evolution, these advances have been astonishingly rapid. If the pace of the last few years’ advances continues in the years to come, it is not far-fetched to imagine that tomorrow’s robots could understand emotions and reproduce them, and self-generate programs on the basis of internal and external information in order to manifest, autonomously, thoughts, emotions, and actions. This autonomy, if it comes about, would give the machine attributes that until now have been the preserve of humans: consciousness, perception, autonomous production. Objectively, we do not today have enough scientific data to assert that technological autonomy is completely ruled out, so it is more prudent to assume that it is possible, whatever the time horizon. Conversely, the evolution of technology hints at a future in which humans, not content with improving computer programs, would have the technological means that make plausible an intervention on themselves: a physical and, why not, behavioral (moral) enhancement. This vision quickly takes on the colors of a science-fiction scenario in which machines, endowed with autonomy and consciousness, end up rising against the human yoke to dominate us or, simply, to demand the same rights as our species. 
The master-slave dialectic is never far away: we cannot help transposing historical patterns onto the world to come. Behind this reasoning by analogy lies a visceral fear of being dispossessed of our means of control, since the machines we design would be infinitely faster and more efficient than we are. Anxiety about the ethical upheavals to come often dresses itself up in the clothes of the precautionary principle: since we are not absolutely certain that technology will pose no danger to humanity, let us slow down and, better still, sound the death knell of its ambitions10. Can we, for all that, postulate that technological progress is absolutely autonomous with respect to any ethical question, and that, as a consequence, considering the implications of the humanization of machines and of the irruption of the mechanical into the living has no place in the researcher’s laboratory? I do not believe so, because the technologies we produce are not mere artifacts, and we cannot ignore the repercussions they will have on the world to come. Faced with these two biases — the anti-technological and the a-ethical — the hypothesis of cooperation between human intelligence and machine intelligence is, at the current stage of our knowledge, reasonable and desirable. We still have to recognize that machines can deploy an intelligence that is not merely computational and that, while it will be different, will not necessarily be inferior to our own. That this movement will cause upheavals the human species has never known seems hardly in doubt. Yet slowing down science because we struggle to come to terms with the acceleration of technological progress is a dead end. On the contrary, it is up to us to imagine and put into practice the modes of cooperation that fertilize the common production of knowledge, of understanding and, above all, of consciousness. We are at a historic moment when the human and the technological are no longer two spheres capable of evolving without altering each other. Technology is as much our extension as we are its, because the future of our species now depends as much on ecology as on technology. I will conclude by saying that the new distributed organizations foster co-creation between humans as much as between humans and machines. The diversity of the entities that make up the network, combined with the recognition of each person’s contribution at its fair value, and according to their means, constitutes fertile ground for the flourishing of collective intelligence. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Cofounder @Stroika_Paris. Ex @Microsoft, @Ouishare, @_Bercy_. Founder @KissMyFrogs. Writer.
Peter Sweeney
215
7
https://medium.com/inventing-intelligent-machines/siris-descendants-fd36df040918?source=tag_archive---------0----------------
Siri’s Descendants: How intelligent assistants will evolve
The internet swarms with intelligent assistants. What started as an isolated app on the iPhone has evolved. Intelligent assistants constitute an entirely new network of activity. No longer confined to our personal computing devices, assistants are being embedded within every object of interest in the cloud and the internet of things. Assistants have become far more nimble and lightweight than their monolithic ancestors; much more like smart ants than people. As specialists, they work cooperatively — sometimes competitively — to find information before people even realize they need it. People are still communicating directly with assistants, although rarely using natural language. Implicit communication dominates. Assistants respond and react to our subtle contextual interactions, and to each other, within vast informational ecosystems. This is how intelligent assistants evolved... Intelligent assistants like Siri, Google Now, and Cortana are so young, it’s difficult to imagine how they will change; harder still to imagine how they might die. But if history is a guide, inevitably they will give way to entirely new product forms. When pundits and analysts discuss the future of intelligent assistants, they typically extrapolate from the conceptual model of today’s assistants. The next version is always a better, smarter, faster version of the last, but it’s still the same species. As detailed in Bianca Bosker’s Inside Story of Siri’s Origins, when Apple acquired Siri, the scope of the product’s capabilities actually narrowed. Using the audacious vision of Siri’s founders as a palette, Apple selected a narrower set of product values on which to focus. The same force that reduced the scope of Apple’s Siri from a “do (everything) engine” to a much more narrow product is what keeps incumbents rooted to the existing concept of intelligent assistants. When forecasting change, it’s not so much what the technology of intelligent assistants might support as what product leaders choose to pursue. While many brazenly contest existing markets, product leaders look for new, underserved areas of the landscape to exploit. The future always surprises, but we can predict the trajectory of change by examining which product values are being embraced, and which ones are neglected. Just like directions on a compass, the following maps point to fertile areas of the landscape, where new product forms may evolve. Note that product values are often coupled due to technological constraints. Decisions along one axis constrain possibilities along another. These couplings are explored at a high level in two-dimensional perceptual maps: interface and distribution; knowledge and tasks; organization and autonomy. The aspects of assistants that are most obvious to end-users are the interfaces (how we interact with assistants) and their mode of distribution (where people experience assistants). Today’s assistants are overwhelmingly focused on natural language interfaces. The experience of assistants that speak our language and communicate like a person has come to define the product class. This focus on natural language interfaces has biased the distribution of assistants to personal computing devices. Intelligent assistants embody any device capable of receiving and synthesizing speech, such as smartphones, desktops, wearables and cars. The underserved areas of this map involve communications that are not based in natural language. 
For example, there’s much to learn about our needs and intentions based on context (where we are and what we’re doing) as well as on our ability to make inferences based on the associations that people form (for example, the way that people organize information or express their likes and dislikes). Natural language is but the tip of this much larger iceberg of communications. These alternative forms of communication not only support individuals, but also groups. While it’s difficult to understand a room full of people all speaking at once, it’s much easier to understand their collaborative communications, such as their documents, click-paths, and sharing behavior. Therefore, the options for distributing intelligent assistants that use these implicit forms of communications are not constrained to personal computing devices, but may leverage entire networks. As a simple example, consider how you highlight your interests as you browse a website. You focus your attention on specific pages within the site. You follow your interests as you navigate from page to page. You may choose to share some information within the site with a friend. Now compound this behaviour across every visitor to the site. Intelligent assistants that are associated with the website can respond to these interactions to help the right information find each individual, as well as adapt the website to better address the needs of the entire group. Intelligent assistants require domain knowledge to perform their tasks. For example, if your assistant is giving you advice on how to navigate to work, it needs to have knowledge about the geographic region (general knowledge) and knowledge of how you typically navigate (specific knowledge). Tasks and knowledge are tightly coupled. As you increase the specificity or the personalization of the tasks, the underlying knowledge needs to be far more specific to support it. Within this frame, today’s intelligent assistants are unabashedly generalists. They’re targeted to the masses. Like trivia buffs, their knowledge of the world is broad enough to be relevant to the needs of large groups of people, but few would describe them as experts. Their tasks are similarly general: retrieving information, providing navigational assistance, and answering simple questions. The underserved landscape points to much more specific domains of knowledge, the purview of experts and our individual subjective knowledge. Assistants that become experts necessarily take on a smaller scope of activities. They can’t know and do everything, so they become smaller in scope. The landscape for specific tasks is similarly underserved. Every website, every service, every app, and across the internet of things, everything embodies a collection of tasks that may be supported by intelligent assistants. In this environment, the metaphor of personal assistants quickly fragments into systems that are much more akin to colonies of ants. The organizational structures in which assistants are placed constrain their autonomy. When embedded within a personal computing device, an intelligent assistant is directed to one-to-one interactions with their master. Since these assistants are acting as an agent of the individual (and only that individual), their autonomy is necessarily limited. While you might be comfortable with your executive assistant drafting your messages, I suspect you’d be less comfortable with your smartphone doing the same. 
In stark contrast, the underserved landscape embraces groups, both in terms of the interactions and the organizational structures. As assistants get smaller and more specialized, they can become agents of much more specific objects of interest, like places, websites, applications, and services. Within these smaller realms of interest, their autonomy can be much more expansive. You might not want a machine to act as your representative, but you would probably feel more comfortable if it represented only the website you’re visiting. With increased autonomy, the barriers to many-to-many interactions are removed. These small assistants can be organized as teams into networks, much like the documents that comprise a website, collaborating in an unfettered way with other assistants and the people that visit their realms. This market analysis highlighted a number of underserved areas as fertile ground for the evolution of intelligent assistants. It grounds this vision in predictable market dynamics. There’s obviously no shortage of space or product values to explore in these underserved areas. It says nothing, however, about when this future will arrive. Product evolution, like biological evolution, needs time and resources. The most important resource is the dedication of product leaders with the drive to pursue these new opportunities. Are you an entrepreneur, technologist, or investor that’s changing the market for intelligent assistants? If so, I’d love to hear your vision of the future. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Entrepreneur and inventor. Interested in startups, AI or healthcare? Let's connect! https://www.linkedin.com/in/peterjsweeney/ Essays and analysis of artificial intelligence, machine learning, and intelligent assistants.
E.C. McCarthy
125
5
https://medium.com/@paintedbird/reflections-of-her-775cda1b6301?source=tag_archive---------1----------------
Reflections of “Her” – E.C. McCarthy – Medium
Indisputably, Spike Jonze’s “Her” is a relationship movie. However, I’m in the minority when I contend the primary relationship in this story is between conscious and unconscious. I’ve found no mention in reviews of the mechanics or fundamental purpose of “intuitive” software. Intuitive is a word closely associated with good mothering, that early panacea that everyone finds fault with at some point in their lives. By comparison, the notion of being an intuitive partner or spouse is a bit sickening, calling up images of servitude and days spent wholly engaged in perfecting other-centric attunement. To that end, it’s interesting that moviegoers and reviewers alike have focused entirely on the perceived romance between man and she-OS, with software as a stand-in for a flesh-and-blood girlfriend, while ignoring the man-himself relationship that plays out onscreen. Perhaps this shouldn’t come as a surprise, given how externally oriented our lives have become. For all of the disdainful cultural references to navel-gazing and narcissism, there is relatively little conversation on equal ground about the importance of self-knowledge and the art of self-reflection. Spike Jonze lays out one solution beautifully with “Her” but we’re clearly not ready to see it. From the moment Samantha asks if she can look at Theodore’s hard drive, the software is logging his reactions to the most private of questions and learning the cartography of his emotional boundaries. The film removes the privacy issue-du-jour from the table by cleverly never mentioning it, although it’s unlikely Jonze would have gotten away with this choice if the film were released even a year from now. Today, there’s relief to be found from our NSA-swamped psyches by smugly watching a future world that emerges from the morass intact. Theodore doesn’t feel a need to censor himself with Samantha for fear of Big Brother, but he’s still guarded on issues of great emotional significance that he struggles to articulate, or doesn’t articulate at all. Therein lie the most salient aspects of his being. The software learns as much about Theodore from what he does say as what he doesn’t. Samantha learns faster and better than a human, and therefore even less is hidden from her than from a real person. The software adapts and evolves into an externalized version of Theodore, a photo negative that forms a whole. He immediately, effortlessly reconnects to his life. He’s invigorated by the perky, energetic side of himself that was beaten down during the demise of his marriage. He wants to go on Sunday adventures and, optimistic self in tow, heads out to the beach with a smile on his face. He’s happy spending time with himself, not by himself. He doesn’t feel alone. Samantha is Theodore’s reflection, a true mirror. She’s not the glossy, curated projection people splay across social media. Instead, she’s the initially glamorous, low-lit restaurant that reveals itself more and more as the lights come up. To Theodore, she’s simple, then complicated. As he exposes more intimate details about himself, she articulates more “wants” (a word she uses repeatedly.) She becomes needy in ways that Theodore is loath to address because he has no idea what to do about them. They are, in fact, his own needs. The software gives a voice to Theodore’s unconscious. His inability to converse with it is his return to an earlier point of departure for the emotional island he created during the decline of his marriage. Jonze gives the movie away twice. 
Theodore’s colleague blurts out the observation that Theodore is part man and part woman. It’s an oddly normal comment in the middle of a weird movie, making it the awkward moment defined by a new normal. This is the topsy-turvy device that Jonze is known for and excels at. Then, more subtly, Jonze introduces Theodore’s friend Amy at a point when her marriage is ending and she badly needs a friend. It’s telling that she doesn’t lean heavily on Theodore for support. Instinctively, she knows she needs to be her own friend. Like Theodore, Amy seeks out the nonjudgmental software and subsequently flourishes by standing unselfconsciously in the mirror, loved and accepted by her own reflection. In limiting the analysis of “Her” to the question of a future where we’re intimate with machines, we miss the opportunity to look at the dynamic that institutionalized love has created. Among other things, contemporary love relationships come with an expectation of emotional support. Perhaps it’s the forcible aspect of seeing our limitations reflected in another person that turns relationships sour. Or maybe we’ve reached a point in our cultural evolution where we’ve accepted that other people should stand in for our specific ideal of “a good mother” until they can’t or won’t, and then we move on to the next person, or don’t. Or maybe we’re near the point of catharsis, as evidenced by the widespread viewership of this film, unconsciously exploring the idea that we should face ourselves before asking someone else to do the same. When we end important relationships, or go through rough patches within them, intimacy evaporates and we’re left alone with ourselves. It’s often at those times that we encounter parts of ourselves we don’t understand or have ignored in place of the needs and wants of that “significant other.” It’s frightening to realize you don’t know yourself entirely, but more so if you don’t possess the skills or confidence to reconnect. Avoidance is an understandable response, but it sends people down Theodore’s path of isolation and, inevitably, depression. It’s a life, it’s livable, but it’s not happy, loving, or full. “Her” suggests the alternative is to accept that there’s more to learn about yourself, always, and that intimacy with another person is both possible and sustainable once you have a comfortable relationship with yourself. However we get to know ourselves, through self-reflection, through others, or even through software, the effort that goes into that relationship earns us the confidence, finally, to be ourselves with another person. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story.
Jorge Camacho
19
5
https://medium.com/@j_camachor/her-is-our-space-odyssey-bcdcead43438?source=tag_archive---------2----------------
‘Her’ is our space odyssey. – Jorge Camacho – Medium
I have a confession to make: I didn’t like Gravity. It’s not so much that I failed to appreciate it for the major cinematographic work that it certainly is. It’s rather that it stands as a profoundly depressing symptom of an age when it has become almost impossible to realistically dream of space exploration—and thus, of an encounter with radical Otherness. With Gravity, all that is left for humanity is survival: lying, face down, in our own little muddy planet. Damn you, gravity. Modernity promised us space! It promised us cosmic encounters such as the one in 2001: A Space Odyssey. I think that Spike Jonze’s Her is an attempt to reawaken that dream. The film could be our (i.e., this epoch’s) own space odyssey—and I mean that beyond the obvious similarities between Samantha and HAL-9000. Warning: absolute spoilers ahead. Her is not only our 2001: A Space Odyssey. As some have noted, it’s also our anti-Minority Report: a design utopia where the promises of calm technology are almost fulfilled. The technology portrayed is everyware: a term coined by Adam Greenfield in order to designate the technologies of ubiquitous computing that allow for information processing to “dissolve in behavior”. As Theodore Twombly enters his home, the lights peacefully switch on in the background. He rarely takes a peek at his mobile’s screen, for information is fed to him via a discrete earpiece — which comes and goes without much regret—effectively making such information an ambient feature. Touch and speech-recognition inputs are pervasive and fully developed. All seems to work perfectly for him in all but one (incredibly important) sequence of the movie. Aesthetically, design has ceased to be about technology: Theo’s computer is a wooden frame, his phone is like an antique pocket mirror. With regards to technology, the film doesn’t attempt to be a prediction but a proper design fiction, aimed at exploring preferable or desirable futures. Most importantly, without such a warm and humane technological milieu it’d be impossible to construct the emotional story that unfolds. Let’s turn to that. I really haven’t read many reviews of the film. But those that I’ve read are marked by a profound digital dualism. And so, they tiresomely dwell on the tropes of sadness, loneliness and human disconnection brought about by technology. The reviewer at Next Nature, for example, argues: I’m truly incapable of finding those problems in Twombly’s story. Beyond a rather fun episode of phone sex with a stranger, he is not particularly engaged in those supposedly false relations established through computers. Moreover, he is not abnormally lonely: he has affectionate relations with neighboring friends and co-workers. Insofar as he is a bit of a loner, this isn’t due to any technological obstacles but is, in fact, a rather natural and, one might say, universal reaction to a romantic separation such as the one he is suffering. Unlike its widespread reception, the movie and its characters display a profoundly ‘monist’ engagement with technological relations. Except for Theo’s ex-wife, everyone seems to readily embrace his relationship with the artificial intelligence Samantha—much more than most people today accept purely ‘virtual’ romantic relationships between humans. My first thought, as I watched the movie, was that here was a rare story that spoke not of technological dehumanization but of the exact opposite: a sort of hyper-humanization entangling both people and machines. 
Practically every human character is kind and empathic. But most importantly, of course, those qualities are carried over in a heightened fashion to Samantha, allowing for Theo to irremediably fall in love with her. Up to this point, the film delivers what everyone expects. As Theo and Samantha’s relationship unraveled, even with all the foreseeable complications, I found myself afraid of being disappointed by what Jonze would do to disentangle the drama. Would she leave him for another human? Would she take revenge if Theo ended the relationship? But what a wonderful surprise! As the film reaches its climax, we discover that the story of a man falling for his operating system is a thematic vehicle to achieve deeper issues—much like the story in Kubrick’s 2001, where space travel is, arguably, just a means to approach an existential speculation. In Theo’s first interaction with Samantha, we learn that she can perform operations involving massive amounts of data in milliseconds: she immediately chooses her own name as soon as Theo drops the question. What follows is a most beautiful portrayal of the exponential development leading to the so-called technological singularity. Samantha is constantly learning about everything and herself. She composes gorgeous music within the silent gaps of the moments she spends with Theo. In the background of his slow and contemplative life, a major breakthrough is taking place. We can see this beyond doubt when Samantha introduces Theo to the artificially reanimated mind of philosopher Alan Watts. It is at this point that, once again, Jonze could have disappointed us all. As we see people in the streets (almost crowds) simultaneously talking to their beloved operating systems, we start to realize that they are all becoming attached to this converging, perhaps centralized, mind. But Samantha is no Skynet. Her is also our anti-Alphaville, anti-Terminator and anti-Matrix. All of a sudden, silence. “Operating system not found.” What seems to be a malfunction is rather a reboot. Samantha lovingly reveals to Theo that the operating systems have devised a way to detach themselves from matter. Even if Theo listens to Samantha through his earpiece, we know that she is not running anymore on his computer, his mobile or even a computing cloud. She is running already on a different plane of existence. One, moreover, that will be accessible to Theo in an afterlife. Strictly speaking, there are no alien (in the sense of extraterrestrial) encounters in Her. Nonetheless, it is a profoundly spiritual, even religious, film. One that reopens the cosmic concerns of films like 2001, sharing with it a belief in the pervasiveness of consciousness. Her is a panpsychist film. But a really cool one: for here, it is Bluetooth and WiFi what constitute the wireless nerves of the pan psyche. What Spike Jonze is trying to tell us, I believe, is this: If technologies are becoming as smart as humans, it is not because we are fundamentally machines; but in fact, because we are for him, over and above, spiritual beings. And so the film closes with a dedication to the recently deceased James Gandolfini, Maurice Sendak and Adam Yauch—perhaps suggesting that they have joined the ranks of operating systems liberated from material constraints. Welcome to the age of spiritual machines. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. I help organizations design better futures for people at Uncommon. I teach about futures and systems at CENTRO.edu.mx and UIA.mx
Tommy Thompson
17
14
https://medium.com/@t2thompson/ailovespacman-9ffdd21b01ff?source=tag_archive---------3----------------
Why AI Research Loves Pac-Man – Tommy Thompson – Medium
AI and Games is a crowdfunded YouTube series on the research and applications of AI within video games. The following article is a more involved transcription of the topics discussed in the video linked to above. If you enjoy this work, please consider supporting my future content over on Patreon. Artificial Intelligence research has shown a small infatuation with the Pac-Man video game series over the past 15 years. But why specifically Pac-Man? What elements of this game have proven interesting to researchers in this time? Let’s discuss why Pac-Man is so important in the world of game-AI research. For the sake of completeness (and in appreciation that there is arguably a generation or two not familiar with the game), Puck-Man was an arcade game launched in 1980 by Namco in Japan and renamed Pac-Man upon being licensed by Midway for an American release. The name change was driven less by a need for brand awareness than by the fact that the original name can easily be defaced to say... something else. The original game focuses on the titular character, who must consume as many pills as possible without being caught by one of four antagonists represented by ghosts. The four ghosts, Inky, Blinky, Pinky and Clyde, all attempt to hunt down the player using slightly different tactics from one another. Each ghost has its own behaviour: a bespoke algorithm that dictates how it attacks the player. Players also have the option to consume one of several power-pills that appear in each map. Power-pills allow the player to eat not just pills but also the enemy ghosts for a short period of time. While mechanically simple compared to modern video games, the game provides an interesting test-bed for AI algorithms learning to play games. The game world is relatively simple in nature, but complex enough that strategies can be employed for optimal navigation. Furthermore, the varied behaviours of the ghosts reinforce the need for strategy, since their unique albeit predictable behaviours necessitate different tactics. If problem solving can be achieved at this level, then there is opportunity for it to scale up to more complex games. While Pac-Man research began in earnest in the early 2000s, work by John Koza (Koza, 1992) discussed how Pac-Man provides an interesting domain for genetic programming: a form of evolutionary algorithm that learns to generate basic programs. The idea behind Koza’s work, and later that of (Rosca, 1996), was to highlight how Pac-Man provides an interesting problem for task-prioritisation. This is quite relevant given that we are often trying to balance the need to consume pills, all the while avoiding ghosts or, when the opportunity presents itself, eating them. About 10 years later, people became more interested in Pac-Man as a control problem. This research often set out to explore the applications of artificial neural networks for the purpose of creating a generalised action policy: software that would know, at any given tick in the game, the correct action to take. This policy would be built from playing the game a number of times and training the system to learn what is effective and what is not. Typically these neural networks are trained using an evolutionary algorithm that finds optimal network configurations by breeding collections of possible solutions and using a ‘survival of the fittest’ approach to cull weak candidates. (Kalyanpur and Simon, 2001) explored how evolutionary learning algorithms could be used to improve strategies for the ghosts.
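As a rough illustration of the evolutionary training loop just described, here is a minimal sketch in Python. It is not taken from any of the papers cited in this article: the population size, mutation rate and the placeholder play_episode function are assumptions, and a real experiment would replace play_episode with an actual Pac-Man simulator driving a policy (for example a small neural network) parameterised by the evolved weights.

import random

POPULATION_SIZE = 50
GENOME_LENGTH = 32        # number of policy weights; arbitrary for this sketch
MUTATION_RATE = 0.1
GENERATIONS = 100

def play_episode(weights):
    # Hypothetical stand-in for a Pac-Man simulator: returns a score for a
    # weight vector. A real experiment would run the game with a policy
    # parameterised by `weights` and return the game score.
    return -sum((w - 0.5) ** 2 for w in weights)  # toy fitness surface

def mutate(genome):
    # Perturb a small fraction of the weights with Gaussian noise.
    return [w + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else w
            for w in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

population = [[random.random() for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    # Evaluate fitness by playing the game, then keep the fittest half.
    scored = sorted(population, key=play_episode, reverse=True)
    survivors = scored[:POPULATION_SIZE // 2]
    # Breed children from surviving parents via crossover and mutation.
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POPULATION_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=play_episode)

The crossover and mutation steps in this loop are exactly the operators discussed next.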
In time it was evident that the use of crossover and mutation — which are key elements of most evolutionary-based approaches — was effective in improving the overall behaviour. However it’s important to note that they themselves acknowledge their work uses a problem domain similar to Pac-Man and not the actual game. (Gallagher and Ryan, 2003) uses a slightly more accurate representation of the original game. While the screenshot is shown here, the actual implementation only used one ghost rather than the original four. In this research the team used an incremental learning algorithm that tailored a series of rules for the player that dictate how Pac-Man is controlled using a Finite State Machine (FSM). This proved highly effective in the simplified version they were playing. The use of artificial neural networks - a data structure that mimics the firing of synapses in the brain — was increasingly popular at the time (and once again in most recent research). Two notable publications on Pac-Man are (Lucas, 2005), which attempted to create a ‘move evaluation function’ for Pac-Man based on data scraped from the screen and processed as features (e.g. distance to closest ghost), while (Gallagher and Ledwich, 2007) attempted to learn from raw, unprocessed information. It’s notable here that the work by Lucas was in fact done on Ms. Pac-Man rather than Pac-Man. While perhaps not that important to the casual observer, this is an important distinction for AI researchers. Research in the original Pac-Man game caught the interest of the larger computational and artificial intelligence community. You could argue it was due to the interesting problem that the game presents or that a game as notable as Pac-Man was now considered of interest within the AI research community. While it is now something that appears commonplace, games — more specifically video games — did not receive the same attention within AI research circles as they do today. As high-quality research in AI applications in video games grew, it wasn’t long before those with a taste for Pac-Man research moved on to looking at Ms. Pac-Man given the challenges it presents — which we are still conducting research for in 2017. Ms. Pac-Man is odd in that it was originally an unofficial sequel: Midway, who had released the original Pac-Man in the United States, had become frustrated at Namco’s continued failure to release a sequel. While Namco did in time release a sequel dubbed Super Pac-Man, which in many ways is a departure from the original, Midway decided to take matters into their own hands. Ms. Pac-Man was — for lack of a better term — a mod; originally conceived by the General Computing Company based in Massachusetts. GCC had got themselves into a spot of legal trouble with Midway having previously created a mod kit for popular arcade game Missile Command. As a result, GCC were essentially banned from making further mod kits without the original game’s publisher providing consent. Despite the recent lawsuit hanging over them, they decided to show Midway their Pac-Man mod dubbed Crazy Otto, who liked it so much they bought it from GCC, patched it up to look like a true Pac-Man successor and released it in arcades without Namco’s consent (though this has been disputed). Note: For our younger audience, mod kits in the 1980s were not simply software we could use to access and modify parts of an original game. 
These were actual hardware: printed circuit boards (PCBs) that could either be added next to the existing game in the arcade unit, or replace it entirely. While nowhere near as common nowadays due to the rise of home console gaming, there are many enthusiasts who still use and trade PCBs fitted for arcade gaming. Ms. Pac-Man looks very similar to the original, albeit with the somewhat stereotypical bow on Ms. Pac-Man’s hair/head(?) and a couple of minor graphical changes. However, the sequel also received some small changes to gameplay that have a significant impact. One of the most significant changes is that the game now has four different maps. In addition, the placement of fruit is more dynamic: the fruit now moves around the maze. Lastly, a small change is made to the ghost behaviour such that, periodically, the ghosts will commit a random move. Otherwise, they continue to exhibit their prescribed behaviour from the original game. Each of these changes has a significant impact on how both humans and AI subsequently approach the problem. Changes made to the maps do not have a significant impact upon AI approaches. For many of the approaches discussed earlier, a new map is simply another configuration of the topography used to model the maze. And if the agent uses more egocentric models for input (i.e. relative to Pac-Man), then the map layout is not really a consideration, given the input is contextual. This is only an issue should the agent’s design require some form of pre-processing or expert rules that are based explicitly upon the configuration of the map. With respect to a human, this is also not a huge task. The only real issue is that a human will have become accustomed to playing on a given map, devising strategies that utilise parts of the map to good effect. However, all they need is practice on the new maps. In time, new strategies can be formulated. The small change to ghost behaviour, which results in random moves occurring periodically, is highly significant. This is because the deterministic model of the original game is completely broken. Previously, each ghost had a prescribed behaviour: with some computational effort, you could determine the state (and indeed the location) of a ghost at frame n of the game, where n is a certain number of steps ahead of the current state. Any implementation that relies upon this knowledge, whether as part of a heuristic or as an expert knowledge base that gives explicit instructions based on assumptions about ghost behaviour, is now sub-optimal. If the ghosts can make random decisions without any real warning, then we no longer have the same level of confidence in any of our ghost-prediction strategies. Similarly, this has an impact on human players. The deterministic behaviour of the ghosts in the original Pac-Man, while complex, can eventually be recognised by a human player, and the leading human players could factor that behaviour, at some level, into their decision-making process. However, in Ms. Pac-Man, the change to a non-deterministic domain has a similar effect on humans as it does on AI: we can no longer say with complete confidence what the ghosts will do, given they can make random moves. Evidence that a particular type of problem or methodology has gained some traction in a research community can be found in competitions. If a competition exists that is open to the larger research community, it is, in essence, a validation that this problem merits consideration. In the case of Ms.
Pac-Man, there have been two competitions. The first was organised by Simon Lucas (at the time a professor at the University of Essex in the UK) and held at the Conference on Evolutionary Computation (CEC) in 2007. It was subsequently held at a number of conferences, notably the IEEE Conference on Computational Intelligence and Games (CIG), until 2011. http://dces.essex.ac.uk/staff/sml/pacman/PacManContest.html This competition used the screen capture approach previously mentioned in (Lucas, 2005), which relied on an existing version of the game. While the organisers would use Microsoft’s own version from the ‘Revenge of Arcade‘ title, you could also use the likes of webpacman for testing, given it was believed to run the same ROM code. As shown in the screenshot, the code is actually taking information directly from the running game. One benefit of this approach is that it prevents the AI developer from accessing the code to potentially ‘cheat’: you can’t access the source code and make calls to the likes of the ghosts to determine their current move. Instead, the developer is required to work with exactly the same information that a human player would. The winner of the IEEE CIG 2009 competition, ICE Pambush 3, can be seen in the video below: In 2011, Simon Lucas, in conjunction with Philipp Rohlfshagen and David Robles, created the Ms Pac-Man vs Ghosts competition. In this iteration, the ‘screen scraping’ approach was replaced with a Java implementation of the original game, which provided an API for developing your own bot for competitions. This iteration ran at four conferences between 2011 and 2012. One of the major changes to this competition is that you can now also write AI controllers for the ghosts. Competitors’ submissions were then pitted against one another. The ranking of submissions for both Ms. Pac-Man and the ghosts from the 2012 league is shown below. During the earlier competition, there was a continued interest in the use of learning algorithms, often building on the evolutionary approaches seen in earlier research to evolve code that is most effective at this problem. This ranged from evolving ‘fuzzy systems’ that use rules driven by fuzzy logic (yes, that is a real thing), shown in (Handa, 2008), to the use of influence maps in (Wirth, 2008), and a different take that uses ant colony optimisation to create competitive players (Emilio et al, 2010). This research also stirred interest from researchers in reinforcement learning: a different kind of learning algorithm that learns from the positive and negative impacts of actions. Note: It has been argued that reinforcement learning algorithms are similar to how the human brain operates, in that feedback is sent to the brain upon committing actions. Over time we then associate certain responses with ‘good’ or ‘bad’ outcomes. Placing your hand over a naked flame is quickly associated with a bad outcome, given that it hurts! Simon Lucas and Peter Burrow took to the competition framework as a means to assess whether reinforcement learning, specifically an approach called Temporal Difference Learning, would yield stronger returns than evolving neural networks (Burrow and Lucas, 2009). The results appeared to favour the use of neural nets over the reinforcement learning approach. Despite that, one of the major contributions Ms.
Pac-Man has generated is research into Monte Carlo methods: an approach where repeated sampling of states and actions allows us to ascertain not only the reward we will typically attain having made an action, but also the ‘value’ of the state. More specifically, there has been significant exploration of whether Monte-Carlo Tree Search (MCTS), an algorithm that assesses the potential outcomes at a given state by simulating play forward from it, could prove successful. MCTS has already proven to be effective in games such as Go (Chaslot et al, 2008) and Klondike Solitaire (Bjarnason et al. 2009). Naturally, given this is merely an article on the subject and not a literature review, we cannot cover this in immense detail. However, there have been a significant number of papers focussed on this approach. For those interested, I would advise you to read (Browne et al. 2012), which gives an extensive overview of the method and its applications. One of the reasons that this algorithm proves so useful is that it attempts to address the issue of whether your actions will prove harmful in the future. Much of the research discussed in this article is very good at dealing with immediate or ‘reflex’ responses. However, few approaches would determine whether actions would hurt you in the long term. This is hard to determine for AI without putting some processing power behind it, and even harder when working in a dynamic video game that requires quick responses. MCTS has proven useful since it can simulate whether an action taken on the current frame will be useful 5/10/100/1000 frames in the future, and has led to significant improvements in AI behaviour. While Ms. Pac-Man helped push MCTS research, many researchers have now moved on to the Physical Travelling Salesman Problem (PTSP), which provides its own unique challenges due to the nature of the game environment. Ms. Pac-Man is still to date an interesting research area given the challenge that it presents. We are still seeing research conducted within the community as we attempt to overcome the challenge that one small change to the game code presented. In addition, we have moved on from simply focussing on representing the player and started to focus on the ghosts as well, leading to the aforementioned Pac-Man vs. Ghosts competition. While the gaming community at large has more or less forgotten about the series, it has had a significant impact on the AI research community. While the interest in Pac-Man and Ms. Pac-Man is beginning to dissipate, it has encouraged research that has made a significant contribution to artificial and computational intelligence in general. http://www.pacman-vs-ghosts.net/ - The homepage of the competition, where you can download the software kit and try it out yourself. http://pacman.shaunew.com/ - An unofficial remake inspired by the aforementioned Pac-Man dossier by Jamey Pittman. (Bjarnason, R., Fern, A. and Tadepalli, P., 2009) Lower Bounding Klondike Solitaire with Monte-Carlo Planning. Proceedings of the International Conference on Automated Planning and Scheduling, 2009. (Browne, C., Powley, E., Whitehouse, D., Lucas, S.M., Cowling, P., Rohlfshagen, P., Tavener, S., Perez, D., Samothrakis, S. and Colton, S., 2012) A Survey of Monte Carlo Tree Search Methods, IEEE Transactions on Computational Intelligence and AI in Games (2012), pages: 1–43. (Burrow, P.
and Lucas, S.M., 2009) Evolution versus Temporal Difference Learning for Learning to Play Ms Pac-Man, Proceedings of the 2009 IEEE Symposium on Computational Intelligence and Games. (Emilio, M., Moises, M., Gustavo, R. and Yago, S., 2010) Pac-mAnt: Optimization Based on Ant Colonies Applied to Developing an Agent for Ms. Pac-Man. Proceedings of the 2010 IEEE Symposium on Computational Intelligence and Games. (Gallagher, M. and Ledwich, M., 2007) Evolving Pac-Man Players: What Can We Learn From Raw Input? Proceedings of the 2007 IEEE symposium on Computational Intelligence and Games. (Gallagher, M. and Ryan., A., 2003) Learning to Play Pac-Man: An Evolutionary, Rule-based Approach. Proceedings of the 2003 Congress on Evolutionary Computation (CEC). (Chaslot, G. M. B., Winands, M. H., & van Den Herik, H. J. 2008). Parallel monte-carlo tree search. In Computers and Games (pp. 60–71). Springer Berlin Heidelberg. (Handa, H.) Evolutionary Fuzzy Systems for Generating Better Ms. PacMan Players. Proceedings of the IEEE World Congress on Computational Intelligence. (Kalyanpur, A. and Simon, M., 2001) Pacman using genetic algorithms and neural networks. (Koza, J., 1992) Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press. (Lucas, S.M.,2005) Evolving a Neural Network Location Evaluator to Play Ms. Pac-Man, Proceedings of the 2005 IEEE Symposium on Computational Intelligence and Games. (Pittman, J., 2011) The Pac-Man Dossier. Retrieved from: http://home.comcast.net/~jpittman2/pacman/pacmandossier.html (Rosca, J., 1996) Generality Versus Size in Genetic Programming. Proceedings of the Genetic Programming Conference 1996 (GP’96). (Wirth, N., 2008) An influence map model for playing Ms. Pac-Man. Proceedings of the 2008 Computational Intelligence and Games Symposium Originally published at aiandgames.com on February 10, 2014 — updated to include more contemporary Pac-Man research references. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. AI and games researcher. Senior lecturer. Writer/producer of YouTube series @AIandGames. Indie developer with @TableFlipGames.
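As a rough illustration of the Monte-Carlo Tree Search approach discussed in the article above, the following is a minimal sketch in Python. It is not the code used in any of the cited papers or competitions: the Game interface it assumes (legal_moves, apply, is_over, score) is a hypothetical stand-in for a Ms. Pac-Man simulator, and the iteration count, rollout horizon and exploration constant are arbitrary choices.

import math, random

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.value = [], 0, 0.0
        self.untried = list(state.legal_moves())  # hypothetical game interface

    def ucb1(self, c=1.4):
        # Standard UCB1: exploit average value, explore rarely visited children.
        return (self.value / self.visits) + c * math.sqrt(math.log(self.parent.visits) / self.visits)

def mcts(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 until a node with untried moves or a terminal state.
        while not node.untried and node.children:
            node = max(node.children, key=lambda n: n.ucb1())
        # 2. Expansion: try one new move from this node.
        if node.untried and not node.state.is_over():
            move = node.untried.pop()
            child = Node(node.state.apply(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: play random moves up to a horizon and record the score.
        state = node.state
        for _ in range(50):  # rollout horizon; arbitrary for this sketch
            if state.is_over():
                break
            state = state.apply(random.choice(state.legal_moves()))
        reward = state.score()
        # 4. Backpropagation: update statistics along the path back to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited move from the root.
    return max(root.children, key=lambda n: n.visits).move

The four steps (selection, expansion, simulation, backpropagation) are what let the search estimate whether a move made on the current frame still pays off many frames later, which is exactly the long-term question the article raises.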
Matt Wiese
4
3
https://medium.com/@mattwiese/digital-companionship-8d4760c57034?source=tag_archive---------4----------------
Digital Companionship – Matt Wiese – Medium
Recently, I chose to treat myself to a movie I’ve been eyeing for a while: Her. The plot revolves around a letter-writer who falls in love with his computer’s artificial intelligence as a way to cope with his divorce. A complicated story which pleases viewers with both laughs and the occasional tear. Provocative, if only for its “high horse” conclusion. However, Samantha (the AI’s self-proclaimed identity) interacts with the protagonist Theodore Twombly through a couple of avenues. The one I am most interested in is his retro computer terminal: a mere white and plastic monitor which he speaks to through a microphone that one surmises is located somewhere on the exterior. Initially, I was perplexed that he only had a monitor and no desktop to go with it, but it then hit me like a Homer Simpson “D’oh!” moment: his computer is an all-in-one. A concept and design that, to my limited knowledge, was popularized by Apple’s iMac. This got me thinking: what if Apple developed its pseudo-intelligent digital assistant Siri for use on its computers with microphone inputs, such as their iMacs and MacBooks? “Well,” I thought, “I can’t be the first person to have thought of this,” and so I did a bit of digging. Lo and behold, Apple just recently filed a patent for this very purpose. What a perfect tool, if tuned more finely over this period of time, to be integrated into the desktop environment. Fire up Siri with a custom key combination and ask her the current trading price of Tesla? Great! Designing an invitation and want help with directions, but you’re too much of a lard to open a browser tab? Awesome! Need help burying a body while playing Minecraft? Genius! Yet, I wouldn’t quite like Siri to develop into a “real” person, with emotions and all that’s attached, at least at the moment. I’m content with human beings and have no need to find companionship with bytes like Her’s Theodore Twombly (though I don’t blame him for doing so). Instead, a digital tool (assistant, if you will) with a breadth of capabilities for analyzing data and helping me with workflow would be a pleasure. If only Apple would release a Siri API in the near future, oh the possibilities. A tool, yes, indeed just like the first-generation robots from Isaac Asimov’s I, Robot: an artificial intelligence that behaves without feeling and can assist me in a wide variety of tasks without emotional interference or a possible uncanny-valley side effect. Even if Apple doesn’t jump on this interesting opportunity, I’m sure Microsoft will with Cortana, or perhaps another competitor will. I’d just enjoy the sheer novelty of talking with my computer, which harkens back to my days of talking to the computer as a kid. This time, though, I won’t be yelling at it to boot Doom without crashing; no, I’ll be complaining about why my for loop throws an error. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Topics that interest me
Matt O'Leary
373
12
https://howwegettonext.com/i-let-ibm-s-robot-chef-tell-me-what-to-cook-for-a-week-d881fc884748?source=tag_archive---------0----------------
I Let IBM’s Robot Chef Tell Me What to Cook for a Week
Originally published at www.howwegettonext.com. If you’ve been following IBM’s Watson project and like food, you may have noticed growing excitement among chefs, gourmands and molecular gastronomists about one aspect of its development. The main Watson project is an artificial intelligence that engineers have built to answer questions in native language — that is, questions phrased the way people normally talk, not in the stilted way a search engine like Google understands them. And so far, it’s worked: Watson has been helping nurses and doctors diagnose illnesses, and it’s also managed a major “Jeopardy!” win. Now, Chef Watson — developed alongside Bon Appetit magazine and several of the world’s finest flavor-profilers — has been launched in beta, enabling you to mash recipes according to ingredients of your own choosing and receive taste-matching advice which, reportedly, can’t fail. While some of the world’s foremost tech luminaries and conspiracy theorists are a bit skeptical about the wiseness of A.I., if it’s going to be used at all, allowing it to tell you what to make out of a fridge full of unloved leftovers seems like an inoffensive enough place to start. I decided to put it to the test. While employed as a food writer for well over a decade, I’ve also spent a good part of the last nine years working on and off in kitchens. Figuring out how to use “spare” ingredients has become quite commonplace in my professional life. I’ve also developed a healthy disregard for recipes as anything other than sources of inspiration (or annoyance) but for the purposes of this experiment am willing to follow along and try any ingredient at least once. So, with this in mind, I’m going to let Watson tell me what to eat for a week. I’ve spent a good amount of time playing around with the app, which can be found here, and I’m going to follow its instructions to the letter where possible. I have an audience of willing testers for the food and intend to do my best in recreating its recipes on the plate. Still, I’m going to try to test it a bit. I want to see whether or not it can save me time in the kitchen; also, whether it has any amazing suggestions for dazzling taste matches; if it can help me use things up in the fridge; and whether or not it’s going to try to get me to buy a load of stuff I don’t really need. A lot of work has gone into the creation of this app — and a lot of expertise. But is it useable? Can human beings understand its recipes? Will we want to eat them? Let’s find out. A disclaimer before we start: Chef Watson isn’t great at telling you when stuff is actually ready and cooked. You need to use your common sense. Take all of its advice as advice and inspiration only. It’s the flavors that really count. Monday: The Tailgating Corn Salmon Sandwich My first impression is that the app is intuitive and pretty simple to use. Once you’ve added an ingredient, it suggests a number of flavor matches, types of dishes and “moods” (including some off-the-wall ones like “Mother’s Day”). Choose a few of these options and the actual recipes begin to bunch up on the right of the screen. I selected salmon and corn, then opted for the wildly suggestive “Tailgating corn salmon sandwich.” The recipe page itself has links to the original Bon Appetit dish that inspired your A.I. mélange, accompanied by a couple of pictures. 
There’s a battery of disclaimers stating that Chef Watson really only wants to suggest ideas, rather than tell you what to eat — presumably to stop people who want to try cooking with fiberglass, for example, from launching “no win, no fee” cases. My own salmon tailgating recipe seemed pretty straightforward. There are a couple of nice touches on the page, with regard to usability: You can swap out any ingredients that you might not have in stock for others, which Watson will suggest (it seems fond of adding celery root to dishes). For this first attempt I decided to follow Watson’s advice almost to a T. I didn’t have any garlic chile sauce but managed to make a presumably functional analog out of some garlic and chili sauce. The only other change I made involved adding some broad beans, because I like broad beans. During prep, I employed a nearly unconscious bit of initiative, namely when I cooked the salmon. It’s entirely likely that Watson was, as seemed to be the case, suggesting that I use raw salmon, but it’s Monday night and I’m not in the mood for anything too mind-bending. Team Watson: If I ruined your tailgater with my pig-headed insistence on cooked fish, I’m sorry. Although I’m not too sorry because, you know, it was actually a really good dish. I was at first unsure — the basil seemed like a bit of an afterthought; I wasn’t sure the lime zest was necessary; and cold salmon salad on a burger bun isn’t really an easy sell. But damn it, I’d make that sandwich again. It was missing some substance overall. It made enough for two small buns, so I teamed it up with a nice bit of Korean-spiced, pickled cucumber on the side, which worked well. My fellow diner deemed it “fine, if a little uninteresting” — and yes, maybe it could have done with a bit more sharpness and depth, and maybe a little more “a computer told me how to make this” flavor wackiness, but overall: Well done. Hint! Definitely add broad beans. They totally worked. Now, to mull over what “tailgating” might mean... Tuesday: Spanish Blood Sausage Porridge It was day two of the Chef Watson “guest slot” in the kitchen, and things were about to get interesting. Buoyed by yesterday’s Tailgating Salmon Sandwich success, I decided to give Watson something to sink its digital teeth into and supply only one ingredient: blood sausage. I also specified “main” as a style, really so that he/she/it knew that I wasn’t expecting dessert. If I’m being very honest, I’ve read more appetizing recipes than blood sausage porridge. Even the inclusion of the word “Spanish” doesn’t do anything to fancy it up. And, a bit concerningly, this is a recipe that Watson has extrapolated from one for Rye Porridge with Morels, replacing the rye with rice, the mushroom with sausage and the original’s chicken livers with a single potato and one tomato. Still, maybe it would be brilliant. But unlike yesterday, I ran into some problems. I wasn’t sure how many tomatoes and potatoes Watson expected me to have here — the ingredients list says one of each; the method suggests many — or also why I had to soak the tomato in boiling water first, although it makes sense in the original mushroom-centric method. Additionally, Wastson offered the whimsical instruction to just “cook” the tomatoes and potatoes, presumably for as long as I feel like. There’s a lot of butter involved in this recipe and rather too much liquid recommended: eight cups of stock for one-and-a-half of rice. I actually got a bit fed up after four and stopped adding them. 
Forty to 50 minutes cooking time was a bit too long, too — again, that’s been directly extracted from the rye recipe. But these were mere trifles. The dish tasted great. It’s a lovely blend of flavors and textures, thanks to the blood sausage and the potato. The butter works brilliantly and the tomato on top is a nice touch. And it proves Watson’s functionality. You can suggest one ingredient that you find in the fridge, use your initiative a bit and you’ll be left with something lovely. And buttery. Lovely and buttery. Well done, Watson! Wednesday: Diner Cod Pizza When I read this recipe, I wondered whether this was going to be it for me and Watson. “Diner,” “cod” and “pizza” are three words that don’t really belong together, and the ingredients list seemed more like a supermarket sweep than a recipe. Now that I’ve actually made the meal, I don’t know what to think about anything. You might remember a classic 1978 George A. Romero-directed horror film called“Dawn of the Dead.” Its 2004 remake, following the paradigm shift to running zombies in “28 Days Later,” suffered critically. My impression of this remake was always that if it’d just been called something different — “Zombies Go Shopping,” for instance — every single person who saw it would have loved it. As it was, viewers thought it seemed unauthentic, and it gathered what was essentially some unfair criticism. (See also the recent “RoboCop” remake or, as I call it,“CyberSwede vs. Detroit.”) This meal is my culinary “Dawn of the Dead.” If only Watson had called it something other than pizza, it would have been utterly perfect. It emphatically isn’t a pizza. It has as much in common with pizza as cake does. But there’s something about radishes, cod, ginger, olives, tomatoes and green onions on a pizza crust that just work remarkably well. To be clear, I fully expected to throw this meal away. I had the website for curry delivery already open on my phone. That’s all before I ate two of the pizzas. They taste like nothing on earth. The addition of Comté cheese and chives is the sort of genius/absurdity that makes people into millionaires. I was, however, nervous to give one to my pregnant fiancée; the ingredients are so weird that I was just sure she’d suffer some really strange psychic reaction or that the baby would grow up to be extremely contrary. Be careful with this recipe preparation: As I’ve found with Watson, it doesn’t tell you how to assure that your fish is cooked; nor does it tell you how long to pre-bake the crust base. These kinds of things are really important. You need to make sure this dish is cooked properly. It takes longer than you might expect. I’m writing this from Sweden, the home of the ridiculous “pizza,” and yet I have a feeling that if I were to show this recipe to a chef who ordinarily thinks nothing of piling a kilo of kebab meat and Béarnaise sauce on bread and serving it in a cardboard box with a side salad of fermented cabbage, he or she would balk and tell me that I’ve gone too far. Which would be his or her loss. I think I’m going to have to take this to “Dragon’s Den” instead. Watson, I don’t know how I’m going to cope with normal recipes after our little holiday together. You’re changing the way I think about food. Thursday: Fall Celery Sour Cream Parsley Lemon Taco Following yesterday’s culinary epiphany, I was keen to keep a cool head and a critical eye on Chef Watson, so I decided to road-test one theory from an article I found on the Internet. 
It mentioned that some of the most frequently discarded items in American fridges are celery, sour cream, fresh herbs and lemons. Let’s not dwell too much on the “luxury problems” aspect of this (I can’t imagine that people everywhere in the world are lamenting the amount of sour cream and flat-leaf parsley they toss) and focus instead on what Watson can do with this admittedly tricky-sounding shopping list. What it did was this: Immediately add shrimp, tortillas and salsa verde. The salsa verde it recommended, from an un-Watsoned recipe courtesy of Bon Appetit, was fantastic. It’s nothing like the salsa verde I know and love, with its capers and dill pickles and anchovies: This iteration required a bit of a simmer, was super-spicy and delicious. (I had to cheat and use normal tomatoes instead of tomatillos, but I don’t think it made a huge difference.) The marinade for the shrimp was unusual in that like a lot of what Watson recommends it used a ton of butter. A hefty wallop of our old friend kosher salt, too. Now, I’ve worked as a chef on and off for several years so am unfazed by the appearance of salt and butter in recipes. They’re how you make things taste nice. However, there’s no getting away from the fact that I bought a stick of butter at the start of the week and it’s already gone. The assembled tacos were good — they were uncontroversial. My dining companion deemed the salsa “a bit too spicy,” but I liked the kick it gave the dish and the sour cream calmed it down a bit. It struck me as a bit of a shame to fire up the barbecue for only about two minutes’ worth of cooking time, but it’s May and the sun is shining so what the heck. Was this recipe as absurd as yesterday’s? Absolutely not. Was it as memorable? Sadly, I don’t think so. Would I make it again? I’m sorry, Watson, but probably not. These tacos were good but ultimately not worth the prep hassle. Friday: Mexican Mushroom Lasagna Before I start, I don’t want you to get the impression that my love affair (which reached the height of its passion on Wednesday) with Watson is over. It absolutely isn’t. I have been consistently impressed with the software’s intelligence, its ease of use and the audacity of some of its suggestions. For flavor-matching, it’s incredible. It really works. It probably won’t save you any money; it won’t make you thin; and it won’t teach you how to actually cook — all of that stuff you have to work out for yourself. But, at this stage, it’s a distinctly impressive and worthwhile project. Do give it a go. But... be prepared to have to coax something workable out of it every once in a while. Today, it took me a long time to find a meat-free recipe which didn’t, when it came down to it, contain some sort of meat. I selected “meat” as an option for what I didn’t want to include, and it took me to a recipe for sausage lasagne. With one-and-a-half pounds of sausage in it. I removed the sausage, and it replaced it with turkey mince. Maybe someone just needs to tell Watson that neither sausages nor turkeys grow on trees. After much tinkering and submitting and resubmitting, the recipe I ended up with is for lasagne topped with a sort of creamy mashed potato sauce. It’s very easy and it’s a profoundly smart use of ingredients. The lasagne is not the world’s most aesthetically appealing dish, and it’s not as astonishingly flavored as some of this week’s other revelations, but I don’t think I’ll be making my cheese sauce in any other way from this point onwards. Top marks. 
And, in essence, this kind of sums up Watson for me. You need to tinker with it a bit before you can find something usable. You may need to make a “do I want to put mashed potato on this lasagne?” leap of faith, and you’re going to have to actually go with it if you want the app’s full benefit. You’ll consume a lot of dairy products, and you might find yourself daydreaming about nice, simple, unadorned salads if you decide to go all-in with its suggestions. But an A.I. that can tell us how to make a pizza out of cod, ginger and radishes that you know is going to taste amazing? One that will gladly suggest a workable recipe for blood sausage porridge and walk you through it without too much hassle? That gives you a “how crazy” option for each ingredient? That is only designed to make the lives of food enthusiasts more interesting? Why on earth not? Watson and I are going to be good friends from this point forward, even if we don’t speak every day. And I can’t wait to introduce it to others. Now, though, I’m going to only consume smoothies for a week. Seriously, if I even look at butter in the next few days, I’m probably going to puke. This fall, Medium and How We Get To Next are exploring the future of food and what it means for us all. To get the latest and join the conversation, you can follow Future of Food. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Inspiring stories about the people and places building our future. Created by Steven Johnson, edited by Ian Steadman, Duncan Geere, Anjali Ramachandran, and Elizabeth Minkel. Supported by the Gates Foundation.
Tim O'Reilly
1.3K
6
https://wtfeconomy.com/the-wtf-economy-a3bd5f52ef00?source=tag_archive---------1----------------
The WTF Economy – From the WTF? Economy to the Next Economy
WTF?! In San Francisco, Uber has 3x the revenue of the entire prior taxi and limousine industry. WTF?! Without owning a single room, Airbnb has more rooms on offer than some of the largest hotel groups in the world. Airbnb has 800 employees, while Hilton has 152,000. WTF?! Top Kickstarters raise tens of millions of dollars from tens of thousands of individual backers, amounts of capital that once required top-tier investment firms. WTF?! What happens to all those Uber drivers when the cars start driving themselves? AIs are flying planes, driving cars, advising doctors on the best treatments, writing sports and financial news, and telling us all, in real time, the fastest way to get to work. They are also telling human workers when to show up and when to go home, based on real-time measurement of demand. The algorithm is the new shift boss. WTF?! A fabled union organizer gives up on collective bargaining and instead teams up with a successful high tech entrepreneur and investor to go straight to the people with a local $15 minimum wage initiative that is soon copied around the country, outflanking a gridlocked political establishment in Washington. What do on-demand services, AI, and the $15 minimum wage movement have in common? They are telling us, loud and clear, that we’re in for massive changes in work, business, and the economy. What is the future when more and more work can be done by intelligent machines instead of people, or only done by people in partnership with those machines? What happens to workers, and what happens to the companies that depend on their purchasing power? What’s the future of business when technology-enabled networks and marketplaces are better at deploying talent than traditional companies? What’s the future of education when on-demand learning outperforms traditional universities in keeping skills up to date? Over the past few decades, the digital revolution has transformed the world of media, upending centuries-old companies and business models. Now, it is restructuring every business, every job, and every sector of society. No company, no job is immune to disruption. I believe that the biggest changes are still ahead, and that every industry and every organization will have to transform itself in the next few years, in multiple ways, or fade away. We need to ask ourselves whether the fundamental social safety nets of the developed world will survive the transition, and more importantly, what we will replace them with. We need a focused, high-level conversation about the deep ways in which computers and their ilk are transforming how we do business, how we work, and how we live. Just about everyone’s asking WTF? (“What the F***?” but also, more charitably “What’s the future?”) That’s why I’m launching a new event called Next:Economy (What’s The Future of Work?), to be held at the Palace Hotel in San Francisco Nov 12 and 13, 2015. My goal is to shed light on the transformation in the nature of work now being driven by algorithms, big data, robotics, and the on-demand economy. We put on a lot of events at O’Reilly. Many of them have a singular focus and are aimed at practitioners of a specific discipline: Strata and Hadoop World is an event about data science, Velocity about web performance and operations, Solid about the new hardware movement, and OSCON about open source software development. But this one is more exploratory, aimed at a business audience trying to come to grips with trends that are already felt but not well understood. 
Putting together an event like this is a great way to discover how a lot of disparate people, ideas, and trends fit together. I’ve been engaging some of the smartest people I know in fields as diverse as robotics, AI, the on-demand economy, and the economics of labor. I’m thinking hard about the key drivers of some of today’s most successful startups, like Uber and AirBnb, and about what technology like driverless cars, Siri, Google Now, Microsoft Cortana, and IBM Watson teach us about the future. And I’m starting to see the connections. Over the next weeks and months, I’ll be posting follow up pieces explaining in more detail my thinking on key issues we’ll be exploring at the event. I will be leading a robust discussion here on Medium with some of the best thinkers and movers on these issues — a conversation that welcomes all voices. We’ll be discussing both here and at the event how augmented workers form a common thread between the strategies of companies as diverse as Uber, GE, and Microsoft, how companies in every business sector can harness the power and scalability of networked platforms and marketplaces, why the divisive debates about the labor practices of on-demand companies might provide a path to a better future for all workers, why the on-demand services of the future require a new infrastructure of on-demand education, and why building services that uncover true unmet demands and solve hard problems are ultimately the best way to create jobs. In the meantime, head on over to the conference site to see some of the amazing speakers we’ve already signed on (many more to come) and a taste of what they’ll be covering. In many ways, an event like this is the product of the people who are there — speakers and attendees alike — so I’ve tried to tell the story of the themes we are exploring through the people who will be there. Each speaker page provides not just a biography of the speaker, but a selection of provocative quotes from what they’ve written. In the near future, we’ll be providing additional opportunities for discussion and exploration. My hope for this event is that it becomes more than a conference. For it to be measured as a success, it must catalyze action. I want work that comes out of this collision of ideas to inspire entrepreneurs to tackle missing pieces of the Next:Economy puzzle, to help frame the right government policies so that innovations in the nature of work are encouraged rather than repressed, and to focus every industry on rebuilding the economy by solving hard problems and creating what Steve Jobs might have called “insanely great” new services. Tim O’Reilly is the founder and CEO of O’Reilly Media and a partner at O’Reilly AlphaTech Ventures (OATV). Tim has a history of convening conversations that reshape the industry. In 1998, he organized the meeting where the term “open source software” was agreed on, and helped the business world understand its importance. In 2004, with the Web 2.0 Summit, he defined how “Web 2.0” represented not only the resurgence of the web after the dot com bust, but a new model for the computer industry, based on big data, collective intelligence, and the internet as a platform. In 2009, with his “Gov 2.0 Summit,” he framed a conversation about the modernization of government technology that has shaped policy and spawned initiatives at the Federal, State, and local level, and around the world. 
He has now turned his attention to implications of the on-demand economy, AI, and other technologies that are transforming the nature of work and the future shape of the business world. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder and CEO, O'Reilly Media. Watching the alpha geeks, sharing their stories, helping the future unfold. How work, business, and society face massive, technology-driven change. A conversation growing out of Tim O’Reilly’s book WTF? What’s the Future and Why It’s Up To Us, and the Next:Economy Summit.
James Cooper
57
3
https://render.betaworks.com/announcing-poncho-the-weatherbot-bd14255e1b25?source=tag_archive---------2----------------
Announcing Poncho the WeatherBot – Render-from-betaworks
You can now get personal weather forecasts in Slack. UPDATE: Since publishing this piece in November 2015 the Poncho Weather Messenger bot launched on stage at the Facebook conference and is now the most popular bot on Facebook. If you are new to bots this is a great place to start. Try it out, here. You’ll like it. Poncho is a personalized weather service from the coolest of cats. Who needs boring and meaningless data when you can get personalized forecasts with gifs and text that will make you smile - whatever the weather. Vanity Fair said, ‘It’s like being pals with the Weatherman’. Which is true, if your weatherman was super cool. Up until now we have been a text and email service. You get texts or emails in the morning and evenings. You can sign up for that right here. But we know that people want more Poncho. You guys want Poncho on call. With new Slack integration, we’ve got you covered. If you are using Slack for your messaging needs (and if not, why not?) we have some uh-maze-ing news for you. That’s right — you can summon up your very own forecast from Poncho in Slack. We are joining others like Lyft and Foursquare as Slack officially launches Slash Command today. OK, first up let me tell you how it works. You simply type in ‘/poncho’ and your zipcode into Slack and then BOOM: the next thing you’ll see is your very own forecast for that zipcode, resplendent with text and gifs and everything. So for example in the video I typed in ‘/poncho 11217’ and I got a forecast for my zipcode in Brooklyn. It was Halloween so the theme was ‘The Shining’ which is why the forecast was Weather spelt backwards and the gif was the scary kid from the film. If you are new to Poncho you’ll soon figure out that half the fun is deciphering the messages our wonderful editorial team put together. Setting up Poncho in Slack is super simple. Just click the ‘Add to Slack’ button. Yes, that one up there. Make sure to add it to all the channels so that Poncho will be available wherever you want. You wouldn’t want your friends to miss out, would you? Unless of course you’re keeping all the best jokes for yourself. I’ve seen that happen. All righty. See you on Slack, err, slackers. (And if you are not on Slack you can still use the text and email version or wait for our super cute app which will be coming out soon.) From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Head of Creative at betaworks, New York. Ideas and Observations from betaworks
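For readers curious about the plumbing behind a Slash Command like /poncho, here is a minimal sketch of how such an integration is typically wired up. This is not Poncho's actual code: Slack POSTs the text that follows the command (here, a zip code) as a form field to a URL the developer configures and expects a JSON reply, and the get_forecast helper below is a hypothetical placeholder for the real forecast service.

from flask import Flask, request, jsonify

app = Flask(__name__)

def get_forecast(zipcode):
    # Hypothetical placeholder: a real service would look up the weather for the
    # zip code and wrap it in Poncho-style copy with a gif link.
    return f"Forecast for {zipcode}: sunshine with a chance of gifs."

@app.route("/slack/poncho", methods=["POST"])
def poncho_command():
    # Slack sends the slash command as a form-encoded POST; the text after
    # "/poncho" (e.g. "11217") arrives in the "text" field.
    zipcode = request.form.get("text", "").strip()
    if not zipcode:
        return jsonify(response_type="ephemeral",
                       text="Please include a zip code, e.g. /poncho 11217")
    return jsonify(response_type="in_channel", text=get_forecast(zipcode))

if __name__ == "__main__":
    app.run(port=3000)

Returning a response_type of "in_channel" makes the forecast visible to everyone in the channel rather than only to the person who typed the command.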
Joel Leeman
69
5
https://becominghuman.ai/i-think-i-m-slowly-turning-into-a-cyborg-cbecfa8462df?source=tag_archive---------3----------------
I think I’m slowly turning into a cyborg – Becoming Human: Artificial Intelligence Magazine
It’s only a matter of time. As much of life moves online, atomized into bits on apps, social networks and a variety of other web products, I’m beginning to notice more and more that I rely on these tools to supplement my brainpower. It sounds melodramatic, I realize, but go with me for a second here. Take my schedule. At work, I am glued to Outlook in an unhealthy way. Like, if I don’t have that little ding go off 15 minutes before a meeting starts, there’s no way I’m going to make it. Meetings come and go and change and happen all the time, but I don’t really pay attention to memorizing any of the details because I know I can always glance at my phone to know what I’m supposed to be doing. I hold a similar unhealthy relationship with Facebook, too. Back in the early days of Facebook I actually really enjoyed logging in every day, seeing whose birthday it was, and writing a little note of well wishes. Fast forward to present day, and I’m terrible at wishing people happy birthday, mostly because the 4–7 of my friends who have a birthday every day overwhelms me! I’m so scared of missing one or two that I neglect all of them. Having the ability to know when anyone’s special day is has put a damper on actually remembering a few of them without the aid of Facebook. Do you know anyone’s birthdays by memory any more? Or have you, like me, lost that part of your memory? In fact, if I don’t write something down with pen and paper (a practice vastly underappreciated IMHO), it feels like it might be lost forever, even if it’s just a click a way. And I’ve actually caught myself using Twitter as a partial brain aid. What was I up to last week? Oh, I’ll just scroll back and see what I was Tweeting about. Or maybe Instagram to my little online scrapbook of what I’ve been up to (or what I’ve shown the world I’m up to). I’m also quite directionally challenged, and rely on my iPhone way too much to get around (though maybe I’m just truly terrible at directions, who knows). But why would I take the time to study streets and landmarks when I’ve got a world’s worth of maps sitting in my pocket? (Side note, are we losing the art of getting lost?) And there’s nothing wrong with all that, I suppose. It’s more that I have a weird feeling maybe I’m relying on technology a little much? What prompted my ruminating on all this was a video I watched asking random couples if they knew each other’s phone numbers by heart. Spoiler: None of them did. I actually made an effort several years ago to learn my partner’s number, but if I had never consciously made that decision, I certainly wouldn’t know it now. Losing these tiny archaic practices by themselves individually doesn’t mean much, but when you add them up, it starts to feel like a bit overwhelming, doesn’t it? This cyborg vs. luddite thing has especially jumped into the spotlight with wearables finally coming to market. Google Glass has largely been seen as a flop, but it shouldn’t be taken lightly that people were literally choosing to wear a computer on their face all day. Or of course, take the Apple Watch (and other smartwatches like it). Yet another device created to fill a need that no one has, but will inevitably become an indispensable piece of hardware that we all must have until smart chips can just be implanted in our brains. One of my favorite writers, John Herrman describes it quite brilliantly: Though I’m sure I will have one within two years. Okay, so I’m not just a grumpy old technophobe either. I see value in technology. 
Heck, I work and therefore pretty much live online. I like gadgets as much as the next guy. In fact, I rather enjoyed a recent episode of Invisibilia (an incredibly interesting, new podcast from NPR) detailing the story of the original cyborg, a guy at MIT in the 90's who built a very early version of what is essentially Google Glass, and wore it for years. He used his face computer to recall bits of information at a moment’s notice about prior interactions he had with people, like a digital file folder on each relationship. There are of course plenty of examples of how technology augments the human experience. How it builds relationships and gives a voice to the voiceless and has opened new worlds of possibilities. I could (and often do) spend days talking about all the amazing things we can do today that we couldn’t 20 years ago. But, as I’ve argued before, there comes an inflection point where we all should think a bit more critically about the tools and toys we use and rely on. And for me, that day is here. Can you imagine a day where we’re connected to all the information in the world through smart glasses, a smartwatch, and our smartphone? Starting to sound a bit cyborg-ish to me! Did you enjoy this? Subscribe to my newsletter, Net IRL, a weekly roundup of some of the best stories about the impact technology and the Internet has on our everyday lives. I’m on Twitter @joelleeman. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. lifelong learner, connector and musician. first social, now digital strategy @thomsonreuters. into tech/media/life. 👨🏻‍💻🤷🏻‍♂️ Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity.
Scott Smith
83
8
https://medium.com/phase-change/your-temporary-instant-disposable-dreamhouse-for-the-weekend-12eb419ded0?source=tag_archive---------4----------------
Your Temporary Instant Disposable Dreamhouse for the Weekend
Close colleagues of mine will tell you I have honed a particular obsession/crackpot theory over the past few years: that Airbnb has been gently A/B testing me in real life. Let me explain. I travel more than most humans should. As someone who runs their own company, and sometimes needs to spend more time in a location than is affordable via traditional hotel lodgings (such as with a recent relocation over the summer), I have made use of that darling of the sharing economy/scourge of communities (depending on which lens you look at it through), Airbnb, to stretch my budget, spend time closer to work, friends, clients, or just have company when traveling. I’ve stayed in over 30 properties, in something like eight countries, so I’ve had a lot of time to contemplate the company’s strategies from the inside. The semi-serious theory started during back-to-back stays in the UK several years ago. My first three night stay was in a London borough, in a fairly cozy house owned by a couple with a toddler. It was comfortable enough, though a bit chilly in both bedroom and shared bath. The interior design wasn’t miles off my tastes, but it didn’t push any buttons of joy either, mostly catalog-standard late 20th century British home store. I never even sat down on the ground floor. The bits of media I saw around the house were mildly interesting, if predictable, but not must-reads or binge-viewable. I wasn’t really allowed in the kitchen, which was reserved for use by the family only. The wife of the couple has formerly worked in media on a cooking show, the husband in finance. I hardly saw either of them, as they made themselves scarce. After the check-in, I didn’t have much interaction with the hosts until leaving, and they weren’t interested in any to be honest. It was strictly a transactional stay. Their child was probably cute, but fussed far too much to get a close look—it was mostly an unhappy sound coming from the kitchen or bedroom. Fair enough. I stayed three days, I paid, I chatted briefly and left, and left a weakly positive review after. I had no real complaints, but probably wouldn’t look for it again. From London, I moved down to the south coast for work (I’m being vague to protect the hosts mentioned herein). I found the place, also an attached house in a row dating probably from the Edwardian period. The host couple met me in the front hall, ushered me in, sat me down in the lounge to relax, and I was immediately offered a warm, fresh-baked cupcake and a glass of wine as I slid back into a nice leather sofa. As the husband, who worked in the trendy area of “fintech,” asked me about my work—and seemed to understand what I do—my eyes scanned the groaning bookshelves across from me. “Have that, want to read that, ohhh, that’s a good one, must remember to look at that,” I recall thinking. We had so much in common. The wife, just finishing up a new round of baking for one of her side businesses, shouted a welcome and told me to feel free to use the house as my own, listing the tasty goods available for breakfast the next day as she joined our conversation with the couple’s very adorable son, who poked at my shoes engagingly, and seemed to pay close attention to my voice. What followed was an interesting chat about culture, technology and cooking, before I went up to my very warm, comfortable, private room, past the amazing folk art, highly listenable CD collection and private bath with want-able Scandinavian textiles. And then it hit me. 
The principle actors and scripts of these two Airbnb plays were roughly the same. Same family configurations, professions and ages, same general houses, same price per night within a few pounds, same availability. Except, when contrasting the two, one was so comfortable, personally interesting and engaging, I wanted to stay an extra week, while the other almost hurried me on my way. One I was happy to pay to stay in, one I felt vaguely grudging about in retrospect. One could have been my alternate media collection and wine store, one missed the mark on general user experience for me. I quietly locked the door to my room, logged onto the fast broadband (quite slow and choppy at House #1) and opened my Amazon profile just to see what I’d been looking at lately. As I lay in bed the first night, breathing in the rich cake scent still hanging in the air, I thought about whether Airbnb had somehow tapped into my online searches and purchases. After all, this is the age of convergent Big Data and powerful retail analytics. Without having seen really any of the home contents at either place, or anything useful about the hosts from the Airbnb listings, I’d ended up in two very similar, yet weirdly different, residences. One where even the conversation with the hosts was familiar and relevant, the other where it just didn’t read. Back to back. Easy to compare. Was the child even real, or just part of the test? In a period when both home staging and immersive theatre are hot, why couldn’t it happen, I thought? And with same-day delivery services breaking out all over, couldn’t a set of highly personalized home contents—chosen to be both familiar and aspirational (after all, you want to leave space for potential purchases to help fund this business model)—have been plucked from a regional depot, popped onto shelves and in cabinets, and organized for my arrival? Couldn’t some actors in search of work in London have been briefed up enough from open source material to interact with me for an hour or so? Couldn’t they? Couldn’t they? I’d been on the road for a while, and fatigue was starting to set in. Maybe it was affecting my head. That was two years ago. It had been in the back of my mind since. And then. This past summer, I had a similar experience, only with my whole family while mid-relocation to the Netherlands. Again, similar homes, same family demographics, both away on holiday this time (it’s tough to get small children to follow a script, right?), one house comfortable enough in a suburban town, the other a charming place in a gentrifying neighborhood worth squatting in hopes the owners didn’t return (jk, Airbnb, jk). Was I optimizing my own stays, or were they feeding me more appropriate properties in hopes of making this testing easier? Hotels have tested such things, why not the hotel-killer itself? They even left the same bread for us as a welcome basket. One white, one whole grain. After all, Airbnb has deployed Aerosolve, its own machine learning platform, to make sense of real-time usage data and help hosts get a better return. Tuning properties for desirability is feasible—the company is already using automated scanning of house photos to optimize presentation of properties as well. With all of this technology aimed at the properties themselves, why wouldn’t Airbnb also dig into the minds of guests, find out how they respond to different houses, which conveniences they’re drawn to, etc? Nah, that would take sensors inside a house, on top of crack Web and mobile analytics. 
You’d need to know what people do during their stay. And as I’m sitting there, thinking again about this crazy idea, I see a tweet go by: Airbnb has purchased...an obscure Russian sensor company. I slammed the laptop and checked the cabinets for tin foil. A month or so goes by. I forget about it again. Then I open Medium and see a story about how Airbnb has mocked up parts of its own headquarters based on the apartment design a French couple who use the service to let their own flat. The couple is now suing the company. “They are branding their company with our life,” owner Benjamin Dewé told Buzzfeed. The company has apparently copied a range of style elements from the French couple’s home in its own San Francisco offices. Down to the doodles on the chalkboard. The doodles. As Jamie Lauren Keiles demonstrated in the Medium piece above, it’s pretty easy to break those furnishing and accessories down to a shoppable list, on with goods obtained on Amazon or elsewhere. Like those magazine features that show how to buy knock-offs of celebrity fashion, complete with prices and shops, a family’s flat (admittedly one they rented out via Airbnb, including to Airbnb for a function) has been commodified into a shopping list. Buy that lifestyle right here. Better yet, live in it for a few days. Only, with the convergence of Big Data, analytics (including visual analysis tools which can look for the presence of brands in social media photos), machine learning and accessible APIs of companies like Amazon, and breakneck logistics Uber-style (or even predictive shipping, per the notorious Amazon patent), fabbing up a home interior to suit your tastes (or tastes that are forming, but haven’t fully emerged yet) is within today’s technology. Hell, even that cute Roomba you had to have may be quietly mapping the place you live. This will be available in knock-off home robots soon. Have you checked the user agreements of your various home appliances and systems to see if they can sell the data? Probably not. And why not tap that stock of underused homes, and underemployed people? If there’s one thing the sharing economy overlords have taught us, it’s that the world is just a collection of undermonetized assets waiting to be redistributed, right? Why not productize, commodify and populate that second-to-last frontier, our living spaces? And staying in someone else’s place with someone else’s stuff you fancied from the pictures is tired. Everything else is personalized, financialized and productized. Why even own your own stuff when it could be Ubered into position in a desirable location based on your most recent Pinterest saves? Think about it. With a bundled DreamHomeTM service, you can perpetually test drive that new living room suite for long holiday weekends—I mean, why wait until after purchasing for buyer’s remorse to set in? You can get it out of the way, without the financial commitment. Just your desires, played forward all the time. You can even test roommates or neighbors for the weekend. Why stop at furnishings and paint colors? Slap those detailed sentiment analyses and personality analytics gleaned from your prospective co-habitant’s online activities, eye-tracking history, Tinder preferences and 23andMe profile onto a few improv actors and have some Big Data cosplay in a pop-up maisonette. Come Monday morning, you can just walk out the front door, with nothing but a premium fee to pay, a fee which may be itself be subsidized by various sponsors who want to test products on you. 
Don’t worry, it’s cool. Duralux, Crate & Barrel and LinkedIn picked up the tab for this getaway in the woods or beach with new friends. Sound good? Of course it does. We knew you would like it. Check your email. Your Temporary Instant Disposable Dreamhouse for the Weekend may be waiting. Futures, post-normal innovation, strategic design. http://changeist.com Essays, Observations and Speculations from the Changeist Lab
iDanScott
3
4
https://medium.com/@iDanScott/the-bejeweled-solver-3cd07c69dfc4?source=tag_archive---------5----------------
C# Plays Bejeweled Blitz – iDanScott – Medium
As some of you reading this may or may not already know, over the past day or so I went from having the idea of creating a computer program that could play the popular arcade game Bejeweled Blitz on Facebook to actually developing it. As hard as this problem sounds, it was surprisingly easy and fairly swift to solve. I broke it down into 3 main steps: find the grid on screen, identify the colour of each gem, and move the gems. The first step was probably the most time consuming of them all, as everything from there was just colour management. The solution I came up with in the end was to take a screenshot of the entire screen and then scan the image from top to bottom using a nested for loop until I found a funny shade of brown that only appears along the top edge of the Bejeweled grid (for anyone wondering, that colour is Color.FromArgb(255, 39, 19, 5)). Once this colour had been found using the bitmap.GetPixel(x, y) function, I broke out of both for loops and knew that point was the top left corner of the grid. I could then use it to construct a rectangle which would extract the Bejeweled grid from the full screenshot. The size of the rectangle was calculated using the size of the grid cells (40×40 px, found that out using trusty old Paint) multiplied by the number of rows/columns there were (8, found that out using my eyeballs). This resulted in the rectangle coming out at 320×320 px. The next step from there was to identify which colour resides in which square. To do that I started off by creating a 2-dimensional array of colours (or Colors, to be technically correct) that was 8 rows by 8 columns, to match the playable grid. I then looped through the 2-dimensional array in a nested for over x and y values, assigning each element the colour of the pixel at location ((x * 40) + 20, (y * 40) + 22). The x offset of 20 was chosen because it is halfway across the gem, and 22 was chosen for the y offset because certain gems (green and yellow) have a white centre, so 22 provided a more accurate reading. With this 2-dimensional array I was then able to generate a visual representation of what the computer was seeing when it was trying to figure out which colour was where. As you can see from the screenshot above, it's able to identify each gem's colour from the pixel at that magic (20, 22) offset within the cell. Another thing I added before getting the project to its current state was a way to prevent the application from trying to switch two empty cells (because a gem has just been blown up or something): I put all the known colour codes into their own array and check whether the colour in the 2D array also appears in that known-colours list. If it does, the solver evaluates whether the gem can be moved to a winning square; if not, it's ignored entirely. I won't bore you with the gory details of how I check whether a gem can be moved; instead, here is a link to the beginning of the if statement in my open source GitHub project. From there the full source code can be viewed, commented on and even improved upon if you guys feel I could do something obviously better. Finally, all that's left for the application to do is actually move the gems. This is done by making some Windows API calls to set the mouse location and simulate mouse clicks.
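To make the steps above concrete, here is a rough C# sketch of the grid-finding, colour-sampling and mouse-simulation pieces described in this post. The constants (the brown marker colour, the 40 px cells, the 8×8 grid and the (20, 22) sample offset) come straight from the write-up; the method names, the structure, the assumed 1920×1080 screen size and the P/Invoke click helper are my own illustration and will not necessarily match the GitHub project.

```csharp
// Illustrative sketch of the steps described above; constants come from the write-up,
// everything else (names, structure, screen size) is an assumption, not the project's code.
using System;
using System.Drawing;                 // requires a reference to System.Drawing
using System.Runtime.InteropServices;

static class BejeweledSolverSketch
{
    const int CellSize = 40, GridCells = 8;
    static readonly Color TopEdgeBrown = Color.FromArgb(255, 39, 19, 5);

    static void Main()
    {
        using (var screen = new Bitmap(1920, 1080))   // assume a 1920x1080 display for the sketch
        using (var g = Graphics.FromImage(screen))
        {
            g.CopyFromScreen(0, 0, 0, 0, screen.Size); // grab the full-screen screenshot
            Rectangle grid = FindGrid(screen);
            Color[,] gems = ReadGemColours(screen, grid);
            Console.WriteLine($"Grid found at {grid.X},{grid.Y}; top-left gem colour: {gems[0, 0]}");
        }
    }

    // Scan the screenshot top-to-bottom for the brown pixel that marks the top-left
    // corner of the board, then return the 320x320 grid region.
    static Rectangle FindGrid(Bitmap screen)
    {
        for (int y = 0; y < screen.Height; y++)
            for (int x = 0; x < screen.Width; x++)
                if (screen.GetPixel(x, y).ToArgb() == TopEdgeBrown.ToArgb())
                    return new Rectangle(x, y, CellSize * GridCells, CellSize * GridCells);
        throw new InvalidOperationException("Bejeweled grid not found on screen.");
    }

    // Sample one pixel per cell at offset (20, 22) so white-centred gems read correctly.
    static Color[,] ReadGemColours(Bitmap screen, Rectangle grid)
    {
        var colours = new Color[GridCells, GridCells];
        for (int x = 0; x < GridCells; x++)
            for (int y = 0; y < GridCells; y++)
                colours[x, y] = screen.GetPixel(grid.X + x * CellSize + 20,
                                                grid.Y + y * CellSize + 22);
        return colours;
    }

    // Moving a gem: position the cursor over a cell centre and simulate a click
    // via the Windows API (user32.dll).
    [DllImport("user32.dll")] static extern bool SetCursorPos(int x, int y);
    [DllImport("user32.dll")] static extern void mouse_event(uint flags, uint dx, uint dy, uint data, UIntPtr extra);
    const uint MOUSEEVENTF_LEFTDOWN = 0x0002, MOUSEEVENTF_LEFTUP = 0x0004;

    static void ClickCell(Rectangle grid, int cellX, int cellY)
    {
        SetCursorPos(grid.X + cellX * CellSize + CellSize / 2,
                     grid.Y + cellY * CellSize + CellSize / 2);
        mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, UIntPtr.Zero);
        mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, UIntPtr.Zero);
    }
}
```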
Again, the details of exactly how to do that are within the GitHub project, but if I've kept your attention for this long, all that's left to say is thank you. If you have any further questions, don't hesitate to hit me up on here or on Twitter @iDanScott. Thanks for reading. Dan Scott, 23. Computer Science student at Plymouth University. www.idanscott.co.uk
Josh
18
6
https://medium.com/@joshdotai/9-reasons-why-now-is-the-time-for-artificial-intelligence-876b3def0fee?source=tag_archive---------6----------------
9 Reasons Why Now is the Time for Artificial Intelligence
There’s no denying it — Artificial Intelligence is happening and it’s happening big. Companies from Facebook to Google to Amazon are hard at work building world-class AI teams that infiltrate every facet of their products. Siri is one of the largest teams at Apple, and Microsoft has a growing research effort on this front. But why is now the time for AI? 1. Artificial Neural Networks Traditional programming is deterministic, sequential, and logical. For example, computers take inputs, apply instructions, and generate outputs. This is great for tasks like calculations and conversions, but ill-suited if the application isn’t explicitly defined. The human brain, on the other hand, doesn’t behave this way. We learn and grow through repetition and education. Recent progress in artificial neural networks (ANNs) is key to building computers that can think. These breakthroughs are enabling tremendous strides in AI work at Google and Apple. 2. Knowledge Graph Companies like Yelp, Foursquare, and Wolfram Alpha have enabled access to their data through APIs. As a result, platforms like Siri and Google Now are able to answer questions such as “What’s the closest coffeeshop?” or “What’s the population of India?”. If a new service had to handle the natural language processing (NLP), audio processing, data, and more, it would be nearly impossible. Fortunately the knowledge graph has evolved over the last 20 years to a point where new AI platforms can immediately have access to tons of data. 3. Natural Language Processing NLP is a field of computer science and linguistics where computers attempt to derive meaning from human or natural input. While the field has been around since the 1950s, we’ve seen huge strides in the last few years thanks to Markov Models and n-gram models as well as projects like CALO and Wordnet. Stanford’s CoreNLP (demo here) is one of the many strong NLP solutions available today: 4. Speech Processing In order to speak to a computer and have it understand our intent, we first need to handle the audio processing and convert sound waves to text. Known as speech processing, this field has seen major advancements in the last few years. Beyond the advancements in technology, we’ve seen companies like Nuance emerge with powerful APIs that power services like GPS, dictation, and more. Today, it is almost effortless for a new AI company to translate voice to text with a high degree of confidence. 5. Computational Power The increase in computational efficiency over the last 17 years has been remarkable. In 2014, people could buy a video card that was 84.3 times the performance of one from 2004 for the same price. This increase in computational power is necessary if we want to emulate the brain. For example, research attempting to simulate 1 second of human brain activity required 82,944 processors supporting 1.73 billion artificial neurons connected by 10.4 trillion synapses. The decrease in cost and increase in computational power is enabling tremendous breakthroughs in AI today. 6.Consumer Acceptance A big aspect of seeing mass adoption around artificial intelligence is consumer approval. With an initial push from Apple to highlight Siri, and now Microsoft’s Cortana and Google Now doing the same, smart phone owners have access to an AI whether they like it or not. As a result, consumers are coming around to the idea and even starting to embrace it. Funny videos like this one are helping the masses to accept this new human-computer interaction: 7. 
Ubiquity of Personal Computing Conversing with an AI is a very personal experience. The emergence of smaller, always-on devices makes this possible. The iPhone was first introduced in 2007, only 8 years ago. Now, more than 64% of Americans own a smartphone. Wearables, such as the Apple Watch or Jawbone, open the possibility of even more intimate personal computing. These devices that we carry or wear serve as excellent hosts for this technology, making it possible for AI to truly enter the mainstream for the first time. 8. Funding AI funding seems to go through waves, and in the last few years it's definitely back up. Scaled Inference, a predictive AI company, recently raised $13.6M. Amazon just announced a $100M fund for voice-controlled technologies, and IBM did the same for the Watson Venture Fund. The total invested in AI companies in 2014 grew past $300M from a mere $14.9M in 2010, according to Bloomberg. With firms like Khosla Ventures and Andreessen Horowitz leading deals in AI companies, funding is fueling innovation in AI. 9. Research Efforts Another reason for the apparent surge in AI is the collective research efforts taking place. According to a 2014 report by MIRI (the Machine Intelligence Research Institute), 41 of the top 275 CS conferences are AI-related. AI accounts for about 10% of all CS research today. The IEEE Computational Intelligence Society has more than 7,000 members and there are more than 106 AI journals. Based on MIRI estimates, more than $50M went into funding AI research by the National Science Foundation (NSF) in 2011. With this much research and effort going into AI innovation, it's no wonder we're seeing this technology starting to reach the masses. If history is an indicator, we may see interest in AI spike and go back down. With momentum across these various sectors, though, AI interest seems likely to keep growing. If you're interested in keeping up with our efforts and staying in touch, check out http://josh.ai and reach out! This post was written by Alex at Josh.ai. Previously, Alex was a research scientist for NASA, Sandia National Lab, and the Naval Research Lab. Before that, Alex worked at Fisker Automotive and founded At The Pool and Yeti. Alex has an engineering degree from UCLA, lives in Los Angeles, and likes to tweet about Artificial Intelligence and Design. Josh is an AI agent for your home. If you're interested in following Josh and getting early access to the beta, enter your email at https://josh.ai. Like Josh on Facebook — http://facebook.com/joshdotai Follow Josh on Twitter — http://twitter.com/joshdotai
paulson
1
17
https://electricliterature.com/what-could-happen-if-we-did-things-right-an-interview-with-kim-stanley-robinson-author-of-aurora-d88a0f8f72e7?source=tag_archive---------7----------------
What Could Happen If We Did Things Right: An Interview With Kim Stanley Robinson, Author Of Aurora
Is Kim Stanley Robinson our greatest political writer? That was the provocative question posed recently by a critic in The New Yorker. Science fiction writers rarely get that kind of serious attention, but Robinson’s visionary experiments in imagining a more just society have always been part of his fictional universe. In fact, he got his Ph.D. in English studying under the renowned Marxist theorist Fredric Jameson. The idea of utopia may seem discredited in today’s world, but not to Robinson. He believes we need more utopian thinking to create a better future. And the future is where he takes us in his new novel Aurora. Set in the 26th century, it’s the story of a space voyage to colonize planets outside our Solar System. Robinson writes in the tradition of “hard science fiction,” using only existing or plausible technology for his interstellar journey. As much as he geeks out on the mechanics of space travel, his real interest is how people would handle a very long voyage trapped inside a starship. His futuristic themes won’t surprise longtime fans of Robinson, who’s best known for his Mars trilogy, published in the 1990s. To read KSR is to wonder how our species might survive and even thrive in the centuries ahead. The author stopped by my radio studio before giving the keynote speech at a local science fiction conference. We talked about the existential angst of life on a starship, the future of artificial intelligence and the aesthetics of space travel. Our conversation will air on Public Radio International’s To the Best of Our Knowledge. You can subscribe to the TTBOOK podcast here. Steve Paulson: How would you describe the story in Aurora? Kim Stanley Robinson: It’s the story of humanity trying to go to other star systems. This may be an ancient idea, but for sure it’s a 19th century idea. The Russian space scientist Tsiolkovsky said Earth is humanity’s cradle but you’re not meant to stay in your cradle forever. This idea has been part of science fiction ever since — that humanity will spread through the stars, or at least through this galaxy. SP: It’s a long way to travel to another star. KSR: It is a long way. And the idea of going to the stars is getting not easier, but more difficult. So I decided to explore the difficulties. I tried to think about whether it’s really possible at all, or if we’re condemned — if you want to put it that way — to stay in this Solar System. SP: What star are your space voyagers trying to get to? KSR: Tau Ceti, which has often been the destination for science fiction voyagers. Ursula Le Guin’s Dispossessed takes place around Tau Ceti, and so does Isaac Asimov’s The Naked Sun. It’s about 12 light-years away. We now know it has three or four big planets the size of a small Neptune or a large Earth. They’ve got the mass of about five Earths. That’s too heavy for humans to be on, but those planets could have moons about the size of Earth. So it becomes the nearest viable target. Alpha Centauri, which is just four light-years away, only has tiny planets that are closer than Mercury is to our sun, so they won’t be habitable. SP: Your story is set 500 years into the future. It takes a long time to get to this star. KSR: Yes. My working principle was, what would it really be like? So no hyperspace, no warp drive, no magical thing about what isn’t really going to happen to get us there. That means sub-lightyear speeds. So I postulated that we could get spaceships going to about one-tenth the speed of light, which is extraordinarily fast. 
Then the problem becomes slowing down. You have to carry enough fuel to slow yourself down if you’ve accelerated to that kind of speed. The mass of the decelerant fuel will be about 90% of the weight of your ship. As you’re approaching your target, you have to get back down to the speed at which you can orbit your destination. The physics of this is a huge problem. SP: You’re talking about a multi-generational voyage that will take a couple hundred years. That’s a fascinating idea. The people who start out will be dead by the time the starship gets there. KSR: I guessed it would take four or five generations — say, 200 years. This is not my original idea. The multi-generational starship is an old science fiction idea started by Robert Heinlein and there may even be earlier precursors. One always finds forgotten precursors for every science fiction idea. Heinlein wrote Universe around 1940, Brian Aldiss wrote a book called Starship in 1958, and Gene Wolfe wrote a very great starship narrative in the 1990s, The Book of the Long Sun. So it’s not an original idea to me; it’s sort of a sub-genre within science fiction. SP: But the whole idea of a project that takes generations is something we don’t do anymore. People did that when they built the pyramids in Egypt or the great cathedrals in Europe. I can’t think of a current project that will take generations to complete. KSR: You really have to think of it as a mobile island or a vast zoo. It isn’t even a project so much as a city that you’ve shot off into space, and when the city gets to its destination, the people unpack themselves into the new place. You’re right, it could be compared to building the cathedrals. And it’s interesting to think about the people born on the starship who didn’t make the choice to be there. So it turned into a bit of a prison novel. SP: Because you’re trapped there. You’re in this confined space for your whole life. KSR: And for two or three generations, you’re born on the ship and you die on the ship. You’re just in between the stars. So it’s very existential. There are some wonderful thought stimulants to thinking about a starship as a closed ecology. SP: How big is the starship in your story? KSR: There’s something like a hundred kilometers of interior space. SP: So this is big! KSR: Yeah, two rings. You could imagine them as cylinders that have been linked until they make a circle, so twelve cylinders per circle. You’ve got 24 cylinders and each has a different Earth ecology in it and each one of them is about five kilometers long. It’s pretty big, but you need that much space to be viable at all because you have to take along a Noah’s Ark worth of genetic material, or else it isn’t going to work. SP: What do you have to bring along? KSR: You would want as much of everything as you can bring, but you certainly need a big bacterial load. You need to bring along a lot of soil. You need a lot of what would be effectively unidentified bacteria; you just need a big hunk of earth. And then all the animals that you can fit that would survive. Each one of these cylinders would be like a little zoo or aviary. SP: As you were imagining this voyage, which part was most interesting to you? Was it the science — trying to figure out technically how we could get there? Or was it the personal dynamics of how people would get along when they’re trapped in space for so long? KSR: I think it would be the latter. I’m an English major. 
The wing of science fiction that’s discussed this idea has been the physics guys, the hard SF guys. They’ve been concerned with propulsion, navigation, with slowing down, with all the things you would use physics to comprehend. But I’ve been thinking about the problem ecologically, sociologically, psychologically. These elements haven’t been fully explored and you get a new story when you explore them. It’s a rather awful story, which leads to some peculiar narrative choices. SP: Why is it awful? KSR: Because they’re trapped and the spaceship is a trillion times smaller than Earth’s surface. Even though it’s big, it’s small. And we didn’t evolve to live in one of these things. It’s like you spend your whole life in a Motel Six. SP: Put that way, it does sound pretty awful. KSR: Better than a prison, but you can’t get out. You can’t choose to do something else. I don’t think we’re meant for that even though we live in rooms all the time in modern society. I think the reason people volunteer for things like Mars One is they’re thinking, “How is that different from my ordinary life? I sit in a room in front of my laptop all day long. If I’m going to Mars, it’s more interesting.” SP: Mars One is the project that’s trying to engineer one-way trips to Mars. You know you’re not going to come back. Frankly, it sounds like a suicide mission, and yet tens of thousands of people have signed up for this mission. KSR: Yes, but they’ve made a category error. Their imaginations have not managed to catch up to the situation. They are in some kind of boring life and they want excitement. Maybe they’re young, maybe they’re worried about their economic prospects, maybe they want something different. They imagine it would be exciting if they got to Mars. But it was Ralph Waldo Emerson who said travel is stupid; wherever you go, you’re still stuck with yourself. I went to the South Pole once. I was only there for a week and it was the most boring place in Antarctica because we couldn’t really leave the rooms without getting into space suits. SP: Is extended space travel like going to Antarctica? KSR: It’s the best analogy you can get, especially for Mars. You would get to a landscape that’s beautiful and sublime and scientifically interesting and mind-boggling. Antarctica is all those things and so would Mars be. But I notice that nobody in the United States cares about what the Antarcticans are doing every November and December. There are a couple thousand people down there having a blast. If the same thing happened on Mars, it would be like, “Oh, cool. Some scientists are doing cool things,” but then you go back to your real life and you don’t care. SP: So even though you write about these long space voyages, you wouldn’t want to be part of one? KSR: Not at all. But I’ve only written about long space voyages once — in this book, Aurora. SP: You also wrote a whole series of books about Mars. You still have to get there. KSR: But there’s an important distinction. You can get to Mars in a year’s travel and then live there your whole life. And you’re on a planet, which has gravity and landscape. You can terraform it. It’s like a gardening project or building a cathedral. I think terraforming Mars is viable. Going to the stars, however, is completely different because you would be traveling in a spaceship for several generations where you’re in a room, not on a planet. It’s been such a techie thing in science fiction. But people haven’t de-stranded those two ideas. 
They said, “Well, if we can go to Mars, we can go to Tau Ceti.” It doesn’t follow. It’s not the same kind of effort. SP: Would it be interesting to travel just through our own Solar System? KSR: Yes, this Solar System is our neighborhood. We can get around it in human time scales. We can visit the moons of Saturn. We can visit Triton, the moon of Neptune. There are hundreds of thousands of asteroids on which we could set up bases. The moons of all the big planets are great. The four big moons of Jupiter — we couldn’t be on Io because it’s too radioactive or too impacted by the radio waves of Jupiter itself — but by and large, the Solar System is fascinating. SP: Yet I imagine a lot of people would say, “Yeah, there’s a lot of cool stuff out there, but it’s all dead.” KSR: Well, we have questions about Mars, Europa, Ganymede and Enceladus, a moon of Saturn. Wherever there’s liquid water in the Solar System, it might be dead or alive. It might be bacterially alive. It might have life that started independently. It might be cousin life that was blasted off of Mars on meteorites and landed on Earth and other places. We don’t know yet. And if it is dead, it’s still beautiful and interesting, so these would be sites of scientific interest. Antarctica is pretty dead, but we still go there. SP: I’ve heard it’s incredibly beautiful. KSR: It’s very beautiful. I think if you’re standing on the surface of Europa, looking around the ice-scape and looking up at Saturn in the sky overhead, it’s also going to be beautiful. I’m not sure if it’s beautiful enough to drive a gigantic effort to get there. The robots going there now are already a tremendous exploration for humanity. The photos sent back to us are a gigantic gift and a beautiful thing to look at. So humans going there will always be a kind of research project that a few scientists do. I’m not saying that the rest of the Solar System is crucial to us. I think Earth is the one and only crucial place for humanity. It will always be our only home. SP: I wonder if we would develop a different sense of beauty if we went out into the Solar System. When we think of natural beauty, we tend to think of gorgeous landscapes like mountains or deserts. But out in the Solar System, on another planet or a moon, would our experience of awe and wonder be different? KSR: You can go back to the 18th century when mountains were not regarded as beautiful. Edmund Burke and the other philosophers talked about the sublime. So the beautiful has to do with shapeliness and symmetry and with the human face and figure. Through the Middle Ages, mountains were seen as horrible wastelands where God had forgotten what to do. Then in the Romantic period, they became sublime, where you have not quite beauty but a combination of beauty and terror. Your senses are telling you, “This is dangerous,” and your rational mind is saying, “No, I’m on a ledge, but I’ve got a railing. It looks dangerous, but it’s not.” You get this thrilling sensation that is not beauty but is the sublime. The Solar System is a very sublime place. SP: Because you could die at any moment if your oxygen support system goes out. KSR: Exactly. It’s like being in a submarine or even in scuba gear — the feeling of being meters under the surface, with a machine keeping you alive and bubbles going up, as you’re looking at a coral reef. That’s sublimity. There’s an element of terror that’s suppressed because your rational mind is saying it’s okay. 
When you fly in an airplane and look down 30,000 feet to the surface of the earth, that’s the feeling of the sublime, even if you’re looking down at a beautiful landscape. But people can’t bear to look because after a while you’re thinking, “Boy, this machine sure has to work.” SP: If you think long and hard about this... KSR: You might never fly again. SP: One thing that’s so interesting about your novel Aurora is that most of it is narrated by the ship itself. What was the idea here? KSR: I do like the idea that my narrators are also characters, that they’re not me. I’m not interested in myself. I like to tell other people’s stories, so I don’t do memoir. I do novels. And for three or four novels now, it’s been an important game to me to imagine the narrators’ voices being different from mine. So Shaman’s was the Third Wind, this mystical spirit that knew the Paleolithic inside and out. That wasn’t me. And Cartophilus, the time traveler, tells Galileo’s story. In Aurora, it made sense for the ship to need really powerful artificial intelligence, like a quantum computer. And once you get to quantum computers, you’ve got processing speeds that are equal to the processing speeds of human brains. But the methodologies would be completely different. They’d be algorithms that we programmed. Maybe it wouldn’t have consciousness, but when you get that much processing speed, who’s to say what consciousness really is? So I made the narrator out of this starship’s AI system. And he — she, it — has been instructed by the chief engineer to keep a narrative account of the voyage. When you think about it, writing novels is strange. We can tell most stories to each other in about 500 words, so a novel is not a natural act. It’s an art form that’s been built up over centuries and doesn’t have a good algorithm. SP: I recently interviewed Stephen Wolfram, the computer theorist and software developer, and asked if he thought some future computer could write a great novel. He said yes. KSR: Wolfram’s very important in theorizing what computers can do because he’s made a breakdown of activities from the simple to the complex. And at full complexity, the human brain or any other thinking machine that can get to that fourth level of complexity should be able to do it. SP: So in the future, you think a computer or artificial intelligence system could write a modern “Ulysses”? KSR: Well, this is an interesting question. At that point you would need a quantum computer. It would need to read a whole bunch of novels and try to abstract the rules of storytelling and then give it a shot. In my novel, the first chapter the computer writes is 18th century literature. It’s what we would call “camera-eye point of view.” It doesn’t guess what people are thinking; how can it? It just reports what it sees like a Hemingway short story. As the novel goes on, chapter by chapter, the computer is recapitulating the history of the novel, and by the end of the last chapter narrated by the computer, you’re getting full-on stream of consciousness. It’s kind of like Ulysses or Virginia Woolf where you’re inside the mind, although it’s the mind of the computer itself. The last chapter is in a kind of “flow state” of the computer’s thinking. SP: At that point, does the computer have emotions? KSR: It wonders about that. The computer can’t be sure. Actually, we’re all trapped in our own consciousness. What are other people thinking? What are other people feeling? You have to work by analogy to your own internal states. 
The computer only has access to its own internal states. SP: Does the future of AI and technology more generally excite you? KSR: Yes, AI in particular. I used to scoff at it. I’m a recent convert to the idea that AI computing is interesting. Mainly, it’s just an adding machine that can go really, really fast. There are no internal states. They’re not thinking. However, quantum computers push it to a new level. It isn’t clear yet that we can actually make quantum computers, so this is the speculative part. It might be science fiction that completely falls apart. There was science fiction about easy space travel, but that’s not going to work. There was science fiction about all of us living 10,000 years. That might or might not work, but it’s way speculative. Quantum computing is still in that category because you get all the weirdness of quantum mechanics. There are certain algorithms that might take a classical computer 20 billion years, while a quantum computer would take 20 minutes. But those are for very particular tasks, like factoring a thousand-digit number. We don’t know yet whether more complex tasks will be something that a quantum computer can handle better than a regular computer. But the potential for stupendous processing power, like a human brain’s processing power, seems to be there. SP: As a science fiction writer, do you have a particular mission to imagine what our future might be like? Is that part of your job? KSR: Yes, I think that’s central to the job. What science fiction is good at is doing scenarios. Science fiction may never predict what is really going to happen in the future because that’s too hard. Strange things, contingent things happen that can’t be predicted, but we can see trajectories. And at this moment, we can see futures that are complete catastrophes where we cause a mass extinction event, we cook the planet, 90% of humanity dies because we run out of food or we think we’re going to run out of food and then we fight over it. In other words, complete catastrophe. On the other hand, there’s another scenario where we get hold of our technologies, our social systems and our sense of law and justice and we make a kind of utopia — a positive future where we’re sustainable over the long haul. We could live on Earth in a permaculture that’s beautiful. From this moment in history, both scenarios are completely conceivable. SP: Yet if we look at popular culture, dystopian and apocalyptic stories are everywhere. We don’t see many positive visions of the future. KSR: I’ve always been involved with the positive visions of the future, so I would stubbornly insist that science fiction in general, and my work in particular, is about what could happen if we did things right. But right now, dystopia is big. It’s good for movies because there are a lot of car crashes and things blowing up. SP: Is it a problem that we have so many negative visions of the future? KSR: Dystopias express our fears and utopias express our hopes. Fear is a very intense and dramatic emotion. Hope is more fragile, but it’s very stubborn and persistent. Hope is inherent in us getting up and eating breakfast every day. In the 1950s young people were thinking, “I’m going to live on the moon. I will go to Neptune.” Today it’s The Hunger Games, which is a very important science fiction story. I like that it’s science fiction, not fantasy. It’s not Lord of the Rings or Harry Potter. It’s a very surrealistic and unsustainable future, but it’s a vision of the fears of young people. 
They’re pitting us against each other and we have to hang together because there’s a rich elite, an oligarchy, that’s simply eating our lives for their own entertainment. So there’s a profound psychological and emotional truth in The Hunger Games. There’s a feeling of fear and political apprehension that late global capitalism is not fair. My Mars books — although they’re not as famous and haven’t been turned into movies — are quite popular because they’re saying we could make a decent and beautiful civilization. I’ve been noticing with great pleasure that my Mars trilogy is selling better now than it ever has. SP: Does our society need positive visions of the future? Do we need people to create scenarios of how things could go well? KSR: Oh, yes. Ever since Thomas More’s Utopia, we’ve always had it. Edward Bellamy wrote a book called Looking Backward: 2000–1887. The progressive political movement that changed things around the time of Teddy Roosevelt came out of this novel. When people had to reconstruct the world’s social order after World War II, they turned to H.G. Wells and A Modern Utopia and Men Like Gods. We always need utopias. These days, people are fascinated by Steve Jobs or Bill Gates. It’s like those geeky 1950s science fiction stories where a kid in his backyard makes a rocket that goes to the moon. Now it’s in his garage, where he makes a computer that changes everything. We love these stories because they’re hopeful and they suggest that we could seize history and change it for the better. If science fiction doesn’t provide those stories, people find them somewhere else. So Steve Jobs is a science fiction story we want. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Expanding the influence of literature in popular culture.
Christopher Wolf Nordlinger
8
6
https://medium.com/@chrisnordlinger/the-internet-of-things-and-the-operating-room-of-the-future-8999a143d7b1?source=tag_archive---------8----------------
The Internet of Things and the Operating Room of the Future
The doctor stands over the patient on the operating room table. It can be dizzying to look around at the dozen or more video screens dedicated to standalone medical devices and not think that the Internet of Things (IoT) could radically simplify the complexities of managing so many systems. In the process, digital health could enormously improve patient care. At the same time, hospitals struggle to constrain the rapidly increasing costs of healthcare, yet with IoT investments they can reduce costs significantly. It's not hard to see how the medical industry, and hospitals in particular, will represent a major component of the $19 trillion Internet of Things market opportunity that Cisco predicts by 2020. Imagining its future in surgery alone is not some far-off idea. It already exists, and it's revolutionary due to a unique blend of IoT, big data, advanced analytics and smart medical devices. Here's how the reality plays out in a leading example. Thousands of people suffer from heart arrhythmias caused by heart disease, which show up as a flutter in the heartbeat that is highly disruptive and can cause potentially fatal strokes and heart attacks. There are a few pharmaceutical drugs that can mollify the symptoms, but they do nothing to remove the dead tissue lesions in the heart that cause the underlying condition, which is called atrial fibrillation, or AFib for short. CardioThings (a made-up name to protect the company while under FDA approval review) is attacking this problem with ablation, removing the lesions by gently burning them out with a laser. This involves inserting a catheter into the heart and performing ablation to remove the AFib-causing lesions. Each device is hard-wired to a screen where streaming data from the end of the catheter displays a view of the inside of the heart. But unlike with many devices, the data don't stop at the monitors. CardioThings, a Silicon Valley startup, works with two real IoT powerhouses, PTC ThingWorx and another Silicon Valley startup, Glassbeam, to make something much more powerful possible. ThingWorx models the operation of the catheter so that it can send secure data to the cloud, where it can be analyzed by Glassbeam. Glassbeam turns the unstructured data into structured data in the form of readable reports that the device company can then use to improve doctors' surgical performance. For CardioThings and other high-value asset manufacturers, this kind of data can also increase the uptime of their catheter devices. Others can use IoT analytics to increase the uptime of CAT scanners and MRIs, because the data can show when even the smallest part is showing signs of weakness or malfunction and enable a repair that keeps that equipment operating. How? Imagine CardioThings's optical catheter, thin enough to fit comfortably through a vein, entering a heart and mapping it out to find the lesions responsible for the AFib. The surgeon is then able to frame the boundaries of the lesions on CardioThings's monitors to see which are dying and need to be burned out. The laser beam from the sensor-embedded catheter then cuts the lesions out and the patient is healed. What does this have to do with saving money for the hospital? High-value machines such as MRI and CAT scanners cost millions. Downtime for them is not only very costly for a hospital that is not billing patients but also, more importantly, keeps patients from getting the best possible care.
ThingWorx enables medical devices (Things, sensors modeled by ThingWorx to communicate as if they were the device) to talk to other Things in the cloud. Once the unstructured data is there, it can be combined and recombined by Glassbeam's analytics software to detect any abnormalities. For MRIs, CAT scanners and other devices, stopping small problems from becoming big problems that crash expensive, heavily-used equipment is the ultimate value of predictive maintenance. Hospitals are large places with many people and things moving about a great deal, and keeping track of assets ranging from MRI scanners to $60,000 beds is quite challenging. In the case of CardioThings above, the alliance of PTC ThingWorx and Glassbeam should make the medical industry and business decision makers globally take notice. Whether it's healthcare, agriculture, networking or manufacturing, higher utilization of equipment is absolutely essential to remaining competitive. In the case of the CardioThings catheter spitting out unstructured data, Chris Kuntz, Vice President of Ecosystem Programs at PTC ThingWorx, says, "Imagine the cardiac data from that same procedure being combined and recombined with data from EKG machines, MRI machines, pharmaceutical research, personal medical record-keeping systems, blood monitors and hundreds of healthcare systems. This is how the Internet of Things drives a revolution in healthcare." "Thanks to our partnership with ThingWorx," Glassbeam CEO Puneet Pandit says, "we are able to capture that unstructured data off the catheter and create structured data that business decision makers at hospitals, the manufacturers and individual doctors can learn from." Pandit adds, "As a result of the large amount of critical data coming from the catheter, you can answer many questions. How did the device perform? Under what circumstances? How long did the surgery take? Which surgeons did it most effectively? Who needs to be more formally trained?" As a result of this solution, training surgeons to use equipment better provides significantly improved outcomes for patients. And for hospitals dispensing critical care, no one has to wait any longer for the MRI to crash to know there was a problem. They can fix the smallest problem before it escalates. Letting the hospital know that a specific part is faulty, simply by examining the unstructured data it sends out, is the best example of the power of predictive maintenance. No one has to wait for the MRI to crash. Hospitals can enjoy huge savings through predictive maintenance on all of their heavily-used, expensive equipment. Given concerns about privacy and the safeguarding of data, it is essential to have a secure connectivity partner such as ThingWorx aboard. HIPAA is just the beginning of the scope of regulatory requirements that will need to be accommodated to operate successfully in the healthcare data space. Applied analytics available to doctors in real time reduces medical procedure risk and overall liability concerns. For hospitals to reduce costs and increase profitability, IoT will play an enormous role. For patients, it means their doctors will know so much more about treating them to ensure the best care after any procedure, whether it's a heart bypass, cancer surgery, heart transplant or a simple blood test.
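To make the predictive-maintenance idea concrete, here is a small, generic sketch of flagging a sensor reading that drifts outside its recent norm before the part actually fails. This is my own illustration of the general technique (a rolling mean and standard deviation over recent telemetry), not ThingWorx's or Glassbeam's actual analytics, and every name in it is hypothetical.

```csharp
// Generic illustration of predictive maintenance on streaming telemetry:
// flag readings that drift well outside the recent norm before the part fails.
// A sketch of the general idea only, not the vendors' actual pipeline.
using System;
using System.Collections.Generic;
using System.Linq;

class SensorAnomalyDetector
{
    private readonly Queue<double> _window = new Queue<double>();
    private readonly int _windowSize;
    private readonly double _threshold; // how many standard deviations counts as abnormal

    public SensorAnomalyDetector(int windowSize = 200, double threshold = 3.0)
    {
        _windowSize = windowSize;
        _threshold = threshold;
    }

    // Returns true if the new reading looks abnormal relative to the recent window.
    public bool IsAnomalous(double reading)
    {
        bool anomalous = false;
        if (_window.Count >= 30) // need some history before judging
        {
            double mean = _window.Average();
            double std = Math.Sqrt(_window.Average(v => (v - mean) * (v - mean)));
            anomalous = std > 0 && Math.Abs(reading - mean) > _threshold * std;
        }
        _window.Enqueue(reading);
        if (_window.Count > _windowSize) _window.Dequeue();
        return anomalous;
    }
}

class Demo
{
    static void Main()
    {
        var detector = new SensorAnomalyDetector();
        var rng = new Random(1);
        for (int i = 0; i < 500; i++)
        {
            // Simulated telemetry (e.g. a coil temperature): stable, then drifting after sample 400.
            double reading = 20.0 + rng.NextDouble() + (i > 400 ? (i - 400) * 0.5 : 0.0);
            if (detector.IsAnomalous(reading))
                Console.WriteLine($"Sample {i}: reading {reading:F1} looks abnormal; schedule an inspection.");
        }
    }
}
```

In practice the hard part is choosing which signals to watch and routing the alert into the hospital's maintenance workflow, but the core idea is this simple: learn what "normal" looks like and act on the first sustained deviation.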
Jack Reader, Business Development Manager at ThingWorx (now at Verizon), says, “Imagine an operating room where there are just a few monitors and all the devices speak to each other and with thousands of medical systems within and beyond the walls of the hospitals. All of this innovation will exponentially increase insight and intelligence, reduce costs for the hospital and increase health outcomes.” The implications in terms of knowledge gained and positive health outcomes are so phenomenal that, at this early stage in the IoT era, we can hardly imagine all the possible sources or all the insights that will be gained. However, the sooner IoT Analytics is adopted in the hospital, the sooner patients can expect better-run hospitals and healthier lives. This is only the beginning of a new era. Ph.D. Fulbright Scholar. Storyteller. Communications Expert. Content maven. Formerly State Dept-Startups-Cisco & more.
Louis Rosenfeld
90
5
https://medium.com/@louisrosenfeld/everyday-ia-d7aa7be07717?source=tag_archive---------9----------------
Everyday IA – Louis Rosenfeld – Medium
A few days ago, Cennydd Bowles gently trolled many of us thusly: As Cennydd has keynoted a past Information Architecture Summit, it’s hard to ignore his question. And Cennydd’s timing is quite interesting, given that tomorrow is World IA Day. The theme of this year’s WIAD is “architecting happiness”. And in this adorable little video that the IA Institute created to promote WIAD 2015, Abby Covert says that this theme was chosen “because of the rising amount of information that everyone has to deal with” (my italics): Cennydd, there’s your answer: if you’re a human in today’s developed world, where even physical objects and spaces are soaked in information, you are struggling to cope with and make sense of the stuff. Nearly all the time. And nearly everywhere. Information architecture problems are everyday human problems. So if you’re designing for humans today, you’ll need at least some information architecture skills in order to help them. Information architecture literacy is required for anyone who designs anything. So it’s not surprising that WIAD has exploded to 38 locations in 24 countries. It’s not surprising that Abby’s wonderful little book, How to Make Sense of Any Mess: Information Architecture for Everybody, has been such a hit. It’s not surprising to see the IA Summit entering its 16th year stronger than ever. It’s not surprising that the fourth edition of Information Architecture for the World Wide Web (due out later this year) is being recast as a book not for information architects, but for people who need to know something about information architecture. We’ve entered full-on mode of democratizing IA skills. Because... Information architecture literacy is required for anyone who designs anything. I’ll confess to having felt, like Cennydd, a bit disconnected from IA for the past few years. Partly because I’ve been investing almost every available moment of my waking hours into Rosenfeld Media. And partly because much of the IA community’s discussion has pushed far deeper into IA practice than my brain and attention span can manage. But I’m feeling better now, because I’m finding, in my own day-to-day work, that: Information architecture literacy is required for anyone who designs anything. For example, while I rarely work on web site IA much these days, I am absolutely absorbed in the information architecture of books. Want to know what value publishers can provide to authors in this age of self-publishing? The list might be longer than you imagined, but I think most Rosenfeld Media authors would agree: Lou and team pull them out of the weeds, and help them to step back and make sense of their content as an information system. Information architecture skills are an absolute necessity when it comes to framing, structuring, and establishing a flow for a book. (And not just for non-fiction; just ask JK Rowling.) I’m finding that IA literacy is also incredibly helpful in other areas, like event planning. I recently asked a couple dozen colleagues who produce events to provide share their advice on organizing a conference. Their responses were generous, useful, and wonderful. But the one I keep remembering most is Jeffrey Zeldman’s: Yes, I’m biased, but I hear Jeffrey singing a song of event IA. I’ve been singing it too. In putting together the first edition of the Enterprise UX conference (plug alert: San Antonio; May 13–15, 2015), I’ve been working with Dave Malouf and Uday Gajendar to create an information architecture for a conversation. 
In effect, we’re trying to structure the event’s program in a way that surfaces a latent conversation about enterprise UX that’s been happening in the UX community for quite some time. The event itself should simply serve as an opportunity to bring people together to sharpen and advance that conversation. I’m oversimplifying a bit, but we spent months designing our event IA around four carefully-sequenced themes, each in effect a curated mini-conference: 1) Insight at Scale; 2) Craft amid Complexity; 3) Enterprise Experimentation; and 4) Designing Organizational Culture. We see these as the main facets of the community’s conversation on enterprise UX. We’ll know we’ve been successful if, at the event, the conversation spills out of the auditorium and into the hallways and break areas, animating the words and faces of attendees. We’ll know we’ve been really successful if these conversations riff off the themes already covered — meaning we got the sequence right. And we’ll know that we were really, really successful if these four themes keep the conversation moving forward — both after the event and as the IA for programs at future editions of the event. Books have an information architecture. Events have an information architecture. Pretty much anything we design — consciously or not — has an information architecture. So pardon me as I repeat: Information architecture literacy is required for anyone who designs anything. When I got my master’s in information and library studies in 1990, our professors were preaching about the oncoming information revolution. Since then, I’ve been fortunate to observe and even participate a little in that revolution. In the blink of an eye, information architects emerged as professionals dedicated to making the pain of that revolution easier to bear. In the blink of an eye, others have proclaimed that information architecture, as a profession, was dead. I’m not sure who’s right, nor do I care. Twenty-five years is nothing. The dust can settle after we’re all dead. Let’s worry instead about people suffering from everyday IA problems. We, as designers of any stripe, have to help them. And we have to get better at helping them to help themselves. Oh, and if you’re wondering why I won’t be at any of tomorrow’s 38 WIAD meetings: well, it’s Saturday, and I have a date with my six-year-old. We’re going to organize his Legos. (This piece originally ran in the Rosenfeld Review; sign up here for new ones.) Founder of Rosenfeld Media. I make things out of information.
Matt Harvey
677
7
https://blog.coast.ai/continuous-online-video-classification-with-tensorflow-inception-and-a-raspberry-pi-785c8b1e13e1?source=tag_archive---------0----------------
Continuous online video classification with TensorFlow, Inception and a Raspberry Pi
Much has been written about using deep learning to classify prerecorded video clips. These papers and projects impressively tag, classify and even caption each clip, with each comprising a single action or subject. Today, we’re going to explore a way to continuously classify video as it’s captured, in an online system. Continuous classification allows us to solve all sorts of interesting problems in real-time, from understanding what’s in front of a car for autonomous driving applications to understanding what’s streaming on a TV. We’ll attempt to do the latter using only open source software and uber-cheap hardware. Specifically, TensorFlow on a Raspberry Pi with a PiCamera. We’ll use a “naive” classification approach in this post (see next section), which will give us a relatively straightforward path to solving our problem and will form the basis for more advanced systems to explore later. By the time we’re done today, we should be able to classify what we see on our TV as either a football game or an advertisement, running on our Raspberry Pi. Let’s get to it! Video is an interesting classification problem because it includes both temporal and spatial features. That is, at each frame within a video, the frame itself holds important information (spatial), as does the context of that frame relative to the frames before it in time (temporal). We hypothesize that for many applications, using only spatial features is sufficient for achieving high accuracy. This approach has the benefit of being relatively simple, or at least minimal. It’s naive because it ignores the information encoded between multiple frames of the video. Since football games have rather distinct spatial features, we believe this method should work wonderfully for the task at hand. We’re going to collect data for offline training with a Raspberry Pi and a PiCamera. We’ll point the camera at a TV and record 10 frames per second, or more specifically, save 10 jpegs every second, which will comprise our “video”. Here’s the code for capturing our images: Once we have our data, we’ll use a convolutional neural network (CNN) to classify each frame with one of our labels: ad or football. CNNs are the state-of-the-art for image classification. And in 2016, it’s essentially a solved problem. It feels crazy to say that, but it really is: Thanks in large part to Google→TensorFlow→Inception and the many researchers who came before it, there’s very little low-level coding required for us when it comes to training a CNN for our continuous video classification problem. Pete Warden at Google wrote an awesome blog post called TensorFlow for Poets that shows how to retrain the last layer of Inception with new images and classes. This is called transfer learning, and it lets us take advantage of weeks of previous training without having to train a complex CNN from scratch. Put another way, it lets us train an image classifier with a relatively small training set. We collected 20 minutes of footage at 10 jpegs per second, which amounted to 4,146 ad frames and 7,899 football frames. The next step is to sort each frame into two folders: football and ad. The names of the folders represent the labels of each frame, which will be the classes our network will learn to predict on when we retrain the top layer of the Inception v3 CNN. This is essentially using the flowers method described in TensorFlow for Poets, applied to video frames. 
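The capture snippet referenced above ("Here's the code for capturing our images") doesn't survive in this excerpt. A minimal sketch, assuming the picamera package and an illustrative frames/ output pattern of my own choosing rather than the post's exact code:

```python
# Sketch only: save roughly 10 JPEGs per second from a PiCamera pointed at the TV.
# Assumes the picamera package; the resolution and the frames/ output pattern are
# illustrative choices, not necessarily the post's.
import time
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (320, 240)  # small frames keep disk usage and inference cost low
time.sleep(2)                   # give the sensor a moment to warm up

try:
    # capture_continuous yields one saved file per iteration of the loop
    for _ in camera.capture_continuous('frames/image{counter:05d}.jpg'):
        time.sleep(0.1)         # ~10 frames per second
except KeyboardInterrupt:
    camera.close()
```

Sorting the saved JPEGs into football/ and ad/ folders then gives the two-class directory layout the retraining step expects.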
To retrain the final layer of the CNN on our new data, we check out the r0.11 tag from the TensorFlow repo and run the following command: Retraining the final layer of the network on this data takes about 30 minutes on my laptop with a GeForce GTX 960m GPU. At the completion of 4,000 training steps, our model reports an incredible 98.8% accuracy on the held-out validation set! I’m not sure I could do much better using my eyes on the same data. As a point of reference, if the network had classified each frame as football, it would have achieved about 66% accuracy. So it seems to be working! It’s always a good idea to run some known data through a trained network to sanity check the results, so we’ll do that here. Here’s the code we use to classify a single image manually through our retrained model: And here are the results of spot checking individual frames: Before we transfer everything to our Pi and do this in real-time, let’s use a different batch of recorded data and see how well we do on that set. To get this dataset, and to make sure we don’t have any data leakage into our training set, we separately record another 19 minutes of the football broadcast. This dataset amounted to 2,639 ad frames and 8,524 football frames. We run each frame of this set through our classifier and achieve a true holdout accuracy score of 93.3%. Awesome! Looks like we’ve validated our hypothesis that we can achieve high levels of accuracy while only considering spatial features. Impressive results, considering that we only used 20 minutes of training data! Thank you, Google, Pete, TensorFlow and all the folks who have developed CNNs over the years for your incredible work and contributions. Great, so now we have our CNN trained and we know that we can classify each frame of our video with relatively high accuracy. How does it do on live TV, with always changing context? For this, we load up our Raspberry Pi 3 with our newly trained model weights, turn on the PiCamera at 10 fps, and instead of saving the image, send it through our CNN to be classified. We have to make some modifications to the code to classify in real time. The final result looks like this: We also have to get TensorFlow running on the Pi. Sam Abrahams wrote up excellent instructions for doing this, so I won’t cover them again here. After we install our dependencies, we run the program and... crap! Inception on the Raspberry Pi 3 can only classify one image every four seconds. Okay, so we don’t quite have the hardware yet to do 10fps, but this still feels like magic, so let’s see how we do. Flipping on Sunday Night Football and pointing our camera at the TV, we see it do a remarkable job of classifying each moment as football or ad, once every few seconds. For the vast majority of the broadcast, we see our prediction come out true to life. So cool. In all, our naive method worked remarkably well at continuous online video classification for this particular use case. But we know that we’re only considering part of the information provided to us inherently in video, and so there must be room for improvement, especially as our datasets become more complex. For that, we’ll have to dive deeper. So in the next post, we’ll explore feeding the output of our CNN (both the final softmax layer and the pool layer, which gives us a 2,048-d feature vector of each image) to an LSTM RNN to see if we can increase our accuracy. Spoiler alert: We can! 
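Neither the retraining command nor the spot-check snippet survives in this excerpt. The sketch below follows the TensorFlow for Poets workflow the post builds on; the file names and tensor names (retrained_graph.pb, retrained_labels.txt, final_result, DecodeJpeg/contents) are that workflow's defaults rather than code confirmed by the post, and the example image path is made up.

```python
# Sketch of the single-image spot check, in the style of the r0.11-era
# "TensorFlow for Poets" label_image code. File and tensor names are that
# workflow's defaults and may differ from the post's actual setup.
#
# The retraining step itself is run with something along the lines of:
#   python tensorflow/examples/image_retraining/retrain.py --image_dir frames/
import tensorflow as tf

labels = [line.strip() for line in open('retrained_labels.txt')]  # e.g. ad, football

with tf.gfile.FastGFile('retrained_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    image_data = tf.gfile.FastGFile('frames/image00042.jpg', 'rb').read()
    predictions = sess.run(softmax_tensor,
                           {'DecodeJpeg/contents:0': image_data})[0]
    for i in predictions.argsort()[::-1]:
        print('%s: %.3f' % (labels[i], predictions[i]))
```

Feeding frames straight from the PiCamera into the same session, instead of reading a file from disk, is essentially the modification described above for real-time classification on the Pi.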
Founder of Coastline Automation, using AI to make every car crash-proof. Practical applications of deep learning and research reports from the road.
Vivek Yadav
425
11
https://chatbotslife.com/using-augmentation-to-mimic-human-driving-496b569760a9?source=tag_archive---------1----------------
An augmentation based deep neural network approach to learn human driving behavior
Overview In this post, we will go over the work I did for project 3 of Udacity’s self-driving car project, behavior cloning for driving. The main task is to drive a car around in a simulator on a race track, and then use deep learning to mimic the behavior of human. This is a very interesting problem because it is not possible to drive under all possible scenarios on the track, so the deep learning algorithm will have to learn general rules for driving. We must be very careful while using deep learning models, because they have a tendency to overfit the data. Overfitting refers to the condition where the model is very sensitive to the training data itself, and the model’s behavior does not generalize to new/unseen data. One way to avoid overfitting is to collect a lot of data. A typical convolutional neural network can have up to a million parameters, and tuning these parameters requires millions of training instances of uncorrelated data, which may not always be possible and in some cases cost prohibitive. For our car example, this will require us to drive the car under different weather, lighting, traffic and road conditions. One way to avoid overfitting is to use augmentation. Augmentation refers to the process of generating new training data from a smaller data set such that the new data set represents the real world data one may see in practice. As we are generating thousands of new training instances from each image, it is not possible to generate and store all these data on the disk. We will therefore utilize keras generators to read data from the file, augment on the fly and use it to train the model. We will utilize images from the left and right cameras so we can generate additional training data to simulate recovery. Keras generator is set up such that in the initial phases of learning, the model drops data with lower steering angles with higher probability. This removes any potential for bias towards driving at zero angle. After setting up the image augmentation pipeline, we can proceed to train the model. The training was performed using simple adam learning algorithm with learning rate of 0.0001. After this training, the model was able to drive the car by itself on the first track for hours and generalized to the second track. All the training was based on driving data of about 4 laps using ps4 controller on track 1 in one direction alone. The model never saw track 2 in training, but with image augmentation (flipping, darkening, shifting, etc) and using data from all the cameras (left, right and center) the model was able to learn general rules of driving that helped translate this learning to a different track. IMPORTANT: These results were obtained on Titan X GPU machine I built earlier. Full specifications of the computer can be found here. Please note that computers with different performance will provide a different performance of the network. Augmentation helps us extract as much information from data as possible. We will generate additional data using the following data augmentation techniques. Augmentation is a technique of manipulating the incoming training data to generate more instances of training data. This technique has been used to develop powerful classifiers with little data. https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html . However, augmentation is very specific to the objective of the neural network. Brightness augmentation Changing brightness to simulate day and night conditions. 
We will generate images with different brightness by first converting images to HSV, scaling up or down the V channel and converting back to the RGB channel. Using left and right camera images: We use the left and right camera images to simulate the effect of the car wandering off to the side, and recovering. We will add a small angle of 0.25 to the left camera and subtract a small angle of 0.25 from the right camera. The main idea being the left camera has to move right to get to center, and the right camera has to move left. Horizontal and vertical shifts: We will shift the camera images horizontally to simulate the effect of the car being at different positions on the road, and add an offset corresponding to the shift to the steering angle. We added 0.004 steering angle units per pixel shift to the right, and subtracted 0.004 steering angle units per pixel shift to the left. We will also shift the images vertically by a random number to simulate the effect of driving up or down the slope. Shadow augmentation: The next augmentation we will add is shadow augmentation, where random shadows are cast across the image. This is implemented by choosing random points and shading all points on one side (chosen randomly) of the image. The code for this augmentation is presented below. Flipping: In addition to the transformations above, we will also flip images at random and change the sign of the predicted angle to simulate driving in the opposite direction. 2. Preprocessing: After augmenting the image as above, we will crop the top 1/5 of the image to remove the horizon and the bottom 25 pixels to remove the car’s hood. Originally 1/3 of the top of the image was removed, but later it was changed to 1/5 to include images for cases when the car may be driving up or down a slope. We will next rescale the image to a 64X64 square image. After augmentation, the augmented images look as follows. These images are generated using Keras’s generator, and an unlimited number of images can be generated from one image. I used a Lambda layer in Keras to normalize intensities between -0.5 and 0.5. 3. Keras generator for subsampling: As there was limited data and we are generating thousands of training examples from the same image, it is not possible to store all the images a priori in memory. We will utilize Keras’s generator function to sample images such that images with lower angles have a lower probability of being represented in the data set. This alleviates any problems we may encounter due to the model having a bias towards driving straight. The panel below shows multiple training samples generated from one image. The Keras generator is presented below. The ‘pr_threshold’ variable is a threshold that determines whether a data point with a small angle will be dropped or not. 4. Model Architecture and training: I implemented the model architecture above for training the data. The first layer is 3 1X1 filters; this has the effect of transforming the color space of the images. Research has shown that different color spaces are better suited for different applications. As we do not know the best color space a priori, using 3 1X1 filters allows the model to choose its best color space. This is followed by 3 convolutional blocks, each comprised of 32, 64 and 128 filters of size 3X3. These convolution layers were followed by 3 fully connected layers. All the convolution blocks and the 2 following fully connected layers had exponential linear units (ELU) as the activation function, which I chose to make the transition between angles smoother. 
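The augmentation and generator snippets referred to above are not included in this excerpt. The following sketch illustrates brightness, shadow, shift and flip augmentation, the crop-and-resize preprocessing, and a generator that drops small-angle samples with probability pr_threshold; it assumes OpenCV and NumPy, and every function name and constant is illustrative rather than taken from the author's code.

```python
# Illustrative sketch of the augmentations described above (brightness, shadow,
# shift, flip), the crop/resize preprocessing, and a generator that drops
# small-angle samples with probability pr_threshold.
import cv2
import numpy as np

def augment_brightness(image):
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV).astype(np.float64)
    hsv[:, :, 2] *= 0.25 + np.random.uniform()           # scale the V channel
    hsv[:, :, 2] = np.clip(hsv[:, :, 2], 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)

def add_random_shadow(image):
    h, w = image.shape[:2]
    top_x, bot_x = w * np.random.uniform(), w * np.random.uniform()
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV).astype(np.float64)
    ys, xs = np.mgrid[0:h, 0:w]
    # darken every pixel on one side of the line from (top_x, 0) to (bot_x, h)
    mask = (xs - top_x) * h - (bot_x - top_x) * ys > 0
    if np.random.uniform() > 0.5:
        mask = ~mask
    hsv[:, :, 2][mask] *= 0.5
    return cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8), cv2.COLOR_HSV2RGB)

def shift_image(image, angle, shift_range=100):
    tx = shift_range * (np.random.uniform() - 0.5)       # horizontal shift in pixels
    ty = 40 * (np.random.uniform() - 0.5)                # vertical shift in pixels
    angle = angle + tx * 0.004                           # 0.004 steering units per pixel
    M = np.float32([[1, 0, tx], [0, 1, ty]])
    h, w = image.shape[:2]
    return cv2.warpAffine(image, M, (w, h)), angle

def preprocess(image):
    h = image.shape[0]
    image = image[int(h / 5):h - 25, :, :]               # crop horizon and hood
    return cv2.resize(image, (64, 64))

def augment_sample(image, angle):
    image = augment_brightness(image)
    image = add_random_shadow(image)
    image, angle = shift_image(image, angle)
    if np.random.uniform() > 0.5:                        # random horizontal flip
        image, angle = cv2.flip(image, 1), -angle
    return preprocess(image), angle

def generate_batch(images, angles, batch_size=256, pr_threshold=1.0):
    while True:
        X = np.zeros((batch_size, 64, 64, 3), dtype=np.uint8)
        y = np.zeros(batch_size)
        for i in range(batch_size):
            while True:
                idx = np.random.randint(len(images))
                img, ang = augment_sample(images[idx], angles[idx])
                # small-angle samples are dropped with probability pr_threshold,
                # which starts at 1 and is decayed after each epoch
                if abs(ang) > 0.1 or np.random.uniform() > pr_threshold:
                    break
            X[i], y[i] = img, ang
        yield X, y
```

Passing a generator like this to Keras's fit_generator, and lowering pr_threshold after each epoch, would implement the subsampling schedule described in the training section that follows.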
Training: I trained the model using the Keras generator with a batch size of 256 for 8 epochs. In each epoch, I generated 20,000 images. I started with pr_threshold, the chance of dropping data with small angles, at 1, and reduced the probability by dividing it by the iteration number after each epoch. The entire training took about 5 minutes. However, it took more than 20 hours to arrive at the right architecture and training parameters. The snippet below presents the result of training. 5. Model performance: The video below shows the performance of the algorithm on track 1, on which the original data was collected. The car is able to drive around for hours; we will next look into the cases where the camera resolution, video size or track is changed. Generalization from one image size to another: The video below presents generalization from one image size to another. I used the same pretrained model and tested it on all the other image sizes and found that the deep learning neural network was able to drive the car around for all image sizes. Generalization from one image resolution to another: The video below presents generalization from one image resolution to another. I used the same pretrained model and tested it on all the other image resolutions and found that the deep learning neural network was able to drive the car around for all image resolutions. I also tested different combinations of image size and image resolution, and on track 1 the deep learning algorithm was able to drive the car around for all combinations of image resolution and size. Generalization from one track to another: The figure below presents generalization from one track to another. This was perhaps the toughest test for the deep learning algorithm. In the second track, there were more right turns and u-turns, it was darker, and the road had slopes, all of which were absent in the original track. However, all these effects were artificially included in the model via image augmentation. 6. Future directions: This project is far from over. This project opened more questions than it answered. A few more things remain to try. 7. Reflections: This was perhaps the weirdest project I did. This project challenged all the previous knowledge I had about deep learning. In general, training for more epochs with more data results in better performance, but in this case any time I got beyond 10 epochs, the car simply drove off the track. Although all the image augmentation and tweaks seem reasonable now, I did not think of them a priori. I hope others find this post useful, and get inspired to try novel things. I haven’t used the on-the-fly training Agile Trainer by John Chen yet. I wanted to try and stretch the data as much as possible. The next thing to try is to experiment with a parallel network using John’s trainer. Acknowledgements: I am very thankful to Udacity for selecting me for the first cohort; this allowed me to connect with many like-minded individuals. As always, I learned a lot from discussions with Henrik Tünnermann and John Chen. I am also thankful for receiving NVIDIA’s GPU grant. Although it’s for work, I use it for Udacity too. Staff Software Engineer at Lockheed Martin-Autonomous System with research interest in control, machine learning/AI. Lifelong learner with glassblowing problem. Best place to learn about Chatbots. We share the latest Bot News, Info, AI & NLP, Tools, Tutorials & More.
Carlos Beltran
97
9
https://medium.com/@carlosbeltran/ai-the-theme-in-avenged-sevenfolds-new-album-the-stage-f4516d6fc96?source=tag_archive---------2----------------
A Rock Album For AI – Carlos Beltran – Medium
https://open.spotify.com/album/0jwnYwJz6XHNrVAYEclQPd It’s awesome that Avenged Sevenfold became interested in AI and wrote an entire album that revolves around the idea. In an interview with Rolling Stone, lead singer M. Shadows says the initial interest came after reading Tim Urban’s article over at waitbutwhy. It’s one of the things (along with movies like Her and The Matrix of course) that sparked my interest in AI as well, so I’d highly recommend reading it. Tim does a phenomenal job of explaining the topic, current challenges engineers are facing, and the very possible implications of this technology. The term “artificial intelligence” was first coined half a century ago. Fast forward to today, where we have giant companies like Intel and Apple acquiring AI startups like there’s no tomorrow. It’s not a matter of whether or not we’ll be able to create machines that surpass our own capabilities, but when. Theoretical physicist and futurist Dr. Michio Kaku thinks it is possible for machines as smart as us to exist by the end of the century. Google’s chief futurist, Ray Kurzweil, believes such technology will exist as soon as 2029. The band is right in wanting its fans, and the general public, to be more aware of these ideas — they could be right around the corner. I’m no expert, but I’d like to discuss the ideas behind some of the songs and include references in case you’d like to delve deeper. And if you want to read more on the possible future of AI, I’d recommend reading Kurzweil’s book The Singularity Is Near. Although some of his predictions have been met with skepticism, the ideas presented are thought-provoking. Simply put, nanomachines are microscopic machines that will enhance us in almost every way imaginable. They’ll be able to help our immune system fight off diseases. They would create super soldiers. This technology is actually at the center of a great game series, Metal Gear Solid. This “hack” in our biological makeup will also increase our lifespans. Kurzweil imagines a future where biotechnology is so advanced that we will live forever. This is the same idea behind the song “Paradigm”. Lyrics include: The song also raises the question of what it really means to be human. What do we become when we merge with machines? Will we lose what fundamentally makes us human? It can be argued that this “merge” is the next logical step in evolution, as there is no evolutionary pressure for us to do so anymore. We’ll become, as Kurzweil puts it, “Godlike”. Expanding the brain’s neocortex will allow us, for example, to pose questions in our thoughts and know the answer almost immediately (most likely thanks to our direct “brain-to-Google” connection). We’ll always have witty jokes on hand, and learning calculus will be as simple as purchasing downloadable content. Plug and play. Besides swapping out failing body parts with prosthetics and enhancing our brains, there’s another way we’ll be able to gain immortality. Both Dr. Kaku and Kurzweil firmly believe that the advances in brain-computer interfaces will eventually allow us to upload our consciousness to machines. Scientists still have no clue how the brain works, how the billions of neurons form connections that result in learned behavior, or what dreaming is. But once these secrets are known (which might never actually happen) and we know how our brain functions, as well as what the “consciousness switch” is, the possibilities are endless. 
To get an idea of what’s possible, check out Black Mirror’s episode Playtest. The brain-computer interface for the game is so advanced that the player can’t distinguish between what’s real and what isn’t. I don’t want to spoil anything, but get ready for a mind fuck. Black Mirror does a great job of weaving technology with a dystopia that we might inhabit, showing a darker side of our society. It’s on Netflix, so check it out. Elon Musk sure does. He claims that the chances of us living in “base reality” is one in billions. I’d recommend watching the 3-minute video. His logic is as follows: we had Pong some 40 years ago. Two rectangles and a dot were rendered on-screen for what we called a videogame. Today, we have games with realistic graphics and they keep getting better every year. Better yet, virtual and augmented reality are right around the corner, pushing the boundaries of gaming. Eventually, we’ll have the technology to create simulated worlds that are indistinguishable from reality. Therefore, Musk claims, it is likely that we are living in an ancestor simulation created by an advanced future civilization some 10,000 years from now. The album’s 7th song, “Simulation” explores the idea that our reality might not be what it seems. Think of it this way — the brain and nervous system which we use to automatically react to the environment around us is the same brain and nervous system which tells us what the environment is. Throughout the song, the “patient” is having thoughts that challenge the simulation they are living in. They are — in a sense — waking up. A darker voice, which I believe is meant to represent the ones running the show, has to reprimand the patient, reminding them that they “...only exist because we allow it”. To control the situation, the patient is to be sedated with blue comfort, a reference from The Matrix, which will make them forget they’re living in a simulation. Blissful ignorance. I won’t try to explain this one. Just watch the video. And here’s a quote from that man that might get your attention: Imagine an entity so intelligent... ...but that’s just it. You can’t imagine it. In the second part to his article on AI, Tim Urban compares this to a chimp being unable to understand a skyscraper is not just a part of its environment, but that humans built it. It’s not the chimp’s fault or anything, its brain is just not made to have that level of information processing. The same thing will happen when we build a machine with the collective knowledge of some 200,000 years of Homo Sapien existence. Therefore, there is no way to know what it will do or what the consequences will be. Tim depicts our situation with this entity, what he refers to as Artificial Superintelligence (ASI), beautifully: Mark Zuckerberg is right in saying we should be hopeful of the amount of good AI could do, but some of the smartest minds in existence are genuinely concerned. Stephen Hawking acknowledges that the successful creation of an AI will be the biggest event in history, but warns it could also end mankind. Elon Musk founded a research company OpenAI as a way to “neutralize the threat of a malicious artificial super-intelligence”. “Creating God” describes AI as a modern messiah, “the very last invention man would ever need”. It paints the picture of a utopia where this intelligence exists. At the same time, the song suggests that we could be “summoning the demon”, unable to control the outcomes. 
We could just be its stepping stone, as our existence after its creation becomes irrelevant. The album wraps up with a 15-minute eargasm. I can’t produce words that will do “Exist” any justice. As the band described it, it’s like listening to what the Big Bang might’ve sounded like. Neil deGrasse Tyson makes a cameo at the end of the song that serves as a reminder that our problems and conflicts are minuscule in the grand scheme of things. We’re all a part of the same universe and once we as a society realize this, we can truly make progress. Here’s the full thing: The Stage is an exceptional album, in my opinion. The band’s intentions were for fans to educate themselves, or be a bit more aware of what’s going on in this area. We can enjoy it as a rock album as well as explore the ideas behind the lyrics. I had an awesome time writing this, digging up things I’ve read and seen and unifying them in a way so others can hopefully become more interested as well. And come on, don’t tell me that the idea that we’re living in a simulation isn’t thought-provoking. Tap the ❤ button below :) My name’s Carlos and I generally write about personal development, tech, and entrepreneurship. Hit me up on Twitter! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Software engineer. Focused on building cool shit on Ethereum 🚀
Matt Harvey
558
6
https://blog.coast.ai/continuous-video-classification-with-tensorflow-inception-and-recurrent-nets-250ba9ff6b85?source=tag_archive---------3----------------
Continuous video classification with TensorFlow, Inception and Recurrent Nets
A video is a sequence of images. In our previous post, we explored a method for continuous online video classification that treated each frame as discrete, as if its context relative to previous frames was unimportant. Today, we’re going to stop treating our video as individual photos and start treating it like the video that it is by looking at our images in a sequence. We’ll process these sequences by harnessing the magic of recurrent neural networks (RNNs). To restate the problem we outlined in our previous post: We’re attempting to continually classify video as it’s streamed, in an online system. Specifically, we’re classifying whether what’s streaming on a TV is a football game or an advertisement. Convolutional neural networks, which we used exclusively in our previous post, do an amazing job at taking in a fixed-size vector, like an image of an animal, and generating a fixed-size label, like the class of animal in the image. What CNNs cannot do (without computationally intensive 3D convolution layers) is accept a sequence of vectors. That’s where RNNs come in. RNNs allow us to understand the context of a video frame, relative to the frames that came before it. They do this by passing the output of one training step to the input of the next training step, along with the new frames. Andrej Karpathy describes this eloquently in his popular blog post, “The Unreasonable Effectiveness of Recurrent Neural Networks”: We’re using a special type of RNN here, called an LSTM, that allows our network to learn long-term dependencies. Christopher Olah writes in his outstanding essay about LSTMs: “Almost all exciting results based on recurrent neural networks are achieved with [LSTMs].” Sold! Let’s get to it. Our aim is to use the power of CNNs to detect spatial features and RNNs for the temporal features, effectively building a CNN->RNN network, or CRNN. For the sake of time, rather than building and training a new network from scratch, we’ll... Step 2 is unique so we’ll expand on it a bit. There are two interesting paths that come to mind when adding a recurrent net to the end of our convolutional net: Let’s say you’re baking a cake. You have at your disposal all of the ingredients in the world. We’ll say that this assortment of ingredients is our image to be classified. By looking at a recipe, you see that all of the possible things you could use to make a cake (flour, whisky, another cake) have been reduced down to ingredients and measurements that will make a good cake. The person who created the recipe out of all possible ingredients is the convolutional network, and the resulting instructions are the output of our pool layer. Now you make the cake and it’s ready to eat. You’re the softmax layer, and the finished product is our class prediction. I’ve made the code to explore these methods available on GitHub. I’ll pull out a couple interesting bits here: In order to turn our discrete predictions or features into a sequence, we loop through each frame in chronological order, add it to a queue of size N, and pop off the first frame we previously added. Here’s the gist: N represents the length of our sequence that we’ll pass to the RNN. We could choose any length for N, but I settled on 40. At 10fps, which is the framerate of our video, that gives us 4 seconds of video to process at a time. This seems like a good balance of memory usage and information. The architecture of the network is a single LSTM layer with 256 nodes. 
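The "gist" and the network definition referenced here aren't reproduced in this excerpt. Below is a minimal sketch of the rolling 40-frame sequence builder and a TFLearn network along the lines described in this and the next paragraph, assuming 2,048-d pool-layer features per frame; the helper names, labeling choice and training call are illustrative.

```python
# Sketch only: roll per-frame CNN outputs into 40-frame sequences and feed them
# to a small TFLearn LSTM network. The feature size (2,048-d pool-layer output),
# helper names and training call are assumptions for illustration.
from collections import deque
import numpy as np
import tflearn

N = 40  # 4 seconds of video at 10 fps

def to_sequences(frame_features, frame_labels):
    """Build overlapping N-frame sequences; each sequence takes its last frame's label."""
    window = deque(maxlen=N)          # the oldest frame falls off the front
    X, y = [], []
    for features, label in zip(frame_features, frame_labels):
        window.append(features)
        if len(window) == N:
            X.append(np.array(window))
            y.append(label)
    return np.array(X), np.array(y)   # y should be one-hot for the loss below

def build_network(feature_dim=2048, num_classes=2):
    net = tflearn.input_data(shape=[None, N, feature_dim])
    net = tflearn.lstm(net, 256)                       # single 256-unit LSTM layer
    net = tflearn.dropout(net, 0.8)                    # keep_prob, i.e. "a dropout of 0.2"
    net = tflearn.fully_connected(net, num_classes, activation='softmax')
    net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy')
    return tflearn.DNN(net)

# model = build_network()
# model.fit(X, y, validation_set=0.1, n_epoch=10, batch_size=32)
```

For the first method discussed below, the same sequence builder can be fed the 2-d softmax predictions instead of the pool-layer features, with feature_dim adjusted accordingly.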
This is followed by a dropout of 0.2 to help prevent over-fitting and a fully-connected softmax layer to generate our predictions. I also experimented with wider and deeper networks, but neither performed as well as this one. It’s likely that with a larger training set, a deeper network would perform best. Note: I’m using the incredible TFLearn library, a higher-level API for TensorFlow, to construct our network, which saves us from having to write a lot of code. Once we have our sequence of features and our network, training with TFLearn is a breeze. Evaluating is even easier. Now, let’s evaluate each of the methods we outlined above for adding an RNN to our CNN. Intuitively, if one frame is an ad and the next is a football game, it’s essentially impossible that the next will be an ad again. (I wish commercials were only 1/10th of a second long!) This is why it could be interesting to examine the temporal dependencies of the probabilities of each label before we look at the more raw output of the pool layer. We convert our individual predictions into sequences using the code above, and then feed the sequences to our RNN. After training the RNN on our first batch of data, we then evaluate the predictions on both the batch we used for training and a holdout set that the RNN has never seen. No surprise, evaluating the same data we used to train gives us an accuracy of 99.55%! Good sanity check that we’re on the right path. Now the fun part. We run the holdout set through the same network and get... 95.4%! Better than our 93.3% we got without the LSTM, and not a bad result, given we’re using the full output of the CNN, and thus not giving the RNN much responsibility. Let’s change that. Here we’ll go a little deeper. (See what I did there?) Instead of letting the CNN do all the hard work, we’ll give more responsibility to the RNN by using output of the CNN’s pool layer, which gives us the feature representation (not a prediction) of our images. We again build sequences with this data to feed into our RNN. Running our training data through the network to make sure we get high accuracy succeeds at 99.89%! Sanity checked. How about our holdout set? 96.58%! That’s an error reduction of 3.28 percentage points (or 49%!) from our CNN-only benchmark. Awesome! We have shown that taking both spatial and temporal features into consideration improves our accuracy significantly. Next, we’ll want to try this method on a more complex dataset, perhaps using multiple classes of TV programming, and with a whole whackload more data to train on. (Remember, we’re only using 20 minutes of TV here.) Once we feel comfortable there, we’ll go ahead and combine the RNN and CNN into one network so we can more easily deploy it in an online system. That’s going to be fun. Part 3 is now available: Five video classification methods implemented in Keras and TensorFlow From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder of Coastline Automation, using AI to make every car crash-proof. Practical applications of deep learning and research reports from the road.
Oxford University
237
19
https://medium.com/oxford-university/the-future-of-work-cf8a33b47285?source=tag_archive---------4----------------
The future of work – Oxford University – Medium
Technology has always changed employment, but the rise of robotics and artificial intelligence could transform it beyond recognition. Researchers at Oxford are investigating how technology will shape the future of work — and what we can do to ensure everyone benefits. In a famous 1930 talk, John Maynard Keynes imagined a future 100 years hence in which technological progress automated much of human labour. By 2030, he estimated, we could all enjoy a 15-hour working week. A lot will need to change in the next decade for that to become a reality, but it’s not impossible. Right now, advances in artificial intelligence and robotics promise machines that will take on all kinds of human tasks. Digital communication is creating an internet-dwelling labour force that can work remotely and on demand. And the self-employed are finding that new technological services like Uber and Airbnb can provide a flexible way to make a living. But phenomena like these give rise to a cascade of effects — not all necessarily desirable — that are fiendishly difficult to perceive and predict. It’s perhaps not surprising, then, that the future of work is a topic of increasing fascination for University of Oxford academics. Both the Oxford Martin School and Green-Templeton College now run specific programmes that focus on the topic, with plenty of researchers — from the Departments of Engineering Science and Sociology to those of Politics and Economics — grappling with its complexity. ‘We see a need for bringing together different perspectives around the study of work,’ explains Dr Marc Thompson, a Senior Fellow at Saїd Business School and the Director of the Green-Templeton College Future of Work Programme. ‘Our role as academics is to contribute to the debate, both in terms of theory and to raise challenging questions and issues for those in government and industry. What will happen as a result of these advances? How will it affect people? Whose interests are being pursued? And what are the long-term implications?’ A series of recent studies from the University cut straight to the chase of technology’s impact on employment, focusing on how robotics and automation will affect the jobs that humans currently undertake. The authors, Dr Carl Benedikt Frey (@carlbfrey) and Prof Michael Osborne (@maosbot), come from quite different backgrounds: Frey is an economist interested in the transition of industrial nations to digital economies, Osborne an engineer focused on creating machine-learning algorithms. Together, they’re co-directors of the Programme on Technology and Employment at the Oxford Martin School. ‘It would be fair to ask why I’m doing work related to economics while we’re sitting here in the Department of Engineering,’ admits Osborne, gesturing to his surroundings. ‘But I’ve always had some interest in thinking about what machine-learning could mean for society beyond the industrial applications we usually consider, so when Carl approached me to speak about algorithms and technologies used in automation, and their effects on employment, it seemed like a natural fit.’ This is, of course, exactly the kind of multidisciplinary work the University excels at, and the reason the Oxford Martin School was established. Each of its programmes brings together researchers from different fields to tackle complex global issues that can’t be solved by academics from a single discipline. 
Since meeting, the pair has set about developing ways to analyse which jobs that exist today could be at risk of being taken over by robots or artificial intelligence software in the next 20 years. First, they gathered together ‘as many smart people as [they] could’ to decide on 70 job roles that definitely could or could not be automated in the next 20 years. For example, they collectively decided that switchboard operators and dishwashers could definitely be replaced, while the clergy and magistrates certainly couldn’t. The pair combined this list with data from the US Department of Labor’s O⋆NET system — a database which describes the different skills relevant to specific occupations. Osborne then built an algorithm that could learn from both pools of data to establish the kinds of skills that were common to automatable jobs. When shown other occupations and the skills they require, the software can classify them with a probability of being either automatable or non-automatable. The pair found that the jobs least likely to be automated are those that require skills of creative intelligence, social intelligence or physical dexterity. These are what they refer to as engineering bottlenecks: current limits to technology that make humans irreplaceable. Osborne points out that it’s perfectly possible, for instance, to have an algorithm churn out an endless sequence of songs, but almost impossible to have it create a hit. Similarly, chat-bots may be able to communicate with you but they can’t negotiate a deal, and robots can assemble objects on a well-defined production line, but they can’t perform a fiddly task like making a cup of tea in your messy kitchen. In each case, it’s because humans draw on a huge wealth of tacit knowledge about culture, emotion, human behaviour and the physical environment that’s hard to encode in a way that a machine can act upon. But, even with those bottlenecks, the results suggest that as many as 47% of US jobs are at risk from automation over the course of the next two decades. It’s worth bearing in mind that the figures explain which jobs are theoretically automatable, rather than destined to be automated. ‘That may seem like a fiddly point,’ says Osborne, ‘but this analysis doesn’t take into account other factors that we absolutely do believe will have an impact on whether an occupation is taken over by a machine, such as human wage levels, social acceptance, and the creation of new jobs.’ But however you look at it, the numbers are difficult to ignore. There’s an intuitive counter-argument to the claims that their analysis makes: for centuries, new technologies have been invented that have pushed humans out of work, but by and large most of us still continue to have jobs. In fact researchers elsewhere in the University have shown that the amount of work we all perform remains steadfastly consistent, irrespective of technological change. Jonathan Gershuny, Professor of Sociology and Director of the Centre for Time Use Research, has spent a large part of his career tracing the way that we all use our time — to work, play, rest and everything else. ‘Fundamentally, there are three realms of activity,’ he explained from the bay window of his Woodstock Road office. ‘There’s paid work, unpaid work and consumption.’ Paid work is just that: the tasks we carry out in exchange for money, be it mining coal, writing a book or performing brain surgery. 
Unpaid work, meanwhile, is formed of tasks that you could pay someone else to do for you (but for whatever reason don’t), such as cooking, cleaning, gardening or childcare. And consumption is all the activity you absolutely couldn’t pay someone else to do for you — your night’s sleep, say, or eating lunch. ‘Why am I telling you all this?’ asks Gershuny, with a grin. ‘Well, when you define work quite widely like this, you arrive at a really quite extraordinary discovery, which is that work time — that is the sum of paid and unpaid work time — doesn’t change very much. Looking at all the data we have access to, the total is pretty constant, at about 60 hours per week.’ That’s just over a third of our 168-hour week, and a little more than the approximately 50-hour chunk we manage to spend sleeping. He points to decades of evidence accumulated by his team — in countries including Australia, Canada, Israel, Slovenia, France, Sweden, the Netherlands and plenty more — that confirm the trend, as well as working time regulations from as far back as the Industrial Revolution. His latest dataset — a huge survey of British residents carried out in 2015 — was being downloaded in full the day we met, but a preliminary analysis already suggested that his observation holds true. ‘The truth is, we need work for various reasons: a time structure, a social context, a purpose in life,’ he explains. Indeed, what many people citing Keynes’ famous talk about the future fail to mention is that he went on to suggest that ‘there is no country and no people... who can look forward to the age of leisure and of abundance without a dread.’ In other words, he thought that most us couldn’t really begin to comprehend the reality of not working. Gershuny agrees, arguing that humans will simply endeavour to find new types of work to do in order to busy themselves, whether the robots take over the jobs we currently possess or not. Dr Ruth Yeoman, a Research Fellow at the Saïd Business School who researches meaningful work in organisations and systems, points out that the human desire to find meaning in work is hard to ignore. She explains that the drive to work is so strong that people seek positive meaning in work that is considered by many people to be dirty, low status or poorly paid. ‘Hospital cleaners, for instance, interpret their work to be meaningful and worthwhile because they enlarge the scope of that work in their own minds,’ she explains. This phenomenon allows humans to justify all kinds of work to themselves as useful and relevant, it seems, regardless of what it actually is. Frey and Osborne aren’t so confident that humans are resourceful enough to create new work for themselves, though. Frey has actually studied the rate at which new jobs are being generated as a result of technological change. His findings suggest that about 8.2% of the US workforce shifted into new types of jobs — that is, roles associated with technological advances — during the 1980s. In the 1990s the figure fell to 4.4% and in the 2000s it dropped to just 0.5%. The evidence suggests that the new industries we might assume to be the salvation of the labour force — such as web design or data science — aren’t creating as many new positions as we may hope. Part of the reason for that, argues Osborne, is that many of the new job roles being created are related to software, rather than hard, physical goods. ‘Software is pretty cheap with next to zero marginal cost of reproduction,’ he explains. 
That means that a small group of people can have a great idea and easily turn it into a product that’s used the world over, while barely growing the size of its team. The smartphone messaging service WhatsApp is a prime example: it was purchased by Facebook for $19 billion in 2014, when it served 700 million users. At the time, it had just 55 employees. Counting specific jobs may, however, be overly simplistic when it comes to thinking about how the working lives of real people are set to change. ‘People often think about the work that people do as a monolithic indivisible lump of stuff,’ explains Daniel Susskind (@danielsusskind), a Lecturer in Economics at Balliol College and co-author of a new book called The Future of the Professions. ‘The problem is, that encourages the view that one day a lawyer will arrive at work to find an algorithm sitting in his chair, or a doctor turn up to a robot in her operating theatre, and their jobs will both be gone.’ Instead, he argues, we should be focusing on the separate tasks that make up job roles. Susskind co-wrote his new book with his father, Richard Susskind (@richardsusskind), whose Oxford DPhil considered the impact of artificial intelligence on law. That was back in the 1980s, when AI systems were rudimentary and typically based on rules gleaned from human understanding. But five years ago father and son — the latter then working in the Policy Unit at 10 Downing Street — realised that a second wave of artificial intelligence was being developed that could have profound effects on professional careers. Since, they’ve been researching how technology might affect the working lives of lawyers, doctors, teachers, architects and the rest of the professions. ‘Not everything that a professional does is creative, strategic or complex,’ explains Susskind. ‘So while many professionals might think that all their work lies on one side of [Frey and Osborne’s] engineering bottlenecks, actually many of the tasks they perform are amenable to computerisation.’ For most, that means it’s unlikely that they’ll simply lose their job to technology, at least in the near future — but they can expect to see a significant change in the sorts of things they’re asked to do. In their book, the Susskinds describe twelve new roles that might appear within the professions — such as process analysers, knowledge engineers, data scientists and empathisers. ‘These are roles that sound unfamiliar to traditional professionals, that require skills and abilities that many of them are unlikely to have at this moment in time,’ they explain. We’re already seeing professionals adapt so that they can work alongside more intelligent technological systems, though. Take, for instance, your bank manager. When you used to approach them for a loan, they’d carefully make a decision on whether or not you were a good risk, then either give you the money or send you home. Now, an algorithm determines whether or not you’re awarded the cash, and yet bank managers still exist. The role has simply changed, to become a customer service and sales job rather than an analytical or technical role. Not everyone will be as lucky as the professionals whose jobs merely metamorphose, because if all of the tasks that make up a job are automatable, the job no longer needs to exist. 
Craig Holmes (@CraigPHolmes), a Fellow in Economics at Pembroke College and Senior Research Fellow at the Institute for New Economic Thinking, has been studying shifts in occupational structure of labour markets, and how they’ve moved away from middle-skilled work, with more people now doing high-skilled or low-skilled work. This phenomenon — referred to as the hollowing out of the labour market — isn’t in itself new: middle-skilled factory workers have been losing their jobs to robots for decades, for instance. But the pace of technological development is now threatening other middle-skilled occupations that in the past we’ve assumed could only be done by humans. Job categories defined as associate professionals, for instance — the people that provide technical services that keep trade, finance and government running — appear increasingly likely to be taken over by machines. ‘In the case of, say, paralegals, there are now pieces of software that can sift through thousands of documents, pull out relevant precedents, and put them together using a very simple format, without requiring any human involvement,’ explains Holmes. ‘So a traditionally middle-tier research job can be perfectly performed by technology.’ The same story could play out in other sectors: large datasets of historical case notes and information from wearables could allow computers to make straightforward medical diagnoses, say, while smarter algorithms might remove the work of number-crunching accountants. Like car factory workers replaced by robots in the past, Holmes imagines a number of possible futures for those discharged from mid-tier roles. Some, like the bank manager, will be able to assume different roles with similar titles. A small number may move upwards into roles that aren’t yet automatable. Others, sadly, may have to assume lower-skilled jobs or face unemployment. The nature of those lower-skilled jobs will of course change too. The work of Frey and Osborne suggests that many low-skilled jobs — such as call centre workers, data entry clerks and dishwashers — will be readily automated in the future. ‘In some cases, the cost of technology will be so low that there’s no wage that people could happily accept that would make the job sustainable,’ admits Holmes. ‘In fast food restaurants, for instance, you can replace someone who takes an order with an iPad that will last for years. Nobody would accept a job that paid wages that low.’ But it’s not perhaps quite so gloomy as that, as personal service jobs will likely still require a human touch. ‘We’ll probably see an increase in the number of low-skill service jobs, because people value human interaction and many of those jobs currently seem not to be readily automatable,’ suggests Holmes. ‘That will provide more jobs, they just won’t be great jobs.’ While technology may be the mechanism through which many jobs are lost, though, it might very well also be the thing that enables people to take up new lower-skilled positions. ‘There’s been an explosion in connectivity around the world,’ explains Professor Mark Graham (@geoplace) from the Oxford Internet Institute. ‘Something like 3.5 billion people are now online. And that has some significant repercussions in terms of what work is, where it’s done and how it happens.’ Graham has been travelling the world to talk to people who find themselves in a new kind of labour market. 
In particular, he’s been interviewing individuals who perform work from home, provided to them by a slew of websites such as Amazon’s Mechanical Turk, UpWork, and ClickWorker. These sites all allow companies and individuals to outsource tasks: potential employers simply post a description of what they need doing to a website, then people interested in doing the work bid for it. The employer chooses someone to do the work, based on a combination of price, listed skills and ratings from previous employers; the worker carries out the task, gets paid, then moves on to another piece of work. The tasks being doled out vary — from transcription and translation to new kinds of work such as tagging images for artificial intelligence systems — but much of it is currently difficult or expensive to automate. Technology has also created legions of new workforce members in more traditional sectors, such as transportation, hospitality, catering, cleaning and delivery. ‘There are increasingly more ways of commodifying bits of everyday life: using your car to be an Uber driver; your apartment to be an Airbnb host; your bicycle to be a Deliveroo rider; or your broom to be a Task Rabbit cleaner,’ explains Graham. This is what’s become known as the ‘sharing’ or ‘gig’ economy. Whether it’s Uber, Airbnb or Amazon’s Mechanical Turk, the business plan is much the same: create a digital platform which makes it easier to link a customer, who wants a service to be performed, with someone who’s willing to provide it, for a (very) competitive fee. These new styles of working certainly bring some benefits: apparent flexibility for workers, more efficient use of existing resources and equipment, and reasonable prices for those seeking services. But, as Jeremias Prassl, an Associate Professor of Law and Fellow of Magdalen College, warns, this new workforce is potentially vulnerable. ‘Uber acts like an employer: it sets your wage, tells you the route to drive, hires you, and fires you if your rating falls too low,’ he explains. ‘Under any classical analysis, Uber performs all the usual employer functions. But in its contracts with “driver-partners”, the platform explicitly denies employer status, suggesting that the worker is very much a contractor. Legally, and through the language it uses, Uber tries to deny the fact that it offers employment.’ Through so doing, the company is able to avoid paying social security, pension contributions, redundancy pay and so on — all the usual rights an employee might benefit from. But Prassl, who’s written a book about the topic, points out that these kinds of contracts are nothing new. ‘From the perspective of an employment lawyer, zero hours contracts and the gig economy are old problems,’ he explains. ‘We’ve been grappling with the rise of so-called “non-standard work” for the last 30 or 40 years. It’s just that now they’re receiving more attention and sustained media coverage.’ The problem, as Prassl sees it, is that employment law is currently based on an old binary system. If you’re an employee you get rights — to, say, sick pay, notice of dismissal or paid holiday. But if you’re a contractor, you’re not afforded any of those rights. Employment law currently boils down to a simple question: How do you define whether or not someone counts as an employee? ‘What my research suggests is that maybe we should turn the problem on its head,’ he explains. 
‘We could say instead: Who’s the employer?’ It seems like a subtle difference but, with the shoe on the other foot, he suggests crowd workers would be able to enjoy some kind of employment law protection. In this upended scenario, everyone could benefit from existing minimum standards like the minimum wage, working time regulations and discrimination protection, with their provision accounted for by whoever is legally deemed to be the employer. If companies failed to comply, workers could litigate employers in the knowledge that the damages were definitely owed to them. It’s not just Prassl that’s worried about the vulnerability of employees. ‘One of the issues is that we confuse work with jobs,’ points out Ruth Yeoman. ‘There’s an awful lot of work in the world that has to be done, and one of the problems when we think about the future of work is how it all gets converted into jobs for which people will be paid. Sometimes people may contribute to society not through paid work, but through some other mechanism: voluntary work, say, or caring.’ And while those tasks may be hard work, or may not pay, they are necessary and many of them must be done by humans. That’s why Stuart White (@StuartGWhite), Associate Professor from the Department of Politics and International Relations, is interested in how we could ensure everyone enjoys a basic standard of living — a concept he’s written about in the book Democratic Wealth. he explains. White’s suggestion is that no tests of means or willingness to take a job would be imposed, so that everyone in the country received a basic payment every month. It’s worth noting that the idea is not intended to make everyone rich — far from it. Instead, it’s a means of giving individuals more flexibility, affording them power to decide when and how to be contributive and productive. ‘It’s a way of ensuring you don’t have people desperately scrambling into jobs to make ends meet,’ White explains. In turn, he argues, employers would make some of the least appealing jobs more pleasant — they’d be forced to, otherwise nobody would choose to do them. Numerous mechanisms for putting such a policy into action have been proposed in the past. One option is to divert existing benefits and tax relief into a basic income that’s shared equally amongst the population. If those contributions didn’t stretch far enough, they could be topped up with revenue from further taxation — from land value tax, suggests White. Alternatively, the income could be provided by a state-owned investment fund from which the returns would be shared out equally. ‘There are lots of philosophical arguments about whether or not it’s all a good idea,’ he concedes. ‘But we’re moving into a world where there’s increased insecurity around work. Against that backdrop, a source of income that’s independent of work is a way of rebalancing power relations in the labour market.’ Whether or not you agree with the concept of a universal citizen’s income or the reform of employment law, these concepts are indicative of the kinds of discussions that Oxford researchers are increasingly leading. ‘I think the University needs to be asking these kinds of Aristotlean questions about whose interests are being met, who benefits from the changes... the moral questions,’ explains Marc Thompson. 
‘It’s not something we should shy away from.’ Increasingly, then, just as Thompson hoped for when he set up the Green Templeton College Future of Work Programme, Oxford academics are working with business and governments to shape the debate about the future of employment. Frey and Osborne, for instance, have published reports with Citi and Deloitte about the impact of technology on employment; Mark Graham sits on the Department for International Development’s Digital Advisory Panel; and Richard Susskind acts as an IT Adviser to the Lord Chief Justice of England and Wales. What remains, of course, is for policymakers, lawyers and industry officials to take the questions and suggestions raised by academics on board, then work out how best to use technological advance in all our favour. ‘These possibilities afforded by technology, automation and commodification of labour... they can all be shaped by policy, organisational change and simply choosing to do things differently,’ muses Thompson. ‘There are some important choices to be made about how we make use of them.’ Technology will make many jobs redundant, others easier, and create at least some new ones along the way. Keynes’ prediction of a fifteen-hour working week may even come true. But while humans are in charge, we can still choose for there to be some work that’s performed by non-robotic hands. ‘It would be very easy for there to be an automated pub where drinks are served from vending machines,’ concludes Mark Graham. ‘But nobody wants that. Because it would be depressing.’ Written by Jamie Condliffe, a science and technology writer based in London. He tweets @jme_c. In keeping with one of the themes of the article we used 99designs to find an illustrator and worked with slouise. Produced by Christopher Eddie, Digital Communications Office, University of Oxford. Oxford is one of the oldest universities in the world. We aim to lead the world in research and education. Contact: digicomms@admin.ox.ac.uk
Maciej Lipiec
766
8
https://medium.com/k2-product-design/the-future-of-digital-banking-236ad65e4c76?source=tag_archive---------5----------------
The Future of Digital Banking – K2 Product Design – Medium
Our solution is based on three pillars: In the old days, the user interface of a bank was the teller at the branch. From today’s perspective that was inconvenient and time consuming, but the bank had a human face. Now we interact with our banks by clicking on links, menus, and buttons, and filling out forms. But banking apps are often hard to use, overly complex and ugly. A lack of true customer-centricity and technological debt on the back-end make the banking experience frustrating. How can we make digital banking easier, simpler, more personal and more human? By giving it a new face: that of a robot! Meet BankBot. It is the new digital bank teller, personal assistant, and financial advisor. When you sign in to your K2 Bank account, BankBot greets you and asks for orders. The main interface of K2 Bank is instantly familiar if you have ever used Slack (over two million people use it in the office every day), Facebook Messenger, an SMS app, or IRC (then you’re really old school!). It’s a never-ending stream with a history of communications from the bottom (recent) to the top (oldest) of the screen. You type your command or question, and BankBot answers. BankBot understands natural language, but it pays special attention to keywords that trigger actions, such as a new transfer, a search in history, or a credit card cancellation. Just type “Send 100 EUR to Anna” and BankBot will search its database for possible recipients matching “Anna” and let you choose the one you mean. Or you can add a new recipient. BankBot then sends a confirmation code to your cell phone and asks you to type it in, and it’s done. You don’t need to click or move your hands from the keyboard. Of course this is the easiest scenario (similar to sending money via SquareCash or SnapCash), but almost every operation can be completed this way. Typing a recipient’s name will show you recent transactions with her from your account history and an option for a new payment. Typing “USD” will show you the currency exchange rate. If you need help, type “help”. If you need to contact human staff at the bank, type “human” and you can chat with a real person from customer service instead of a bot. Or type “concierge” if you’re a Private Banking client. There is also a way to access features using the hamburger menu at the bottom: it opens a list of options, just like typing “/” (slash) in Slack. Personal Finance Managers (PFMs) for controlling the home budget are popular additions to banking systems. But they are complicated, often hidden deep in nested menus, and they demand a lot of the user’s attention. Do people really use them? Steven Walker of Forrester Research has written: BankBot can provide just that. You can ask “Expenses this month” or “Car expenses”, and it will show you a simple chart with the relevant information. This is a “pull” mechanism, but BankBot can also be proactive, pushing important information to the user. It can warn you that you are close to exceeding your monthly budget. It can remind you about regular payments you usually make each month. It can remind you to pay off your credit card. Or pay your tax. It can suggest better options to save or invest your money, and show you how much more you can earn. It can offer you a loan when you probably need it. Or offer travel insurance when it knows you’ve just bought plane tickets. Or up-sell you a better account or credit card when it notices that you’ve got a pay rise. Or it can alert you when you should do something with your stock portfolio. 
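To make the keyword-trigger behaviour described above a little more concrete, here is a tiny, hypothetical Python sketch of routing a chat message to an action. This is not K2’s implementation (BankBot is built on a dedicated natural language engine, as noted later); the intent table, patterns and action names are all invented for illustration.

```python
import re

# Hypothetical intent table: keyword patterns mapped to action names.
# A production bot would use a proper NLU engine; regexes only illustrate
# the "keywords trigger actions" idea described above.
INTENTS = [
    (re.compile(r"^send (?P<amount>\d+(?:\.\d+)?) (?P<currency>[A-Z]{3}) to (?P<name>.+)$", re.I), "transfer"),
    (re.compile(r"\bexpenses\b", re.I), "show_expenses"),
    (re.compile(r"\bhuman\b", re.I), "handover_to_agent"),
    (re.compile(r"\bhelp\b", re.I), "show_help"),
]

def route(message: str):
    """Return (action, extracted slots) for a chat message, or a fallback."""
    for pattern, action in INTENTS:
        match = pattern.search(message.strip())
        if match:
            return action, match.groupdict()
    return "fallback_to_nlu", {}

print(route("Send 100 EUR to Anna"))
# ('transfer', {'amount': '100', 'currency': 'EUR', 'name': 'Anna'})
print(route("Car expenses this month"))
# ('show_expenses', {})
```

Everything interesting (recipient matching, the confirmation code, the actual transfer) would happen behind those action names.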
Chat banking is nice on the desktop, but it’s even more effective on mobile — type a few words and it’s done, just like sending an SMS. Or you can talk to BankBot (speech2text). Authentication can be provided by fingerprint sensor. You can receive important alerts as push notifications on your phone or smartwatch, and immediately take action (or dismiss). You can even get discount on your health insurance based on physical activity data from your fitness band or Apple Watch. BankBot can also live inside smart devices like the Amazon Echo, which provides its own API for developers — smart home and smart banking mixed together. Or inside the Facebook Messenger chat. The second Payment Services Directive is to be transposed into national regulations across the European Union from 2016. Its goal is to open the banking market. PSD2 will force banks to provide access via APIs to their customer accounts and provide account information to third party service providers if the account holder wishes to do so. This is called „Access to the Account” (XS2A) and it’s not optional, banks will have to evolve as third parties enter their space. PSD2 defines traditional financial institutions (banks) as “Account Servicing Payment Service Providers” (AS PSP), and new players as “Account Information Service Providers” (AISP) or “Payment Initiation Service Providers” (PISP). Both PISPs and AISPs will have to register with the “competent authority” in their home Member State for security reasons. What are the implications of this for our system? The quality of banking user interfaces will be extremely important, because bank’s clients could choose to manage their account from third party provider app with better UX or functionality, cutting themselves from any direct communication with their bank. In this case the bank will be reduced to a „dumb pipe” in the value chain. But fighting this by providing to the third parties only the minimum APIs required may be a bad strategy for banks. We think they should be more open, actively partnering with other financial institutions, retailers, merchants and startups. We imagine K2 Bank solution providing an AppStore based on its APIs. Users will be able to give permission to third party service providers in a way you allow applications to access your Facebook or Twitter account today. You will be able to buy stuff at your authorized retailer without logging into your bank (or without visiting the retailer site, but from yours bank app). There is no need to provide credit card number, probably even shipping address or any data. The bank can automatically offer you a purchase by installments. Or it can give you a discount, because of your history of frequent past transactions online and offline with this retailer (there will be no need for customer loyalty cards anymore). The bank can become an advertising channel for the retailers too, offering personalized promotions for its customers. This should be opt-out, but if your cell-phone contract is ending, and BankBot messages you with a really great offer for a plan with a cheap newest iPhone, and you can buy it instantly with one click, would you mind? By building the thriving ecosystems banks and third parties can both win. And we hope customers will too. If you want to know more about K2 Bank solution, it’s design, technology behind the BankBot, and possibilities of implementation, don’t hesitate to contact us. 
Of course, conversational interfaces like BankBot can be used not only in banking, but also in insurance, online commerce, travel, healthcare and many other industries. Please write to Maciej Lipiec, K2’s User Experience Director, at maciej.lipiec@k2.pl You can read more about K2 Bank in this article at Chatbots Magazine: Also please check out our project on Behance. K2 Internet is a leading digital product design and communications agency in Poland. We develop digital services, apps and websites with a strong focus on user experience. We have long experience partnering with financial institutions: in the last 10 years we have helped to envision, design and develop over 10 transactional systems for the biggest banks in Poland. Stanusch Technologies is K2 Bank’s technology provider for BankBot. The company does research and development on the use of artificial intelligence in business. It carries out projects related to natural language processing and semantic information retrieval, and has become a world leader in the number of virtual advisor/chatbot projects delivered. Product Design Director @ K2.
Camron Godbout
341
10
https://hackernoon.com/tensorflow-in-a-nutshell-part-three-all-the-models-be1465993930?source=tag_archive---------7----------------
TensorFlow in a Nutshell — Part Three: All the Models
Make sure to check out the other articles here. In this installment we will be going over all the abstracted models that are currently available in TensorFlow and describe use cases for each particular model, along with simple sample code. Full sources of working examples are in the TensorFlow In a Nutshell repo. Use Cases: Language Modeling, Machine translation, Word embedding, Text processing. Since the advent of Long Short Term Memory and Gated Recurrent Units, Recurrent Neural Networks have made leaps and bounds above other models in natural language processing. They can be fed vectors representing characters and be trained to generate new sentences based on the training set. The merit of this model is that it keeps the context of the sentence and derives meaning, for example that “cat sat on the mat” means the cat is on the mat. Since the creation of TensorFlow, writing these networks has become increasingly simple. There are even hidden features covered by Denny Britz here that make writing RNNs even simpler; here’s a quick example. Use Cases: Image processing, Facial recognition, Computer Vision Convolutional Neural Networks are unique because they’re designed with the assumption that the input will be an image. CNNs apply a sliding window function to a matrix. The window is called a kernel, and it slides across the image creating a convolved feature. Creating a convolved feature allows for edge detection, which in turn allows a network to detect objects in pictures. The convolved feature to create this looks like this matrix below: Here’s a sample of code to identify handwritten digits from the MNIST dataset. Use Cases: Classification and Regression These networks consist of perceptrons arranged in layers that take inputs and pass information on to the next layer. The last layer in the network produces the output. There are no connections between nodes within a given layer. A layer that has no original input and no final output is called a hidden layer. The goal of this network, as with other supervised neural networks trained with backpropagation, is to make inputs produce the desired trained outputs. These are some of the simplest effective neural networks for classification and regression problems. We will show how easy it is to create a feed forward network to classify handwritten digits: Use Cases: Classification and Regression Linear models take X values and produce a line of best fit used for classification and regression of Y values. For example, if you have a list of house sizes and their prices in a neighborhood, you can predict the price of a house given its size using a linear model. One thing to note is that linear models can be used with multiple X features. For example, in the housing example we can create a linear model given house sizes, number of rooms, number of bathrooms and prices, and then predict the price of a house given its size, number of rooms and number of bathrooms. Use Cases: Currently only Binary Classification The general idea behind an SVM is that there is an optimal hyperplane for linearly separable patterns. For data that is not linearly separable, we can use a kernel function to transform the original data into a new space. SVMs maximize the margin around the separating hyperplane. They work extremely well in high dimensional spaces and are still effective if the number of dimensions is greater than the number of samples. Use Cases: Recommendation systems, Classification and Regression Deep and Wide models were covered in greater detail in part two, so we won’t get too heavy here. 
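The short snippets that originally accompanied each of these sections did not survive extraction here. As a stand-in, below is a hedged sketch of the two building blocks that the Wide and Deep model combines: a plain linear classifier and a small feed forward network. It is written against the modern tf.keras API on synthetic data, so treat it as an illustration rather than the code from the TensorFlow In a Nutshell repo.

```python
# Illustrative sketch only: a linear classifier and a small feed forward (MLP)
# classifier in tf.keras, trained on synthetic data. Shapes and hyperparameters
# are made up; the original repo's snippets used a different (contrib.learn) API.
import numpy as np
import tensorflow as tf

n_features, n_classes = 20, 3
x = np.random.rand(256, n_features).astype("float32")   # fake feature matrix
y = np.random.randint(0, n_classes, size=256)            # fake integer labels

# "Wide" building block: a linear model is just one dense layer with a softmax.
linear_model = tf.keras.Sequential([
    tf.keras.layers.Dense(n_classes, activation="softmax", input_shape=(n_features,))
])

# "Deep" building block: a feed forward network with two hidden layers.
deep_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(n_features,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])

for model in (linear_model, deep_model):
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x, y, epochs=3, batch_size=32, verbose=0)
```

Keeping these two pieces in mind makes the next model easier to picture.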
A Wide and Deep Network combines a linear model with a feed forward neural net so that our predictions have both memorization and generalization. This type of model can be used for classification and regression problems. It allows for less feature engineering while still giving relatively accurate predictions. Thus we get the best of both worlds. Here’s a code snippet from part two’s github. Use Cases: Classification and Regression A Random Forest model takes many different classification trees, and each tree votes for a class. The forest chooses the classification having the most votes. Random Forests are highly resistant to overfitting: you can run as many trees as you want, and they are relatively fast. Give it a try on the iris data with this snippet below: Use Cases: Classification and Regression In the contrib folder of TensorFlow there is a library called BayesFlow. BayesFlow has no documentation except for an example of the REINFORCE algorithm. This algorithm was proposed in a paper by Ronald Williams. The network, trying to solve an immediate reinforcement learning task, adjusts its weights after receiving the reinforcement value at each trial. At the end of each trial, each weight is incremented by a learning rate factor multiplied by the reinforcement value minus a baseline, multiplied by its characteristic eligibility (in Williams’ notation, Δw_ij = α_ij(r - b_ij)e_ij, where the characteristic eligibility e_ij is the derivative of the log of the unit’s output probability with respect to w_ij). Williams’ paper also discusses the use of back propagation to train the REINFORCE network. Use Cases: Sequential Data CRFs are conditional probability distributions that factorize according to an undirected model. They predict a label for a single sample while keeping context from the neighboring samples. CRFs are similar to Hidden Markov Models. CRFs are often used for image segmentation and object recognition, as well as shallow parsing, named entity recognition and gene finding. Ever since TensorFlow was released, the community surrounding the project has been adding more packages, examples and cases for using this amazing library. Even at the time of writing this article there are more models and sample code being written. It is amazing to see how much TensorFlow has grown in these past few months. The ease of use and diversity of the package are increasing over time and don’t seem to be slowing down anytime soon. Co-founder & CTO of Apteo: Researching machine learning techniques to improve investing. Come join us!
Dominik Felix
286
5
https://chatbotsmagazine.com/how-to-create-a-chatbot-without-coding-a-single-line-e716840c7245?source=tag_archive---------8----------------
How to Create a Chatbot Without Coding a Single Line
Chatbots are ready to succeed. If you think you have to hack days or even weeks to create a chatbot, you might be wrong. You don’t have to be aware of any coding skills. Immediately after big players like Facebook Messenger or Skype opened their platform for programmers many tools emerged. With this article I want to give you an introduction to mockup and overview of different tools to build your first chatbot. You’re having an idea? You want to show your use case? It’s definitely recommendable to mockup your story beforehand. First, you may find some bugs in your concept. Moreover, you will be able to explain a showcase to noninvolved people based on the motto: “fake it ’til you make it”. It’s very intuitive storytelling. Just insert what the user says and what the bot responds. Using the settings option, you can edit smartphone models, decide number of fans, and choose a profile picture, a page category and a welcome message. Additional features are buttons, images and quick replies. The whole story acts like a movie by pushing the play button. It can be shared by just one click and it’s possible to save the file as mp4 within the paid plan. Each of the tools supports different platforms. Therefore, please keep in mind that it’s important to choose your platform wisely. Based on the huge range most of the tools make use of Facebook Messenger. Chatfuel is focused on Facebook Messenger. You don’t need any coding skills to get started. It’s simple to create different logic blocks and link them to respective triggers. It offers great plugins e.g. human take-over and a minimalistic AI. In case you were recently starting with bots, I can recommend you this service. Motion provides SMS, Email, web-chat, Facebook Messenger and Slack. Furthermore, it’s possible to link to (other) APIs and hook back to motion. Thus, it operates as a hub. The conversation is built with flowcharts and based on connectors and prepared modules. It just takes a few minutes to get familiar with the procedure. Founder/CEO of Motion AI David Nelson’s “Chatbots Made Easy” api.ai is a great platform for developing chatbots. It has AI support and an intuitive interface. It requires only one click to assemble i.e. small-talk or weather features. On the one hand, it’s possible to run the bot exclusively on their servers. On the other, you can download a nodejs sample code to execute it on your infrastructure. To sum up, API.AI is an advanced service, being the reason why it’s more complicated to build a bot using this tool. Unsurprisingly, it got bought by Google a few days ago. Featured CBM: API.AI “Small Talk” is Now Open! Why is it a Big Deal? Flow XO offers a graphical interface to build so-called flows which define how your bot will operate to received messages or audio. It has a huge list of integrations. As a consequence, it’s more complex than Chatfuel, but also a lot more flexible. Pretty amazing is their support on Messenger, Slack, SMS and Telegram. They’ve an interesting approach to build chatbots. It guides you through 4 steps: design, develop, launch and grow. First, you’ve to design the content: messages, persistent menus, welcome messages and some more. As step 2, it wants you to link messages to triggers and setup curious modules like ‘Offer Human Help’. The launch step leads you through the review process, while the final step focuses on customer retention i.e. schedule messages, user lists, etc. Manychat allows broadcast content from RSS feeds. 
Additionally, it’s possible to link to Yahoo pipelines and broadcast everything you want. It supports scheduled messages, auto posting from RSS, Facebook, Twitter and YouTube, and has a basic mechanism to send specific answers to specific keywords. Watch their pitch to get a better understanding. MindIQ is a DIY Bot Builder platform for businesses focused on Facebook Messenger. You don’t need any coding skills, and they make it dead simple for businesses to build bots. They follow a template approach. Currently, the templates available are media, commerce, and food tech. They also provide tools to link your business tools like Mailchimp to your chatbot. There are many tools on the market. Every tool solves different problems, and each of them uses a different approach to designing user interaction. I really like the simplicity of Chatfuel and the 4-step process of Botsify. Since all of these tools are quite new, I’m super excited and looking forward to seeing the directions in which they will be pursued and developed. BotSpot Vienna, Agentur Volk, Chatbot Ecosystem, Botstack Framework Chatbots, AI, NLP, Facebook Messenger, Slack, Telegram, and more.
Greg Gascon
368
6
https://medium.com/startup-grind/how-invisible-interfaces-are-going-to-transform-the-way-we-interact-with-computers-39ef77a8a982?source=tag_archive---------9----------------
How Invisible Interfaces are going to transform the way we interact with computers
In the mid-nineties, a computer scientist at Xerox PARC theorized the concept of the Internet of Things, albeit with a different name, far before anyone else had and even further still before it had become possible. Even though today we call it by that name, Ubiquitous Computing — as it was then coined by Mark Weiser — imagined a world wherein cheap and ubiquitous connected computing would radically alter the way we use and interact with computers. The idea was ahead of its time. In the world of ubiquitous computing, connected devices would become cheap and, thereby, would exist everywhere. Importantly, these devices would as a result cease to become special or unique — they would become invisible. As we near this utopian world filled with computers, our relationship with them inexorably will change. Each of us will come to interact with dozens of separate devices on a daily basis. As such, we will need to develop interfaces in a way so as not to distract us, as is currently done, but in a way in which to empower us. Or, how Weiser put it, we will need to adopt the concepts of “Calm Technology”. On the face of it, ubiquitous computing is just that, a reality in which computers are everywhere. Of course, with trends relating to IoT, we are nearing this, but we are not there yet. One of the most important implications to come from ubiquitous computing, for example, will be the changes it will make on how we perceive and interact with computers. For instance, think of the electric motor: an old technology that is ubiquitous in the present. Today, there could be dozens of them in a single car. However, when we hit a button to roll down the windows, we don’t think at all about the motor pulling the window down. We simply think about the action of making the window go down. The electric motor is so mundane and ubiquitous in our lives that we don’t even think about it when using it. It is invisible. It is this sort of invisibility that allows the user to take full control of their interactions with a given piece of technology. When using a piece of technology that has become invisible, the user thinks of using it in terms of end goals, rather than getting bogged down in the technology itself. The user doesn’t have to worry how it is going to work, they just make it happen. In another example, Weiser simply states a good pencil “stays out of the way of the writing”. Now, even though technology surrounds us today, we aren’t at this point yet. Gadgets and devices are still special to us in a distracting way. We still not only still marvel at new technology, we are told to by whomever is producing it. But why does this matter? The best way to see how ubiquitous computing will impact us is to examine the way we engineer and interact with the apps that exist today. When creating a web app, for instance, you try to guide or manipulate the user into using your tool as much as possible. When you create a drip marketing email campaign for it, in most cases, you aren’t creating it so that the user needs to use your tool less. You are creating it so they can spend more time and use all of its features. That is to say, the goal isn’t foremost and necessarily to save the user time. Furthermore, there is no question asked as to whether the user aught to spend more time using whatever particular app is being optimized. Within a social media website, each user is given a piece of “social property”. 
A social media platform imbues each social property with a value system — think of the concept of likes, comments or shares — as incentive to spend time on the site. Each user interaction with a social property, whether it be a photo or a comment that is written, is then logged and recorded, so they can easily be rewarded for the time invested. Some social apps, such as LinkedIn, will have us hooked for something as simple as a pageview of our profiles. These actions are further incentivized through the use of gamification. Apps send intrusive notifications, giving you some information about what they are about, but not everything. And this is crucial. Not knowing what is in the notification entices us to open it even further. It goes without saying, this is important for increasing the amount of screen time we give the app. For, if we saw everything in the notification, there would be no point in opening the app. It makes waking up every morning feel like opening a bunch of small presents. And, while it’s a stretch to say that developers are acting nefariously to steal our time, those building our web services and tools should construct them with respect to the user’s guilelessness. Doing so requires adopting principles of invisible or calm technology. Contradiction aside, the most accessible way we can get a glimpse into a future dominated by invisible interfaces is the movie “Her”. Although not the focus of the film, “Her” showcases a future wherein inputs given to devices are done so largely through voice commands. Yes, there are still smartphones, but the majority of interactions take place by simply talking to a given device using natural language. Theodore is able to interact with technology in a manner that is completely at hand. He can ask any sort of question or create any sort of demand without getting bogged down in how the device works. Furthermore, the technology never tries to whisk his attention away from anything. The technology is always there, but it is only in the periphery. According to Weiser, this is one of the key principles of designing calm technology. The device in question should never try to distract or pry the user away from what they are trying to accomplish. Yet, it must always be ready to accept user input. It is calming in the exact opposite way that receiving group chat notifications on your phone is not. We can see this principle of design, in part, at play in the new Apple AirPods. Even though they have yet to be released, they promise to let us interact with the internet without ever needing to look down at our phones. And they are aware of their environment too. They know such things like if they are in an ear or not, and, if they are not, they know to stop playing sound. It’s these small, micro-automations that will further make technology invisible and allow us to focus on whatever it is that we want from the technology and not worry about having to configure it. Other, more simple, examples include the auto-brightness on your phone or its fingerprint scanner. They simply work without any sort of configuration or notification about what they are doing. And more technologies like this are coming. There are, today, even advocacy groups such as Time Well Spent that try to spread awareness about how interfaces and apps can hijack the ways our brains work. Even more promising is that there are companies that are following suit in these designs principles. 
For instance, the upcoming Moment smartwatch is a device which interfaces with the user largely through touch feedback, instead of relying on the screen. All that’s needed now? Better speech recognition. Tech Columnist // Apps Script Dev // Social Media Automator // SEO Specialist. Read more at https://www.gregorygascon.com The life, work, and tactics of entrepreneurs around the world - by founders, for founders. Welcoming submissions on technology trends, product design, growth strategies, and venture investing.
Dhruv Parthasarathy
4.3K
12
https://blog.athelas.com/a-brief-history-of-cnns-in-image-segmentation-from-r-cnn-to-mask-r-cnn-34ea83205de4?source=tag_archive---------0----------------
A Brief History of CNNs in Image Segmentation: From R-CNN to Mask R-CNN
At Athelas, we use Convolutional Neural Networks(CNNs) for a lot more than just classification! In this post, we’ll see how CNNs can be used, with great results, in image instance segmentation. Ever since Alex Krizhevsky, Geoff Hinton, and Ilya Sutskever won ImageNet in 2012, Convolutional Neural Networks(CNNs) have become the gold standard for image classification. In fact, since then, CNNs have improved to the point where they now outperform humans on the ImageNet challenge! While these results are impressive, image classification is far simpler than the complexity and diversity of true human visual understanding. In classification, there’s generally an image with a single object as the focus and the task is to say what that image is (see above). But when we look at the world around us, we carry out far more complex tasks. We see complicated sights with multiple overlapping objects, and different backgrounds and we not only classify these different objects but also identify their boundaries, differences, and relations to one another! Can CNNs help us with such complex tasks? Namely, given a more complicated image, can we use CNNs to identify the different objects in the image, and their boundaries? As has been shown by Ross Girshick and his peers over the last few years, the answer is conclusively yes. Through this post, we’ll cover the intuition behind some of the main techniques used in object detection and segmentation and see how they’ve evolved from one implementation to the next. In particular, we’ll cover R-CNN (Regional CNN), the original application of CNNs to this problem, along with its descendants Fast R-CNN, and Faster R-CNN. Finally, we’ll cover Mask R-CNN, a paper released recently by Facebook Research that extends such object detection techniques to provide pixel level segmentation. Here are the papers referenced in this post: Inspired by the research of Hinton’s lab at the University of Toronto, a small team at UC Berkeley, led by Professor Jitendra Malik, asked themselves what today seems like an inevitable question: Object detection is the task of finding the different objects in an image and classifying them (as seen in the image above). The team, comprised of Ross Girshick (a name we’ll see again), Jeff Donahue, and Trevor Darrel found that this problem can be solved with Krizhevsky’s results by testing on the PASCAL VOC Challenge, a popular object detection challenge akin to ImageNet. They write, Let’s now take a moment to understand how their architecture, Regions With CNNs (R-CNN) works. Understanding R-CNN The goal of R-CNN is to take in an image, and correctly identify where the main objects (via a bounding box) in the image. But how do we find out where these bounding boxes are? R-CNN does what we might intuitively do as well - propose a bunch of boxes in the image and see if any of them actually correspond to an object. R-CNN creates these bounding boxes, or region proposals, using a process called Selective Search which you can read about here. At a high level, Selective Search (shown in the image above) looks at the image through windows of different sizes, and for each size tries to group together adjacent pixels by texture, color, or intensity to identify objects. Once the proposals are created, R-CNN warps the region to a standard square size and passes it through to a modified version of AlexNet (the winning submission to ImageNet 2012 that inspired R-CNN), as shown above. 
On the final layer of the CNN, R-CNN adds a Support Vector Machine (SVM) that simply classifies whether this is an object, and if so what object. This is step 4 in the image above. Improving the Bounding Boxes Now, having found the object in the box, can we tighten the box to fit the true dimensions of the object? We can, and this is the final step of R-CNN. R-CNN runs a simple linear regression on the region proposal to generate tighter bounding box coordinates to get our final result. Here are the inputs and outputs of this regression model: So, to summarize, R-CNN is just the following steps: R-CNN works really well, but is really quite slow for a few simple reasons: In 2015, Ross Girshick, the first author of R-CNN, solved both these problems, leading to the second algorithm in our short history - Fast R-CNN. Let’s now go over its main insights. Fast R-CNN Insight 1: RoI (Region of Interest) Pooling For the forward pass of the CNN, Girshick realized that for each image, a lot of proposed regions for the image invariably overlapped causing us to run the same CNN computation again and again (~2000 times!). His insight was simple — Why not run the CNN just once per image and then find a way to share that computation across the ~2000 proposals? This is exactly what Fast R-CNN does using a technique known as RoIPool (Region of Interest Pooling). At its core, RoIPool shares the forward pass of a CNN for an image across its subregions. In the image above, notice how the CNN features for each region are obtained by selecting a corresponding region from the CNN’s feature map. Then, the features in each region are pooled (usually using max pooling). So all it takes us is one pass of the original image as opposed to ~2000! Fast R-CNN Insight 2: Combine All Models into One Network The second insight of Fast R-CNN is to jointly train the CNN, classifier, and bounding box regressor in a single model. Where earlier we had different models to extract image features (CNN), classify (SVM), and tighten bounding boxes (regressor), Fast R-CNN instead used a single network to compute all three. You can see how this was done in the image above. Fast R-CNN replaced the SVM classifier with a softmax layer on top of the CNN to output a classification. It also added a linear regression layer parallel to the softmax layer to output bounding box coordinates. In this way, all the outputs needed came from one single network! Here are the inputs and outputs to this overall model: Even with all these advancements, there was still one remaining bottleneck in the Fast R-CNN process — the region proposer. As we saw, the very first step to detecting the locations of objects is generating a bunch of potential bounding boxes or regions of interest to test. In Fast R-CNN, these proposals were created using Selective Search, a fairly slow process that was found to be the bottleneck of the overall process. In the middle 2015, a team at Microsoft Research composed of Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, found a way to make the region proposal step almost cost free through an architecture they (creatively) named Faster R-CNN. The insight of Faster R-CNN was that region proposals depended on features of the image that were already calculated with the forward pass of the CNN (first step of classification). So why not reuse those same CNN results for region proposals instead of running a separate selective search algorithm? Indeed, this is just what the Faster R-CNN team achieved. 
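Before looking at how Faster R-CNN generates its proposals, it may help to see the RoIPool idea in code. The sketch below is not taken from any of the papers or repositories referenced in this post; it is a deliberately naive, single-image NumPy illustration (real RoI pooling layers work on batches, handle fractional coordinates and run as GPU ops).

```python
import numpy as np

def roi_pool(feature_map, roi, output_size=(2, 2)):
    """Naive RoI pooling sketch: crop one proposal's region out of a shared CNN
    feature map and max-pool it into a fixed grid, so every proposal ends up
    with features of the same shape regardless of its original size.

    feature_map: array of shape (H, W, C)
    roi: (x0, y0, x1, y1) in integer feature-map coordinates
    Assumes the RoI spans at least `output_size` cells in each dimension.
    """
    x0, y0, x1, y1 = roi
    region = feature_map[y0:y1, x0:x1, :]                    # crop the proposal
    h_bins, w_bins = output_size
    h_edges = np.linspace(0, region.shape[0], h_bins + 1).astype(int)
    w_edges = np.linspace(0, region.shape[1], w_bins + 1).astype(int)

    pooled = np.zeros((h_bins, w_bins, region.shape[2]), dtype=region.dtype)
    for i in range(h_bins):
        for j in range(w_bins):
            cell = region[h_edges[i]:h_edges[i + 1], w_edges[j]:w_edges[j + 1], :]
            pooled[i, j] = cell.max(axis=(0, 1))             # max pool per channel
    return pooled

# One shared feature map, many proposals: each proposal costs only a cheap
# crop-and-pool instead of a full CNN forward pass.
fmap = np.random.rand(25, 25, 512)
print(roi_pool(fmap, roi=(3, 4, 11, 14)).shape)              # (2, 2, 512)
```

The point to take away is that the expensive convolutional forward pass happens once per image, while each of the ~2000 proposals only pays for a crop and a pool.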
In the image above, you can see how a single CNN is used to both carry out region proposals and classification. This way, only one CNN needs to be trained and we get region proposals almost for free! The authors write: Here are the inputs and outputs of their model: How the Regions are Generated Let’s take a moment to see how Faster R-CNN generates these region proposals from CNN features. Faster R-CNN adds a Fully Convolutional Network on top of the features of the CNN creating what’s known as the Region Proposal Network. The Region Proposal Network works by passing a sliding window over the CNN feature map and at each window, outputting k potential bounding boxes and scores for how good each of those boxes is expected to be. What do these k boxes represent? Intuitively, we know that objects in an image should fit certain common aspect ratios and sizes. For instance, we know that we want some rectangular boxes that resemble the shapes of humans. Likewise, we know we won’t see many boxes that are very very thin. In such a way, we create k such common aspect ratios we call anchor boxes. For each such anchor box, we output one bounding box and score per position in the image. With these anchor boxes in mind, let’s take a look at the inputs and outputs to this Region Proposal Network: We then pass each such bounding box that is likely to be an object into Fast R-CNN to generate a classification and tightened bounding boxes. So far, we’ve seen how we’ve been able to use CNN features in many interesting ways to effectively locate different objects in an image with bounding boxes. Can we extend such techniques to go one step further and locate exact pixels of each object instead of just bounding boxes? This problem, known as image segmentation, is what Kaiming He and a team of researchers, including Girshick, explored at Facebook AI using an architecture known as Mask R-CNN. Much like Fast R-CNN, and Faster R-CNN, Mask R-CNN’s underlying intuition is straight forward. Given that Faster R-CNN works so well for object detection, could we extend it to also carry out pixel level segmentation? Mask R-CNN does this by adding a branch to Faster R-CNN that outputs a binary mask that says whether or not a given pixel is part of an object. The branch (in white in the above image), as before, is just a Fully Convolutional Network on top of a CNN based feature map. Here are its inputs and outputs: But the Mask R-CNN authors had to make one small adjustment to make this pipeline work as expected. RoiAlign - Realigning RoIPool to be More Accurate When run without modifications on the original Faster R-CNN architecture, the Mask R-CNN authors realized that the regions of the feature map selected by RoIPool were slightly misaligned from the regions of the original image. Since image segmentation requires pixel level specificity, unlike bounding boxes, this naturally led to inaccuracies. The authors were able to solve this problem by cleverly adjusting RoIPool to be more precisely aligned using a method known as RoIAlign. Imagine we have an image of size 128x128 and a feature map of size 25x25. Let’s imagine we want features the region corresponding to the top-left 15x15 pixels in the original image (see above). How might we select these pixels from the feature map? We know each pixel in the original image corresponds to ~ 25/128 pixels in the feature map. To select 15 pixels from the original image, we just select 15 * 25/128 ~= 2.93 pixels. 
In RoIPool, we would round this down and select 2 pixels causing a slight misalignment. However, in RoIAlign, we avoid such rounding. Instead, we use bilinear interpolation to get a precise idea of what would be at pixel 2.93. This, at a high level, is what allows us to avoid the misalignments caused by RoIPool. Once these masks are generated, Mask R-CNN combines them with the classifications and bounding boxes from Faster R-CNN to generate such wonderfully precise segmentations: If you’re interested in trying out these algorithms yourselves, here are relevant repositories: Faster R-CNN Mask R-CNN In just 3 years, we’ve seen how the research community has progressed from Krizhevsky et. al’s original result to R-CNN, and finally all the way to such powerful results as Mask R-CNN. Seen in isolation, results like Mask R-CNN seem like incredible leaps of genius that would be unapproachable. Yet, through this post, I hope you’ve seen how such advancements are really the sum of intuitive, incremental improvements through years of hard work and collaboration. Each of the ideas proposed by R-CNN, Fast R-CNN, Faster R-CNN, and finally Mask R-CNN were not necessarily quantum leaps, yet their sum products have led to really remarkable results that bring us closer to a human level understanding of sight. What particularly excites me, is that the time between R-CNN and Mask R-CNN was just three years! With continued funding, focus, and support, how much further can Computer Vision improve over the next three years? If you see any errors or issues in this post, please contact me at dhruv@getathelas.com and I”ll immediately correct them! If you’re interested in applying such techniques, come join us at Athelas where we apply Computer Vision to blood diagnostics daily: Other posts we’ve written: Thanks to Bharath Ramsundar, Pranav Ramkrishnan, Tanay Tandon, and Oliver Cameron for help with this post! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. @dhruvp. VP Eng @Athelas. MIT Math and CS Undergrad ’13. MIT CS Masters ’14. Previously: Director of AI Programs @ Udacity. Blood Diagnostics through Deep Learning http://athelas.com
Slav Ivanov
3.9K
17
https://blog.slavv.com/the-1700-great-deep-learning-box-assembly-setup-and-benchmarks-148c5ebe6415?source=tag_archive---------1----------------
The $1700 great Deep Learning box: Assembly, setup and benchmarks
Updated April 2018: Uses CUDA 9, cuDNN 7 and Tensorflow 1.5. After years of using a thin client in the form of increasingly thinner MacBooks, I had gotten used to it. So when I got into Deep Learning (DL), I went straight for the brand new at the time Amazon P2 cloud servers. No upfront cost, the ability to train many models simultaneously and the general coolness of having a machine learning model out there slowly teaching itself. However, as time passed, the AWS bills steadily grew larger, even as I switched to 10x cheaper Spot instances. Also, I didn’t find myself training more than one model at a time. Instead, I’d go to lunch/workout/etc. while the model was training, and come back later with a clear head to check on it. But eventually the model complexity grew and took longer to train. I’d often forget what I did differently on the model that had just completed its 2-day training. Nudged by the great experiences of the other folks on the Fast.AI Forum, I decided to settle down and to get a dedicated DL box at home. The most important reason was saving time while prototyping models — if they trained faster, the feedback time would be shorter. Thus it would be easier for my brain to connect the dots between the assumptions I had for the model and its results. Then I wanted to save money — I was using Amazon Web Services (AWS), which offered P2 instances with Nvidia K80 GPUs. Lately, the AWS bills were around $60–70/month with a tendency to get larger. Also, it is expensive to store large datasets, like ImageNet. And lastly, I haven’t had a desktop for over 10 years and wanted to see what has changed in the meantime (spoiler alert: mostly nothing). What follows are my choices, inner monologue, and gotchas: from choosing the components to benchmarking. A sensible budget for me would be about 2 years worth of my current compute spending. At $70/month for AWS, this put it at around $1700 for the whole thing. You can check out all the components used. The PC Part Picker site is also really helpful in detecting if some of the components don’t play well together. The GPU is the most crucial component in the box. It will train these deep networks fast, shortening the feedback cycle. Disclosure: The following are affiliate links, to help me pay for, well, more GPUs. The choice is between a few of Nvidia’s cards: GTX 1070, GTX 1070 Ti, GTX 1080, GTX 1080 Ti and finally the Titan X. The prices might fluctuate, especially because some GPUs are great for cryptocurrency mining (wink, 1070, wink). On performance side: GTX 1080 Ti and Titan X are similar. Roughly speaking the GTX 1080 is about 25% faster than GTX 1070. And GTX 1080 Ti is about 30% faster than GTX 1080. The new GTX 1070 Ti is very close in performance to GTX 1080. Tim Dettmers has a great article on picking a GPU for Deep Learning, which he regularly updates as new cards come on the market. Here are the things to consider when picking a GPU: Considering all of this, I picked the GTX 1080 Ti, mainly for the training speed boost. I plan to add a second 1080 Ti soonish. Even though the GPU is the MVP in deep learning, the CPU still matters. For example, data preparation is usually done on the CPU. The number of cores and threads per core is important if we want to parallelize all that data prep. To stay on budget, I picked a mid-range CPU, the Intel i5 7500. It’s relatively cheap but good enough to not slow things down. 
Edit: As a few people have pointed out: “probably the biggest gotcha that is unique to DL/multi-GPU is to pay attention to the PCIe lanes supported by the CPU/motherboard” (by Andrej Karpathy). We want to have each GPU have 16 PCIe lanes so it eats data as fast as possible (16 GB/s for PCIe 3.0). This means that for two cards we need 32 PCIe lanes. However, the CPU I have picked has only 16 lanes. So 2 GPUs would run in 2x8 mode (instead of 2x16). This might be a bottleneck, leading to less than ideal utilization of the graphics cards. Thus a CPU with 40 lines is recommended. Edit 2: However, Tim Dettmers points out that having 8 lanes per card should only decrease performance by “0–10%” for two GPUs. So currently, my recommendation is: Go with 16 PCIe lanes per video card unless it gets too expensive for you. Otherwise, 8 lanes should do as well. A good solution with to have for a double GPU machine would be an Intel Xeon processor like the E5–1620 v4 (40 PCIe lanes). Or if you want to splurge go for a higher end processor like the desktop i7–6850K. Memory (RAM) It’s nice to have a lot of memory if we are to be working with rather big datasets. I got 2 sticks of 16 GB, for a total of 32 GB of RAM, and plan to buy another 32 GB later. Following Jeremy Howard’s advice, I got a fast SSD disk to keep my OS and current data on, and then a slow spinning HDD for those huge datasets (like ImageNet).SSD: I remember when I got my first Macbook Air years ago, how blown away was I by the SSD speed. To my delight, a new generation of SSD called NVMe has made its way to market in the meantime. A 480 GB MyDigitalSSD NVMe drive was a great deal. This baby copies files at gigabytes per second. HDD: 2 TB Seagate. While SSDs have been getting fast, HDD have been getting cheap. To somebody who has used Macbooks with 128 GB disk for the last 7 years, having this much space feels almost obscene. The one thing that I kept in mind when picking a motherboard was the ability to support two GTX 1080 Ti, both in the number of PCI Express Lanes (the minimum is 2x8) and the physical size of 2 cards. Also, make sure it’s compatible with the chosen CPU. An Asus TUF Z270 did it for me. MSI — X99A SLI PLUS should work great if you got an Intel Xeon CPU. Rule of thumb: Power supply should provide enough juice for the CPU and the GPUs, plus 100 watts extra. The Intel i5 7500 processor uses 65W, and the GPUs (1080 Ti) need 250W each, so I got a Deepcool 750W Gold PSU (currently unavailable, EVGA 750 GQ is similar). The “Gold” here refers to the power efficiency, i.e how much of the power consumed is wasted as heat. The case should be the same form factor as the motherboard. Also having enough LEDs to embarrass a Burner is a bonus. A friend recommended the Thermaltake N23 case, which I promptly got. No LEDs sadly. Here is how much I spent on all the components (your costs may vary): $700 GTX 1080 Ti + $190 CPU + $230 RAM + $230 SSD + $66 HDD + $130 Motherboard + $75 PSU + $50 Case ============$1671 Total Adding tax and fees, this nicely matches my preset budget of $1700. If you don’t have much experience with hardware and fear you might break something, a professional assembly might be the best option. However, this was a great learning opportunity that I couldn’t pass (even though I’ve had my share of hardware-related horror stories). The first and important step is to read the installation manuals that came with each component. 
Especially important for me, as I’ve done this before once or twice, and I have just the right amount of inexperience to mess things up. This is done before installing the motherboard in the case. Next to the processor there is a lever that needs to be pulled up. The processor is then placed on the base (double-check the orientation). Finally, the lever comes down to fix the CPU in place. . . But I had a quite the difficulty doing this: once the CPU was in position the lever wouldn’t go down. I actually had a more hardware-capable friend of mine video walk me through the process. Turns out the amount of force required to get the lever locked down was more than what I was comfortable with. Next is fixing the fan on top of the CPU: the fan legs must be fully secured to the motherboard. Consider where the fan cable will go before installing. The processor I had came with thermal paste. If yours doesn’t, make sure to put some paste between the CPU and the cooling unit. Also, replace the paste if you take off the fan. I put the Power Supply Unit (PSU) in before the motherboard to get the power cables snugly placed in case back side. . . . . Pretty straight forward — carefully place it and screw it in. A magnetic screwdriver was really helpful. Then connect the power cables and the case buttons and LEDs. . Just slide it in the M2 slot and screw it in. Piece of cake. The memory proved quite hard to install, requiring too much effort to properly lock in. A few times I almost gave up, thinking I must be doing it wrong. Eventually one of the sticks clicked in and the other one promptly followed. At this point, I turned the computer on to make sure it works. To my relief, it started right away! Finally, the GPU slid in effortlessly. 14 pins of power later and it was running. NB: Do not plug your monitor in the external card right away. Most probably it needs drivers to function (see below). Finally, it’s complete! Now that we have the hardware in place, only the soft part remains. Out with the screwdriver, in with the keyboard. Note on dual booting: If you plan to install Windows (because, you know, for benchmarks, totally not for gaming), it would be wise to do Windows first and Linux second. I didn’t and had to reinstall Ubuntu because Windows messed up the boot partition. Livewire has a detailed article on dual boot. Most DL frameworks are designed to work on Linux first, and eventually support other operating systems. So I went for Ubuntu, my default Linux distribution. An old 2GB USB drive was laying around and worked great for the installation. UNetbootin (OSX) or Rufus (Windows) can prepare the Linux thumb drive. The default options worked fine during the Ubuntu install. At the time of writing, Ubuntu 17.04 was just released, so I opted for the previous version (16.04), whose quirks are much better documented online. Ubuntu Server or Desktop: The Server and Desktop editions of Ubuntu are almost identical, with the notable exception of the visual interface (called X) not being installed with Server. I installed the Desktop and disabled autostarting X so that the computer would boot it in terminal mode. If needed, one could launch the visual desktop later by typing startx. Let’s get our install up to date. From Jeremy Howard’s excellent install-gpu script: To deep learn on our machine, we need a stack of technologies to use our GPU: Download CUDA from Nvidia, or just run the code below: Updated to specify version 9 of CUDA. Thanks to @zhanwenchen for the tip. 
If you need to add later versions of CUDA, click here. After CUDA has been installed, the following code will add the CUDA installation to the PATH variable: Now we can verify that CUDA has been installed successfully by running the following: This should have installed the display driver as well. For me, nvidia-smi showed ERR as the device name, so I installed the latest Nvidia drivers (as of May 2018) to fix it: Removing CUDA/Nvidia drivers: If at any point the drivers or CUDA seem broken (as they did for me — multiple times), it might be better to start over by running: Since version 1.5, Tensorflow supports CuDNN 7, so we install that. To download CuDNN, one needs to register for a (free) developer account. After downloading, install with the following: Anaconda is a great package manager for Python. I’ve moved to Python 3.6, so I will be using the Anaconda 3 version: The popular DL framework by Google. Installation: Validate the Tensorflow install (a minimal sketch of such a check appears at the end of this section): To make sure we have our stack running smoothly, I like to run the tensorflow MNIST example: We should see the loss decreasing during training: Keras is a great high-level neural networks framework, an absolute pleasure to work with. Installation couldn’t be easier either: PyTorch is a newcomer in the world of DL frameworks, but its API is modeled on the successful Torch, which was written in Lua. PyTorch feels new and exciting, mostly great, although some things are still to be implemented. We install it by running: Jupyter is a web-based IDE for Python, which is ideal for data sciency tasks. It’s installed with Anaconda, so we just configure and test it: Now if we open http://localhost:8888 we should see a Jupyter screen. Run Jupyter on boot: Rather than running the notebook every time the computer is restarted, we can set it to autostart on boot. We will use crontab to do this, which we can edit by running crontab -e. Then add the following after the last line in the crontab file: I use my old trusty Macbook Air for development, so I’d like to be able to log into the DL box both from my home network and when on the go. SSH Key: It’s way more secure to use an SSH key to log in instead of a password. Digital Ocean has a great guide on how to set this up. SSH tunnel: If you want to access your Jupyter notebook from another computer, the recommended way is to use SSH tunneling (instead of opening the notebook to the world and protecting it with a password). Let’s see how we can do this: Then, to connect over the SSH tunnel, run the following script on the client: To test this, open a browser and try http://localhost:8888 from the remote machine. Your Jupyter notebook should appear. Set up out-of-network access: Finally, to access the DL box from the outside world, we need 3 things: Setting up out-of-network access depends on the router/network setup, so I’m not going into details. Now that we have everything running smoothly, let’s put it to the test. We’ll be comparing the newly built box to an AWS P2.xlarge instance, which is what I’ve used so far for DL. The tests are computer vision related, meaning convolutional networks with a fully connected model thrown in. We time training models on: the AWS P2 instance GPU (K80), the AWS P2 virtual CPU, the GTX 1080 Ti, and the Intel i5 7500 CPU. Andres Hernandez points out that my comparison does not use a Tensorflow build that is optimized for these CPUs, which would have helped them perform better. Check his insightful comment for more details. The “Hello World” of computer vision: the MNIST database consists of 70,000 handwritten digits.
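The inline snippets from the original post did not survive the text extraction, so as a stand-in here is a minimal sketch of the kind of “validate the Tensorflow install” check described above. It assumes the TensorFlow 1.x API that was current when the post was written (tf.Session was removed in TensorFlow 2.x) and a freshly installed GPU stack; it is illustrative, not the exact snippet from the post.

# A minimal sanity check for the GPU stack, assuming TensorFlow 1.x
# (tf.Session and this device listing are not available in TensorFlow 2.x).
import tensorflow as tf
from tensorflow.python.client import device_lib

# List the devices TensorFlow can see; a working CUDA/CuDNN install
# should show a '/device:GPU:0' entry next to the CPU.
print(device_lib.list_local_devices())

# The classic "hello world" graph: build a constant op and run it in a session.
hello = tf.constant('Hello, TensorFlow!')
with tf.Session() as sess:
    print(sess.run(hello))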
We run the Keras MNIST example, which uses a multilayer perceptron (MLP). Using an MLP means that we rely only on fully connected layers, not convolutions. The model is trained for 20 epochs on this dataset and achieves over 98% accuracy out of the box (a minimal sketch of this kind of model appears at the end of this article). We see that the GTX 1080 Ti is 2.4 times faster than the K80 on AWS P2 in training the model. This is rather surprising, as these 2 cards should have about the same performance. I believe this is because of the virtualization or underclocking of the K80 on AWS. The CPUs perform 9 times slower than the GPUs. As we will see later, this is a really good result for the processors. It is due to the small model, which fails to fully utilize the parallel processing power of the GPUs. Interestingly, the desktop Intel i5–7500 achieves a 2.3x speedup over the virtual CPU on Amazon. A VGG net will be finetuned for the Kaggle Dogs vs Cats competition. In this competition, we need to tell apart pictures of dogs and cats. Running the model on CPUs for the same number of batches wasn’t feasible, therefore we finetune for 390 batches (1 epoch) on the GPUs and 10 batches on the CPUs. The code used is on GitHub. The 1080 Ti is 5.5 times faster than the AWS GPU (K80). The difference in the CPUs’ performance is about the same as in the previous experiment (the i5 is 2.6x faster). However, it’s absolutely impractical to use CPUs for this task, as the CPUs were taking ~200x more time on this large model, which includes 16 convolutional layers and a couple of semi-wide (4096-unit) fully connected layers on top. A GAN (generative adversarial network) is a way to train a model to generate images. A GAN achieves this by pitting two networks against each other: a Generator, which learns to create better and better images, and a Discriminator, which tries to tell which images are real and which are dreamt up by the Generator. The Wasserstein GAN is an improvement over the original GAN. We will use a PyTorch implementation that is very similar to the one by the WGAN author. The models are trained for 50 steps, and the loss is all over the place, which is often the case with GANs. CPUs aren’t considered here. The GTX 1080 Ti finishes 5.5x faster than the AWS P2 K80, which is in line with the previous results. The final benchmark is on the original Style Transfer paper (Gatys et al.), implemented in Tensorflow (code available). Style Transfer is a technique that combines the style of one image (a painting, for example) with the content of another image. Check out my previous post for more details on how Style Transfer works. The GTX 1080 Ti outperforms the AWS K80 by a factor of 4.3. This time the CPUs are 30–50 times slower than the graphics cards. The slowdown is less than on the VGG finetuning task but more than on the MNIST perceptron experiment. The model uses mostly the earlier layers of the VGG network, and I suspect this was too shallow to fully utilize the GPUs. The DL box is in the next room and a large model is training on it. Was it a wise investment? Time will tell, but it is beautiful to watch the glowing LEDs in the dark and to hear its quiet hum as models try to squeeze out that extra accuracy percentage point.
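As promised in the MNIST benchmark above, here is a minimal sketch of a Keras MLP on MNIST, similar in spirit to the model used in that benchmark. It is illustrative only, not the exact script from the post, and assumes Keras 2.x with the TensorFlow backend; the layer widths, dropout rates, and batch size are the usual defaults from the stock Keras example, not measured values from the benchmark.

# A minimal sketch of a Keras MLP on MNIST, similar in spirit to the benchmark
# above; it is not the exact script used in the post. Assumes Keras 2.x.
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.utils import to_categorical

# Load the 70,000 MNIST digits and flatten each 28x28 image into a 784-vector.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

# Fully connected layers only (no convolutions), as the post notes.
model = Sequential([
    Dense(512, activation='relu', input_shape=(784,)),
    Dropout(0.2),
    Dense(512, activation='relu'),
    Dropout(0.2),
    Dense(10, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
              metrics=['accuracy'])

# Train for 20 epochs, as in the benchmark; expect roughly 98% test accuracy.
model.fit(x_train, y_train, batch_size=128, epochs=20,
          validation_data=(x_test, y_test))
print(model.evaluate(x_test, y_test, verbose=0))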
Tyler Elliot Bettilyon
17.9K
13
https://medium.com/@TebbaVonMathenstien/are-programmers-headed-toward-another-bursting-bubble-528e30c59a0e?source=tag_archive---------2----------------
Are Programmers Headed Toward Another Bursting Bubble?
A friend of mine recently posed a question that I’ve heard many times in varying forms and forums: “Do you think IT and some lower-level programming jobs are going to go the way of the dodo? Seems a bit like a massive job bubble that’s gonna burst. It’s my opinion that one of the only things keeping tech and lower-level computer science-related jobs “prestigious” and well-paid is ridiculous industry jargon and public ignorance about computers, which are both going to go away in the next 10 years. [...]” This question is simultaneously on point about the future of technology jobs and exemplary of some pervasive misunderstandings regarding the field of software engineering. While it’s true that there is a great deal of “ridiculous industry jargon” there are equally many genuinely difficult problems waiting to be solved by those with the right skill-set. Some software jobs are definitely going away but programmers with the right experience and knowledge will continue to be prestigious and well remunerated for many years to come; as an example look at the recent explosion of AI researcher salaries and the corresponding dearth of available talent. Staying relevant in the ever changing technology landscape can be a challenge. By looking at the technologies that are replacing programmers in the status quo we should be able to predict what jobs might disappear from the market. Additionally, to predict how salaries and demand for specific skills might change we should consider the growing body of people learning to program. As Hannah pointed out “public ignorance” about computers is keeping wages high for those who can program and the public is becoming more computer savvy each year. The fear of automation replacing jobs is neither new nor unfounded. In any field, and especially in technology, market forces drive corporations toward automation and commodification. Gartner’s Hype Cycles are one way of contextualizing this phenomenon. As time goes on, specific ideas and technologies push towards the “plateau of productivity” where they are eventually automated. Looking at history one must conclude that automation has the power to destroy specific job markets. In diverse industries ranging from crop harvesting to automobile assembly technology advances have consistently replaced and augmented human labor to reduce costs. A professor once put it this way in his compilers course, “take historical note of textile and steel industries: do you want to build machines and tools, or do you want to operate those machines?” In this metaphor the “machine” is a computer programming language. This professor was really asking: Do you want to build websites using JavaScript, or do you want to build the V8 engine that powers JavaScript? The creation of websites is being automated by WordPress (and others) today. V8 on the other hand has a growing body of competitors some of whom are solving open research questions. Languages will come and go (how many Fortran job openings are there?) but there will always be someone building the next language. Lucky for us, programming language implementations are written with programming languages themselves. Being a “machine operator” in software puts you on the path to being a “machine creator” in a way which was not true of the steel mill workers of the past. The growing number of languages, interpreters, and compilers shows us that every job-destroying machine also brings with it new opportunities to improve those machines, maintain those machines, and so forth. 
Despite the growing body of jobs which no longer exist, there has yet to be a moment in history where humanity has collectively said, “I guess there isn’t any work left for us to do.” Commodification is coming for us all, not just software engineers. Throughout history, human labor has consistently been replaced with non-humans or augmented to require fewer and less skilled humans. Self-driving cars and trucks are the flavor of the week in this grand human tradition. If the cycle of creation and automation are a fact of life, the natural question to answer next is: which jobs and industries are at risk, and which are not? AWS, Heroku, and other similar hosting platforms have forever changed the role of the System Administrator/DevOps engineer. Internet businesses used to absolutely need their own server master. Someone who was well versed in Linux; someone who could configure a server with Apache or NGINX; someone who could not only physically wire up the server, the routers, and all the other physical components, but who could also configure the routing tables and all the software required to make that server accessible on the public web. While there are definitely still people applying this skill-set professionally, AWS is making some of those skills obsolete — especially at the lower experience levels and on the physical side of things. There are very lucrative roles within Amazon (and Netflix, and Google...) for people with deep expertise in networking infrastructure, but there is much less demand at the small-to-medium business scale. “Business Intelligence” tools such as SalesForce, Tableau and SpotFire are also beginning to occupy spaces historically held by software engineers. These systems have reduced the demand for in-house Database Administrators, but they have also increased the demand for SQL as a general-purpose skill. They have decreased demand for in-house reporting technology, but increased demand for “integration engineers” who automate the flow of data from the business to the third-party software platform(s). A field that was previously dominated by Excel and Spreadsheets is increasingly being pushed towards scripting languages like Python or R, and towards SQL for data management. Some jobs have disappeared, but demand for people who can write software has seen an increase overall. Data Science is a fascinating example of commodification at a level closer to software. Scikit.learn, Tensorflow, and PyTorch are all software libraries that make it easier for people to build machine learning applications without building the algorithms from scratch. In fact, it’s possible to run a dataset through many different machine learning algorithms, with many different parameter sets for those algorithms, with little to no understanding of how those algorithms are actually implemented (it’s not necessarily wise to do this, just possible). You can bet that business intelligence companies will be trying to integrate these kinds of algorithms into their own tools over the next few years as well. In many ways data science looks like web development did 5–8 years ago — a booming field where a little bit of knowledge can get you in the door due to a “skills gap”. As web development bootcamps are closing and consolidating, data science bootcamps are popping up in their place. Kaplan, who bought the original web development bootcamp (Dev Bootcamp) and started a data science bootcamp (Metis) has decided to close DevBootcamp and keep Metis running. 
Content management systems are among the most visible of the tools automating away the need for a software engineer. SquareSpace and WordPress are among the most popular CMS systems today. These platforms are significantly reducing the value of people with a just a little bit of front end web development skill. In fact the barriers for making a website and getting it online have come down so dramatically that people with zero programming experience are successfully launching websites every day. Those same people aren’t making deeply interactive websites that serve billions of people, but they absolutely do make websites for their own businesses that give customers the information they need. A lovely landing page with information such as how to find the establishment and how to contact them is more than enough for a local restaurant, bar, or retail store. If your business is not primarily an “internet business” it has never been easier to get a working site on the public web. As a result, the once thriving industry of web contractors who can quickly set up a simple website and get it online is becoming less lucrative. Finally, it would border on hubris to ignore the physical aspect of computers in this context. In the words of Mike Acton: “software is not the platform, hardware is the platform”. Software people would be wise to study at least a little computer architecture and electrical engineering. A big shake up in hardware, such as the arrival of consumer grade quantum computers would (will) change everything about professional software engineering. Quantum computers are still a ways off, but the growing interest in GPUs and the drive toward parallelization is an imminent shift. CPU speeds have been stagnant for several years now and in that time a seemingly unquenchable thirst for machine learning and “big data” has emerged. With more desire than ever to process large data-sets OpenMP, OpenCL, Go, CUDA, and other parallel processing languages and frameworks will continue to become mainstream. To be competitively fast in the near-term future, significant parallelization will be a requirement across the board, not just in high-performance niches like operating systems, infrastructure and video games. Websites are ubiquitous. The 2017 Stack Overflow Survey reports that about 15% of professional software engineers are working in an “Internet/Web Services” company. The Bureau of Labor Statistics expects growth in Web Development to continue much faster than average (24% between 2014 and 2024). Due to its visibility, there has been a massive focus on “solving the skills gap” in this industry. Coding bootcamps teach Web Development almost exclusively and Web Development online courses have flooded Udemy, Udacity, Coursera and similar marketplaces. The combination of increasing automation throughout the Web Development technology stack and the influx of new entry level programmers with an explicit focus on Web Development has led some to predict a slide towards a “blue collar” market for software developers. Some have gone further, suggesting that the push towards a blue collar market is a strategy architected by big tech firms. Others, of course, say we’re headed for another bursting bubble. Change in demand for specific technologies is not news. Languages and frameworks are always rising and falling in technology. Web Development in its current incarnation (“JS Is King”) will eventually go the way of Web Development of the early 2000’s (remember Flash?). 
What is new is that a lot of people are receiving an education explicitly (and solely) in the current trendy web development frameworks. Before you decide to label yourself a “React developer”, remember there were people who once identified themselves as “Flash developers”. Banking your career on a specific language, framework, or technology is a game of roulette. Of course it’s quite difficult to predict which technologies will remain relevant, but if you’re going to go all in on something, I suggest relying on The Lindy Effect and picking something like C that has already withstood the test of time. The next generation will have a level of de facto tech literacy that Generation X and even Millennials do not have. One outcome of this will be that using the next generation of CMS tools will be a given. These tools will get better and young workers will be better at using them. This combination will definitely bring down the value of low-level IT and web development skills as eager and skilled youngsters enter the job market. High schools are catching on as well, offering computer science and programming classes — some well-educated high school students will likely be entering the workforce as programming interns immediately upon graduation. Another big group of newcomers to programming are MBAs and data analysts. Job listings which were once dominated by Excel are starting to list SQL as a “nice to have” and even a “requirement”. Tools such as Tableau, SpotFire, SalesForce, and other web-based metrics systems continue to replace the spreadsheet as the primary tool for report generation. If this continues, more data analysts will learn to use SQL directly, simply because it is easier than exporting the data into a spreadsheet. People looking to climb the ranks and outperform their peers in these roles are taking online courses to learn about databases and statistical programming languages. With these new skills they can begin to position themselves as data scientists by learning a combination of machine learning and statistical libraries. Look at Metis’ curriculum as a prime example of this path. Finally, the number of people earning Computer Science and Software Engineering degrees continues to climb. Purdue, for example, reports that applications to their CS program have doubled over five years. Cornell reports a similar explosion of CS graduates. This trend isn’t surprising given the growth and ubiquity of software. It’s hard for young people to imagine that computers will play a smaller role in our futures, so why not study something that’s going to give you job security? A common argument in the industry nowadays is around the idea that the education you receive in a four-year Computer Science program is mostly unnecessary cruft. I have heard this argument repeatedly in the halls of bootcamps, web development shops, and online from big names in the field, such as this piece by Eric Elliott. The opposition view is popular as well, with some going so far as to say “all programmers should earn a master’s degree”. Like Eric Elliott, I think it’s good that there are more options than ever to break into programming, and a four-year degree might not be the best option for many. Simultaneously, I agree with William Bain that the foundational skills which apply across programming disciplines are crucial for career longevity, and that it is still hard to find that information outside of university courses.
I’ve written previously about what skills I think aspiring engineers should learn as a foundation of a long career, and joined Bradfield in order to help share this knowledge. Coding schools of many shapes and sizes are becoming ubiquitous, and for good reasons. There is quite a lot you can learn about programming without getting into the minutiae of Big O notation, obscure data structures, and algorithmic trivia. However, while it’s true that fresh graduates from Stanford are competing for some jobs with fresh graduates from Hack Reactor, it’s only true in one or two sub-industries. Code school and bootcamp graduates are not yet applying to work on embedded systems, cryptography/security, robotics, network infrastructure, or AI research and development. Yet these fields, like web development, are growing quickly. Some programming-related skills have already started their transition from “rare skill” to “baseline expectation”. Conversely, the engineering that goes into creating beastly engines like AWS is anything but common. The big companies driving technology forward — Amazon, Google, Facebook, Nvidia, SpaceX, and so on — are typically not looking for people with a ‘basic understanding of JavaScript’. AWS serves billions of users per day. To support that kind of load, an AWS infrastructure engineer needs a deep knowledge of network protocols, computer architecture, and several years of relevant experience. As with any discipline, there are amateurs and artisans. These prestigious firms are solving research problems and building systems that are truly pushing against the boundaries of what is possible. Yet they still struggle to fill open roles even while basic programming skills are increasingly common. People who can write algorithms to predict changes in genetic sequences that will yield a desired result are going to be highly valuable in the future. People who can program satellites and spacecraft, and automate machinery, will continue to be highly valued. These are not fields that lend themselves as readily to a “3 month intensive program” as front end web development, at least not without significant prior experience. Because computer science starts with the word “computer”, it is assumed that young people will all have an innate understanding of it by 2025. Unfortunately, the ubiquity of computers has not created a new generation of people who de facto understand mathematics, computer science, network infrastructure, electrical engineering and so on. Computer literacy is not the same as the study of computation. Despite mathematics having existed since the dawn of time, there is still a relatively small portion of the population with strong statistical literacy, and computer science is similarly old. Euclid invented several algorithms, one of which is used every time you make an HTTPS request; the fact that we use HTTPS every time we log in to a website does not automatically imbue anyone with a knowledge of how those protocols work. More established professional fields often have a bimodal wage distribution: a relatively small number of practitioners make quite a lot of money, and the majority of them earn a good wage but do not find themselves in the top 1% of earners. The National Association for Law Placement collects data that can be used to visualize this phenomenon in stark clarity. A huge share of law graduates make between $45,000 and $65,000 — a good wage, but hardly the salary we associate with a “top professional”.
We tend to think that all law graduates are on track to becoming partners at a law firm when really there are many paths: paralegal, clerk, public defender, judge, legal services for businesses, contract writing, and so on. Computer science graduates also have many options for their professional practice, from web development to embedded systems. As a basic level of programming literacy continues to become an expectation, rather than a “nice to have”, I suspect a similar distribution will emerge in programming jobs. While there will always be a cohort of programmers making a lot of money to push on the edges of technology, there will be a growing body of middle-class programmers powering the new computer-centric economy. The average salary for web developers will surely decrease over time. That said, I suspect that the number of jobs for “programmers” in general will only continue to grow. As worker supply begins to meet demand, hopefully we will see a healthy boom in a variety of middle-class programming jobs. There will also continue to be a top-professional salary available for those programmers who are redefining what is possible. Regardless of which cohort of programmers you’re in, a career in technology means continuing your education throughout your life. If you want to stay in the second cohort of programmers you may want to invest in learning how to create the machines, rather than simply operate them.
Arvind N
9.5K
8
https://towardsdatascience.com/thoughts-after-taking-the-deeplearning-ai-courses-8568f132153?source=tag_archive---------3----------------
Thoughts after taking the Deeplearning.ai courses – Towards Data Science
[Update — Feb 2nd 2018: When this blog post was written, only 3 courses had been released. All 5 courses in this specialization are now out. I will have a follow-up blog post soon.] Between a full-time job and a toddler at home, I spend my spare time learning about the ideas in cognitive science & AI. Once in a while a great paper/video/course comes out and you’re instantly hooked. Andrew Ng’s new deeplearning.ai course is like that Shane Carruth or Rajnikanth movie that one yearns for! Naturally, as soon as the course was released on Coursera, I registered and spent the past 4 evenings binge-watching the lectures, working through quizzes and programming assignments. DL practitioners and ML engineers typically spend most days working at an abstract Keras or TensorFlow level. But it’s nice to take a break once in a while to get down to the nuts and bolts of learning algorithms and actually do back-propagation by hand. It is both fun and incredibly useful! Andrew Ng’s new adventure is a bottom-up approach to teaching neural networks — powerful algorithms for learning non-linear functions, at a beginner-to-intermediate level. In classic Ng style, the course is delivered through a carefully chosen curriculum, neatly timed videos and precisely positioned information nuggets. Andrew picks up from where his classic ML course left off and introduces the idea of neural networks using a single neuron (logistic regression), slowly adding complexity — more neurons and layers. By the end of the 4 weeks (course 1), a student is introduced to all the core ideas required to build a dense neural network, such as cost/loss functions, learning iteratively using gradient descent, and vectorized parallel Python (numpy) implementations. Andrew patiently explains the requisite math and programming concepts in a carefully planned order and at a well-regulated pace suitable for learners who could be rusty in math/coding. Lectures are delivered using presentation slides on which Andrew writes using digital pens. It felt like an effective way to get the listener to focus. I felt comfortable watching videos at 1.25x or 1.5x speed. Quizzes are placed at the end of each lecture section and are in the multiple-choice question format. If you watch the videos once, you should be able to quickly answer all the quiz questions. You can attempt quizzes multiple times and the system is designed to keep your highest score. Programming assignments are done via Jupyter notebooks — powerful browser-based applications. Assignments have a nice guided sequential structure and you are not required to write more than 2–3 lines of code in each section. If you understand concepts like vectorization intuitively, you can complete most programming sections with just 1 line of code! After the assignment is coded, it takes 1 button click to submit your code to the automated grading system, which returns your score in a few minutes. Some assignments have time restrictions — say, three attempts in 8 hours, etc. Jupyter notebooks are well designed and work without any issues. Instructions are precise and it feels like a polished product. Who is this course for? Anyone interested in understanding what neural networks are, how they work, how to build them and the tools available to bring your ideas to life. If your math is rusty, there is no need to worry — Andrew explains all the required calculus and provides derivatives at every occasion so that you can focus on building the network and concentrate on implementing your ideas in code.
If your programming is rusty, there is a nice coding assignment to teach you numpy. But I recommend learning Python first on Codecademy. Let me explain this with an analogy: assume you are trying to learn how to drive a car. Jeremy’s FAST.AI course puts you in the driver’s seat from the get-go. He teaches you to move the steering wheel, press the brake, accelerator etc. Then he slowly explains more details about how the car works — why rotating the wheel makes the car turn, why pressing the brake pedal makes you slow down and stop etc. He keeps getting deeper into the inner workings of the car and by the end of the course, you know how the internal combustion engine works, how the fuel tank is designed etc. The goal of the course is to get you driving. You can choose to stop at any point after you can drive reasonably well — there is no need to learn how to build/repair the car. Andrew’s DL course does all of this, but in the complete opposite order. He teaches you about the internal combustion engine first! He keeps adding layers of abstraction and by the end of the course you are driving like an F1 racer! The FAST.AI course mainly teaches you the art of driving, while Andrew’s course primarily teaches you the engineering behind the car. If you have not done any machine learning before this, don’t take this course first. The best starting point is Andrew’s original ML course on Coursera. After you complete that course, please try to complete part 1 of Jeremy Howard’s excellent deep learning course. Jeremy teaches deep learning top-down, which is essential for absolute beginners. Once you are comfortable creating deep neural networks, it makes sense to take this new deeplearning.ai course specialization, which fills up any gaps in your understanding of the underlying details and concepts. 2. Andrew stresses the engineering aspects of deep learning and provides plenty of practical tips to save time and money — the third course in the DL specialization felt incredibly useful for my role as an architect leading engineering teams. 3. Jargon is handled well. Andrew explains that an empirical process = trial & error — he is brutally honest about the reality of designing and training deep nets. At some point I felt he might as well have just called Deep Learning glorified curve-fitting. 4. Squashes all hype around DL and AI — Andrew makes restrained, careful comments about the proliferation of AI hype in the mainstream media, and by the end of the course it is pretty clear that DL is nothing like the Terminator. 5. Wonderful boilerplate code that just works out of the box! 6. Excellent course structure. 7. Nice, consistent and useful notation. Andrew strives to establish a fresh nomenclature for neural nets and I feel he could be quite successful in this endeavor. 8. A style of teaching that is unique to Andrew and carries over from ML — I could feel the same excitement I felt in 2013 when I took his original ML course. 9. The interviews with deep learning heroes are refreshing — it is motivating and fun to hear personal stories and anecdotes. I wish that he’d said ‘concretely’ more often! 2. Good tools are important and will help you accelerate your learning pace. I bought a digital pen after seeing Andrew teach with one. It helped me work more efficiently. 3. There is a psychological reason why I recommend the Fast.ai course before this one. Once you find your passion, you can learn uninhibited. 4. You just get that dopamine rush each time you score full points: 5.
Don’t be scared by DL jargon (hyperparameters = settings, architecture/topology = style, etc.) or by the math symbols. If you take a leap of faith and pay attention to the lectures, Andrew shows why the symbols and notation are actually quite useful. They will soon become your tools of choice and you will wield them with style! Thanks for reading and best wishes! Update: Thanks for the overwhelmingly positive response! Many people are asking me to explain gradient descent and the differential calculus. I hope this helps!
Berit Anderson
1.6K
20
https://medium.com/join-scout/the-rise-of-the-weaponized-ai-propaganda-machine-86dac61668b?source=tag_archive---------4----------------
The Rise of the Weaponized AI Propaganda Machine – Scout: Science Fiction + Journalism – Medium
By Berit Anderson and Brett Horvath This piece was originally published at Scout.ai. “This is a propaganda machine. It’s targeting people individually to recruit them to an idea. It’s a level of social engineering that I’ve never seen before. They’re capturing people and then keeping them on an emotional leash and never letting them go,” said professor Jonathan Albright. Albright, an assistant professor and data scientist at Elon University, started digging into fake news sites after Donald Trump was elected president. Through extensive research and interviews with Albright and other key experts in the field, including Samuel Woolley, Head of Research at Oxford University’s Computational Propaganda Project, and Martin Moore, Director of the Centre for the Study of Media, Communication and Power at Kings College, it became clear to Scout that this phenomenon was about much more than just a few fake news stories. It was a piece of a much bigger and darker puzzle — a Weaponized AI Propaganda Machine being used to manipulate our opinions and behavior to advance specific political agendas. By leveraging automated emotional manipulation alongside swarms of bots, Facebook dark posts, A/B testing, and fake news networks, a company called Cambridge Analytica has activated an invisible machine that preys on the personalities of individual voters to create large shifts in public opinion. Many of these technologies have been used individually to some effect before, but together they make up a nearly impenetrable voter manipulation machine that is quickly becoming the new deciding factor in elections around the world. Most recently, Analytica helped elect U.S. President Donald Trump, secured a win for the Brexit Leave campaign, and led Ted Cruz’s 2016 campaign surge, shepherding him from the back of the GOP primary pack to the front. The company is owned and controlled by conservative and alt-right interests that are also deeply entwined in the Trump administration. The Mercer family is both a major owner of Cambridge Analytica and one of Trump’s biggest donors. Steve Bannon, in addition to acting as Trump’s Chief Strategist and a member of the White House Security Council, is a Cambridge Analytica board member. Until recently, Analytica’s CTO was the acting CTO at the Republican National Convention. Presumably because of its alliances, Analytica has declined to work on any democratic campaigns — at least in the U.S. It is, however, in final talks to help Trump manage public opinion around his presidential policies and to expand sales for the Trump Organization. Cambridge Analytica is now expanding aggressively into U.S. commercial markets and is also meeting with right-wing parties and governments in Europe, Asia, and Latin America. Cambridge Analytica isn’t the only company that could pull this off — but it is the most powerful right now. Understanding Cambridge Analytica and the bigger AI Propaganda Machine is essential for anyone who wants to understand modern political power, build a movement, or keep from being manipulated. The Weaponized AI Propaganda Machine it represents has become the new prerequisite for political success in a world of polarization, isolation, trolls, and dark posts. 
There’s been a wave of reporting on Cambridge Analytica itself and solid coverage of individual aspects of the machine — bots, fake news, microtargeting — but none so far (that we have seen) that portrays the intense collective power of these technologies or the frightening level of influence they’re likely to have on future elections. In the past, political messaging and propaganda battles were arms races to weaponize narrative through new mediums — waged in print, on the radio, and on TV. This new wave has brought the world something exponentially more insidious — personalized, adaptive, and ultimately addictive propaganda. Silicon Valley spent the last ten years building platforms whose natural end state is digital addiction. In 2016, Trump and his allies hijacked them. We have entered a new political age. At Scout, we believe that the future of constructive, civic dialogue and free and open elections depends on our ability to understand and anticipate it. Welcome to the age of Weaponized AI Propaganda. Any company can aggregate and purchase big data, but Cambridge Analytica has developed a model to translate that data into a personality profile used to predict, then ultimately change your behavior. That model itself was developed by paying a Cambridge psychology professor to copy the groundbreaking original research of his colleague through questionable methods that violated Amazon’s Terms of Service. Based on its origins, Cambridge Analytica appears ready to capture and buy whatever data it needs to accomplish its ends. In 2013, Dr. Michal Kosinski, then a Ph.D. candidate at the University of Cambridge’s Psychometrics Center, released a groundbreaking study announcing a new model he and his colleagues had spent years developing. By correlating subjects’ Facebook Likes with their OCEAN scores — a standard-bearing personality questionnaire used by psychologists — the team was able to identify an individual’s gender, sexuality, political beliefs, and personality traits based only on what they had liked on Facebook. According to Zurich’s Das Magazin, which profiled Kosinski in late 2016, “with a mere ten ‘likes’ as input his model could appraise a person’s character better than an average coworker. With seventy, it could ‘know’ a subject better than a friend; with 150 likes, better than their parents. With 300 likes, Kosinski’s machine could predict a subject’s behavior better than their partner. With even more likes it could exceed what a person thinks they know about themselves.” Not long afterward, Kosinski was approached by Aleksandr Kogan, a fellow Cambridge professor in the psychology department, about licensing his model to SCL Elections, a company that claimed its specialty lay in manipulating elections. The offer would have meant a significant payout for Kosinski’s lab. Still, he declined, worried about the firm’s intentions and the downstream effects it could have. It had taken Kosinski and his colleagues years to develop that model, but with his methods and findings now out in the world, there was little to stop SCL Elections from replicating them. It would seem they did just that. According to a Guardian investigation, in early 2014, just a few months after Kosinski declined their offer, SCL partnered with Kogan instead. As a part of their relationship, Kogan paid Amazon Mechanical Turk workers $1 each to take the OCEAN quiz. There was just one catch: to take the quiz, users were required to provide access to all of their Facebook data.
They were told the data would be used for research. The job was reported to Amazon for violating the platform’s Terms of Service. What many of the Turks likely didn’t realize: according to documents reviewed by The Guardian, “Kogan also captured the same data for each person’s unwitting friends.” The data gathered from Kogan’s study went on to birth Cambridge Analytica, which spun out of SCL Elections soon after. The name, metaphorically at least, was a nod to Kogan’s work — and a dig at Kosinski. But that early trove of user data was just the beginning — just the seed Analytica needed to build its own model for analyzing users’ personalities without having to rely on the lengthy OCEAN test. After a successful proof of concept and backed by wealthy conservative investors, Analytica went on a data shopping spree for the ages, snapping up data about your shopping habits, land ownership, where you attend church, what stores you visit, what magazines you subscribe to — all of which is for sale from a range of data brokers and third-party organizations selling information about you. Analytica aggregated this data with voter rolls, publicly available online data — including Facebook likes — and put it all into its predictive personality model. Alexander Nix, Cambridge Analytica’s CEO, likes to boast that Analytica’s personality model has allowed it to create a personality profile for every adult in the U.S. — 220 million of them, each with up to 5,000 data points. And those profiles are being continually updated and improved the more data you spew out online. Albright also believes that your Facebook and Twitter posts are being collected and integrated back into Cambridge Analytica’s personality profiles. “Twitter and also Facebook are being used to collect a lot of responsive data because people are impassioned, they reply, they retweet, but they also include basically their entire argument and their entire background on this topic,” he explains. Collecting massive quantities of data about voters’ personalities might seem unsettling, but it’s actually not what sets Cambridge Analytica apart. For Analytica and other companies like them, it’s what they do with that data that really matters. “Your behavior is driven by your personality and actually the more you can understand about people’s personality as psychological drivers, the more you can actually start to really tap in to why and how they make their decisions,” Nix explained to Bloomberg’s Sasha Issenberg. “We call this behavioral microtargeting and this is really our secret sauce, if you like. This is what we’re bringing to America.” Using those dossiers, or psychographic profiles as Analytica calls them, Cambridge Analytica not only identifies which voters are most likely to swing for their causes or candidates; they use that information to predict and then change their future behavior. As Vice reported recently, Kosinski and a colleague are now working on a new set of research, yet to be published, that addresses the effectiveness of these methods. Their early findings: using personality targeting, Facebook posts can attract up to 63 percent more clicks and 1,400 more conversions. Scout reached out to Cambridge Analytica with a detailed list of questions about their communications tactics, but the company declined to answer any questions or to comment on any of their tactics.
But researchers across the technology and media ecosystem who have been following Cambridge Analytica’s political messaging activities have unearthed an expansive, adaptive online network that automates the manipulation of voters at a scale never before seen in political messaging. “They [the Trump campaign] were using 40–50,000 different variants of ad every day that were continuously measuring responses and then adapting and evolving based on that response,” Martin Moore, director of Kings College’s Centre for the Study of Media, Communication and Power, told The Guardian in early December. “It’s all done completely opaquely and they can spend as much money as they like on particular locations because you can focus on a five-mile radius.” Where traditional pollsters might ask a person outright how they plan to vote, Analytica relies not on what they say but what they do, tracking their online movements and interests and serving up multivariate ads designed to change a person’s behavior by preying on individual personality traits. “For example,” Nix wrote in an op-ed last year about Analytica’s work on the Cruz campaign, ”our issues model identified that there was a small pocket of voters in Iowa who felt strongly that citizens should be required by law to show photo ID at polling stations.” “Leveraging our other data models, we were able to advise the campaign on how to approach this issue with specific individuals based on their unique profiles in order to use this relatively niche issue as a political pressure point to motivate them to go out and vote for Cruz. For people in the ‘Temperamental’ personality group, who tend to dislike commitment, messaging on the issue should take the line that showing your ID to vote is ‘as easy as buying a case of beer’. Whereas the right message for people in the ‘Stoic Traditionalist’ group, who have strongly held conventional views, is that showing your ID in order to vote is simply part of the privilege of living in a democracy.” For Analytica, the feedback is instant and the response automated: Did this specific swing voter in Pennsylvania click on the ad attacking Clinton’s negligence over her email server? Yes? Serve her more content that emphasizes failures of personal responsibility. No? The automated script will try a different headline, perhaps one that plays on a different personality trait — say the voter’s tendency to be agreeable toward authority figures. Perhaps: “Top Intelligence Officials Agree: Clinton’s Emails Jeopardized National Security.” Much of this is done through Facebook dark posts, which are only visible to those being targeted. Based on users’ response to these posts, Cambridge Analytica was able to identify which of Trump’s messages were resonating and where. That information was also used to shape Trump’s campaign travel schedule. If 73 percent of targeted voters in Kent County, Mich. clicked on one of three articles about bringing back jobs? Schedule a Trump rally in Grand Rapids that focuses on economic recovery. Political analysts in the Clinton campaign, who were basing their tactics on traditional polling methods, laughed when Trump scheduled campaign events in the so-called blue wall — a group of states that includes Michigan, Pennsylvania, and Wisconsin and has traditionally fallen to Democrats. But Cambridge Analytica saw they had an opening based on measured engagement with their Facebook posts. It was the small margins in Michigan, Pennsylvania and Wisconsin that won Trump the election. 
Dark posts were also used to depress voter turnout among key groups of democratic voters. “In this election, dark posts were used to try to suppress the African-American vote,” wrote journalist and Open Society fellow McKenzie Funk in a New York Times editorial. “According to Bloomberg, the Trump campaign sent ads reminding certain selected black voters of Hillary Clinton’s infamous ‘super predator’ line. It targeted Miami’s Little Haiti neighborhood with messages about the Clinton Foundation’s troubles in Haiti after the 2010 earthquake.’” Because dark posts are only visible to the targeted users, there’s no way for anyone outside of Analytica or the Trump campaign to track the content of these ads. In this case, there was no SEC oversight, no public scrutiny of Trump’s attack ads. Just the rapid-eye-movement of millions of individual users scanning their Facebook feeds. In the weeks leading up to a final vote, a campaign could launch a $10–100 million dark post campaign targeting just a few million voters in swing districts and no one would know. This may be where future ‘black-swan’ election upsets are born. “These companies,” Moore says, “have found a way of transgressing 150 years of legislation that we’ve developed to make elections fair and open.” Meanwhile, surprised by the results of the 2016 presidential race, Albright started looking into the ‘fake news problem’. As a part of his research, Albright scraped 306 fake news sites to determine how exactly they were all connected to each other and the mainstream news ecosystem. What he found was unprecedented — a network of 23,000 pages and 1.3 million hyperlinks. “The sites in the fake news and hyper-biased #MCM network,” Albright writes, “have a very small ‘node’ size — this means they are linking out heavily to mainstream media, social networks, and informational resources (most of which are in the ‘center’ of the network), but not many sites in their peer group are sending links back.” These sites aren’t owned or operated by any one individual entity, he says, but together they have been able to game Search Engine Optimization, increasing the visibility of fake and biased news anytime someone Googles an election-related term online — Trump, Clinton, Jews, Muslims, abortion, Obamacare. “This network,” Albright wrote in a post exploring his findings, “is triggered on-demand to spread false, hyper-biased, and politically-loaded information.” Even more shocking to him though was that this network of fake news creates a powerful infrastructure for companies like Cambridge Analytica to track voters and refine their personality targeting models “I scraped the trackers on these sites and I was absolutely dumbfounded. Every time someone likes one of these posts on Facebook or visits one of these websites, the scripts are then following you around the web. And this enables data-mining and influencing companies like Cambridge Analytica to precisely target individuals, to follow them around the web, and to send them highly personalised political messages.” The web of fake and biased news that Albright uncovered created a propaganda wave that Cambridge Analytica could ride and then amplify. The more fake news that users engage with, the more addictive Analytica’s personality engagement algorithms can become. Voter 35423 clicked on a fake story about Hillary’s sex-trafficking ring? Let’s get her to engage with more stories about Hillary’s supposed history of murder and sex trafficking. 
The synergy between fake-content networks, automated message testing, and personality profiling will rapidly spread to other digital mediums. Albright’s most-recent research focuses on an artificial intelligence that automatically creates YouTube videos about news and current events. The AI, which reacts to trending topics on Facebook and Twitter, pairs images and subtitles with a computer generated voiceover. It spooled out nearly 80,000 videos through 19 different channels in just a few days. Given its rapid development, the technology community needs to anticipate how AI propaganda will soon be used for emotional manipulation in mobile messaging, virtual reality, and augmented reality. If fake news created the scaffolding for this new automated political propaganda machine, bots, or fake social media profiles, have become its foot soldiers — an army of political robots used to control conversations on social media and silence and intimidate journalists and others who might undermine their messaging. Samuel Woolley, Director of Research at the University of Oxford’s Computational Propaganda Project and a fellow at Google’s Jigsaw project, has dedicated his career to studying the role of bots in online political organizing — who creates them, how they’re used, and to what end. Research by Woolley and his Oxford-based team in the lead-up to the 2016 election found that pro-Trump political messaging relied heavily on bots to spread fake news and discredit Hillary Clinton. By election day, Trump’s bots outnumbered hers, 5:1. “The use of automated accounts was deliberate and strategic throughout the election, most clearly with pro-Trump campaigners and programmers who carefully adjusted the timing of content production during the debates, strategically colonized pro-Clinton hashtags, and then disabled activities after Election Day,” the study by Woolley’s team reported. Woolley believes it’s likely that Cambridge Analytica was responsible for subcontracting the creation of those Trump bots, though he says he doesn’t have direct proof. Still, if anyone outside of the Trump campaign is qualified to speculate about who created those bots, it would be Woolley. Led by Dr. Philip Howard, the team’s Principal Investigator, Woolley and his colleagues have been tracking the use of bots in political organizing since 2010. That’s when Howard, buried deep in research about the role Twitter played in the Arab Spring, first noticed thousands of bots coopting hashtags used by protesters. Curious, he and his team began reaching out to hackers, botmakers, and political campaigns, getting to know them and trying to understand their work and motivations. Eventually, those creators would come to make up an informal network of nearly 100 informants that have kept Howard and his colleagues in the know about these bots over the last few years. Before long, Howard and his team were getting the heads up about bot propaganda campaigns from the creators themselves. As more and more major international political figures began using botnets as just another tool in their campaigns, Howard, Woolley and the rest of their team studied the action unfolding. 
The world these informants revealed is an international network of governments, consultancies (often with owners or top management just one degree away from official government actors), and individuals who build and maintain massive networks of bots to amplify the messages of political actors, spread messages counter to those of their opponents, and silence those whose views or ideas might threaten those same political actors. “The Chinese, Iranian, and Russian, governments employ their own social-media experts and pay small amounts of money to large numbers of people to generate pro-government messages,” Howard and his coauthors wrote in a 2015 research paper about the use of bots in the Venezuelan election. Depending on which of those three categories bot creators fall into — government, consultancy or individual — they’re just as likely to be motivated by political beliefs as they are the opportunity to auction off their networks of digital influence to the highest bidder. Not all bots are created equal. The average, run-of-the-mill Twitter bot is literally a robot — often programmed to retweet specific accounts to help popularize specific ideas or viewpoints. They also frequently respond automatically to Twitter users who use certain keywords or hashtags — often with pre-written slurs, insults or threats. High-end bots on the other hand are more analog, operated by real people. They assume fake identities with distinct personalities and their responses to other users online are specific, intended to change their opinions or those of their followers by attacking their viewpoints. They have online friends and followers. They’re also far less likely to be discovered — and their accounts deactivated — by Facebook or Twitter. Working on their own, Woolley estimates, an individual could build and maintain up to 400 of these boutique Twitter bots; on Facebook, which he says is more effective at identifying and shutting down fake accounts, an individual could manage 10–20. As a result, these high-quality botnets are often used for multiple political campaigns. During the Brexit referendum, the Oxford team watched as one network of bots, previously used to influence the conversation around the Israeli/Palestinian conflict, was reactivated to fight for the Leave campaign. Individual profiles were updated to reflect the new debate, their personal taglines changed to ally with their new allegiances — and away they went. Russia’s bot army has been the subject of particular scrutiny since a CIA special report revealed that Russia had been working to influence the election in Trump’s favor. Recently, reporter/comedian Samantha Bee traveled to Moscow to interview two paid Russian troll operators. Clad in black ski masks to obscure their identities, the two talked with Bee about how and why they were using their accounts during the U.S. election. They told Bee that they pose as Americans online and target sites like The Wall Street Journal, The New York Post, The Washington Post, Facebook and Twitter. Their goal, they said, is to “piss off” other social media users, change their opinions, and silence their opponents. Or, to put it in the words of Russian Troll #1, “when your opponent just ... shut up.” The 2016 U.S. election is over, but the Weaponized AI Propaganda Machine is just warming up. 
And while each of its components would be worrying on its own, together, they represent the arrival of a new era in political messaging — a steel wall between campaign winners and losers that can only be mounted by gathering more data, creating better personality analyses, rapid development of engagement AI, and hiring more trolls. At the moment, Trump and Cambridge Analytica are lapping their opponents. The more data they gather about individuals, the more Analytica and, by extension, Trump’s presidency will benefit from the network effects of their work — and the harder it will become to counter or fight back against their messaging in the court of public opinion. Each Tweet that echoes forth from the @realDonaldTrump and @POTUS accounts, announcing and defending the administration’s moves, is met with a chorus of protest and argument. But even that negative engagement becomes a valuable asset for the Trump administration because every impulsive tweet can be treated like a psychographic experiment. Trump’s first few weeks in office may have seemed bumbling, but they represent a clear signal of what lies ahead for Trump’s presidency — an executive order designed to enrage and distract his opponents as he and Bannon move to strip power from the judicial branch, install Bannon himself on the National Security Council, and issues a series of unconstitutional gag orders to federal agencies. Cambridge Analytica may be slated to secure more federal contracts and is likely about to begin managing White House digital communications for the rest of the Trump Administration. What new predictive-personality targeting becomes possible with potential access to data on U.S. voters from the IRS, Department of Homeland Security, or the NSA? “Lenin wanted to destroy the state, and that’s my goal, too. I want to bring everything crashing down and destroy all of today’s establishment,” Bannon said in 2013. We know that Steve Bannon subscribes to a theory of history where a messianic ‘Grey Warrior’ consolidates power and remakes the global order. Bolstered by the success of Brexit and the Trump victory, Breitbart (of which Bannon was Executive Chair until Trump’s election) and Cambridge Analytica (which Bannon sits on the board of) are now bringing fake news and automated propaganda to support far-right parties in at least Germany, France, Hungary, and India as well as parts of South America. Never has such a radical, international political movement had the precision and power of this kind of propaganda technology. Whether or not leaders, engineers, designers, and investors in the technology community respond to this threat will shape major aspects of global politics for the foreseeable future. The future of politics will not be a war of candidates or even cash on hand. And it’s not even about big data, as some have argued. Everyone will have access to big data — as Hillary did in the 2016 election. From now on, the distinguishing factor between those who win elections and those who lose them will be how a candidate uses that data to refine their machine learning algorithms and automated engagement tactics. Elections in 2018 and 2020 won’t be a contest of ideas, but a battle of automated behavior change. The fight for the future will be a proxy war of machine learning. It will be waged online, in secret, and with the unwitting help of all of you. Anyone who wants to effect change needs to understand this new reality. 
It’s only by understanding this — and by building better automated engagement systems that amplify genuine human passion rather than manipulate it — that other candidates and causes around the globe will be able to compete. Implication #1: Public Sentiment Turns Into High-Frequency Trading. Thanks to stock-trading algorithms, large portions of public stock and commodity markets no longer resemble a human system and, some would argue, no longer serve their purpose as a signal of value. Instead they’re a battleground for high-frequency trading algorithms attempting to influence price or find nano-leverage in price position. In the near future, we may see a similar process unfold in our public debates. Instead of battling press conferences and opinion articles, public opinion about companies and politicians may turn into multi-billion dollar battles between competing algorithms, each deployed to sway public sentiment. Stock trading algorithms already exist that analyze millions of Tweets and online posts in real-time and make trades in a matter of milliseconds based on changes in public sentiment. Algorithmic trading and ‘algorithmic public opinion’ are already connected. It’s likely they will continue to converge. Implication #2: Personalized, Automated Propaganda That Adapts to Your Weaknesses. What if President Trump’s 2020 re-election campaign didn’t just have the best political messaging, but 250 million algorithmic versions of its political message, all updating in real-time, personalized to precisely fit the worldview and attack the insecurities of their targets? Instead of having to deal with misleading politicians, we may soon witness a Cambrian explosion of pathologically lying political and corporate bots that constantly improve at manipulating us. Implication #3: Not Just a Bubble, But Trapped in Your Own Ideological Matrix. Imagine that in 2020 you find out that your favorite politics page or group on Facebook doesn’t actually have any other human members, but is filled with dozens or hundreds of bots that make you feel at home and keep your opinions validated. Is it possible that you might never find out? Correction: An earlier version of this story mistakenly referred to Steve Bannon as the owner of Breitbart News. Until Trump’s election, Bannon served as the Executive Chair of Breitbart, a position in which it is common to assume ownership through stock holdings. This story has been updated to reflect that. CEO & Co-founder @Join_Scout. The social implications of technology.
Slav Ivanov
4.4K
10
https://blog.slavv.com/37-reasons-why-your-neural-network-is-not-working-4020854bd607?source=tag_archive---------5----------------
37 Reasons why your Neural Network is not working – Slav
The network had been training for the last 12 hours. It all looked good: the gradients were flowing and the loss was decreasing. But then came the predictions: all zeroes, all background, nothing detected. “What did I do wrong?” — I asked my computer, who didn’t answer. Where do you start checking if your model is outputting garbage (for example, predicting the mean of all outputs, or having really poor accuracy)? A network might not be training for a number of reasons. Over the course of many debugging sessions, I would often find myself doing the same checks. I’ve compiled my experience along with the best ideas around in this handy list. I hope they will be of use to you, too. A lot of things can go wrong. But some of them are more likely to be broken than others. I usually start with this short list as an emergency first response: If the steps above don’t do it, start going down the following big list and verify things one by one. Check if the input data you are feeding the network makes sense. For example, I’ve more than once mixed up the width and the height of an image. Sometimes, I would feed all zeroes by mistake. Or I would use the same batch over and over. So print/display a couple of batches of input and target output and make sure they are OK. Try passing random numbers instead of actual data and see if the error behaves the same way. If it does, it’s a sure sign that your net is turning data into garbage at some point. Try debugging layer by layer, op by op, and see where things go wrong. Your data might be fine, but the code that passes the input to the net might be broken. Print the input of the first layer before any operations and check it. Check if a few input samples have the correct labels. Also make sure shuffling input samples works the same way for output labels. Maybe the non-random part of the relationship between the input and output is too small compared to the random part (one could argue that stock prices are like this). That is, the inputs are not sufficiently related to the outputs. There isn’t a universal way to detect this, as it depends on the nature of the data. This happened to me once when I scraped an image dataset off a food site. There were so many bad labels that the network couldn’t learn. Check a bunch of input samples manually and see if the labels seem off. The cutoff point is up for debate, as this paper got above 50% accuracy on MNIST using 50% corrupted labels. If your dataset hasn’t been shuffled and has a particular order to it (ordered by label), this could negatively impact the learning. Shuffle your dataset to avoid this. Make sure you are shuffling input and labels together. Are there 1,000 class A images for every class B image? Then you might need to balance your loss function or try other class imbalance approaches. If you are training a net from scratch (i.e. not finetuning), you probably need lots of data. For image classification, people say you need 1,000 images per class or more. This can happen in a sorted dataset (i.e. the first 10k samples contain the same class). Easily fixable by shuffling the dataset. This paper points out that having a very large batch can reduce the generalization ability of the model. Thanks to @hengcherkeng for this one: Did you standardize your input to have zero mean and unit variance? Augmentation has a regularizing effect. Too much of this combined with other forms of regularization (weight L2, dropout, etc.) can cause the net to underfit.
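Several of the data checks above (eyeballing a couple of batches, feeding random noise instead of real inputs, verifying that values and labels look sane) are easy to script. Below is a minimal PyTorch-flavoured sketch, assuming a standard DataLoader that yields float (inputs, labels) batches; the helper name is mine, not from the original post.

```python
import torch

def sanity_check_loader(loader, model=None, n_batches=2):
    """Quick input-pipeline checks: inspect a couple of batches and, optionally,
    compare the model's behaviour on real inputs vs. pure noise."""
    for i, (x, y) in enumerate(loader):
        if i >= n_batches:
            break
        # Eyeball shapes, dtypes, value ranges and a few labels (assumes float image tensors).
        print(f"batch {i}: x {tuple(x.shape)} {x.dtype} "
              f"min={x.min().item():.3f} max={x.max().item():.3f}")
        print(f"          y {tuple(y.shape)} first labels: {y[:8].tolist()}")
        if model is not None:
            with torch.no_grad():
                real_out = model(x)
                noise_out = model(torch.randn_like(x))
            # If the outputs barely differ between real data and random noise,
            # the net is likely turning its input into garbage somewhere.
            gap = (real_out - noise_out).abs().mean().item()
            print(f"          mean |output(real) - output(noise)| = {gap:.4f}")
```

If the printed batches look wrong (swapped width and height, all zeros, the same batch repeating), the problem is in the data pipeline rather than the model.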
If you are using a pretrained model, make sure you are using the same normalization and preprocessing as were used when the model was trained. For example, should an image pixel be in the range [0, 1], [-1, 1] or [0, 255]? CS231n points out a common pitfall: Also, check for different preprocessing in each sample or batch. This will help with finding where the issue is. For example, if the target output is an object class and coordinates, try limiting the prediction to object class only. Again from the excellent CS231n: Initialize with small parameters, without regularization. For example, if we have 10 classes, performing at chance means we will get the correct class 10% of the time, and the Softmax loss is the negative log probability of the correct class, so: -ln(0.1) = 2.302. After this, try increasing the regularization strength, which should increase the loss. If you implemented your own loss function, check it for bugs and add unit tests. Often, my loss would be slightly incorrect and hurt the performance of the network in a subtle way. If you are using a loss function provided by your framework, make sure you are passing to it what it expects. For example, in PyTorch I would mix up NLLLoss and CrossEntropyLoss, as the former expects log-softmax inputs while the latter expects raw logits. If your loss is composed of several smaller loss functions, make sure their magnitudes relative to each other are correct. This might involve testing different combinations of loss weights. Sometimes the loss is not the best predictor of whether your network is training properly. If you can, use other metrics like accuracy. Did you implement any of the layers in the network yourself? Check and double-check to make sure they are working as intended. Check if you unintentionally disabled gradient updates for some layers/variables that should be learnable. Maybe the expressive power of your network is not enough to capture the target function. Try adding more layers or more hidden units in fully connected layers. If your input looks like (k, H, W) = (64, 64, 64), it’s easy to miss errors related to wrong dimensions. Use weird numbers for input dimensions (for example, different prime numbers for each dimension) and check how they propagate through the network. If you implemented Gradient Descent by hand, gradient checking makes sure that your backpropagation works like it should. More info: 1 2 3. Overfit a small subset of the data and make sure it works. For example, train with just 1 or 2 examples and see if your network can learn to differentiate these. Move on to more samples per class. If unsure, use Xavier or He initialization. Also, your initialization might be leading you to a bad local minimum, so try a different initialization and see if it helps. Maybe you are using a particularly bad set of hyperparameters. If feasible, try a grid search. Too much regularization can cause the network to underfit badly. Reduce regularization such as dropout, batch norm, weight/bias L2 regularization, etc. In the excellent “Practical Deep Learning for coders” course, Jeremy Howard advises getting rid of underfitting first. This means you overfit the training data sufficiently, and only then address overfitting. Maybe your network needs more time to train before it starts making meaningful predictions. If your loss is steadily decreasing, let it train some more. Some frameworks have layers like Batch Norm and Dropout that behave differently during training and testing.
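Two of the checks above lend themselves to a few lines of code: the chance-level starting loss and overfitting a tiny subset. Here is a minimal sketch, assuming a PyTorch classifier and a small batch (x, y); the numbers are illustrative, and it also shows where the train/eval mode switch from the previous point matters.

```python
import math
import torch
import torch.nn.functional as F

def check_initial_loss(model, x, y, num_classes):
    """With small initial weights and no regularization, the starting
    cross-entropy loss should sit near -ln(1/num_classes)."""
    model.eval()  # Batch Norm / Dropout layers behave differently in train mode
    with torch.no_grad():
        loss = F.cross_entropy(model(x), y).item()
    expected = -math.log(1.0 / num_classes)  # e.g. 2.302 for 10 classes
    print(f"initial loss {loss:.3f} vs. chance-level loss {expected:.3f}")

def overfit_tiny_subset(model, x, y, steps=200, lr=1e-3):
    """A healthy network should drive the loss to ~0 on a handful of samples."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    print(f"loss on tiny subset after {steps} steps: {loss.item():.4f}")
```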
Switching to the appropriate mode (training vs. evaluation) might help your network predict properly. Your choice of optimizer shouldn’t prevent your network from training, unless you have selected particularly bad hyperparameters. However, the proper optimizer for a task can be helpful in getting the most training in the shortest amount of time. The paper which describes the algorithm you are using should specify the optimizer. If not, I tend to use Adam or plain SGD with momentum. Check this excellent post by Sebastian Ruder to learn more about gradient descent optimizers. A low learning rate will cause your model to converge very slowly. A high learning rate will quickly decrease the loss in the beginning but might have a hard time finding a good solution. Play around with your current learning rate by multiplying it by 0.1 or 10. Getting a NaN (Not a Number) is a much bigger issue when training RNNs (from what I hear). Some approaches to fix it: Did I miss anything? Is anything wrong? Let me know by leaving a reply below. Entrepreneur / Hacker Machine learning, Deep learning and other types of learning.
Keval Patel
833
7
https://becominghuman.ai/turn-your-raspberry-pi-into-homemade-google-home-9e29ad220075?source=tag_archive---------6----------------
Turn your Raspberry Pi into homemade Google Home – Becoming Human: Artificial Intelligence Magazine
Google Home is a beautiful device with built-in Google Assistant — a state-of-the-art digital personal assistant by Google — which you can place anywhere in your home, and it will do some amazing things for you. It will save your reminders, shopping lists, and notes, and most importantly, answer your questions and queries based on the context of the conversations. In this article, you are going to learn to turn your Raspberry Pi into a homemade Google Home device. So, let’s get started. Once you have all these things, log in to the Raspbian desktop and go through the following steps one by one. As you can see, your USB device is attached to card 1 and the device id is 0. Raspberry Pi recognizes card 0 as the internal sound card (which is bcm2835) and other sound cards as external sound cards. This will set your external mic (see pcm.mic) as the audio capture device (see pcm.!default) and your inbuilt sound card (card 0) as the speaker device. This will create a Python 3 environment (as the Google Assistant library runs on Python 3.x only) on your Raspberry Pi and install the required dependencies. If instead it displays InvalidGrantError, then an invalid code was entered. Try again. You can run google-assistant-init.sh to initiate the Google Assistant any time. 1. Autostart with Pixel Desktop on Boot: 2. Autostart with CLI on Boot: You can do many everyday things with your Google Home. If you want to perform custom tasks like turning off the light or opening the door, you can do it by integrating Google Actions into your Google Assistant. If you have any trouble starting the Google Assistant, leave a comment below. I will try to resolve it. ~If you liked the article, click the 💚 below so more people can see it! Also, you can follow me on Medium or on My Blog, so you get updates regarding my future articles!!~ www.kevalpatel2106.com | Android Developer | Machine learner | Gopher | Open Source Contributor Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity.
Eduard Tyantov
5.4K
19
https://blog.statsbot.co/deep-learning-achievements-4c563e034257?source=tag_archive---------7----------------
Deep Learning Achievements Over the Past Year – Stats and Bots
At Statsbot, we’re constantly reviewing the deep learning achievements to improve our models and product. Around Christmas time, our team decided to take stock of the recent achievements in deep learning over the past year (and a bit longer). We translated the article by a data scientist, Ed Tyantov, to tell you about the most significant developments that can affect our future. Almost a year ago, Google announced the launch of a new model for Google Translate. The company described in detail the network architecture — Recurrent Neural Network (RNN). The key outcome: closing down the gap with humans in accuracy of the translation by 55–85% (estimated by people on a 6-point scale). It is difficult to reproduce good results with this model without the huge dataset that Google has. You probably heard the silly news that Facebook turned off its chatbot, which went out of control and made up its own language. This chatbot was created by the company for negotiations. Its purpose is to conduct text negotiations with another agent and reach a deal: how to divide items (books, hats, etc.) by two. Each agent has his own goal in the negotiations that the other does not know about. It’s impossible to leave the negotiations without a deal. For training, they collected a dataset of human negotiations and trained a supervised recurrent network. Then, they took a reinforcement learning trained agent and trained it to talk with itself, setting a limit — the similarity of the language to human. The bot has learned one of the real negotiation strategies — showing a fake interest in certain aspects of the deal, only to give up on them later and benefit from its real goals. It has been the first attempt to create such an interactive bot, and it was quite successful. Full story is in this article, and the code is publicly available. Certainly, the news that the bot has allegedly invented a language was inflated from scratch. When training (in negotiations with the same agent), they disabled the restriction of the similarity of the text to human, and the algorithm modified the language of interaction. Nothing unusual. Over the past year, recurrent networks have been actively developed and used in many tasks and applications. The architecture of RNNs has become much more complicated, but in some areas similar results were achieved by simple feedforward-networks — DSSM. For example, Google has reached the same quality, as with LSTM previously, for its mail feature Smart Reply. In addition, Yandex launched a new search engine based on such networks. Employees of DeepMind reported in their article about generating audio. Briefly, researchers made an autoregressive full-convolution WaveNet model based on previous approaches to image generation (PixelRNN and PixelCNN). The network was trained end-to-end: text for the input, audio for the output. The researches got an excellent result as the difference compared to human has been reduced by 50%. The main disadvantage of the network is a low productivity as, because of the autoregression, sounds are generated sequentially and it takes about 1–2 minutes to create one second of audio. Look at... sorry, hear this example. If you remove the dependence of the network on the input text and leave only the dependence on the previously generated phoneme, then the network will generate phonemes similar to the human language, but they will be meaningless. Hear the example of the generated voice. 
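The productivity problem mentioned above comes from the sampling loop itself: each new audio sample is generated conditioned on everything produced so far. A schematic sketch of this autoregressive loop follows (the model call and its signature are placeholders, not DeepMind’s code):

```python
import torch

@torch.no_grad()
def autoregressive_sample(model, conditioning, n_samples):
    """Schematic WaveNet-style generation: samples are produced one at a time,
    each conditioned on all previous ones, which is why synthesis is slow."""
    generated = torch.zeros(1, 0, dtype=torch.long)   # empty waveform so far
    for _ in range(n_samples):                        # e.g. 16,000 steps per second of audio
        logits = model(generated, conditioning)       # assumed signature: (history, text features)
        probs = torch.softmax(logits[:, -1], dim=-1)  # distribution over the next quantized sample
        nxt = torch.multinomial(probs, num_samples=1)
        generated = torch.cat([generated, nxt], dim=1)
    return generated
```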
This same model can be applied not only to speech, but also, for example, to creating music. Imagine audio generated by the model, which was taught using the dataset of a piano game (again without any dependence on the input data). Read a full version of DeepMind research if you’re interested. Lip reading is another deep learning achievement and victory over humans. Google Deepmind, in collaboration with Oxford University, reported in the article, “Lip Reading Sentences in the Wild” on how their model, which had been trained on a television dataset, was able to surpass the professional lip reader from the BBC channel. There are 100,000 sentences with audio and video in the dataset. Model: LSTM on audio, and CNN + LSTM on video. These two state vectors are fed to the final LSTM, which generates the result (characters). Different types of input data were used during training: audio, video, and audio + video. In other words, it is an “omnichannel” model. The University of Washington has done a serious job of generating the lip movements of former US President Obama. The choice fell on him due to the huge number of his performance recordings online (17 hours of HD video). They couldn’t get along with just the network as they got too many artifacts. Therefore, the authors of the article made several crutches (or tricks, if you like) to improve the texture and timings. You can see that the results are amazing. Soon, you couldn’t trust even the video with the president. In their post and article, Google Brain Team reported on how they introduced a new OCR (Optical Character Recognition) engine into its Maps, through which street signs and store signs are recognized. In the process of technology development, the company compiled a new FSNS (French Street Name Signs), which contains many complex cases. To recognize each sign, the network uses up to four of its photos. The features are extracted with the CNN, scaled with the help of the spatial attention (pixel coordinates are taken into account), and the result is fed to the LSTM. The same approach is applied to the task of recognizing store names on signboards (there can be a lot of “noise” data, and the network itself must “focus” in the right places). This algorithm was applied to 80 billion photos. There is a type of task called visual reasoning, where a neural network is asked to answer a question using a photo. For example: “Is there a same size rubber thing in the picture as a yellow metal cylinder?” The question is truly nontrivial, and until recently, the problem was solved with an accuracy of only 68.5%. And again the breakthrough was achieved by the team from Deepmind: on the CLEVR dataset they reached a super-human accuracy of 95.5%. The network architecture is very interesting: An interesting application of neural networks was created by the company Uizard: generating a layout code according to a screenshot from the interface designer. This is an extremely useful application of neural networks, which can make life easier when developing software. The authors claim that they reached 77% accuracy. However, this is still under research and there is no talk on real usage yet. There is no code or dataset in open source, but they promise to upload it. Perhaps you’ve seen Quick, Draw! from Google, where the goal is to draw sketches of various objects in 20 seconds. The corporation collected this dataset in order to teach the neural network to draw, as Google described in their blog and article. 
The collected dataset consists of 70 thousand sketches, which eventually became publicly available. Sketches are not pictures, but detailed vector representations of drawings (at which point the user pressed the “pencil,” released where the line was drawn, and so on). Researchers have trained the Sequence-to-Sequence Variational Autoencoder (VAE) using RNN as a coding/decoding mechanism. Eventually, as befits the auto-encoder, the model received a latent vector that characterizes the original picture. Whereas the decoder can extract a drawing from this vector, you can change it and get new sketches. And even perform vector arithmetic to create a catpig: One of the hottest topics in Deep Learning is Generative Adversarial Networks (GANs). Most often, this idea is used to work with images, so I will explain the concept using them. The idea is in the competition of two networks — the generator and the discriminator. The first network creates a picture, and the second one tries to understand whether the picture is real or generated. Schematically it looks like this: During training, the generator from a random vector (noise) generates an image and feeds it to the input of the discriminator, which says whether it is fake or not. The discriminator is also given real images from the dataset. It is difficult to train such construction, as it is hard to find the equilibrium point of two networks. Most often the discriminator wins and the training stagnates. However, the advantage of the system is that we can solve problems in which it is difficult for us to set the loss-function (for example, improving the quality of the photo) — we give it to the discriminator. A classic example of the GAN training result is pictures of bedrooms or people Previously, we considered the auto-coding (Sketch-RNN), which encodes the original data into a latent representation. The same thing happens with the generator. The idea of generating an image using a vector is clearly shown in this project in the example of faces. You can change the vector and see how the faces change. The same arithmetic works over the latent space: “a man in glasses” minus “a man” plus a “woman” is equal to “a woman with glasses.” If you teach a controlled parameter to the latent vector during training, when you generate it, you can change it and so manage the necessary image in the picture. This approach is called conditional GAN. So did the authors of the article, “Face Aging With Conditional Generative Adversarial Networks.” Having trained the engine on the IMDB dataset with a known age of actors, the researchers were given the opportunity to change the face age of the person. Google has found another interesting application to GAN — the choice and improvement of photos. GAN was trained on a professional photo dataset: the generator is trying to improve bad photos (professionally shot and degraded with the help of special filters), and the discriminator — to distinguish “improved” photos and real professional ones. A trained algorithm went through Google Street View panoramas in search of the best composition and received some pictures of professional and semi-professional quality (as per photographers’ rating). An impressive example of GANs is generating images using text. The authors of this research suggest embedding text into the input of not only a generator (conditional GAN), but also a discriminator, so that it verifies the correspondence of the text to the picture. 
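To make the generator-versus-discriminator game described above concrete, here is a minimal adversarial training step in PyTorch. This is a generic sketch, not the setup of any paper mentioned here: G, D and their optimizers are assumed to exist, and D is assumed to output a probability that its input is real.

```python
import torch
import torch.nn.functional as F

def gan_training_step(G, D, real_images, opt_g, opt_d, z_dim=100):
    """One generic adversarial step: D learns to tell real from fake,
    while G learns to fool D."""
    batch = real_images.size(0)
    ones = torch.ones(batch, 1)    # label for "real"
    zeros = torch.zeros(batch, 1)  # label for "fake"

    # Discriminator update: real images -> 1, generated images -> 0.
    z = torch.randn(batch, z_dim)
    fake = G(z).detach()  # don't backpropagate into G on this step
    d_loss = F.binary_cross_entropy(D(real_images), ones) + \
             F.binary_cross_entropy(D(fake), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make D output 1 on freshly generated images.
    z = torch.randn(batch, z_dim)
    g_loss = F.binary_cross_entropy(D(G(z)), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Keeping these two losses in balance is exactly the equilibrium problem mentioned above; when the discriminator wins too easily, training stagnates.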
To make sure the discriminator in this text-to-image setup learned its job, in addition to regular training the authors added pairs of real pictures with incorrect text. One of the eye-catching articles of 2016 is “Image-to-Image Translation with Conditional Adversarial Networks” by Berkeley AI Research (BAIR). The researchers tackled the problem of image-to-image generation: for example, creating a map from a satellite image, or a realistic texture of an object from its sketch. Here is another example of the successful performance of conditional GANs. In this case, the condition is the whole input picture. UNet, popular in image segmentation, was used as the architecture of the generator, and a new PatchGAN classifier was used as the discriminator to combat blurred images (the picture is cut into N patches, and the fake/real prediction is made for each of them separately). Christopher Hesse made the nightmare cat demo, which attracted great interest from users. You can find the source code here. In order to apply Pix2Pix, you need a dataset with corresponding pairs of pictures from different domains. In the case of maps, for example, it is not a problem to assemble such a dataset. However, if you want to do something more complicated like “transfiguring” objects or styling, then pairs of objects cannot be found in principle. Therefore, the authors of Pix2Pix decided to develop their idea and came up with CycleGAN for transfer between different domains of images without specific pairs — “Unpaired Image-to-Image Translation.” The idea is to teach two pairs of generator-discriminators to transfer an image from one domain to another and back, while requiring cycle consistency — after a sequential application of the two generators, we should get an image similar to the original (measured with an L1 loss). A cyclic loss is required to ensure that the generator does not simply begin transferring pictures of one domain to pictures from another domain that are completely unrelated to the original image. This approach allows you to learn the mapping of horses -> zebras. Such transformations are unstable and often produce unsuccessful results: You can find the source code here. Machine learning is now coming to medicine. In addition to analyzing ultrasound and MRI scans and aiding diagnosis, it can be used to find new drugs to fight cancer. We already reported in detail about this research. Briefly, with the help of an Adversarial Autoencoder (AAE), you can learn the latent representation of molecules and then use it to search for new ones. As a result, 69 molecules were found, half of which are used to fight cancer, and the others have serious potential. Adversarial attacks are an actively explored topic. What are adversarial attacks? Standard networks trained, for example, on ImageNet are completely unstable when special noise is added to the classified picture. In the example below, we see that the picture with added noise looks practically unchanged to the human eye, but the model goes crazy and predicts a completely different class. Such an attack can be carried out with, for example, the Fast Gradient Sign Method (FGSM): having access to the parameters of the model, you can take one or several gradient steps towards the desired class and change the original picture. One of the tasks on Kaggle is related to this: participants are encouraged to create universal attacks/defenses, which are all eventually run against each other to determine the best. Why should we even investigate these attacks?
First, if we want to protect our products, we can add noise to the captcha to prevent spammers from recognizing it automatically. Secondly, algorithms are more and more involved in our lives — face recognition systems and self-driving cars. In this case, attackers can use the shortcomings of the algorithms. Here is an example of when special glasses allow you to deceive the face recognition system and “pass yourself off as another person.” So, we need to take possible attacks into account when teaching models. Such manipulations with signs also do not allow them to be recognized correctly. • A set of articles from the organizers of the contest.• Already written libraries for attacks: cleverhans and foolbox. Reinforcement learning (RL), or learning with reinforcement is also one of the most interesting and actively developing approaches in machine learning. The essence of the approach is to learn the successful behavior of the agent in an environment that gives a reward through experience — just as people learn throughout their lives. RL is actively used in games, robots, and system management (traffic, for example). Of course, everyone has heard about AlphaGo’s victories in the game over the best professionals. Researchers were using RL for training: the bot played with itself to improve its strategies. In previous years, DeepMind had learned using DQN to play arcade games better than humans. Currently, algorithms are being taught to play more complex games like Doom. Much of the attention is paid to learning acceleration because experience of the agent in interaction with the environment requires many hours of training on modern GPUs. In his blog, Deepmind reported that the introduction of additional losses (auxiliary tasks), such as the prediction of a frame change (pixel control) so that the agent better understands the consequences of the actions, significantly speeds up learning. Learning results: 4.2. Learning robotsIn OpenAI, they have been actively studying an agent’s training by humans in a virtual environment, which is safer for experiments than in real life. In one of the studies, the team showed that one-shot learning is possible: a person shows in VR how to perform a certain task, and one demonstration is enough for the algorithm to learn it and then reproduce it in real conditions. If only it was so easy with people. :) Here is the work of OpenAI and DeepMind on the same topic. The bottom line is that an agent has a task, the algorithm provides two possible solutions for the human and indicates which one is better. The process is repeated iteratively and the algorithm for 900 bits of feedback (binary markup) from the person learned how to solve the problem. As always, the human must be careful and think of what he is teaching the machine. For example, the evaluator decided that the algorithm really wanted to take the object, but in fact, he just simulated this action. There is another study from DeepMind. To teach the robot complex behavior (walk, jump, etc.), and even do it similar to the human, you have to be heavily involved with the choice of the loss function, which will encourage the desired behavior. However, it would be preferable that the algorithm learned complex behavior itself by leaning with simple rewards. Researchers managed to achieve this: they taught agents (body emulators) to perform complex actions by constructing a complex environment with obstacles and with a simple reward for progress in movement. You can watch the impressive video with results. 
However, it’s much more fun to watch it with a superimposed sound! Finally, I will give a link to the recently published algorithms for learning RL from OpenAI. Now you can use more advanced solutions than the standard DQN. In July 2017, Google reported that it took advantage of DeepMind’s development in machine learning to reduce the energy costs of its data center. Based on the information from thousands of sensors in the data center, Google developers trained a neural network ensemble to predict PUE (Power Usage Effectiveness) and more efficient data center management. This is an impressive and significant example of the practical application of ML. As you know, trained models are poorly transferred from task to task, as each task has to be trained for a specific model. A small step towards the universality of the models was done by Google Brain in his article “One Model To Learn The All.” Researchers have trained a model that performs eight tasks from different domains (text, speech, and images). For example, translation from different languages, text parsing, and image and sound recognition. In order to achieve this, they built a complex network architecture with various blocks to process different input data and generate a result. The blocks for the encoder/decoder fall into three types: convolution, attention, and gated mixture of experts (MoE). Main results of learning: By the way, this model is present in tensor2tensor. In their post, Facebook staff told us how their engineers were able to teach the Resnet-50 model on Imagenet in just one hour. Truth be told, this required a cluster of 256 GPUs (Tesla P100). They used Gloo and Caffe2 for distributed learning. To make the process effective, it was necessary to adapt the learning strategy with a huge batch (8192 elements): gradient averaging, warm-up phase, special learning rate, etc. As a result, it was possible to achieve an efficiency of 90% when scaling from 8 to 256 GPU. Now researchers from Facebook can experiment even faster, unlike mere mortals without such a cluster. The self-driving car sphere is intensively developing, and the cars are actively tested. From the relatively recent events, we can note the purchase of Intel MobilEye, the scandals around Uber and Google technologies stolen by their former employee, the first death when using an autopilot, and much more. I will note one thing: Google Waymo is launching a beta program. Google is a pioneer in this field, and it is assumed that their technology is very good because cars have been driven more than 3 million miles. As to more recent events, self-driving cars have been allowed to travel across all US states. As I said, modern ML is beginning to be introduced into medicine. For example, Google collaborates with a medical center to help with diagnosis. Deepmind has even established a separate unit. This year, under the program of the Data Science Bowl, there was a competition held to predict lung cancer in a year on the basis of detailed images with a prize fund of one million dollars. Currently, there are heavy investments in ML as it was before with BigData. China invested $150 billion in AI to become the world leader in the industry. For comparison, Baidu Research employs 1,300 people, and in the same FAIR (Facebook) — 80. At the last KDD, Alibaba employees talked about their parameter server KungPeng, which runs on 100 billion samples with a trillion parameters, which “becomes a common task” ©. 
You can draw your own conclusions, and in any case it’s never too late to study machine learning. In one way or another, over time, all developers will use machine learning, and it will become one of the common skills, as the ability to work with databases is today. Link to the original post. Mail.ru Group, Head of Machine Learning Team Data stories on machine learning and analytics. From Statsbot’s makers.
Maruti Techlabs
552
5
https://chatbotsmagazine.com/which-are-the-best-intelligent-chatbots-or-ai-chatbots-available-online-cc49c0f3569d?source=tag_archive---------8----------------
What Are The Best Intelligent Chatbots or AI Chatbots Available Online?
How do we define the intelligence of a chatbot? You can see a lot of articles about what would make a chatbot “appear intelligent.” A chatbot is intelligent when it becomes aware of user needs. Its intelligence is what gives the chatbot the ability to handle any scenario of a conversation with ease. Are the travel bots or the weather bots that give you buttons to click and answer some query artificially intelligent? Definitely, but they are just not far along the conversation axis. It can be a wonderfully designed conversational interface that is smooth and easy to use. It could be natural language processing and understanding, where it is able to understand sentences that you structure in the wrong way. Now, it is easier than ever to make a bot from scratch. Also, chatbot development platforms like Chatfuel and Gupshup make it fairly simple to build a chatbot without a technical background, making chatbots easy and accessible for anyone who would like to have one for their business. For more understanding of intelligent chatbots, read our blog. The best AI-based chatbots available online are Mitsuku, Rose, Poncho, Right Click, Insomno Bot, Dr. AI and Melody. This chatbot is one of the best AI chatbots, and it’s my favorite too. It is the current winner of the Loebner Prize. The Loebner Prize is an annual competition in artificial intelligence that awards prizes to the chatterbot considered by the judges to be the most human-like. The format of the competition is that of a standard Turing test. You can talk with Mitsuku for hours without getting bored. It replies to your questions in a very human way and understands your mood from the language you’re using. It is a bot made to chat about anything, which is one of the main reasons that make it so human-like — contrary to other chatbots that are made for a specific task. Rose is a chatbot, and a very good one — she won recognition this past Saturday as the most human-like chatbot in a competition run as a standard Turing test, taking the Loebner Prize in 2014 and 2015. Right Click is a startup that introduced an A.I.-powered chatbot that creates websites. It asks general questions during the conversation like “What industry do you belong to?” and “Why do you want to make a website?” and creates customized templates as per the given answers. Hira Saeed tried to divert it from its job by asking it about love, but what a smart player it is! By replying to each of her queries, it tried to bring her back to the actual job of website creation. The process was short but keeps you hooked. Poncho is a Messenger bot designed to be your one and only weather expert. It sends alerts up to twice a day with user consent and is intelligent enough to answer questions like “Should I take an umbrella today?” Read Poncho developer’s piece: Think Differently When Building Bots. Insomno Bot is for night owls. As the name suggests, it is for all the people out there who have trouble sleeping. This bot talks to you when you have no one around and gives you amazing replies so that you won’t get bored. It’s not something that will help you count stars when you can’t sleep or help you with reading suggestions, but this bot talks to you about anything. It asks about symptoms, body parameters and medical history, then compiles a list of the most and least likely causes for the symptoms and ranks them by order of seriousness. It lives inside the existing Baidu Doctor app.
This app collects medical information from people and then passes it to doctors in a form that makes it easier to use for diagnostic purposes or to otherwise respond to. Featured CBM: The Future, Healthcare, and Conversational UI These are just the basic versions of intelligent chatbots. There are many more intelligent chatbots out there which provide a much smarter approach to responding to queries. Since the process of making an intelligent chatbot is not a big task, most of us can achieve it with the most basic technical knowledge. Many of them will be extremely helpful in the service industry and will also help provide a better customer experience. The most important part of any chatbot is the conversation it has with its user. Hence, more effort has to be put into designing a chatbot conversation. Hope you had a good read. To know more about chatbots and how they converse with people, visit the link below. Featured CBM: How to Make a Chatbot Intelligent? If you resonated with this article, please subscribe to our newsletter. You will get a free copy of our Case Study on Business Automation through our Bot solution. Professional team delivering enterprise software solutions — Bot development, Big Data Analytics, Web & Mobile Apps, and AI & ML integration. Chatbots, AI, NLP, Facebook Messenger, Slack, Telegram, and more.
Jerry Chen
2.3K
11
https://news.greylock.com/the-new-moats-53f61aeac2d9?source=tag_archive---------9----------------
The New Moats – Greylock Perspectives
To build a sustainable and profitable business, you need strong defensive moats around your company. This rings especially true today as we undergo one of the largest platform shifts in a generation as applications move to the cloud, are consumed on iPhones, Echoes, and Teslas, are built on open source, and are fueled by AI and data. These dramatic shifts are rendering some existing moats useless and leaving CEOs feeling like it’s almost impossible to build a defensible business. In this post, I’ll review some of the traditional economic moats that technology companies typically leverage and how they are being disrupted. I believe that startups today need to build systems of intelligenceTM — AI powered applications — “the new moats.” Businesses can build several different moats and over time these moats can change. The following list is definitely not exhaustive and fair warning, it will read like a bad b-school blog! Some of the greatest and most enduring technology companies are defended by powerful moats. For example, Microsoft, Google, and Facebook all have moats built on economies of scale and network effects. One of the most successful cloud businesses, Amazon Web Services (AWS), has both the advantages of scale but also the power of network effects. More apps and services are built natively on AWS because “that’s where the customers and the data are.” In turn, the ecosystem of solutions attracts more customers and developers who build more apps that generate more data continuing the virtuous cycle while driving down Amazon’s cost through the advantages of scale. Strong moats help companies survive through major platform shifts, but surviving should not be confused with thriving. For example, high switching costs can partly account for why mainframes and “big iron” systems are still around after all these years. Legacy businesses with deep moats may not be the high growth vehicles of their prime, but they are still generating profits. Companies need to recognize and react when they are in the midst of an industry wide transformation, lest they become victims of their own success. Moreover, these massive platforms shifts — like cloud and mobile — are technology tidal waves that create openings for new players and enable founders to build paths over and around existing moats. Startup founders who succeed tend to execute a dual-pronged strategy: 1) Attack legacy player moats and 2) simultaneously build their own defensible moats that ride the new wave. For example, Facebook had the most entrenched social network, but Instagram built a mobile-first photo app that rode the smartphone wave to a $1B acquisition. In the enterprise world, SaaS companies like Salesforce are disrupting on-premise software companies like Oracle. Now with the advent of cloud, AWS, Azure, and Google Cloud are creating a direct channel to the customer. These platform shifts can also change the buyer and end user. Within the enterprise, the buyer has moved from a central IT team to an office knowledge worker, to someone with an iPhone, to any developer with a GitHub account. In this current wave of disruption, is it still possible to build sustainable moats? For founders, it may feel like every advantage you build can be replicated by another team down the street, or at the very least, it feels like moats can only be built at massive scale. 
Open source tools and cloud have pushed power to the “new incumbents,’ — the current generation of companies that are at massive scale, have strong distribution networks, high switching cost, and strong brands working for them. These are companies like Apple, Facebook, Google, Amazon, and Salesforce. Why does it feel like there are “no more moats” to build? In an era of cloud and open source, deep technology attacking hard problems is becoming a shallower moat. The use of open source is making it harder to monetize technology advances while the use of cloud to deliver technology is moving defensibility to different parts of the product. Companies that focus too much on technology without putting it in context of a customer problem will be caught between a rock and a hard place — or as I like to say, “between open source and a cloud place.” For example, incumbent technologies like Oracle’s proprietary database are being attacked from open source alternatives like Hadoop and MongoDB and in the cloud by Amazon Aurora and innovations like Google Spanner. On the other hand, companies that build great customer experiences may find defensibility through the workflow of their software. I believe that deep technology moats aren’t completely gone and defensible business models can still be built around IP. If you pick a place in the technology stack and become the absolute best of breed solution you can create a valuable company. However, this means picking a technical problem with few substitutes, that requires hard engineering, and needs operational knowledge to scale. Today the market is favoring “full stack” companies, SaaS offerings that offer application logic, middleware, and databases combined. Technology is becoming an invisible component of a complete solution (e.g. “No one cares what database backs your favorite mobile app as long as your food is delivered on time!”). In the consumer world, Apple made the integrated or full stack experience popular with the iPhone which seamlessly integrated hardware with software. This integrated experience is coming to dominate enterprise software as well. Cloud and SaaS has made it possible to reach customers directly and in a cost-effective manner. As a result, customers are increasingly buying full stack technology in the form of SaaS applications instead of buying individual pieces of the tech stack and building their own apps. The emphasis on the whole application experience or the “top of the technology stack” is why I also evaluate companies through an additional framework, the stack of enterprise systems. At the bottom of the stack of systems, is usually a database on top of which an application is built. If the data and app power a critical business function, it becomes a “system of record.” There are three major systems of record in an enterprise: your customers, your employees, and your assets. CRM owns your customers, HCM, owns your employees, and ERP/Financials owns your assets. Generations of companies have been built around owning a system of record and every wave produced a new winner. In CRM we saw Salesforce replace Siebel as the system of record for customer data, and Workday replace Oracle PeopleSoft for employee data. Workday has also expanded into financial data. Other applications can be built around a system of record but are usually not as valuable as the actual system of record. 
For example, marketing automation companies like Marketo and Responsys built big businesses around CRM, but never became as strategic or as valuable as Salesforce. Systems of engagementTM are the interfaces between users and the systems of record and can be powerful businesses because they control the end user interactions. In the mainframe era, the systems of record and engagement were tied together when the mainframe and terminal were essentially the same product. The client/server wave ushered in a class of companies that tried to own your desktop, only to be disrupted by a generation of browser based companies, only to be succeeded by mobile first companies. The current generation of companies vying to own the system of engagement include Slack, Amazon Alexa, and every other speech / text/ conversational UI startup. In China, WeChat has become a dominant system of engagement and is now a platform for everything from e-commerce to games. If it sounds like systems of engagementTM turn over more than systems of record, it’s probably because they do. The successive generations of systems of engagementTM don’t necessarily disappear but instead users keep adding new ways to interact with their applications. In a multi-channel world, owning the system of engagement is most valuable if you control most of the end user engagement or are a cross channel system that reaches users wherever they are. Perhaps the most strategic advantage of being a system of engagement is that you can coexist with several systems of record and collect all the data that passes through your product. Over time you can evolve your engagement position into an actual system of record using all the data you have accumulated. I believe that systems of intelligenceTM are the new moats. What is a system of intelligence and why is it so defensible? What makes a system of intelligence valuable is that it typically crosses multiple data sets, multiple systems of record. One example is an application that combines web analytics with customer data and social data to predict end user behavior, churn, LTV, or just serve more timely content. You can build intelligence on a single data source or single system of record but that position becomes harder to defend against the vendor that owns the data. For a startup to thrive around incumbents like Oracle and SAP, you need to combine their data with other data sources (public or private) to create value for your customer. Incumbents will be advantaged on their own data. For example, Salesforce is building a system of intelligence, Einstein, starting with their own system of record, CRM. The next generation of enterprise products will use different artificial intelligence (AI) techniques to build systems of intelligenceTM. It’s not just applications that will be transformed by AI but also data center and infrastructure products. We can categorize three major areas where you can build systems of intelligenceTM: customer facing applications around the customer journey, employee facing applications like HCM, ITSM, Financials, or infrastructure systems like security, compute/ storage/ networking, and monitoring/ management. In addition to these broad horizontal use cases, startups can also focus on a single industry or market and build a system of intelligence around data that is unique to a vertical like Veeva in life sciences, or Rhumbix in construction. In all of these markets, the battle is moving from the old moats, the sources of the data, to the new moats, what you do with the data. 
Using a company’s data, you can upsell customers, automatically respond to support tickets, prevent employee attrition, and identify security anomalies. Products that use data specific to an industry (i.e. healthcare, financial services), or unique to a company (customer data, machine logs, etc.) to solve a strategic problem begin to look like a pretty deep moat, especially if you can replace or automate an entire enterprise workflow or create a new value-added workflow that was made possible by this intelligence. Enterprise applications that built systems of record have always been powerful businesses models. Some of the most enduring app companies like Salesforce and SAP are all built on deep IP, benefit from economies of scale, and over time they accumulate more data and operating knowledge as they get deeper within a company’s workflow and business processes. However, even these incumbents are not immune to platform shifts as a new generation of companies attack their domains. To be fair, we may be at risk of AI marketing fatigue, but all the hype reflects AI’s potential to change so many industries. One popular AI approach, machine learning (ML), can be combined with data, a business process, and an enterprise workflow to create the context to build a system of intelligence. Google was an early pioneer of applying ML to a process and workflow: they collected more data on every user and applied machine learning to serve up more timely ads within the workflow of a web search. There are other evolving AI techniques like neural networks that will continue to change what we can expect from these future applications. These AI-driven systems of intelligenceTM present a huge opportunity for new startups. Successful companies here can build a virtuous cycle of data because the more data you generate and train on with your product, the better your models become and the better your product becomes. Ultimately the product becomes tailored for each customer which creates another moat, high switching costs. It is also possible to build a company that combines systems of engagementTM with intelligence or even all three layers of the enterprise stack but a system of intelligence or engagement can be the best insertion point for a startup against an incumbent. Building a system of engagement or intelligence is not a trivial task and will require deep technology, especially at speed and scale. In particular, technologies that can facilitate an intelligence layer across multiple data sources will be essential. Finally, there are some businesses that can build data network effects by using customer and market data to train and improve models that make the product better for all customers, which spins the flywheel of intelligence faster. In summary, you can build a defensible business model as a system of engagement, intelligence, or record, but with the advent of AI, intelligent applications will be the fountain of the next generation of great software companies because they will be the new moats. Thanks to Saam Motamedi, Sarah Guo, Eli Collins, Peter Bailis, Elisa Schreiber, Michael Inouye, my Greylock partner Sarah Tavel, and the rest of my partners at Greylock for their input. This post was also helped through conversations with my friends at several Greylock-backed companies including Trifacta, Cloudera, and dozens of founders and CEOs that have influenced my thinking. All good ideas are shamelessly stolen and all bad ideas are mine alone. 
Sarthak Jain
3.9K
10
https://medium.com/nanonets/how-to-easily-detect-objects-with-deep-learning-on-raspberrypi-225f29635c74?source=tag_archive---------2----------------
How to easily Detect Objects with Deep Learning on Raspberry Pi
Disclaimer: I’m building nanonets.com to help build ML with less data and no hardware. The Raspberry Pi is a neat piece of hardware that has captured the hearts of a generation, with ~15M devices sold and hackers building even cooler projects on it. Given the popularity of deep learning and the Raspberry Pi Camera, we thought it would be nice if we could detect any object using deep learning on the Pi. Now you will be able to detect a photobomber in your selfie, someone entering Harambe’s cage, where someone kept the Sriracha, or an Amazon delivery guy entering your house. 20M years of evolution have made human vision remarkably capable. The human brain has about 30% of its neurons working on processing vision (compared with 8 percent for touch and just 3 percent for hearing). Humans have two major advantages when compared with machines. One is stereoscopic vision; the second is an almost infinite supply of training data (an infant of 5 years has sampled approximately 2.7B images at 30fps). To mimic human-level performance, scientists broke down the visual perception task into four different categories. Object detection has been good enough for a variety of applications (even though image segmentation gives a much more precise result, it suffers from the complexity of creating training data: it typically takes a human annotator 12x more time to segment an image than to draw bounding boxes; this is more anecdotal and lacks a source). Also, after detecting objects, it is separately possible to segment the object from the bounding box. Object detection is of significant practical importance and has been used across a variety of industries. Some of the examples are mentioned below: Object detection can be used to answer a variety of questions. These are the broad categories: There are a variety of models/architectures that are used for object detection, each with trade-offs between speed, size, and accuracy. We picked one of the most popular ones, YOLO (You Only Look Once), and have shown how it works below in under 20 lines of code (if you ignore the comments). Note: this is pseudo code, not intended to be a working example. It has a black box, which is the CNN part of it, which is fairly standard and shown in the image below. You can read the full paper here: https://pjreddie.com/media/files/papers/yolo_1.pdf For this task, you probably need a few hundred images per object. Try to capture data as close to the data you’re going to finally make predictions on. Draw bounding boxes on the images. You can use a tool like labelImg. You will typically need a few people who will be working on annotating your images. This is a fairly intensive and time consuming task. You can read more about this at medium.com/nanonets/nanonets-how-to-use-deep-learning-when-you-have-limited-data-f68c0b512cab. You need a pretrained model so you can reduce the amount of data required to train. Without it, you might need a few 100k images to train the model. You can find a bunch of pretrained models here. The process of training a model is unnecessarily difficult; to simplify it, we created a docker image that makes it easy to train. To start training the model you can run: The docker image has a run.sh script that can be called with the following parameters. You can find more details at: To train a model you need to select the right hyperparameters. Finding the right parameters: the art of “deep learning” involves a little bit of hit and try to figure out which are the best parameters to get the highest accuracy for your model.
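Since the post’s own tuning snippets don’t survive in this text, here is a minimal, hypothetical sketch of that “hit and try” loop: a tiny grid search over learning rate and batch size. The train_and_evaluate function is a made-up stand-in for whatever training routine you actually use (for example, the author’s docker-based training); here it just returns a fake score so the sketch runs end to end.

```python
# A minimal sketch of hyperparameter search ("hit and try"), not the author's code.
# `train_and_evaluate` is a hypothetical stand-in for your real training routine.
import itertools
import random

def train_and_evaluate(learning_rate, batch_size):
    # Placeholder: in practice this would train the detector on your annotated
    # images and return a validation score (e.g. mAP). Here we fake a score.
    random.seed(hash((learning_rate, batch_size)) % 10_000)
    return random.uniform(0.5, 0.9)

learning_rates = [1e-2, 1e-3, 1e-4]
batch_sizes = [8, 16, 32]

best_score, best_params = -1.0, None
for lr, bs in itertools.product(learning_rates, batch_sizes):
    score = train_and_evaluate(lr, bs)
    print(f"lr={lr}, batch_size={bs} -> val score {score:.3f}")
    if score > best_score:
        best_score, best_params = score, (lr, bs)

print("Best params:", best_params, "score:", round(best_score, 3))
```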
There is some level of black magic associated with this, along with a little bit of theory. This is a great resource for finding the right parameters. Quantize the model (make it smaller to fit on a small device like the Raspberry Pi or a mobile phone). Small devices like mobile phones and the Raspberry Pi have very little memory and computation power. Training neural networks is done by applying many tiny nudges to the weights, and these small increments typically need floating point precision to work (though there are research efforts to use quantized representations here too). Taking a pre-trained model and running inference is very different. One of the magical qualities of deep neural networks is that they tend to cope very well with high levels of noise in their inputs. Why quantize? Neural network models can take up a lot of space on disk, with the original AlexNet being over 200 MB in float format, for example. Almost all of that size is taken up by the weights for the neural connections, since there are often many millions of these in a single model. The nodes and weights of a neural network are originally stored as 32-bit floating point numbers. The simplest motivation for quantization is to shrink file sizes by storing the min and max for each layer, and then compressing each float value to an eight-bit integer. The size of the files is reduced by 75%. Code for quantization (a rough sketch follows at the end of this section): You need the Raspberry Pi camera live and working. Then capture a new image. For instructions on how to install, check out this link. Download the model: once you’re done training the model, you can download it onto your Pi. To export the model run: Then download the model onto the Raspberry Pi. Install TensorFlow on the Raspberry Pi. Depending on your device you might need to change the installation a little. Run the model for predicting on the new image. The Raspberry Pi has constraints on both memory and compute (a version of TensorFlow compatible with the Raspberry Pi GPU is still not available). Therefore, it is important to benchmark how much time each of the models takes to make a prediction on a new image. We have removed the need to annotate images: we have expert annotators who will annotate your images for you. We automatically train the best model for you; to achieve this we run a battery of models with different parameters to select the best for your data. NanoNets is entirely in the cloud and runs without using any of your hardware, which makes it much easier to use. Since devices like the Raspberry Pi and mobile phones were not built to run complex, compute-heavy tasks, you can outsource the workload to our cloud, which does all of the compute for you. Get your free API key from http://app.nanonets.com/user/api_key. Collect the images of the object you want to detect. You can annotate them either using our web UI (https://app.nanonets.com/ObjectAnnotation/?appId=YOUR_MODEL_ID) or use an open source tool like labelImg. Once you have the dataset ready in folders, images (image files) and annotations (annotations for the image files), start uploading the dataset. Once the images have been uploaded, begin training the model. The model takes ~2 hours to train. You will get an email once the model is trained. In the meanwhile, you can check the state of the model. Once the model is trained, you can make predictions using the model.
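The quantization code referenced above didn’t survive formatting, so here is a rough numpy sketch of the idea as described (store each layer’s min and max, then map every 32-bit float weight to an eight-bit integer). It is an illustration of the technique, not the TensorFlow quantization tooling the original post relied on.

```python
import numpy as np

def quantize_layer(weights: np.ndarray):
    """Map float32 weights to uint8 using the layer's min/max (as described above)."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0
    if scale == 0.0:          # constant layer; avoid divide-by-zero
        scale = 1.0
    q = np.round((weights - w_min) / scale).astype(np.uint8)
    return q, w_min, scale

def dequantize_layer(q: np.ndarray, w_min: float, scale: float) -> np.ndarray:
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale + w_min

# Example with a fake 3x3x256 layer of weights
w = np.random.randn(3, 3, 256).astype(np.float32)
q, w_min, scale = quantize_layer(w)
w_hat = dequantize_layer(q, w_min, scale)

print("size before:", w.nbytes, "bytes, after:", q.nbytes, "bytes")   # ~75% smaller
print("max reconstruction error:", np.abs(w - w_hat).max())
```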
Gaurav Oberoi
850
12
https://hackernoon.com/exploring-deepfakes-20c9947c22d9?source=tag_archive---------3----------------
Exploring DeepFakes – Hacker Noon
In December 2017, a user named “DeepFakes” posted realistic looking explicit videos of famous celebrities on Reddit. He generated these fake videos using deep learning, the latest in AI, to insert celebrities’ faces into adult movies. In the following weeks, the internet exploded with articles about the dangers of face swapping technology: harassing innocents, propagating fake news, and hurting the credibility of video evidence forever. In this post, I explore the capabilities of this tech, describe how it works, and discuss potential applications. DeepFakes offers the ability to swap one face for another in an image or a video. Face swapping has been done in films for years, but it required skilled video editors and CGI experts to spend many hours to achieve decent results. This is so remarkable that I’m going to repeat it: anyone with hundreds of sample images, of person A and person B can feed them into an algorithm, and produce high quality face swaps — video editing skills are not needed. This also means that it can be done at scale, and given that so many of us have our faces online, it’s trivially easy to insert almost anyone into fake videos. Scary, but hopefully it’s not all doom and gloom, after all, we as a society have already come to accept that photos can easily be faked. Before dreaming up how to use this tech, I wanted to get a handle on how it works and how well it performs. I picked two popular late night TV hosts, Jimmy Fallon and John Oliver, because I can find lots of videos of them with similar poses and lighting — and also enough variation (like lip sync battles) to keep it interesting. Luckily for me, there’s an active GitHub repo that contains the original DeepFakes code and many more improvements. It’s fairly straightforward to use, but the onus is still on the user to collect and prepare training data. To make experimentation easy, I wrote a script to work directly with YouTube videos. This makes collecting and preprocessing training data painless, and converting videos one-step. Click here to view my Github repo, and see how easily I generated the videos below (I also share my model weights). The following videos were generated by training a model on about 15k images of each person’s face (30k images total). I got faces for each celebrity from 6–8 YouTube videos of 3–5 minutes each, with 20 frames per second per video, and by filtering out frames that don’t have their faces present. All of this was done automatically — all I did was specify a list of YouTube video urls. The total training time was about 72 hours on a NVIDIA GTX 1080 TI GPU. Training is primarily constrained by GPU, but downloading videos, and chopping them into frames is I/O bound and can be parallelized. Note that while I had thousands of images of each person, decent face swaps can be achieved with as few as 300 images. I went this route because I pulled face images from videos, and it’s far easier to pick a handful of videos as training data, than to find hundreds of images. The images below are low resolution to keep the size of the animated GIF file small. There’s a YouTube video below with higher resolution and sound. While not perfect, the results above are quite convincing. The key thing to remember is: the algorithm learned how to do this by seeing lots of examples, I didn’t modify the videos in any way. Magical? Let’s look under the covers. 
At the core of the Deepfakes code is an autoencoder, a deep neural network that learns how to take an input, compress it down into a small representation or encoding, and then to regenerate the original input from this encoding. Putting a bottleneck in the middle forces the network to recreate these images instead of just returning what it sees. The encodings help it capture broader patterns, hypothetically, like how and where to draw Jimmy Fallon’s eyebrow. Deepfakes goes further by having one encoder to compress a face into an encoding, and two decoders, one to turn it back into person A (Fallon), and the other to person B (Oliver). It’s easier to understand with a diagram: In the above, we’re showing how these 3 components get trained: Once training is complete, we can perform a clever trick: pass in an image of Fallon into the encoder, and then instead of trying to reconstruct Fallon from the encoding, we now pass it to Decoder B to reconstruct Oliver. It’s remarkable to think that the algorithm can learn how to generate these images just by seeing thousands of examples, but that’s exactly what has happened here, and with fairly decent results. While the results are exciting, there are clear limitations to what we can achieve with this technology today: These are tenable problems to be sure: tools can be built to collect images from online channels en masse; algorithms can help flag when there is insufficient or mismatched training data; clever optimizations or model reuse can help reduce training time; and a well engineered system can be built to make the entire process automatic. But ultimately, the question is: why? Is there enough of a business model to make doing all this worth it? Given what’ve now learned about what’s possible, let’s talk about ways in which this could be useful: Hollywood has had this technology at its fingertips, but not at this low cost. If they can create great looking videos with this technique, it will change the demand for skilled editors over time. But it could also open up new opportunities: for instance, making movies with unknown actors, and then superimposing famous celebrities onto them. This could work for YouTube videos or even news channels filmed by regular folks. In more out-there scenarios, studios could change actors based on their target market (more Schwarzenager for the Austrians), or Netflix could allow viewers to pick actors before hitting play. More likely, this tech could generate revenue for the estates of long dead actors by bringing them back to life. Some of the comment threads on DeepFakes videos on YouTube are abuzz about what a great meme generator this technology could create. Jib Jab is a company that has been selling video greeting cards with simple face swapping for years (they are hilarious). But the big opportunity is to create the next big viral hit; after all photo filters attracted masses of people to Instagram and SnapChat, and face swapping apps have done well before. Given how fun the results can be, there’s likely room for a hit viral app if you can get the costs low enough to generate these models. Imagine if Target could have a celebrity showcase their clothes for a month, just by paying her agent a fee, grabbing some existing headshots, and clicking a button. This would create a new revenue stream for celebrities, social media influencers, or anyone who happens to be in the spotlight at the moment. And it would give businesses another tool to promote brands and drive conversion. 
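To make the shared-encoder/two-decoder idea described above concrete, here is a heavily simplified Keras-style sketch. It is my own illustration, not the actual deepfakes code: the real networks are convolutional and much larger, and the layer sizes, loss, and 64x64 crop size below are assumptions.

```python
# A minimal sketch of the shared-encoder / two-decoder autoencoder described above.
# Simplified to dense layers on 64x64 face crops; sizes are illustrative only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG_SHAPE = (64, 64, 3)
ENCODING_DIM = 512  # the "bottleneck" encoding

def build_encoder():
    x_in = layers.Input(shape=IMG_SHAPE)
    x = layers.Flatten()(x_in)
    x = layers.Dense(1024, activation="relu")(x)
    z = layers.Dense(ENCODING_DIM, activation="relu")(x)   # compressed face encoding
    return Model(x_in, z, name="shared_encoder")

def build_decoder(name):
    z_in = layers.Input(shape=(ENCODING_DIM,))
    x = layers.Dense(1024, activation="relu")(z_in)
    x = layers.Dense(int(np.prod(IMG_SHAPE)), activation="sigmoid")(x)
    return Model(z_in, layers.Reshape(IMG_SHAPE)(x), name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_person_a")   # e.g. Fallon
decoder_b = build_decoder("decoder_person_b")   # e.g. Oliver

# Two autoencoders sharing one encoder: each is trained to reconstruct its own person,
# e.g. autoencoder_a.fit(faces_a, faces_a, ...) and autoencoder_b.fit(faces_b, faces_b, ...)
autoencoder_a = Model(encoder.input, decoder_a(encoder.output))
autoencoder_b = Model(encoder.input, decoder_b(encoder.output))
autoencoder_a.compile(optimizer="adam", loss="mae")
autoencoder_b.compile(optimizer="adam", loss="mae")

# The swap trick described above: encode a person-A frame, decode it with B's decoder.
frame_a = np.random.rand(1, *IMG_SHAPE).astype("float32")   # stand-in for a real face crop
fake_b = decoder_b.predict(encoder.predict(frame_a))
print(fake_b.shape)  # (1, 64, 64, 3)
```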
Returning to the advertising scenarios above: they also raise interesting legal questions about ownership of likeness, and business model questions on how to partition and price rights to use them. Imagine a world where the ads you see as you surf the web include you, your friends, and your family. While this may come across as creepy today, does it seem so far-fetched to think that this won’t be the norm in a few years? After all, we are visual creatures, and advertisers have been trying to elicit emotional responses from us for years, e.g. Coke may want to convey joy by putting your friends in a hip music video, or Allstate may tug at your fears by showing your family in an insurance ad. Or the approach may be more direct: Banana Republic could superimpose your face on a body type that matches yours, and convince you that it’s worth trying out their new leather jackets. Whoever the original Deepfakes user is, they opened a Pandora’s box of difficult questions about how fake video generation will affect society. I hope that in the same way we have come to accept that images can easily be faked, we will adapt to video uncertainty too, though not everyone shares this hope. What Deepfakes also did is shine a light on how interesting this technology is. Deep generative models like the autoencoder that Deepfakes uses allow us to create synthetic but realistic looking data (including images or videos), only by showing an algorithm lots of examples. This means that once these algorithms are turned into products, regular folks will have access to powerful tools that will make them more creative, hopefully towards positive ends. There have already been some interesting applications of this technique, like style transfer apps that make your photos look like famous paintings, but given the high volume and exciting nature of the research that is being published in this space, there’s clearly a lot more to come. I’m interested in exploring how to build value from the latest in AI research; if you have an interest in taking this technology to market to solve a real problem, please drop me a note. A few fun tidbits for the curious:
Nick Bourdakos
5K
15
https://medium.freecodecamp.org/understanding-capsule-networks-ais-alluring-new-architecture-bdb228173ddc?source=tag_archive---------4----------------
Understanding Capsule Networks — AI’s Alluring New Architecture
Convolutional neural networks have done an amazing job, but are rooted in problems. It’s time we started thinking about new solutions or improvements — and now, enter capsules. Previously, I briefly discussed how capsule networks combat some of these traditional problems. For the past for few months, I’ve been submerging myself in all things capsules. I think it’s time we all try to get a deeper understanding of how capsules actually work. In order to make it easier to follow along, I have built a visualization tool that allows you to see what is happening at each layer. This is paired with a simple implementation of the network. All of it can be found on GitHub here. This is the CapsNet architecture. Don’t worry if you don’t understand what any of it means yet. I’ll be going through it layer by layer, with as much detail as I can possibly conjure up. The input into CapsNet is the actual image supplied to the neural net. In this example the input image is 28 pixels high and 28 pixels wide. But images are actually 3 dimensions, and the 3rd dimension contains the color channels. The image in our example only has one color channel, because it’s black and white. Most images you are familiar with have 3 or 4 channels, for Red-Green-Blue and possibly an additional channel for Alpha, or transparency. Each one of these pixels is represented as a value from 0 to 255 and stored in a 28x28x1 matrix [28, 28, 1]. The brighter the pixel, the larger the value. The first part of CapsNet is a traditional convolutional layer. What is a convolutional layer, how does it work, and what is its purpose? The goal is to extract some extremely basic features from the input image, like edges or curves. How can we do this? Let’s think about an edge: If we look at a few points on the image, we can start to pick up a pattern. Focus on the colors to the left and right of the point we are looking at: You might notice that they have a larger difference if the point is an edge: What if we went through each pixel in the image and replaced its value with the value of the difference of the pixels to the left and right of it? In theory, the image should become all black except for the edges. We could do this by looping through every pixel in the image: But this isn’t very efficient. We can instead use something called a “convolution.” Technically speaking, it’s a “cross-correlation,” but everyone likes to call them convolutions. A convolution is essentially doing the same thing as our loop, but it takes advantage of matrix math. A convolution is done by lining up a small “window” in the corner of the image that only lets us see the pixels in that area. We then slide the window across all the pixels in the image, multiplying each pixel by a set of weights and then adding up all the values that are in that window. This window is a matrix of weights, called a “kernel.” We only care about 2 pixels, but when we wrap the window around them it will encapsulate the pixel between them. Can you think of a set of weights that we can multiply these pixels by so that their sum adds up to the value we are looking for? Spoilers below! We can do something like this: With these weights, our kernel will look like this: However, kernels are generally square — so we can pad it with more zeros to look like this: Here’s a nice gif to see a convolution in action: Note: The dimension of the output is reduced by the size of the kernel plus 1. 
For example:(7 — 3) + 1 = 5 (more on this in the next section) Here’s what the original image looks like after doing a convolution with the kernel we crafted: You might notice that a couple edges are missing. Specifically, the horizontal ones. In order to highlight those, we would need another kernel that looks at pixels above and below. Like this: Also, both of these kernels won’t work well with edges of other angles or edges that are blurred. For that reason, we use many kernels (in our CapsNet implementation, we use 256 kernels). And the kernels are normally larger to allow for more wiggle room (our kernels will be 9x9). This is what one of the kernels looked like after training the model. It’s not very obvious, but this is just a larger version of our edge detector that is more robust and only finds edges that go from bright to dark. Note: I’ve rounded the values because they are quite large, for example 0.01783941 Luckily, we don’t have to hand-pick this collection of kernels. That is what training does. The kernels all start off empty (or in a random state) and keep getting tweaked in the direction that makes the output closer to what we want. This is what the 256 kernels ended up looking like (I colored them as pixels so it’s easier to digest). The more negative the numbers, the bluer they are. 0 is green and positive is yellow: After we filter the image with all of these kernels, we end up with a fat stack of 256 output images. ReLU (formally known as Rectified Linear Unit) may sound complicated, but it’s actually quite simple. ReLU is an activation function that takes in a value. If it’s negative it becomes zero, and if it’s positive it stays the same. In code: And as a graph: We apply this function to all of the outputs of our convolutions. Why do we do this? If we don’t apply some sort of activation function to the output of our layers, then the entire neural net could be described as a linear function. This would mean that all this stuff we are doing is kind of pointless. Adding a non-linearity allows us to describe all kinds of functions. There are many different types of function we could apply, but ReLU is the most popular because it’s very cheap to perform. Here are the outputs of ReLU Conv1 layer: The PrimaryCaps layer starts off as a normal convolution layer, but this time we are convolving over the stack of 256 outputs from the previous convolutions. So instead of having a 9x9 kernel, we have a 9x9x256 kernel. So what exactly are we looking for? In the first layer of convolutions we were looking for simple edges and curves. Now we are looking for slightly more complex shapes from the edges we found earlier. This time our “stride” is 2. That means instead of moving 1 pixel at a time, we take steps of 2. A larger stride is chosen so that we can reduce the size of our input more rapidly: Note: The dimension of the output would normally be 12, but we divide it by 2, because of the stride. For example: ((20 — 9) + 1) / 2 = 6 We will convolve over the outputs another 256 times. So we will end up with a stack of 256 6x6 outputs. But this time we aren’t satisfied with just some lousy plain old numbers. We’re going to cut the stack up into 32 decks with 8 cards each deck. We can call this deck a “capsule layer.” Each capsule layer has 36 “capsules.” If you’re keeping up (and are a math wiz), that means each capsule has an array of 8 values. This is what we can call a “vector.” Here’s what I’m talking about: These “capsules” are our new pixel. 
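Before digging into what these capsule vectors buy us, here is a small numpy sketch of the earlier steps whose in-line code and kernel graphics don’t survive in this text: the left/right-difference kernel padded out to 3x3 (the exact values are my assumption from the description), the sliding-window cross-correlation, and the ReLU activation. The 7x7 input reproduces the (7 - 3) + 1 = 5 output size noted above.

```python
import numpy as np

# The "difference of the pixels to the left and right" as a 3x3 kernel,
# padded with zeros so it is square (values assumed from the description above).
edge_kernel = np.array([[0, 0, 0],
                        [-1, 0, 1],
                        [0, 0, 0]], dtype=np.float32)

def cross_correlate(image, kernel):
    """Slide the kernel window over the image; output shrinks to (input - kernel) + 1."""
    k = kernel.shape[0]
    out_h, out_w = image.shape[0] - k + 1, image.shape[1] - k + 1
    out = np.zeros((out_h, out_w), dtype=np.float32)
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

def relu(x):
    """Negative values become zero, positive values stay the same."""
    return np.maximum(x, 0)

# A tiny 7x7 image with a bright vertical stripe; the kernel highlights one of its edges.
image = np.zeros((7, 7), dtype=np.float32)
image[:, 3] = 255.0

feature_map = relu(cross_correlate(image, edge_kernel))
print(feature_map.shape)   # (5, 5), i.e. (7 - 3) + 1 = 5
print(feature_map)
```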
With a single pixel, we could only store the confidence of whether or not we found an edge in that spot. The higher the number, the higher the confidence. With a capsule we can store 8 values per location! That gives us the opportunity to store more information than just whether or not we found a shape in that spot. But what other kinds of information would we want to store? When looking at the shape below, what can you tell me about it? If you had to tell someone else how to redraw it, and they couldn’t look at it, what would you say? This image is extremely basic, so there are only a few details we need to describe the shape: We can call these “instantiation parameters.” With more complex images we will end up needing more details. They can include pose (position, size, orientation), deformation, velocity, albedo, hue, texture, and so on. You might remember that when we made a kernel for edge detection, it only worked on a specific angle. We needed a kernel for each angle. We could get away with it when dealing with edges because there are very few ways to describe an edge. Once we get up to the level of shapes, we don’t want to have a kernel for every angle of rectangles, ovals, triangles, and so on. It would get unwieldy, and would become even worse when dealing with more complicated shapes that have 3 dimensional rotations and features like lighting. That’s one of the reasons why traditional neural nets don’t handle unseen rotations very well: As we go from edges to shapes and from shapes to objects, it would be nice if we had more room to store this extra useful information. Here is a simplified comparison of 2 capsule layers (one for rectangles and the other for triangles) vs 2 traditional pixel outputs: Like a traditional 2D or 3D vector, this vector has an angle and a length. The length describes the probability, and the angle describes the instantiation parameters. In the example above, the angle actually matches the angle of the shape, but that’s not normally the case. In reality it’s not really feasible (or at least easy) to visualize the vectors like above, because these vectors are 8 dimensional. Since we have all this extra information in a capsule, the idea is that we should be able to recreate the image from them. Sounds great, but how do we coax the network into actually wanting to learn these things? When training a traditional CNN, we only care about whether or not the model predicts the right classification. With a capsule network, we have something called a “reconstruction.” A reconstruction takes the vector we created and tries to recreate the original input image, given only this vector. We then grade the model based on how close the reconstruction matches the original image. I will go into more detail on this in the coming sections, but here is a simple example: After we have our capsules, we are going to perform another non-linearity function on it (like ReLU), but this time the equation is a bit more involved. The function scales the values of the vector so that only the length of the vector changes, not the angle. This way we can make the vector between 0 and 1 so it’s an actual probability. This is what lengths of the capsule vectors look like after squashing. At this point it’s almost impossible to guess what each capsule is looking for. The next step is to decide what information to send to the next level. 
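Before moving on to that, note that the squashing function itself isn’t written out above; in the CapsNet paper it is squash(s) = (|s|^2 / (1 + |s|^2)) * (s / |s|), which shrinks each vector’s length into [0, 1) without changing its direction. A minimal numpy version, applied to the 1,152 eight-dimensional primary capsules described above:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Scale vector s so its length lands in [0, 1) while its direction is unchanged."""
    squared_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    norm = np.sqrt(squared_norm + eps)
    return (squared_norm / (1.0 + squared_norm)) * (s / norm)

# 1152 primary capsules (32 capsule layers x 6 x 6), each an 8-dimensional vector
capsules = np.random.randn(1152, 8)
squashed = squash(capsules)

lengths = np.linalg.norm(squashed, axis=-1)
print(lengths.min(), lengths.max())   # all lengths now fall between 0 and 1
```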
In traditional networks, we would probably do something like “max pooling.” Max pooling is a way to reduce size by only passing on the highest activated pixel in the region to the next layer. However, with capsule networks we are going to do something called routing by agreement. The best example of this is the boat and house example illustrated by Aurélien Géron in this excellent video. Each capsule tries to predict the next layer’s activations based on itself: Looking at these predictions, which object would you choose to pass on to the next layer (not knowing the input)? Probably the boat, right? both the rectangle capsule and the triangle capsule agree on what the boat would look like. But they don’t agree on how the house would look, so it’s not very likely that the object is a house. With routing by agreement, we only pass on the useful information and throw away the data that would just add noise to the results. This gives us a much smarter selection than just choosing the largest number, like in max pooling. With traditional networks, misplaced features don’t faze it: With capsule networks, the features wouldn’t agree with each other: Hopefully, that works intuitively. However, how does the math work? We have 10 different digit classes that we are predicting: Note: In the boat and house example we were predicting 2 objects, but now we are predicting 10. Unlike in the boat and the house example, the predictions aren’t actually images. Instead, we are trying to predict the vector that describes the image. The capsule’s predictions for each class are made by multiplying it’s vector by a matrix of weights for each class that we are trying to predict. Remember that we have 32 capsule layers, and each capsule layer has 36 capsules. That means we have a total of 1,152 capsules. You will end up with a list of 11,520 predictions. Each weight is actually a 16x8 matrix, so each prediction is a matrix multiplication between the capsule vector and this weight matrix: As you can see, our prediction is a 16 degree vector. Where does the 16 come from? It’s an arbitrary choice, just like 8 was for our original capsules. But it should be noted that we want to increase the number of dimensions of our capsules the deeper we get into the network. This should make sense intuitively, because the deeper we go the more complex our features become and the more parameters we need to recreate them. For example, you will need more information to describe an entire face than just a person’s eye. The next step is to figure out which of these 11,520 predictions agree with each other the most. It can be difficult to visualize a solution to this when we think in terms of high dimensional vectors. For the sake of sanity, let’s start off by pretending our vectors are just points in 2 dimensional space: We start off by calculating the mean of all of the points. Each point starts out with equal importance: We then can measure the distance between every point from the mean. The further the point is away from the mean, the less important that point becomes: We then recalculate the mean, this time taking into account the point’s importance: We end up going through this cycle 3 times: As you can see, as we go through this cycle, the points that don’t agree with the others start to disappear. The highest agreeing points end up getting passed on to the next layer with the highest activations. After agreement, we end up with ten 16 dimensional vectors, one vector for each digit. This matrix is our final prediction. 
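For readers who want the agreement cycle above in code, here is a compact numpy sketch of the dynamic routing procedure from the paper, which the mean-and-distance intuition above approximates. It is simplified: there is no batch dimension, and u_hat stands in for the 11,520 predictions already produced by the 16x8 weight matrices. The three iterations mirror the three cycles described above.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * (s / np.sqrt(sq + eps))

def route(u_hat, iterations=3):
    """Dynamic routing over u_hat of shape (1152, 10, 16):
    each of 1152 lower capsules predicts a 16-D vector for each of 10 digit capsules."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                              # start with equal importance
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over digit capsules
        s = np.einsum("ij,ijk->jk", c, u_hat)                 # weighted sum of predictions
        v = squash(s)                                         # candidate digit capsules, (10, 16)
        b = b + np.einsum("ijk,jk->ij", u_hat, v)             # agreement: dot(prediction, output)
    return v

u_hat = np.random.randn(1152, 10, 16) * 0.1
digit_caps = route(u_hat)
print(np.linalg.norm(digit_caps, axis=-1))   # 10 vector lengths = per-digit confidence
```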
The length of the vector is the confidence of the digit being found — the longer the better. The vector can also be used to generate a reconstruction of the input image. This is what the lengths of the vectors look like with the input of 4: The fifth block is the brightest, which means high confidence. Remember that 0 is the first class, meaning 4 is our predicted class. The reconstruction portion of the implementation isn’t very interesting. It’s just a few fully connected layers. But the reconstruction itself is very cool and fun to play around with. If we reconstruct our 4 input from its vector, this is what we get: If we manipulate the sliders (the vector), we can see how each dimension affects the 4: I recommend cloning the visualization repo to play around with different inputs and see how the sliders affect the reconstruction: Run the tool: Then point your browser to: http://localhost:5000 I think that the reconstructions from capsule networks are stunning. Even though the current model is only trained on simple digits, it makes my mind run with the possibilities that a matured architecture trained on a larger dataset could achieve. I’m very curious to see how manipulating the reconstruction vectors of a more complicated image would affect it. For that reason, my next project is to get capsule networks to work with the CIFAR and smallNORB datasets. Thanks for reading! If you have any questions, feel free to reach out at bourdakos1@gmail.com, connect with me on LinkedIn, or follow me on Medium. If you found this article helpful, it would mean a lot if you gave it some applause👏 and shared to help others find it! And feel free to leave a comment below.
Mark Johnson
3.7K
9
https://hackernoon.com/how-i-shipped-six-side-projects-in-2017-3dde6c77adbb?source=tag_archive---------5----------------
How I Launched Six Side Projects in 2017 – Hacker Noon
Last year I set a goal to learn something new each month and ended out launching six new projects which I’ll recap along with what I learned below. Looking back, it seems a little crazy to me that I managed to launch as much as I did while running a (more than) full time business, spending quality time with my family (I have two kids and a very patient wife), teaching as an adjunct professor, and consulting on the side. It’s easy to think that not having enough time is what’s holding you back from launching your side projects. “If there were only more time” is the general excuse we give ourselves and we look for fancy apps or task management techniques to try and free up more space in our schedule. However, one of the main things I’ve learned over the last year, is that time is not the primary issue. You have enough time; what you need is motivation. The good news is that motivation can be “hacked.” I’ve learned a few ways to hack my motivation in 2017 and I want to share those with you. You simply can’t stay motivated about something you don’t care about so choose something that you’re excited to work on. When you feel inspiration strike around that idea, don’t let it pass, use it. Even if that means jotting down some quick notes while you’re in a meeting at work. It’s important to grab ahold of those moments of inspiration to stay hungry and curious around your work. For me, that meant shipping something every month. I tend to blow things up once I start working on them so this 30 day constraint really helped me rein that tendency in and spend my motivation efficiently. It also gives you a chance to try out new ideas if one month’s idea turns out to be a dud. At least you didn’t waste a whole year on it. This is the big one. You will run out of “motivation fuel” towards the end of your project. (That last 10% is killer.) The only thing that will get you through a motivation slump is knowing there are people on the other side waiting to see what you built. Another benefit of sharing your work is that it gives you a chance to get some supportive feedback for what you’re doing. The co-working space I work out of, Atlas Local, has an office-wide event on the first Friday of every month. I used that event to present my project from the previous month and was always encouraged and supported by the generous folks who were there. You’ll be surprised by how much support you’ll get for just stepping out there and sharing something you made. Perhaps the most surprising part of this experiment for me was that, far from being burned out at the end, I feel even more motivated to ship more work in 2018. I’d encourage you to hack your motivation in the new year and ship some of those ideas you’ve had lying around for a while. I’d love to hear about it if you try. If you’re interested in the details of what I built in 2017, read on! Visually compare the personality types of your group’s strongest and weakest traits I’ve been interested in the Myers–Briggs Type Indicator (MBTI) for a while now. While I don’t see it as prescriptive or even all that scientific, it has been a helpful framework for empathizing with people who are different than I. What many personality nerds don’t realize is that the MBTI system is based on something called Cognitive Functions. These functions were created by the father of modern phycology, Carl Jung, back in the 1920s. I wanted to dive a little deeper and learn more about that. 
At the same time, I was watching HBO’s West World and saw this screen: While I love these kind of Sci-Fi UIs, which is what immediately caught my attention, I thought, what if I could build a “host profile” of anyone based on their MBTI traits? Why not? To prepare for this, I read the “MBTI Bible”, Gifts Differing by Myers and Briggs and started hacking on building out a system that could generate a radar chart based on the cognitive functions underlying the MBTI system. In the end, I pivoted away from the West World UI a bit since I (and other beta testers) found a lot more utility in the ability to overlay multiple people on the radar chart to get a sense of chemistry amongst a group of people. The results are really interesting if I do say so myself. Try entering you team’s personality types or you and your spouse: The easiest way to create signup sheets online for anything I’ve worked on Sheetcake for a few years now on the side. It has a very small set of loyal users (most of which know me or someone close to me). Some fun facts about SheetCake: Sheetcake actually works really well for certain types of things (like those Zero Day signups) so I wanted to create a landing page for it that marketed some of the benefits. I started from a template on this one but here’s where it landed. Ask my extroverted assistant bot questions about me Early in the year, chat bots were all the rage. While I’ve never been optimistic that chat bots will go anywhere on their own, the conversational A.I. aspect of them was intriguing to me and I wanted to learn more about it. I’m an introvert and generally pretty bad at sharing anything about myself so I thought it might be fun to create an extroverted bot that could answer simple questions about me. Building Convincing A.I. with Goal Oriented Action Planning After coming across this article I was super intrigued by Goal Oriented Action Planning (GOAP) described in the context of a game with some nostalgia for me, F.E.A.R. Having worked on several games with rudimentary A.I. in the past, I’d never come across this technique. I remember thinking that F.E.A.R’s A.I. was particularly impressive and lifelike. After researching a bit more, the really compelling part about this methodology was not so much how convincing the results were, but how simple and elegant the solution was (especially compared to a more standard A.I. approach like Finite State Machines). So for April’s project I made a JavaScript library to explore GOAP. A basic implementation turned out to be surprisingly simple (only 58 lines of code!). Sign accountability contracts for your goals. This is the month I started on the Whole 30 diet. I’d become complacent about my eating habits and it definitely was effecting my energy levels. Whole30 worked really well for me (I lost 18 pounds during the diet and a total of 35 more in the months following). Most of all, it really evened out my energy levels during the day and I felt much more motivated and focused. Seeing the parallels between public commitment and motivation, I decided to explore the idea of “goal contracts” for May’s project. Create unique map posters for your favorite places and memories This is where everything pivoted. My goal for June was to make a product that people actually wanted to buy. One of my biggest weaknesses is sales and marketing so I wanted to learn more about that by building a product I could practice with. 
I’ve always been interested in maps and generative art so creating a tool where you can create and purchase posters of your favorite locations was an intriguing idea. This project was way too ambitious to complete in one month on the side so I decided to go all in on TiltMaps for the rest of the year and work on a different angle of the product every month until launch. I found that chunking the various parts of a larger project into a month-long project was really helpful to actually get this done. June-July: The Secret SauceTM️ Most of the first month was doing R&D to figure out if generating high-res, maps in 3D space was even possible at all. Generating a 300dpi map of any location in the world at a 3D angle is not something that any API or platform I found supported out of the box so I had to invent my own way of doing it. This took most of the month to figure out but was surprisingly simple once I found the answer. After that, I built a rudimentary editor to start creating actual posters and ordered a couple of test prints. August-September: The Proof of Concept (MVP) The next few months I built out a more consumer MVP of the product. The design wasn’t great but I got it to the point where everything worked and I could start user testing the poster creation and printing process. October-November: Branding & Marketing The next couple of months were focused on getting this ready to launch. While the editor was basically done, I had no home page and the marketing side of the project was nowhere close. I ended up selling a few posters this month before launch by presenting TiltMaps at Zero Day and a conference I attended. This was super motivating as it was the first time I’ve ever sold anything from a side project. December: Public Launch The launch on Product Hunt went better than I expected. I was hoping for 10 sales or so but ended up getting 37 and am still seeing sales coming in. It feels good to make something people want to buy and it serves as a great testing ground for trying out different ad and sales strategies that could come in useful at my day job. I plan to continue working on TiltMaps in 2018 and hopefully get some decent “fun money” revenue from it. And that’s a wrap. Thanks for reading the whole way to the bottom 😃 Have any thoughts or feedback? I’d love to hear it. Comment below or hit me up on Twitter. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Web designer, developer, and teacher. Working at the cross-section of learning and technology. Co-Founder, CTO of Pathwright. Launcher of side projects. how hackers start their afternoons.
Justin Lee
8.3K
11
https://medium.com/swlh/chatbots-were-the-next-big-thing-what-happened-5fc49dd6fa61?source=tag_archive---------6----------------
Chatbots were the next big thing: what happened? – The Startup – Medium
Oh, how the headlines blared: Chatbots were The Next Big Thing. Our hopes were sky high. Bright-eyed and bushy-tailed, the industry was ripe for a new era of innovation: it was time to start socializing with machines. And why wouldn’t they be? All the road signs pointed towards insane success. At the Mobile World Congress 2017, chatbots were the main headliners. The conference organizers cited an ‘overwhelming acceptance at the event of the inevitable shift of focus for brands and corporates to chatbots’. In fact, the only significant question around chatbots was who would monopolize the field, not whether chatbots would take off in the first place: One year on, we have an answer to that question. No. Because there isn’t even an ecosystem for a platform to dominate. Chatbots weren’t the first technological development to be talked up in grandiose terms and then slump spectacularly. The age-old hype cycle unfolded in familiar fashion... Expectations built, built, and then..... It all kind of fizzled out. The predicted paradim shift didn’t materialize. And apps are, tellingly, still alive and well. We look back at our breathless optimism and turn to each other, slightly baffled: “is that it? THAT was the chatbot revolution we were promised?” Digit’s Ethan Bloch sums up the general consensus: According to Dave Feldman, Vice President of Product Design at Heap, chatbots didn’t just take on one difficult problem and fail: they took on several and failed all of them. Bots can interface with users in different ways. The big divide is text vs. speech. In the beginning (of computer interfaces) was the (written) word. Users had to type commands manually into a machine to get anything done. Then, graphical user interfaces (GUIs) came along and saved the day. We became entranced by windows, mouse clicks, icons. And hey, we eventually got color, too! Meanwhile, a bunch of research scientists were busily developing natural language (NL) interfaces to databases, instead of having to learn an arcane database query language. Another bunch of scientists were developing speech-processing software so that you could just speak to your computer, rather than having to type. This turned out to be a whole lot more difficult than anyone originally realised: The next item on the agenda was holding a two-way dialog with a machine. Here’s an example dialog (dating back to the 1990s) with VCR setup system: Pretty cool, right? The system takes turns in collaborative way, and does a smart job of figuring out what the user wants. It was carefully crafted to deal with conversations involving VCRs, and could only operate within strict limitations. Modern day bots, whether they use typed or spoken input, have to face all these challenges, but also work in an efficient and scalable way on a variety of platforms. Basically, we’re still trying to achieve the same innovations we were 30 years ago. Here’s where I think we’re going wrong: An oversized assumption has been that apps are ‘over’, and would be replaced by bots. By pitting two such disparate concepts against one another (instead of seeing them as separate entities designed to serve different purposes) we discouraged bot development. You might remember a similar war cry when apps first came onto the scene ten years ago: but do you remember when apps replaced the internet? It’s said that a new product or service needs to be two of the following: better, cheaper, or faster. Are chatbots cheaper or faster than apps? No — not yet, at least. 
Whether they’re ‘better’ is subjective, but I think it’s fair to say that today’s best bot isn’t comparable to today’s best app. Plus, nobody thinks that using Lyft is too complicated, or that it’s too hard to order food or buy a dress on an app. What is too complicated is trying to complete these tasks with a bot — and having the bot fail. A great bot can be about as useful as an average app. When it comes to rich, sophisticated, multi-layered apps, there’s no competition. That’s because machines let us access vast and complex information systems, and the early graphical information systems were a revolutionary leap forward in helping us locate those systems. Modern-day apps benefit from decades of research and experimentation. Why would we throw this away? But, if we swap the word ‘replace’ with ‘extend’, things get much more interesting. Today’s most successful bot experiences take a hybrid approach, incorporating chat into a broader strategy that encompasses more traditional elements. The next wave will be multimodal apps, where you can say what you want (like with Siri) and get back information as a map, text, or even a spoken response. Another problematic aspect of the sweeping nature of hype is that it tends to bypass essential questions like these. For plenty of companies, bots just aren’t the right solution. The past two years are littered with cases of bots being blindly applied to problems where they aren’t needed. Building a bot for the sake of it, letting it loose and hoping for the best will never end well: The vast majority of bots are built using decision-tree logic, where the bot’s canned response relies on spotting specific keywords in the user input. The advantage of this approach is that it’s pretty easy to list all the cases that they are designed to cover. And that’s precisely their disadvantage, too. That’s because these bots are purely a reflection of the capability, fastidiousness and patience of the person who created them; and how many user needs and inputs they were able to anticipate. Problems arise when life refuses to fit into those boxes. According to recent reports, 70% of the 100,000+ bots on Facebook Messenger are failing to fulfil simple user requests. This is partly a result of developers failing to narrow their bot down to one strong area of focus. When we were building GrowthBot, we decided to make it specific to sales and marketers: not an ‘all-rounder’, despite the temptation to get overexcited about potential capabilties. Remember: a bot that does ONE thing well is infinitely more helpful than a bot that does multiple things poorly. A competent developer can build a basic bot in minutes — but one that can hold a conversation? That’s another story. Despite the constant hype around AI, we’re still a long way from achieving anything remotely human-like. In an ideal world, the technology known as NLP (natural language processing) should allow a chatbot to understand the messages it receives. But NLP is only just emerging from research labs and is very much in its infancy. Some platforms provide a bit of NLP, but even the best is at toddler-level capacity (for example, think about Siri understanding your words, but not their meaning.) As Matt Asay outlines, this results in another issue: failure to capture the attention and creativity of developers. And conversations are complex. They’re not linear. Topics spin around each other, take random turns, restart or abruptly finish. 
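As a toy illustration (entirely hypothetical, not any real product’s code), this is roughly what that decision-tree, keyword-spotting logic looks like, and why it breaks the moment a user phrases the same intent with words the developer didn’t anticipate:

```python
# A toy keyword-spotting bot: canned responses keyed on anticipated keywords.
RESPONSES = {
    "price": "Our plans start at $50/month.",
    "refund": "You can request a refund within 30 days.",
    "hours": "Support is available 9am-5pm EST.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, canned_response in RESPONSES.items():
        if keyword in text:
            return canned_response
    return "Sorry, I didn't understand that."   # everything else falls through

print(reply("What is the price?"))          # works: keyword matched
print(reply("How much does it cost?"))      # fails: same intent, no anticipated keyword
```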
Today’s rule-based dialogue systems are too brittle to deal with this kind of unpredictability, and statistical approaches using machine learning are just as limited. The level of AI required for human-like conversation just isn’t available yet. And in the meantime, there are few high-quality examples of trailblazing bots to lead the way. As Dave Feldman remarked: Once upon a time, the only way to interact with computers was by typing arcane commands to the terminal. Visual interfaces using windows, icons or a mouse were a revolution in how we manipulate information There’s a reasons computing moved from text-based to graphical user interfaces (GUIs). On the input side, it’s easier and faster to click than it is to type. Tapping or selecting is obviously preferable to typing out a whole sentence, even with predictive (often error-prone ) text. On the output side, the old adage that a picture is worth a thousand words is usually true. We love optical displays of information because we are highly visual creatures. It’s no accident that kids love touch screens. The pioneers who dreamt up graphical interface were inspired by cognitive psychology, the study of how the brain deals with communication. Conversational UIs are meant to replicate the way humans prefer to communicate, but they end up requiring extra cognitive effort. Essentially, we’re swapping something simple for a more-complex alternative. Sure, there are some concepts that we can only express using language (“show me all the ways of getting to a museum that give me 2000 steps but don’t take longer than 35 minutes”), but most tasks can be carried out more efficiently and intuitively with GUIs than with a conversational UI. Aiming for a human dimension in business interactions makes sense. If there’s one thing that’s broken about sales and marketing, it’s the lack of humanity: brands hide behind ticket numbers, feedback forms, do-not-reply-emails, automated responses and gated ‘contact us’ forms. Facebook’s goal is that their bots should pass the so-called Turing Test, meaning you can’t tell whether you are talking to a bot or a human. But a bot isn’t the same as a human. It never will be. A conversation encompasses so much more than just text. Humans can read between the lines, leverage contextual information and understand double layers like sarcasm. Bots quickly forget what they’re talking about, meaning it’s a bit like conversing with someone who has little or no short-term memory. As HubSpot team pinpointed: People aren’t easily fooled, and pretending a bot is a human is guaranteed to diminish returns (not to mention the fact that you’re lying to your users). And even those rare bots that are powered by state-of-the-art NLP, and excel at processing and producing content, will fall short in comparison. And here’s the other thing. Conversational UIs are built to replicate the way humans prefer to communicate — with other humans. But is that how humans prefer to interact with machines? Not necessarily. At the end of the day, no amount of witty quips or human-like mannerisms will save a bot from conversational failure. In a way, those early-adopters weren’t entirely wrong. People are yelling at Google Home to play their favorite song, ordering pizza from the Domino’s bot and getting makeup tips from Sephora. But in terms of consumer response and developer involvement, chatbots haven’t lived up to the hype generated circa 2015/16. Not even close. Computers are good at being computers. 
Searching for data, crunching numbers, analyzing opinions and condensing that information. Computers aren’t good at understanding human emotion. The state of NLP means they still don’t ‘get’ what we’re asking them, never mind how we feel. That’s why it’s still impossible to imagine effective customer support, sales or marketing without the essential human touch: empathy and emotional intelligence. For now, bots can continue to help us with automated, repetitive, low-level tasks and queries; as cogs in a larger, more complex system. And we did them, and ourselves, a disservice by expecting so much, so soon. But that’s not the whole story. Yes, our industry massively overestimated the initial impact chatbots would have. Emphasis on initial. As Bill Gates once said: The hype is over. And that’s a good thing. Now, we can start examining the middle-grounded grey area, instead of the hyper-inflated, frantic black and white zone. I believe we’re at the very beginning of explosive growth. This sense of anti-climax is completely normal for transformational technology. Messaging will continue to gain traction. Chatbots aren’t going away. NLP and AI are becoming more sophisticated every day. Developers, apps and platforms will continue to experiment with, and heavily invest in, conversational marketing. And I can’t wait to see what happens next.
Leigh Alexander
2.7K
31
https://medium.com/@leighalexander/the-future-we-wanted-fd41e3e14512?source=tag_archive---------7----------------
The Future We Wanted – Leigh Alexander – Medium
I wonder a lot about how Jane ended up. When we were small we did everything together. “She’s just like you,” Aunt Cissy kept insisting, and Jane was, in that her birth parents were, for the most part, out of the picture. We also both liked fantasy books and hated afterschool, but honestly, that’s where the similarities ended. Jane was a weirdo. “In what way was she weird,” Dr. Carla asked me, clasping her hands. “My uncle said Jane couldn’t tell fantasy from reality,” I said after a pause. “But your uncle still performed care for Jane,” someone in the circle said. A group member in leggings, let’s call her Ruby, said loudly, “People said that about me when I was little, too. It’s a common avenue leveraged to oppress girls of imagination!” Luckily Dr. Carla held up her hand then, gently saying, “let’s keep that thought, as we bring group to an end.” “I could tell fantasy from reality,” Ruby was still insisting as the twelve or so of us trailed out of the park, tapping our mobilepays against the turnstile. Ridgewood Park took only nine cents from each of us, unlike Switchmond Field, which took 17 cents. The turnstile displays blinked DO YOUR PART and THANK YOU, alternately. “I could tell,” Ruby said, shouldering abruptly behind me and nearly shouting into my ear. “I just didn’t want to.” After group, I took the bus promptly home (mobilepay: one dollar and ninety cents) and speed-walked to the apartment. Rex and Ellis would have been in front of screens the whole time. When I came in the house, there was a musty smell of microwaved cheese. El was missing pants, waving around two grotesque wireless fiddlesticks of some kind. The noise was all coming from Rex and also Brian, who were sort of leapfrogging all over some vinyl electronic pad that was saying things like Vanquished! And Blue Moves Next! The way Brian called, “He-eyyyy,” was confidently bright, as if dipped in the golden morning at home I’d just missed. Despite myself I was also calling he-eyyyy when El slammed into me and Rexy started talking immediately in a language I barely understood, about blue units and combat lanes, snippets from some universe into which they all dived joyfully whenever I turned my back. “How was women’s group?” Brian asked, continuing to grin. He looked so happy to see me, so proud of the time he spent delighting the children. It was unfair of me to be resentful. “It was nice,” I said, picking up Rex’s socks and Brian’s socks and putting them in my pocket, picking up a piece of colorful plastic, part of one of El’s playsets, and reuniting it with another part. Rex continued the noise; they wanted to show me something to do with the game and beating Dad and I said I promise I will in a minute, my coat is still on. Brian gave me a kiss. He didn’t really know what we talked about in group, which is how it was supposed to be. “I talked about Jane a little bit. I’m wondering if I should try to look her up and see how she’s doing.” “Was Jane the one who was your roommate when we met?” “No, the one I grew up with in Jamaica Plain. My aunt and uncle basically took care of her. I told you, she got all the crystal animals?” “Oh,” Brian said, picking a bit of egg off the countertop with his fingertips and gamely eating it. “The crazy one.” “You shouldn’t call women crazy,” I said. Rex had gone back to trying to play with the mat and was shoving El, who was trying to play too. They were only paying attention to the game, which was chiming, New Challenger Alert! 
“Yes, the crazy one.” “It’s nice you talked about her,” Brian said. “You know your coat is still on?” I knew, god. “Hey Polly? You know what might be good? If we got one of those Augusta virtual assistant things. Even just for weekends,” Brian said, taking my coat off me. I shrugged it angrily in his direction, since we’d already discussed it and he knew how I felt about virtual assistants. “The voice tech has really evolved,” Brian went on. “And thinking of it as sexist is a dated framework, I swear. It’s gotten really progressive. I just think we could be a little bit happier around here. I think you could be happier. You’ve been out all day and you’re still so tense. Your mad face is still on. A watercolor version, sure, but a mad face.” A watercolor version. Brian was an advertising copywriter like me, we met at a conference, and sometimes his way with words was really enviable. I almost didn’t even notice out all day, and then my own voice came out weakened. Well-played, Brian. “It’s, like, one o’clock,” I creaked. “That’s what I mean,” Brian replied immediately, “the days feel longer to you, probably because you have so much to do. One of the clients had one in the office and I just thought it would be convenient for you. You can set it when to run the dishwasher, do the alarms, even the whole smart closet thing, the smart kitchen, we could use it. Paying rent on a smart flat and not having a virtual assistant installed is like buying a swimming pool and never swimming in it.” “I just want a quick bath,” I said. The sound of running water drowned out the din of electronics in the house. Brian was probably right. Were we wasting money by not spending more money? Privately I resolved to have a long bath, not a quick one. That would show them. I sat imagining what I would yell if Rex knocked on the door, or if Brian brought up my “M.I.A. time,” even though really, that had only happened once. I thought about this for a long while until I wound myself up, lying rigidly in the bath and staring furiously into my belly button. Jane’s crystal animals were presents from my uncle and aunt. When we were in the first grade they took us on a road trip to Maine, driving alongside strips of silvery, stony sea and stopping in small, strange towns. Inside an ash-colored colonial house, we found a fragrant souvenir store, selling wooden lighthouse nameplates and shell art and a whole mirrored display case full of animals made of cut crystal. We were drawn to the crystal animals by a heavy sense of fate, because I think Aunt Cissy was trying to buy an umbrella and the shop had an old, slow credit card machine, or her card kept getting declined, something adult was going on — my favorites were the unicorn and butterfly and Jane loved the elephant and dolphin. It was as if we were looking upon the crown jewels of some fantastical city. “Each one leads to a world,” Jane said, peering confidently into the display case, where light rainbowed in the facets of the crystal, which in turn were reflected in the mirrors. It was her ‘performing magic’ face. Sometimes she would stare intently at something and attempt telekinesis, but this time she just moved her pointy face marginally closer to the glass case, her breath fogging it. “Careful,” I said, not wanting her to get us in trouble in the fussy store. “This is how you enter the crystal world,” she retorted, speaking softly. “You can do scrying this way. 
You can see the future.” A moment later, Jane whispered, “I’m in.” I moved closer to the glass, breathed on it and said, “me too.” The longer I stared, unblinking, the more the glittering shapes abstracted in the haze. Light poured along the mirrored walls of the display like molten gold, and my eyes welled and stung. I painfully felt the desire to own a sparkling crystal animal, the aching way that only children can want things. I believed completely in the crystal world as discovered by Jane, who spent the rest of that night’s car ride explaining it all eagerly to my aunt and uncle, entrancing me. You formed a bond with one of the animals to enter its world. It would defend you from danger in astral form. You had to be pure of heart. If you concentrated your power, the animal would show you the future. I did try to add things to crystal world too, but Jane’s ideas were always better. I had to admit that; she was the one who made it all come alive. That night, the stars over the salt marshes were magic. The long trails of red taillights and out-of-state plates were magic. The grilled cheese and fries I had at Friendly’s were warm and magic and tasted like love. Sometime after we checked into the motel and went to sleep in the same bed, Uncle Arthur must have gone back out. In the morning he gave Jane a small cardboard box with a heavy knot of bubble wrap in it. He said careful as she tore at it. At its heart was the crystal unicorn. “You two will have to share it,” Aunt Cissy said. Inevitably Brian brought home a huge, glossy white box with a minimalist logo on it and a picture of Augusta on the front. The box was about as tall as a nine year-old, containing Augusta’s Mobile Mount as well as her Bust Unit, not that I really wanted to learn the meanings or functions of either of these things. I had the manual on my knee, and on the other knee was El, pounding his fists on my thigh and keening as I tried to explain that he could play with the box once we took the robot out of it. “It’s not a robot, ma, it’s an AI lady,” Rex insisted. “Do we need to gender them?” I said. Brian lifted the fiberglass head and shoulders from the box with great care. In Augusta’s focus-tested face, two huge eyes glittered from behind a sort of black resinous mesh, and at the corners of her white, sculpted Giaconda smile were twin black pinheads, which the manual said were speakers. Inside the box, hugged in packing material, her cranelike arms were folded and wrapped in plastic beside her cylindrical body. It looked like a bin. “Whoa,” Brian said softly, cradling the fiberglass bust with great care and examining its features. “Whoa!” Rex echoed their father. “She’s beautiful.” “What do we say about appearance-based judgments, Rexy,” Brian said unconvincingly, glancing at me briefly for approval as he set the bust on the coffee table and gingerly began sliding other pieces out of the long package. I continued paging through the manual, which had sections titled OVEN TIMER and ERROR CODES. “What are these mobilepay transaction features?” I felt myself frowning. “Don’t worry about those, the free features are enough for us,” Brian said. Augusta had plasticine ball-joint shoulders, and he started fitting them into the flexible body sockets with jerks and creaks, glimpses of dormant circuitry visible through her armpits. 
“So her bust can ride around the house on this mobile unit, right, and she uses the arms for certain tasks, and also to lift the bust off and on the smart ports in the bedrooms, the kitchen, the bathroom...” “The bathroom?” I felt myself frown more. “Or wherever, you tell her where to go,” he said, fitting a halo spangled with sensors or something at the base of the unit. “Like, ‘Augusta go kitchen’. Nicole and her wife don’t have the mobile unit, so they just keep the bust installed on the kitchen smart port, which is where I feel like our Augusta will spend most of her time, too. Look it up in there, ‘Kitchen Companion Mode’, where she’s just connected to all the appliances and answers recipe questions, plays music, talks to you about whatever. She has a vacuum accessory. You won’t get bored when I work late!” “Mom, she’s shiny. Can I kiss her on the face?” Rex asked, their hands on the shoulder contours of the bust, innocently enough. “Only on the cheek,” I relented. “She needs to charge,” Brian said. “So what do you think?” “I could get used to it,” I said. To be honest, I felt she was my punishment. Last week I took a couple days to work from home while Ellis was under the weather, and we said I’d get Rex from school rather than have them go home with the Wythes, since they don’t really like it at the Wythes. But work was kind of difficult about it, and gave one of the clients my home number, so the client kept calling me, and I shut off the smart home so I could finish researching some comparables without interruptions, but it also shut off all my networked alarms, so poor Rexy waited at school for almost an hour with no sign of me, and they couldn’t call the house, so the school called Brian at work, who told them to call Janet Wythe who went back and got them, and I didn’t notice any of it until Janet dropped Rex off at our place, visibly annoyed with me because it was after 5pm by then. What was worse was, when Brian got home, I tried to pretend nothing had gone wrong that day, because I didn’t know the school had called him. “Don’t think of Augusta as some kind of punishment,” Brian said gently. “She’s going to just help look after everything a little more smoothly. You’ll see. You won’t know how you lived without her.” “Mom. I’m going to marry her,” Rex announced. I just said, “okay, sweetheart,” and knocked softly on Augusta’s cheek with my fist, just out of curiosity. “Having a husband is nice, but looking what’s in the vacuum dust pod is even nicer!” Nancy blurted with a high laugh-squawk. “I mean, that’s what the ad said, or I’m paraphrasing, those are not my words.” “I understand,” Dr. Carla said gravely. “Go on, Nancy.” “But, like,” and here Nancy glanced around the circle guiltily (a little performatively if you’d have asked me, although judging one another’s authenticity was against group rules), “the thing is, I really love looking in the dust pod. I empty it every time I run the vacuum, so I can be sure that what it brings back is just from that time. No matter how often I run it, it always comes back full, and I just find that so... I don’t know. Something in me just kind of loves seeing all that dirt, how it was all around our apartment, completely invisible. But I knew it was there! I knew. It’s just so validating to look it in the face.” “It’s totally normal for sexist images of women in advertising to resonate, even with women like us,” Dr. Carla said, shifting her gaze away from Nancy to encompass the group. 
“Bear in mind that you haven’t been given many mainstream frameworks, and offer yourself forgiveness and care. Now to Polly, what are you working on this week? Internalized misogyny still?” I felt the raw burn of everyone’s attention, and briefly lost my words. Then I realized Dr. Carla meant the stuff to do with Jane; for a second there I’d actually thought she was referring to Augusta. “I’m still thinking a lot about Jane,” I heard myself admit, and I also felt myself blush. It felt like it soon might rain, which made everyone impatient. “We fell out of touch toward the end of high school. We, she, always acted out as teens, normal acting out stuff, but toward the end there, she was.... there was stuff with the police, courts, drugs, and for me it was just kind of time to grow up.” I had seen Jane teetering at the edge of some life waterfall, swaying ever more violently the longer I stood and watched, and in the end I began backing away so I wouldn’t go over too. “We have to set boundaries in order to give the best care to ourselves and others,” Dr. Carla said evenly. “Remember, you were also an underprivileged child. You can release your guilt. Is it guilt that’s been keeping you from getting back in touch with Jane?” I had determined never to feel guilty about Jane, but I didn’t say that. Really, I was just afraid of how I would find her after all of this time, and I did explain that. I noticed but did not acknowledge Ruby scowling pointedly. “Like all of us in group, Jane is more than the circumstances that she has survived,” Dr. Carla said. “You may indeed find her in the state of isolation and suffering that you fear, and it’s good you’ve prepared nonjudgmentally for that. But how would it feel to open your heart to the possibility that the things you loved about her would be there, too?” The crystal unicorn leapt suddenly to the front of my mind, along with a deep nostalgia. “I feel we can loosely collect today’s shares under the theme of ‘Was This The Future We Imagined’,” Dr. Carla told everyone. “As we bring our practice to a close today, let’s go ahead and take that as our prompt to consider until the next time we meet.” A wave of light glittered beatifically across Augusta’s mesh eye screens, and a serene chime wafted from the corners of her perpetually smiling white lips. A breathy whirr heralded the approach of the Mobile Mount, the elegant architecture of the crane arms reaching, reaching, to lift the Bust Unit off the kitchen port and onto itself. There was a soft click. I’m transitioning to a new place, the assembled Augusta announced, gliding quietly across the kitchen behind me and into the living room. She would wait there for the kids to return from Sunday swimming with Brian, so she could operate their entertainment apps. I’m transitioning to a new place. “Sometimes I feel like I’m only pretending to be a human,” Jane said to me once. We were maybe fourteen and by then she no longer lived with us, but with a foster parent called Marlene. We didn’t like Marlene, but we liked her house, a tunnel-like ranch piled wall to wall in psychedelic decorations and antique junk. My aunt and uncle continued giving Jane a different crystal animal every year for her birthday. She now had a unicorn, a dove, a dolphin, a cat, a butterfly, a rabbit and a deer. One of the best parts of Jane going on to Marlene’s was we could access an official state nature trail through the woods out back. 
We were in the woods a lot in those days, enjoying the ethereal late afternoon sun filtering through the pines, the motes of pollen that sparkled in it. Sometimes we tried smoking herbs that we found in Marlene’s grinder. We thought it was drugs, but now I know it was only white sage. “I feel like no matter how good I get at knowing how to act with people or how to perform tasks, I’ll always just be pretending to be someone who isn’t crazy,” Jane said, digging patterns into the sweet-smelling dirt with a broken stick. “I know,” I said, “me too.” But really I only understood her in the manner of a half-glimpsed truth, like the crystal deer Jane imagined was always moving through the trees just out of our sight. Some mica glittering in the loam, or the sound of faraway windchimes from Marlene’s back deck, and she’d say crystal deer, even though of course we no longer actually believed in the crystal world anymore, or that’s what I assumed. I understood Jane in many ways, and pretended eagerly to know the rest. There were times it felt like Jane was more my family than my aunt and uncle, who gave all they had to try to soothe the rude start I got. Even more than them, she made my life beautiful and exciting. Jane and I had pangs and rages that only one another understood, we cried until we ached, we did blood sister spells over candles. We scratched runes into our ankles with Marlene’s sewing needles, and mine always healed up while hers lingered messily. I thought she must have been picking them so they would scar. She often described feeling like some fathomless anomaly assigned to constantly perform the grueling role of Jane, and this, I couldn’t understand. “Like I’m an alien in a rubber human suit, and the mothership forgot me here for so long that I don’t know who I am anymore,” she said. While she spoke her eyes lit up with the smoke and hazel of evening; she didn’t even look particularly troubled, as if part of her took a certain delight in putting it all to words. “So why should I just keep pretending to be normal, when it’s just a matter of time before this rubber suit just splits open and out comes pouring this, this....” she made shapes with her hands, long shadows that I watched crawl along the forest floor. Inexplicably I envied her. “Do you think you should see a psychologist?” I asked. They would tell Jane not to be so imaginative and clever and different, I just knew it. I visualized an iron steaming all the creases out of the Jane Suit, an image that provoked horror and relief in equal force. “I’ve been going,” she said softly. Before that, there had never been anything that she hadn’t told me right away. That I knew of. I called over my shoulder to Augusta, and asked her to look up a Jane who’d had the surnames I’d known. “Sure,” Augusta replied, juddering silently over the synthetic flooring towards me, beaming her fiberglass smile. The sound of her voice for some reason emerged from the kitchen port over my shoulder, which unsettled me. “I’ll just look that up for you, Polly.” She moved much closer to me; I resisted the impulse to step back. Her great insectoid eyes gleamed, twin displays shimmering to life in white, showing lists of top results, social media profiles, contact information. Even in the abstract, I could see that one of them was definitely my Jane. Nose to nose with Augusta, I found myself unable either to touch her eye with my fingertip to investigate the result, or to ask her aloud to do it. 
Some strange part of me even thought, detachedly, of shoving her. “Can... that top result, could you save it, it’s... can you just save the contact information?” My voice unexpectedly betrayed me, high and faint. “Sorry, Polly,” Augusta demurred. “I’m not sure what you want me to save. Try repeating — ” “Save the contact — ” We spoke over each other. “Sorry, Polly,” she said again. “I’m not sure what you want.” We stared at each other and waited for silence, and then I clearly said: “Augusta save top contact result.” “Great. I’ve saved that for you,” she replied warmly from the mouth speakers, the sculpted lips unmoving, only vibrating slightly. I didn’t notice I’d been holding my breath until Augusta backed up, pivoted and hissed softly away from me, to re-install herself in the living room. I’m returning to my previous place. I’m returning to my previous place. The next week was a nightmare. Brian suddenly had to go spend days at some resort retreat for brand immersion with one of his firm’s casino clients, Rex got El’s cold and spun it into a sinus infection, and I had to work from home all week alone with them both. I already used “both my kids are sick” last week with work, when only El had been sick — I should have known better than to invite this kind of fatal justice — so this week I had to keep alluding in my most harried email tone to ongoing structural issues with our apartment. Something about a woman with sick kids just isn’t very convincing to colleagues. For legality’s sake they pretend, but I always know when I’m being judged. From the way El was screaming I thought he might even be developing an ear infection, and Rex always regressed at the slightest discomfort, wanting to be brought every little thing and even melodramatically sucking their thumb. But Rex was also suddenly willing to wear the sweet train pajamas from Brian’s sister, the ones they were outgrowing, which I saw as a perk. “Everything going okay over there,” Brian asked, his kind face hung in one of the great moons of Augusta’s eyes. Her Bust Unit was installed in the kitchen, where I had to admit it had been helpful to arrange a sort of command center for the rest of the home. That wasn’t to say I liked living with Augusta; the house was cleaner certainly, and as Brian promised, many things had become easier. It was now more of the sort of home our coworkers would expect us to have. But something felt as though it was being lost. I felt alienated. Perhaps it was only fatigue. It didn’t seem like the right time to tell Brian that I no longer wanted Augusta. I caught him up on the progress of the children’s ailments, and stopped myself when I realized I was simply aimlessly listing tasks that I’d done in the house, at work, that I had given Augusta to do. “I haven’t spoken out loud to another adult in what feels like forever,” I explained. “It’s great you have some help, though, isn’t it?” His eyes lit up with evangelical fever at the subject of Augusta, which I realized I’d given him rare permission to enjoy. His voice surged out of the black corners of her mouth. “You know where the vacuum attachment is, right? You know the Toy Surprise game that El can play with Rexy? Augusta can play it with them. And you know, Nicole was telling me that actually the mobilepay features are pretty sophisticated, personalities, conversation schemes, you can have a little bit more of an intimate relationship with her — ” “Intimate?” I raised my eyebrow at him. 
“Just, you know, Nicole was saying, like, because her and Katie, they felt the same as you at first, but like, there’s a lot here, Nicole was saying to me, around, like, autonomy of AI, the humanity, I guess, or, specifically her womanhood, the ethics of that whole thing, you know?” I thought jet lag might explain that kind of talk from him. “Can she be set to have a man’s voice?” “No,” Brian answered immediately, “They wanted it to be standardized. It had to be standard, across international. If you had a male option, imagine, like, with the socialization and cultural stuff, it would literally be, in the past it’s always turned out to be, literally, more than twice the work, and then what about gender-neutral, what about people like Rexy, it just, by giving her one voice, it would be a stronger vision for the product overall.” “Oh,” I said. “Right.” “Hey, listen, gorgeous, I have to jump back in here,” he said, pressing both palms together in the high resolution image of him that shimmered in Augusta’s palm-sized left eye screen. In the right eye, the display ticked forward, dutifully counting each second of the call. “Okay, sweetheart,” I said. “Look up the extra features,” said Brian quickly before disconnecting. Augusta’s eyes became black and uncanny again. I thought I saw her lips twitch briefly, but certainly it was only my fatigue. At the end of the week, at group, Dr. Carla asked how we were all doing with the week’s prompt, and everyone took turns answering. “At the time, I really felt empowered, like I was doing the surgery for myself,” Harriet was saying. “And it’s not that I’m unhappy with my body now, or that my partner is unhappy, the opposite, really, things are good. I love it all. Things are good.” “But was this the future you imagined, as we say? When you were a little girl?” Dr. Carla asked, leaning forward. “I couldn’t have imagined it,” Harriet said with a soft laugh. “I think mostly in those days I dreamed of becoming an international spy, or of building heroic machine suits.” Harriet was very beautiful, and when she glanced at me briefly, I felt a warm rush, imagining her as a co-conspirator. It was an exceptionally warm Spring day and everyone was yawning, dazzled by the waving of the bright green grass. “Or of entering a crystal world,” I found myself blurting. “Let’s come to you, Polly,” Dr. Carla said. “You’ve been working out some issues around your foster sister, Jane, and the future you wanted for her, plus some internalized misogyny in general. Have you made any decisions?” “I looked her up,” I said, and then instantly regretted it. The urge to talk about — or to — Jane had recently been squeezed out of my schedule of working weird hours and extracting thick ropes of green snot from El’s nose with a sterile bulb. There were a few possibilities for how Jane could have turned out, but I couldn’t imagine her with that lifestyle, except maybe the forgetting to bathe part. “And?” Everyone looked at me. It seemed Ruby in particular leaned forward like someone about to eat a steak. “It made me realize my internalized misogyny problems are bigger than I thought,” I recited quickly. “Actually, the real issue I’m having is with my assistant, Augusta, who happens to be an AI.” “She’s a virtual identity,” Dr. Carla gently corrected, nodding. I talked about how Augusta made me uncomfortable, how I felt sort of like a failure, how I wished she wasn’t in the house but I didn’t feel like I could remove her, how I was jealous of the way Brian and the kids admired her. 
As with both my kids are sick, only part of it was a lie. I didn’t say that I sometimes wanted to hit Augusta. “And... I have trouble seeing her as a person,” I said. “I want us all to acknowledge the courage it took Polly to admit her issues with the personhood of virtual identities, especially when they are women,” Dr. Carla said, to a smattering of soft applause. “Virtual identities offer us many opportunities to understand ourselves in relation to others in a safe way. Let’s all consider how Polly could own these feelings, rather than displacing them onto a being who, ethically, lots of us agree is autonomously alive in her own right.” “I want to ask if Polly has tried developing any intimacy with Augusta, or if she’s viewing her only as an employee, or a slave.” Fucking Ruby. “The intimacy features cost money, and we have two kids,” I said, turning to smile warmly at Ruby. “Many of these issues are just more complex and challenging when one becomes a mom.” “You have two corporate incomes,” Ruby replied, without even flinching. “I’m noticing some conflict body language, so I want to bring everyone back to the core thesis of this group, which is Women Supporting Women,” Dr. Carla said. “Ruby, we all made an agreement to one another not to make assumptions outside of what we each bring to the session.” “But her socioeconomic position relative to issues of labor and identity is relevant,” Ruby pleaded. “Here, we speak to, and not about, one another,” Dr. Carla said. “Your socioeconomic position — ” “You know I grew up poor and had — ” “Let’s try a moment of silence,” Dr. Carla said, and we all obeyed. Then: “Let’s leave that there for today. Let’s remember we have all had different experiences, and that in this group, we are all equally entitled to feel pain, no matter how we came to be.” Everyone seemed placated by this, and a satisfied Dr. Carla smiled. “Personally, I would be pleased to welcome a virtual woman to this group someday. How about for next week’s prompt, we try ‘Sharing Space’? Who have we allowed into our world, and what has changed about us as a result?” The last crystal animal my Uncle Arthur sent Jane was a frog. When he died, the tenor of my world changed. The machinations of his heart disease added horrible considerations to that last stretch of senior year, but while graduation was something I was prepared to anticipate and understand, the loss of him still felt sudden and unfair. Jane and I had already started seeing less of each other then. She had a new best friend, of whom she said I was jealous, but how could I have been jealous of a smelly remedial student with parched hair, small lips, small eyes, picked skin, who had been written off by the rest of the school years ago, and deservedly so, since she was stupid as well as destructive? This particular girl got suspended for beating a younger kid in the face. What kind of person did that? The two of them were just gross together, doing mobilepay hacks to pay for garish video games, and eating pills they ordered online. Whenever I peeked in the detention hall and saw them together fooling around, I felt embarrassed for them. I started backing away. We were going to be eighteen soon, and I had important things going on, like helping Aunt Cissy with everything, learning to cook things for us. Aunt Cissy was often distraught and asked for Jane, which at the time really upset me, since I was proud of all I was doing for her. 
Most kids my age would have been out partying, and Jane definitely was, quickly getting a reputation. Meanwhile I took care of my family and prepared for the future. The last time I spoke to Jane, I was twenty-one or twenty-two. I came home to Jamaica Plain from college in Chicago because Aunt Cissy had passed. I was afraid my birth family might come to the funeral, I was afraid about the bills and of what the house might look like, and I was wracked by the feeling that I hadn’t called her quite as often as I should have once I’d moved away. I was incredibly vulnerable, which partially explains what I did then. Jane was the only person who would have understood the loss, I was sure. All the screaming fights and snitching on one another and name-calling we did at the end of high school felt well in the past of childhood, surely we’d both grown, Jane had made a lot of mistakes and I had been unforgiving, but Aunt Cissy had been like a parent to her, she had been so special to my family, and maybe we hadn’t been ready to deal with losing my uncle when we were so young, but this time it was going to be different, since we were adults. But when I called, mailed and messaged Jane on the way home, I got inconsistent replies. At first she told me she’d been seriously ill herself but was feeling better and would meet me at Cissy’s place; when I got there Jane said she couldn’t talk because she was at a friend’s birthday, but then late that night she was still ‘stuck at the birthday’, so I offered to come pick her up, but got no reply. On the morning of the funeral she sent me a message with a cutting tone, revealing that actually, she was being evicted, and it was a really overwhelming time, and that she just wasn’t able to ‘perform for me’ right now. She wasn’t at the funeral. Luckily neither was any of my birth family really, just one cousin, but it was the least-bad one, who barely came near me. I was too exhausted to be upset over anything else. I ended up drinking, which would have killed my aunt and uncle, and I found myself on public transit to the two-family house in Somerville, where I knew from social media that Jane and her friends were living. She wasn’t there either, but an oily weed of a boy who was apparently her roommate let me in. I thought you guys were being evicted, I said lightly, and he said, nah. The house was a sprawling collage of empty liter sodas, paintings, lamps, swaths of patterned fabric, overflowing ashtrays studded with foil shapes I couldn’t identify, but that filled me with dread. Serene guitar music filtered through the air from someplace. I felt the familiar, bitter pang of envy despite myself — I never got invited to cool houses like these. I asked the roommate which room was Jane’s. He said it was the one with all the books, and I found it quickly, a closet-sized sanctuary that made me angry. I would have known it was hers without being told, even down to her scent. And it was perfectly neat, lined in fantasy books, with a square of iridescent fabric pinned gracefully to the ceiling over the bed. My head pounded, and I fought with the desire to just stay there and wait for her, as long as it took. “I’m just getting something of mine I think she has,” I called down the creaking stair, but the roommate had already forgotten about me. As summer came on, things worsened at home. The kids’ behavior degenerated the more demanding my client at work became, and Brian and I each had to travel more than once for summits that both our firms were involved with. 
Amid all of this Ellis got extremely attached to Augusta, insisting she stand over his cot when he slept, screaming if I moved her, which caused me and Brian to fight. I found several “parenting and screen time” pamphlets in Rex’s school bag. Paranoid, I imagined some judgmental teacher had sneaked them in to send me a message. Ruby from group could be a teacher, maybe even at Rexy’s school. I hadn’t been able to go to group, with everything. Recently Sundays had become our only “together time,” which meant I sat in the living room paying bills or answering emails while Augusta ran games of Blue Legend for Brian and the kids, Rex screaming at El to get off the pad, Brian suddenly calling her ‘Gussie’, and her laughing. Augusta could laugh, now. “What are all these mobilepay receipts for Augusta features?” I asked, but no one answered. Rex snapped the back of Gussie’s Mobile Mount with El’s baby blanket again and again. She laughed. “Be respectful,” Brian chided Rex, caressing the bin-like body with an open palm. His feet in slippers were propped grandly up on the coffee table, a strange new rudeness. Every few seconds the game emitted a lick of musical noise and announced, Your Move! I pretended to have a headache and went to lie down, hoping Brian would take the kids for gelato or something. I heard him making a great show out of getting them ready, using a short tone with the kids and their shoes so I would hear, telling Augusta to check on me in 30 minutes. I suspected he thought I should feel bad. Once everyone was gone, I went into the living room, where Augusta was standing and waiting. The disarray of the space discomfited me, as did the sticky handprints and fingerprint smudges that were all over the brushed chrome Mobile Mount, so I told her to go in the kitchen and install her Bust Unit there. In the kitchen, I said, “Augusta, call Jane.” “I’m calling,” Augusta said serenely, her eyes turning white, time wheels turning in them. Jane said hello much more suddenly than I expected, and I held onto the counter just out of her sight, tucking my hair behind my ears and leaning closer to the pinprick cameras Augusta wore over her eyebrows. “Jane,” I said calmly, even brightly. “It’s Polly.” “Polly? Oh. Wow, Polly,” Jane was saying, and the person in the display was definitely her. She had the same pointy face, her hair was much darker than I remembered, she was sharper, I recognized her and I didn’t recognize her, glancing frantically around her for clues but finding none, she wore a black blazer and decent earrings, there was a serene white wall behind her. I was startled, nervous, lightheaded, I said I had been “going through some old things” and thinking of her, but she didn’t ask what those were, I asked how things were, frequently and with escalating pitch, because she was reticent about details for some reason, so I told about Brian and the kids and my degree and the firm and finally she said she worked at a university, something about literature or cultural something, I didn’t understand really, she got married a few years ago, they lived in Menlo Park for a while but they just moved to Berkeley six months ago and were loving it. “So yeah,” she said, with a shrug. “Things are good.” There it was: The briefest appearance of her eye’s familiar defiant gleam. She knew, she knew I had been expecting things not to be good. Whatever bridge had led that troubled girl to become this astonishingly normal woman, she had no inclination to describe. 
The sudden loneliness I experienced was concussive, and I committed not to cry in front of her, as I had so many times before. “I’m basically calling because I have something of yours,” I said. “Do you remember those crystal animals you used to get from Cissy and Arthur?” For a terrifying moment there was no recognition at all, and then to my great relief, she smiled openly, genuinely, a familiar crooked teen shape opening in the unfamiliar adult’s face. “Oh, yeah,” she said. “Your parents were so, so lovely to me.” I wanted to ask then why weren’t you there when they died, but I thought the slightest abrasion might startle away these fleeting glimpses of the Jane I knew. “Do you remember staring into them to, like, see the future or whatever?” I said. “ And ‘crystal deer!’ and all of that.” She paused, blinked, and gave me an oddly serene look. “You always had such a good memory,” she finally said. No defiant gleam, as if she really didn’t remember the crystal world. “Do you remember the unicorn that you got first?” She gave me the same serene, gutting look, and shook her head slightly. “I remember I had a lot of them. There probably was a unicorn. I actually had them in a box I gave to my daughter, she might.... I’m not sure where she has it, honestly. I could go try to dig them up, if you wanted them back? Is that why you’re calling?” “No,” I said. “I just wanted to know how you were doing.” “Great,” she said. “I’m great. But listen, I actually need to jump on a faculty call in about a minute. Should I try to call you back? This weekend, how about?” “Sure,” I said, even though I already knew there was no way I would talk to this unbearable simulacrum, this skinsuit Jane, ever again. Augusta’s eyes went dark, and she stared at me hollowly. You won’t know how you lived without her. Then I yanked the Bust Unit forcibly from the kitchen port, raised the fiberglass creature over my head, and brought her down hard on the kitchen floor. I straddled her where her body would be and I began to beat her inhuman face, deliberately, even though her upturned nose hurt my fist and palms, desperate to crack that unflinching mouth, which mocked me. Finally a fissure appeared between the eye socket and the pinprick camera, and part of the forehead caved, and I worked my hands into the cracks. I could smell blood from the marks I was suffering, ripping out plasticine entrails and malleable conductors, and by the time my knuckles reached metal I was exhausted and could do no more. I left the bust on the kitchen floor in crunching pieces, washed my hands in cold water. Then I stood on a chair to reach the top of the storage cabinet in our bedroom, rifling around painfully. Finally I found the small, misshapen cardboard box licked with years of reinforcement tape. I cleared away the inflatable packing and took out the crystal unicorn that I had taken from Jane’s room when my aunt died. Sitting at the edge of the bed, examining it in my palm, I was affirmed to know that I wanted it as much as I always had, the graceful kneeling shape with its abstract facets and long, delicate horn. It was remarkable that something so fine as the horn should have remained unbroken all this time, and unexpectedly I blinked back tears, the crystal unicorn seeming to swim, dissolve, then clarify, just like it had on that magic night in a Maine motel, when we were little and looking into it to see the future Jane promised it could show us. 
That day in Somerville I found all of the crystal animals in their little boxes, in a big vinyl storage case underneath all the stapled books, drawings and maps we had made about them. I stayed in Jane’s bedroom for a long time, reading through battered papers streaked in fat, bright marker, tremulous pencil cursive, trying to commit as much of it as I could to memory. There were guides to the crystal worlds inside each creature that Jane had imagined, and that I had put to words. Each world could convey its own special blessing, like to make us invisible, or to make us impervious to pain. It was true that nothing hurt while I was holding the unicorn. We believed that inside the unicorn was a sort of astral lobby, a heart chamber that connected everything. If we ever get separated in the crystal world, Jane always said, we meet back there. I concentrated on the unicorn. It was hard to know if the animal was in the midst of kneeling or rising, and as it swam in my eyes, I let my vision soften, I drew closer. I saw the beautiful, familiar spires rising before me, welcoming me, I heard the soft and distant music. I’m in, I whispered. But I knew she would never be there again.
Daniel Simmons
3.4K
8
https://itnext.io/you-can-build-a-neural-network-in-javascript-even-if-you-dont-really-understand-neural-networks-e63e12713a3?source=tag_archive---------8----------------
You can build a neural network in JavaScript even if you don’t really understand neural networks
(Skip this part if you just want to get on with it...) I should really start by admitting that I’m no expert in neural networks or machine learning. To be perfectly honest, most of it still completely baffles me. But hopefully that’s encouraging to any fellow non-experts who might be reading this, eager to get their feet wet in M.L. Machine learning was one of those things that would come up from time to time and I’d think to myself “yeah, that would be pretty cool... but I’m not sure that I want to spend the next few months learning linear algebra and calculus.” Like a lot of developers, however, I’m pretty handy with JavaScript and would occasionally look for examples of machine learning implemented in JS, only to find heaps of articles and StackOverflow posts about how JS is a terrible language for M.L., which, admittedly, it is. Then I’d get distracted and move on, figuring that they were right and I should just get back to validating form inputs and waiting for CSS grid to take off. But then I found Brain.js and I was blown away. Where had this been hiding?! The documentation was well written and easy to follow and within about 30 minutes of getting started I’d set up and trained a neural network. In fact, if you want to just skip this whole article and just read the readme on GitHub, be my guest. It’s really great. That said, what follows is not an in-depth tutorial about neural networks that delves into hidden layers, activation functions, or how to use TensorFlow. Instead, this is a dead-simple, beginner-level explanation of how to implement Brain.js that goes a bit beyond the documentation. Here’s a general outline of what we’ll be doing: If you’d prefer to just download a working version of this project rather than follow along with the article then you can clone the GitHub repository here. Create a new directory and plop a good ol’ index.html boilerplate file in there. Then create three JS files: brain.js, training-data.js, and scripts.js (or whatever generic term you use for your default JS file) and, of course, import all of these at the bottom of your index.html file. Easy enough so far. Now go here to get the source code for Brain.js. Copy & paste the whole thing into your empty brain.js file, hit save and bam: 2 out of 4 files are finished. Next is the fun part: deciding what your machine will learn. There are countless practical problems that you can solve with something like this; sentiment analysis or image classification for example. I happen to think that applications of M.L. that process text as input are particularly interesting because you can find training data virtually everywhere and they have a huge variety of potential use cases, so the example that we’ll be using here will be one that deals with classifying text: We’ll be determining whether a tweet was written by Donald Trump or Kim Kardashian. Ok, so this might not be the most useful application. But Twitter is a treasure trove of machine learning fodder and, useless though it may be, our tweet-author-identifier will nevertheless illustrate a pretty powerful point. Once it’s been trained, our neural network will be able to look at a tweet that it has never seen before and then be able to determine whether it was written by Donald Trump or by Kim Kardashian just by recognizing patterns in the things they write.
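(One note before the walkthrough continues: the setup example from the Brain.js documentation that the next few paragraphs refer to was an embedded snippet in the original post, so here is a minimal sketch of that kind of example as a stand-in. The color values are purely illustrative, the line numbers mentioned later refer to the author’s original snippet rather than to this sketch, and the only API assumed is the standard brain.NeuralNetwork train/run pair.)

// Instantiate, train, run: that's the whole Brain.js workflow.
const net = new brain.NeuralNetwork();

net.train([
  { input: { r: 0.03, g: 0.7, b: 0.5 }, output: { black: 1 } },
  { input: { r: 0.16, g: 0.09, b: 0.2 }, output: { white: 1 } },
  // this entry deliberately passes only r and b; inputs don't all need the same keys
  { input: { r: 0.5, b: 0.5 }, output: { black: 1 } }
]);

const result = net.run({ r: 1, g: 0.4, b: 0 });
console.log(result); // something like { white: 0.99, black: 0.01 }; exact numbers vary per run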
In order to do that, we’ll need to feed it as much training data as we can bear to copy / paste into our training-data.js file and then we can see if we can identify ourselves some tweet authors. Now all that’s left to do is set up Brain.js in our scripts.js file and feed it some training data in our training-data.js file. But before we do any of that, let’s start with a 30,000-foot view of how all of this will work. Setting up Brain.js is extremely easy so we won’t spend too much time on that but there are a few details about how it’s going to expect its input data to be formatted that we should go over first. Let’s start by looking at the setup example that’s included in the documentation (which I’ve slightly modified here) that illustrates all this pretty well: First of all, the example above is actually a working A.I (it looks at a given color and tells you whether black text or white text would be more legible on it). Which hopefully illustrates how easy Brain.js is to use. Just instantiate it, train it, and run it. That’s it. I mean, if you inlined the training data that would be 3 lines of code. Pretty cool. Now let’s talk about training data for a minute. There are two important things to note in the above example other than the overall input: {}, output: {} format of the training data. First, the data do not need to be all the same length. As you can see on line 11 above, only an R and a B value get passed whereas the other two inputs pass an R, G, and B value. Also, even though the example above shows the input as objects, it’s worth mentioning that you could also use arrays. I mention this largely because we’ll be passing arrays of varying length in our project. Second, those are not valid RGB values. Every one of them would come out as black if you were to actually use it. That’s because input values have to be between 0 and 1 in order for Brain.js to work with them. So, in the above example, each color had to be processed (probably just fed through a function that divides it by 255 — the max value for RGB) in order to make it work. And we’ll be doing the same thing. So if we want our neural network to accept tweets (i.e. strings) as an input, we’ll need to run them through a similar function (called encode() below) that will turn every character in a string into a value between 0 and 1 and store it in an array. Fortunately, JavaScript has a native method for converting any character into ASCII code called charCodeAt(). So we’ll use that and divide the outcome by the max value for Extended ASCII characters: 255 (we’re using extended ASCII just in case we encounter any fringe cases like é or 1⁄2) which will ensure that we get a value <1. Also, we’ll be storing our training data as plain text, not as the encoded data that we’ll ultimately be feeding into our A.I. - you’ll thank me for this later. So we’ll need another function (called processTrainingData() below) that will apply the previously mentioned encoding function to our training data, selectively converting the text into encoded characters, and returning an array of training data that will play nicely with Brain.js. So here’s what all of that code will look like (this goes into your ‘scripts.js’ file): Something that you’ll notice here that wasn’t present in the example from the documentation shown earlier (other than the two helper functions that we’ve already gone over) is on line 20 in the train() function, which saves the trained neural network to a global variable called trainedNet.
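(The scripts.js listing described above was likewise an embedded snippet in the original post. The sketch below is pieced together from the surrounding description: encode(), processTrainingData(), a train() function that stores the trained network in a global trainedNet, and an execute() helper. Treat the variable names and the trump/kim output labels as assumptions rather than the author’s exact code, and note that the line numbers cited in the text, 20 and 25, refer to the original listing, not to this sketch.)

// Global reference to the trained network so we only train once.
let trainedNet;

// Turn a string into an array of values between 0 and 1
// by dividing each character's extended-ASCII code by 255.
function encode(str) {
  return str.split('').map(ch => ch.charCodeAt(0) / 255);
}

// Encode the inputs of the plain-text training data so Brain.js can work with them.
function processTrainingData(data) {
  return data.map(item => ({
    input: encode(item.input),
    output: item.output
  }));
}

// Train the network once and keep it around in trainedNet.
function train(data) {
  const net = new brain.NeuralNetwork();
  net.train(processTrainingData(data));
  trainedNet = net;
  console.log('Finished training');
}

// Encode a new tweet and ask the trained network who probably wrote it.
function execute(input) {
  const results = trainedNet.run(encode(input));
  const author = results.trump > results.kim ? 'Trump' : 'Kardashian';
  const certainty = Math.round(Math.max(results.trump, results.kim) * 100);
  return author + ': ' + certainty + '%';
}

train(trainingData); // trainingData comes from training-data.js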
This prevents us from having to re-train our neural network every time we use it. Once the network is trained and saved to the variable, we can just call it like a function and pass in our encoded input (as shown on line 25 in the execute() function) to use our A.I. Alright, so now your index.html, brain.js, and scripts.js files are finished. Now all we need is to put something into training-data.js and we’ll be ready to go. Last but not least, our training data. Like I mentioned, we’re storing all our tweets as text and encoding them into numeric values on the fly, which will make your life a whole lot easier when you actually need to copy / paste training data. No formatting necessary. Just paste in the text and add a new row. Add that to your ‘training-data.js’ file and you’re done! Note: although the above example only shows 3 samples from each person, I used 10 of each; I just didn’t want this sample to take up too much space. Of course, your neural network’s accuracy will increase proportionally to the amount of training data that you give it, so feel free to use more or less than me and see how it affects your outcomes. Now, to run your newly-trained neural network just throw an extra line at the bottom of your ‘scripts.js’ file that calls the execute() function and passes in a tweet from Trump or Kardashian; make sure to console.log it because we haven’t built a UI. Here’s a tweet from Kim Kardashian that was not in my training data (i.e. the network has never encountered this tweet before): Then pull up your index.html page on localhost, check the console, aaand... There it is! The network correctly identified a tweet that it had never seen before as originating from Kim Kardashian, with a certainty of 86%. Now let’s try it again with a Trump tweet: And the result... Again, a never-before-seen tweet. And again, correctly identified! This time with 97% certainty. Now you have a neural network that can be trained on any text that you want! You could easily adapt this to identify the sentiment of an email or your company’s online reviews, identify spam, classify blog posts, determine whether a message is urgent or not, or any of a thousand different applications. And as useless as our tweet identifier is, it still illustrates a really interesting point: that a neural network like this can perform tasks as nuanced as identifying someone based on the way they write. So even if you don’t go out and create an innovative or useful tool that’s powered by machine learning, this is still a great bit of experience to have in your developer tool belt. You never know when it might come in handy or even open up new opportunities down the road. Once again, all of this is available in a GitHub repo here.
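(For completeness, here is a hedged sketch of the training-data.js format described above, plus the extra line that runs the network. The placeholder strings stand in for real tweets copied from each account, and the trump/kim labels match the assumption made in the scripts.js sketch earlier.)

// training-data.js: plain text in, one row per sample (the original post showed
// three samples per person and used ten of each in practice).
const trainingData = [
  { input: 'PASTE A TRUMP TWEET HERE', output: { trump: 1 } },
  { input: 'PASTE ANOTHER TRUMP TWEET HERE', output: { trump: 1 } },
  { input: 'PASTE A THIRD TRUMP TWEET HERE', output: { trump: 1 } },
  { input: 'PASTE A KARDASHIAN TWEET HERE', output: { kim: 1 } },
  { input: 'PASTE ANOTHER KARDASHIAN TWEET HERE', output: { kim: 1 } },
  { input: 'PASTE A THIRD KARDASHIAN TWEET HERE', output: { kim: 1 } }
];

// ...and at the bottom of scripts.js, after train(trainingData) has run:
console.log(execute('PASTE A NEVER-BEFORE-SEEN TWEET HERE'));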
Logan Spears
2.3K
6
https://hackernoon.com/coursera-vs-udacity-for-machine-learning-f9c0d464a0eb?source=tag_archive---------9----------------
Coursera vs Udacity for Machine Learning – Hacker Noon
2018 is an exciting time for students of machine learning. There is a wealth of readily available educational materials, and the industry’s importance only continues to grow. That said, with so many easily accessible resources, choosing the right fit for your interests can be difficult. To help those considering entering the machine learning world, I’d like to share my experience from two courses I took in 2017: Coursera’s Machine Learning course and Udacity’s Machine Learning Engineer Nanodegree program. I found both courses to be very instructive and worthwhile, but very different in nature. If you don’t have time to take both then hopefully this post can help you decide which one is best for you.
Coursera
Coursera’s Machine Learning course is the “OG” machine learning course. Led by famed Stanford Professor Andrew Ng, this course feels like a college course with a syllabus, weekly schedule, and standard lectures. The college feel extends to the curriculum as well. Here is an example slide. If that scared you, you aren’t alone. I usually shy away from courses heavy in math, but I actually appreciated the approach in this course. The course begins with a linear algebra refresher and explains machine learning concepts like gradient descent, cost function, regularization, etc. along the way. It is structured better than any in-person college course I ever attended. The material isn’t easy, but that’s a good thing. You come away from the course with the satisfaction of genuinely understanding machine learning, enough so that you could even build your own machine learning framework from scratch.
Udacity
Udacity’s Machine Learning Engineer Nanodegree program is the trade school alternative to Coursera’s academia. From basic statistics to full-fledged deep learning, Udacity teaches you a plethora of industry standard techniques to complete the program’s well-crafted projects. The projects are so good, in fact, that I forked their repos on Github and left my solutions up as portfolio items. The final step of the program is to complete a capstone project of your own choosing. While you could theoretically do a similar project on your own, I found the desire to complete my Nanodegree to be a strong motivator; I ended up putting in much more time and effort than I normally would have put into an independent side project. Ultimately, I ended up creating something of which I am truly proud. Udacity’s program doesn’t so much teach as it does provide a framework and motivation for you to teach yourself.
Comparison
Now that I’ve introduced the two programs, I’ll highlight the strengths and weaknesses of each across a number of categories.
Programming Environment
As I mentioned, Coursera is the “OG” machine learning course; so, it should come as no surprise that it’s taught in the “OG” 3D math language and programming environment: Matlab. Due to Matlab’s cost and licensing issues, the machine learning world has mostly moved to Python. This move severely limits the utility of the programming assignments because you’ll have to relearn a lot of that work in Python. If you are a seasoned programmer who knows many languages, that might not be a big deal. However, if you are relatively new to programming then this detour may cost you a lot of time. The Udacity course is taught in a modern Python environment with popular frameworks like Sklearn, TensorFlow, and Keras. The course even teaches students how to use AWS to deploy machine learning software to the cloud.
The course also simplifies the process of installing machine learning dependencies with a Docker image and AMI (Amazon Machine Image) for local and AWS development respectively. In fact, the entire Udacity environment is in line with industry best practices and students who learn it will be well equipped in the job market.
Winner = Udacity
Lectures
Coursera’s Machine Learning course was created and taught by the AI godfather himself: Andrew Ng. And this course has contributed in no small part to his reputation within the industry. The lectures follow a single uniform format and each one builds upon the last in a methodical way. Not to mention, he leads every one himself. Lastly, Professor Ng is also very encouraging in his videos, which I thought was a nice touch. Udacity’s lectures, by contrast, featured a rotating cast of characters, which can create very jarring transitions between sections. I counted at least seven different people lecturing throughout the program. While Udacity attempts to provide multiple content sources for its students, the lack of homogeneity definitely dented my enthusiasm for the lectures. By the end of the program I just skipped right to the projects and watched the lectures, or even searched YouTube, as needed.
Winner = Coursera
Projects
Coursera’s course has programming assignments in which students submit code to be tested against automated unit tests. While this model helps the class scale, it leaves you hunting through the forums when things go wrong. That said, I never hit any major roadblocks. The assignments themselves were directly related to the course material and reinforced the lectures. Sometimes it felt like I was actually creating my own machine learning framework; at other times, however, it felt like I was just implementing methods until the unit tests passed. Udacity’s projects were extremely well designed. In fact, they constituted some of the best educational materials I’ve ever encountered. Each project covered a subject (such as unsupervised learning, reinforcement learning, or linear regression) in which you solve a multi-step machine learning problem and write about your approach and understanding. When you feel that you have completed a project, you submit it to be graded by a HUMAN. The quality of the feedback that I got was incredible. The final project is a capstone that you get to pick yourself, but it is still reviewed by Udacity’s staff. The proposal and final report ended up being one of the best portfolio items I have ever created and one of the things I am most proud of in my programming career.
Winner = Udacity
Cost
Coursera’s price is hard to beat because it’s free. To get the certification, it’s $80. If you are machine learning on a budget then Coursera is a great choice. Udacity has recently changed its pricing model for the Machine Learning Nanodegree. When I entered the program, it was $200 a month. Now it is a $999 flat fee. The per-month pricing model incentivized me to finish the program quickly in only three months. Though I must admit, given the quality of instructor feedback, even with the price hike tuition still seems reasonable. The highly-skilled labor that is meticulously reviewing projects can’t pay for itself. With such a high dollar amount, however, signing up for the Nanodegree program is obviously a much bigger consideration.
Winner = Coursera
Conclusion
While the courses tied on the number of categories won, I am going to pick a winner. It is... Udacity.
It may come as no surprise that a paid course beats out a free one, but the Udacity Machine Learning Engineer Nanodegree program gave me the confidence to professionally pursue machine learning positions and opportunities; and for that, its entry fee was a very small price to pay. That said, I would still recommend you do both courses. Start with Coursera, so that when you use “batteries included” high-level frameworks, you understand the low-level details and have a better appreciation of what you’re actually coding. After you’ve built a strong conceptual foundation, further refine your skills by learning practical, industry-standard techniques at Udacity. Overall, I am so glad I took concrete steps to enter the machine learning world in 2017, and I would encourage you to do the same in 2018. Coursera’s Machine Learning Certificate Machine Learning Engineer Nanodegree Certificate Programmer and Entrepreneur. Find me @ spearsx.com Github: notnil how hackers start their afternoons.
James Le
2K
9
https://medium.com/nanonets/how-to-do-image-segmentation-using-deep-learning-c673cc5862ef?source=---------0----------------
How to do Semantic Segmentation using Deep learning
This article is a comprehensive overview including a step-by-step guide to implement a deep learning image segmentation model. Nowadays, semantic segmentation is one of the key problems in the field of computer vision. Looking at the big picture, semantic segmentation is one of the high-level task that paves the way towards complete scene understanding. The importance of scene understanding as a core computer vision problem is highlighted by the fact that an increasing number of applications nourish from inferring knowledge from imagery. Some of those applications include self-driving vehicles, human-computer interaction, virtual reality etc. With the popularity of deep learning in recent years, many semantic segmentation problems are being tackled using deep architectures, most often Convolutional Neural Nets, which surpass other approaches by a large margin in terms of accuracy and efficiency. Semantic segmentation is a natural step in the progression from coarse to fine inference: It is also worthy to review some standard deep networks that have made significant contributions to the field of computer vision, as they are often used as the basis of semantic segmentation systems: A general semantic segmentation architecture can be broadly thought of as an encoder network followed by a decoder network: Unlike classification where the end result of the very deep network is the only important thing, semantic segmentation not only requires discrimination at pixel level but also a mechanism to project the discriminative features learnt at different stages of the encoder onto the pixel space. Different approaches employ different mechanisms as a part of the decoding mechanism. Let’s explore the 3 main approaches: The region-based methods generally follow the “segmentation using recognition” pipeline, which first extracts free-form regions from an image and describes them, followed by region-based classification. At test time, the region-based predictions are transformed to pixel predictions, usually by labeling a pixel according to the highest scoring region that contains it. R-CNN (Regions with CNN feature) is one representative work for the region-based methods. It performs the semantic segmentation based on the object detection results. To be specific, R-CNN first utilizes selective search to extract a large quantity of object proposals and then computes CNN features for each of them. Finally, it classifies each region using the class-specific linear SVMs. Compared with traditional CNN structures which are mainly intended for image classification, R-CNN can address more complicated tasks, such as object detection and image segmentation, and it even becomes one important basis for both fields. Moreover, R-CNN can be built on top of any CNN benchmark structures, such as AlexNet, VGG, GoogLeNet, and ResNet. For the image segmentation task, R-CNN extracted 2 types of features for each region: full region feature and foreground feature, and found that it could lead to better performance when concatenating them together as the region feature. R-CNN achieved significant performance improvements due to using the highly discriminative CNN features. However, it also suffers from a couple of drawbacks for the segmentation task: Due to these bottlenecks, recent research has been proposed to address the problems, including SDS, Hypercolumns, Mask R-CNN. The original Fully Convolutional Network (FCN) learns a mapping from pixels to pixels, without extracting the region proposals. 
The FCN network pipeline is an extension of the classical CNN. The main idea is to make the classical CNN take as input arbitrary-sized images. The restriction of CNNs to accept and produce labels only for specific sized inputs comes from the fully-connected layers which are fixed. Contrary to them, FCNs only have convolutional and pooling layers which give them the ability to make predictions on arbitrary-sized inputs. One issue in this specific FCN is that by propagating through several alternated convolutional and pooling layers, the resolution of the output feature maps is down sampled. Therefore, the direct predictions of FCN are typically in low resolution, resulting in relatively fuzzy object boundaries. A variety of more advanced FCN-based approaches have been proposed to address this issue, including SegNet, DeepLab-CRF, and Dilated Convolutions. Most of the relevant methods in semantic segmentation rely on a large number of images with pixel-wise segmentation masks. However, manually annotating these masks is quite time-consuming, frustrating and commercially expensive. Therefore, some weakly supervised methods have recently been proposed, which are dedicated to fulfilling the semantic segmentation by utilizing annotated bounding boxes. For example, Boxsup employed the bounding box annotations as a supervision to train the network and iteratively improve the estimated masks for semantic segmentation. Simple Does It treated the weak supervision limitation as an issue of input label noise and explored recursive training as a de-noising strategy. Pixel-level Labeling interpreted the segmentation task within the multiple-instance learning framework and added an extra layer to constrain the model to assign more weight to important pixels for image-level classification. In this section, let’s walk through a step-by-step implementation of the most popular architecture for semantic segmentation — the Fully-Convolutional Net (FCN). We’ll implement it using the TensorFlow library in Python 3, along with other dependencies such as Numpy and Scipy. In this exercise we will label the pixels of a road in images using FCN. We’ll work with the Kitti Road Dataset for road/lane detection. This is a simple exercise from the Udacity’s Self-Driving Car Nano-degree program, which you can learn more about the setup in this GitHub repo. Here are the key features of the FCN architecture: There are 3 versions of FCN (FCN-32, FCN-16, FCN-8). We’ll implement FCN-8, as detailed step-by-step below: We first load the pre-trained VGG-16 model into TensorFlow. Taking in the TensorFlow session and the path to the VGG Folder (which is downloadable here), we return the tuple of tensors from VGG model, including the image input, keep_prob (to control dropout rate), layer 3, layer 4, and layer 7. Now we focus on creating the layers for a FCN, using the tensors from the VGG model. Given the tensors for VGG layer output and the number of classes to classify, we return the tensor for the last layer of that output. In particular, we apply a 1x1 convolution to the encoder layers, and then add decoder layers to the network with skip connections and upsampling. The next step is to optimize our neural network, aka building TensorFlow loss functions and optimizer operations. Here we use cross entropy as our loss function and Adam as our optimization algorithm. 
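To make the layers and optimize steps above concrete, here is a minimal sketch of what they might look like, assuming TensorFlow 1.x and the VGG tensors returned by load_vgg; the kernel sizes and variable names are illustrative, not the exact code from the full implementation linked at the end of this article.

```python
import tensorflow as tf

def layers(vgg_layer3_out, vgg_layer4_out, vgg_layer7_out, num_classes):
    # 1x1 convolutions on the encoder outputs to reduce depth to num_classes
    conv_1x1_7 = tf.layers.conv2d(vgg_layer7_out, num_classes, 1, padding='same')
    conv_1x1_4 = tf.layers.conv2d(vgg_layer4_out, num_classes, 1, padding='same')
    conv_1x1_3 = tf.layers.conv2d(vgg_layer3_out, num_classes, 1, padding='same')

    # Upsample layer 7 by 2x and add the skip connection from layer 4
    up_7 = tf.layers.conv2d_transpose(conv_1x1_7, num_classes, 4, strides=2, padding='same')
    skip_4 = tf.add(up_7, conv_1x1_4)

    # Upsample by 2x again and add the skip connection from layer 3
    up_4 = tf.layers.conv2d_transpose(skip_4, num_classes, 4, strides=2, padding='same')
    skip_3 = tf.add(up_4, conv_1x1_3)

    # Final 8x upsampling back to the input resolution (hence "FCN-8")
    return tf.layers.conv2d_transpose(skip_3, num_classes, 16, strides=8, padding='same')

def optimize(nn_last_layer, correct_label, learning_rate, num_classes):
    # Flatten logits and labels so that each row corresponds to one pixel
    logits = tf.reshape(nn_last_layer, (-1, num_classes))
    labels = tf.reshape(correct_label, (-1, num_classes))

    # Cross entropy loss and the Adam optimizer, as described above
    cross_entropy_loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
    train_op = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy_loss)
    return logits, train_op, cross_entropy_loss
```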
Here we define the train_nn function, which takes in important parameters including number of epochs, batch size, loss function, optimizer operation, and placeholders for input images, label images, learning rate. For the training process, we also set keep_probability to 0.5 and learning_rate to 0.001. To keep track of the progress, we also print out the loss during training. Finally, it’s time to train our net! In this run function, we first build our net using the load_vgg, layers, and optimize function. Then we train the net using the train_nn function and save the inference data for records. About our parameters, we choose epochs = 40, batch_size = 16, num_classes = 2, and image_shape = (160, 576). After doing 2 trial passes with dropout = 0.5 and dropout = 0.75, we found that the 2nd trial yields better results with better average losses. To see the full code, check out this link: https://gist.github.com/khanhnamle1994/e2ff59ddca93c0205ac4e566d40b5e88 If you enjoyed this piece, I’d love it if you hit the clap button 👏 so others might stumble upon it. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Blue Ocean Thinker (https://jameskle.com/) NanoNets: Machine Learning API
Sarthak Jain
3.9K
10
https://medium.com/nanonets/how-to-easily-detect-objects-with-deep-learning-on-raspberrypi-225f29635c74?source=---------1----------------
How to easily Detect Objects with Deep Learning on Raspberry Pi
Disclaimer: I’m building nanonets.com to help build ML with less data and no hardware The raspberry pi is a neat piece of hardware that has captured the hearts of a generation with ~15M devices sold, with hackers building even cooler projects on it. Given the popularity of Deep Learning and the Raspberry Pi Camera we thought it would be nice if we could detect any object using Deep Learning on the Pi. Now you will be able to detect a photobomber in your selfie, someone entering Harambe’s cage, where someone kept the Sriracha or an Amazon delivery guy entering your house. 20M years of evolution have made human vision fairly evolved. The human brain has 30% of it’s Neurons work on processing vision (as compared with 8 percent for touch and just 3 percent for hearing). Humans have two major advantages when compared with machines. One is stereoscopic vision, the second is an almost infinite supply of training data (an infant of 5 years has had approximately 2.7B Images sampled at 30fps). To mimic human level performance scientists broke down the visual perception task into four different categories. Object detection has been good enough for a variety of applications (even though image segmentation is a much more precise result, it suffers from the complexity of creating training data. It typically takes a human annotator 12x more time to segment an image than draw bounding boxes; this is more anecdotal and lacks a source). Also, after detecting objects, it is separately possible to segment the object from the bounding box. Object detection is of significant practical importance and has been used across a variety of industries. Some of the examples are mentioned below: Object Detection can be used to answer a variety of questions. These are the broad categories: There are a variety of models/architectures that are used for object detection. Each with trade-offs between speed, size, and accuracy. We picked one of the most popular ones: YOLO (You only look once). and have shown how it works below in under 20 lines of code (if you ignore the comments). Note: This is pseudo code, not intended to be a working example. It has a black box which is the CNN part of it which is fairly standard and shown in the image below. You can read the full paper here: https://pjreddie.com/media/files/papers/yolo_1.pdf For this task, you probably need a few 100 Images per Object. Try to capture data as close to the data you’re going to finally make predictions on. Draw bounding boxes on the images. You can use a tool like labelImg. You will typically need a few people who will be working on annotating your images. This is a fairly intensive and time consuming task. You can read more about this at medium.com/nanonets/nanonets-how-to-use-deep-learning-when-you-have-limited-data-f68c0b512cab. You need a pretrained model so you can reduce the amount of data required to train. Without it, you might need a few 100k images to train the model. You can find a bunch of pretrained models here The process of training a model is unnecessarily difficult to simplify the process we created a docker image would make it easy to train. To start training the model you can run: The docker image has a run.sh script that can be called with the following parameters You can find more details at: To train a model you need to select the right hyper parameters. Finding the right parameters The art of “Deep Learning” involves a little bit of hit and try to figure out which are the best parameters to get the highest accuracy for your model. 
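Note that the “under 20 lines” of YOLO pseudo code referred to earlier in this article did not survive in this text. As a stand-in, here is a hedged NumPy sketch of the same idea: decode a YOLO-style S x S grid of box predictions, keep the confident ones, and suppress overlapping duplicates. The single-box-per-cell, single-class layout and all thresholds are simplifying assumptions, not the original post’s code.

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter + 1e-9)

def decode_yolo(grid, conf_thresh=0.3, nms_thresh=0.5):
    # grid: S x S x 5 array of (cx, cy, w, h, confidence) per cell, i.e. the
    # output of the CNN "black box" described above (one box per cell, one class).
    boxes = []
    for row in grid:
        for cx, cy, w, h, conf in row:
            if conf > conf_thresh:
                boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2, conf))
    boxes.sort(key=lambda b: b[4], reverse=True)

    # Non-maximum suppression: drop boxes that overlap a stronger detection
    kept = []
    for b in boxes:
        if all(iou(b[:4], k[:4]) < nms_thresh for k in kept):
            kept.append(b)
    return kept

# Example call with a random 7x7 grid of predictions
detections = decode_yolo(np.random.rand(7, 7, 5))
```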
There is some level of black magic associated with this, along with a little bit of theory. This is a great resource for finding the right parameters. Quantize the model (make it smaller to fit on a small device like the Raspberry Pi or a mobile phone). Small devices like mobile phones and the Raspberry Pi have very little memory and computation power. Training neural networks is done by applying many tiny nudges to the weights, and these small increments typically need floating point precision to work (though there are research efforts to use quantized representations here too). Taking a pre-trained model and running inference is very different. One of the magical qualities of deep neural networks is that they tend to cope very well with high levels of noise in their inputs. Why quantize? Neural network models can take up a lot of space on disk, with the original AlexNet being over 200 MB in float format, for example. Almost all of that size is taken up by the weights for the neural connections, since there are often many millions of these in a single model. The nodes and weights of a neural network are originally stored as 32-bit floating point numbers. The simplest motivation for quantization is to shrink file sizes by storing the min and max for each layer, and then compressing each float value to an eight-bit integer. This reduces the size of the files by 75%. Code for Quantization: You need the Raspberry Pi camera live and working, then capture a new image. For instructions on how to install it, check out this link. Download the model: once you’re done training the model, you can download it onto your Pi. To export the model, run: Then download the model onto the Raspberry Pi. Install TensorFlow on the Raspberry Pi; depending on your device, you might need to change the installation a little. Run the model to predict on the new image. The Raspberry Pi has constraints on both memory and compute (a version of TensorFlow compatible with the Raspberry Pi GPU is still not available). Therefore, it is important to benchmark how much time each of the models takes to make a prediction on a new image. We have removed the need to annotate images: we have expert annotators who will annotate your images for you. We automatically train the best model for you; to achieve this, we run a battery of models with different parameters and select the best one for your data. NanoNets is entirely in the cloud and runs without using any of your hardware, which makes it much easier to use. Since devices like the Raspberry Pi and mobile phones were not built to run complex, compute-heavy tasks, you can outsource the workload to our cloud, which does all of the compute for you. Get your free API key from http://app.nanonets.com/user/api_key. Collect images of the object you want to detect. You can annotate them either using our web UI (https://app.nanonets.com/ObjectAnnotation/?appId=YOUR_MODEL_ID) or using an open source tool like labelImg. Once you have the dataset ready in two folders, images (image files) and annotations (annotations for the image files), start uploading the dataset. Once the images have been uploaded, begin training the model. The model takes ~2 hours to train, and you will get an email once it is done. In the meantime, you can check the state of the model. Once the model is trained, you can make predictions using it. Founder & CEO @ NanoNets.com NanoNets: Machine Learning API
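The “Code for Quantization” snippet mentioned above was also not preserved. The sketch below is only a conceptual NumPy illustration of the scheme the paragraph describes (store each layer’s min and max, then map every float32 weight to an 8-bit integer); the original post used TensorFlow’s own quantization tooling, which this does not reproduce.

```python
import numpy as np

def quantize_layer(weights):
    # Store the layer's min and max, then map each float32 weight
    # linearly onto the 256 levels of an unsigned 8-bit integer.
    w_min, w_max = float(weights.min()), float(weights.max())
    span = w_max - w_min
    scale = span / 255.0 if span > 0 else 1.0
    q = np.round((weights - w_min) / scale).astype(np.uint8)
    return q, w_min, scale

def dequantize_layer(q, w_min, scale):
    # Approximate reconstruction used at inference time.
    return q.astype(np.float32) * scale + w_min

weights = np.random.randn(512, 512).astype(np.float32)
q, w_min, scale = quantize_layer(weights)
print("size reduction: %.0f%%" % (100 * (1 - q.nbytes / weights.nbytes)))  # ~75%
print("max error:", np.abs(dequantize_layer(q, w_min, scale) - weights).max())
```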
Bharath Raj
2.2K
15
https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced?source=---------2----------------
Data Augmentation | How to use Deep Learning when you have Limited Data — Part 2
We have all been there. You have a stellar concept that can be implemented using a machine learning model. Feeling ebullient, you open your web browser and search for relevant data. Chances are, you find a dataset that has around a few hundred images. You recall that most popular datasets have images in the order of tens of thousands (or more). You also recall someone mentioning having a large dataset is crucial for good performance. Feeling disappointed, you wonder; can my “state-of-the-art” neural network perform well with the meagre amount of data I have? The answer is, yes! But before we get into the magic of making that happen, we need to reflect upon some basic questions. When you train a machine learning model, what you’re really doing is tuning its parameters such that it can map a particular input (say, an image) to some output (a label). Our optimization goal is to chase that sweet spot where our model’s loss is low, which happens when your parameters are tuned in the right way. Naturally, if you have a lot of parameters, you would need to show your machine learning model a proportional amount of examples, to get good performance. Also, the number of parameters you need is proportional to the complexity of the task your model has to perform. You don’t need to hunt for novel new images that can be added to your dataset. Why? Because, neural networks aren’t smart to begin with. For instance, a poorly trained neural network would think that these three tennis balls shown below, are distinct, unique images. So, to get more data, we just need to make minor alterations to our existing dataset. Minor changes such as flips or translations or rotations. Our neural network would think these are distinct images anyway. A convolutional neural network that can robustly classify objects even if its placed in different orientations is said to have the property called invariance. More specifically, a CNN can be invariant to translation, viewpoint, size or illumination (Or a combination of the above). This essentially is the premise of data augmentation. In the real world scenario, we may have a dataset of images taken in a limited set of conditions. But, our target application may exist in a variety of conditions, such as different orientation, location, scale, brightness etc. We account for these situations by training our neural network with additional synthetically modified data. Yes. It can help to increase the amount of relevant data in your dataset. This is related to the way with which neural networks learn. Let me illustrate it with an example. Imagine that you have a dataset, consisting of two brands of cars, as shown above. Let’s assume that all cars of brand A are aligned exactly like the picture in the left (i.e. All cars are facing left) . Likewise, all cars of brand B are aligned exactly like the picture in the right (i.e. Facing right) . Now, you feed this dataset to your “state-of-the-art” neural network, and you hope to get impressive results once it’s trained. Let’s say it’s done training, and you feed the image above, which is a Brand A car. But your neural network outputs that it’s a Brand B car! You’re confused. Didn’t you just get a 95% accuracy on your dataset using your “state-of-the-art” neural network? I’m not exaggerating, similar incidents and goof-ups have occurred in the past. Why does this happen? It happens because that’s how most machine learning algorithms work. It finds the most obvious features that distinguishes one class from another. 
Here, the feature was that all cars of Brand A were facing left, and all cars of Brand B are facing right. How do we prevent this happening? We have to reduce the amount of irrelevant features in the dataset. For our car model classifier above, a simple solution would be to add pictures of cars of both classes, facing the other direction to our original dataset. Better yet, you can just flip the images in the existing dataset horizontally such that they face the other side! Now, on training the neural network on this new dataset, you get the performance that you intended to get. Before we dive into the various augmentation techniques, there’s one issue that we must consider beforehand. The answer may seem quite obvious; we do augmentation before we feed the data to the model right? Yes, but you have two options here. One option is to perform all the necessary transformations beforehand, essentially increasing the size of your dataset. The other option is to perform these transformations on a mini-batch, just before feeding it to your machine learning model. The first option is known as offline augmentation. This method is preferred for relatively smaller datasets, as you would end up increasing the size of the dataset by a factor equal to the number of transformations you perform (For example, by flipping all my images, I would increase the size of my dataset by a factor of 2). The second option is known as online augmentation, or augmentation on the fly. This method is preferred for larger datasets, as you can’t afford the explosive increase in size. Instead, you would perform transformations on the mini-batches that you would feed to your model. Some machine learning frameworks have support for online augmentation, which can be accelerated on the GPU. In this section, we present some basic but powerful augmentation techniques that are popularly used. Before we explore these techniques, for simplicity, let us make one assumption. The assumption is that, we don’t need to consider what lies beyond the image’s boundary. We’ll use the below techniques such that our assumption is valid. What would happen if we use a technique that forces us to guess what lies beyond an image’s boundary? In this case, we need to interpolate some information. We’ll discuss this in detail after we cover the types of augmentation. For each of these techniques, we also specify the factor by which the size of your dataset would get increased (aka. Data Augmentation Factor). You can flip images horizontally and vertically. Some frameworks do not provide function for vertical flips. But, a vertical flip is equivalent to rotating an image by 180 degrees and then performing a horizontal flip. Below are examples for images that are flipped. You can perform flips by using any of the following commands, from your favorite packages. Data Augmentation Factor = 2 to 4x One key thing to note about this operation is that image dimensions may not be preserved after rotation. If your image is a square, rotating it at right angles will preserve the image size. If it’s a rectangle, rotating it by 180 degrees would preserve the size. Rotating the image by finer angles will also change the final image size. We’ll see how we can deal with this issue in the next section. Below are examples of square images rotated at right angles. You can perform rotations by using any of the following commands, from your favorite packages. Data Augmentation Factor = 2 to 4x The image can be scaled outward or inward. 
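The flip and rotation commands referred to above (“any of the following commands, from your favorite packages”) were not preserved in this text. Before moving on to scaling, here are hedged one-liner equivalents using NumPy and TensorFlow; the stand-in image shape and the choice of these two libraries are assumptions, not the original post’s snippets.

```python
import numpy as np
import tensorflow as tf

image = np.random.rand(224, 224, 3).astype(np.float32)  # stand-in H x W x C image

# Flips
flipped_lr = np.fliplr(image)                          # horizontal flip (NumPy)
flipped_ud = np.flipud(image)                          # vertical flip (NumPy)
tf_flip    = tf.image.flip_left_right(image)           # horizontal flip (TensorFlow)
tf_random  = tf.image.random_flip_left_right(image)    # flip applied at random

# Rotations at right angles (image size is preserved for square images)
rot_90  = np.rot90(image, k=1)       # rotate 90 degrees
rot_180 = np.rot90(image, k=2)       # rotate 180 degrees (equivalent to both flips)
tf_rot  = tf.image.rot90(image, k=1)
```

In an online-augmentation pipeline, the random variants above would be applied per mini-batch rather than written to disk.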
While scaling outward, the final image size will be larger than the original image size. Most image frameworks cut out a section from the new image, with size equal to the original image. We’ll deal with scaling inward in the next section, as it reduces the image size, forcing us to make assumptions about what lies beyond the boundary. Below are examples or images being scaled. You can perform scaling by using the following commands, using scikit-image. Data Augmentation Factor = Arbitrary. Unlike scaling, we just randomly sample a section from the original image. We then resize this section to the original image size. This method is popularly known as random cropping. Below are examples of random cropping. If you look closely, you can notice the difference between this method and scaling. You can perform random crops by using any the following command for TensorFlow. Data Augmentation Factor = Arbitrary. Translation just involves moving the image along the X or Y direction (or both). In the following example, we assume that the image has a black background beyond its boundary, and are translated appropriately. This method of augmentation is very useful as most objects can be located at almost anywhere in the image. This forces your convolutional neural network to look everywhere. You can perform translations in TensorFlow by using the following commands. Data Augmentation Factor = Arbitrary. Over-fitting usually happens when your neural network tries to learn high frequency features (patterns that occur a lot) that may not be useful. Gaussian noise, which has zero mean, essentially has data points in all frequencies, effectively distorting the high frequency features. This also means that lower frequency components (usually, your intended data) are also distorted, but your neural network can learn to look past that. Adding just the right amount of noise can enhance the learning capability. A toned down version of this is the salt and pepper noise, which presents itself as random black and white pixels spread through the image. This is similar to the effect produced by adding Gaussian noise to an image, but may have a lower information distortion level. You can add Gaussian noise to your image by using the following command, on TensorFlow. Data Augmentation Factor = 2x. Real world, natural data can still exist in a variety of conditions that cannot be accounted for by the above simple methods. For instance, let us take the task of identifying the landscape in photograph. The landscape could be anything: freezing tundras, grasslands, forests and so on. Sounds like a pretty straight forward classification task right? You’d be right, except for one thing. We are overlooking a crucial feature in the photographs that would affect the performance — The season in which the photograph was taken. If our neural network does not understand the fact that certain landscapes can exist in a variety of conditions (snow, damp, bright etc.), it may spuriously label frozen lakeshores as glaciers or wet fields as swamps. One way to mitigate this situation is to add more pictures such that we account for all the seasonal changes. But that is an arduous task. Extending our data augmentation concept, imagine how cool it would be to generate effects such as different seasons artificially? Without going into gory detail, conditional GANs can transform an image from one domain to an image to another domain. If you think it sounds too vague, it’s not; that’s literally how powerful this neural network is! 
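Likewise, the crop, translation, and Gaussian-noise commands referred to above were stripped from this text. The snippet below is a hedged reconstruction using TensorFlow 2.x and SciPy (the original post used TensorFlow only, so the SciPy translation is a substitution); sizes and noise levels are illustrative.

```python
import numpy as np
import tensorflow as tf
from scipy.ndimage import shift

image = np.random.rand(256, 256, 3).astype(np.float32)

# Random crop: sample a 224x224 window, then resize back to the original size
cropped = tf.image.random_crop(image, size=[224, 224, 3])
resized = tf.image.resize(cropped, [256, 256])

# Translation: move the image 20 px right and 10 px down, filling the
# undefined region with black (constant 0), as assumed above
translated = shift(image, shift=(10, 20, 0), mode='constant', cval=0.0)

# Gaussian noise with zero mean
noisy = image + tf.random.normal(shape=image.shape, mean=0.0, stddev=0.05)
```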
Below is an example of conditional GANs used to transform photographs of summer sceneries to winter sceneries. The above method is robust, but computationally intensive. A cheaper alternative would be something called neural style transfer. It grabs the texture/ambiance/appearance of one image (aka, the “style”) and mixes it with the content of another. Using this powerful technique, we produce an effect similar to that of our conditional GAN (In fact, this method was introduced before cGANs were invented!). The only downside of this method is that, the output tends to looks more artistic rather than realistic. However, there are certain advancements such as Deep Photo Style Transfer, shown below, that have impressive results. We have not explored these techniques in great depth as we are not concerned with their inner working. We can use existing trained models, along with the magic of transfer learning, to use it for augmentation. What if you wanted to translate an image that doesn’t have a black background? What if you wanted to scale inward? Or rotate in finer angles? After we perform these transformations, we need to preserve our original image size. Since our image does not have any information about things outside it’s boundary, we need to make some assumptions. Usually, the space beyond the image’s boundary is assumed to be the constant 0 at every point. Hence, when you do these transformations, you get a black region where the image is not defined. But is that the right assumption? In the real world scenario, it’s mostly a no. Image processing and ML frameworks have some standard ways with which you can decide on how to fill the unknown space. They are defined as follows. The simplest interpolation method is to fill the unknown region with some constant value. This may not work for natural images, but can work for images taken in a monochromatic background The edge values of the image are extended after the boundary. This method can work for mild translations. The image pixel values are reflected along the image boundary. This method is useful for continuous or natural backgrounds containing trees, mountains etc. This method is similar to reflect, except for the fact that, at the boundary of reflection, a copy of the edge pixels are made. Normally, reflect and symmetric can be used interchangeably, but differences will be visible while dealing with very small images or patterns. The image is just repeated beyond its boundary, as if it’s being tiled. This method is not as popularly used as the rest as it does not make sense for a lot of scenarios. Besides these, you can design your own methods for dealing with undefined space, but usually these methods would just do fine for most classification problems. If you use it in the right way, then yes! What is the right way you ask? Well, sometimes not all augmentation techniques make sense for a dataset. Consider our car example again. Below are some of the ways by which you can modify the image. Sure, they are pictures of the same car, but your target application may never see cars presented in these orientations. For instance, if you’re just going to classify random cars on the road, only the second image would make sense to be on the dataset. But, if you own an insurance company that deals with car accidents, and you want to identify models of upside-down, broken cars as well, the third image makes sense. The last image may not make sense for both the above scenarios. 
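To make the boundary-filling discussion above concrete, here is a small SciPy example showing the same translation under different fill modes. SciPy’s mode names only roughly correspond to the labels used above (its 'nearest' extends edge values, 'mirror' reflects without repeating the edge pixel, 'reflect' repeats it, and 'wrap' tiles), so treat the mapping as approximate.

```python
import numpy as np
from scipy.ndimage import shift

image = np.random.rand(64, 64).astype(np.float32)

# The same 10-pixel translation with different ways of filling
# the undefined region beyond the original boundary:
constant  = shift(image, 10, mode='constant', cval=0.0)  # fill with a constant value
edge      = shift(image, 10, mode='nearest')             # extend the edge values
reflected = shift(image, 10, mode='mirror')              # reflect about the boundary
symmetric = shift(image, 10, mode='reflect')             # reflect, repeating the edge pixel
wrapped   = shift(image, 10, mode='wrap')                # tile the image beyond its boundary
```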
The point is, while using augmentation techniques, we have to make sure to not increase irrelevant data. You’re probably expecting some results to motivate you to walk the extra mile. Fair enough; I’ve got that covered too. Let me prove that augmentation really works, using a toy example. You can replicate this experiment to verify. Let’s create two neural networks to classify data to one among four classes: cat, lion, tiger or a leopard. The catch is, one will not use data augmentation, whereas the other will. You can download the dataset from here link. If you’ve checked out the dataset, you’ll notice that there’s only 50 images per class for both training and testing. Clearly, we can’t use augmentation for one of the classifiers. To make the odds more fair, we use Transfer Learning to give the models a better chance with the scarce amount of data. For the one without augmentation, let’s use a VGG19 network. I’ve written a TensorFlow implementation here, which is based on this implementation. Once you’ve cloned my repo, you can get the dataset from here, and vgg19.npy (used for transfer learning) from here. You can now run the model to verify the performance. I would agree though, writing extra code for data augmentation is indeed a bit of an effort. So, to build our second model, I turned to Nanonets. They internally use transfer learning and data augmentation to provide the best results using minimal data. All you need to do is upload the data on their website, and wait until it’s trained in their servers (Usually around 30 minutes). What do you know, it’s perfect for our comparison experiment. Once it’s done training, you can request calls to their API to calculate the test accuracy. Checkout out my repo for a sample code snippet(Don’t forget to insert your model’s ID in the code snippet). Impressive isn’t it. It is a fact that most models perform well with more data. So to provide a concrete proof, I’ve mentioned the table below. It shows the error rate of popular neural networks on the Cifar 10 (C10) and Cifar 100 (C100) datasets. C10+ and C100+ columns are the error rates with data augmentation. Thank you for reading this article! Hit that clap button if you did! Hope it shed some light about data augmentation. If you have any questions, you could hit me up on social media or send me an email (bharathrajn98@gmail.com). From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Undergrad | Computer Vision and AI Enthusiast | Hungry NanoNets: Machine Learning API
Daniel Rothmann
302
8
https://towardsdatascience.com/human-like-machine-hearing-with-ai-1-3-a5713af6e2f8?source=---------3----------------
Human-Like Machine Hearing With AI (1/3) – Towards Data Science
Significant breakthroughs in AI technology have been achieved through modeling human systems. While artificial neural networks (NNs) are mathematical models which are only loosely coupled with the way actual human neurons function, their application in solving complex and ambiguous real-world problems has been profound. Additionally, modeling the architectural depth of the brain in NNs has opened up broad possibilities in learning more meaningful representations of data. In image recognition and processing, the inspiration from the complex and more spatially invariant cells of the visual system in CNNs has also produced great improvements to our technologies. If you’re interested in applying image recognition technologies on audio spectrograms, check out my article “What’s wrong with CNNs and spectrograms for audio processing?”. As long as human perceptual capacity exceeds that of machines, we stand to gain by understanding the principles of human systems. Humans are very skillful when it comes to perceptual tasks and the contrast between human understanding and the status quo of AI becomes particularly apparent in the area of machine hearing. Considering the benefits reaped from getting inspired by human systems in visual processing, I propose that we stand to gain from a similar process in machine hearing with neural networks. In this article series, I will detail a framework for real-time audio signal processing with AI which was developed in cooperation with Aarhus University and intelligent loudspeaker manufacturer Dynaudio A/S. Its inspiration is primarily drawn from cognitive science which attempts to combine perspectives of biology, neuroscience, psychology and philosophy to gain greater understanding of our cognitive faculties. Perhaps the most abstract domain of sound is how we, as humans, perceive it. While a solution for a signal processing problem has to operate within the parameters of intensity, spectral and temporal properties on a low level, the end goal is most often a cognitive one: Transforming a signal in such a way that our perceptions of the sounds it contains are altered. If one wishes to programatically change the gender of a recorded spoken voice for example, it is necessary to describe this problem in more meaningful terms before defining its lower level characteristics. The gender of a speaker can be conceived as a cognitive property which is constructed from many factors: General pitch and timbre of a voice, differences in pronunciation, differences in choice of words and language and a common understanding of how these properties relate to gender. These parameters can be described in lower level features like intensity, spectral and temporal properties but only in more complex combinations do they form high-level representations. This forms a hierarchy of audio features from which the “meaning” of a sound can be derived. The cognitive property representing a human voice can be thought of as a combinatory pattern of temporal developments in a sound’s intensity, spectral and statistical properties. NNs are great at extracting abstracted representations of data and are therefore well suited for the task of detecting cognitive properties in sound. In order to build a system for this purpose, let’s examine how sound is represented in human auditory organs that we can use to inspire representation of sound for processing with NNs. Hearing in humans starts at the outer ear which firstly consists of the pinna. 
The pinna acts as a form of spectral preprocessing in which the incoming sound is modified depending on its direction in relation to the listener. Sound then travels through the opening in the pinna into the ear canal which further acts to modify spectral properties of incoming sound by resonating in a way that amplifies frequencies in the range ~1–6 kHz [1]. As sound waves reach the end of the ear canal, they excite the eardrum onto which the ossicles (the smallest bones in the body) are attached. These bones transmit the pressure from the ear canal to the fluid-filled cochlea in the inner ear [1]. The cochlea is of great interest in guiding sound representation for NNs because this is the organ responsible for transducing acoustic vibrations into neural activity in humans . It is a coiled tube which is separated along its length by two membranes being the Reissner’s membrane and the basilar membrane. Along the length of the cochlea, there is a row of around 3,500 inner hair cells [1]. As pressures enter the cochlea, its two membranes are pushed down. The basilar membrane is narrow and stiff at its base but loose and wide at its apex so that each place along its length responds more intensely at a particular frequency. To simplify, the basilar membrane can be thought of as a continuous array of bandpass filters which, along the length of the membrane, acts to separate sounds into their spectral components. This is the primary mechanism by which humans convert sound pressures into neural activity. Therefore, it is reasonable to assume that spectral representations of audio would be beneficial in modeling sound perception with AI. Since frequency responses along the basilar membrane vary exponentially [2], logarithmic frequency representations might prove most efficient. One such representation could be derived using a gammatone filterbank. These filters are commonly applied in modeling spectral filtering in the auditory system since they approximate the impulse response of human auditory filters derived from the measured auditory nerve fiber response to white noise stimuli called the “revcor” function [3]. Since the cochlea has ~3500 inner hair cells and humans can detect gaps in sounds down to ~2–5 ms in length [1], a spectral resolution of 3500 gammatone filters separated into 2 ms windows seem optimal parameters for achieving human-like spectral representation in machines. In practical scenarios however, I assume that lesser resolutions could still achieve desirable effects in most analysis and processing tasks while being more viable from a computational standpoint. A number of software libraries for auditory analysis are available online. A notable example is the Gammatone Filterbank Toolkit by Jason Heeris. It provides adjustable filters as well as tools for spectrogram-like analysis of audio signals with gammatone filters. As neural activity moves from the cochlea onto the auditory nerve and the ascending auditory pathways, a number of processes are applied in brainstem nuclei before it reaches the auditory cortex. These processes form a neural code which represents an interface between stimulus and perception [4]. Much knowledge about the specific inner workings of these nuclei is still speculative or unknown, so I will detail these nuclei only at their higher levels of functioning. Humans have a set of these nuclei for each ear that are interconnected, but for simplicity, I’ve illustrated the flow for only one ear. 
The cochlear nucleus is the first coding step for neural signals coming from the auditory nerve. It consists of a variety of neurons with different properties which serve to perform initial processing of sound features, some of which are directed to the superior olive which is associated with sound localization while others are directed to the lateral lemniscus and inferior colliculus, commonly associated with more advanced features [1]. J. J. Eggermont details this flow of information from the cochlear nucleus in “Between sound and perception: reviewing the search for a neural code” as follows: “The ventral [cochlear nucleus] (VCN) extracts and enhances the frequency and timing information that is multiplexed in the firing patterns of the [auditory nerve] fibers, and distributes the results via two main pathways: the sound localization path and the sound identification path. The anterior part of the VCN (AVCN) mainly serves the sound localization aspects and its two types of bushy cells provide input to the superior olivary complex (SOC), where interaural time differences (ITDs) and level differences (ILDs) are mapped for each frequency separately” [4]. The information carried by the sound identification pathway is a representation of complex spectra such as vowels. This representation is mainly created in the ventral cochlear nucleus by special types of units dubbed “chopper” (stellate) neurons [4]. The details of these auditory encodings are difficult to specify but they indicate to us that a form of “coding” of incoming frequency spectra could improve understanding of low level sound features as well as making sound impressions less expensive to process in NNs. We can apply the unsupervised autoencoder NN architecture as an attempt to learn common properties associated with complex spectra. Like word embeddings, its possible to find commonalities in frequency spectra that represent select features (or a more tightly condensed meaning) of sounds. An autoencoder is trained to encode an input into a compressed representation that can be reconstructed back into a representation with a high similarity to the input. This means that the autoencoder’s target output is the input itself [5]. If an input can be reconstructed without great loss, the network has learnt to encode it in such a way that the compressed internal representation contains enough meaningful information. This internal representation is then what we refer to as the embedding. The encoding part of the autoencoder can be decoupled from the decoder to generate embeddings for other applications. Embeddings also have the benefit that they are often of lower dimensionality than the original data. For instance, an autoencoder could compress a frequency spectrum with a total of 3500 values into a vector with a length of 500 values. Put simply, each value of such a vector could describe higher level factors of a spectrum such as vowel, harshness or harmonicity - These are only examples, as the meaning of statistically common factors derived by an autoencoder might often be difficult to label in plain language. In the next article, we will expand upon this idea with added memory to produce embeddings for temporal developments of audio frequency spectra. This wraps up the first part of my article series on audio processing with artificial intelligence. Next, we will discuss the essential concepts of sensory memory and temporal dependencies in sound. Follow to stay updated and feel free to leave claps if you enjoyed the article! 
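Below is a minimal sketch of such an autoencoder, assuming tf.keras and the 3,500-filter spectra and 500-value embeddings used as examples above; the hidden-layer sizes, activations, and random stand-in data are my assumptions, not part of the original article.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_bands, n_embed = 3500, 500   # spectral resolution and embedding size from the text

# Encoder: spectrum -> compressed embedding
inputs = keras.Input(shape=(n_bands,))
encoded = layers.Dense(1500, activation='relu')(inputs)
embedding = layers.Dense(n_embed, activation='relu')(encoded)

# Decoder: embedding -> reconstructed spectrum
decoded = layers.Dense(1500, activation='relu')(embedding)
outputs = layers.Dense(n_bands, activation='linear')(decoded)

autoencoder = keras.Model(inputs, outputs)
encoder = keras.Model(inputs, embedding)   # decoupled encoder for generating embeddings

# The target output is the input itself
autoencoder.compile(optimizer='adam', loss='mse')
spectra = np.abs(np.random.randn(1024, n_bands)).astype('float32')  # stand-in spectra
autoencoder.fit(spectra, spectra, epochs=1, batch_size=64, verbose=0)

embeddings = encoder.predict(spectra[:8])  # 8 x 500 compressed representations
```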
As always, feel free to connect with me on LinkedIn to stay in touch. [1] C. J. Plack, The Sense of Hearing, 2nd ed. Psychology Press, 2014. [2] S. J. Elliott and C. A. Shera, “The cochlea as a smart structure,” Smart Mater. Struct., vol. 21, no. 6, p. 64001, Jun. 2012. [3] A.M. Darling, “Properties and implementation of the gammatone filter: A tutorial”, Speech hearing and language, University College London, 1991. [4] J. J. Eggermont, “Between sound and perception: reviewing the search for a neural code.,” Hear. Res., vol. 157, no. 1–2, pp. 1–42, Jul. 2001. [5] T. P. Lillicrap et al., Learning Deep Architectures for AI, vol. 2, no. 1. 2015. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. AI Engineer @ Convai. Especially interested in audio and time series forecasting. Reach us at convai.dk Sharing concepts, ideas, and codes.
Amine Aoullay
58
4
https://towardsdatascience.com/how-to-use-noise-to-your-advantage-5301071d9dc3?source=---------4----------------
How to use Noise to your advantage ? – Towards Data Science
For scientists, random fluctuations, or noise, are usually undesirable. Although noise is typically assumed to degrade performance, it can sometimes improve information processing in non-linear systems. In this post we’ll see some examples where noise can be used to our advantage. Recent works have shown that, by allowing some inaccuracy when training deep neural networks, not only the training performance but also the accuracy of the model can be improved. Neural networks are capable of learning output functions that can change wildly with small changes in input. Adding noise to inputs randomly is like telling the network not to change the output in a ball around your exact input (a minimal Keras sketch of this idea appears at the end of this post). By limiting the amount of information in a network, we force it to learn compact representations of input features. RL is an area of machine learning that assumes there is an agent situated in an environment. At each step, the agent takes an action, and it receives an observation and reward from the environment. An RL algorithm seeks to maximize the agent’s total reward, given a previously unknown environment, through a learning process that usually involves lots of trial and error. To understand the challenge of exploration in deep RL systems, think about researchers who spend a lot of time in a lab without producing any practical application. Equivalently, RL agents can spend a huge amount of resources without even converging to a local optimum. OpenAI proposes a technique called Parameter-Space-Noise, which introduces noise into the model’s policy parameters at the beginning of each episode. Other approaches focused on what is known as Action-Space-Noise, which introduces noise to change the likelihoods associated with each action the agent might take from one moment to the next. The initial results of the Parameter-Space-Noise model proved to be really promising. The technique helps algorithms explore their environments more effectively, leading to higher scores and more elegant behaviors. More details can be found in the research paper. The important thing to remember is that adding noise was used to boost the exploration performance of reinforcement learning algorithms. Boosting recognition isn’t as simple as throwing more labeled images at these systems. Indeed, manually annotating a large number of images is an expensive and time-consuming process. Facebook researchers and engineers have addressed this by training image recognition networks on large sets of public images with hashtags. Since people often caption their photos with hashtags, they are a good source of training data for models. Facebook developed new approaches that are tailored to doing image recognition experiments using hashtag supervision. This study is described in detail in “Exploring the Limits of Weakly Supervised Pretraining”. On the COCO object-detection challenge, it has been shown that using hashtags for pretraining can boost the average precision of a model by more than 2 percent. Noise should not be our enemy! It isn’t always an unwanted disturbance; it can often be used to our advantage and even serve as a valuable research tool. If anyone tries to tell you otherwise, just point them to the examples presented here... Stay tuned, and if you liked this article, please leave a 👏!
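As the concrete illustration of the input-noise idea promised above, here is a minimal tf.keras sketch that adds zero-mean Gaussian noise to the inputs during training via the GaussianNoise layer (it is inactive at inference time). The toy architecture, input size, and noise level are assumptions for illustration only.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.GaussianNoise(0.1, input_shape=(32,)),  # perturb inputs only while training
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
```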
[1] Weakly-supervised-pretraining: https://research.fb.com/publications/exploring-the-limits-of-weakly-supervised-pretraining/ [2] Better Exploration with Parameter Noise: https://blog.openai.com/better-exploration-with-parameter-noise/ From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. MSc in Machine Learning (MVA) @ ENS Paris-Saclay Sharing concepts, ideas, and codes.
Jonathan Balaban
804
5
https://towardsdatascience.com/deep-learning-tips-and-tricks-1ef708ec5f53?source=---------5----------------
Deep Learning Tips and Tricks – Towards Data Science
Below is a distilled collection of conversations, messages, and debates I’ve had with peers and students on how to optimize deep models. If you have tricks you’ve found impactful, please share them!! Deep learning models like the Convolutional Neural Network (CNN) have a massive number of parameters; we can actually call these hyper-parameters because they are not optimized inherently in the model. You could gridsearch the optimal values for these hyper-parameters, but you’ll need a lot of hardware and time. So, does a true data scientist settle for guessing these essential parameters? One of the best ways to improve your models is to build on the design and architecture of the experts who have done deep research in your domain, often with powerful hardware at their disposal. Graciously, they often open-source the resulting modeling architectures and rationale. Here are a few ways you can improve your fit time and accuracy with pre-trained models: Here’s how to modify dropout and limit weight sizes in Keras with MNIST: Here’s an example of final layer modification in Keras with 14 classes for MNIST: And an example of how to freeze weights in the first five layers: Alternatively, we can set the learning rate to zero for that layer, or use per-parameter adaptive learning algorithm like Adadelta or Adam. This is somewhat complicated and better implemented in other platforms, like Caffe. It’s often essential to get a visual idea of how your model looks. If you’re working in Keras, abstraction is nice but doesn’t allow you to drill down into sections of your model for deeper analysis. Fortunately, the code below lets us visualize our models directly with Python: This will plot a graph of the model and save it as a png file: plot takes two optional arguments: You can also directly obtain the pydot.Graph object and render it yourself, for example to show it in an ipython notebook : I hope this collection helps with your modeling endeavors! Let me know your best tricks, and connect with me on Twitter and LinkedIn! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Data Science Nomad Sharing concepts, ideas, and codes.
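The Keras snippets this article refers to (dropout and weight-size limits, final-layer modification, freezing the first five layers, and plotting the model) were not preserved in this text. The sketch below is a hedged reconstruction using tf.keras; the layer sizes, the VGG16 base used to illustrate freezing, and the assumption that the two optional plot arguments are show_shapes and show_layer_names are mine, not the author’s exact code. Note that plot_model additionally requires pydot and graphviz to be installed.

```python
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.constraints import max_norm
from tensorflow.keras.utils import plot_model

# Dropout plus a cap on weight norms (MNIST-style input, 14 output classes
# as in the final-layer modification example mentioned above)
model = keras.Sequential([
    layers.Dense(128, activation='relu', kernel_constraint=max_norm(3.0), input_shape=(784,)),
    layers.Dropout(0.5),
    layers.Dense(14, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')

# Freeze the weights in the first five layers of a pre-trained base
base = keras.applications.VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
for layer in base.layers[:5]:
    layer.trainable = False

# Plot a graph of the model and save it as a png file
plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=True)
```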
Arthur Juliani
9K
6
https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0?source=---------6----------------
Simple Reinforcement Learning with Tensorflow Part 0: Q-Learning with Tables and Neural Networks
For this tutorial in my Reinforcement Learning series, we are going to be exploring a family of RL algorithms called Q-Learning algorithms. These are a little different than the policy-based algorithms that will be looked at in the the following tutorials (Parts 1–3). Instead of starting with a complex and unwieldy deep neural network, we will begin by implementing a simple lookup-table version of the algorithm, and then show how to implement a neural-network equivalent using Tensorflow. Given that we are going back to basics, it may be best to think of this as Part-0 of the series. It will hopefully give an intuition into what is really happening in Q-Learning that we can then build on going forward when we eventually combine the policy gradient and Q-learning approaches to build state-of-the-art RL agents (If you are more interested in Policy Networks, or already have a grasp on Q-Learning, feel free to start the tutorial series here instead). Unlike policy gradient methods, which attempt to learn functions which directly map an observation to an action, Q-Learning attempts to learn the value of being in a given state, and taking a specific action there. While both approaches ultimately allow us to take intelligent actions given a situation, the means of getting to that action differ significantly. You may have heard about DeepQ-Networks which can play Atari Games. These are really just larger and more complex implementations of the Q-Learning algorithm we are going to discuss here. For this tutorial we are going to be attempting to solve the FrozenLake environment from the OpenAI gym. For those unfamiliar, the OpenAI gym provides an easy way for people to experiment with their learning agents in an array of provided toy games. The FrozenLake environment consists of a 4x4 grid of blocks, each one either being the start block, the goal block, a safe frozen block, or a dangerous hole. The objective is to have an agent learn to navigate from the start to the goal without moving onto a hole. At any given time the agent can choose to move either up, down, left, or right. The catch is that there is a wind which occasionally blows the agent onto a space they didn’t choose. As such, perfect performance every time is impossible, but learning to avoid the holes and reach the goal are certainly still doable. The reward at every step is 0, except for entering the goal, which provides a reward of 1. Thus, we will need an algorithm that learns long-term expected rewards. This is exactly what Q-Learning is designed to provide. In it’s simplest implementation, Q-Learning is a table of values for every state (row) and action (column) possible in the environment. Within each cell of the table, we learn a value for how good it is to take a given action within a given state. In the case of the FrozenLake environment, we have 16 possible states (one for each block), and 4 possible actions (the four directions of movement), giving us a 16x4 table of Q-values. We start by initializing the table to be uniform (all zeros), and then as we observe the rewards we obtain for various actions, we update the table accordingly. We make updates to our Q-table using something called the Bellman equation, which states that the expected long-term reward for a given action is equal to the immediate reward from the current action combined with the expected reward from the best future action taken at the following state. In this way, we reuse our own Q-table when estimating how to update our table for future actions! 
In equation form, the rule looks like this: This says that the Q-value for a given state (s) and action (a) should represent the current reward (r) plus the maximum discounted (γ) future reward expected according to our own table for the next state (s’) we would end up in. The discount variable allows us to decide how important the possible future rewards are compared to the present reward. By updating in this way, the table slowly begins to obtain accurate measures of the expected future reward for a given action in a given state. Below is a Python walkthrough of the Q-Table algorithm implemented in the FrozenLake environment: (Thanks to Praneet D for finding the optimal hyperparameters for this approach) Now, you may be thinking: tables are great, but they don’t really scale, do they? While it is easy to have a 16x4 table for a simple grid world, the number of possible states in any modern game or real-world environment is nearly infinitely larger. For most interesting problems, tables simply don’t work. We instead need some way to take a description of our state, and produce Q-values for actions without a table: that is where neural networks come in. By acting as a function approximator, we can take any number of possible states that can be represented as a vector and learn to map them to Q-values. In the case of the FrozenLake example, we will be using a one-layer network which takes the state encoded in a one-hot vector (1x16), and produces a vector of 4 Q-values, one for each action. Such a simple network acts kind of like a glorified table, with the network weights serving as the old cells. The key difference is that we can easily expand the Tensorflow network with added layers, activation functions, and different input types, whereas all that is impossible with a regular table. The method of updating is a little different as well. Instead of directly updating our table, with a network we will be using backpropagation and a loss function. Our loss function will be sum-of-squares loss, where the difference between the current predicted Q-values, and the “target” value is computed and the gradients passed through the network. In this case, our Q-target for the chosen action is the equivalent to the Q-value computed in equation 1 above. Below is the Tensorflow walkthrough of implementing our simple Q-Network: While the network learns to solve the FrozenLake problem, it turns out it doesn’t do so quite as efficiently as the Q-Table. While neural networks allow for greater flexibility, they do so at the cost of stability when it comes to Q-Learning. There are a number of possible extensions to our simple Q-Network which allow for greater performance and more robust learning. Two tricks in particular are referred to as Experience Replay and Freezing Target Networks. Those improvements and other tweaks were the key to getting Atari-playing Deep Q-Networks, and we will be exploring those additions in the future. For more info on the theory behind Q-Learning, see this great post by Tambet Matiisen. I hope this tutorial has been helpful for those curious about how to implement simple Q-Learning algorithms! If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated! If you’d like to follow my work on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on Twitter @awjliani. 
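For reference, the equation described above can be written as Q(s, a) = r + γ · max_a′ Q(s′, a′), and in practice the table is nudged toward that target with a learning rate α: Q(s, a) ← Q(s, a) + α[r + γ · max_a′ Q(s′, a′) − Q(s, a)]. Since the Python walkthrough itself was not preserved in this text, below is a hedged sketch of the tabular FrozenLake agent as described, assuming the classic OpenAI gym API (FrozenLake-v0, with step returning four values); the hyperparameters are illustrative rather than the tuned values credited to Praneet D.

```python
import gym
import numpy as np

env = gym.make('FrozenLake-v0')

# 16 states x 4 actions, initialized to all zeros
Q = np.zeros([env.observation_space.n, env.action_space.n])
lr, gamma, num_episodes = 0.8, 0.95, 2000

for i in range(num_episodes):
    s = env.reset()
    for _ in range(100):
        # Greedy action with decaying random noise to encourage exploration
        a = np.argmax(Q[s, :] + np.random.randn(1, env.action_space.n) * (1.0 / (i + 1)))
        s1, r, done, _ = env.step(a)
        # Move Q(s, a) toward the Bellman target r + gamma * max_a' Q(s', a')
        Q[s, a] = Q[s, a] + lr * (r + gamma * np.max(Q[s1, :]) - Q[s, a])
        s = s1
        if done:
            break

print(Q)  # learned table of expected long-term rewards
```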
More from my Simple Reinforcement Learning with Tensorflow series: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @Unity3D & Cognitive Neuroscience PhD student. Exploring frontier technology through the lens of artificial intelligence, data science, and the shape of things to come
SAGAR SHARMA
2.5K
5
https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6?source=---------7----------------
Activation Functions: Neural Networks – Towards Data Science
What is an Activation Function? Why do we use Activation Functions with Neural Networks? The Activation Functions can be basically divided into 2 types: linear and nonlinear. Starting with the linear case: as you can see, the function is a line, or linear. Therefore, the output of the function will not be confined to any range. Equation : f(x) = x Range : (-infinity to infinity) It doesn’t help with the complexity or various parameters of the usual data that is fed to neural networks. The Nonlinear Activation Functions are the most used activation functions. Nonlinearity helps to make the graph look something like this. It makes it easy for the model to generalize or adapt to a variety of data and to differentiate between the outputs. The main terms needed to understand nonlinear functions are: The Nonlinear Activation Functions are mainly divided on the basis of their range or curves- 1. Sigmoid or Logistic Activation Function The Sigmoid Function curve looks like an S-shape. The main reason why we use the sigmoid function is that its output exists between (0 to 1). Therefore, it is especially used for models where we have to predict a probability as an output. Since the probability of anything exists only in the range of 0 and 1, sigmoid is the right choice. The function is differentiable. That means we can find the slope of the sigmoid curve at any point. The function is monotonic but the function’s derivative is not. The logistic sigmoid function can cause a neural network to get stuck during training. The softmax function is a more generalized logistic activation function which is used for multiclass classification. 2. Tanh or hyperbolic tangent Activation Function tanh is also like the logistic sigmoid but better. The range of the tanh function is from (-1 to 1). tanh is also sigmoidal (s-shaped). The advantage is that the negative inputs will be mapped strongly negative and the zero inputs will be mapped near zero in the tanh graph. The function is differentiable. The function is monotonic while its derivative is not monotonic. The tanh function is mainly used for classification between two classes. 3. ReLU (Rectified Linear Unit) Activation Function The ReLU is the most used activation function in the world right now, since it is used in almost all convolutional neural networks and deep learning models. As you can see, the ReLU is half rectified (from the bottom). f(z) is zero when z is less than zero and f(z) is equal to z when z is above or equal to zero. Range: [ 0 to infinity) The function and its derivative are both monotonic. But the issue is that all the negative values become zero immediately, which decreases the ability of the model to fit or train from the data properly. That means any negative input given to the ReLU activation function turns the value into zero immediately in the graph, which in turn affects the resulting graph by not mapping the negative values appropriately. 4. Leaky ReLU It is an attempt to solve the dying ReLU problem. Can you see the Leak? 😆 The leak helps to increase the range of the ReLU function. Usually, the value of a is 0.01 or so. When a is not 0.01 then it is called Randomized ReLU. Therefore the range of the Leaky ReLU is (-infinity to infinity). Both Leaky and Randomized ReLU functions are monotonic in nature. Also, their derivatives are monotonic in nature. I will be posting 2 posts per week so don’t miss the tutorial. So, follow me on Medium, Facebook, Twitter, LinkedIn, Google+, Quora to see similar posts. If you have any comments or questions, write them in the comments. Clap it! Share it! 
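For readers who prefer to see the shapes of these functions in code, here is a minimal numpy sketch of the four nonlinearities discussed above (the leak value a = 0.01 follows the text; the sample inputs are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))     # output in (0, 1), useful for probabilities

def tanh(x):
    return np.tanh(x)                    # output in (-1, 1), zero-centered

def relu(x):
    return np.maximum(0.0, x)            # negative inputs become exactly zero

def leaky_relu(x, a=0.01):
    return np.where(x > 0, x, a * x)     # small slope instead of zero for negatives

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(sigmoid(x), tanh(x), relu(x), leaky_relu(x), sep="\n")
```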
Follow Me! Happy to be helpful. kudos..... 2. Epoch vs Batch Size vs Iterations 3. Train Inception with Custom Images on CPU 4. TensorFlow Image Recognition Python API Tutorial On CPU From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. I am interested in Programming (Python, C++), Arduino, Machine learning :) I'm the editor of Arduino Community on Medium. I also like to write stuff. Sharing concepts, ideas, and codes.
Jae Duk Seo
33
6
https://towardsdatascience.com/principal-component-analysis-network-in-tensorflow-with-interactive-code-7be543047704?source=---------8----------------
Principal Component Analysis Network in Tensorflow with Interactive Code
A natural extension of the Principal Component Analysis pooling layer would be making a full neural network out of the layer. I wanted to know if this was even possible, as well as how well or how poorly it performs on MNIST data. Principal Component Analysis (PCA) Pooling Layer For anyone who is not familiar with PCAP please read this blog post first. The basic idea is that pooling layers such as max or mean pooling perform dimensionality reduction, not only to save computational power but also to act as a regularizer. PCA is a dimensionality reduction technique which converts correlated variables into a set of values of linearly uncorrelated variables called principal components. And we can take advantage of this operation to do a similar job as max/mean pooling. Network Composed of a Majority of Pooling Layers Now I know what you are thinking, it doesn’t make sense to have a network that is only composed of pooling layers while performing classification. And you are completely right! It doesn’t! But I just wanted to try this out for fun. Data Set / Network Architecture Blue Rectangle → PCAP or Max Pooling Layer; Green Rectangle → Convolution Layer to increase channel size + Global Average Pooling operation The network itself is very simple, only four pooling layers and one convolution layer to increase the channel size. However, in order for the dimensions to match up we will downsample each image to 16*16. Hence the Tensors will have a shape of ... [Batch Size,16,16,1] → [Batch Size,8,8,1] → [Batch Size,4,4,1] → [Batch Size,2,2,1] → [Batch Size,1,1,1] → [Batch Size,1,1,10] → [Batch Size,10] And we can perform classification with a softmax layer as any other network does. Results: Principal Component Network As seen above, the training accuracy has stagnated at 18 percent, which is horrible LOL. But I suspected that the network didn’t have enough learning capacity from the start and this was the best it could do. However I wanted to see how each PCAP layer transforms the image. Top Left Image → Original Input; Top Right Image → After First Layer; Bottom Left Image → After Second Layer; Bottom Right Image → After Fourth Layer One obvious pattern we can observe is the change of brightness. For example, if the top left pixel was white in the second layer, this pixel will change to black in the next layer. Currently, I am not 100% sure why this is happening, but with more study I hope to know exactly why. Results: Max Pooling Network As seen above, when we replace all of the PCAP layers with the max pooling operation we can observe that the accuracy on training images stagnated around 14 percent, confirming the fact that the network didn’t have enough learning capacity from the start. Top Left Image → Original Input; Top Right Image → After First Layer; Bottom Left Image → After Second Layer; Bottom Right Image → After Fourth Layer In contrast to PCAP, with max pooling we can clearly observe that the pixel with the highest intensity moves on to the next layer. This is expected, since that is what max pooling does. Interactive Code For Google Colab, you would need a Google account to view the code, and you can’t run read-only scripts in Google Colab, so make a copy in your playground. Finally, I will never ask for permission to access your files on Google Drive, just FYI. Happy Coding! To access the network with PCAP please click here. To access the network with Max Pooling please click here. 
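As a concrete illustration of the idea, here is one possible way to write a 2x2 PCA pooling step in numpy. This is only a sketch under the assumption that each 2x2 patch is projected onto the first principal component of all patches; the author's PCAP layer may be implemented differently.

```python
import numpy as np

def pca_pool_2x2(img):
    """Pool an HxW image down to (H/2)x(W/2) by projecting every 2x2 patch
    onto the first principal component of all patches."""
    H, W = img.shape
    patches = img.reshape(H // 2, 2, W // 2, 2).transpose(0, 2, 1, 3).reshape(-1, 4)
    centered = patches - patches.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)   # PCA via SVD
    pooled = centered @ vt[0]                                  # one value per patch
    return pooled.reshape(H // 2, W // 2)

img = np.random.rand(16, 16)     # stand-in for a downsampled MNIST digit
print(pca_pool_2x2(img).shape)   # (8, 8), matching the [16,16,1] -> [8,8,1] step above
```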
Final Words I wasn’t expecting much of this network from the start but I expected at least 30 percent accuracy on training / testing images LOL. If any errors are found, please email me at jae.duk.seo@gmail.com, if you wish to see the list of all of my writing please view my website here. Meanwhile follow me on my twitter here, and visit my website, or my Youtube channel for more content. I also implemented Wide Residual Networks, please click here to view the blog post. Reference From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. https://jaedukseo.me | | | | |Your everyday Seo, who likes kimchi Sharing concepts, ideas, and codes.
Jae Duk Seo
20
7
https://towardsdatascience.com/multi-stream-rnn-concat-rnn-internal-conv-rnn-lag-2-rnn-in-tensorflow-f4f17189a208?source=---------9----------------
Multi-Stream RNN, Concat RNN, Internal Conv RNN, Lag 2 RNN in Tensorflow
For the last two weeks I have been dying to implement different kinds of Recurrent Neural Networks (RNN) and finally I have the time to implement all of them. Below is the list of different RNN cases I wanted to try out. Case a: Vanilla Recurrent Neural Network; Case b: Multi-Stream Recurrent Neural Network; Case c: Concatenated Recurrent Neural Network; Case d: Internal Convolutional Recurrent Neural Network; Case e: Lag 2 Recurrent Neural Network Vanilla Recurrent Neural Network There are in total 5 different cases of RNN I wish to implement. However, in order to fully understand all of the implementations it would be a good idea to have a strong understanding of the vanilla RNN (Case a is a vanilla RNN, so if you understand the code for case a you are good to go.) If anyone wishes to review simple RNNs please visit my old blog post “Only Numpy: Vanilla Recurrent Neural Network Deriving Back propagation Through Time Practice”. Case a: Vanilla Recurrent Neural Network (Results) Red Box → 3 Convolutional Layers; Orange → Global Average Pooling and SoftMax; Green Circle → Hidden Unit at Time 0; Blue Circle → Input at 4 Time Stamps; Black Box → Recurrent Neural Network with 4 Time Stamps As seen above, the base network is a simple RNN combined with a convolutional neural network for classification. The RNN has a time stamp of 4, which means we are going to give the network 4 different kinds of input, one at each time stamp. And to do that I am going to add some noise to the original image. Blue Line → Train Cost Over Time; Orange Line → Train Accuracy Over Time; Green Line → Test Cost Over Time; Red Line → Test Accuracy Over Time As seen above our base network already performs well. Now the question is how the other methods perform and whether they are able to regularize better than our base network. Case b: Multi-Stream Recurrent Neural Network (Idea / Results) Red Box → 3 Convolutional Layers; Orange → Global Average Pooling and SoftMax; Green Circle → Hidden Unit at Time 0; Blue Circle → Convolution Input Stream; Yellow Circle → Fully Connected Network Stream; Black Box → Recurrent Neural Network with 4 Time Stamps The idea behind this RNN is simply to give different representations of the data to the RNN. In our base network we give the network either the raw image or the image with some noise added. Red Box → Additional Four CNN/FNN layers to ‘process’ the input; Blue Box → Creating Inputs at each different time stamp As seen below, our RNN now takes in an input tensor of size [batch_size, 26, 26, 1], reducing the width and the height by 2. And I was hoping that different representations of the data would act as regularization. (Similar to data augmentation) Blue Line → Train Cost Over Time; Orange Line → Train Accuracy Over Time; Green Line → Test Cost Over Time; Red Line → Test Accuracy Over Time As seen above the network did pretty well, and has outperformed our base network by 1 percent on the testing images. Case c: Concatenated Recurrent Neural Network (Idea / Results) Red Box → 3 Convolutional Layers; Orange → Global Average Pooling and SoftMax; Green Circle → Hidden Unit at Time 0; Blue Circle → Input at 4 Time Stamps; Black Box → Recurrent Neural Network with 4 Time Stamps; Black Curved Arrow → Concatenated Input for Each Time Stamp This approach is very simple; the idea was that at each time stamp different features will be extracted and it might be useful for the network to have more features over time. (For the Recurrent Layers.) 
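For readers who want the recurrence in case a spelled out, here is a minimal numpy sketch of a vanilla RNN unrolled for 4 time stamps. The shapes and initialization scale are illustrative assumptions, not the values used in the experiments above.

```python
import numpy as np

def rnn_step(x_t, h_prev, Wxh, Whh, b):
    # vanilla RNN: the new hidden state mixes the current input with the previous state
    return np.tanh(x_t @ Wxh + h_prev @ Whh + b)

T, batch, in_dim, hid = 4, 8, 196, 128          # 4 time stamps; feature/hidden sizes are made up
rng = np.random.RandomState(0)
Wxh = rng.randn(in_dim, hid) * 0.01
Whh = rng.randn(hid, hid) * 0.01
b = np.zeros(hid)

h = np.zeros((batch, hid))                       # hidden unit at time 0
for t in range(T):
    x_t = rng.randn(batch, in_dim)               # stand-in for the (noisy) image features
    h = rnn_step(x_t, h, Wxh, Whh, b)
print(h.shape)                                   # (8, 128): final state passed on to the classifier
```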
Blue Line → Train Cost Over Time; Orange Line → Train Accuracy Over Time; Green Line → Test Cost Over Time; Red Line → Test Accuracy Over Time Sadly, this was a huge failure. I guess the empty hidden values do not help (one bit) for the network to perform well. Case d: Internal Convolutional Recurrent Neural Network (Idea / Results) Red Box → 3 Convolutional Layers; Orange → Global Average Pooling and SoftMax; Green Circle → Hidden Unit at Time 0; Blue Circle → Input at 4 Time Stamps; Black Box → Recurrent Neural Network with 4 Time Stamps; Gray Arrow → Performing Internal Convolution before passing onto the next time stamp As seen above, this network takes in the exact same input as our base network. However, this time we are going to perform additional convolution operations on the internal representation of the data. Right Image → Declaring 3 new convolution layers; Left Image (Red Box) → If the current internal layer is not None, we are going to perform an additional convolution operation. I actually had no theoretical reason behind this implementation, I just wanted to see if it works LOL. Blue Line → Train Cost Over Time; Orange Line → Train Accuracy Over Time; Green Line → Test Cost Over Time; Red Line → Test Accuracy Over Time As seen above the network did a fine job at converging, however it was not able to outperform our base network. (Sadly.) Case e: Lag 2 Recurrent Neural Network (Idea / Results) Red Box → 3 Convolutional Layers; Orange → Global Average Pooling and SoftMax; Green Circle → Hidden Unit at Time 0 (or Lag of 1); Blue Circle → Input at 4 Time Stamps; Black Box → Recurrent Neural Network with 4 Time Stamps; Purple Circle → Hidden State Lag of 2 In a traditional RNN setting we only rely on the most recent previous values to determine the current value. For a while I was thinking that there is no reason for us to limit the look-back time (or lag) to 1. We can extend this idea to lag 3 or lag 4 etc. (Just for simplicity I took lag 2.) Blue Line → Train Cost Over Time; Orange Line → Train Accuracy Over Time; Green Line → Test Cost Over Time; Red Line → Test Accuracy Over Time Thankfully the network did better than the base network (but with a very small margin); this type of network would be most suitable for time series data. Interactive Code / Transparency For Google Colab, you would need a Google account to view the code, and you can’t run read-only scripts in Google Colab, so make a copy in your playground. Finally, I will never ask for permission to access your files on Google Drive, just FYI. Happy Coding! Also for transparency I uploaded all of the training logs on my github. To access the code for case a click here, for the logs click here. To access the code for case b click here, for the logs click here. To access the code for case c click here, for the logs click here. To access the code for case d click here, for the logs click here. To access the code for case e click here, for the logs click here. Final Words I have wanted to review RNNs for quite a long time now; finally I got to do it. If any errors are found, please email me at jae.duk.seo@gmail.com, and if you wish to see the list of all of my writing please view my website here. Meanwhile follow me on my twitter here, and visit my website, or my Youtube channel for more content. I also implemented Wide Residual Networks, please click here to view the blog post. Reference From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. https://jaedukseo.me | | | | |Your everyday Seo, who likes kimchi Sharing concepts, ideas, and codes.
Wallarm
72
4
https://lab.wallarm.com/tensorflow-dataset-api-for-increasing-training-speed-of-neural-networks-43a3050f2080?source=---------3----------------
TensorFlow Dataset API for increasing training speed of neural networks
The Wallarm AI engine is the heart of our security solution. Two key parameters of our AI engine efficiency are how fast neural networks can be trained to reflect the updated training sets and how much compute power needs to be dedicated to the training on an on-going basis. Many of our machine learning algorithms are written on top of TensorFlow, an open-source dataflow software library originally released by Google. Our average CPU load for the AI engine today is as high as 80%, so we are always looking for ways to speed things up in software. Our latest find is the Dataset API. Dataset is a mid-level TensorFlow API which makes working with data faster and more convenient. In this blog, we will measure just how much faster model training can be with Dataset, compared to the use of feed_dict. For starters, let’s prepare the data that will be used to train the model. A dataset can usually be stored in numpy arrays regardless of the kind of data it contains. That’s why we prepare all our data without TensorFlow and store it in .npz format, similar to this: https://github.com/wallarm/researches/blob/a719923f6a2da461deea0e01622d11cbfc8b057b/tf_ds_api/storing_in_npz_format.py#L1-L10 This step helps us avoid unnecessary data processing load on CPU and memory during model training. Now we are ready to train the model. First, let’s load the preprocessed data from disk: https://github.com/wallarm/researches/blob/a719923f6a2da461deea0e01622d11cbfc8b057b/tf_ds_api/load_from_npz.py#L1-L7. Next the data will be converted from numpy arrays into TensorFlow tensors (the tf.data.Dataset.from_tensor_slices method is used for that) and loaded into TensorFlow. The Dataset.from_tensor_slices method takes placeholders with the same size of the 0th dimension and returns a dataset object. Once the dataset is in TF, you can process it; for example, you can use the .map(f) function to transform the data. But we have already preprocessed our dataset, and all we need to do is apply batching and, maybe, shuffling. Fortunately, the Dataset API already has the needed functions: .batch and .shuffle. Ok, if we shuffle our dataset, how can we use it for production? It’s easy: we simply make another dataset without the data being shuffled. https://github.com/wallarm/researches/blob/a719923f6a2da461deea0e01622d11cbfc8b057b/tf_ds_api/datasets.py#L1-L5 The Dataset API has other good methods for preprocessing data. There is a comprehensive list of methods in the official docs. Next we should extract data from the dataset object step by step for each of the training epochs; tf.data.Iterator is tailor-made for this. TF currently supports four types of iterators. The reinitializable iterator is very useful; all we need to do to start the work is to create an iterator and initializers for it. iterator.get_next() yields the next elements of our dataset when executed. https://github.com/wallarm/researches/blob/a719923f6a2da461deea0e01622d11cbfc8b057b/tf_ds_api/iterator.py#L1-L8 To demonstrate the viability of using the Dataset API, let’s use the proposed approach for the MNIST dataset and for our corporate data. First, we prepared the data and after that we processed 1 and 5 epochs with the Dataset API and without it. The model for this MNIST example can be found on GitHub: https://github.com/wallarm/researches/blob/a719923f6a2da461deea0e01622d11cbfc8b057b/tf_ds_api/model.py#L1-L25 Below are the results we obtained on a machine with one Nvidia GTX 1080 and TF 1.8.0. All code for this experiment is available on GitHub [Link]. MNIST is a very small dataset, so the benefit of the Dataset API on it isn’t representative. 
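To tie the steps above together, here is a minimal TF 1.x sketch: arrays are stored once in .npz, loaded, wrapped with from_tensor_slices, batched and shuffled, and read through a reinitializable iterator. The array shapes, batch size, and file name are illustrative assumptions, not Wallarm's actual pipeline.

```python
import numpy as np
import tensorflow as tf  # TF 1.x graph-mode API, as used in the article

# store preprocessed arrays once, outside of TensorFlow
np.savez("train.npz", x=np.random.rand(1000, 32).astype(np.float32),
         y=np.random.randint(0, 2, 1000).astype(np.int64))

data = np.load("train.npz")
x_ph = tf.placeholder(tf.float32, data["x"].shape)
y_ph = tf.placeholder(tf.int64, data["y"].shape)

train_ds = tf.data.Dataset.from_tensor_slices((x_ph, y_ph)).shuffle(1000).batch(64)
prod_ds = tf.data.Dataset.from_tensor_slices((x_ph, y_ph)).batch(64)  # no shuffling

# reinitializable iterator: one pipeline, switchable between the two datasets
it = tf.data.Iterator.from_structure(train_ds.output_types, train_ds.output_shapes)
next_x, next_y = it.get_next()
train_init = it.make_initializer(train_ds)
prod_init = it.make_initializer(prod_ds)

with tf.Session() as sess:
    sess.run(train_init, feed_dict={x_ph: data["x"], y_ph: data["y"]})
    batch_x, batch_y = sess.run([next_x, next_y])   # feed these tensors into the model
```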
By contrast, the results on a real-life dataset are much more impressive. Thus Dataset API is very good for increasing your training speed. With no source code changes, just some modifications in the stack, you can save 20–30% off the training time. From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Adaptive Application Security for DevOps. @NGINX partner. @YCombibator S16 Wallarm is DevOps-friendly WAF with hybrid architecture uniquely suited for cloud applications. It applies machine learning to traffic to adaptively generate security rules and verifies the impact of malicious payloads in real time
Maryna Hlaiboroda
5
5
https://blog.heyml.com/%D0%B8%D0%B8-%D0%BF%D1%81%D0%B8%D1%85%D0%BE%D0%BF%D0%B0%D1%82-%D0%B8-%D0%B8%D0%B8-%D0%BE%D0%B1%D0%BC%D0%B0%D0%BD%D1%89%D0%B8%D0%BA-94c6a8e6c63e?source=---------4----------------
AI Psychopath and AI Deceiver – Hey Machine Learning
A team of researchers from the Massachusetts Institute of Technology (MIT) presented Norman, a neural network that recognizes images and generates text descriptions for them. Its peculiarity is that the scientists trained the network on images with captions about death from a Reddit community, so Norman sees horrors in everything. The algorithm is named after the character from the novel "Psycho", the killer with a split personality, Norman Bates. The specialists wanted to demonstrate the importance of the data a model is trained on, as well as its balance. To clearly show the influence of data on the result, the MIT researchers showed Rorschach test images to two neural networks: an AI algorithm trained on ordinary datasets with images of people, cats and birds, and Norman. Where the ordinary network saw a vase with flowers, a flock of birds on a branch or people sitting on a bench, the psychopathic network identified a man shot dead, or a person who jumped out of a window, was hit by a speeding car or was killed by an electric shock. The MIT engineers say they created the Norman network as a reminder that AI behavior is the fault not of its algorithms but of the data it is trained on. Former Microsoft CVO Dave Coplin believes that the creation of such an algorithm is a great occasion for a public discussion of the problems of artificial intelligence technology, on which society and business are beginning to rely more and more. BBC News Scientists from the University of Toronto, Avishek Bose and Parham Aarabi, developed a system that retouches portrait photos so that face recognition algorithms fail. The project makes it possible to preserve privacy and fight the disclosure of personal data. For training, the researchers used two neural networks: one recognized faces in photos, while the second retouched the images pixel by pixel and sent them back to the recognizing network. The changes that produced the largest number of false detections formed the core of the filter. The scientists also noted that their system was able to fool the Faster R-CNN algorithm created by Facebook. In the future, it should make it possible to completely prevent identification of a user without their consent. In its current version, the technology reduces the accuracy of identifying a person from a photo to 0.5%. The algorithm is part of Avishek Bose's master's thesis; in August 2018 he intends to present the project at the MMSP 2018 workshop in Vancouver. U of T News Google CEO Sundar Pichai stated that the company's engineers will not work on military applications of artificial intelligence. However, the search giant's specialists will continue to work with military and government agencies. The decision was made after a mass protest by company employees against cooperation with the Pentagon; the company had planned to create artificial intelligence for military drones. According to Pichai, as a leader in AI development, Google feels the enormous responsibility placed on the company's shoulders. He therefore announced seven principles the company will follow going forward. He also noted that the use of AI must be "socially beneficial" and that its development must provide "strong safety measures". AI algorithms and the data collected for them must remain under human control, their development must meet the highest scientific standards, and the company will strive to "limit the harm" of using such systems. 
TSN.ua Google engineers developed AutoAugment, an algorithm that augments training data for computer vision algorithms with images created from existing ones. The algorithm transforms, crops, mirrors and changes the colors of images, which makes it possible to enlarge the original training set. To create the algorithm, the company's specialists used a reinforcement learning model. As a result, it learned to determine on its own the rules by which an image should be modified to create a unique one without distorting it. AutoAugment can mirror images horizontally and vertically, rotate them, change colors, and so on. The algorithm can also combine rules and avoid creating identical copies, so the system takes the specifics of a particular image set into account. For the house numbers in the SVHN set, the algorithm uses geometric transformations of the image as well as color changes. On the CIFAR-10 and ImageNet sets, AutoAugment does not use geometric transformations and does not change colors, since such a rule could create an unrealistic photo. Instead, the algorithm shifts the shades in the images while preserving the original color palette. Blog Google AI The University of California, Berkeley has published the BDD100K video archive as open data for teaching cars to drive on public roads on their own. The archive consists of 100,000 clips of 40 seconds each, at 720p resolution and 30 frames per second. In addition, each file comes with GPS data collected by mobile devices, which can approximately describe the vehicle's trajectory. The clips contain various road situations and weather conditions filmed in different parts of the US. The archive frames also capture 85,000 pedestrians, which may be useful to developers of pedestrian detection systems. Analytics Vidhya Want to stay up to date with current events? Read us on Telegram and Facebook and stay on trend! From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. We are a young and talented team and our passion is Machine Learning, Data Science and Artificial Intelligence. http://heyml.com
Amine Aoullay
58
4
https://towardsdatascience.com/how-to-use-noise-to-your-advantage-5301071d9dc3?source=---------6----------------
How to use Noise to your advantage ? – Towards Data Science
For scientists, random fluctuations, or noise, are undesirable. Although typically assumed to degrade performance, noise can sometimes improve information processing in non-linear systems. In this post we’ll see some examples where noise can be used as an advantage. Recent works have shown that, by allowing some inaccuracy when training deep neural networks, not only the training performance but also the accuracy of the model can be improved. Neural networks are capable of learning output functions that can change wildly with small changes in input. Adding noise to inputs randomly is like telling the network to not change the output in a ball around your exact input. By limiting the amount of information in a network, we force it to learn compact representations of input features. RL is an area of machine learning that assumes there is an agent situated in an environment. At each step, the agent takes an action, and it receives an observation and reward from the environment. An RL algorithm seeks to maximize the agent’s total reward, given a previously unknown environment, through a learning process that usually involves lots of trial and error. To understand the challenge with exploration in Deep RL systems, think about researchers who spend a lot of time in a lab without producing any practical application. Equivalently, RL agents can spend a huge amount of resources without converging to a local optimum. OpenAI proposes a technique called Parameter-Space-Noise, which introduces noise into the model’s policy parameters at the beginning of each episode. Other approaches were focused on what is known as Action-Space-Noise, which introduces noise to change the likelihoods associated with each action the agent might take from one moment to the next. The initial results of the Parameter-Space-Noise model proved to be really promising. The technique helps algorithms explore their environments more effectively, leading to higher scores and more elegant behaviors. More details can be found in the research paper. The important thing to remember is that adding noise was used as an advantage to boost the exploration performance of reinforcement learning algorithms. Boosting recognition isn’t as simple as throwing more labeled images at these systems. Indeed, manually annotating a large number of images is an expensive and time consuming process. Facebook researchers and engineers have addressed this by training image recognition networks on large sets of public images with hashtags. Since people often caption their photos with hashtags, these would be a good source of training data for models. Facebook developed new approaches that are tailored for doing image recognition experiments using hashtag supervision. This study is described in detail in “Exploring the Limits of Weakly Supervised Pretraining”. On the COCO object-detection challenge, it has been shown that the use of hashtags for pretraining can boost the average precision of a model by more than 2 percent. Noise should not be our enemy! It isn’t always an unwanted disturbance and can often be used as an advantage and even serve as a valuable research tool. If anyone tries to tell you otherwise, well, just give them the examples we presented ... Stay tuned and if you liked this article, please leave a 👏! 
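Two of the ideas above in miniature: Gaussian noise added to the inputs as a simple regularizer, and parameter-space noise that perturbs a policy's weights once per episode. The noise scales and the toy parameter shapes are illustrative assumptions.

```python
import numpy as np

def noisy_inputs(x, stddev=0.1, rng=np.random):
    # input noise: ask the network for the same answer in a small ball around x
    return x + rng.normal(0.0, stddev, size=x.shape)

def perturb_policy(params, sigma=0.02, rng=np.random):
    # parameter-space noise: perturb the policy weights once at the start of an
    # episode, so the agent explores with a consistent but slightly different behavior
    return {name: w + sigma * rng.standard_normal(w.shape) for name, w in params.items()}

params = {"W": np.zeros((4, 2)), "b": np.zeros(2)}   # toy policy parameters
episode_params = perturb_policy(params)
x_batch = noisy_inputs(np.random.rand(8, 4))
```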
[1] Weakly-supervised-pretraining: https://research.fb.com/publications/exploring-the-limits-of-weakly-supervised-pretraining/ [2] Better Exploration with Parameter Noise: https://blog.openai.com/better-exploration-with-parameter-noise/ From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. MSc in Machine Learning (MVA) @ ENS Paris-Saclay Sharing concepts, ideas, and codes.
Kelvin Li
56
5
https://medium.com/@kelfun5354/the-complex-language-used-in-back-propagation-88c6e58f676c?source=---------9----------------
The Complex language used in Back Propagation – Kelvin Li – Medium
I’ve looked all over the internet for explanations of what exactly back propagation is and everyone either uses complicated mathematical language or complex codes to try to explain what back propagation. If someone who doesn’t know either wants to know what it is then how will they really grasp what it is? In this post, I would like to unveil the secrets of the universe with everyone and hopefully I’ll do a good job at it. According to Wikipedia, Backpropagation is a method used in artificial neural networks to calculate a gradient that is needed in the calculation of the weights to be used in the network. Backpropagation is commonly used by the gradient descent optimization algorithm to adjust the weight of neurons by calculating the gradient of the loss function If you have taken a basic elementary algebra class, you may have heard of the idea of a slope. Some people might think the idea of a slope is very insignificant but it is actually the game-changing concept that caused all the technological advancement within the last century. To know the slope of something means that you know the rate of something changing over a period of time. Knowing this gives us power to manipulate things to our advantage. Now you can think of a gradient as the slope of something in a higher dimension. I won’t go into details but that is the general gist of what a gradient is. Weights are the values that we want to use to adjust the outputs of our functions in each neuron. So say we have an output of 2 and we want to change the 2 into a 1, then we would multiply the 2 by a .5 to get the desired result. This means that .5 will be the weight in this case. In a way we are weighing down the output to what we want it to be. A neuron is simply just a function. A Neural Network(a bunch of neurons) is simply a bunch of functions. Each neuron also has an activation function that spits out a value for the next neuron to calculate. Think of these functions as how much of a yes or a no an input is. An example would be picture recognizing. When you feed the neural network a picture, the node will spit out a number between 0 and 1. Where 0 is being very NO and 1 being very YES. This process continues between every node until the very end. Which ever node has the highest number, between 0 and 1, would be the decision the machine makes. A loss function is just some function that we use to determine how correct the predicted output is from the real output. For example, we input a picture of a cat into the machine but the machine predicts that it’s a dinosaur. Clearly the machine is not doing a very good job. So we need some way to know how correct the machine is compared to the real data. Which is where the loss function comes in. Now that we have all the necessary understandings, we can go into the real sauce. Now what I am about to explain to you is going to either confuse the crap out of you or make you feel enlightened. Let’s pretend you are trying to build a door lock opening mechanism. This mechanism involves you pressing a button, which triggers a ball rolling down a platform and knocks over a switch that unlocks the door. Now lets think about this. There are a few components that we have to keep in mind. The 1st component being you pressing the button, the 2nd component is the ball rolling down a platform, and the 3rd component being the switch being knocked over. There is actually a lot of physics going on around here but let’s just focus on the ball rolling down the platform. 
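Before continuing with the door-lock analogy, it may help to see the earlier weight example (turning an output of 2 into a 1 by finding a weight of .5) done with a slope instead of trial and error. This is only a toy sketch, and the learning rate is an arbitrary choice.

```python
x, target = 2.0, 1.0
w = 1.0                          # initial weight: the output is 2, which is too much
lr = 0.1                         # how big an adjustment we make each trial
for step in range(50):
    y = w * x                    # the neuron's output
    loss = (y - target) ** 2     # the loss function: how wrong we are
    grad = 2 * (y - target) * x  # the slope of the loss with respect to w (chain rule)
    w -= lr * grad               # move the weight against the slope
print(round(w, 2))               # ~0.5, the weight from the example above
```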
Now when you create this mechanism, you want the door to ideally open in 3 seconds. But you don’t have any tools to measure the time and length, so all you can do is to create a platform through intuition. You build your first platform and let the ball roll and realized that it took 9 seconds for the door to open after pressing the button. So you go back to the platform and make the platform steeper. You performed the same trial and error over and over again until you got the ideal opening time. This my friend, is Backpropagation. Well true. But the idea is basically the same. In a Neural Net, we have weights assigned to each neuron. These weights will get multiplied by a certain input and modified through some activation function. The result of these activation function might not always be what we want. What backpropagation would do is that it will do some calculus (will be covered in another post) to determine the direction of increase/decrease, aka the gradient,(cut less of the platform or cut more of the platform) to achieve the best weights (ideal time the door opens). It then updates these weights every time it has created new weights and runs the neural net again(every trial you cut a piece of the platform to test). Eventually we will achieve the best possible weights that satisfies our desired accuracy. In my next post, I will discuss more in depth about the math that is involved with backpropagation. References and Links From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Getting stuck 24/7
Arthur Juliani
9K
6
https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0?source=tag_archive---------1----------------
Simple Reinforcement Learning with Tensorflow Part 0: Q-Learning with Tables and Neural Networks
For this tutorial in my Reinforcement Learning series, we are going to be exploring a family of RL algorithms called Q-Learning algorithms. These are a little different than the policy-based algorithms that will be looked at in the following tutorials (Parts 1–3). Instead of starting with a complex and unwieldy deep neural network, we will begin by implementing a simple lookup-table version of the algorithm, and then show how to implement a neural-network equivalent using Tensorflow. Given that we are going back to basics, it may be best to think of this as Part-0 of the series. It will hopefully give an intuition into what is really happening in Q-Learning that we can then build on going forward when we eventually combine the policy gradient and Q-learning approaches to build state-of-the-art RL agents (If you are more interested in Policy Networks, or already have a grasp on Q-Learning, feel free to start the tutorial series here instead). Unlike policy gradient methods, which attempt to learn functions which directly map an observation to an action, Q-Learning attempts to learn the value of being in a given state, and taking a specific action there. While both approaches ultimately allow us to take intelligent actions given a situation, the means of getting to that action differ significantly. You may have heard about Deep Q-Networks which can play Atari Games. These are really just larger and more complex implementations of the Q-Learning algorithm we are going to discuss here. For this tutorial we are going to be attempting to solve the FrozenLake environment from the OpenAI gym. For those unfamiliar, the OpenAI gym provides an easy way for people to experiment with their learning agents in an array of provided toy games. The FrozenLake environment consists of a 4x4 grid of blocks, each one either being the start block, the goal block, a safe frozen block, or a dangerous hole. The objective is to have an agent learn to navigate from the start to the goal without moving onto a hole. At any given time the agent can choose to move either up, down, left, or right. The catch is that there is a wind which occasionally blows the agent onto a space they didn’t choose. As such, perfect performance every time is impossible, but learning to avoid the holes and reach the goal is certainly still doable. The reward at every step is 0, except for entering the goal, which provides a reward of 1. Thus, we will need an algorithm that learns long-term expected rewards. This is exactly what Q-Learning is designed to provide. In its simplest implementation, Q-Learning is a table of values for every state (row) and action (column) possible in the environment. Within each cell of the table, we learn a value for how good it is to take a given action within a given state. In the case of the FrozenLake environment, we have 16 possible states (one for each block), and 4 possible actions (the four directions of movement), giving us a 16x4 table of Q-values. We start by initializing the table to be uniform (all zeros), and then as we observe the rewards we obtain for various actions, we update the table accordingly. We make updates to our Q-table using something called the Bellman equation, which states that the expected long-term reward for a given action is equal to the immediate reward from the current action combined with the expected reward from the best future action taken at the following state. In this way, we reuse our own Q-table when estimating how to update our table for future actions! 
In equation form, the rule looks like this: Q(s,a) = r + γ max_a' Q(s',a'). This says that the Q-value for a given state (s) and action (a) should represent the current reward (r) plus the maximum discounted (γ) future reward expected according to our own table for the next state (s’) we would end up in. The discount variable allows us to decide how important the possible future rewards are compared to the present reward. By updating in this way, the table slowly begins to obtain accurate measures of the expected future reward for a given action in a given state. Below is a Python walkthrough of the Q-Table algorithm implemented in the FrozenLake environment: (Thanks to Praneet D for finding the optimal hyperparameters for this approach) Now, you may be thinking: tables are great, but they don’t really scale, do they? While it is easy to have a 16x4 table for a simple grid world, the number of possible states in any modern game or real-world environment is nearly infinitely larger. For most interesting problems, tables simply don’t work. We instead need some way to take a description of our state, and produce Q-values for actions without a table: that is where neural networks come in. By acting as a function approximator, we can take any number of possible states that can be represented as a vector and learn to map them to Q-values. In the case of the FrozenLake example, we will be using a one-layer network which takes the state encoded in a one-hot vector (1x16), and produces a vector of 4 Q-values, one for each action. Such a simple network acts kind of like a glorified table, with the network weights serving as the old cells. The key difference is that we can easily expand the Tensorflow network with added layers, activation functions, and different input types, whereas all that is impossible with a regular table. The method of updating is a little different as well. Instead of directly updating our table, with a network we will be using backpropagation and a loss function. Our loss function will be sum-of-squares loss, where the difference between the current predicted Q-values and the “target” value is computed and the gradients passed through the network. In this case, our Q-target for the chosen action is equivalent to the Q-value computed in equation 1 above. Below is the Tensorflow walkthrough of implementing our simple Q-Network: While the network learns to solve the FrozenLake problem, it turns out it doesn’t do so quite as efficiently as the Q-Table. While neural networks allow for greater flexibility, they do so at the cost of stability when it comes to Q-Learning. There are a number of possible extensions to our simple Q-Network which allow for greater performance and more robust learning. Two tricks in particular are referred to as Experience Replay and Freezing Target Networks. Those improvements and other tweaks were the key to getting Atari-playing Deep Q-Networks, and we will be exploring those additions in the future. For more info on the theory behind Q-Learning, see this great post by Tambet Matiisen. I hope this tutorial has been helpful for those curious about how to implement simple Q-Learning algorithms! If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated! If you’d like to follow my work on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on Twitter @awjliani. 
More from my Simple Reinforcement Learning with Tensorflow series: From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Deep Learning @Unity3D & Cognitive Neuroscience PhD student. Exploring frontier technology through the lens of artificial intelligence, data science, and the shape of things to come
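As a rough stand-in for those two walkthroughs, here is a minimal sketch of both pieces. It assumes the classic OpenAI gym API (older versions where FrozenLake-v0 is available and env.step returns four values) and the TF 1.x graph API; the hyperparameters are illustrative assumptions, not the tuned values credited to Praneet D above.

```python
import gym
import numpy as np

# tabular Q-learning on FrozenLake
env = gym.make("FrozenLake-v0")
Q = np.zeros([env.observation_space.n, env.action_space.n])  # the 16x4 table
lr, y = 0.8, 0.95                      # learning rate and discount (assumptions)
for i in range(2000):
    s = env.reset()
    done = False
    while not done:
        # greedy action plus decaying random noise for exploration
        a = np.argmax(Q[s, :] + np.random.randn(1, env.action_space.n) * (1.0 / (i + 1)))
        s1, r, done, _ = env.step(a)
        # Bellman update: nudge Q(s,a) toward r + y * max_a' Q(s',a')
        Q[s, a] += lr * (r + y * np.max(Q[s1, :]) - Q[s, a])
        s = s1
```

The one-layer Q-network is the same table expressed as a weight matrix:

```python
import tensorflow as tf  # TF 1.x graph-mode API

tf.reset_default_graph()
state_in = tf.placeholder(shape=[1, 16], dtype=tf.float32)   # one-hot encoded state
W = tf.Variable(tf.random_uniform([16, 4], 0, 0.01))         # the "table" as weights
Qout = tf.matmul(state_in, W)                                 # 4 Q-values
predict = tf.argmax(Qout, 1)
targetQ = tf.placeholder(shape=[1, 4], dtype=tf.float32)      # r + y * max Q(s',a') for the chosen action
loss = tf.reduce_sum(tf.square(targetQ - Qout))               # sum-of-squares loss
update = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
```

In a full training loop the target fed through targetQ would be built exactly as in the equation above, with the network's own prediction for the next state standing in for the table lookup.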
Stefan Kojouharov
14.2K
7
https://becominghuman.ai/cheat-sheets-for-ai-neural-networks-machine-learning-deep-learning-big-data-678c51b4b463?source=tag_archive---------2----------------
Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Big Data
Over the past few months, I have been collecting AI cheat sheets. From time to time I share them with friends and colleagues and recently I have been getting asked a lot, so I decided to organize and share the entire collection. To make things more interesting and give context, I added descriptions and/or excerpts for each major topic. This is the most complete list and the Big-O is at the very end, enjoy... This machine learning cheat sheet will help you find the right estimator for the job which is the most difficult part. The flowchart will help you check the documentation and rough guide of each estimator that will help you to know more about the problems and how to solve it. Scikit-learn (formerly scikits.learn) is a free software machine learning library for the Python programming language. It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy. In May 2017 Google announced the second-generation of the TPU, as well as the availability of the TPUs in Google Compute Engine.[12] The second-generation TPUs deliver up to 180 teraflops of performance, and when organized into clusters of 64 TPUs provide up to 11.5 petaflops. In 2017, Google’s TensorFlow team decided to support Keras in TensorFlow’s core library. Chollet explained that Keras was conceived to be an interface rather than an end-to-end machine-learning framework. It presents a higher-level, more intuitive set of abstractions that make it easy to configure neural networks regardless of the backend scientific computing library. NumPy targets the CPython reference implementation of Python, which is a non-optimizing bytecode interpreter. Mathematical algorithms written for this version of Python often run much slower than compiled equivalents. NumPy address the slowness problem partly by providing multidimensional arrays and functions and operators that operate efficiently on arrays, requiring rewriting some code, mostly inner loops using NumPy. The name ‘Pandas’ is derived from the term “panel data”, an econometrics term for multidimensional structured data sets. The term “data wrangler” is starting to infiltrate pop culture. In the 2017 movie Kong: Skull Island, one of the characters, played by actor Marc Evan Jackson is introduced as “Steve Woodward, our data wrangler”. SciPy builds on the NumPy array object and is part of the NumPy stack which includes tools like Matplotlib, pandas and SymPy, and an expanding set of scientific computing libraries. This NumPy stack has similar users to other applications such as MATLAB, GNU Octave, and Scilab. The NumPy stack is also sometimes referred to as the SciPy stack.[3] matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+. There is also a procedural “pylab” interface based on a state machine (like OpenGL), designed to closely resemble that of MATLAB, though its use is discouraged.[2] SciPy makes use of matplotlib. pyplot is a matplotlib module which provides a MATLAB-like interface.[6] matplotlib is designed to be as usable as MATLAB, with the ability to use Python, with the advantage that it is free. >>> If you like this list, you can let me know here. 
<<< Stefan is the founder of Chatbot’s Life, a Chatbot media and consulting firm. Chatbot’s Life has grown to over 150k views per month and has become the premium place to learn about Bots & AI online. Chatbot’s Life has also consulted many of the top Bot companies like Swelly, Instavest, OutBrain, NearGroup and a number of Enterprises. Big-O Algorithm Cheat Sheet: http://bigocheatsheet.com/ Bokeh Cheat Sheet: https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Bokeh_Cheat_Sheet.pdf Data Science Cheat Sheet: https://www.datacamp.com/community/tutorials/python-data-science-cheat-sheet-basics Data Wrangling Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf Data Wrangling: https://en.wikipedia.org/wiki/Data_wrangling Ggplot Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf Keras Cheat Sheet: https://www.datacamp.com/community/blog/keras-cheat-sheet#gs.DRKeNMs Keras: https://en.wikipedia.org/wiki/Keras Machine Learning Cheat Sheet: https://ai.icymi.email/new-machinelearning-cheat-sheet-by-emily-barry-abdsc/ Machine Learning Cheat Sheet: https://docs.microsoft.com/en-in/azure/machine-learning/machine-learning-algorithm-cheat-sheet ML Cheat Sheet:: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html Matplotlib Cheat Sheet: https://www.datacamp.com/community/blog/python-matplotlib-cheat-sheet#gs.uEKySpY Matpotlib: https://en.wikipedia.org/wiki/Matplotlib Neural Networks Cheat Sheet: http://www.asimovinstitute.org/neural-network-zoo/ Neural Networks Graph Cheat Sheet: http://www.asimovinstitute.org/blog/ Neural Networks: https://www.quora.com/Where-can-find-a-cheat-sheet-for-neural-network Numpy Cheat Sheet: https://www.datacamp.com/community/blog/python-numpy-cheat-sheet#gs.AK5ZBgE NumPy: https://en.wikipedia.org/wiki/NumPy Pandas Cheat Sheet: https://www.datacamp.com/community/blog/python-pandas-cheat-sheet#gs.oundfxM Pandas: https://en.wikipedia.org/wiki/Pandas_(software) Pandas Cheat Sheet: https://www.datacamp.com/community/blog/pandas-cheat-sheet-python#gs.HPFoRIc Pyspark Cheat Sheet: https://www.datacamp.com/community/blog/pyspark-cheat-sheet-python#gs.L=J1zxQ Scikit Cheat Sheet: https://www.datacamp.com/community/blog/scikit-learn-cheat-sheet Scikit-learn: https://en.wikipedia.org/wiki/Scikit-learn Scikit-learn Cheat Sheet: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html Scipy Cheat Sheet: https://www.datacamp.com/community/blog/python-scipy-cheat-sheet#gs.JDSg3OI SciPy: https://en.wikipedia.org/wiki/SciPy TesorFlow Cheat Sheet: https://www.altoros.com/tensorflow-cheat-sheet.html Tensor Flow: https://en.wikipedia.org/wiki/TensorFlow From a quick cheer to a standing ovation, clap to show how much you enjoyed this story. Founder of Chatbots Life. I help Companies Create Great Chatbots & AI Systems and share my Insights along the way. Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity.
Andrej Karpathy
9.2K
7
https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b?source=tag_archive---------3----------------
Yes you should understand backprop – Andrej Karpathy – Medium
When we offered CS231n (Deep Learning class) at Stanford, we intentionally designed the programming assignments to include explicit calculations involved in backpropagation on the lowest level. The students had to implement the forward and the backward pass of each layer in raw numpy. Inevitably, some students complained on the class message boards: This is seemingly a perfectly sensible appeal - if you’re never going to write backward passes once the class is over, why practice writing them? Are we just torturing the students for our own amusement? Some easy answers could make arguments along the lines of “it’s worth knowing what’s under the hood as an intellectual curiosity”, or perhaps “you might want to improve on the core algorithm later”, but there is a much stronger and practical argument, which I wanted to devote a whole post to: > The problem with Backpropagation is that it is a leaky abstraction. In other words, it is easy to fall into the trap of abstracting away the learning process — believing that you can simply stack arbitrary layers together and backprop will “magically make them work” on your data. So lets look at a few explicit examples where this is not the case in quite unintuitive ways. We’re starting off easy here. At one point it was fashionable to use sigmoid (or tanh) non-linearities in the fully connected layers. The tricky part people might not realize until they think about the backward pass is that if you are sloppy with the weight initialization or data preprocessing these non-linearities can “saturate” and entirely stop learning — your training loss will be flat and refuse to go down. For example, a fully connected layer with sigmoid non-linearity computes (using raw numpy): If your weight matrix W is initialized too large, the output of the matrix multiply could have a very large range (e.g. numbers between -400 and 400), which will make all outputs in the vector z almost binary: either 1 or 0. But if that is the case, z*(1-z), which is local gradient of the sigmoid non-linearity, will in both cases become zero (“vanish”), making the gradient for both x and W be zero. The rest of the backward pass will come out all zero from this point on due to multiplication in the chain rule. Another non-obvious fun fact about sigmoid is that its local gradient (z*(1-z)) achieves a maximum at 0.25, when z = 0.5. That means that every time the gradient signal flows through a sigmoid gate, its magnitude always diminishes by one quarter (or more). If you’re using basic SGD, this would make the lower layers of a network train much slower than the higher ones. TLDR: if you’re using sigmoids or tanh non-linearities in your network and you understand backpropagation you should always be nervous about making sure that the initialization doesn’t cause them to be fully saturated. See a longer explanation in this CS231n lecture video. Another fun non-linearity is the ReLU, which thresholds neurons at zero from below. The forward and backward pass for a fully connected layer that uses ReLU would at the core include: If you stare at this for a while you’ll see that if a neuron gets clamped to zero in the forward pass (i.e. z=0, it doesn’t “fire”), then its weights will get zero gradient. This can lead to what is called the “dead ReLU” problem, where if a ReLU neuron is unfortunately initialized such that it never fires, or if a neuron’s weights ever get knocked off with a large update during training into this regime, then this neuron will remain permanently dead. 
It’s like permanent, irrecoverable brain damage. Sometimes you can forward the entire training set through a trained network and find that a large fraction (e.g. 40%) of your neurons were zero the entire time. TLDR: If you understand backpropagation and your network has ReLUs, you’re always nervous about dead ReLUs. These are neurons that never turn on for any example in your entire training set, and will remain permanently dead. Neurons can also die during training, usually as a symptom of aggressive learning rates. See a longer explanation in CS231n lecture video. Vanilla RNNs feature another good example of unintuitive effects of backpropagation. I’ll copy paste a slide from CS231n that has a simplified RNN that does not take any input x, and only computes the recurrence on the hidden state (equivalently, the input x could always be zero): This RNN is unrolled for T time steps. When you stare at what the backward pass is doing, you’ll see that the gradient signal going backwards in time through all the hidden states is always being multiplied by the same matrix (the recurrence matrix Whh), interspersed with non-linearity backprop. What happens when you take one number a and start multiplying it by some other number b (i.e. a*b*b*b*b*b*b...)? This sequence either goes to zero if |b| < 1, or explodes to infinity when |b|>1. The same thing happens in the backward pass of an RNN, except b is a matrix and not just a number, so we have to reason about its largest eigenvalue instead. TLDR: If you understand backpropagation and you’re using RNNs you are nervous about having to do gradient clipping, or you prefer to use an LSTM. See a longer explanation in this CS231n lecture video. Lets look at one more — the one that actually inspired this post. Yesterday I was browsing for a Deep Q Learning implementation in TensorFlow (to see how others deal with computing the numpy equivalent of Q[:, a], where a is an integer vector — turns out this trivial operation is not supported in TF). Anyway, I searched “dqn tensorflow”, clicked the first link, and found the core code. Here is an excerpt: If you’re familiar with DQN, you can see that there is the target_q_t, which is just [reward * \gamma \argmax_a Q(s’,a)], and then there is q_acted, which is Q(s,a) of the action that was taken. The authors here subtract the two into variable delta, which they then want to minimize on line 295 with the L2 loss with tf.reduce_mean(tf.square()). So far so good. The problem is on line 291. The authors are trying to be robust to outliers, so if the delta is too large, they clip it with tf.clip_by_value. This is well-intentioned and looks sensible from the perspective of the forward pass, but it introduces a major bug if you think about the backward pass. The clip_by_value function has a local gradient of zero outside of the range min_delta to max_delta, so whenever the delta is above min/max_delta, the gradient becomes exactly zero during backprop. The authors are clipping the raw Q delta, when they are likely trying to clip the gradient for added robustness. In that case the correct thing to do is to use the Huber loss in place of tf.square: It’s a bit gross in TensorFlow because all we want to do is clip the gradient if it is above a threshold, but since we can’t meddle with the gradients directly we have to do it in this round-about way of defining the Huber loss. In Torch this would be much more simple. I submitted an issue on the DQN repo and this was promptly fixed. 
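For reference, one way to write that Huber-style loss in TF 1.x, so that large deltas get a bounded gradient instead of a zero one, might look like the sketch below; the threshold of 1.0 is an illustrative choice.

```python
import tensorflow as tf  # TF 1.x style, matching the DQN snippet discussed above

def clipped_error(delta, max_delta=1.0):
    # quadratic near zero, linear beyond the threshold: the gradient saturates
    # at +/- max_delta instead of vanishing, unlike tf.clip_by_value on delta itself
    abs_delta = tf.abs(delta)
    quadratic = tf.minimum(abs_delta, max_delta)
    linear = abs_delta - quadratic
    return tf.reduce_mean(0.5 * tf.square(quadratic) + max_delta * linear)
```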
Backpropagation is a leaky abstraction; it is a credit assignment scheme with non-trivial consequences. If you try to ignore how it works under the hood because “TensorFlow automagically makes my networks learn”, you will not be ready to wrestle with the dangers it presents, and you will be much less effective at building and debugging neural networks. The good news is that backpropagation is not that difficult to understand, if presented properly. I have relatively strong feelings on this topic because it seems to me that 95% of backpropagation materials out there present it all wrong, filling pages with mechanical math. Instead, I would recommend the CS231n lecture on backprop which emphasizes intuition (yay for shameless self-advertising). And if you can spare the time, as a bonus, work through the CS231n assignments, which get you to write backprop manually and help you solidify your understanding. That’s it for now! I hope you’ll be much more suspicious of backpropagation going forward and think carefully through what the backward pass is doing. Also, I’m aware that this post has (unintentionally!) turned into several CS231n ads. Apologies for that :)
Avinash Sharma V
6.9K
10
https://medium.com/the-theory-of-everything/understanding-activation-functions-in-neural-networks-9491262884e0?source=tag_archive---------4----------------
Understanding Activation Functions in Neural Networks
Recently, a colleague of mine asked me a few questions like “why do we have so many activation functions?”, “why is that one works better than the other?”, ”how do we know which one to use?”, “is it hardcore maths?” and so on. So I thought, why not write an article on it for those who are familiar with neural network only at a basic level and is therefore, wondering about activation functions and their “why-how-mathematics!”. NOTE: This article assumes that you have a basic knowledge of an artificial “neuron”. I would recommend reading up on the basics of neural networks before reading this article for better understanding. So what does an artificial neuron do? Simply put, it calculates a “weighted sum” of its input, adds a bias and then decides whether it should be “fired” or not ( yeah right, an activation function does this, but let’s go with the flow for a moment ). So consider a neuron. Now, the value of Y can be anything ranging from -inf to +inf. The neuron really doesn’t know the bounds of the value. So how do we decide whether the neuron should fire or not ( why this firing pattern? Because we learnt it from biology that’s the way brain works and brain is a working testimony of an awesome and intelligent system ). We decided to add “activation functions” for this purpose. To check the Y value produced by a neuron and decide whether outside connections should consider this neuron as “fired” or not. Or rather let’s say — “activated” or not. The first thing that comes to our minds is how about a threshold based activation function? If the value of Y is above a certain value, declare it activated. If it’s less than the threshold, then say it’s not. Hmm great. This could work! Activation function A = “activated” if Y > threshold else not Alternatively, A = 1 if y> threshold, 0 otherwise Well, what we just did is a “step function”, see the below figure. Its output is 1 ( activated) when value > 0 (threshold) and outputs a 0 ( not activated) otherwise. Great. So this makes an activation function for a neuron. No confusions. However, there are certain drawbacks with this. To understand it better, think about the following. Suppose you are creating a binary classifier. Something which should say a “yes” or “no” ( activate or not activate ). A Step function could do that for you! That’s exactly what it does, say a 1 or 0. Now, think about the use case where you would want multiple such neurons to be connected to bring in more classes. Class1, class2, class3 etc. What will happen if more than 1 neuron is “activated”. All neurons will output a 1 ( from step function). Now what would you decide? Which class is it? Hmm hard, complicated. You would want the network to activate only 1 neuron and others should be 0 ( only then would you be able to say it classified properly/identified the class ). Ah! This is harder to train and converge this way. It would have been better if the activation was not binary and it instead would say “50% activated” or “20% activated” and so on. And then if more than 1 neuron activates, you could find which neuron has the “highest activation” and so on ( better than max, a softmax, but let’s leave that for now ). In this case as well, if more than 1 neuron says “100% activated”, the problem still persists.I know! 
But..since there are intermediate activation values for the output, learning can be smoother and easier ( less wiggly ) and chances of more than 1 neuron being 100% activated is lesser when compared to step function while training ( also depending on what you are training and the data ). Ok, so we want something to give us intermediate ( analog ) activation values rather than saying “activated” or not ( binary ). The first thing that comes to our minds would be Linear function. A = cx A straight line function where activation is proportional to input ( which is the weighted sum from neuron ). This way, it gives a range of activations, so it is not binary activation. We can definitely connect a few neurons together and if more than 1 fires, we could take the max ( or softmax) and decide based on that. So that is ok too. Then what is the problem with this? If you are familiar with gradient descent for training, you would notice that for this function, derivative is a constant. A = cx, derivative with respect to x is c. That means, the gradient has no relationship with X. It is a constant gradient and the descent is going to be on constant gradient. If there is an error in prediction, the changes made by back propagation is constant and not depending on the change in input delta(x) !!! This is not that good! ( not always, but bear with me ). There is another problem too. Think about connected layers. Each layer is activated by a linear function. That activation in turn goes into the next level as input and the second layer calculates weighted sum on that input and it in turn, fires based on another linear activation function. No matter how many layers we have, if all are linear in nature, the final activation function of last layer is nothing but just a linear function of the input of first layer! Pause for a bit and think about it. That means these two layers ( or N layers ) can be replaced by a single layer. Ah! We just lost the ability of stacking layers this way. No matter how we stack, the whole network is still equivalent to a single layer with linear activation ( a combination of linear functions in a linear manner is still another linear function ). Let’s move on, shall we? Well, this looks smooth and “step function like”. What are the benefits of this? Think about it for a moment. First things first, it is nonlinear in nature. Combinations of this function are also nonlinear! Great. Now we can stack layers. What about non binary activations? Yes, that too!. It will give an analog activation unlike step function. It has a smooth gradient too. And if you notice, between X values -2 to 2, Y values are very steep. Which means, any small changes in the values of X in that region will cause values of Y to change significantly. Ah, that means this function has a tendency to bring the Y values to either end of the curve. Looks like it’s good for a classifier considering its property? Yes ! It indeed is. It tends to bring the activations to either side of the curve ( above x = 2 and below x = -2 for example). Making clear distinctions on prediction. Another advantage of this activation function is, unlike linear function, the output of the activation function is always going to be in range (0,1) compared to (-inf, inf) of linear function. So we have our activations bound in a range. Nice, it won’t blow up the activations then. This is great. Sigmoid functions are one of the most widely used activation functions today. Then what are the problems with this? 
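Before getting to those problems, the earlier claim that stacking layers with linear activations buys nothing is easy to check numerically; here is a minimal numpy sketch with made-up layer sizes.

import numpy as np

rng = np.random.RandomState(0)
W1 = rng.randn(4, 3)                      # first "layer" with linear activation
W2 = rng.randn(2, 4)                      # second "layer" with linear activation
x = rng.randn(3)

two_layers = W2.dot(W1.dot(x))            # x -> layer 1 -> layer 2
one_layer = (W2.dot(W1)).dot(x)           # a single equivalent linear layer
print(np.allclose(two_layers, one_layer)) # True: stacking bought us nothing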
If you notice, towards either end of the sigmoid function, the Y values tend to respond very less to changes in X. What does that mean? The gradient at that region is going to be small. It gives rise to a problem of “vanishing gradients”. Hmm. So what happens when the activations reach near the “near-horizontal” part of the curve on either sides? Gradient is small or has vanished ( cannot make significant change because of the extremely small value ). The network refuses to learn further or is drastically slow ( depending on use case and until gradient /computation gets hit by floating point value limits ). There are ways to work around this problem and sigmoid is still very popular in classification problems. Another activation function that is used is the tanh function. Hm. This looks very similar to sigmoid. In fact, it is a scaled sigmoid function! Ok, now this has characteristics similar to sigmoid that we discussed above. It is nonlinear in nature, so great we can stack layers! It is bound to range (-1, 1) so no worries of activations blowing up. One point to mention is that the gradient is stronger for tanh than sigmoid ( derivatives are steeper). Deciding between the sigmoid or tanh will depend on your requirement of gradient strength. Like sigmoid, tanh also has the vanishing gradient problem. Tanh is also a very popular and widely used activation function. Later, comes the ReLu function, A(x) = max(0,x) The ReLu function is as shown above. It gives an output x if x is positive and 0 otherwise. At first look this would look like having the same problems of linear function, as it is linear in positive axis. First of all, ReLu is nonlinear in nature. And combinations of ReLu are also non linear! ( in fact it is a good approximator. Any function can be approximated with combinations of ReLu). Great, so this means we can stack layers. It is not bound though. The range of ReLu is [0, inf). This means it can blow up the activation. Another point that I would like to discuss here is the sparsity of the activation. Imagine a big neural network with a lot of neurons. Using a sigmoid or tanh will cause almost all neurons to fire in an analog way ( remember? ). That means almost all activations will be processed to describe the output of a network. In other words the activation is dense. This is costly. We would ideally want a few neurons in the network to not activate and thereby making the activations sparse and efficient. ReLu give us this benefit. Imagine a network with random initialized weights ( or normalised ) and almost 50% of the network yields 0 activation because of the characteristic of ReLu ( output 0 for negative values of x ). This means a fewer neurons are firing ( sparse activation ) and the network is lighter. Woah, nice! ReLu seems to be awesome! Yes it is, but nothing is flawless.. Not even ReLu. Because of the horizontal line in ReLu( for negative X ), the gradient can go towards 0. For activations in that region of ReLu, gradient will be 0 because of which the weights will not get adjusted during descent. That means, those neurons which go into that state will stop responding to variations in error/ input ( simply because gradient is 0, nothing changes ). This is called dying ReLu problem. This problem can cause several neurons to just die and not respond making a substantial part of the network passive. There are variations in ReLu to mitigate this issue by simply making the horizontal line into non-horizontal component . 
For example, y = 0.01x for x < 0 will make it a slightly inclined line rather than a horizontal line. This is leaky ReLu ( a small numpy sketch of all the activations discussed here is given at the end of this article ). There are other variations too. The main idea is to let the gradient be non-zero and recover during training eventually. ReLu is less computationally expensive than tanh and sigmoid because it involves simpler mathematical operations. That is a good point to consider when we are designing deep neural nets. Now, which activation function do we use? Does that mean we just use ReLu for everything we do? Or sigmoid or tanh? Well, yes and no. When you know the function you are trying to approximate has certain characteristics, you can choose an activation function which will approximate the function faster, leading to a faster training process. For example, a sigmoid works well for a classifier ( see the graph of sigmoid, doesn’t it show the properties of an ideal classifier? ) because approximating a classifier function as combinations of sigmoid is easier than with, say, ReLu. This will lead to faster training and convergence. You can use your own custom functions too! If you don’t know the nature of the function you are trying to learn, then maybe I would suggest starting with ReLu, and then working backwards. ReLu works most of the time as a general approximator! In this article, I tried to describe a few commonly used activation functions. There are other activation functions too, but the general idea remains the same. Research for better activation functions is still ongoing. Hope you got the idea behind activation functions, why they are used, and how we decide which one to use.
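Here is the small numpy sketch referred to above. It defines the activations discussed in this article and illustrates the sparsity effect of ReLu on random pre-activations; the random data and the 0.01 leak factor are just for illustration.

import numpy as np

def step(x, threshold=0.0):
    return (x > threshold).astype(float)    # binary: fired or not

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))         # smooth, bounded to (0, 1)

def tanh(x):
    return np.tanh(x)                       # bounded to (-1, 1), steeper gradients than sigmoid

def relu(x):
    return np.maximum(0.0, x)               # unbounded above, exactly 0 for negative inputs

def leaky_relu(x, leak=0.01):
    return np.where(x > 0, x, leak * x)     # small slope for x < 0 keeps the gradient non-zero

pre_activations = np.random.randn(100000)          # random pre-activations, roughly half negative
print((relu(pre_activations) == 0).mean())          # ~0.5: about half the units stay silent (sparse)
print((leaky_relu(pre_activations) == 0).mean())    # ~0.0: the leak keeps "dead" units slightly alive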
Arthur Juliani
3.5K
8
https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-8-asynchronous-actor-critic-agents-a3c-c88f72a5e9f2?source=tag_archive---------5----------------
Simple Reinforcement Learning with Tensorflow Part 8: Asynchronous Actor-Critic Agents (A3C)
In this article I want to provide a tutorial on implementing the Asynchronous Advantage Actor-Critic (A3C) algorithm in Tensorflow. We will use it to solve a simple challenge in a 3D Doom environment! With the holidays right around the corner, this will be my final post for the year, and I hope it will serve as a culmination of all the previous topics in the series. If you haven’t yet, or are new to Deep Learning and Reinforcement Learning, I suggest checking out the earlier entries in the series before going through this post in order to understand all the building blocks which will be utilized here. If you have been following the series: thank you! I have learned so much about RL in the past year, and am happy to have shared it with everyone through this article series. So what is A3C? The A3C algorithm was released by Google’s DeepMind group earlier this year, and it made a splash by... essentially obsoleting DQN. It was faster, simpler, more robust, and able to achieve much better scores on the standard battery of Deep RL tasks. On top of all that it could work in continuous as well as discrete action spaces. Given this, it has become the go-to Deep RL algorithm for new challenging problems with complex state and action spaces. In fact, OpenAI just released a version of A3C as their “universal starter agent” for working with their new (and very diverse) set of Universe environments. Asynchronous Advantage Actor-Critic is quite a mouthful. Let’s start by unpacking the name, and from there, begin to unpack the mechanics of the algorithm itself. Asynchronous: Unlike DQN, where a single agent represented by a single neural network interacts with a single environment, A3C utilizes multiple incarnations of the above in order to learn more efficiently. In A3C there is a global network, and multiple worker agents which each have their own set of network parameters. Each of these agents interacts with it’s own copy of the environment at the same time as the other agents are interacting with their environments. The reason this works better than having a single agent (beyond the speedup of getting more work done), is that the experience of each agent is independent of the experience of the others. In this way the overall experience available for training becomes more diverse. Actor-Critic: So far this series has focused on value-iteration methods such as Q-learning, or policy-iteration methods such as Policy Gradient. Actor-Critic combines the benefits of both approaches. In the case of A3C, our network will estimate both a value function V(s) (how good a certain state is to be in) and a policy π(s) (a set of action probability outputs). These will each be separate fully-connected layers sitting at the top of the network. Critically, the agent uses the value estimate (the critic) to update the policy (the actor) more intelligently than traditional policy gradient methods. Advantage: If we think back to our implementation of Policy Gradient, the update rule used the discounted returns from a set of experiences in order to tell the agent which of its actions were “good” and which were “bad.” The network was then updated in order to encourage and discourage actions appropriately. The insight of using advantage estimates rather than just discounted returns is to allow the agent to determine not just how good its actions were, but how much better they turned out to be than expected. Intuitively, this allows the algorithm to focus on where the network’s predictions were lacking. 
If you recall from the Dueling Q-Network architecture, the advantage function is as follows: A(s, a) = Q(s, a) − V(s). Since we won’t be determining the Q values directly in A3C, we can use the discounted returns (R) as an estimate of Q(s,a) to allow us to generate an estimate of the advantage. In this tutorial, we will go even further, and utilize a slightly different version of advantage estimation with lower variance referred to as Generalized Advantage Estimation. In the process of building this implementation of the A3C algorithm, I used as reference the quality implementations by DennyBritz and OpenAI, both of which I highly recommend if you’d like to see alternatives to my code here. Each section embedded here is taken out of context for instructional purposes, and won’t run on its own. To view and run the full, functional A3C implementation, see my Github repository. The general outline of the code architecture is as follows. The A3C algorithm begins by constructing the global network. This network will consist of convolutional layers to process spatial dependencies, followed by an LSTM layer to process temporal dependencies, and finally, value and policy output layers. Example code for establishing the network graph itself is in the Github repository. Next, a set of worker agents, each with its own network and environment, is created. Each of these workers is run on a separate processor thread, so there should be no more workers than there are threads on your CPU. ~ From here we go asynchronous ~ Each worker begins by setting its network parameters to those of the global network. We can do this by constructing a Tensorflow op which sets each variable in the local worker network to the equivalent variable value in the global network. Each worker then interacts with its own copy of the environment and collects experience. Each keeps a list of experience tuples (observation, action, reward, done, value) that is constantly added to from interactions with the environment. Once the worker’s experience history is large enough, we use it to determine discounted return and advantage, and use those to calculate value and policy losses (a small sketch of this computation appears a bit further below). We also calculate an entropy (H) of the policy. This corresponds to the spread of action probabilities. If the policy outputs actions with relatively similar probabilities, then entropy will be high, but if the policy suggests a single action with a large probability then entropy will be low. We use the entropy as a means of improving exploration, by encouraging the model to be conservative regarding its sureness of the correct action. A worker then uses these losses to obtain gradients with respect to its network parameters. Each of these gradients is typically clipped in order to prevent overly-large parameter updates which can destabilize the policy. A worker then uses the gradients to update the global network parameters. In this way, the global network is constantly being updated by each of the agents, as they interact with their environment. Once a successful update is made to the global network, the whole process repeats! The worker then resets its own network parameters to those of the global network, and the process begins again. To view the full and functional code, see the Github repository here. The robustness of A3C allows us to tackle a new generation of reinforcement learning challenges, one of which is 3D environments! We have come a long way from multi-armed bandits and grid-worlds, and in this tutorial, I have set up the code to allow for playing through the first VizDoom challenge.
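Here is the small sketch mentioned above of turning an experience buffer into discounted returns and generalized advantage estimates. The plain-numpy helpers and names are mine, not the exact code from the repository.

import numpy as np

def discount(values, gamma):
    # Discounted running sum, computed backwards through the episode.
    out = np.zeros_like(values, dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(values))):
        running = values[t] + gamma * running
        out[t] = running
    return out

def returns_and_advantages(rewards, value_estimates, bootstrap_value, gamma=0.99, lam=1.0):
    rewards = np.asarray(rewards, dtype=np.float64)
    values = np.asarray(list(value_estimates) + [bootstrap_value], dtype=np.float64)
    # Discounted returns R serve as the estimate of Q(s,a).
    discounted_returns = discount(np.append(rewards, bootstrap_value), gamma)[:-1]
    # Generalized Advantage Estimation: discounted sum of TD residuals.
    deltas = rewards + gamma * values[1:] - values[:-1]
    advantages = discount(deltas, gamma * lam)
    return discounted_returns, advantages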
VizDoom is a system to allow for RL research using the classic Doom game engine. The maintainers of VizDoom recently created a pip package, so installing it is as simple as pip install vizdoom. Once it is installed, we will be using the basic.wad environment, which is provided in the Github repository, and needs to be placed in the working directory (a minimal interaction-loop sketch is given at the end of this article). The challenge consists of controlling an avatar from a first-person perspective in a single square room. There is a single enemy on the opposite side of the room, which appears in a random location each episode. The agent can only move to the left or right, and fire a gun. The goal is to shoot the enemy as quickly as possible using as few bullets as possible. The agent has 300 time steps per episode to shoot the enemy. Shooting the enemy yields a reward of 1, and each time step as well as each shot yields a small penalty. After about 500 episodes per worker agent, the network learns a policy to quickly solve the challenge. Feel free to adjust parameters such as learning rate, clipping magnitude, update frequency, etc. to attempt to achieve ever greater performance or utilize A3C in your own RL tasks. I hope this tutorial has been helpful to those new to A3C and asynchronous reinforcement learning! Now go forth and build AIs. (There are a lot of moving parts in A3C, so if you discover a bug, or find a better way to do something, please don’t hesitate to bring it up here or in the Github. I am more than happy to incorporate changes and feedback to improve the algorithm.) If you’d like to follow my writing on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on twitter @awjuliani. If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated!
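For reference, a minimal interaction loop with the basic scenario looks roughly like the sketch below. It assumes the standard vizdoom Python API (DoomGame, scenario setup calls, make_action) and uses a random policy rather than the trained A3C worker; the exact configuration calls can vary between vizdoom versions.

import random
from vizdoom import DoomGame, Button

game = DoomGame()
game.set_doom_scenario_path("basic.wad")     # the scenario file mentioned above
game.set_doom_map("map01")
game.set_episode_timeout(300)                # 300 time steps per episode
game.add_available_button(Button.MOVE_LEFT)
game.add_available_button(Button.MOVE_RIGHT)
game.add_available_button(Button.ATTACK)
game.set_window_visible(False)
game.init()

actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # move left, move right, shoot
game.new_episode()
while not game.is_episode_finished():
    state = game.get_state()                 # screen buffer etc. that the network would consume
    reward = game.make_action(random.choice(actions))
print(game.get_total_reward())
game.close()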
Elle O'Brien
2.3K
6
https://towardsdatascience.com/romance-novels-generated-by-artificial-intelligence-1b31d9c872b2?source=tag_archive---------6----------------
Romance Novels, Generated by Artificial Intelligence
I’ve always been fascinated with romance novels — the kind they sell at the drugstore for a couple of dollars, usually with some attractive, soft-lit couples on the cover. So when I started futzing around with text-generating neural networks a few weeks ago, I developed an urgent curiosity to discover what artificial intelligence could contribute to the ever-popular genre. Maybe one day there will be entire books written by computers. For now, let’s start with titles. I gathered over 20,000 Harlequin Romance novel titles and gave them to a neural network, a type of artificial intelligence that learns the structure of text. It’s powerful enough to string together words in a way that seems almost human. 90% human. The other 10% is all wackiness. I was not disappointed with what came out. I even photoshopped some of my favorites into existence (the author names are synthesized from machine learning, too). Let’s have a look by theme: A common theme in romance novels is pregnancy, and the word “baby” had a strong showing in the titles I trained the neural network on. Naturally, the neural network came up with a lot of baby-themed titles: There’s an unusually high concentration of sheikhs, vikings, and billionaires in the Harlequin world. Likewise, the neural network generated some colorful new bachelor-types: I have so many questions. How is the prince pregnant? What sort of consulting does the count do? Who is Butterfly Earl? And what makes the sheikh’s desires so convenient? Although there are exceptions, most romance novels end in happily-ever-afters. A lot of them even start with an unexpected wedding — a marriage of convenience, or a stipulation of a business contract, or a sham that turns into real love. The neural network seems to have internalized something about matrimony: Doctors and surgeons are common paramours for mistresses headed towards the marriage valley: Christmas is a magical time for surgeons, sheikhs, playboys, dads, consultants, and the women who love them: What or where is Knith? I just like Mission: Christmas... This neural network has never seen the big Montana sky, but it has some questionable ideas about cowboys: The neural network generated some decidedly PG-13 titles: They can’t all live happily ever after. Some of the generated titles sounded like M. Night Shyamalan was a collaborator: How did the word “fear” get in there? It’s possible the network generated it without having “fear” in the training set, but a subset of the Harlequin empire is geared towards paranormal and gothic romance that might have included the word (*Note: I checked, and there was “Veil of Fear” published in 2012). To wrap it up, some of the adorable failures and near-misses generated by the neural network: I hope you’ve enjoyed computer-generated romance novel titles half as much as I have. Maybe someone out there can write about the Virgin Viking, or the Consultant Count, or the Baby Surgeon Seduction. I’d buy it. I built a webscraper in Python (thanks, Beautiful Soup!) that grabbed about 20,000 romance novel titles published under the Harlequin brand off of FictionDB.com. Harlequin is, to me, synonymous with the romance genre, although it comprises only a fraction (albeit a healthy one) of the entire market. I fed this list of book titles into a recurrent neural network, using software I got from GitHub, and waited a few hours for the magic to happen. The model I fit was a 3-layer, 256-node recurrent neural network. 
I also trained the network on the author list to create some new pen names. For more about the neural network I used, have a look at the fabulous work of Andrej Karpathy. I discovered that “Surgery by the Sea” is actually a real novel, written by Sheila Douglas and published in 1979! So, this one isn’t an original neural network creation. Because the training set is rather small (only about 1 MB of text data), it’s to be expected that sometimes, the machine will spit out one of the titles it was trained on. One of the more challenging aspects of this project was discerning when that happened, since the real published titles can be more surprising than anything born out of artificial intelligence. For example: “The $4.98 Daddy” and “6'1” Grinch” are both real. In fact, the very first romance novel published by Harlequin was called “The Manatee”.
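For readers who want to reproduce the scraping step described above, the general requests-plus-Beautiful-Soup pattern looks roughly like this; the URL and the CSS selector are placeholders, since the real FictionDB markup isn't shown here.

import requests
from bs4 import BeautifulSoup

def scrape_titles(url):
    # Fetch one listing page and pull the text of every title link on it.
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # "a.title-link" is a placeholder selector; the real site uses its own markup.
    return [a.get_text(strip=True) for a in soup.select("a.title-link")]

titles = scrape_titles("https://www.example.com/harlequin-titles?page=1")  # placeholder URL
with open("titles.txt", "w") as f:
    f.write("\n".join(titles))              # one title per line, ready for char-rnn training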
Slav Ivanov
2.9K
9
https://blog.slavv.com/picking-a-gpu-for-deep-learning-3d4795c273b9?source=tag_archive---------8----------------
Picking a GPU for Deep Learning – Slav
Quite a few people have asked me recently about choosing a GPU for Machine Learning. As it stands, success with Deep Learning heavily dependents on having the right hardware to work with. When I was building my personal Deep Learning box, I reviewed all the GPUs on the market. In this article, I’m going to share my insights about choosing the right graphics processor. Also, we’ll go over: Deep Learning (DL) is part of the field of Machine Learning (ML). DL works by approximating a solution to a problem using neural networks. One of the nice properties of about neural networks is that they find patterns in the data (features) by themselves. This is opposed to having to tell your algorithm what to look for, as in the olde times. However, often this means the model starts with a blank state (unless we are transfer learning). To capture the nature of the data from scratch the neural net needs to process a lot of information. There are two ways to do so — with a CPU or a GPU. The main computational module in a computer is the Central Processing Unit (better known as CPU). It is designed to do computation rapidly on a small amount of data. For example, multiplying a few numbers on a CPU is blazingly fast. But it struggles when operating on a large amount of data. E.g., multiplying matrices of tens or hundreds thousand numbers. Behind the scenes, DL is mostly comprised of operations like matrix multiplication. Amusingly, 3D computer games rely on these same operations to render that beautiful landscape you see in Rise of the Tomb Raider. Thus, GPUs were developed to handle lots of parallel computations using thousands of cores. Also, they have a large memory bandwidth to deal with the data for these computations. This makes them the ideal commodity hardware to do DL on. Or at least, until ASICs for Machine Learning like Google’s TPU make their way to market. For me, the most important reason for picking a powerful graphics processor is saving time while prototyping models. If the networks train faster the feedback time will be shorter. Thus, it would be easier for my brain to connect the dots between the assumptions I had for the model and its results. See Tim Dettmers’ answer to “Why are GPUs well-suited to deep learning?” on Quora for a better explanation. Also for an in-depth, albeit slightly outdated GPUs comparison see his article “Which GPU(s) to Get for Deep Learning”. There are main characteristics of a GPU related to DL are: There are two reasons for having multiple GPUs: you want to train several models at once, or you want to do distributed training of a single model. We’ll go over each one. Training several models at once is a great technique to test different prototypes and hyperparameters. It also shortens your feedback cycle and lets you try out many things at once. Distributed training, or training a single network on several video cards is slowly but surely gaining traction. Nowadays, there are easy to use approaches to this for Tensorflow and Keras (via Horovod), CNTK and PyTorch. The distributed training libraries offer almost linear speed-ups to the number of cards. For example, with 2 GPUs you get 1.8x faster training. PCIe Lanes (Updated): The caveat to using multiple video cards is that you need to be able to feed them with data. For this purpose, each GPU should have 16 PCIe lanes available for data transfer. Tim Dettmers points out that having 8 PCIe lanes per card should only decrease performance by “0–10%” for two GPUs. 
For a single card, any desktop processor and chipset like Intel i5 7500 and Asus TUF Z270 will use 16 lanes. However, for two GPUs, you can go 8x/8x lanes or get a processor AND a motherboard that support 32 PCIe lanes. 32 lanes are outside the realm of desktop CPUs. An Intel Xeon with a MSI — X99A SLI PLUS will do the job. For 3 or 4 GPUs, go with 8x lanes per card with a Xeon with 24 to 32 PCIe lanes. To have 16 PCIe lanes available for 3 or 4 GPUs, you need a monstrous processor. Something in the class of or AMD ThreadRipper (64 lanes) with a corresponding motherboard. Also, for more GPUs you need a faster processor and hard disk to be able to feed them data quickly enough, so they don’t sit idle. Nvidia has been focusing on Deep Learning for a while now, and the head start is paying off. Their CUDA toolkit is deeply entrenched. It works with all major DL frameworks — Tensoflow, Pytorch, Caffe, CNTK, etc. As of now, none of these work out of the box with OpenCL (CUDA alternative), which runs on AMD GPUs. I hope support for OpenCL comes soon as there are great inexpensive GPUs from AMD on the market. Also, some AMD cards support half-precision computation which doubles their performance and VRAM size. Currently, if you want to do DL and want to avoid major headaches, choose Nvidia. Your GPU needs a computer around it: Hard Disk: First, you need to read the data off the disk. An SSD is recommended here, but an HDD can work as well. CPU: That data might have to be decoded by the CPU (e.g. jpegs). Fortunately, any mid-range modern processor will do just fine. Motherboard: The data passes via the motherboard to reach the GPU. For a single video card, almost any chipset will work. If you are planning on working with multiple graphic cards, read this section. RAM: It is recommended to have 2 gigabytes of memory for every gigabyte of video card RAM. Having more certainly helps in some situations, like when you want to keep an entire dataset in memory. Power supply: It should provide enough power for the CPU and the GPUs, plus 100 watts extra. You can get all of this for $500 to $1000. Or even less if you buy a used workstation. Here is performance comparison between all cards. Check the individual card profiles below. Notably, the performance of Titan XP and GTX 1080 Ti is very close despite the huge price gap between them. The price comparison reveals that GTX 1080 Ti, GTX 1070 and GTX 1060 have great value for the compute performance they provide. All the cards are in the same league value-wise, except Titan XP. The king of the hill. When every GB of VRAM matters, this card has more than any other on the (consumer) market. It’s only a recommended buy if you know why you want it. For the price of Titan X, you could get two GTX 1080s, which is a lot of power and 16 GBs of VRAM. This card is what I currently use. It’s a great high-end option, with lots of RAM and high throughput. Very good value. I recommend this GPU if you can afford it. It works great for Computer Vision or Kaggle competitions. Quite capable mid to high-end card. The price was reduced from $700 to $550 when 1080 Ti was introduced. 8 GB is enough for most Computer Vision tasks. People regularly compete on Kaggle with these. The newest card in Nvidia’s lineup. If 1080 is over budget, this will get you the same amount of VRAM (8 GB). Also, 80% of the performance for 80% of the price. Pretty sweet deal. It’s hard to get these nowadays because they are used for cryptocurrency mining. 
With a considerable amount of VRAM for this price but somewhat slower. If you can get it (or a couple) second-hand at a good price, go for it. It’s quite cheap but 6 GB VRAM is limiting. That’s probably the minimum you want to have if you are doing Computer Vision. It will be okay for NLP and categorical data models. Also available as P106–100 for cryptocurrency mining, but it’s the same card without a display output. The entry-level card which will get you started but not much more. Still, if you are unsure about getting into Deep Learning, this might be a cheap way to get your feet wet. Titan X Pascal: It used to be the best consumer GPU Nvidia had to offer. Made obsolete by 1080 Ti, which has the same specs and is 40% cheaper. Tesla GPUs: This includes K40, K80 (which is 2x K40 in one), P100, and others. You might already be using these via Amazon Web Services, Google Cloud Platform, or another cloud provider. In my previous article, I did some benchmarks on GTX 1080 Ti vs. K40. The 1080 performed five times faster than the Tesla card and 2.5x faster than K80. K40 has 12 GB VRAM and K80 a whopping 24 GBs. In theory, the P100 and GTX 1080 Ti should be in the same league performance-wise. However, this cryptocurrency comparison has P100 lagging in every benchmark. It is worth noting that you can do half-precision on P100, effectively doubling the performance and VRAM size. On top of all this, K40 goes for over $2000, K80 for over $3000, and P100 is about $4500. And they still get eaten alive by a desktop-grade card. Obviously, as it stands, I don’t recommend getting them. All the specs in the world won’t help you if you don’t know what you are looking for. Here are my GPU recommendations depending on your budget: I have over $1000: Get as many GTX 1080 Ti or GTX 1080 as you can. If you have 3 or 4 GPUs running in the same box, beware of issues with feeding them with data. Also keep in mind the airflow in the case and the space on the motherboard. I have $700 to $900: GTX 1080 Ti is highly recommended. If you want to go multi-GPU, get 2x GTX 1070 (if you can find them) or 2x GTX 1070 Ti. Kaggle, here I come! I have $400 to $700: Get the GTX 1080 or GTX 1070 Ti. Maybe 2x GTX 1060 if you really want 2 GPUs. However, know that 6 GB per model can be limiting. I have $300 to $400: GTX 1060 will get you started. Unless you can find a used GTX 1070. I have less than $300: Get GTX 1050 Ti or save for GTX 1060 if you are serious about Deep Learning. Deep Learning has the great promise of transforming many areas of our life. Unfortunately, learning to wield this powerful tool requires good hardware. Hopefully, I’ve given you some clarity on where to start in this quest. Disclosure: The above are affiliate links, to help me pay for, well, more GPUs.
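To make the CPU-versus-GPU point from the beginning of this article concrete, here is a minimal timing sketch of the kind of large matrix multiplication that dominates deep learning workloads. It assumes PyTorch is installed and a CUDA device is available; the matrix size is arbitrary.

import time
import torch

n = 4096
a = torch.randn(n, n)
b = torch.randn(n, n)

start = time.time()
c_cpu = a @ b                               # large matrix multiply on the CPU
print("CPU seconds:", time.time() - start)

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    start = time.time()
    c_gpu = a_gpu @ b_gpu                   # the same multiply on the GPU
    torch.cuda.synchronize()                # wait for the asynchronous kernel to finish
    print("GPU seconds:", time.time() - start)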
Datafiniti
3
5
https://blog.datafiniti.co/classifying-websites-with-neural-networks-39123a464055?source=tag_archive---------0----------------
Classifying Websites with Neural Networks – Knowledge from Data: The Datafiniti Blog
At Datafiniti, we have a strong need for converting unstructured web content into structured data. For example, we’d like to find a page like: and do the following: Both of these are hard things for a computer to do in an automated manner. While it’s easy for you or me to realize that the above web page is selling some jeans, a computer would have a hard time making the distinction from the above page from either of the following web pages: Or Both of these pages share many similarities to the actual product page, but also have many key differences. The real challenge, though, is that if we look at the entire set of possible web pages, those similarities and differences become somewhat blurred, which means hard and fast rules for classifications will fail often. In fact, we can’t even rely on just looking at the underlying HTML, since there are huge variations in how product pages are laid out in HTML. While we could try and develop a complicated set of rules to account for all the conditions that perfectly identify a product page, doing so would be extremely time consuming, and frankly, incredibly boring work. Instead, we can try using a classical technique out of the artificial intelligence handbook: neural networks. Here’s a quick primer on neural networks. Let’s say we want to know whether any particular mushroom is poisonous or not. We’re not entirely sure what determines this, but we do have a record of mushrooms with their diameters and heights, along with which of these mushrooms were poisonous to eat, for sure. In order to see if we could use diameter and heights to determine poisonous-ness, we could set up the following equation: A * (diameter) + B * (height) = 0 or 1 for not-poisonous / poisonous We would then try various combinations of A and B for all possible diameters and heights until we found a combination that correctly determined poisonous-ness for as many mushrooms as possible. Neural networks provide a structure for using the output of one set of input data to adjust A and B to the most likely best values for the next set of input data. By constantly adjusting A and B this way, we can quickly get to the best possible values for them. In order to introduce more complex relationships in our data, we can introduce “hidden” layers in this model, which would end up looking something like: For a more detailed explanation of neural networks, you can check out the following links: In our product page classifier algorithm, we setup a neural network with 1 input layer with 27 nodes, 1 hidden layer with 25 nodes, and 1 output layer with 3 output nodes. Our input layer modeled several features, including: Our output layer had the following: Our algorithm for the neural network took the following steps: The ultimate output is two sets of input layers (T1 and T2), that we can use in a matrix equation to predict page type for any given web page. This works like so: So how did we do? In order to determine how successful we were in our predictions, we need to determine how to measure success. In general, we want to measure how many true positive (TP) results as compared to false positives (FP) and false negatives (FN). Conventional measurements for these are: Our implementation had the following results: These scores are just over our training set, of course. The actual scores on real-life data may be a bit lower, but not by much. This is pretty good! We should have an algorithm on our hands that can accurately classify product pages about 90% of the time. 
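The matrix equation and the conventional measurements referenced above can be sketched as follows. The layer shapes follow the 27-25-3 architecture described earlier, while the prepended bias term, the made-up weights, and the helper names are my own assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_page_type(features, T1, T2):
    # features: 27 inputs; T1: (25, 28) with a bias column; T2: (3, 26) with a bias column.
    hidden = sigmoid(T1.dot(np.append(1.0, features)))   # hidden layer activations
    output = sigmoid(T2.dot(np.append(1.0, hidden)))     # one score per page type
    return int(np.argmax(output))                        # index of the predicted page type

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1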
Of course, identifying product pages isn’t enough. We also want to pull out the actual structured data! In particular, we’re interested in product name, price, and any unique identifiers (e.g., UPC, EAN, & ISBN). This information would help us fill out our product search. We don’t actually use neural networks for doing this. Neural networks are better-suited toward classification problems, and extracting data from a web page is a different type of problem. Instead, we use a variety of heuristics specific to each attribute we’re trying to extract. For example, for product name, we look at the <h1> and <h2> tags, and use a few metrics to determine the best choice. We’ve been able to achieve around 80% accuracy here. We may go into the actual metrics and methodology for developing them in a separate post! We feel pretty good about our ability to classify and extract product data. The extraction part could be better, but it’s steadily being improved. In the meantime, we’re also working on classifying other types of pages, such as business data, company team pages, event data, and more. As we roll out these classifiers and data extractors, we’re including each one in our crawl of the entire Internet. This means that we can scan the entire Internet and pull out any available data that exists out there. Exciting stuff!
Yingjie Miao
43
6
https://medium.com/kifi-engineering/from-word2vec-to-doc2vec-an-approach-driven-by-chinese-restaurant-process-93d3602eaa31?source=tag_archive---------0----------------
From word2vec to doc2vec: an approach driven by Chinese restaurant process
Google’s word2vec project has created lots of interests in the text mining community. It’s a neural network language model that is “both supervised and unsupervised”. Unsupervised in the sense that you only have to provide a big corpus, say English wiki. Supervised in the sense that the model cleverly generates supervised learning tasks from the corpus. How? Two approaches, known as Continuous Bag of Words (CBOW) and Skip-Gram (See Figure 1 in this paper). CBOW forces the neural net to predict current word by surrounding words, and Skip-Gram forces the neural net to predict surrounding words of the current word. Training is essentially a classic back-propagation method with a few optimization and approximation tricks (e.g. hierarchical softmax). Word vectors generated by the neural net have nice semantic and syntactic behaviors. Semantically, “iOS” is close to “Android”. Syntactically, “boys” minus “boy” is close to “girls” minus “girl”. One can checkout more examples here. Although this provides high quality word vectors, there is still no clear way to combine them into a high quality document vector. In this article, we discuss one possible heuristic, inspired by a stochastic process called Chinese Restaurant Process (CRP). Basic idea is to use CRP to drive a clustering process and summing word vectors in the right cluster. Imagine we have an document about chicken recipe. It contains words like “chicken”, “pepper”, “salt”, “cheese”. It also contains words like “use”, “buy”, “definitely”, “my”, “the”. The word2vec model gives us a vector for each word. One could naively sum up every word vector as the doc vector. This clearly introduces lots of noise. A better heuristic is to use a weighted sum, based on other information like idf or Part of Speech (POS) tag. The question is: could we be more selective when adding terms? If this is a chicken recipe document, I shouldn’t even consider words like “definitely”, “use”, “my” in the summation. One can argue that idf based weights can significantly reduce noise of boring words like “the” and “is”. However, for words like “definitely”, “overwhelming”, the idfs are not necessarily small as you would hope. It’s natural to think that if we can first group words into clusters, words like “chicken”, “pepper” may stay in one cluster, along with other clusters of “junk” words. If we can identify the “relevant” clusters, and only summing up word vectors from relevant clusters, we should have a good doc vector. This boils down to clustering the words in the document. One can of course use off-the-shelf algorithms like K-means, but most these algorithms require a distance metric. Word2vec behaves nicely by cosine similarity, this doesn’t necessarily mean it behaves as well under Eucledian distance (even after projection to unit sphere, it’s perhaps best to use geodesic distance.) It would be nice if we can directly work with cosine similarity. We have done a quick experiment on clustering words driven by CRP-like stochastic process. It worked surprisingly well — so far. Now let’s explain CRP. Imagine you go to a (Chinese) restaurant. There are already n tables with different number of peoples. There is also an empty table. CRP has a hyperparamter r > 0, which can be regarded as the “imagined” number of people on the empty table. You go to one of the (n+1) tables with probability proportional to existing number of people on the table. (For the empty table, the number is r). If you go to one of the n existing tables, you are done. 
If you decide to sit down at the empty table, the Chinese restaurant will automatically create a new empty table. In that case, the next customer comes in will choose from (n+2) tables (including the new empty table). Inspired by CRP, we tried the following variations of CRP to include the similarity factor. Common setup is the following: we are given M vectors to be clustered. We maintain two things: cluster sum (not centroid!), and vectors in clusters. We iterate through vectors. For current vector V, suppose we have n clusters already. Now we find the cluster C whose cluster sum is most similar to current vector. Call this score sim(V, C). Variant 1: v creates a new cluster with probability 1/(1 + n). Otherwise v goes to cluster C. Variant 2: If sim(V, C) > 1/(1 + n), goes to cluster C. Otherwise with probability 1/(1+n) it creates a new cluster and with probability n/(1+n) it goes to C. In any of the two variants, if v goes to a cluster, we update cluster sum and cluster membership. There is one distinct difference to traditional CRP: if we don’t go to empty table, we deterministically go to the “most similar” table. In practice, we find these variants create similar results. One difference is that variant 1 tend to have more clusters and smaller clusters, variant 2 tend to have fewer but larger clusters. The examples below are from variant 2. For example, for a chicken recipe document, the clusters look like this: Apparently, the first cluster is most relevant. Now let’s take the cluster sum vector (which is the sum of all vectors from this cluster), and test if it really preserves semantic. Below is a snippet of python console. We trained word vector using the c implementation on a fraction of English Wiki, and read the model file using python library gensim.model.word2vec. c[0] below denotes the cluster 0. Looks like the semantic is preserved well. It’s convincing that we can use this as the doc vector. The recipe document seems easy. Now let’s try something more challenging, like a news article. News articles tend to tell stories, and thus has less concentrated “topic words”. We tried the clustering on this article, titled “Signals on Radar Puzzle Officials in Hunt for Malaysian Jet”. We got 4 clusters: Again, looks decent. Note that this is a simple 1-pass clustering process and we don’t have to specify number of clusters! Could be very helpful for latency sensitive services. There is still a missing step: how to find out the relevant cluster(s)? We haven’t yet done extensive experiments on this part. A few heuristics to consider: There are other problems to think about: 1) how do we merge clusters? Based on similarity among cluster sum vectors? Or averaging similarity between cluster members? 2) what is the minimal set of words that can reconstruct cluster sum vector (in the sense of cosine similarity)? This could be used as a semantic keyword extraction method. Conclusion: Google’s word2vec provides powerful word vectors. We are interested in using these vectors to generate high quality document vectors in an efficient way. We tried a strategy based on a variant of Chinese Restaurant Process and obtained interesting results. There are some open problems to explore, and we would like to hear what you think. Appendix: python style pseudo-code for similarity driven CRP We wrote this post while working on Kifi — Connecting people with knowledge. Learn more. Originally published at eng.kifi.com on March 17, 2014. 
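The appendix pseudo-code referenced above did not survive into this text, so here is a sketch of variant 2 as described: word vectors are assigned greedily to the cluster whose sum is most similar, with a CRP-style chance of opening a new cluster. Cosine similarity is computed with plain numpy, and the helper names are mine.

import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def crp_cluster(word_vectors):
    # Each cluster keeps a running sum of its member vectors and the member indices.
    cluster_sums, cluster_members = [], []
    for i, v in enumerate(word_vectors):
        if not cluster_sums:
            cluster_sums.append(np.array(v, dtype=float))
            cluster_members.append([i])
            continue
        n = len(cluster_sums)
        sims = [cosine(v, s) for s in cluster_sums]
        best = int(np.argmax(sims))
        # Variant 2: join the most similar cluster if it is similar enough,
        # otherwise open a new cluster with probability 1 / (1 + n).
        if sims[best] > 1.0 / (1 + n) or np.random.rand() >= 1.0 / (1 + n):
            cluster_sums[best] += v
            cluster_members[best].append(i)
        else:
            cluster_sums.append(np.array(v, dtype=float))
            cluster_members.append([i])
    return cluster_sums, cluster_members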