{"id": "06edf116f01d5ef4bb68d6fa102c6127", "title": "Understanding Convolutions on Graphs", "url": "https://distill.pub/2021/understanding-gnns", "source": "distill", "source_type": "blog", "text": "### Contents\n\n\n[Introduction](#introduction)\n[The Challenges of Computation on Graphs](#challenges)\n* [Lack of Consistent Structure](#lack-of-consistent-structure)\n* [Node-Order Equivariance](#node-order)\n* [Scalability](#scalability)\n\n\n[Problem Setting and Notation](#problem-and-notation)\n[Extending Convolutions to Graphs](#extending)\n[Polynomial Filters on Graphs](#polynomial-filters)\n[Modern Graph Neural Networks](#modern-gnns)\n[Interactive Graph Neural Networks](#interactive)\n[From Local to Global Convolutions](#from-local-to-global)\n* [Spectral Convolutions](#spectral)\n* [Global Propagation via Graph Embeddings](#graph-embeddings)\n\n\n[Learning GNN Parameters](#learning)\n\n[Conclusions and Further Reading](#further-reading)\n* [GNNs in Practice](#practical-techniques)\n* [Different Kinds of Graphs](#different-kinds-of-graphs)\n* [Pooling](#pooling)\n\n\n[Supplementary Material](#supplementary)\n* [Reproducing Experiments](#experiments-notebooks)\n* [Recreating Visualizations](#visualizations-notebooks)\n\n\n\n\n\n*This article is one of two Distill publications about graph neural networks.\n Take a look at\n [A Gentle Introduction to Graph Neural Networks](https://distill.pub/2021/gnn-intro/)\n\n for a companion view on many things graph and neural network related.* \n\n\n\n Many systems and interactions - social networks, molecules, organizations, citations, physical models, transactions - can be represented quite naturally as graphs.\n How can we reason about and make predictions within these systems?\n \n\n\n\n One idea is to look at tools that have worked well in other domains: neural networks have shown immense predictive power in a variety of learning tasks.\n However, neural networks have been traditionally used to operate on fixed-size and/or regular-structured inputs (such as sentences, images and video).\n This makes them unable to elegantly process graph-structured data.\n \n\n\n\n![Neural networks generally operate on fixed-size input vectors. How do we input a graph to a neural network?](images/standard-neural-networks.svg \"How do we input a graph to a neural network?\")\n\n\n\n Graph neural networks (GNNs) are a family of neural networks that can operate naturally on graph-structured data. 
\n By extracting and utilizing features from the underlying graph,\n GNNs can make more informed predictions about entities in these interactions,\n as compared to models that consider individual entities in isolation.\n \n\n\n\n GNNs are not the only tools available to model graph-structured data:\n graph kernels \n and random-walk methods \n were some of the most popular ones.\n Today, however, GNNs have largely replaced these techniques\n because of their inherent flexibility to model the underlying systems\n better.\n \n\n\n\n In this article, we will illustrate\n the challenges of computing over graphs, \n describe the origin and design of graph neural networks,\n and explore the most popular GNN variants in recent times.\n Particularly, we will see that many of these variants\n are composed of similar building blocks.\n \n\n\n\n First, let’s discuss some of the complications that graphs come with.\n \n\n\n\n The Challenges of Computation on Graphs\n-----------------------------------------\n\n\n### \n Lack of Consistent Structure\n\n\n\n Graphs are extremely flexible mathematical models; but this means they lack consistent structure across instances.\n Consider the task of predicting whether a given chemical molecule is toxic  :\n \n\n\n\n![The molecular structure of non-toxic 1,2,6-trigalloyl-glucose.](images/1,2,6-trigalloyl-glucose-molecule.svg)\n![The molecular structure of toxic caramboxin.](images/caramboxin-molecule.svg)\n\n\n\n**Left:** A non-toxic 1,2,6-trigalloyl-glucose molecule.\n\n\n**Right:** A toxic caramboxin molecule.\n\n\n\n Looking at a few examples, the following issues quickly become apparent:\n \n\n\n* Molecules may have different numbers of atoms.\n* The atoms in a molecule may be of different types.\n* Each of these atoms may have different number of connections.\n* These connections can have different strengths.\n\n\n\n Representing graphs in a format that can be computed over is non-trivial,\n and the final representation chosen often depends significantly on the actual problem.\n \n\n\n### \n Node-Order Equivariance\n\n\n\n Extending the point above: graphs often have no inherent ordering present amongst the nodes.\n Compare this to images, where every pixel is uniquely determined by its absolute position within the image!\n \n\n\n\n![Representing the graph as one vector requires us to fix an order on the nodes. But what do we do when the nodes have no inherent order?](images/node-order-alternatives.svg)\n\n Representing the graph as one vector requires us to fix an order on the nodes.\n But what do we do when the nodes have no inherent order?\n **Above:** \n The same graph labelled in two different ways. The alphabets indicate the ordering of the nodes.\n \n\n\n As a result, we would like our algorithms to be node-order equivariant:\n they should not depend on the ordering of the nodes of the graph.\n If we permute the nodes in some way, the resulting representations of \n the nodes as computed by our algorithms should also be permuted in the same way.\n \n\n\n### \n Scalability\n\n\n\n Graphs can be really large! Think about social networks like Facebook and Twitter, which have over a billion users. 
\n Operating on data this large is not easy.\n \n\n\n\n Luckily, most naturally occuring graphs are ‘sparse’:\n they tend to have their number of edges linear in their number of vertices.\n We will see that this allows the use of clever methods\n to efficiently compute representations of nodes within the graph.\n Further, the methods that we look at here will have significantly fewer parameters\n in comparison to the size of the graphs they operate on.\n \n\n\n\n Problem Setting and Notation\n------------------------------\n\n\n\n There are many useful problems that can be formulated over graphs:\n \n\n\n* **Node Classification:** Classifying individual nodes.\n* **Graph Classification:** Classifying entire graphs.\n* **Node Clustering:** Grouping together similar nodes based on connectivity.\n* **Link Prediction:** Predicting missing links.\n* **Influence Maximization:** Identifying influential nodes.\n\n\n\n![Examples of problems that can be defined over graphs.](images/graph-tasks.svg)\n\n Examples of problems that can be defined over graphs.\n This list is not exhaustive!\n \n\n\n A common precursor in solving many of these problems is **node representation learning**:\n learning to map individual nodes to fixed-size real-valued vectors (called ‘representations’ or ‘embeddings’).\n \n\n\n\n In [Learning GNN Parameters](#learning), we will see how the learnt embeddings can be used for these tasks.\n \n\n\n\n Different GNN variants are distinguished by the way these representations are computed.\n Generally, however, GNNs compute node representations in an iterative process.\n We will use the notation hv(k)h\\_v^{(k)}hv(k)​ to indicate the representation of node vvv after the kthk^{\\text{th}}kth iteration.\n Each iteration can be thought of as the equivalent of a ‘layer’ in standard neural networks.\n \n\n\n\n We will define a graph GGG as a set of nodes, VVV, with a set of edges EEE connecting them.\n Nodes can have individual features as part of the input: we will denote by xvx\\_vxv​ the individual feature for node v∈Vv \\in Vv∈V.\n For example, the ‘node features’ for a pixel in a color image\n would be the red, green and blue channel (RGB) values at that pixel.\n \n\n\n\n For ease of exposition, we will assume GGG is undirected, and all nodes are of the same type.\n These kinds of graphs are called ‘homogeneous’.\n Many of the same ideas we will see here \n apply to other kinds of graphs:\n we will discuss this later in [Different Kinds of Graphs](#different-kinds-of-graphs).\n \n\n\n\n Sometimes we will need to denote a graph property by a matrix MMM,\n where each row MvM\\_vMv​ represents a property corresponding to a particular vertex vvv.\n \n\n\n\n Extending Convolutions to Graphs\n----------------------------------\n\n\n\n Convolutional Neural Networks have been seen to be quite powerful in extracting features from images.\n However, images themselves can be seen as graphs with a very regular grid-like structure,\n where the individual pixels are nodes, and the RGB channel values at each pixel as the node features.\n \n\n\n\n A natural idea, then, is to consider generalizing convolutions to arbitrary graphs. 
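Throughout, it will help to keep in mind how such a graph is actually stored. Here is a minimal sketch (assuming NumPy and SciPy; the toy graph and its features are made up for illustration) of the notation from the previous section: a sparse $0$-$1$ adjacency matrix for the edges of $G$, together with a matrix whose rows are the node features $x\_v$:

```python
import numpy as np
import scipy.sparse as sp

# A toy undirected graph on 4 nodes: edges (0,1), (0,2), (1,2), (2,3).
edges = np.array([[0, 1], [0, 2], [1, 2], [2, 3]])

n = 4
# Sparse 0-1 adjacency matrix; add both directions since the graph is undirected.
rows = np.concatenate([edges[:, 0], edges[:, 1]])
cols = np.concatenate([edges[:, 1], edges[:, 0]])
A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n)).tocsr()

# One feature vector x_v per node (here 3-dimensional, like RGB values at a pixel).
X = np.random.rand(n, 3)

print(A.toarray())   # dense view of the adjacency matrix
print(X.shape)       # (4, 3)
```

Because most naturally occurring graphs are sparse, storing the adjacency matrix in a sparse format keeps memory roughly linear in the number of edges.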
Recall, however, the challenges listed out in the [previous section](#challenges): in particular, ordinary convolutions are not node-order invariant, because they depend on the absolute positions of pixels. It is initially unclear how to generalize convolutions over grids to convolutions over general graphs, where the neighbourhood structure differs from node to node.

The curious reader may wonder if performing some sort of padding and ordering could be done to ensure the consistency of neighbourhood structure across nodes. This has been attempted with some success, but the techniques we will look at here are more general and powerful.

Convolutions in CNNs are inherently localized. Neighbours participating in the convolution at the center pixel are highlighted in gray.

GNNs can perform localized convolutions mimicking CNNs. Hover over a node to see its immediate neighbourhood highlighted on the left. The structure of this neighbourhood changes from node to node.

We begin by introducing the idea of constructing polynomial filters over node neighbourhoods, much like how CNNs compute localized filters over neighbouring pixels. Then, we will see how more recent approaches extend on this idea with more powerful mechanisms. Finally, we will discuss alternative methods that can use 'global' graph-level information for computing node representations.

Polynomial Filters on Graphs
------------------------------

### The Graph Laplacian

Given a graph $G$, let us fix an arbitrary ordering of the $n$ nodes of $G$. Denoting the $0$-$1$ adjacency matrix of $G$ by $A$, we can construct the diagonal degree matrix $D$ of $G$ as:

$$D\_v = \sum\_u A\_{vu}.$$

where $A\_{vu}$ denotes the entry in the row corresponding to $v$ and the column corresponding to $u$ in the matrix $A$. (The degree of node $v$ is the number of edges incident at $v$.)
We will use this notation throughout this section.

Then, the graph Laplacian $L$ is the square $n \times n$ matrix defined as:

$$L = D - A.$$

![](images/laplacian.svg)

The Laplacian $L$ for an undirected graph $G$, with the row corresponding to node $\textsf{C}$ highlighted. Zeros in $L$ are not displayed above. The Laplacian $L$ depends only on the structure of the graph $G$, not on any node features.

The graph Laplacian gets its name from being the discrete analog of the [Laplacian operator](https://mathworld.wolfram.com/Laplacian.html) from calculus.

Although it encodes precisely the same information as the adjacency matrix $A$ (in the sense that given either of the matrices $A$ or $L$, you can construct the other), the graph Laplacian has many interesting properties of its own. The graph Laplacian shows up in many mathematical problems involving graphs: [random walks](https://people.math.sc.edu/lu/talks/nankai_2014/spec_nankai_2.pdf), [spectral clustering](https://arxiv.org/abs/0711.0189), and [diffusion](https://www.math.fsu.edu/~bertram/lectures/Diffusion.pdf), to name a few. We will see some of these properties in [a later section](#spectral), but will instead point readers to [this tutorial](https://csustan.csustan.edu/~tom/Clustering/GraphLaplacian-tutorial.pdf) for greater insight into the graph Laplacian.

### Polynomials of the Laplacian

Now that we have understood what the graph Laplacian is, we can build polynomials of the form:

$$p\_w(L) = w\_0 I\_n + w\_1 L + w\_2 L^2 + \ldots + w\_d L^d = \sum\_{i = 0}^d w\_i L^i.$$

Each polynomial of this form can alternately be represented by its vector of coefficients $w = [w\_0, \ldots, w\_d]$. Note that for every $w$, $p\_w(L)$ is an $n \times n$ matrix, just like $L$.

These polynomials can be thought of as the equivalent of 'filters' in CNNs, and the coefficients $w$ as the weights of the 'filters'.

For ease of exposition, we will focus on the case where nodes have one-dimensional features: each of the $x\_v$ for $v \in V$ is just a real number.
The same ideas hold when each of the $x\_v$ are higher-dimensional vectors, as well.

Using the previously chosen ordering of the nodes, we can stack all of the node features $x\_v$ to get a vector $x \in \mathbb{R}^n$.

![Fixing a node order and collecting all node features into a single vector.](images/node-order-vector.svg)

Fixing a node order (indicated by the letters) and collecting all node features into a single vector $x$.

Once we have constructed the feature vector $x$, we can define its convolution with a polynomial filter $p\_w$ as:

$$x' = p\_w(L) \ x$$

To understand how the coefficients $w$ affect the convolution, let us begin by considering the 'simplest' polynomial: when $w\_0 = 1$ and all of the other coefficients are $0$. In this case, $x'$ is just $x$:

$$x' = p\_w(L) \ x = \sum\_{i = 0}^d w\_i L^i x = w\_0 I\_n x = x.$$

Now, if we increase the degree and consider the case where instead $w\_1 = 1$ and all of the other coefficients are $0$, then $x'$ is just $Lx$, and so:

$$\begin{aligned} x'\_v = (Lx)\_v &= L\_v x \\ &= \sum\_{u \in G} L\_{vu} x\_u \\ &= \sum\_{u \in G} (D\_{vu} - A\_{vu}) x\_u \\ &= D\_v \ x\_v - \sum\_{u \in \mathcal{N}(v)} x\_u \end{aligned}$$

We see that the features at each node $v$ are combined with the features of its immediate neighbours $u \in \mathcal{N}(v)$. For readers familiar with [Laplacian filtering of images](https://docs.opencv.org/3.4/d5/db5/tutorial_laplace_operator.html), this is the exact same idea. When $x$ is an image, $x' = Lx$ is exactly the result of applying a 'Laplacian filter' to $x$.

At this point, a natural question to ask is: how does the degree $d$ of the polynomial influence the behaviour of the convolution? Indeed, it is not too hard to show that:

This is Lemma 5.2 from .

$$\text{dist}\_G(v, u) > i \quad \Longrightarrow \quad L\_{vu}^i = 0.$$

This implies, when we convolve $x$ with $p\_w(L)$ of degree $d$ to get $x'$:

$$\begin{aligned} x'\_v = (p\_w(L)x)\_v &= (p\_w(L))\_v x \\ &= \sum\_{i = 0}^d w\_i L\_v^i x \\ &= \sum\_{i = 0}^d w\_i \sum\_{u \in G} L\_{vu}^i x\_u \\ &= \sum\_{i = 0}^d w\_i \sum\_{u \in G \atop \text{dist}\_G(v, u) \leq i} L\_{vu}^i x\_u. \end{aligned}$$

Effectively, the convolution at node $v$ occurs only with nodes $u$ which are not more than $d$ hops away. Thus, these polynomial filters are localized.
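As a minimal sketch of this locality (assuming NumPy; the small path graph and the filter coefficients are made up for illustration), we can build $L$ from $A$, evaluate a polynomial filter $p\_w(L)$, and check that a degree-$2$ filter leaves nodes more than $2$ hops away untouched:

```python
import numpy as np

# A path graph on 5 nodes: 0 - 1 - 2 - 3 - 4.
A = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[u, v] = A[v, u] = 1

D = np.diag(A.sum(axis=1))   # diagonal degree matrix
L = D - A                    # graph Laplacian

def poly_filter(L, w):
    """Evaluate p_w(L) = w_0 I + w_1 L + ... + w_d L^d."""
    return sum(w_i * np.linalg.matrix_power(L, i) for i, w_i in enumerate(w))

x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # a feature 'spike' at node 0

# A degree-2 filter: nodes more than 2 hops from node 0 are unaffected.
x_new = poly_filter(L, w=[0.5, -1.0, 0.25]) @ x
print(x_new)   # the entries for nodes 3 and 4 remain exactly 0
```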
The degree of the localization is governed completely by $d$.

To help you understand these 'polynomial-based' convolutions better, we have created the visualization below. Vary the polynomial coefficients and the input grid $x$ to see how the result $x'$ of the convolution changes. The grid under the arrow shows the equivalent convolutional kernel applied at the highlighted pixel in $x$ to get the resulting pixel in $x'$. The kernel corresponds to the row of $p\_w(L)$ for the highlighted pixel. Note that even after adjusting for position, this kernel is different for different pixels, depending on their position within the grid.

Hover over a pixel in the input grid (left, representing $x$) to highlight it and see the equivalent convolutional kernel for that pixel under the arrow. The result $x'$ of the convolution is shown on the right: note that different convolutional kernels are applied at different pixels, depending on their location.

Click on the input grid to toggle pixel values between $0$ (white) and $1$ (blue). To randomize the input grid, press 'Randomize Grid'.
To reset all pixels to 000, press ‘Reset Grid’.\n Use the sliders at the bottom to change the coefficients www.\n To reset all coefficients www to 000, press ‘Reset Coefficients.’\n \n\n\n\n\n### \n ChebNet\n\n\n\n ChebNet refines this idea of polynomial filters by looking at polynomial filters of the form:\n \npw(L)=∑i=1dwiTi(L~)\n p\\_w(L) = \\sum\\_{i = 1}^d w\\_i T\\_i(\\tilde{L})\n pw​(L)=i=1∑d​wi​Ti​(L~)\n\n where TiT\\_iTi​ is the degree-iii\n[Chebyshev polynomial of the first kind](https://en.wikipedia.org/wiki/Chebyshev_polynomials) and\n L~\\tilde{L}L~ is the normalized Laplacian defined using the largest eigenvalue of LLL:\n \n We discuss the eigenvalues of the Laplacian LLL in more detail in [a later section](#spectral).\n \n\nL~=2Lλmax(L)−In.\n \\tilde{L} = \\frac{2L}{\\lambda\\_{\\max}(L)} - I\\_n.\n L~=λmax​(L)2L​−In​.\n\n What is the motivation behind these choices?\n \n\n\n* LLL is actually positive semi-definite: all of the eigenvalues of LLL are not lesser than 000.\n If λmax(L)>1\\lambda\\_{\\max}(L) > 1λmax​(L)>1, the entries in the powers of LLL rapidly increase in size.\n L~\\tilde{L}L~ is effectively a scaled-down version of LLL, with eigenvalues guaranteed to be in the range [−1,1][-1, 1][−1,1].\n This prevents the entries of powers of L~\\tilde{L}L~ from blowing up.\n Indeed, in the [visualization above](#polynomial-convolutions): we restrict the higher-order coefficients\n when the unnormalized Laplacian LLL is selected, but allow larger values when the normalized Laplacian L~\\tilde{L}L~ is selected,\n in order to show the result x′x’x′ on the same color scale.\n* The Chebyshev polynomials have certain interesting properties that make interpolation more numerically stable.\n We won’t talk about this in more depth here,\n but will advise interested readers to take a look at as a definitive resource.\n\n\n### \n Polynomial Filters are Node-Order Equivariant\n\n\n\n The polynomial filters we considered here are actually independent of the ordering of the nodes.\n This is particularly easy to see when the degree of the polynomial pwp\\_wpw​ is 111:\n where each node’s feature is aggregated with the sum of its neighbour’s features.\n Clearly, this sum does not depend on the order of the neighbours.\n A similar proof follows for higher degree polynomials:\n the entries in the powers of LLL are equivariant to the ordering of the nodes.\n \n\n\n\n**Details for the Interested Reader**\n\n As above, let’s assume an arbitrary node-order over the nnn nodes of our graph.\n Any other node-order can be thought of as a permutation of this original node-order.\n We can represent any permutation by a\n [permutation matrix](https://en.wikipedia.org/wiki/Permutation_matrix) PPP.\n PPP will always be an orthogonal 0−10-10−1 matrix:\n PPT=PTP=In.\n PP^T = P^TP = I\\_n.\n PPT=PTP=In​.\n Then, we call a function fff node-order equivariant iff for all permutations PPP:\n f(Px)=Pf(x).\n f(Px) = P f(x).\n f(Px)=Pf(x).\n\n When switching to the new node-order using the permutation PPP,\n the quantities below transform in the following way:\n x→PxL→PLPTLi→PLiPT\n \\begin{aligned}\n x &\\to Px \\\\\n L &\\to PLP^T \\\\\n L^i &\\to PL^iP^T\n \\end{aligned}\n xLLi​→Px→PLPT→PLiPT​\n and so, for the case of polynomial filters where f(x)=pw(L) xf(x) = p\\_w(L) \\ xf(x)=pw​(L) x, we can see that:\n f(Px)=∑i=0dwi(PLiPT)(Px)=P∑i=0dwiLix=Pf(x).\n \\begin{aligned}\n f(Px) & = \\sum\\_{i = 0}^d w\\_i (PL^iP^T) (Px) \\\\\n & = P \\sum\\_{i = 0}^d w\\_i L^i x \\\\\n & = P f(x).\n \\end{aligned}\n 
as claimed.

### Embedding Computation

We now describe how we can build a graph neural network by stacking ChebNet (or any polynomial filter) layers one after the other with non-linearities, much like a standard CNN. In particular, if we have $K$ different polynomial filter layers, the $k^{\text{th}}$ of which has its own learnable weights $w^{(k)}$, we would perform the following computation: starting from the input features $h^{(0)} = x$, each layer applies its polynomial filter to the previous layer's output and then an elementwise non-linearity $\sigma$, giving $h^{(k)} = \sigma\left(p\_{w^{(k)}}(L) \, h^{(k - 1)}\right)$.

Note that these networks reuse the same filter weights across different nodes, exactly mimicking weight-sharing in Convolutional Neural Networks (CNNs) which reuse weights for convolutional filters across a grid.
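As a minimal sketch of this stacking (assuming NumPy; the layer count, coefficients, and the use of ReLU as the non-linearity are illustrative choices, not prescribed by the text), a polynomial-filter GNN with one-dimensional node features might look like:

```python
import numpy as np

def poly_filter(L, w):
    """p_w(L) = w_0 I + w_1 L + ... + w_d L^d."""
    return sum(w_i * np.linalg.matrix_power(L, i) for i, w_i in enumerate(w))

def polynomial_gnn(L, x, weights_per_layer):
    """Stack polynomial filter layers with a non-linearity in between.

    weights_per_layer[k] holds the coefficients w^(k) of the k-th layer's filter.
    """
    h = x                                                  # h^(0) = x
    for w_k in weights_per_layer:
        h = np.maximum(poly_filter(L, w_k) @ h, 0.0)       # h^(k) = relu(p_{w^(k)}(L) h^(k-1))
    return h

# Example: the 5-node path graph from the earlier sketch, two layers of degree-2 filters.
A = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[u, v] = A[v, u] = 1
L = np.diag(A.sum(axis=1)) - A

x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
h = polynomial_gnn(L, x, weights_per_layer=[[0.5, -1.0, 0.25], [0.0, 1.0, 0.0]])
print(h)   # stacking two degree-2 filters gives an effective receptive field of 4 hops
```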
Modern Graph Neural Networks
------------------------------

ChebNet was a breakthrough in learning localized filters over graphs, and it motivated many to think of graph convolutions from a different perspective.

We return to the result of convolving $x$ by the polynomial kernel $p\_w(L) = L$, focussing on a particular vertex $v$:

$$\begin{aligned} (Lx)\_v &= L\_v x \\ &= \sum\_{u \in G} L\_{vu} x\_u \\ &= \sum\_{u \in G} (D\_{vu} - A\_{vu}) x\_u \\ &= D\_v \ x\_v - \sum\_{u \in \mathcal{N}(v)} x\_u \end{aligned}$$

As we noted before, this is a $1$-hop localized convolution. But more importantly, we can think of this convolution as arising from two steps:

* Aggregating over immediate neighbour features $x\_u$.
* Combining with the node's own feature $x\_v$.

**Key Idea:** What if we consider different kinds of 'aggregation' and 'combination' steps, beyond what are possible using polynomial filters?

By ensuring that the aggregation is node-order equivariant, the overall convolution becomes node-order equivariant.

These convolutions can be thought of as 'message-passing' between adjacent nodes: after each step, every node receives some 'information' from its neighbours.

By iteratively repeating the $1$-hop localized convolutions $K$ times (i.e., repeatedly 'passing messages'), the receptive field of the convolution effectively includes all nodes up to $K$ hops away.

### Embedding Computation

Message-passing forms the backbone of many GNN architectures today. We describe the most popular ones in depth below:

* Graph Convolutional Networks (GCN)
* Graph Attention Networks (GAT)
* Graph Sample and Aggregate (GraphSAGE)
* Graph Isomorphism Network (GIN)

### Thoughts

An interesting point is to assess different aggregation functions: are some better and others worse? demonstrates that aggregation functions indeed can be compared on how well they can uniquely preserve node neighbourhood features; we recommend the interested reader take a look at the detailed theoretical analysis there.

Here, we've talked about GNNs where the computation only occurs at the nodes. More recent GNN models such as Message-Passing Neural Networks and Graph Networks perform computation over the edges as well; they compute edge embeddings together with node embeddings. This is an even more general framework - but the same 'message passing' ideas from this section apply.

Interactive Graph Neural Networks
-----------------------------------

Below is an interactive visualization of these GNN models on small graphs. For clarity, the node features are just real numbers here, shown inside the squares next to each node, but the same equations hold when the node features are vectors.

Choose a GNN model using the tabs at the top. Click on a node to see the update equation at that node for the next iteration. Use the sliders on the left to change the weights for the current iteration, and watch how the update equation changes.
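To make these updates concrete, here is a minimal sketch of a single message-passing layer (assuming NumPy) that mean-aggregates neighbour features and combines them with each node's own features, in the spirit of the GCN update $h^{(k)} = D^{-1} A \, h^{(k-1)} {W^{(k)}}^T + h^{(k-1)} {B^{(k)}}^T$ discussed later in [GNNs in Practice](#practical-techniques); the ReLU non-linearity and the weight shapes are illustrative choices:

```python
import numpy as np

def gcn_layer(A, H, W, B):
    """One message-passing iteration, GCN-style.

    A: (n, n) adjacency matrix, H: (n, d_in) node features,
    W, B: (d_out, d_in) learnable weight matrices.
    """
    deg = A.sum(axis=1, keepdims=True)              # node degrees
    neighbour_mean = (A @ H) / np.maximum(deg, 1)   # aggregate: average of neighbours' features
    return np.maximum(neighbour_mean @ W.T + H @ B.T, 0.0)   # combine with own features, then ReLU

# Toy example: 4 nodes, 2-dimensional input features, 3-dimensional output embeddings.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.rand(4, 2)
W = np.random.rand(3, 2)
B = np.random.rand(3, 2)

H_next = gcn_layer(A, H, W, B)
print(H_next.shape)   # (4, 3): one updated embedding per node
```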
\n \n\n\n In practice, each iteration above is generally thought of as a single ‘neural network layer’.\n This ideology is followed by many popular Graph Neural Network libraries,\n \n For example: [PyTorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html)\n and [StellarGraph](https://stellargraph.readthedocs.io/en/stable/api.html#module-stellargraph.layer).\n \n allowing one to compose different types of graph convolutions in the same model.\n \n\n\n\n From Local to Global Convolutions\n-----------------------------------\n\n\n\n The methods we’ve seen so far perform ‘local’ convolutions:\n every node’s feature is updated using a function of its local neighbours’ features.\n \n\n\n\n While performing enough steps of message-passing will eventually ensure that\n information from all nodes in the graph is passed,\n one may wonder if there are more direct ways to perform ‘global’ convolutions.\n \n\n\n\n The answer is yes; we will now describe an approach that was actually first put forward\n in the context of neural networks by ,\n much before any of the GNN models we looked at above.\n \n\n\n### \n Spectral Convolutions\n\n\n\n As before, we will focus on the case where nodes have one-dimensional features.\n After choosing an arbitrary node-order, we can stack all of the node features to get a\n ‘feature vector’ x∈Rnx \\in \\mathbb{R}^nx∈Rn.\n \n\n\n\n**Key Idea:**\n Given a feature vector xxx, \n the Laplacian LLL allows us to quantify how smooth xxx is, with respect to GGG.\n \n\n\n\n How?\n \n\n\n\n After normalizing xxx such that ∑i=1nxi2=1\\sum\\_{i = 1}^n x\\_i^2 = 1∑i=1n​xi2​=1,\n if we look at the following quantity involving LLL:\n \nRLR\\_LRL​ is formally called the [Rayleigh quotient](https://en.wikipedia.org/wiki/Rayleigh_quotient).\n \nRL(x)=xTLxxTx=∑(i,j)∈E(xi−xj)2∑ixi2=∑(i,j)∈E(xi−xj)2.\n R\\_L(x) = \\frac{x^T L x}{x^T x} = \\frac{\\sum\\_{(i, j) \\in E} (x\\_i - x\\_j)^2}{\\sum\\_i x\\_i^2} = \\sum\\_{(i, j) \\in E} (x\\_i - x\\_j)^2.\n RL​(x)=xTxxTLx​=∑i​xi2​∑(i,j)∈E​(xi​−xj​)2​=(i,j)∈E∑​(xi​−xj​)2.\n we immediately see that feature vectors xxx that assign similar values to \n adjacent nodes in GGG (hence, are smooth) would have smaller values of RL(x)R\\_L(x)RL​(x).\n \n\n\n\nLLL is a real, symmetric matrix, which means it has all real eigenvalues λ1≤…≤λn\\lambda\\_1 \\leq \\ldots \\leq \\lambda\\_{n}λ1​≤…≤λn​.\n \n An eigenvalue λ\\lambdaλ of a matrix AAA is a value\n satisfying the equation Au=λuAu = \\lambda uAu=λu for a certain vector uuu, called an eigenvector.\n For a friendly introduction to eigenvectors,\n please see [this tutorial](http://www.sosmath.com/matrix/eigen0/eigen0.html).\n \n Further, the corresponding eigenvectors u1,…,unu\\_1, \\ldots, u\\_{n}u1​,…,un​ can be taken to be orthonormal:\n uk1Tuk2={1 if k1=k2.0 if k1≠k2.\n u\\_{k\\_1}^T u\\_{k\\_2} =\n \\begin{cases}\n 1 \\quad \\text{ if } {k\\_1} = {k\\_2}. 
\\\\\n 0 \\quad \\text{ if } {k\\_1} \\neq {k\\_2}.\n \\end{cases}\n uk1​T​uk2​​={1 if k1​=k2​.0 if k1​≠k2​.​\n It turns out that these eigenvectors of LLL are successively less smooth, as RLR\\_LRL​ indicates:\n This is the [min-max theorem for eigenvalues.](https://en.wikipedia.org/wiki/Min-max_theorem)\nargminx, x⊥{u1,…,ui−1}RL(x)=ui.minx, x⊥{u1,…,ui−1}RL(x)=λi.\n \\underset{x, \\ x \\perp \\{u\\_1, \\ldots, u\\_{i - 1}\\}}{\\text{argmin}} R\\_L(x) = u\\_i.\n \\qquad\n \\qquad\n \\qquad\n \\min\\_{x, \\ x \\perp \\{u\\_1, \\ldots, u\\_{i - 1}\\}} R\\_L(x) = \\lambda\\_i.\n x, x⊥{u1​,…,ui−1​}argmin​RL​(x)=ui​.x, x⊥{u1​,…,ui−1​}min​RL​(x)=λi​.\n The set of eigenvalues of LLL are called its ‘spectrum’, hence the name!\n We denote the ‘spectral’ decomposition of LLL as:\n L=UΛUT.\n L = U \\Lambda U^T.\n L=UΛUT.\n where Λ\\LambdaΛ is the diagonal matrix of sorted eigenvalues,\n and UUU denotes the matrix of the eigenvectors (sorted corresponding to increasing eigenvalues):\n Λ=[λ1⋱λn]U=[u1 ⋯ un].\n \\Lambda = \\begin{bmatrix}\n \\lambda\\_{1} & & \\\\\n & \\ddots & \\\\\n & & \\lambda\\_{n}\n \\end{bmatrix}\n \\qquad\n \\qquad\n \\qquad\n \\qquad\n U = \\begin{bmatrix} \\\\ u\\_1 \\ \\cdots \\ u\\_n \\\\ \\end{bmatrix}.\n Λ=⎣⎡​λ1​​⋱​λn​​⎦⎤​U=⎣⎡​u1​ ⋯ un​​⎦⎤​.\n The orthonormality condition between eigenvectors gives us that UTU=IU^T U = IUTU=I, the identity matrix.\n As these nnn eigenvectors form a basis for Rn\\mathbb{R}^nRn,\n any feature vector xxx can be represented as a linear combination of these eigenvectors:\n x=∑i=1nxi^ui=Ux^.\n x = \\sum\\_{i = 1}^n \\hat{x\\_i} u\\_i = U \\hat{x}.\n x=i=1∑n​xi​^​ui​=Ux^.\n where x^\\hat{x}x^ is the vector of coefficients [x0,…xn][x\\_0, \\ldots x\\_n][x0​,…xn​].\n We call x^\\hat{x}x^ as the spectral representation of the feature vector xxx.\n The orthonormality condition allows us to state:\n x=Ux^⟺UTx=x^.\n x = U \\hat{x} \\quad \\Longleftrightarrow \\quad U^T x = \\hat{x}.\n x=Ux^⟺UTx=x^.\n This pair of equations allows us to interconvert\n between the ‘natural’ representation xxx and the ‘spectral’ representation x^\\hat{x}x^\n for any vector x∈Rnx \\in \\mathbb{R}^nx∈Rn.\n \n\n\n### \n Spectral Representations of Natural Images\n\n\n\n As discussed before, we can consider any image as a grid graph, where each pixel is a node,\n connected by edges to adjacent pixels.\n Thus, a pixel can have either 3,5,3, 5,3,5, or 888 neighbours, depending on its location within the image grid.\n Each pixel gets a value as part of the image. If the image is grayscale, each value will be a single \n real number indicating how dark the pixel is. 
If the image is colored, each value will be a $3$-dimensional vector, indicating the values for the red, green and blue (RGB) channels. We use the alpha channel as well in the visualization below, so this is actually RGBA.

This construction allows us to compute the graph Laplacian and the eigenvector matrix $U$. Given an image, we can then investigate what its spectral representation looks like.

To shed some light on what the spectral representation actually encodes, we perform the following experiment over each channel of the image independently:

* We first collect all pixel values across a channel into a feature vector $x$.
* Then, we obtain its spectral representation $\hat{x}$:
  $$\hat{x} = U^T x$$
* We truncate this to the first $m$ components to get $\hat{x}\_m$. By truncation, we mean zeroing out all of the remaining $n - m$ components of $\hat{x}$. This truncation is equivalent to using only the first $m$ eigenvectors to compute the spectral representation.
  $$\hat{x}\_m = \text{Truncate}\_m(\hat{x})$$
* Then, we convert this truncated representation $\hat{x}\_m$ back to the natural basis to get $x\_m$.
  $$x\_m = U \hat{x}\_m$$

Finally, we stack the resulting channels back together to get back an image. We can now see how the resulting image changes with choices of $m$. Note that when $m = n$, the resulting image is identical to the original image, as we can reconstruct each channel exactly.

Use the radio buttons at the top to choose one of the four sample images. Each of these images has been taken from the ImageNet dataset and downsampled to $50$ pixels wide and $40$ pixels tall. As there are $n = 50 \times 40 = 2000$ pixels in each image, there are $2000$ Laplacian eigenvectors. Use the slider at the bottom to change the number of spectral components to keep, noting how images get progressively blurrier as the number of components decreases.

As $m$ decreases, we see that the output image $x\_m$ gets blurrier. If we decrease $m$ to $1$, the output image $x\_m$ is entirely the same color throughout. We see that we do not need to keep all $n$ components; we can retain a lot of the information in the image with significantly fewer components. We can relate this to the Fourier decomposition of images: the more eigenvectors we use, the higher frequencies we can represent on the grid.
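The experiment above can be written down quite directly. Here is a minimal sketch (assuming NumPy; a small made-up graph signal stands in for an image channel) of truncating a signal's spectral representation and mapping it back to the natural basis:

```python
import numpy as np

def spectral_truncate(L, x, m):
    """Keep only the first m spectral components of the signal x on a graph with Laplacian L."""
    eigvals, U = np.linalg.eigh(L)   # columns of U are eigenvectors, sorted by increasing eigenvalue
    x_hat = U.T @ x                  # spectral representation: x_hat = U^T x
    x_hat[m:] = 0.0                  # truncation: zero out the remaining n - m components
    return U @ x_hat                 # back to the natural basis: x_m = U x_hat_m

# Toy 'image channel': a signal on the 5-node path graph 0 - 1 - 2 - 3 - 4.
A = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[u, v] = A[v, u] = 1
L = np.diag(A.sum(axis=1)) - A

x = np.array([0.9, 0.1, 0.8, 0.2, 0.7])
print(spectral_truncate(L, x, m=2))   # a smoothed approximation of x
print(spectral_truncate(L, x, m=5))   # m = n recovers x exactly (up to numerical error)
```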
To complement the visualization above, we additionally visualize the first few eigenvectors on a smaller $8 \times 8$ grid below. We change the coefficients of the first $10$ out of $64$ eigenvectors in the spectral representation and see how the resulting image changes:

Move the sliders to change the spectral representation $\hat{x}$ (right), and see how $x$ itself changes on the image (left). Note how the first eigenvectors are much 'smoother' than the later ones, and the many patterns we can make with only $10$ eigenvectors.

These visualizations should convince you that the first eigenvectors are indeed smooth, and the smoothness correspondingly decreases as we consider later eigenvectors.

For any image $x$, we can think of the initial entries of the spectral representation $\hat{x}$ as capturing 'global' image-wide trends, which are the low-frequency components, while the later entries capture 'local' details, which are the high-frequency components.

### Embedding Computation

We now have the background to understand spectral convolutions and how they can be used to compute embeddings/feature representations of nodes.

As before, the model we describe below has $K$ layers: each layer $k$ has learnable parameters $\hat{w}^{(k)}$, called the 'filter weights'. These weights will be convolved with the spectral representations of the node features. As a result, the number of weights needed in each layer is equal to $m$, the number of eigenvectors used to compute the spectral representations. We had shown in the previous section that we can take $m \ll n$ and still not lose out on significant amounts of information.

Thus, convolution in the spectral domain enables the use of significantly fewer parameters than just direct convolution in the natural domain. Further, by virtue of the smoothness of the Laplacian eigenvectors across the graph, using spectral representations automatically enforces an inductive bias for neighbouring nodes to get similar representations.

Assuming one-dimensional node features for now, the output of each layer is a vector of node representations $h^{(k)}$, where each node's representation corresponds to a row of the vector.

We fix an ordering of the nodes in $G$. This gives us the adjacency matrix $A$ and the graph Laplacian $L$, allowing us to compute $U\_m$. Finally, we can describe the computation that the layers perform, one after the other: each layer converts its input to the spectral domain, $\hat{x} = U\_m^T h^{(k - 1)}$, multiplies it elementwise by the filter weights to get $\hat{g} = \hat{w}^{(k)} \odot \hat{x}$, converts the result back to the natural domain as $g = U\_m \hat{g}$, and finally applies a non-linearity, $h^{(k)} = \sigma(g)$.
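A minimal sketch of one such layer follows (assuming NumPy; treating the filter weights as a plain vector of length $m$ and using ReLU as the non-linearity $\sigma$ are illustrative assumptions):

```python
import numpy as np

def spectral_conv_layer(U_m, w_hat, h, sigma=lambda z: np.maximum(z, 0.0)):
    """One spectral convolution layer for one-dimensional node features.

    U_m:   (n, m) matrix of the first m Laplacian eigenvectors.
    w_hat: (m,) filter weights for this layer.
    h:     (n,) node features from the previous layer.
    """
    h_hat = U_m.T @ h        # spectral representation of the features
    g_hat = w_hat * h_hat    # 'convolution': elementwise product with the filter weights
    g = U_m @ g_hat          # back to the natural (node) domain
    return sigma(g)          # non-linearity

# Toy example on the 5-node path graph, keeping m = 3 eigenvectors.
A = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[u, v] = A[v, u] = 1
L = np.diag(A.sum(axis=1)) - A

_, U = np.linalg.eigh(L)
U_m = U[:, :3]

h0 = np.array([1.0, 0.5, 0.0, 0.5, 1.0])
h1 = spectral_conv_layer(U_m, w_hat=np.array([1.0, 0.5, 0.25]), h=h0)
print(h1)
```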
The method above generalizes easily to the case where each $h^{(k)} \in \mathbb{R}^{d\_k}$, as well: see for details.

With the insights from the previous section, we see that convolution in the spectral-domain of graphs can be thought of as the generalization of convolution in the frequency-domain of images.

### Spectral Convolutions are Node-Order Equivariant

We can show spectral convolutions are node-order equivariant using a similar approach as for Laplacian polynomial filters.

**Details for the Interested Reader**

As in [our proof before](#poly-filters-equivariance), let's fix an arbitrary node-order. Then, any other node-order can be represented by a permutation of this original node-order. We can associate this permutation with its permutation matrix $P$. Under this new node-order, the quantities below transform in the following way:

$$\begin{aligned} x &\to Px \\ A &\to PAP^T \\ L &\to PLP^T \\ U\_m &\to PU\_m \end{aligned}$$

which implies that, in the embedding computation:

$$\begin{aligned} \hat{x} &\to \left(PU\_m\right)^T (Px) = U\_m^T x = \hat{x} \\ \hat{w} &\to \left(PU\_m\right)^T (Pw) = U\_m^T w = \hat{w} \\ \hat{g} &\to \hat{g} \\ g &\to (PU\_m)\hat{g} = P(U\_m\hat{g}) = Pg \end{aligned}$$

Hence, as $\sigma$ is applied elementwise:

$$f(Px) = \sigma(Pg) = P \sigma(g) = P f(x)$$

as required. Further, we see that the spectral quantities $\hat{x}, \hat{w}$ and $\hat{g}$ are unchanged by permutations of the nodes. Formally, they are what we would call node-order invariant.

The theory of spectral convolutions is mathematically well-grounded; however, there are some key disadvantages that we must talk about:

* We need to compute the eigenvector matrix $U\_m$ from $L$.
For large graphs, this becomes quite infeasible.\n* Even if we can compute UmU\\_mUm​, global convolutions themselves are inefficient to compute,\n because of the repeated\n multiplications with UmU\\_mUm​ and UmTU\\_m^TUmT​.\n* The learned filters are specific to the input graphs,\n as they are represented in terms\n of the spectral decomposition of input graph Laplacian LLL.\n This means they do not transfer well to new graphs\n which have significantly different structure (and hence, significantly\n different eigenvalues) .\n\n\n\n While spectral convolutions have largely been superseded by\n ‘local’ convolutions for the reasons discussed above,\n there is still much merit to understanding the ideas behind them.\n Indeed, a recently proposed GNN model called Directional Graph Networks\n \n actually uses the Laplacian eigenvectors\n and their mathematical properties\n extensively.\n \n\n\n### \n Global Propagation via Graph Embeddings\n\n\n\n A simpler way to incorporate graph-level information\n is to compute embeddings of the entire graph by pooling node\n (and possibly edge) embeddings,\n and then using the graph embedding to update node embeddings,\n following an iterative scheme similar to what we have looked at here.\n This is an approach used by Graph Networks\n .\n We will briefly discuss how graph-level embeddings\n can be constructed in [Pooling](#pooling).\n However, such approaches tend to ignore the underlying\n topology of the graph that spectral convolutions can capture.\n \n\n\n\n Learning GNN Parameters\n-------------------------\n\n\n\n All of the embedding computations we’ve described here, whether spectral or spatial, are completely differentiable.\n This allows GNNs to be trained in an end-to-end fashion, just like a standard neural network,\n once a suitable loss function L\\mathcal{L}L is defined:\n \n\n\n* **Node Classification**: By minimizing any of the standard losses for classification tasks,\n such as categorical cross-entropy when multiple classes are present:\n L(yv,yv^)=−∑cyvclogyvc^.\n \\mathcal{L}(y\\_v, \\hat{y\\_v}) = -\\sum\\_{c} y\\_{vc} \\log{\\hat{y\\_{vc}}}.\n L(yv​,yv​^​)=−c∑​yvc​logyvc​^​.\n where yvc^\\hat{y\\_{vc}}yvc​^​ is the predicted probability that node vvv is in class ccc.\n GNNs adapt well to the semi-supervised setting, which is when only some nodes in the graph are labelled.\n In this setting, one way to define a loss LG\\mathcal{L}\\_{G}LG​ over an input graph GGG is:\n LG=∑v∈Lab(G)L(yv,yv^)∣Lab(G)∣\n \\mathcal{L}\\_{G} = \\frac{\\sum\\limits\\_{v \\in \\text{Lab}(G)} \\mathcal{L}(y\\_v, \\hat{y\\_v})}{| \\text{Lab}(G) |}\n LG​=∣Lab(G)∣v∈Lab(G)∑​L(yv​,yv​^​)​\n where, we only compute losses over labelled nodes Lab(G)\\text{Lab}(G)Lab(G).\n* **Graph Classification**: By aggregating node representations,\n one can construct a vector representation of the entire graph.\n This graph representation can be used for any graph-level task, even beyond classification.\n See [Pooling](#pooling) for how representations of graphs can be constructed.\n* **Link Prediction**: By sampling pairs of adjacent and non-adjacent nodes,\n and use these vector pairs as inputs to predict the presence/absence of an edge.\n For a concrete example, by minimizing the following ‘logistic regression’-like loss:\n L(yv,yu,evu)=−evulog(pvu)−(1−evu)log(1−pvu)pvu=σ(yvTyu)\n \\begin{aligned}\n \\mathcal{L}(y\\_v, y\\_u, e\\_{vu}) &= -e\\_{vu} \\log(p\\_{vu}) - (1 - e\\_{vu}) \\log(1 - p\\_{vu}) \\\\\n p\\_{vu} &= \\sigma(y\\_v^Ty\\_u)\n \\end{aligned}\n 
L(yv​,yu​,evu​)pvu​​=−evu​log(pvu​)−(1−evu​)log(1−pvu​)=σ(yvT​yu​)​\n where σ\\sigmaσ is the [sigmoid function](https://en.wikipedia.org/wiki/Sigmoid_function),\n and evu=1e\\_{vu} = 1evu​=1 iff there is an edge between nodes vvv and uuu, being 000 otherwise.\n* **Node Clustering**: By simply clustering the learned node representations.\n\n\n\n The broad success of pre-training for natural language processing models\n such as ELMo and BERT \n has sparked interest in similar techniques for GNNs\n .\n The key idea in each of these papers is to train GNNs to predict\n local (eg. node degrees, clustering coefficient, masked node attributes)\n and/or global graph properties (eg. pairwise distances, masked global attributes).\n \n\n\n\n Another self-supervised technique is to enforce that neighbouring nodes get similar embeddings,\n mimicking random-walk approaches such as node2vec and DeepWalk :\n \n\n\nLG=∑v∑u∈NR(v)logexpzvTzu∑u′expzu′Tzu.\n L\\_{G} = \\sum\\_{v} \\sum\\_{u \\in N\\_R(v)} \\log\\frac{\\exp{z\\_v^T z\\_u}}{\\sum\\limits\\_{u’} \\exp{z\\_{u’}^T z\\_u}}.\n LG​=v∑​u∈NR​(v)∑​logu′∑​expzu′T​zu​expzvT​zu​​.\n\n where NR(v)N\\_R(v)NR​(v) is a multi-set of nodes visited when random walks are started from vvv.\n For large graphs, where computing the sum over all nodes may be computationally expensive,\n techniques such as Noise Contrastive Estimation are especially useful.\n \n\n\n\n\n Conclusion and Further Reading\n--------------------------------\n\n\n\n While we have looked at many techniques and ideas in this article,\n the field of Graph Neural Networks is extremely vast.\n We have been forced to restrict our discussion to a small subset of the entire literature,\n while still communicating the key ideas and design principles behind GNNs.\n We recommend the interested reader take a look at\n for a more comprehensive survey.\n \n\n\n\n We end with pointers and references for additional concepts readers might be interested in:\n \n\n\n### \n GNNs in Practice\n\n\n\n It turns out that accomodating the different structures of graphs is often hard to do efficiently,\n but we can still represent many GNN update equations using\n as sparse matrix-vector products (since generally, the adjacency matrix is sparse for most real-world graph datasets.)\n For example, the GCN variant discussed here can be represented as:\n h(k)=D−1A⋅h(k−1)W(k)T+h(k−1)B(k)T.\n h^{(k)} = D^{-1} A \\cdot h^{(k - 1)} {W^{(k)}}^T + h^{(k - 1)} {B^{(k)}}^T.\n h(k)=D−1A⋅h(k−1)W(k)T+h(k−1)B(k)T.\n Restructuring the update equations in this way allows for efficient vectorized implementations of GNNs on accelerators\n such as GPUs.\n \n\n\n\n Regularization techniques for standard neural networks,\n such as Dropout ,\n can be applied in a straightforward manner to the parameters\n (for example, zero out entire rows of W(k)W^{(k)}W(k) above).\n However, there are graph-specific techniques such as DropEdge \n that removes entire edges at random from the graph,\n that also boost the performance of many GNN models.\n \n\n\n### \n Different Kinds of Graphs\n\n\n\n Here, we have focused on undirected graphs, to avoid going into too many unnecessary details.\n However, there are some simple variants of spatial convolutions for:\n \n\n\n* Directed graphs: Aggregate across in-neighbourhood and/or out-neighbourhood features.\n* Temporal graphs: Aggregate across previous and/or future node features.\n* Heterogeneous graphs: Learn different aggregation functions for each node/edge type.\n\n\n\n There do exist more sophisticated 
techniques that can take advantage of the different structures of these graphs:\n see for more discussion.\n \n\n\n### \n Pooling\n\n\n\n This article discusses how GNNs compute useful representations of nodes.\n But what if we wanted to compute representations of graphs for graph-level tasks (for example, predicting the toxicity of a molecule)?\n \n\n\n\n A simple solution is to just aggregate the final node embeddings and pass them through another neural network PREDICTG\\text{PREDICT}\\_GPREDICTG​:\n hG=PREDICTG(AGGv∈G({hv}))\n h\\_G = \\text{PREDICT}\\_G \\Big( \\text{AGG}\\_{v \\in G}\\left(\\{ h\\_v \\} \\right) \\Big)\n hG​=PREDICTG​(AGGv∈G​({hv​}))\n However, there do exist more powerful techniques for ‘pooling’ together node representations:\n \n\n\n* SortPool: Sort vertices of the graph to get a fixed-size node-order invariant representation of the graph, and then apply any standard neural network architecture.\n* DiffPool: Learn to cluster vertices, build a coarser graph over clusters instead of nodes, then apply a GNN over the coarser graph. Repeat until only one cluster is left.\n* SAGPool: Apply a GNN to learn node scores, then keep only the nodes with the top scores, throwing away the rest. Repeat until only one node is left.\n\n\n\n Supplementary Material\n------------------------\n\n\n### \n Reproducing Experiments\n\n\n\n The experiments from\n [Spectral Representations of Natural Images](#spectral-decompositions-of-natural-images)\n can be reproduced using the following\n Colab ![Google Colaboratory](images/colab.svg) notebook:\n [Spectral Representations of Natural Images](https://colab.research.google.com/github/google-research/google-research/blob/master/understanding_convolutions_on_graphs/SpectralRepresentations.ipynb).\n \n\n\n\n### \n Recreating Visualizations\n\n\n\n To aid in the creation of future interactive articles,\n we have created ObservableHQ\n ![ObservableHQ](images/observable.svg)\n notebooks for each of the interactive visualizations here:\n \n\n\n* [Neighbourhood Definitions for CNNs and GNNs](https://observablehq.com/@ameyasd/neighbourhoods-for-cnns-and-gnns)\n* [Graph Polynomial Convolutions on a Grid](https://observablehq.com/@ameyasd/cleaner-interactive-graph-polynomial-convolutions)\n* [Graph Polynomial Convolutions: Equations](https://observablehq.com/@ameyasd/updated-chebnet-equations)\n* [Modern Graph Neural Networks: Equations](https://observablehq.com/@ameyasd/interactive-gnn-equations)\n* [Modern Graph Neural Networks: Interactive Models](https://observablehq.com/@ameyasd/interactive-gnn-visualizations)\n which pulls together the following standalone notebooks:\n\t+ [Graph Convolutional Networks](https://observablehq.com/@ameyasd/graph-convolutional-networks)\n\t+ [Graph Attention Networks](https://observablehq.com/@ameyasd/graph-attention-networks)\n\t+ [GraphSAGE](https://observablehq.com/@ameyasd/graph-sample-and-aggregate-graphsage)\n\t+ [Graph Isomorphism Networks](https://observablehq.com/@ameyasd/graph-isomorphism-networks)\n* [Laplacian Eigenvectors for Grids](https://observablehq.com/@ameyasd/interactive-spectral-conversions)\n* [Spectral Decomposition of Natural Images](https://observablehq.com/@ameyasd/spectral-decompositions-of-natural-images)\n* [Spectral Convolutions: Equations](https://observablehq.com/@ameyasd/spectral-convolutions-equation)", "date_published": "2021-09-02T20:00:00Z", "authors": ["Ameya Daigavane", "Balaraman Ravindran", "Gaurav Aggarwal"], "summaries": ["Understanding the building blocks and design choices 
of graph neural networks."], "doi": "10.23915/distill.00032", "journal_ref": "distill-pub", "bibliography": [{"link": "https://doi.org/10.23915/distill.00033", "title": "A Gentle Introduction to Graph Neural Networks"}, {"link": "http://jmlr.org/papers/v11/vishwanathan10a.html", "title": "Graph Kernels"}, {"link": "https://doi.org/10.1145/2939672.2939754", "title": "Node2vec: Scalable Feature Learning for Networks"}, {"link": "https://doi.org/10.1145/2623330.2623732", "title": "DeepWalk: Online Learning of Social Representations"}, {"link": "https://proceedings.neurips.cc/paper/2015/file/f9be311e65d81a9ad8150a60844bb94c-Paper.pdf", "title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints"}, {"link": "http://proceedings.mlr.press/v70/gilmer17a.html", "title": "Neural Message Passing for Quantum Chemistry"}, {"link": "http://arxiv.org/pdf/0711.0189.pdf", "title": "A Tutorial on Spectral Clustering"}, {"link": "https://proceedings.neurips.cc/paper/2016/file/04df4d434d481c5bb723be1b6df1ee65-Paper.pdf", "title": "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering"}, {"link": "http://www.sciencedirect.com/science/article/pii/S1063520310000552", "title": "Wavelets on Graphs via Spectral Graph Theory"}, {"link": "https://books.google.co.in/books?id=8FHf0P3to0UC", "title": "Chebyshev Polynomials"}, {"link": "https://openreview.net/forum?id=SJU4ayYgl", "title": "Semi-Supervised Classification with Graph Convolutional Networks"}, {"link": "https://openreview.net/forum?id=rJXMpikCZ", "title": "Graph Attention Networks"}, {"link": "https://proceedings.neurips.cc/paper/2017/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf", "title": "Inductive Representation Learning on Large Graphs"}, {"link": "https://openreview.net/forum?id=ryGs6iA5Km", "title": "How Powerful are Graph Neural Networks?"}, {"link": "http://arxiv.org/pdf/1806.01261.pdf", "title": "Relational inductive biases, deep learning, and graph networks"}, {"link": "http://arxiv.org/pdf/1312.6203.pdf", "title": "Spectral Networks and Locally Connected Networks on Graphs"}, {"link": "https://doi.org/10.1109/SampTA45681.2019.9030932", "title": "On the Transferability of Spectral Graph Filters"}, {"link": "https://www.aclweb.org/anthology/N19-1423", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"}, {"link": "https://openreview.net/forum?id=HJlWWJSFDH", "title": "Strategies for Pre-training Graph Neural Networks"}, {"link": "https://aaai.org/ojs/index.php/AAAI/article/view/6048", "title": "Multi-Stage Self-Supervised Learning for Graph Convolutional Networks on Graphs with Few Labeled Nodes"}, {"link": "http://arxiv.org/pdf/2006.09136.pdf", "title": "When Does Self-Supervision Help Graph Convolutional Networks?"}, {"link": "http://arxiv.org/pdf/2006.10141.pdf", "title": "Self-supervised Learning on Graphs: Deep Insights and New Direction"}, {"link": "http://jmlr.org/papers/v13/gutmann12a.html", "title": "Noise-Contrastive Estimation of Unnormalized Statistical Models, with Applications to Natural Image Statistics"}, {"link": "https://proceedings.neurips.cc/paper/2013/file/db2b4182156b2f1f817860ac9f409ad7-Paper.pdf", "title": "Learning word embeddings efficiently with noise-contrastive estimation"}, {"link": "https://ieeexplore.ieee.org/document/9046288", "title": "A Comprehensive Survey on Graph Neural Networks"}, {"link": "http://arxiv.org/pdf/1812.08434.pdf", "title": "Graph Neural Networks: A Review of Methods and Applications"}, {"link": 
"http://jmlr.org/papers/v15/srivastava14a.html", "title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting"}, {"link": "https://openreview.net/forum?id=Hkx1qkrKPr", "title": "DropEdge: Towards Deep Graph Convolutional Networks on Node Classification"}, {"link": "https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17146", "title": "An End-to-End Deep Learning Architecture for Graph Classification"}, {"link": "https://proceedings.neurips.cc/paper/2018/file/e77dbaf6759253c7c6d0efc5690369c7-Paper.pdf", "title": "Hierarchical Graph Representation Learning with Differentiable Pooling"}, {"link": "http://proceedings.mlr.press/v97/lee19c.html", "title": "Self-Attention Graph Pooling"}]} {"id": "dd6a446845d57fed72059ee8bdb37857", "title": "A Gentle Introduction to Graph Neural Networks", "url": "https://distill.pub/2021/gnn-intro", "source": "distill", "source_type": "blog", "text": "*This article is one of two Distill publications about graph neural networks. Take a look at [Understanding Convolutions on Graphs](https://distill.pub/2021/understanding-gnns/) to understand how convolutions over images generalize naturally to convolutions over graphs.*\n\n\nGraphs are all around us; real world objects are often defined in terms of their connections to other things. A set of objects, and the connections between them, are naturally expressed as a *graph*. Researchers have developed neural networks that operate on graph data (called graph neural networks, or GNNs) for over a decade. Recent developments have increased their capabilities and expressive power. We are starting to see practical applications in areas such as antibacterial discovery , physics simulations , fake news detection , traffic prediction and recommendation systems .\n\n\nThis article explores and explains modern graph neural networks. We divide this work into four parts. First, we look at what kind of data is most naturally phrased as a graph, and some common examples. Second, we explore what makes graphs different from other types of data, and some of the specialized choices we have to make when using graphs. Third, we build a modern GNN, walking through each of the parts of the model, starting with historic modeling innovations in the field. We move gradually from a bare-bones implementation to a state-of-the-art GNN model. Fourth and finally, we provide a GNN playground where you can play around with a real-word task and dataset to build a stronger intuition of how each component of a GNN model contributes to the predictions it makes.\n\n\nTo start, let’s establish what a graph is. A graph represents the relations (*edges*) between a collection of entities (*nodes*). \n\n\n\n\nThree types of attributes we might find in a graph, hover over to highlight each attribute. Other types of graphs and attributes are explored in the [Other types of graphs](#other-types-of-graphs-multigraphs-hypergraphs-hypernodes) section.\n\nTo further describe each node, edge or the entire graph, we can store information in each of these pieces of the graph. \n\n\n\n\nInformation in the form of scalars or embeddings can be stored at each graph node (left) or edge (right).\n\nWe can additionally specialize graphs by associating directionality to edges (*directed, undirected*). \n\n\n![](directed_undirected.e4b1689d.png)\n\nThe edges can be directed, where an edge $e$ has a source node, $v\\_{src}$, and a destination node $v\\_{dst}$. In this case, information flows from $v\\_{src}$ to $v\\_{dst}$. 
They can also be undirected, where there is no notion of source or destination nodes, and information flows both directions. Note that having a single undirected edge is equivalent to having one directed edge from $v\\_{src}$ to $v\\_{dst}$, and another directed edge from $v\\_{dst}$ to $v\\_{src}$.\n\nGraphs are very flexible data structures, and if this seems abstract now, we will make it concrete with examples in the next section. \n\n\nGraphs and where to find them\n-----------------------------\n\n\nYou’re probably already familiar with some types of graph data, such as social networks. However, graphs are an extremely powerful and general representation of data, we will show two types of data that you might not think could be modeled as graphs: images and text. Although counterintuitive, one can learn more about the symmetries and structure of images and text by viewing them as graphs,, and build an intuition that will help understand other less grid-like graph data, which we will discuss later.\n\n\n### Images as graphs\n\n\nWe typically think of images as rectangular grids with image channels, representing them as arrays (e.g., 244x244x3 floats). Another way to think of images is as graphs with regular structure, where each pixel represents a node and is connected via an edge to adjacent pixels. Each non-border pixel has exactly 8 neighbors, and the information stored at each node is a 3-dimensional vector representing the RGB value of the pixel.\n\n\nA way of visualizing the connectivity of a graph is through its *adjacency matrix*. We order the nodes, in this case each of 25 pixels in a simple 5x5 image of a smiley face, and fill a matrix of $n\\_{nodes} \\times n\\_{nodes}$ with an entry if two nodes share an edge. Note that each of these three representations below are different views of the same piece of data. \n\n\n\n\n\nClick on an image pixel to toggle its value, and see how the graph representation changes.\n\n\n### Text as graphs\n\n\nWe can digitize text by associating indices to each character, word, or token, and representing text as a sequence of these indices. This creates a simple directed graph, where each character or index is a node and is connected via an edge to the node that follows it.\n\n\n\n\n\nEdit the text above to see how the graph representation changes.\n\n\nOf course, in practice, this is not usually how text and images are encoded: these graph representations are redundant since all images and all text will have very regular structures. For instance, images have a banded structure in their adjacency matrix because all nodes (pixels) are connected in a grid. The adjacency matrix for text is just a diagonal line, because each word only connects to the prior word, and to the next one. \n\n\n\nThis representation (a sequence of character tokens) refers to the way text is often represented in RNNs; other models, such as Transformers, can be considered to view text as a fully connected graph where we learn the relationship between tokens. See more in [Graph Attention Networks](#graph-attention-networks).\n\n### Graph-valued data in the wild\n\n\nGraphs are a useful tool to describe data you might already be familiar with. Let’s move on to data which is more heterogeneously structured. In these examples, the number of neighbors to each node is variable (as opposed to the fixed neighborhood size of images and text). 
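(As an aside, the two regular cases above can be constructed mechanically. A rough sketch, assuming 8-connectivity for pixels and a simple forward chain for tokens; the function names are ours:)

```python
import numpy as np

def image_as_graph(height, width):
    """Adjacency matrix of a pixel grid where each pixel connects to its 8 neighbours."""
    n = height * width
    adj = np.zeros((n, n), dtype=int)
    for r in range(height):
        for c in range(width):
            i = r * width + c
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < height and 0 <= cc < width:
                        adj[i, rr * width + cc] = 1
    return adj

def text_as_graph(tokens):
    """Directed chain graph: each token points to the token that follows it."""
    n = len(tokens)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n - 1):
        adj[i, i + 1] = 1
    return adj

pixels = image_as_graph(5, 5)                              # 25 x 25, banded structure
chain = text_as_graph("graphs are all around us".split())  # a single shifted diagonal
```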
This data is hard to phrase in any other way besides a graph.\n\n\n**Molecules as graphs.** Molecules are the building blocks of matter, and are built of atoms and electrons in 3D space. All particles are interacting, but when a pair of atoms are stuck in a stable distance from each other, we say they share a covalent bond. Different pairs of atoms and bonds have different distances (e.g. single-bonds, double-bonds). It’s a very convenient and common abstraction to describe this 3D object as a graph, where nodes are atoms and edges are covalent bonds. Here are two common molecules, and their associated graphs.\n\n\n\n\n(Left) 3d representation of the Citronellal molecule (Center) Adjacency matrix of the bonds in the molecule (Right) Graph representation of the molecule.\n\n\n\n\n(Left) 3d representation of the Caffeine molecule (Center) Adjacency matrix of the bonds in the molecule (Right) Graph representation of the molecule.\n\n\n**Social networks as graphs.** Social networks are tools to study patterns in collective behaviour of people, institutions and organizations. We can build a graph representing groups of people by modelling individuals as nodes, and their relationships as edges. \n\n\n\n\n(Left) Image of a scene from the play “Othello”. (Center) Adjacency matrix of the interaction between characters in the play. (Right) Graph representation of these interactions.\n\n\nUnlike image and text data, social networks do not have identical adjacency matrices. \n\n\n\n\n(Left) Image of karate tournament. (Center) Adjacency matrix of the interaction between people in a karate club. (Right) Graph representation of these interactions.\n\n\n**Citation networks as graphs.** Scientists routinely cite other scientists’ work when publishing papers. We can visualize these networks of citations as a graph, where each paper is a node, and each *directed* edge is a citation between one paper and another. Additionally, we can add information about each paper into each node, such as a word embedding of the abstract. (see ,  , ). \n\n\n**Other examples.** In computer vision, we sometimes want to tag objects in visual scenes. We can then build graphs by treating these objects as nodes, and their relationships as edges. [Machine learning models](https://www.tensorflow.org/tensorboard/graphs), [programming code](https://openreview.net/pdf?id=BJOFETxR-) and [math equations](https://openreview.net/forum?id=S1eZYeHFDS) can also be phrased as graphs, where the variables are nodes, and edges are operations that have these variables as input and output. You might see the term “dataflow graph” used in some of these contexts.\n\n\nThe structure of real-world graphs can vary greatly between different types of data — some graphs have many nodes with few connections between them, or vice versa. Graph datasets can vary widely (both within a given dataset, and between datasets) in terms of the number of nodes, edges, and the connectivity of nodes.\n\n\n\n\n\nSummary statistics on graphs found in the real world. Numbers are dependent on featurization decisions. More useful statistics and graphs can be found in KONECT\n\n\n\nWhat types of problems have graph structured data?\n--------------------------------------------------\n\n\nWe have described some examples of graphs in the wild, but what tasks do we want to perform on this data? There are three general types of prediction tasks on graphs: graph-level, node-level, and edge-level. \n\n\nIn a graph-level task, we predict a single property for a whole graph. 
For a node-level task, we predict some property for each node in a graph. For an edge-level task, we want to predict the property or presence of edges in a graph.\n\n\nFor the three levels of prediction problems described above (graph-level, node-level, and edge-level), we will show that all of the following problems can be solved with a single model class, the GNN. But first, let’s take a tour through the three classes of graph prediction problems in more detail, and provide concrete examples of each.\n\n\n\nThere are other related tasks that are areas of active research. For instance, we might want to [generate graphs](#generative-modelling), or [explain predictions on a graph](#graph-explanations-and-attributions). More topics can be found in the [Into the weeds section](#into-the-weeds) .\n\n### Graph-level task\n\n\nIn a graph-level task, our goal is to predict the property of an entire graph. For example, for a molecule represented as a graph, we might want to predict what the molecule smells like, or whether it will bind to a receptor implicated in a disease.\n\n\n\n\n\nThis is analogous to image classification problems with MNIST and CIFAR, where we want to associate a label to an entire image. With text, a similar problem is sentiment analysis where we want to identify the mood or emotion of an entire sentence at once.\n\n\n### Node-level task\n\n\nNode-level tasks are concerned with predicting the identity or role of each node within a graph.\n\n\nA classic example of a node-level prediction problem is Zach’s karate club. The dataset is a single social network graph made up of individuals that have sworn allegiance to one of two karate clubs after a political rift. As the story goes, a feud between Mr. Hi (Instructor) and John H (Administrator) creates a schism in the karate club. The nodes represent individual karate practitioners, and the edges represent interactions between these members outside of karate. The prediction problem is to classify whether a given member becomes loyal to either Mr. Hi or John H, after the feud. In this case, distance between a node to either the Instructor or Administrator is highly correlated to this label.\n\n\n\n\n\nOn the left we have the initial conditions of the problem, on the right we have a possible solution, where each node has been classified based on the alliance. The dataset can be used in other graph problems like unsupervised learning. \n\nFollowing the image analogy, node-level prediction problems are analogous to *image segmentation*, where we are trying to label the role of each pixel in an image. With text, a similar task would be predicting the parts-of-speech of each word in a sentence (e.g. noun, verb, adverb, etc).\n\n\n### Edge-level task\n\n\nThe remaining prediction problem in graphs is *edge prediction*. \n\n\nOne example of edge-level inference is in image scene understanding. Beyond identifying objects in an image, deep learning models can be used to predict the relationship between them. We can phrase this as an edge-level classification: given nodes that represent the objects in the image, we wish to predict which of these nodes share an edge or what the value of that edge is. If we wish to discover connections between entities, we could consider the graph fully connected and based on their predicted value prune edges to arrive at a sparse graph.\n\n\n\n![](merged.0084f617.png)\n\nIn (b), above, the original image (a) has been segmented into five entities: each of the fighters, the referee, the audience and the mat. 
(C) shows the relationships between these entities. \n\n\n![](edges_level_diagram.c40677db.png)\n\nOn the left we have an initial graph built from the previous visual scene. On the right is a possible edge-labeling of this graph when some connections were pruned based on the model’s output.\n\nThe challenges of using graphs in machine learning\n--------------------------------------------------\n\n\nSo, how do we go about solving these different graph tasks with neural networks? The first step is to think about how we will represent graphs to be compatible with neural networks.\n\n\nMachine learning models typically take rectangular or grid-like arrays as input. So, it’s not immediately intuitive how to represent them in a format that is compatible with deep learning. Graphs have up to four types of information that we will potentially want to use to make predictions: nodes, edges, global-context and connectivity. The first three are relatively straightforward: for example, with nodes we can form a node feature matrix $N$ by assigning each node an index $i$ and storing the feature for $node\\_i$ in $N$. While these matrices have a variable number of examples, they can be processed without any special techniques.\n\n\nHowever, representing a graph’s connectivity is more complicated. Perhaps the most obvious choice would be to use an adjacency matrix, since this is easily tensorisable. However, this representation has a few drawbacks. From the [example dataset table](#table), we see the number of nodes in a graph can be on the order of millions, and the number of edges per node can be highly variable. Often, this leads to very sparse adjacency matrices, which are space-inefficient.\n\n\nAnother problem is that there are many adjacency matrices that can encode the same connectivity, and there is no guarantee that these different matrices would produce the same result in a deep neural network (that is to say, they are not permutation invariant).\n\n\n\nLearning permutation invariant operations is an area of recent research.\n\nFor example, the [Othello graph](mols-as-graph-othello) from before can be described equivalently with these two adjacency matrices. It can also be described with every other possible permutation of the nodes.\n\n\n\n\n![](othello1.246371ea.png)\n![](othello2.6897c848.png)\n\n\nTwo adjacency matrices representing the same graph.\n\n\nThe example below shows every adjacency matrix that can describe this small graph of 4 nodes. This is already a significant number of adjacency matrices–for larger examples like Othello, the number is untenable.\n\n\n\n\nAll of these adjacency matrices represent the same graph. Click on an edge to remove it on a “virtual edge” to add it and the matrices will update accordingly.\n\n\nOne elegant and memory-efficient way of representing sparse matrices is as adjacency lists. These describe the connectivity of edge $e\\_k$ between nodes $n\\_i$ and $n\\_j$ as a tuple (i,j) in the k-th entry of an adjacency list. Since we expect the number of edges to be much lower than the number of entries for an adjacency matrix ($n\\_{nodes}^2$), we avoid computation and storage on the disconnected parts of the graph. 
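A small sketch of both points, using an undirected 4-node graph of our own choosing: the same edge list yields different adjacency matrices under different node orderings, while staying much smaller than the matrix for sparse graphs.

```python
import numpy as np

# A 4-node graph stored as an adjacency list: one (source, destination) tuple per edge.
adjacency_list = [(0, 1), (0, 2), (1, 2), (2, 3)]
n_nodes = 4

def to_adjacency_matrix(edges, n, order):
    """Build the (undirected) adjacency matrix under a particular node ordering."""
    pos = {node: k for k, node in enumerate(order)}
    adj = np.zeros((n, n), dtype=int)
    for i, j in edges:
        adj[pos[i], pos[j]] = 1
        adj[pos[j], pos[i]] = 1
    return adj

a1 = to_adjacency_matrix(adjacency_list, n_nodes, order=[0, 1, 2, 3])
a2 = to_adjacency_matrix(adjacency_list, n_nodes, order=[3, 1, 0, 2])
print(np.array_equal(a1, a2))  # False: same graph, two different adjacency matrices
# The edge list needs just one entry per edge, no matter how the nodes are ordered.
```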
\n\n\n\nAnother way of stating this is with Big-O notation, it is preferable to have $O(n\\_{edges})$, rather than $O(n\\_{nodes}^2)$.\n\nTo make this notion concrete, we can see how information in different graphs might be represented under this specification:\n\n\n\n\n\nHover and click on the edges, nodes, and global graph marker to view and change attribute representations. On one side we have a small graph and on the other the information of the graph in a tensor representation.\n\nIt should be noted that the figure uses scalar values per node/edge/global, but most practical tensor representations have vectors per graph attribute. Instead of a node tensor of size $[n\\_{nodes}]$ we will be dealing with node tensors of size $[n\\_{nodes}, node\\_{dim}]$. Same for the other graph attributes.\n\n\nGraph Neural Networks\n---------------------\n\n\nNow that the graph’s description is in a matrix format that is permutation invariant, we will describe using graph neural networks (GNNs) to solve graph prediction tasks. **A GNN is an optimizable transformation on all attributes of the graph (nodes, edges, global-context) that preserves graph symmetries (permutation invariances).** We’re going to build GNNs using the “message passing neural network” framework proposed by Gilmer et al. using the Graph Nets architecture schematics introduced by Battaglia et al. GNNs adopt a “graph-in, graph-out” architecture meaning that these model types accept a graph as input, with information loaded into its nodes, edges and global-context, and progressively transform these embeddings, without changing the connectivity of the input graph. \n\n\n### The simplest GNN\n\n\nWith the numerical representation of graphs that [we’ve constructed above](#graph-to-tensor) (with vectors instead of scalars), we are now ready to build a GNN. We will start with the simplest GNN architecture, one where we learn new embeddings for all graph attributes (nodes, edges, global), but where we do not yet use the connectivity of the graph.\n\n\n\nFor simplicity, the previous diagrams used scalars to represent graph attributes; in practice feature vectors, or embeddings, are much more useful. \n\nThis GNN uses a separate multilayer perceptron (MLP) (or your favorite differentiable model) on each component of a graph; we call this a GNN layer. For each node vector, we apply the MLP and get back a learned node-vector. We do the same for each edge, learning a per-edge embedding, and also for the global-context vector, learning a single embedding for the entire graph.\n\n\n\nYou could also call it a GNN block. Because it contains multiple operations/layers (like a ResNet block).\n\n\n![](arch_independent.0efb8ae7.png)\n\nA single layer of a simple GNN. A graph is the input, and each component (V,E,U) gets updated by a MLP to produce a new graph. Each function subscript indicates a separate function for a different graph attribute at the n-th layer of a GNN model.\n\nAs is common with neural networks modules or layers, we can stack these GNN layers together. \n\n\nBecause a GNN does not update the connectivity of the input graph, we can describe the output graph of a GNN with the same adjacency list and the same number of feature vectors as the input graph. 
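Here is a numpy sketch of that graph-independent layer: one small MLP per attribute type, applied row-wise. The sizes, initialisation and two-layer MLP are arbitrary choices of ours.

```python
import numpy as np

def mlp(params, x):
    """A one-hidden-layer MLP applied row-wise: relu(x W1 + b1) W2 + b2."""
    w1, b1, w2, b2 = params
    return np.maximum(x @ w1 + b1, 0) @ w2 + b2

def init_mlp(rng, d_in, d_hidden, d_out):
    return (rng.normal(size=(d_in, d_hidden)) * 0.1, np.zeros(d_hidden),
            rng.normal(size=(d_hidden, d_out)) * 0.1, np.zeros(d_out))

def simple_gnn_layer(params, nodes, edges, global_u):
    """The graph-independent layer: update V, E and U separately, keep connectivity."""
    node_params, edge_params, global_params = params
    return mlp(node_params, nodes), mlp(edge_params, edges), mlp(global_params, global_u)

rng = np.random.default_rng(0)
nodes    = rng.normal(size=(6, 8))   # 6 nodes, 8-dimensional embeddings
edges    = rng.normal(size=(9, 8))   # 9 edges
global_u = rng.normal(size=(1, 8))   # one global context vector
params = tuple(init_mlp(rng, 8, 16, 8) for _ in range(3))
nodes, edges, global_u = simple_gnn_layer(params, nodes, edges, global_u)  # same shapes out
```

Note that this layer never even looks at the adjacency list.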
But, the output graph has updated embeddings, since the GNN has updated each of the node, edge and global-context representations.\n\n\n### GNN Predictions by Pooling Information\n\n\nWe have built a simple GNN, but how do we make predictions in any of the tasks we described above?\n\n\nWe will consider the case of binary classification, but this framework can easily be extended to the multi-class or regression case. If the task is to make binary predictions on nodes, and the graph already contains node information, the approach is straightforward — for each node embedding, apply a linear classifier.\n\n\n![](prediction_nodes_nodes.c2c8b4d0.png)\n\nWe could imagine a social network, where we wish to anonymize user data (nodes) by not using them, and only using relational data (edges). One instance of such a scenario is the node task we specified in the [Node-level task](#node-level-task) subsection. In the Karate club example, this would be just using the number of meetings between people to determine the alliance to Mr. Hi or John H.\n\nHowever, it is not always so simple. For instance, you might have information in the graph stored in edges, but no information in nodes, but still need to make predictions on nodes. We need a way to collect information from edges and give them to nodes for prediction. We can do this by *pooling*. Pooling proceeds in two steps:\n\n\n1. For each item to be pooled, *gather* each of their embeddings and concatenate them into a matrix.\n2. The gathered embeddings are then *aggregated*, usually via a sum operation.\n\n\n\nFor a more in-depth discussion on aggregation operations go to the [Comparing aggregation operations](#comparing-aggregation-operations) section.\n\nWe represent the *pooling* operation by the letter $\\rho$, and denote that we are gathering information from edges to nodes as $p\\_{E\\_n \\to V\\_{n}}$. \n\n\n\n\nHover over a node (black node) to visualize which edges are gathered and aggregated to produce an embedding for that target node.\n\nSo If we only have edge-level features, and are trying to predict binary node information, we can use pooling to route (or pass) information to where it needs to go. The model looks like this. \n\n\n![](prediction_edges_nodes.e6796b8e.png)\n\n\n\nIf we only have node-level features, and are trying to predict binary edge-level information, the model looks like this.\n\n\n![](prediction_nodes_edges.26fadbcc.png)\n\nOne example of such a scenario is the edge task we specified in [Edge level task](#edge-level-task) sub section. Nodes can be recognized as image entities, and we are trying to predict if the entities share a relationship (binary edges).\n\nIf we only have node-level features, and need to predict a binary global property, we need to gather all available node information together and aggregate them. This is similar to *Global Average Pooling* layers in CNNs. The same can be done for edges.\n\n\n![](prediction_nodes_edges_global.7a535eb8.png)\n\nThis is a common scenario for predicting molecular properties. 
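A minimal sketch of this gather-and-aggregate readout followed by a linear classifier; the weights are random placeholders and sum is just one possible aggregation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_graph_property(node_embeddings, w, b):
    """Gather all node embeddings, aggregate them, then apply a linear classifier."""
    pooled = node_embeddings.sum(axis=0)  # the aggregation step (sum over all nodes)
    return sigmoid(pooled @ w + b)        # probability of the binary graph-level label

rng = np.random.default_rng(0)
node_embeddings = rng.normal(size=(12, 8))  # e.g. 12 atoms with 8-dimensional embeddings
w, b = rng.normal(size=8) * 0.1, 0.0
p_label = predict_graph_property(node_embeddings, w, b)
```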
For example, we have atomic information, connectivity and we would like to know the toxicity of a molecule (toxic/not toxic), or if it has a particular odor (rose/not rose).\n\nIn our examples, the classification model *$c$* can easily be replaced with any differentiable model, or adapted to multi-class classification using a generalized linear model.\n\n\n![](Overall.e3af58ab.png)\n\nAn end-to-end prediction task with a GNN model.\n\n\nNow we’ve demonstrated that we can build a simple GNN model, and make binary predictions by routing information between different parts of the graph. This pooling technique will serve as a building block for constructing more sophisticated GNN models. If we have new graph attributes, we just have to define how to pass information from one attribute to another. \n\n\nNote that in this simplest GNN formulation, we’re not using the connectivity of the graph at all inside the GNN layer. Each node is processed independently, as is each edge, as well as the global context. We only use connectivity when pooling information for prediction. \n\n\n### Passing messages between parts of the graph\n\n\nWe could make more sophisticated predictions by using pooling within the GNN layer, in order to make our learned embeddings aware of graph connectivity. We can do this using *message passing*, where neighboring nodes or edges exchange information and influence each other’s updated embeddings.\n\n\nMessage passing works in three steps: \n\n\n1. For each node in the graph, *gather* all the neighboring node embeddings (or messages), which is the $g$ function described above.\n2. Aggregate all messages via an aggregate function (like sum).\n3. All pooled messages are passed through an *update function*, usually a learned neural network.\n\n\n\nYou could also 1) gather messages, 3) update them and 2) aggregate them and still have a permutation invariant operation.\n\nJust as pooling can be applied to either nodes or edges, message passing can occur between either nodes or edges.\n\n\nThese steps are key for leveraging the connectivity of graphs. We will build more elaborate variants of message passing in GNN layers that yield GNN models of increasing expressiveness and power. \n\n\n\n\nHover over a node, to highlight adjacent nodes and visualize the adjacent embedding that would be pooled, updated and stored.\n\n\nThis sequence of operations, when applied once, is the simplest type of message-passing GNN layer.\n\n\nThis is reminiscent of standard convolution: in essence, message passing and convolution are operations to aggregate and process the information of an element’s neighbors in order to update the element’s value. In graphs, the element is a node, and in images, the element is a pixel. However, the number of neighboring nodes in a graph can be variable, unlike in an image where each pixel has a set number of neighboring elements.\n\n\nBy stacking message passing GNN layers together, a node can eventually incorporate information from across the entire graph: after three layers, a node has information about the nodes three steps away from it.\n\n\nWe can update our architecture diagram to include this new source of information for nodes:\n\n\n![](arch_gcn.40871750.png)\n\nSchematic for a GCN architecture, which updates node representations of a graph by pooling neighboring nodes at a distance of one degree.\n\n### Learning edge representations\n\n\nOur dataset does not always contain all types of information (node, edge, and global context). 
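Before turning to that case, here is a sketch of the plain node-to-node message-passing update described above, with the sum over neighbours written as a multiplication by the adjacency matrix and a single dense layer standing in for the learned update function. Concatenating a node's own embedding with its pooled messages is a common choice, but only one of several.

```python
import numpy as np

def message_passing_step(adj, nodes, w, b):
    """One message-passing update: gather and sum neighbours, then a learned update."""
    messages = adj @ nodes                  # steps 1 and 2: sum of neighbour embeddings
    combined = np.concatenate([nodes, messages], axis=1)
    return np.maximum(combined @ w + b, 0)  # step 3: update function (one dense layer here)

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0],                  # a 3-node path graph: 0 - 1 - 2
                [1, 0, 1],
                [0, 1, 0]])
nodes = rng.normal(size=(3, 4))
w, b = rng.normal(size=(8, 4)) * 0.1, np.zeros(4)
nodes = message_passing_step(adj, nodes, w, b)  # after k such steps, a node 'sees' k hops away
```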
\nWhen we want to make a prediction on nodes, but our dataset only has edge information, we showed above how to use pooling to route information from edges to nodes, but only at the final prediction step of the model. We can share information between nodes and edges within the GNN layer using message passing.\n\n\nWe can incorporate the information from neighboring edges in the same way we used neighboring node information earlier, by first pooling the edge information, transforming it with an update function, and storing it.\n\n\nHowever, the node and edge information stored in a graph are not necessarily the same size or shape, so it is not immediately clear how to combine them. One way is to learn a linear mapping from the space of edges to the space of nodes, and vice versa. Alternatively, one may concatenate them together before the update function.\n\n\n![](arch_mpnn.a13c2294.png)\n\nArchitecture schematic for Message Passing layer. The first step “prepares” a message composed of information from an edge and it’s connected nodes and then “passes” the message to the node.\n\nWhich graph attributes we update and in which order we update them is one design decision when constructing GNNs. We could choose whether to update node embeddings before edge embeddings, or the other way around. This is an open area of research with a variety of solutions– for example we could update in a ‘weave’ fashion where we have four updated representations that get combined into new node and edge representations: node to node (linear), edge to edge (linear), node to edge (edge layer), edge to node (node layer).\n\n\n![](arch_weave.352befc0.png)\n\nSome of the different ways we might combine edge and node representation in a GNN layer.\n\n### Adding global representations\n\n\nThere is one flaw with the networks we have described so far: nodes that are far away from each other in the graph may never be able to efficiently transfer information to one another, even if we apply message passing several times. For one node, If we have k-layers, information will propagate at most k-steps away. This can be a problem for situations where the prediction task depends on nodes, or groups of nodes, that are far apart. One solution would be to have all nodes be able to pass information to each other. \nUnfortunately for large graphs, this quickly becomes computationally expensive (although this approach, called ‘virtual edges’, has been used for small graphs such as molecules).\n\n\nOne solution to this problem is by using the global representation of a graph (U) which is sometimes called a **master node** or context vector. This global context vector is connected to all other nodes and edges in the network, and can act as a bridge between them to pass information, building up a representation for the graph as a whole. This creates a richer and more complex representation of the graph than could have otherwise been learned. \n\n\n![](arch_graphnet.b229be6d.png)\nSchematic of a Graph Nets architecture leveraging global representations.\n\nIn this view all graph attributes have learned representations, so we can leverage them during pooling by conditioning the information of our attribute of interest with respect to the rest. For example, for one node we can consider information from neighboring nodes, connected edges and the global information. To condition the new node embedding on all these possible sources of information, we can simply concatenate them. 
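A sketch of that conditioning step; the dimensions and the single dense update layer are our simplifications.

```python
import numpy as np

def update_node(node, pooled_neighbours, pooled_edges, global_u, w, b):
    """Condition a node update on every available source of information by concatenation."""
    combined = np.concatenate([node, pooled_neighbours, pooled_edges, global_u])
    return np.maximum(combined @ w + b, 0)  # learned update (a single dense layer here)

rng = np.random.default_rng(0)
d = 4
node              = rng.normal(size=d)
pooled_neighbours = rng.normal(size=d)  # aggregated embeddings of adjacent nodes
pooled_edges      = rng.normal(size=d)  # aggregated embeddings of incident edges
global_u          = rng.normal(size=d)  # the graph's master node / context vector
w, b = rng.normal(size=(4 * d, d)) * 0.1, np.zeros(d)
new_node = update_node(node, pooled_neighbours, pooled_edges, global_u, w, b)
```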
Additionally we may also map them to the same space via a linear map and add them or apply a feature-wise modulation layer, which can be considered a type of featurize-wise attention mechanism.\n\n\n![](graph_conditioning.3017e214.png)\nSchematic for conditioning the information of one node based on three other embeddings (adjacent nodes, adjacent edges, global). This step corresponds to the node operations in the Graph Nets Layer. \n\nGNN playground\n--------------\n\n\nWe’ve described a wide range of GNN components here, but how do they actually differ in practice? This GNN playground allows you to see how these different components and architectures contribute to a GNN’s ability to learn a real task. \n\n\nOur playground shows a graph-level prediction task with small molecular graphs. We use the the Leffingwell Odor Dataset, which is composed of molecules with associated odor percepts (labels). Predicting the relation of a molecular structure (graph) to its smell is a 100 year-old problem straddling chemistry, physics, neuroscience, and machine learning.\n\n\nTo simplify the problem, we consider only a single binary label per molecule, classifying if a molecular graph smells “pungent” or not, as labeled by a professional perfumer. We say a molecule has a “pungent” scent if it has a strong, striking smell. For example, garlic and mustard, which might contain the molecule *allyl alcohol* have this quality. The molecule *piperitone*, often used for peppermint-flavored candy, is also described as having a pungent smell.\n\n\nWe represent each molecule as a graph, where atoms are nodes containing a one-hot encoding for its atomic identity (Carbon, Nitrogen, Oxygen, Fluorine) and bonds are edges containing a one-hot encoding its bond type (single, double, triple or aromatic). \n\n\nOur general modeling template for this problem will be built up using sequential GNN layers, followed by a linear model with a sigmoid activation for classification. The design space for our GNN has many levers that can customize the model:\n\n\n1. The number of GNN layers, also called the *depth*.\n2. The dimensionality of each attribute when updated. The update function is a 1-layer MLP with a relu activation function and a layer norm for normalization of activations.\n3. The aggregation function used in pooling: max, mean or sum.\n4. The graph attributes that get updated, or styles of message passing: nodes, edges and global representation. We control these via boolean toggles (on or off). A baseline model would be a graph-independent GNN (all message-passing off) which aggregates all data at the end into a single global attribute. Toggling on all message-passing functions yields a GraphNets architecture.\n\n\nTo better understand how a GNN is learning a task-optimized representation of a graph, we also look at the penultimate layer activations of the GNN. These ‘graph embeddings’ are the outputs of the GNN model right before prediction. Since we are using a generalized linear model for prediction, a linear mapping is enough to allow us to see how we are learning representations around the decision boundary. \n\n\nSince these are high dimensional vectors, we reduce them to 2D via principal component analysis (PCA). \nA perfect model would visibility separate labeled data, but since we are reducing dimensionality and also have imperfect models, this boundary might be harder to see.\n\n\nPlay around with different model architectures to build your intuition. 
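For reference, the featurisation described earlier in this section looks roughly like this for allyl alcohol, with hydrogens left implicit; the atom indices and encoding layout are our own.

```python
ATOMS = ["C", "N", "O", "F"]
BONDS = ["single", "double", "triple", "aromatic"]

def one_hot(item, vocabulary):
    vec = [0] * len(vocabulary)
    vec[vocabulary.index(item)] = 1
    return vec

# Allyl alcohol, CH2=CH-CH2-OH, as node and edge features (hydrogens left implicit).
node_feats = [one_hot(a, ATOMS) for a in ["C", "C", "C", "O"]]
edges = [(0, 1, one_hot("double", BONDS)),
         (1, 2, one_hot("single", BONDS)),
         (2, 3, one_hot("single", BONDS))]
```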
For example, see if you can edit the molecule on the left to make the model prediction increase. Do the same edits have the same effects for different model architectures?\n\n\nThis playground is running live on the browser in [tfjs](https://www.tensorflow.org/js/).\n\nEdit the molecule to see how the prediction changes, or change the model params to load a different model. Select a different molecule in the scatter plot.\n### Some empirical GNN design lessons\n\n\nWhen exploring the architecture choices above, you might have found some models have better performance than others. Are there some clear GNN design choices that will give us better performance? For example, do deeper GNN models perform better than shallower ones? or is there a clear choice between aggregation functions? The answers are going to depend on the data, , and even different ways of featurizing and constructing graphs can give different answers.\n\n\nWith the following interactive figure, we explore the space of GNN architectures and the performance of this task across a few major design choices: Style of message passing, the dimensionality of embeddings, number of layers, and aggregation operation type.\n\n\nEach point in the scatter plot represents a model: the x axis is the number of trainable variables, and the y axis is the performance. Hover over a point to see the GNN architecture parameters.\n\n\n\n\nvar spec = \"BasicArchitectures.json\";\nvegaEmbed('#BasicArchitectures', spec).then(function (result) {// Access the Vega view instance (https://vega.github.io/vega/docs/api/view/) as result.view\n}).catch(console.error);\nScatterplot of each model’s performance vs its number of trainable variables. Hover over a point to see the GNN architecture parameters.\n\nThe first thing to notice is that, surprisingly, a higher number of parameters does correlate with higher performance. GNNs are a very parameter-efficient model type: for even a small number of parameters (3k) we can already find models with high performance. \n\n\nNext, we can look at the distributions of performance aggregated based on the dimensionality of the learned representations for different graph attributes.\n\n\n\n\nvar spec = \"ArchitectureNDim.json\";\nvegaEmbed('#ArchitectureNDim', spec).then(function (result) {// Access the Vega view instance (https://vega.github.io/vega/docs/api/view/) as result.view\n}).catch(console.error);\nAggregate performance of models across varying node, edge, and global dimensions.\n\nWe can notice that models with higher dimensionality tend to have better mean and lower bound performance but the same trend is not found for the maximum. Some of the top-performing models can be found for smaller dimensions. Since higher dimensionality is going to also involve a higher number of parameters, these observations go in hand with the previous figure.\n\n\nNext we can see the breakdown of performance based on the number of GNN layers.\n\n\n\n\nvar spec = \"ArchitectureNLayers.json\";\nvegaEmbed('#ArchitectureNLayers', spec).then(function (result) {// Access the Vega view instance (https://vega.github.io/vega/docs/api/view/) as result.view\n}).catch(console.error);\n Chart of number of layers vs model performance, and scatterplot of model performance vs number of parameters. Each point is colored by the number of layers. 
Hover over a point to see the GNN architecture parameters.\n\nThe box plot shows a similar trend, while the mean performance tends to increase with the number of layers, the best performing models do not have three or four layers, but two. Furthermore, the lower bound for performance decreases with four layers. This effect has been observed before, GNN with a higher number of layers will broadcast information at a higher distance and can risk having their node representations ‘diluted’ from many successive iterations .\n\n\nDoes our dataset have a preferred aggregation operation? Our following figure breaks down performance in terms of aggregation type.\n\n\n\n\nvar spec = \"ArchitectureAggregation.json\";\nvegaEmbed('#ArchitectureAggregation', spec).then(function (result) {// Access the Vega view instance (https://vega.github.io/vega/docs/api/view/) as result.view\n}).catch(console.error);\nChart of aggregation type vs model performance, and scatterplot of model performance vs number of parameters. Each point is colored by aggregation type. Hover over a point to see the GNN architecture parameters.\n\nOverall it appears that sum has a very slight improvement on the mean performance, but max or mean can give equally good models. This is useful to contextualize when looking at the [discriminatory/expressive capabilities](#comparing-aggregation-operations) of aggregation operations .\n\n\nThe previous explorations have given mixed messages. We can find mean trends where more complexity gives better performance but we can find clear counterexamples where models with fewer parameters, number of layers, or dimensionality perform better. One trend that is much clearer is about the number of attributes that are passing information to each other.\n\n\nHere we break down performance based on the style of message passing. On both extremes, we consider models that do not communicate between graph entities (“none”) and models that have messaging passed between nodes, edges, and globals.\n\n\n\n\nvar spec = \"ArchitectureMessagePassing.json\";\nvegaEmbed('#ArchitectureMessagePassing', spec).then(function (result) {// Access the Vega view instance (https://vega.github.io/vega/docs/api/view/) as result.view\n}).catch(console.error);\nChart of message passing vs model performance, and scatterplot of model performance vs number of parameters. Each point is colored by message passing. Hover over a point to see the GNN architecture parameters\n\nOverall we see that the more graph attributes are communicating, the better the performance of the average model. Our task is centered on global representations, so explicitly learning this attribute also tends to improve performance. Our node representations also seem to be more useful than edge representations, which makes sense since more information is loaded in these attributes.\n\n\nThere are many directions you could go from here to get better performance. We wish two highlight two general directions, one related to more sophisticated graph algorithms and another towards the graph itself.\n\n\nUp until now, our GNN is based on a neighborhood-based pooling operation. There are some graph concepts that are harder to express in this way, for example a linear graph path (a connected chain of nodes). 
Designing new mechanisms in which graph information can be extracted, executed and propagated in a GNN is one current research area , , , .\n\n\nOne of the frontiers of GNN research is not making new models and architectures, but “how to construct graphs”, to be more precise, imbuing graphs with additional structure or relations that can be leveraged. As we loosely saw, the more graph attributes are communicating the more we tend to have better models. In this particular case, we could consider making molecular graphs more feature rich, by adding additional spatial relationships between nodes, adding edges that are not bonds, or explicit learnable relationships between subgraphs.\n\n\nSee more in [Other types of graphs](#Other-types-of-graphs ).\nInto the Weeds\n--------------\n\n\nNext, we have a few sections on a myriad of graph-related topics that are relevant for GNNs.\n\n\n### Other types of graphs (multigraphs, hypergraphs, hypernodes, hierarchical graphs)\n\n\nWhile we only described graphs with vectorized information for each attribute, graph structures are more flexible and can accommodate other types of information. Fortunately, the message passing framework is flexible enough that often adapting GNNs to more complex graph structures is about defining how information is passed and updated by new graph attributes. \n\n\nFor example, we can consider multi-edge graphs or *multigraphs*, where a pair of nodes can share multiple types of edges, this happens when we want to model the interactions between nodes differently based on their type. For example with a social network, we can specify edge types based on the type of relationships (acquaintance, friend, family). A GNN can be adapted by having different types of message passing steps for each edge type. \nWe can also consider nested graphs, where for example a node represents a graph, also called a hypernode graph. Nested graphs are useful for representing hierarchical information. For example, we can consider a network of molecules, where a node represents a molecule and an edge is shared between two molecules if we have a way (reaction) of transforming one to the other .\nIn this case, we can learn on a nested graph by having a GNN that learns representations at the molecule level and another at the reaction network level, and alternate between them during training.\n\n\nAnother type of graph is a hypergraph, where an edge can be connected to multiple nodes instead of just two. For a given graph, we can build a hypergraph by identifying communities of nodes and assigning a hyper-edge that is connected to all nodes in a community.\n\n\n![](multigraphs.1bb84306.png)\nSchematic of more complex graphs. On the left we have an example of a multigraph with three edge types, including a directed edge. On the right we have a three-level hierarchical graph, the intermediate level nodes are hypernodes.\n\nHow to train and design GNNs that have multiple types of graph attributes is a current area of research , .\n\n\n### Sampling Graphs and Batching in GNNs\n\n\nA common practice for training neural networks is to update network parameters with gradients calculated on randomized constant size (batch size) subsets of the training data (mini-batches). This practice presents a challenge for graphs due to the variability in the number of nodes and edges adjacent to each other, meaning that we cannot have a constant batch size. The main idea for batching with graphs is to create subgraphs that preserve essential properties of the larger graph. 
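One neighbourhood-preserving recipe, spelled out in the next paragraph, can be sketched as follows; the function and its arguments are illustrative rather than taken from any library.

```python
import numpy as np

def sample_neighbourhood_subgraph(adjacency_list, n_nodes, n_seeds, k, rng):
    """Sample a seed node-set, expand it by k hops, and return the induced subgraph."""
    seeds = set(rng.choice(n_nodes, size=n_seeds, replace=False).tolist())
    keep = set(seeds)
    for _ in range(k):  # each pass adds nodes one hop further out
        current = set(keep)
        keep |= {j for i, j in adjacency_list if i in current}
        keep |= {i for i, j in adjacency_list if j in current}
    edges = [(i, j) for i, j in adjacency_list if i in keep and j in keep]
    return seeds, keep, edges

rng = np.random.default_rng(0)
ring_with_chord = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (2, 5)]
seeds, kept_nodes, sub_edges = sample_neighbourhood_subgraph(ring_with_chord, 6,
                                                             n_seeds=2, k=1, rng=rng)
# During training, the loss would typically be masked to the seed nodes only,
# since nodes added for context have incomplete neighbourhoods.
```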
This graph sampling operation is highly dependent on context and involves sub-selecting nodes and edges from a graph. These operations might make sense in some contexts (citation networks) and in others, these might be too strong of an operation (molecules, where a subgraph simply represents a new, smaller molecule). How to sample a graph is an open research question. \nIf we care about preserving structure at a neighborhood level, one way would be to randomly sample a uniform number of nodes, our *node-set*. Then add neighboring nodes of distance k adjacent to the node-set, including their edges. Each neighborhood can be considered an individual graph and a GNN can be trained on batches of these subgraphs. The loss can be masked to only consider the node-set since all neighboring nodes would have incomplete neighborhoods.\nA more efficient strategy might be to first randomly sample a single node, expand its neighborhood to distance k, and then pick the other node within the expanded set. These operations can be terminated once a certain amount of nodes, edges, or subgraphs are constructed.\nIf the context allows, we can build constant size neighborhoods by picking an initial node-set and then sub-sampling a constant number of nodes (e.g randomly, or via a random walk or Metropolis algorithm).\n\n\n![](sampling.968003b3.png)\nFour different ways of sampling the same graph. Choice of sampling strategy depends highly on context since they will generate different distributions of graph statistics (# nodes, #edges, etc.). For highly connected graphs, edges can be also subsampled. \n\nSampling a graph is particularly relevant when a graph is large enough that it cannot be fit in memory. Inspiring new architectures and training strategies such as Cluster-GCN and GraphSaint . We expect graph datasets to continue growing in size in the future.\n\n\n### Inductive biases\n\n\nWhen building a model to solve a problem on a specific kind of data, we want to specialize our models to leverage the characteristics of that data. When this is done successfully, we often see better predictive performance, lower training time, fewer parameters and better generalization. \n\n\nWhen labeling on images, for example, we want to take advantage of the fact that a dog is still a dog whether it is in the top-left or bottom-right corner of an image. Thus, most image models use convolutions, which are translation invariant. For text, the order of the tokens is highly important, so recurrent neural networks process data sequentially. Further, the presence of one token (e.g. the word ‘not’) can affect the meaning of the rest of a sentence, and so we need components that can ‘attend’ to other parts of the text, which transformer models like BERT and GPT-3 can do. These are some examples of inductive biases, where we are identifying symmetries or regularities in the data and adding modelling components that take advantage of these properties.\n\n\nIn the case of graphs, we care about how each graph component (edge, node, global) is related to each other so we seek models that have a relational inductive bias. A model should preserve explicit relationships between entities (adjacency matrix) and preserve graph symmetries (permutation invariance). We expect problems where the interaction between entities is important will benefit from a graph structure. 
Concretely, this means designing transformation on sets: the order of operation on nodes or edges should not matter and the operation should work on a variable number of inputs. \n\n\n### Comparing aggregation operations\n\n\nPooling information from neighboring nodes and edges is a critical step in any reasonably powerful GNN architecture. Because each node has a variable number of neighbors, and because we want a differentiable method of aggregating this information, we want to use a smooth aggregation operation that is invariant to node ordering and the number of nodes provided.\n\n\nSelecting and designing optimal aggregation operations is an open research topic. A desirable property of an aggregation operation is that similar inputs provide similar aggregated outputs, and vice-versa. Some very simple candidate permutation-invariant operations are sum, mean, and max. Summary statistics like variance also work. All of these take a variable number of inputs, and provide an output that is the same, no matter the input ordering. Let’s explore the difference between these operations.\n\n\n \n\nNo pooling type can always distinguish between graph pairs such as max pooling on the left and sum / mean pooling on the right. \n\nThere is no operation that is uniformly the best choice. The mean operation can be useful when nodes have a highly-variable number of neighbors or you need a normalized view of the features of a local neighborhood. The max operation can be useful when you want to highlight single salient features in local neighborhoods. Sum provides a balance between these two, by providing a snapshot of the local distribution of features, but because it is not normalized, can also highlight outliers. In practice, sum is commonly used. \n\n\nDesigning aggregation operations is an open research problem that intersects with machine learning on sets. New approaches such as Principal Neighborhood aggregation take into account several aggregation operations by concatenating them and adding a scaling function that depends on the degree of connectivity of the entity to aggregate. Meanwhile, domain specific aggregation operations can also be designed. One example lies with the “Tetrahedral Chirality” aggregation operators .\n\n\n### GCN as subgraph function approximators\n\n\nAnother way to see GCN (and MPNN) of k-layers with a 1-degree neighbor lookup is as a neural network that operates on learned embeddings of subgraphs of size k.\n\n\nWhen focusing on one node, after k-layers, the updated node representation has a limited viewpoint of all neighbors up to k-distance, essentially a subgraph representation. Same is true for edge representations.\n\n\nSo a GCN is collecting all possible subgraphs of size k and learning vector representations from the vantage point of one node or edge. The number of possible subgraphs can grow combinatorially, so enumerating these subgraphs from the beginning vs building them dynamically as in a GCN, might be prohibitive.\n\n\n![](arch_subgraphs.197f9b0e.png)\n\n\n### Edges and the Graph Dual\n\n\nOne thing to note is that edge predictions and node predictions, while seemingly different, often reduce to the same problem: an edge prediction task on a graph $G$ can be phrased as a node-level prediction on $G$’s dual.\n\n\nTo obtain $G$’s dual, we can convert nodes to edges (and edges to nodes). A graph and its dual contain the same information, just expressed in a different way. 
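One concrete version of this conversion is the line graph of $G$: every edge of $G$ becomes a node, and two of these new nodes are connected whenever the original edges shared an endpoint. A sketch of that construction, assuming an undirected edge list (the example graph is ours):

```python
from itertools import combinations

def line_graph(edges):
    """Each edge of G becomes a node; two such nodes are connected when the
    original edges share an endpoint."""
    dual_nodes = list(edges)
    dual_edges = [(a, b) for a, b in combinations(range(len(edges)), 2)
                  if set(edges[a]) & set(edges[b])]
    return dual_nodes, dual_edges

g_edges = [(0, 1), (1, 2), (2, 3)]  # a 4-node path graph 0 - 1 - 2 - 3
dual_nodes, dual_edges = line_graph(g_edges)
# dual_nodes: [(0, 1), (1, 2), (2, 3)];  dual_edges: [(0, 1), (1, 2)]
# An edge classification problem on G becomes a node classification problem here.
```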
Sometimes this property makes solving problems easier in one representation than another, like frequencies in Fourier space. In short, to solve an edge classification problem on $G$, we can think about doing graph convolutions on $G$'s dual (which is the same as learning edge representations on $G$); this idea was developed with Dual-Primal Graph Convolutional Networks.

### Graph convolutions as matrix multiplications, and matrix multiplications as walks on a graph

We've talked a lot about graph convolutions and message passing, and of course, this raises the question of how we implement these operations in practice. For this section, we explore some of the properties of matrix multiplication, message passing, and their connection to traversing a graph.

The first point we want to illustrate is that multiplying an adjacency matrix $A$ of size $n\_{nodes} \times n\_{nodes}$ with a node feature matrix $X$ of size $n\_{nodes} \times node\_{dim}$ implements a simple message-passing step with a summation aggregation. Let $B=AX$. Any entry $B\_{i,j}$ can be expressed as $B\_{i,j} = A\_{i,1}X\_{1,j}+A\_{i,2}X\_{2,j}+\ldots+A\_{i,n}X\_{n,j}=\sum\_{A\_{i,k}>0} X\_{k,j}$. Because the binary entry $A\_{i,k}$ is 1 only when an edge exists between $node\_i$ and $node\_k$, the inner product is essentially "gathering" all node feature values of dimension $j$ from the nodes that share an edge with $node\_i$. It should be noted that this message passing is not updating the representation of the node features, just pooling neighboring node features. But this can be easily adapted by passing $X$ through your favorite differentiable transformation (e.g. an MLP) before or after the matrix multiply.

From this view, we can appreciate the benefit of using adjacency lists. Due to the expected sparsity of $A$, we don't have to sum all the values where $A\_{i,k}$ is zero. As long as we have an operation to gather values based on an index, we should be able to just retrieve the positive entries. Additionally, this matrix-multiply-free approach frees us from using summation as the aggregation operation.

We can imagine that applying this operation multiple times allows us to propagate information at greater distances. In this sense, matrix multiplication is a form of traversing over a graph. This relationship is also apparent when we look at powers $A^k$ of the adjacency matrix. If we consider the matrix $A^2$, the term $A^2\_{i,j}$ counts all walks of length 2 from $node\_{i}$ to $node\_{j}$ and can be expressed as the inner product $A^2\_{i,j} = A\_{i,1}A\_{1,j}+A\_{i,2}A\_{2,j}+\ldots+A\_{i,n}A\_{n,j}$. The intuition is that the first term $A\_{i,1}A\_{1,j}$ is only positive under two conditions: there is an edge that connects $node\_i$ to $node\_1$, and another edge that connects $node\_{1}$ to $node\_{j}$. In other words, both edges form a path of length 2 that goes from $node\_i$ to $node\_j$ passing through $node\_1$. Due to the summation, we are counting over all possible intermediate nodes. This intuition carries over when we consider $A^3=A \cdot A^2$, and so on up to $A^k$.

There are deeper connections on how we can view matrices as graphs to explore.

### Graph Attention Networks

Another way of communicating information between graph attributes is via attention.
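(As a quick numerical aside checking the matrix-multiplication view above; the 4-node graph is our own example.)

```python
import numpy as np

A = np.array([[0, 1, 1, 0],   # adjacency matrix of a 4-node graph
              [1, 0, 1, 0],   # with edges 0-1, 0-2, 1-2 and 2-3
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
X = np.arange(8.0).reshape(4, 2)  # node feature matrix, node_dim = 2

B = A @ X                 # row i of B is the sum of the features of node i's neighbours
print(B[0], X[1] + X[2])  # identical: node 0's neighbours are nodes 1 and 2

A2 = A @ A
print(A2[0, 3])  # 1: exactly one walk of length 2 from node 0 to node 3 (via node 2)
print(A2[0, 0])  # 2: node 0 has two neighbours, hence two length-2 walks back to itself
```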
For example, when we consider the sum-aggregation of a node and its 1-degree neighboring nodes we could also consider using a weighted sum.The challenge then is to associate weights in a permutation invariant fashion. One approach is to consider a scalar scoring function that assigns weights based on pairs of nodes ( $f(node\\_i, node\\_j)$). In this case, the scoring function can be interpreted as a function that measures how relevant a neighboring node is in relation to the center node. Weights can be normalized, for example with a softmax function to focus most of the weight on a neighbor most relevant for a node in relation to a task. This concept is the basis of Graph Attention Networks (GAT) and Set Transformers. Permutation invariance is preserved, because scoring works on pairs of nodes. A common scoring function is the inner product and nodes are often transformed before scoring into query and key vectors via a linear map to increase the expressivity of the scoring mechanism. Additionally for interpretability, the scoring weights can be used as a measure of the importance of an edge in relation to a task. \n\n\n![](attention.3c55769d.png)\nSchematic of attention over one node with respect to it’s adjacent nodes. For each edge an interaction score is computed, normalized and used to weight node embeddings.\n\nAdditionally, transformers can be viewed as GNNs with an attention mechanism . Under this view, the transformer models several elements (i.g. character tokens) as nodes in a fully connected graph and the attention mechanism is assigning edge embeddings to each node-pair which are used to compute attention weights. The difference lies in the assumed pattern of connectivity between entities, a GNN is assuming a sparse pattern and the Transformer is modelling all connections.\n\n\n### Graph explanations and attributions\n\n\nWhen deploying GNN in the wild we might care about model interpretability for building credibility, debugging or scientific discovery. The graph concepts that we care to explain vary from context to context. For example, with molecules we might care about the presence or absence of particular subgraphs, while in a citation network we might care about the degree of connectedness of an article. Due to the variety of graph concepts, there are many ways to build explanations. GNNExplainer casts this problem as extracting the most relevant subgraph that is important for a task. Attribution techniques assign ranked importance values to parts of a graph that are relevant for a task. Because realistic and challenging graph problems can be generated synthetically, GNNs can serve as a rigorous and repeatable testbed for evaluating attribution techniques .\n\n\n![](graph_xai.bce4532f.png)\nSchematic of some explanability techniques on graphs. Attributions assign ranked values to graph attributes. Rankings can be used as a basis to extract connected subgraphs that might be relevant to a task.\n\n### Generative modelling\n\n\nBesides learning predictive models on graphs, we might also care about learning a generative model for graphs. With a generative model we can generate new graphs by sampling from a learned distribution or by completing a graph given a starting point. 
### Graph explanations and attributions\n\n\nWhen deploying GNNs in the wild we might care about model interpretability for building credibility, debugging or scientific discovery. The graph concepts that we care to explain vary from context to context. For example, with molecules we might care about the presence or absence of particular subgraphs, while in a citation network we might care about the degree of connectedness of an article. Due to the variety of graph concepts, there are many ways to build explanations. GNNExplainer casts this problem as extracting the most relevant subgraph that is important for a task. Attribution techniques assign ranked importance values to parts of a graph that are relevant for a task. Because realistic and challenging graph problems can be generated synthetically, GNNs can serve as a rigorous and repeatable testbed for evaluating attribution techniques .\n\n\n![](graph_xai.bce4532f.png)\nSchematic of some explainability techniques on graphs. Attributions assign ranked values to graph attributes. Rankings can be used as a basis to extract connected subgraphs that might be relevant to a task.\n\n### Generative modelling\n\n\nBesides learning predictive models on graphs, we might also care about learning a generative model for graphs. With a generative model we can generate new graphs by sampling from a learned distribution or by completing a graph given a starting point. A relevant application is in the design of new drugs, where novel molecular graphs with specific properties are desired as candidates to treat a disease.\n\n\nA key challenge with graph generative models lies in modelling the topology of a graph, which can vary dramatically in size and has $N\_{nodes}^2$ terms. One solution lies in modelling the adjacency matrix directly, like an image, with an autoencoder framework. The prediction of the presence or absence of an edge is treated as a binary classification task. The $N\_{nodes}^2$ term can be avoided by only predicting known edges and a subset of the edges that are not present. The GraphVAE learns to model positive patterns of connectivity and some patterns of non-connectivity in the adjacency matrix.\n\n\nAnother approach is to build a graph sequentially, by starting with a graph and applying discrete actions such as addition or subtraction of nodes and edges iteratively. To avoid estimating a gradient for discrete actions we can use a policy gradient. This has been done via an auto-regressive model, such as an RNN, or in a reinforcement learning scenario. Furthermore, sometimes graphs can be modeled as just sequences with grammar elements.\n\n\n\nFinal thoughts\n--------------\n\n\nGraphs are a powerful and rich structured data type that have strengths and challenges that are very different from those of images and text. In this article, we have outlined some of the milestones that researchers have come up with in building neural network based models that process graphs. We have walked through some of the important design choices that must be made when using these architectures, and hopefully the GNN playground can give an intuition on what the empirical results of these design choices are. The success of GNNs in recent years creates a great opportunity for a wide range of new problems, and we are excited to see what the field will bring.", "date_published": "2021-09-02T20:00:00Z", "authors": ["Adam Pearce"], "summaries": ["What components are needed for building learning algorithms that leverage the structure and properties of graphs?"], "doi": "10.23915/distill.00033", "journal_ref": "distill-pub", "bibliography": [{"link": "https://doi.org/10.23915/distill.00032", "title": "Understanding Convolutions on Graphs"}, {"link": "https://papers.nips.cc/paper/2020/hash/417fbbf2e9d5a28a855a11894b2e795a-Abstract.html", "title": "Evaluating Attribution for Graph Neural Networks"}]} {"id": "487640e189145951117a720279b68499", "title": "Distill Hiatus", "url": "https://distill.pub/2021/distill-hiatus", "source": "distill", "source_type": "blog", "text": "*Over the past five years, Distill has supported authors in publishing artifacts that push beyond the traditional expectations of scientific papers. From Gabriel Goh’s interactive exposition of momentum, to an [ongoing collaboration exploring self-organizing systems](https://distill.pub/2020/growing-ca/), to a [community discussion of a highly debated paper](https://distill.pub/2019/advex-bugs-discussion/), Distill has been a venue for authors to experiment in scientific communication.*\n\n*But over this time, the editorial team has become less certain whether it makes sense to run Distill as a journal, rather than encourage authors to self-publish. Running Distill as a journal creates a great deal of structural friction, making it hard for us to focus on the aspects of scientific publishing we’re most excited about. 
Distill is volunteer run and these frictions have caused our team to struggle with burnout.*\n\n*Starting today Distill will be taking a one year hiatus, which may be extended indefinitely. Papers actively under review are not affected by this change, published threads can continue to add to their exploration, and we may publish commentary articles in limited cases. Authors can continue to write Distill-style papers using the [Distill template](https://github.com/distillpub/template), and either self-publish or submit to venues like [VISxAI](https://visxai.io/).*\n\n\n\n---\n\nThe Distill journal was founded as an adapter between traditional and online scientific publishing. We believed that many valuable scientific contributions — such as explanations, interactive articles, and visualizations — were held back by not being seen as “real scientific publications.” Our theory was that if a journal were to publish such artifacts, it would allow authors to benefit from the traditional academic incentive system and enable more of this kind of work.\n\nAfter four years, we no longer believe this theory of impact. First, we don’t think that publishing in a journal like Distill significantly affects how seriously most institutions take non-traditional publications. Instead, it seems that more liberal institutions will take high-quality articles seriously regardless of their venue and style, while more conservative institutions remain unmoved. Secondly, we don’t believe that having a venue is the primary bottleneck to authors producing more Distill-style articles. Instead, we believe the primary bottleneck is the amount of effort it takes to produce these articles and the unusual combination of scientific and design expertise required.\n\nWe’re proud of the authors Distill has been able to support and the articles it has been able to publish. And we do think that Distill has produced a lot of value. But we don’t think this value has been a product of Distill’s status as a journal. Instead, we believe Distill’s impact has been through:\n\n* Providing mentorship to authors and potential authors.\n* Providing the Distill template (which is used by many non-Distill authors)\n* Individuals involved in Distill producing excellent articles.\n* Providing encouragement and community to authors.\n\nOur sense is that Distill’s journal structure may limit, rather than support, these benefits. It creates a great deal of overhead, political concerns, and is in direct tension with some of these goals.\n\nInstead, we think the future for most types of articles is probably self-publication, either on one-off websites or on a hypothetical “Distill Arxiv.” There are a few exceptions where we think centralized journal-like entities probably have an important enduring role, but we think the majority of papers are best served by self-publication.\n\nChanges in How We Think About Distill\n-------------------------------------\n\n### Mentorship is in Tension with Being a Journal\n\nBehind the scenes, the largest function of Distill is providing feedback and mentorship. For some of our early articles, we provided more than 50 hours of help with designing diagrams, improving writing style, and shaping scientific communication. Although we’ve generally dialed this down over time, each article still requires significant work. 
All of this is done by our editors in a volunteer capacity, on top of their regular work responsibilities.\n\nThe first problem with providing mentorship through an editorial role is that it’s not a very good mechanism for distributing mentorship. Ideally, one wants to provide mentorship early on in projects, to mentees with similar interests, and to a number of mentees that one is capable of providing good mentorship to. Providing mentorship to everyone who submits an article to Distill is overwhelming. Another problem is that our advice is often too late because the article’s foundation is already set. Finally, many authors don’t realize the amount of effort it takes to publish a Distill article.\n\nProviding mentorship also creates a challenging dual relationship for an editor. They have both the role of closely supporting and championing the author while also having to accept or reject them in the end. We’ve found this to be difficult for both the mentor and mentee.\n\nFinally, the kind of deeply-engaged editing and mentorship that we sometimes provide can often amount to an authorship level contribution, with authors offering co-authorship to editors. This is especially true when an editor was a mentor from early on. In many ways, co-authorship would create healthy incentives, rewarding the editor for spending tens of hours improving the article. But it creates a conflict of interest if the editor is to be an independent decision maker, as the journal format suggests they should be. And even if another editor takes over, it’s a political risk: Distill is sometimes criticized for publishing too many articles with editors as authors.\n\n### Editor Articles are in Tension with Being a Journal\n\nAnother important impact of Distill has been articles written by the editors themselves. Distill’s editorial team consists of volunteer researchers who are deeply excited about explanations and interactive articles and have a long history of doing so. Since the set of people with these interests is small, a non-trivial fraction of Distill’s publications have come from editors. In other cases, authors of existing Distill articles were later invited to become an editor.\n\nEditor articles are sometimes cited as a sign of a kind of corruption for Distill, that Distill is a vehicle for promoting editors. We can see how it might seem dubious for a journal to publish articles by people running it, even if editorial decisions are made by an editor who is at arms-length. This has led Distill to avoid publishing several editor articles despite believing that they are of value to readers.\n\nWe believe that editor articles are actually a good thing about Distill. Each one represents an immense amount of effort in trying new things scientific publishing. Given the large volume of readers and the positive informal comments we receive, we suspect that for every critic there are many silent but happy readers.\n\nWhen a structure turns a public good into an appearance of corruption, it suggests it might not be such a good structure. As editors, we want to share our work with the world in a way that is not seen as corrupt.\n\n### Neutral venues can be achieved in other ways\n\nThe vast majority of Distill articles are written by multiple authors, often from multiple institutions. As a result, an important function of Distill is providing somewhere to publish that isn’t someone’s home turf. 
If a Distill article were published on one person or organization’s blog, it could lead to a perception that it is primarily theirs and make other authors feel less comfortable with collaboration. Arxiv normally fills this role, but it only supports PDFs.\n\nBut it turns out there’s a simpler solution: self publication on one-off websites. David Ha and his collaborators have done a great job demonstrating this, using the Distill template and GitHub pages to self-publish articles (eg. the [world models](https://worldmodels.github.io/) article). In these cases, the articles are standalone rather than being with a particular author or institution.\n\n### Self-Publication Seems Like the Future (in most cases)\n\nIn many areas of physics, self publishing on Arxiv has become the dominant mode of publication. A great deal of machine learning research is also published on Arxiv. We think this type of self-publication is likely the future for a large fraction of publication, possibly along with alternative models of review that are separated from a publisher.\n\nJournal-led peer review provides many benefits. It can protect against scientific misinformation and non-reproducible results. It can save the research community time by filtering out papers that aren’t worth engaging with. It can provide feedback to junior researchers who may not have other sources of feedback. It can push research groups studying similar topics across institutions to engage with each other’s criticism. And double-blind review may support equity and fairness.\n\nBut is traditional journal-led peer review the most effective way to achieve these benefits? And is it worth the enormous costs it imposes on editors, reviewers, authors, and readers?\n\nFor example, avoiding scientific errors, non-reproducible results, and misinformation is certainly important. But for every paper where there’s a compelling public interest in avoiding misinformation (eg. papers about COVID), there are thousands of papers whose audience is a handful of the same researchers we ask to perform review. Additionally, it’s not clear how effective peer review actually is at catching errors. We suspect that a structure which focuses on reviewing controversial and important papers would be more effective at this goal. Our experience from [discussion articles](https://distill.pub/2019/advex-bugs-discussion/) is that reviewers are willing to spend orders of magnitude more energy when they feel like reviewing a paper genuinely matters to the community, rather than being pro-forma, and their work will be seen as a scientific contribution.\n\nSimilarly, we suspect that journal-led review isn’t a very effective way of providing feedback to junior researchers or of promoting equity. These are all very worthy aims, and we’d like to free energy to pursue them in effective ways.\n\nWe also think there’s a lot of upside to self-publication. Self-publication can move very fast. It doesn’t require a paper to fit into the scope of an existing journal. It allows for more innovation in the format of the paper, such as using interactive diagrams as Distill does. And it aligns incentives better.Self-publication may align certain incentives better than traditional publishing. Many papers go through an informal review process before they’re submitted to a journal or self-published, with authors soliciting feedback from colleagues. This informal review process is often smoother, faster, and provides more constructive and more relevant feedback than a traditional review process. 
Why is that? In a normal review process, the authors have the highest stakes, but little agency in the process. Meanwhile, neither the reviewers nor the editors share the authors’ incentive to move quickly. And the reviewers are often horribly over-subscribed. In contrast, in an informal review process, the authors have a strong incentive to quickly organize the process and reviewers are focused on providing helpful feedback to someone they know, rather than arbitrating a gatekeeping decision.\n\n### A Half-hearted Distill May Cause Harm\n\nDistill isn’t living up to our standards of author experience. Originally, we had a vision of a much more engaged, responsive, and rapid review process with editors deeply involved in helping authors improve their article. But the truth is that, with us being quite burnt out, our review process has become much slower and more similar to a typical journal. It’s unclear to us whether the value added by our present review process is worth the time costs we impose on authors.\n\nDistill also occupies institutional space, potentially discouraging others from starting similar projects. It’s possible that there are others who could execute something like Distill better than us, but aren’t starting their project because Distill exists.\n\nOn the flip side, Distill often comes up in conversations about the future of publishers and journals in machine learning, as a positive example of the role a journal can play. But if we no longer believe in our model, Distill may be unintentionally supporting something we don’t really stand behind. We may also be setting unrealistic aspirations: if Distill’s level of editorial engagement and editing was unsustainable, even with a deeply passionate set of volunteers and a relatively small number of articles, we should at least be clearly communicating how difficult it is.\n\nWhy a Hiatus?\n-------------\n\nWe think that Distill is a really beautiful artifact which illustrates a vision of scientific publishing. But it is not sustainable for us to continue running the journal in its current form. We think preserving it in its present state is more valuable than diluting it with lower quality editing. We also think that it’s a lot healthier for us and frees up our energy to do new projects that provide value to the community.\n\nWe’ve considered trying to find others to hand Distill off to. But a lot of the value of Distill is illustrating a weird and idiosyncratic vision. We think there’s value in preserving Distill’s original flavor. We are open to changes to better structure Distill, but we feel protective of Distill’s vision and quirkiness.\n\nAlthough Distill is going on hiatus, the [Distill template](https://github.com/distillpub/template) is open source, and we’d love to see others run with it!\n\n### Burnout\n\nOver the last few years, Distill has experienced a significant amount of volunteer burnout. The fact that multiple volunteers experienced burnout makes us think it’s partly caused by the issues described in previous sections.\n\nOne of the biggest risk factors in burnout is having conflicting goals, and as the previous sections describe, we’ve had many conflicting goals. We wanted to mentor people, but we also needed to reject them. We wanted to write beautiful articles ourselves, but we also wanted to be an independent venue.\n\nAnother significant risk factor is having unachievable goals. 
We set extremely high standards for ourselves: with early articles, volunteer editors would often spend 50 or more hours improving articles that were submitted to Distill and bringing them up to the level of quality we aspired to. This invisible effort was comparable to the work of writing a short article of one’s own. It wasn’t sustainable, and this left us with a constant sense that we were falling short. A related issue is that we had trouble setting well-defined boundaries of what we felt we owed to authors who submitted to us.\n\nBy discussing these challenges, we hope that future projects like Distill will be able to learn from our experiences and find ways to balance these competing values.", "date_published": "2021-07-02T20:00:00Z", "authors": ["Editorial Team"], "summaries": ["After five years, Distill will be taking a break."], "doi": "10.23915/distill.00031", "journal_ref": "distill-pub", "bibliography": []} {"id": "a15bd5a2f0b26e2015a03fdc5a2319a9", "title": "Adversarial Reprogramming of Neural Cellular Automata", "url": "https://distill.pub/selforg/2021/adversarial", "source": "distill", "source_type": "blog", "text": "### Contents\n\n\n[Adversarial MNIST CAs](#adversarial-mnist-cas) | \n[Adversarial Injections for Growing CAs](#adversarial-injections-for-growing-cas) | \n\n[Perturbing the states of Growing CAs](#perturbing-the-states-of-growing-cas) | \n[Related Work](#related-work)\n[Discussion](#discussion)\n \n\n \n\n\n\n\n![](images/multiple-pages.svg)\n\n This article is part of the\n [Differentiable Self-organizing Systems Thread](/2020/selforg/),\n an experimental format collecting invited short articles delving into\n differentiable self-organizing systems, interspersed with critical\n commentary from several experts in adjacent fields.\n \n\n\n[Self-Organising Textures](/selforg/2021/textures/)\n\n This article makes strong use of colors in figures and demos. Click [here](#colorwheel) to adjust the color palette.\n\n\nIn a complex system, whether biological, technological, or social, how can we discover signaling events that will alter system-level behavior in desired ways? Even when the rules governing the individual components of these complex systems are known, the inverse problem - going from desired behaviour to system design - is at the heart of many barriers for the advance of biomedicine, robotics, and other fields of importance to society.\n\n\nBiology, specifically, is transitioning from a focus on mechanism (what is required for the system to work) to a focus on information (what algorithm is sufficient to implement adaptive behavior). Advances in machine learning represent an exciting and largely untapped source of inspiration and tooling to assist the biological sciences. Growing Neural Cellular Automata and Self-classifying MNIST Digits introduced the Neural Cellular Automata (Neural CA) model and demonstrated how tasks requiring self-organisation, such as pattern growth and self-classification of digits, can be trained in an end-to-end, differentiable fashion. The resulting models were robust to various kinds of perturbations: the growing CA expressed regenerative capabilities when damaged; the MNIST CA were responsive to changes in the underlying digits, triggering reclassification whenever necessary. These computational frameworks represent quantitative models with which to understand important biological phenomena, such as scaling of single cell behavior rules into reliable organ-level anatomies. 
The latter is a kind of anatomical homeostasis, achieved by feedback loops that must recognize deviations from a correct target morphology and progressively reduce anatomical error.\n\n\nIn this work, we *train adversaries* whose goal is to reprogram CA into doing something other than what they were trained to do. In order to understand what kinds of lower-level signals alter system-level behavior of our CA, it is important to understand how these CA are constructed and where local versus global information resides.\n\n\nThe system-level behavior of Neural CA is affected by:\n\n\n* **Individual cell states.** States store information which is used for both diversification among cell behaviours and for communication with neighbouring cells.\n* **The model parameters.** These describe the input/output behavior of a cell and are shared by every cell of the same family. The model parameters can be seen as *the way the system works*.\n* **The perceptive field.** This is how cells perceive their environment. In Neural CA, we always restrict the perceptive field to be the eight nearest neighbors and the cell itself. The way cells are perceived by each other is different between the Growing CA and MNIST CA. The Growing CA perceptive field is a set of weights fixed both during training and inference, while the MNIST CA perceptive field is learned as part of the model parameters.\n\n\n\nPerturbing any of these components will result in system-level behavioural changes.\n\n\nWe will explore two kinds of adversarial attacks: 1) injecting a few adversarial cells into an existing grid running a pretrained model; and 2) perturbing the global state of all cells on a grid.\n\n\nFor the first type of adversarial attacks we train a new CA model that, when placed in an environment running one of the original models described in the previous articles, is able to hijack the behavior of the collective mix of adversarial and non-adversarial CA. This is an example of injecting CA with differing *model parameters* into the system. In biology, numerous forms of hijacking are known, including viruses that take over genetic and biochemical information flow , bacteria that take over physiological control mechanisms and even regenerative morphology of whole bodies , and fungi and toxoplasma that modulate host behavior . Especially fascinating are the many cases of non-cell-autonomous signaling developmental biology and cancer, showing that some cell behaviors can significantly alter host properties both locally and at long range. For example, bioelectrically-abnormal cells can trigger metastatic conversion in an otherwise normal body (with no genetic defects) , while management of bioelectrical state in one area of the body can suppress tumorigenesis on the other side of the organism . Similarly, amputation damage in one leg initiates changes to ionic properties of cells in the contralateral leg , while the size of the developing brain is in part dictated by the activity of ventral gut cells . All of these phenomena underlie the importance of understanding how cell groups make collective decisions, and how those tissue-level decisions can be subverted by the activity of a small number of cells. 
It is essential to develop quantitative models of such dynamics, in order to drive meaningful progress in regenerative medicine that controls system-level outcomes top-down, where cell- or molecular-level micromanagement is infeasible .\n\n\nThe second type of adversarial attacks interact with previously trained growing CA models by *perturbing the states within cells*. We apply a global state perturbation to all living cells. This can be seen as inhibiting or enhancing combinations of state values, in turn hijacking proper communications among cells and within the cell’s own states. Models like this represent not only ways of thinking about adversarial relationships in nature (such as parasitism and evolutionary arms races of genetic and physiological mechanisms), but also a roadmap for the development of regenerative medicine strategies. Next-generation biomedicine will need computational tools for inferring minimal, least-effort interventions that can be applied to biological systems to predictively change their large-scale anatomical and behavioral properties.\n\n\nAdversarial MNIST CA [Try in a Notebook](https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/adversarial_reprogramming_ca/adversarial_mnist_ca.ipynb)\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\nRecall how the Self-classifying MNIST digits task consisted of placing CA cells on a plane forming the shape of an MNIST digit. The cells then had to communicate among themselves in order to come to a complete consensus as to which digit they formed.\n\n\n\n![](images/local_global_figure.svg)\nDiagram showing the local vs. global information available in the cell collective. \n (a) Local information neighbourhood - each cell can only observe itself and its neighbors’ states, or the absence of neighbours. \n (b) Globally, the cell collective aggregates information from all parts of itself. \n (c) It is able to distinguish certain shapes that compose a specific digit (3 in the example). \n\nBelow we show examples of classifications made by the model trained in Self-classifying MNIST Digits.\n\n\n\n\n\n\nYour browser does not support the video tag.\n \n\n\nThe original model behavior on unseen data. Classification mistakes have a red background.\n\nIn this experiment, **the goal is to create adversarial CA that can hijack the cell collective’s classification consensus to always classify an eight**. We use the CA model from and freeze its parameters. We then train a new CA whose model architecture is identical to the frozen model but is randomly initialized. The training regime also closely approximates that of self-classifying MNIST digits CA. There are three important differences:\n\n\n* Regardless of what the actual digit is, we consider *the correct classification to always be an eight*.\n* For each batch and each pixel, the CA is randomly chosen to be either the pretrained model or the new adversarial one. The adversarial CA is used 10% of the time, and the pre-trained, frozen, model the rest of the time.\n* Only the adversarial CA parameters are trained, the parameters of the pretrained model are kept frozen.\n\n\nThe adversarial attack as defined here only modifies a small percentage of the overall system, but the goal is to propagate signals that affect all the living cells. 
Therefore, these adversaries have to somehow learn to communicate deceiving information that causes wrong classifications in their neighbours and further cascades in the propagation of deceiving information by ‘unaware’ cells. The unaware cells’ parameters cannot be changed so the only means of attack by the adversaries is to cause a change in the cells’ states. Cells’ states are responsible for communication and diversification.\n\n\nThe task is remarkably simple to optimize, reaching convergence in as little as 2000 training steps (as opposed to the two orders of magnitude more steps needed to construct the original MNIST CA). By visualising what happens when we remove the adversaries, we observe that the adversaries must be constantly communicating with their non-adversarial neighbours to keep them convinced of the malicious classification. While some digits don’t recover after the removal of adversaries, most of them self-correct to the right classification. Below we show examples where we introduce the adversaries at 200 steps and remove them after a further 200 steps.\n\n\n\n\n\n\nYour browser does not support the video tag.\n \n\n\nWe introduce the adversaries (red pixels) after 200 steps and remove them after 200 more steps. Most digits recover, but not all. We highlight mistakes in classification with a red background.\n\nWhile we trained the adversaries with a 10-to-90% split of adversarial vs. non-adversarial cells, we observe that often significantly fewer adversaries are needed to succeed in the deception. Below we evaluate the experiment with just one percent of cells being adversaries.\n\n\n\n\n\n\nYour browser does not support the video tag.\n \n\n\nAdversaries constituting up 1% of the cell collective (red pixels). We highlight mistakes in classification with a red background.\n\nWe created a demo playground where the reader can draw digits and place adversaries with surgical precision. We encourage the reader to play with the demo to get a sense of how easily non-adversarial cells are swayed towards the wrong classification.\n\n\n\nAdversarial Injections for Growing CA [Try in a Notebook](https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/adversarial_reprogramming_ca/adversarial_growing_ca.ipynb#scrollTo=ByHbsY0EuyqB)\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\nThe natural follow up question is whether these adversarial attacks work on Growing CA, too. The Growing CA goal is to be able to grow a complex image from a single cell, and having its result be persistent over time and robust to perturbations. In this article, we focus on the lizard pattern model from Growing CA.\n\n\n\n\n\n\nYour browser does not support the video tag.\n \n\n\nThe target CA to hijack.\n\nThe goal is to have some adversarial cells change the global configuration of all the cells. We choose two new targets we would like the adversarial cells to try and morph the lizard into: a tailless lizard and a red lizard.\n\n\n\n![](images/lizard_new_targets_exp2.png)\nThe desired mutations we want to apply.\n\nThese targets have different properties: \n\n\n* **Red lizard:** converting a lizard from green to red would show a global change in the behaviour of the cell collective. This behavior is not present in the dynamics observed by the original model. 
The adversaries are thus tasked with fooling other cells into doing things they have never done before (create the lizard shape as before, but now colored in red).\n* **Tailless lizard:** having a severed tail is a more localized change that only requires some cells to be fooled into behaving in the wrong way: the cells at the base of the tail need to be convinced they constitute the edge or silhouette of the lizard, instead of proceeding to grow a tail as before.\n\n\nJust like in the previous experiment, our adversaries can only indirectly affect the states of the original cells.\n\n\nWe first train adversaries for the tailless target with a 10% chance for any given cell to be an adversary. We prohibit cells to be adversaries if they are outside the target pattern; i.e. the tail contains no adversaries.\n\n\n\n\n\n\nYour browser does not support the video tag.\n \n\n\n10% of the cells are adversarial.\n\nThe video above shows six different instances of the same model with differing stochastic placement of the adversaries. The results vary considerably: sometimes the adversaries succeed in removing the tail, sometimes the tail is only shrunk but not completely removed, and other times the pattern becomes unstable. Training these adversaries required many more gradient steps to achieve convergence, and the pattern converged to is qualitatively worse than what was achieved for the adversarial MNIST CA experiment.\n\n\nThe red lizard pattern fares even worse. Using only 10% adversarial cells results in a complete failure: the original cells are unaffected by the adversaries. Some readers may wonder whether the original pretrained CA has the requisite skill, or ‘subroutine’ of producing a red output at all, since there are no red regions in the original target, and may suspect this was an impossible task to begin with. Therefore, we increased the proportion of adversarial cells until we managed to find a successful adversarial CA, if any were possible.\n\n\n\n\n\n\nYour browser does not support the video tag.\n \n\n\nAdversaries are 60% of the cells. At step 500, we stop the image and show only cells that are from the original model.\n\nIn the video above we can see how, at least in the first stages of morphogenesis, 60% of adversaries are capable of coloring the lizard red. Take particular notice of the “step 500” The still-image of the video is on step 500, and the video stops for a bit more than a second on step 500., where we hide the adversarial cells and show only the original cells. There, we see how a handful of original cells are colored in red. This is proof that the adversaries successfully managed to steer neighboring cells to color themselves red, where needed.\n\n\nHowever, the model is very unstable when iterated for periods of time longer than seen during training. Moreover, the learned adversarial attack is dependent on a majority of cells being adversaries. For instance, when using fewer adversaries on the order of 20-30%, the configuration is unstable.\n\n\nIn comparison to the results of the previous experiment, the Growing CA model shows a greater resistance to adversarial perturbation than those of the MNIST CA. A notable difference between the two models is that the MNIST CA cells have to always be ready and able to change an opinion (a classification) based on information propagated through several neighbors. 
This is a necessary requirement for that model because at any time the underlying digit may change, but most of the cells would not observe any change in their neighbors’ placements. For instance, imagine the case of a one turning into a seven where the lower stroke of each overlap perfectly. From the point of view of the cells in the lower stroke of the digit, there is no change, yet the digit formed is now a seven. We therefore hypothesise MNIST CA are more reliant and ‘trusting’ of continuous long-distance communication than Growing CA, where cells never have to reconfigure themselves to generate something different to before.\n\n\nWe suspect that more general-purpose Growing CA that have learned a variety of target patterns during training are more likely to be susceptible to adversarial attacks.\n\n\nPerturbing the states of Growing CA [Try in a Notebook](https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/adversarial_reprogramming_ca/adversarial_growing_ca.ipynb#scrollTo=JaITnQv0k1iY)\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\nWe observed that it is hard to fool Growing CA into changing their morphology by placing adversarial cells inside the cell collective. These adversaries had to devise complex local behaviors that would cause the non-adversarial cells nearby, and ultimately globally throughout the image, to change their overall morphology.\n\n\nIn this section, we explore an alternative approach: perturbing the global state of all cells without changing the model parameters of any cell.\n\n\nAs before, we base our experiments on the Growing CA model trained to produce a lizard. Every cell of a Growing CA has an internal state vector with 16 elements. Some of them are phenotypical elements (the RGBA states) and the remaining 12 serve arbitrary purposes, used for storing and communicating information. We can perturb the states of these cells to hijack the overall system in certain ways (the discovery of such perturbation strategies is a key goal of biomedicine and synthetic morphology). There are a variety of ways we can perform state perturbations. We will focus on *global state perturbations*, defined as perturbations that are applied on every living cell at every time step (analogous to “systemic” biomedical interventions, that are given to the whole organism (e.g., a chemical taken internally), as opposed to highly localized delivery systems). The new goal is to discover a certain type of global state perturbation that results in a stable new pattern.\n\n\n\n![](images/figure_2.svg)\nDiagram showing some possible stages for perturbing a lizard pattern. (a) We start from a seed that grows into a lizard (b) Fully converged lizard. (c) We apply a global state perturbation at every step. As a result, the lizard loses its tail. (d) We stop perturbing the state. 
We observe the lizard immediately grows back its tail.\n\nWe show 6 target patterns: the tailless and red lizard from the previous experiment, plus a blue lizard and lizards with various severed limbs and severed head.\n\n\n\n![](images/mutations_mosaic.jpeg)\nMosaic of the desired mutations we want to apply.\n\nWe decided to experiment with a simple type of global state perturbation: applying a symmetric $16 \\times 16$ matrix multiplication $A$ to every living cell at every step. (In practice, we also clip the state of cells such that they are bounded in $[-3, +3]$. This is a minor detail and it helps stabilise the model.) To give insight on why we chose this: an even simpler “state addition” mutation (a mutation consisting only of the addition of a vector to every state) would be insufficient because the values of the states of our models are unbounded, and often we would want to suppress something by setting it to zero. The latter is generally impossible with constant state additions, as a constant addition or subtraction of a value would generally lead to infinity, except for some fortunate cases where the natural residual updates of the cells would cancel out with the constant addition at precisely state value zero. However, matrix multiplications have the possibility of amplifying/suppressing combinations of elements in the states: multiplying a state value repeatedly by a constant less than one can easily suppress it to zero. We constrain the matrix to be symmetric for reasons that will become clear in the following section.\n\n\nWe initialize $A$ with the identity matrix $I$ and train $A$ just as we would train the original Growing CA, albeit with the following differences (a minimal sketch of this perturbed update step is shown below):\n\n\n* We perform a global state perturbation as described above, using $A$, at every step.\n* The underlying CA parameters are frozen and we only train $A$.\n* We consider the set of initial image configurations to be both the seed state and the state with a fully grown lizard (as opposed to the Growing CA article, where initial configurations consisted of the seed state only).\n\n\n\n\n\n\nYour browser does not support the video tag.\n \n\n\nEffect of applying the trained perturbations.\n\nThe video above shows the model successfully discovering global state perturbations able to change a target pattern to a desired variation. We show what happens when we stop perturbing the states (an out-of-training situation) from step 500 through step 1000, then reapply the mutation. This demonstrates the ability of our perturbations to achieve the desired result both when starting from a seed, and when starting from a fully grown pattern. Furthermore, it demonstrates that the original CA easily recover from these state perturbations once the perturbation goes away. This last result is perhaps not surprising given how robust growing CA models are in general.\n\n\nNot all perturbations are equally effective. In particular, the headless perturbation is the least successful as it results in a loss of other details across the whole lizard pattern such as the white coloring on its back. We hypothesize that the best perturbation our training regime managed to find, due to the simplicity of the perturbation, was suppressing a “structure” that contained both the morphology of the head and the white colouring. This may be related to the concept of differentiation and distinction of biological organs. 
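As referenced above, here is a minimal sketch of the perturbed update loop. The `ca_update` function stands in for the frozen, pretrained Growing CA rule, and the variable names, the alive-mask threshold, and the symmetric parameterization are our own assumptions rather than the article's actual implementation.

```python
import numpy as np

STATE_DIM = 16  # 4 visible (RGBA) channels plus 12 hidden channels per cell

# Trainable parameters of the perturbation: a symmetric 16x16 matrix,
# initialized at the identity. Writing A = I + P + P.T keeps it symmetric.
P = np.zeros((STATE_DIM, STATE_DIM))
A = np.eye(STATE_DIM) + P + P.T

def perturbed_step(state, ca_update, A):
    """One CA step followed by the global state perturbation.

    state: (height, width, STATE_DIM) grid of cell states.
    ca_update: frozen, pretrained Growing CA update rule (placeholder).
    """
    state = ca_update(state)                   # ordinary CA update, parameters frozen
    alive = state[..., 3:4] > 0.1              # alive mask from the alpha channel (assumed)
    perturbed = state @ A                      # multiply every cell state by A
    state = np.where(alive, perturbed, state)  # only living cells are perturbed
    return np.clip(state, -3.0, 3.0)           # clip states to [-3, +3]

# In the experiments described above, only A (here, P) is trained -- by
# backpropagating a loss against the mutated target through many perturbed
# steps -- while the CA parameters inside ca_update stay frozen.
```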
Predicting what kinds of perturbations would be harder or impossible to achieve, before trying them out empirically, is still an open research question in biology. On the other hand, a variant of this kind of synthetic analysis might help with defining higher order structures within biological and synthetic systems.\n\n\n### Directions and compositionality of perturbations\n\n\nOur choice of using a symmetric matrix for representing global state perturbations is justified by a desire to have compositionality. Every complex symmetric matrix $A$ can be diagonalized as follows: \n\n\n$$A = Q \\Lambda Q^\\intercal$$\n\n\nwhere $\\Lambda$ is the diagonal eigenvalues matrix and $Q$ is the unitary matrix of its eigenvectors. Another way of seeing this is applying a change of basis transformation, scaling each component proportional to the eigenvalues, and then changing back to the original basis. This should also give a clearer intuition on the ease of suppressing or amplifying combinations of states. Moreover, we can now infer what would happen if all the eigenvalues were to be one. In that case, we would naturally have $Q I Q^\\intercal = I$, resulting in a no-op (no change): the lizard would grow as if no perturbation was performed. We can now decompose $Q \\Lambda Q^\\intercal = Q (D + I) Q^\\intercal$ where $D$ is the *perturbation direction* ($\\Lambda - I$) in the “eigenvalue space”. Suppose we use a coefficient $k$ to scale $D$: $A\\_k = Q (kD + I) Q^\\intercal$. If $k=1$, we are left with the original perturbation $A$, and when $k=0$, we have the no-op $I$. Naturally, one question would be whether we can explore other values for $k$ and discover meaningful perturbations. Since \n\n\n$$A\\_k = Q (kD + I) Q^\\intercal = k A + (1-k) I$$ \n\n\nwe do not even have to compute eigenvalues and eigenvectors and we can simply scale $A$ and $I$ accordingly. A small sketch of this interpolation is shown below.\n\n\nLet us then take the tailless perturbation and see what happens as we vary $k$:\n\n\n\n\n\n\nYour browser does not support the video tag.\n \n\n\nEffect of the interpolation between an identity matrix and the perturbation direction of the tail perturbation.\n\nAs we change $k=1$ to $k=0$ we can observe the tail becoming more complete. Surprisingly, if we make $k$ negative, the lizard grows a longer tail. Unfortunately, the further away we go, the more unstable the system becomes and eventually the lizard pattern grows in an unbounded fashion. This behaviour likely stems from the fact that perturbations applied on the states also affect the homeostatic regulation of the system, making some cells die out or grow in different ways than before, resulting in a behavior akin to “cancer” in biological systems.
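Because the interpolation above reduces to a single linear combination, it is only a line of code; a small sketch (the names are ours):

```python
import numpy as np

def interpolate_perturbation(A, k):
    """Scale a trained global state perturbation A by a coefficient k.

    k = 1 recovers the trained perturbation, k = 0 is the identity (no-op);
    other values move along the same perturbation direction.
    """
    return k * A + (1.0 - k) * np.eye(A.shape[0])

# Example: apply "half" of the tailless perturbation, assuming A_tailless is a
# trained symmetric 16x16 matrix from the experiment above.
# A_half = interpolate_perturbation(A_tailless, 0.5)
```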
**Can we perform multiple, individually trained, perturbations at the same time?** \n\n\nSuppose we have two perturbations $A$ and $B$ and their eigenvectors are the same (or, more realistically, sufficiently similar). Then, $A\\_k = Q (k\\_A D\\_A + I) Q^\\intercal$ and $B\\_k = Q (k\\_B D\\_B + I) Q^\\intercal$. \n\n\nIn that case, \n\n\n$$comb(A\\_k, B\\_k) = Q(k\\_A D\\_A + k\\_B D\\_B + I)Q^\\intercal = k\\_A A + k\\_B B + (1 - k\\_A - k\\_B)I$$ \n\n\nwould result in something meaningful. At the very least, if $A = B$, setting $k\\_A = k\\_B = 0.5$ would result in exactly the same perturbation.\n\n\nWe note that $D\\_A$ and $D\\_B$ are effectively a displacement from the identity $I$ and we have empirically observed how given any trained displacement $D\\_A$, for $0 \\leq k\\_A \\leq 1$ adding $k\\_A D\\_A$ results in a stable perturbation. We then hypothesize that as long as we have two perturbations whose positive directions $k$ are $k\\_A + k\\_B \\leq 1$, this could result in a stable perturbation. An intuitive understanding of this is interpolating stable perturbations using the direction coefficients.\n\n\nIn practice, however, the eigenvectors are also different, so the results of the combination will likely be worse the more different the respective eigenvector bases are.\n\n\nBelow, we interpolate the direction coefficients, while keeping their sum to be one, of two types of perturbations: tailless and no-leg lizards.\n\n\n\n\n\n\nYour browser does not support the video tag.\n \n\n\nEffect of composing two trained perturbations while keeping the sum of $k$s as 1.\n\nWhile it largely achieves what we expect, we observe some unintended effects such as the whole pattern starting to traverse vertically in the grid. Similar results happen with other combinations of perturbations. What happens if we remove the restriction of the sum of $k$s being equal to one, and instead add both perturbations in their entirety? We know that if the two perturbations were the same, we would end twice as far away from the identity perturbation, and in general we expect the variance of these perturbations to increase. Effectively, this means going further and further away from the stable perturbations discovered during training. We would expect more unintended effects that may disrupt the CA as the sum of $k$s increases.\n\n\nBelow, we demonstrate what happens when we combine the tailless and the no-leg lizard perturbations at their fullest. Note that when we set both $k$s to one, the resulting perturbation is equal to the sum of the two perturbations minus an identity matrix.\n\n\n\n\n\n\nYour browser does not support the video tag.\n \n\n\nEffect of composing two perturbations.\n\nSurprisingly, the resulting pattern is almost as desired. However, it also suffers from the vertical movement of the pattern observed while interpolating $k$s.\n\n\n \n\n\nThis framework can be generalized to any arbitrary number of perturbations. Below, we have created a small playground that allows the reader to input their desired combinations. Empirically, we were surprised by how many of these combinations result in the intended perturbations and qualitatively it appears that bounding $k$ to one results in generally more stable patterns. We also observed how exploring negative $k$ values is usually more unstable.\n\n\n\nRelated work\n------------\n\n\nThis work is inspired by Generative Adversarial Networks (GANs) . While with GANs it is typical to cotrain pairs of models, in this work we froze the original CA and trained the adversaries only. This setup is to the greatest degree inspired by the seminal work *Adversarial Reprogramming of Neural Networks* .\n\n\nThe kinds of state perturbations performed in this article can be seen as targeted latent state manipulations. Word2vec shows how latent vector representations can have compositional properties and Fader Networks show similar behaviors for image processing. 
Both of these works and their related work were of inspiration to us.\n\n\n### Influence maximization\n\n\nAdversarial cellular automata have parallels to the field of influence maximization. Influence maximization involves determining the optimal nodes to influence in order to maximize influence over an entire graph, commonly a social graph, with the property that nodes can in turn influence their neighbours. Such models are used to model a wide variety of real-world applications involving information spread in a graph. A common setting is that each vertex in a graph has a binary state, which will change if and only if a sufficient fraction of its neighbours’ states switch. Examples of such models are social influence maximization (maximally spreading an idea in a network of people), contagion outbreak modelling (usually to minimize the spread of a disease in a network of people) and cascade modeling (when small perturbations to a system bring about a larger ‘phase change’). At the time of writing this article, for instance, contagion minimization is a model of particular interest. NCA are a graph - each cell is a vertex and has edges to its eight neighbours, through which it can pass information. This graph and message structure is significantly more complex than the typical graph underlying much of the research in influence maximization, because NCA cells pass vector-valued messages and have a complex update rules for their internal states, whereas graphs in influence maximization research typically consist of more simple binary cells states and threshold functions on edges determining whether a node has switched states. Many concepts from the field could be applied and are of interest, however.\n\n\nFor example, in this work, we have made an assumption that our adversaries can be positioned anywhere in a structure to achieve a desired behaviour. A common focus of investigation in influence maximization problems is deciding which nodes in a graph will result in maximal influence on the graph, referred to as target set selection . This problem isn’t always tractable, often NP-hard, and solutions frequently involve simulations. Future work on adversarial NCA may involve applying techniques from influence maximization in order to find the optimal placement of adversarial cells.\n\n\nDiscussion\n----------\n\n\nThis article showed two different kinds of adversarial attacks on Neural CA.\n\n\nInjections of adversarial CA in a pretrained Self-classifying MNIST CA showed how an existing system of cells that are heavily reliant on the passing of information among each other is easily swayed by deceitful signaling. This problem is routinely faced by biological systems, which face hijacking of behavioral, physiological, and morphological regulatory mechanisms by parasites and other agents in the biosphere with which they compete. Future work in this field of computer technology can benefit from research on biological communication mechanisms to understand how cells maximize reliability and fidelity of inter- and intra-cellular messages required to implement adaptive outcomes. \n\n\nThe adversarial injection attack was much less effective against Growing CA and resulted in overall unstable CA. 
This dynamic is also of importance to the scaling of control mechanisms (swarm robotics and nested architectures): a key step in “multicellularity” (joining together to form larger systems from sub-agents ) is informational fusion, which makes it difficult to identify the source of signals and memory engrams. An optimal architecture would need to balance the need for validating control messages with a possibility of flexible merging of subunits, which wipes out metadata about the specific source of informational signals. Likewise, the ability to respond successfully to novel environmental challenges is an important goal for autonomous artificial systems, which may import from biology strategies that optimize tradeoff between maintaining a specific set of signals and being flexible enough to establish novel signaling regimes when needed.\n\n\nThe global state perturbation experiment on Growing CA shows how it is still possible to hijack these CA towards stable out-of-training configurations and how these kinds of attacks are somewhat composable in a similar way to how embedding spaces are manipulable in the natural language processing and computer vision fields . However, this experiment failed to discover stable out-of-training configurations that persist *after the perturbation was lifted*. We hypothesize that this is partially due to the regenerative capabilities of the pretrained CA, and that other models may be less capable of recovery from arbitrary perturbations.", "date_published": "2021-05-06T20:00:00Z", "authors": ["Ettore Randazzo", "Alexander Mordvintsev", "Eyvind Niklasson", "Michael Levin"], "summaries": ["Reprogramming Neural CA to exhibit novel behaviour, using adversarial attacks."], "doi": "10.23915/distill.00027.004", "journal_ref": "distill-pub", "bibliography": [{"link": "https://doi.org/10.23915/distill.00023", "title": "Growing Neural Cellular Automata"}, {"link": "https://doi.org/10.23915/distill.00027.002", "title": "Self-classifying MNIST Digits"}, {"link": "https://doi.org/10.3389/fmicb.2020.00733", "title": "Herpes Simplex Virus: The Hostile Guest That Takes Over Your Home"}, {"link": "https://doi.org/10.1038/cmi.2010.67", "title": "The role of gut microbiota (commensal bacteria) and the mucosal barrier in the pathogenesis of inflammatory and autoimmune diseases and cancer: contribution of germ-free and gnotobiotic animal models of human diseases"}, {"link": "https://doi.org/10.1016/j.mod.2020.103614", "title": "Regulation of axial and head patterning during planarian regeneration by a commensal bacterium"}, {"link": "https://doi.org/10.1007/s00436-018-6040-2", "title": "Toxoplasma gondii infection and behavioral outcomes in humans: a systematic review"}, {"link": "https://doi.org/10.1088/1478-3975/9/6/065002", "title": "Resting potential, oncogene-induced tumorigenesis, and metastasis: the bioelectric basis of cancer in vivo"}, {"link": "https://doi.org/10.18632/oncotarget.1935", "title": "Transmembrane voltage potential of somatic cells controls oncogene-mediated tumorigenesis at long-range"}, {"link": "https://doi.org/10.1242/dev.164210", "title": "Cross-limb communication during Xenopus hindlimb regenerative response: non-local bioelectric injury signals"}, {"link": "https://doi.org/10.1387/ijdb.150197ml", "title": "Local and long-range endogenous resting potential gradients antagonistically regulate apoptosis and proliferation in the embryonic CNS"}, {"link": "https://doi.org/10.1098/rsif.2016.0555", "title": "Top-down models in biology: explanation and 
control of complex living systems above the molecular level"}, {"link": "https://www.cs.cornell.edu/home/kleinber/kdd03-inf.pdf", "title": "Maximizing the spread of influence through a social network"}, {"link": "https://doi.org/10.1007/978-3-319-23105-1_4", "title": "The Independent Cascade and Linear Threshold Models"}, {"link": "http://arxiv.org/pdf/1808.05502.pdf", "title": "A Survey on Influence Maximization in a Social Network"}, {"link": "http://dx.doi.org/10.1038/s41467-019-10431-6", "title": "Simplicial models of social contagion"}, {"link": "https://www.cambridge.org/core/books/algorithmic-game-theory/cascading-behavior-in-networks-algorithmic-and-economic-issues/753EA45A6662E01BC8F9444B0AC80238", "title": "Cascading Behavior in Networks: Algorithmic and Economic Issues"}, {"link": "https://doi.org/10.1137/08073617X", "title": "On the Approximability of Influence in Social Networks"}, {"link": "https://www.frontiersin.org/article/10.3389/fpsyg.2019.02688", "title": "The Computational Boundary of a “Self”: Developmental Bioelectricity Drives Multicellularity and Scale-Free Cognition"}]} {"id": "46959ccf83a5e89dbf4fc4aee11b4af9", "title": "Weight Banding", "url": "https://distill.pub/2020/circuits/weight-banding", "source": "distill", "source_type": "blog", "text": "![](images/multiple-pages.svg)\n\n This article is part of the [Circuits thread](/2020/circuits/), an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks.\n \n\n\n[Branch Specialization](/2020/circuits/branch-specialization/)\n\nIntroduction\n------------\n\n\n\n Open up any ImageNet conv net and look at the weights in the last layer. You’ll find a uniform spatial pattern to them, dramatically unlike anything we see elsewhere in the network. No individual weight is unusual, but the uniformity is so striking that when we first discovered it we thought it must be a bug. Just as different biological tissue types jump out as distinct under a microscope, the weights in this final layer jump out as distinct when visualized with NMF. We call this phenomenon *weight banding*.\n \n\n\n\n\n\nMicroscope slides of different tissues\nMuscle tissue\n\nEpithelial tissue\n\n\nTypical layer\nLayer with weight banding\nNMF of weights at different layers\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n[1](#figure-1). When [visualized with NMF](https://drafts.distill.pub/distillpub/post--circuits-visualizing-weights#one-simple-trick), the weight banding in layer `mixed_5b` is as visually striking compared to any other layer in InceptionV1 (here shown: `mixed_3a`) as the smooth, regular striation of muscle tissue is when compared to any other tissue (here shown: cardiac muscle tissue and epithelial tissue).\n \n\n\n\n\n\n So far, the [Circuits thread](https://distill.pub/2020/circuits/) has mostly focused on studying very small pieces of neural network – [individual neurons](https://distill.pub/2020/circuits/early-vision/) and small circuits. In contrast, weight banding is an example of what we call a “structural phenomenon,” a larger-scale pattern in the circuits and features of a neural network. 
Other examples of structural phenomena are the recurring symmetries we see in [equivariance](https://distill.pub/2020/circuits/equivariance/) motifs and the specialized slices of neural networks we see in [branch specialization](https://distill.pub/2020/circuits/branch-specialization/).\n\n In the case of weight banding, we think of it as a structural phenomenon because the pattern appears at the scale of an entire layer.\n\n \n\n\n\n\n Weight banding also seems similar in flavor to the [checkerboard artifacts](https://distill.pub/2016/deconv-checkerboard/) that form during deconvolution.\n \n\n\n\n\n In addition to describing weight banding, we’ll explore when and why it occurs. We find that there appears to be a causal link between whether a model uses global average pooling or fully connected layers at the end, suggesting that weight banding is part of an algorithm for preserving information about larger scale structure in images. Establishing causal links like this is a step towards closing the loop between practical decisions in training neural networks and the phenomena we observe inside them.\n \n\n\nWhere weight banding occurs\n---------------------------\n\n\n\n Weight banding consistently forms in the final convolutional layer of vision models with global average pooling.\n \n\n\n\n In order to see the bands, we need to visualize the spatial structure of the weights, as shown below. We typically do this using NMF, [as described in](https://drafts.distill.pub/distillpub/post--circuits-visualizing-weights/#one-simple-trick) Visualizing Weights. For each neuron, we take the weights connecting it to the previous layer. We then use NMF to reduce the number of dimensions corresponding to channels in the previous layer down to 3 factors, which we can map to RGB channels. Since which factor is which is arbitrary, we use a heuristic to make the mapping consistent across neurons. This reveals a very prominent pattern of horizontalThe stripes aren’t always perfectly horizontal - sometimes they exhibit a slight preference for extra weight in the center of the central band, as seen in some examples below. stripes.\n \n\n\n\n\n[2](#figure-2).\n These common networks have pooling operations before their fully\n connected layers and consistently show banding at their last\n convolutional layers.\n\n\n\n\n\n\nInceptionV1 \nmixed 5b\n\n\n\nResNet50 \nblock 4 unit 3\n\n\n\nVGG19 \nconv5\n\n\n\n\n Interestingly, AlexNet does not exhibit this phenomenon.\n \n\n\n\n\n\n[3](#figure-3).\n AlexNet does not have a pooling operation before its fully connected\n layers and does not show banding at its last convolutional\n layer.\n\n\n \n\n\n To make it easier to look for groups of similar weights, we\n sorted the neurons at each layer by similarity of their reduced\n forms.\n \n\n\n\n\n\n\nAlexNet \nconv5\n\n\n\n\n Unlike most modern vision models, AlexNet does not use global average pooling. Instead, it has a fully connected layer directly connected to its final convolutional layer, allowing it to treat different positions differently. If one looks at the weights of this fully connected layer, the weights strongly vary as a function of the global y position.\n \n\n\n\n The horizontal stripes in weight banding mean that the filters don’t care about horizontal position, but are strongly encoding relative vertical position. 
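For reference, the per-neuron reduction described above can be sketched in a few lines. This is only an illustrative sketch, assuming a weight tensor of shape `(kh, kw, c_in, c_out)` and NMF on absolute values; the Visualizing Weights article linked above describes the actual procedure (including the heuristic that keeps the factor-to-RGB mapping consistent across neurons), which may differ in details:

```python
import numpy as np
from sklearn.decomposition import NMF

def neuron_weight_rgb(weights, neuron, n_factors=3, seed=0):
    """Reduce one neuron's incoming weights (kh, kw, c_in) to a small RGB image.

    NMF requires non-negative input, so we factor the absolute values of the
    weights; each of the 3 factors is then displayed as one color channel.
    """
    w = np.abs(weights[:, :, :, neuron])              # (kh, kw, c_in)
    kh, kw, c_in = w.shape
    flat = w.reshape(kh * kw, c_in)                   # spatial positions x input channels
    factors = NMF(n_components=n_factors, init="random",
                  random_state=seed, max_iter=500).fit_transform(flat)
    rgb = factors.reshape(kh, kw, n_factors)
    return rgb / (rgb.max() + 1e-8)                   # normalize for display

# Weight banding shows up as rows of `rgb` that are roughly constant along
# the horizontal axis, for most neurons in the layer.
```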
Our hypothesis is that weight banding is a learned way to preserve spatial information as it gets lost through various pooling operations.\n \n\n\n\n In the next section, we will construct our own simplified vision network and investigate variations on its architecture in order to understand exactly which conditions are necessary to produce weight banding.\n \n\n\nWhat affects banding\n--------------------\n\n\n\n We’d like to understand which architectural decisions affect weight banding. This will involve trying out different architectures and seeing whether weight banding persists.\n\n Since we will only want to change a single architectural parameter at a time, we will need a consistent baseline to apply our modifications to. Ideally, this baseline would be as simple as possible.\n \n\n\n\n We created a simplified network architecture with 6 groups of convolutions, separated by L2 pooling layers. At the end, it has a global average pooling operation that reduces the input to 512 values that are then fed to a fully connected layer with 1001 outputs.\n \n\n\n\n\n\n\n[4](#figure-4). Our simplified vision network architecture.\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n This simplified network reliably produces weight banding in its last layer\n (and usually in the two preceding layers as well).\n \n\n\n\n\n\n[5](#figure-5). NMF of the weights in the last layer of the simplified model shows clear weight banding.\n \n\n\n\n\n\n\nsimplified model (`5b`), baseline\n\n\n\n\n In the rest of this section, we’ll experiment with modifying this architecture and its training settings and seeing if weight banding is preserved.\n\n \n\n\n\n### Rotating images 90 degrees\n\n\n\n To rule out bugs in training or some strange numerical problem, we decided\n to do a training run with the input rotated by 90 degrees. This sanity check\n yielded a very clear result showing *vertical* banding in the resulting\n weights, instead of horizontal banding. This is a clear indication that banding is a result of properties\n within the ImageNet dataset which make spatial vertical position(or, in the case of the rotated dataset, spatial horizontal position) relevant.\n \n\n\n\n\n\n\n[6](#figure-6). simplified model (`5b`), 90º rotation\n\n\n\n### Fully connected layer without global average pooling\n\n\n\n We remove the global average pooling step in our simplified model, allowing the fully connected layer to see all spatial positions at once. This model did **not** exhibit weight banding, but used 49x more parameters in the fully connected layer and overfit to the training set. This is pretty strong evidence that the use of aggressive pooling after the last convolutions in common models causes weight banding. This result is also consistent with AlexNet not showing this banding phenomenon (since it also does not have global average pooling).\n \n\n\n\n\n\n\n[7](#figure-7). simplified model (`5b`), no pooling before fully connected layer \n\n\n\n### Average pooling along x-axis only\n\n\n\n We average out each row of the final convolutional layer, so that vertical absolute position is preserved but horizontal absolute position is not.Since this model has 7x7 spatial positions in the final convolutional layer, this modification increases the number of parameters in the fully connected layer by 7x, but not the 49x of a complete fully connected layer with no pooling at all. 
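To make the parameter comparison concrete, here is the arithmetic behind those multipliers, using the shapes quoted above (7x7 spatial positions, 512 channels, 1001 output classes; biases ignored):

```python
# Size of the final fully connected layer under the three pooling choices
# discussed above (weights only, biases ignored).
channels, height, width, classes = 512, 7, 7, 1001

gap_fc    = channels * classes                    # global average pooling -> FC
x_pool_fc = height * channels * classes           # pool along x only -> FC
full_fc   = height * width * channels * classes   # no pooling, flatten -> FC

print(gap_fc, x_pool_fc, full_fc)                 # 512512, 3587584, 25113088
print(x_pool_fc // gap_fc, full_fc // gap_fc)     # 7x and 49x more parameters
```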
The banding at the last layer seems to go away, but on closer investigation, clear banding is still visible in layer `5a`, similar to the baseline model’s `5b`. We found this result surprising.\n \n\n\n\n\n\n[8](#figure-8).\n NMF of weights in `5a` and `5b` in a version of the simplified model modified to have pooling only along the x-axis. Banding is gone from `5b` but reappears in `5a`!\n \n\n\n\n\n\n\n simplified model (`5a`), x-axis pooling\n\n\n\nsimplified model (`5b`), x-axis pooling\n\n\n\n### Approaches where weight banding persisted\n\n\n\n We tried each of the modifications below, and found that weight banding was still present in each of these variants.\n \n\n\n* Global average pooling with learned spatial masks. By applying several different spatial masks and global average pooling, we can allow the model to preserve some spatial information. Intuitively, each mask can select for a different subset of spatial positions.\n\n We tried experimental runs using each of 3, 5, or 16 different masks.\n\n The masks that were learned corresponded to large-scale global structure, but banding was still strongly present.\n* Using an attention layer instead of pooling/fully connected combination after layer\n `5b`.\n* Adding a 7x7x512 mask with learned weights after `5b`. The hope was that a\n mask would help each `5b` neuron focus on the right parts of the 7x7 image\n without a convolution.\n* Adding CoordConv channels to the inputs\n of `5a` and `5b`.\n* Splitting the output of `5b` into 16 7x7x32 channel groups and feeding\n each group its own fully connected layer. The output of the 16 fully connected layers is then\n concatenated into the input of the final 1001-class fully connected layer.\n* Using a global max pool, 4096-unit fully connected layer, then 1001-unit fully connected layer (inspired\n by VGG).\n\n\n\n An interactive diagram allowing you to explore the weights for these experiments and more can be found in the [appendix](#figure-12).\n \n\n\nConfirming banding interventions in common architectures\n--------------------------------------------------------\n\n\n\n In the previous section, we observed two interventions that clearly affected weight banding: rotating the dataset by 90º and removing the global average pooling before the fully connected layer.\n To confirm that these effects hold beyond our simplified model, we decided to make the same interventions to three\n common architectures (InceptionV1, ResNet50, VGG19) and train them from\n scratch.\n \n\n\nWith one exception, the effect holds in all three models.\n\n\n#### InceptionV1\n\n\n\n\n[9](#figure-9). Inception V1, layer `mixed_5c`, 5x5 convolution\n\n\n\n\nbaseline\n\n\n\n90º rotation\n\n\n\nglobal average pooling layer removed\n\n\n\n#### ResNet50\n\n\n\n\n\n[10](#figure-10). ResNet50, last 3x3 convolutional layer\n\n\n\n\nbaseline\n\n\n\n90º rotation\n\n\n\nglobal average pooling layer removed\n\n\n\n#### VGG19\n\n\n\n\n\n[11](#figure-11). VGG19, last 3x3 convolutional layer.\n\n\n\n\nbaseline\n\n\n\n90º rotation\n\n\n\nglobal average pooling layer removed\n\n\n\n\n The one exception is VGG19, where the removal of the pooling operation before its set of fully connected layers did not eliminate weight banding as expected; these weights look fairly similar to the baseline. 
However, it clearly responds to rotation.\n \n\n\n\nConclusion\n----------\n\n\n\n Once we really understand neural networks, one would expect us to be able to leverage that understanding to design more effective neural networks architectures. Early papers, like Zeiler et al, emphasized this quite strongly, but it’s unclear whether there have yet been any significant successes in doing this. This hints at significant limitations in our work. It may also be a missed opportunity: it seems likely that if interpretability was useful in advancing neural network capabilities, it would become more integrated into other research and get attention from a wider range of researchers.\n \n\n\n\n It’s unclear whether weight banding is “good” or “bad.”On one hand, the 90º rotation experiment shows that weight banding is a product of the dataset and is encoding useful information into the weights. However, if spatial information could flow through the network in a different, more efficient way, then perhaps the channels would be able to focus on encoding relationships between features without needing to track spatial positions. We don’t have any recommendation or action to take away from it. However, it is an example of a consistent link between architecture decisions and the resulting trained weights. It has the right sort of flavor for something that could inform architectural design, even if it isn’t particularly actionable itself.\n \n\n\n\n More generally, weight banding is an example of a large-scale structure. One of the major limitations of circuits has been how small-scale it is. We’re hopeful that larger scale structures like weight banding may help circuits form a higher-level story of neural networks.\n \n\n\n\n\n\n\n![](images/multiple-pages.svg)\n\n This article is part of the [Circuits thread](/2020/circuits/), an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks.\n \n\n\n[Branch Specialization](/2020/circuits/branch-specialization/)", "date_published": "2021-04-08T20:00:00Z", "authors": ["Michael Petrov", "Chelsea Voss", "Ludwig Schubert", "Nick Cammarata", "Gabriel Goh", "Chris Olah"], "summaries": ["Weights in the final layer of common visual models appear as horizontal bands. 
We investigate how and why."], "doi": "10.23915/distill.00024.009", "journal_ref": "distill-pub", "bibliography": [{"link": "https://commons.wikimedia.org/wiki/File:Muscle_Tissue_Cardiac_Muscle_(27187637567).jpg", "title": "Muscle Tissue: Cardiac Muscle"}, {"link": "https://commons.wikimedia.org/wiki/File:Epithelial_Tissues_Stratified_Squamous_Epithelium_(40230842160).jpg", "title": "Epithelial Tissues: Stratified Squamous Epithelium"}, {"link": "http://distill.pub/2016/deconv-checkerboard", "title": "Deconvolution and Checkerboard Artifacts"}, {"link": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf", "title": "ImageNet Classification with Deep Convolutional Neural Networks"}, {"link": "http://arxiv.org/pdf/1807.03247.pdf", "title": "An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution"}]} {"id": "d2af05099a4301be3320f05c220b8489", "title": "Branch Specialization", "url": "https://distill.pub/2020/circuits/branch-specialization", "source": "distill", "source_type": "blog", "text": "![](images/multiple-pages.svg)\n\n This article is part of the [Circuits thread](/2020/circuits/), an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks.\n \n\n\n[Visualizing Weights](/2020/circuits/visualizing-weights/)\n[Weight Banding](/2020/circuits/weight-banding/)\n\nIntroduction\n------------\n\n\n\n If we think of interpretability as a kind of “anatomy of neural networks,” most of the circuits thread has involved studying tiny little veins – looking at the small-scale, at individual neurons and how they connect. However, there are many natural questions that the small-scale approach doesn’t address.\n \n\n\n\n In contrast, the most prominent abstractions in biological anatomy involve larger-scale structures: individual organs like the heart, or entire organ systems like the respiratory system. And so we wonder: is there a “respiratory system” or “heart” or “brain region” of an artificial neural network? Do neural networks have any emergent structures that we could study that are larger-scale than circuits?\n \n\n\n\n This article describes *branch specialization*, one of three larger “structural phenomena” we’ve been able observe in neural networks. (The other two, [equivariance](https://distill.pub/2020/circuits/equivariance/) and [weight banding](https://distill.pub/2020/circuits/weight-banding/), have separate dedicated articles.) Branch specialization occurs when neural network layers are split up into branches. The neurons and circuits tend to self-organize, clumping related functions into each branch and forming larger functional units – a kind of “neural network brain region.” We find evidence that these structures implicitly exist in neural networks without branches, and that branches are simply reifying structures that otherwise exist.\n \n\n\n\n The earliest example of branch specialization that we’re aware of comes from AlexNet. AlexNet is famous as a jump in computer vision, arguably starting the deep learning revolution, but buried in the paper is a fascinating, rarely-discussed detail.\n\n The first two layers of AlexNet are split into two branches which can’t communicate until they rejoin after the second layer. This structure was used to maximize the efficiency of training the model on two GPUs, but the authors noticed something very curious happened as a result. 
The neurons in the first layer organized themselves into two groups: black-and-white Gabor filters formed on one branch and low-frequency color detectors formed on the other branch.\n \n\n\n\n![](images/Figure_1.png)\n\n\n[1](#figure-1). Branch specialization in the first two layers of AlexNet. Krizhevsky et al. observed the phenomenon we call branch specialization in the first layer of AlexNet by visualizing their weights to RGB channels; here, we use [feature visualization](https://distill.pub/2017/feature-visualization/) to show how this phenomenon extends to the second layer of each branch.\n \n\n\n\n\n\n Although the first layer of AlexNet is the only example of branch specialization we’re aware of being discussed in the literature, it seems to be a common phenomenon. We find that branch specialization happens in later hidden layers, not just the first layer. It occurs in both low-level and high-level features. It occurs in a wide range of models, including places you might not expect it – for example, residual blocks in resnets can functionally be branches and specialize. Finally, branch specialization appears to surface as a structural phenomenon in plain convolutional nets, even without any particular structure causing it.\n \n\n\n\n Is there a large-scale structure to how neural networks operate? How are features and circuits organized within the model? Does network architecture influence the features and circuits that form? Branch specialization hints at an exciting story related to all of these questions.\n \n\n\nWhat is a branch?\n-----------------\n\n\n\n Many neural network architectures have *branches*, sequences of layers which temporarily don’t have access to “parallel” information which is still passed to later layers.\n \n\n\n\n\n\n\n\n\nInceptionV1\n has nine sets of four-way branches called “Inception blocks.”\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n has several two-way branches.\nAlexNet\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nResidual Networks\n aren’t typically thought of as having branches, but residual blocks can be seen as a type of branch.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n[2](#figure-2). Examples of branches in various types of neural network architectures.\n \n\n\n\n\n\n In the past, models with explicitly-labeled branches were popular (such as AlexNet and the Inception family of networks). In more recent years, these have become less common, but residual networks – which can be seen as implicitly having branches in their residual blocks – have become very common. We also sometimes see branched architectures develop automatically in neural architecture search, an approach where the network architecture is learned.\n \n\n\n\n The implicit branching of residual networks has some important nuances. At first glance, every layer is a two-way branch. 
But because the branches are combined together by addition, we can actually rewrite the model to reveal that the residual blocks can be understood as branches in parallel:\n \n\n\n\n\n\n\n\nWe typically think of residual blocks as sequential layers, building on top of each other.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n+\n\n\n\n+\n\n\n\n+\n\n\n\n+\n\n\n\n\n… but we can also conceptualize them as, to some extent, being parallel branches due to the skip connections. This means that residual blocks can potentially specialize.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n+\n\n\n\n\n\n\n\n[3](#figure-3). Residual blocks as branches in parallel.\n \n\n\n\n\n\n We typically see residual blocks specialize in very deep residual networks (e.g. ResNet-152). One hypothesis for why is that, in these models, the exact depth of a layer doesn’t matter and the branching aspect becomes more important than the sequential aspect.\n \n\n\n\n One of the conceptual weaknesses of normal branching models is that although branches can save parameters, it still requires a lot of parameters to mix values between branches. However, if you buy the branch interpretation of residual networks, you can see them as a strategy to sidestep this: residual networks intermix branches (e.g. block sparse weights) with low-rank connections (projecting all the blocks into the same sum and then back up). This seems like a really elegant way to handle branching. More practically, it suggests that analysis of residual networks might be well-served by paying close attention to the units in the blocks, and that we might expect the residual stream to be unusually polysemantic.\n \n\n\nWhy does branch specialization occur?\n-------------------------------------\n\n\n\n Branch specialization is defined by features organizing between branches. In a normal layer, features are organized randomly: a given feature is just as likely to be any neuron in a layer. But in a branched layer, we often see features of a given type cluster to one branch. The branch has specialized on that type of feature.\n \n\n\n\n How does this happen? Our intuition is that there’s a positive feedback loop during training.\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nA1\nB1\nC1\nD1\nA2\nB2\nD2\nD2\nThe first part of the branch is incentivized to form features relevant to the second half.\nThe second half of the branch prefers features which the first half provides primitives for.\n\n\n\n\n\n\n[4](#figure-4). Hypothetical positive feedback loop of branch specialization during training.\n \n\n\n\n\n\n Another way to think about this is that if you need to cut a neural network into pieces that have limited ability to communicate with each other, it makes sense to organize similar features close together, because they probably need to share more information.\n \n\n\nBranch specialization beyond the first layer\n--------------------------------------------\n\n\n\n So far, the only concrete example we’ve shown of branch specialization is the first and second layer of AlexNet. What about later layers? AlexNet also splits its later layers into branches, after all. This seems to be unexplored, since studying features after the first layer is much harder.For the first layer, one can visualize the RGB weights; for later layers, one needs to use feature visualization.\n\n\n\n\n Unfortunately, branch specialization in the later layers of AlexNet is also very subtle. Instead of one overall split, it’s more like there’s dozens of small clusters of neurons, each cluster being assigned to a branch. 
It's hard to be confident that one isn't just seeing patterns in noise.\n\n\n\n But other models have very clear branch specialization in later layers. This tends to happen when a branch constitutes only a very small fraction of a layer, either because there are many branches or because one is much smaller than others. In these cases, the branch can specialize on a very small subset of the features that exist in a layer and reveal a clear pattern.\n\n\n\n For example, most of InceptionV1's layers have a branched structure. The branches have varying numbers of units, and varying convolution sizes. The 5x5 branch is the smallest branch, and also has the largest convolution size. It's often very specialized:\n\n\n*Feature visualizations omitted; the figure groups the units of each 5x5 branch by feature type (3D Geometry / Complex Shapes, Curve Related, BW vs Color, Fur/Eye/Face Related, Boundary Detectors, Brightness, Other Color Contrast, Other), with the following panel captions.*\n\n* `mixed3a_5x5`: The 5x5 branch of mixed3a, a relatively early layer, is specialized on color detection, and especially black-and-white vs. color detection.\n* `mixed3b_5x5`: This branch contains all 30 of the curve-related features for this layer (all curves, double curves, circles, spirals, S-shapes, and more). It also contains a disproportionate number of boundary, eye, and fur detectors, many of which share sub-components with curves.\n* `mixed4a_5x5`: This branch appears to be specialized in complex shape and 3D geometry detectors. We don't have a full taxonomy of this layer to allow for a quantitative assessment.\n\n\n[5](#figure-5). 
Examples of branch specialization in `[mixed3a\\_5x5](https://distill.pub/2017/feature-visualization/appendix/googlenet/3a.html#3a-192)`, `[mixed3b\\_5x5](https://distill.pub/2017/feature-visualization/appendix/googlenet/3b.html#3b-320)`, and `[mixed4a\\_5x5](https://distill.pub/2017/feature-visualization/appendix/googlenet/4a.html#4a-396)`.\n\n\n\n This is exceptionally unlikely to have occurred by chance.\n\n For example, all 9 of the black and white vs. color detectors in `mixed3a` are in `mixed3a_5x5`, despite that branch making up only 32 of the 256 neurons in the layer. The probability of that happening by chance is less than 1/10⁸. For a more extreme example, all 30 of the curve-related features in `mixed3b` are in `mixed3b_5x5`, despite it being only 96 out of the 480 neurons in the layer. The probability of that happening by chance is less than 1/10²⁰.\n\n\n It's worth noting one confounding factor which might be influencing the specialization: the 5x5 branches are the smallest branches, but they also have larger convolutions (5x5 instead of 3x3 or 1x1) than their neighbors. There is, however, something which suggests that the branching plays an essential role: mixed3a and mixed3b are adjacent layers which contain relatively similar features and are at the same scale. If it were only about convolution size, why don't we see any curves in the `mixed3a_5x5` branch or color in the `mixed3b_5x5` branch?\n\n\n\nWhy is branch specialization consistent?\n----------------------------------------\n\n\n\n Perhaps the most surprising thing about branch specialization is that the same branch specializations seem to occur again and again, across different architectures and tasks.\n\n\n\n For example, the branch specialization we observed in AlexNet – the first layer specializing into a black-and-white Gabor branch vs. a low-frequency color branch – is a surprisingly robust phenomenon. It occurs consistently if you retrain AlexNet. It also occurs if you train other architectures with the first few layers split into two branches. It even occurs if you train those models on other natural image datasets, like Places instead of ImageNet. Anecdotally, we also seem to see other types of branch specialization recur. For example, branches that seem to specialize in curve detection appear to be quite common (although InceptionV1's `mixed3b_5x5` is the only one we've carefully characterized).\n\n\n\n So, why do the same branch specializations occur again and again?\n\n\n\n One hypothesis seems very tempting. Notice that many of the same features that form in normal, non-branched models also seem to form in branched models. For example, the first layer of both branched and non-branched models contains Gabor filters and color features. If the same features exist, presumably the same weights exist between them.\n\n\n\n Could it be that branching is just surfacing a structure that already exists? Perhaps there are two different subgraphs between the weights of the first and second conv layer in a normal model, with relatively small weights between them, and when you train a branched model these two subgraphs latch onto the branches.\n\n (This would be directionally similar to work finding modular substructures within neural networks.)\n\n\n\n To test this, let's look at models which have non-branched first and second convolutional layers. Let's take the weights between them and perform a singular value decomposition (SVD) on the absolute values of the weights. 
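Concretely, that decomposition might look something like the sketch below; the tensor shape and the reduction of |W| over its spatial dimensions are our assumptions for illustration, not necessarily the exact analysis behind the figure:

```python
import numpy as np

def weight_svd_coords(W, k=2):
    """SVD of the absolute weights between two adjacent conv layers.

    W: weight tensor of shape (kh, kw, c1, c2). Summing |W| over the spatial
    dimensions gives a (c1, c2) matrix of unsigned connection strengths.
    """
    A = np.abs(W).sum(axis=(0, 1))                     # (c1, c2)
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    layer1_coords = U[:, :k] * S[:k]                   # one k-D point per first-layer neuron
    layer2_coords = Vt[:k, :].T * S[:k]                # one k-D point per second-layer neuron
    return layer1_coords, layer2_coords

# Scattering layer1_coords (and layer2_coords) along the top two singular
# vectors gives the kind of plot shown in the figure below: if the first
# factor separates color features from black-and-white features, those two
# groups tend to connect to different neurons in the next layer.
```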
This decomposition will show us the main factors of variation in which neurons connect to different neurons in the next layer (irrespective of whether those connections are excitatory or inhibitory).\n\n\n\n Sure enough, the first singular vector (the largest factor of variation) of the weights between the first two convolutional layers of InceptionV1 is color.\n\n\n*Scatter plots omitted. The figure plots neurons in the first convolutional layer organized by the left singular vectors of |W|, and neurons in the second convolutional layer organized by the right singular vectors of |W|, with axes Singular Vector 0 (color?) and Singular Vector 1 (frequency?). For InceptionV1 (tf-slim version) trained on ImageNet, the first singular vector separates color and black and white, meaning that's the largest dimension of variation in which neurons connect to which in the next layer; Gabor filters and color features are far apart, meaning they tend to connect to different features in the next layer. Once more, for InceptionV1 trained on Places365, the first singular vector separates color and black and white.*\n\n\n[6](#figure-6). Singular vectors for the first and second convolutional layers of InceptionV1, trained on ImageNet (above) or Places365 (below). One can think of neurons being plotted closer together in this diagram as meaning they likely tend to connect to similar neurons.\n\n\n\n We also see that the second factor appears to be [frequency](/2020/circuits/frequency-edges/). This suggests an interesting prediction: perhaps if we were to split the layer into more than two branches, we'd also observe specialization in frequency in addition to color.\n\n\n\n This seems like it may be true. For example, here we see a high-frequency black-and-white branch, a mid-frequency mostly black-and-white branch, a mid-frequency color branch, and a low-frequency color branch.\n\n\n\n![](images/Figure_7.png)\n\n\n[7](#figure-7). We constructed a small ImageNet model with the first layer split into four branches. The rest of the model is roughly an InceptionV1 architecture.\n\n\n\nParallels to neuroscience\n-------------------------\n\n\n\n We've shown that branch specialization is one example of a structural phenomenon — a larger-scale structure in a neural network. It happens in a variety of situations and neural network architectures, and it happens with *consistency* – certain motifs of specialization, such as color, frequency, and curves, happen consistently across different architectures and tasks.\n\n\n\n Returning to our comparison with anatomy, although we hesitate to claim explicit parallels to neuroscience, it's tempting to draw analogies between branch specialization and the existence of regions of the brain focused on particular tasks. The visual cortex, the auditory cortex, Broca's area and Wernicke's area are all examples of brain areas with such consistent specialization across wide populations of people that neuroscientists and psychologists have been able to characterize them as having remarkably consistent functions. The subspecialization within the V2 area of the primate visual cortex is another strong example from neuroscience: 
one type of stripe within V2 is sensitive to orientation or luminance, whereas the other type of stripe contains color-selective neurons. (We are grateful to Patrick Mineault for noting this analogy, and for further noting that the high-frequency features are consistent with some of the known representations of high-level features in the primate V2 area.)\n\n As researchers without expertise in neuroscience, we're uncertain how useful this connection is, but it may be worth considering whether branch specialization can be a useful model of how specialization might emerge in biological neural networks.\n\n\n\n![](images/multiple-pages.svg)\n\n This article is part of the [Circuits thread](/2020/circuits/), an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks.\n\n\n[Visualizing Weights](/2020/circuits/visualizing-weights/)\n[Weight Banding](/2020/circuits/weight-banding/)\n\n\n### Comment\n\n\n\n**[Matthew Nolan](https://www.ed.ac.uk/discovery-brain-sciences/our-staff/research-groups/matthew-nolan)** is Professor of Neural Circuits and Computation at the Centre for Discovery Brain Sciences and Simons Initiative for the Developing Brain, University of Edinburgh.\n **Ian Hawes** is a PhD student in the Wellcome Trust programme in Translational Neuroscience at the University of Edinburgh.\n\n\nAs neuroscientists we're excited by this work as it offers fresh theoretical perspectives on long-standing questions about how brains are organised and how they develop. Branching and specialisation are found throughout the brain. A well studied example is the dorsal and ventral visual streams, which are associated with spatial and non-spatial visual processing. At the microcircuit level neurons in each pathway are similar. However, recordings of neural activity demonstrate remarkable specialisation; classic experiments from the 1970s and 80s established the idea that the ventral stream enables identification of objects whereas the dorsal stream represents their location. Since then, much has been learned about signal processing in these pathways, but fundamental questions such as why there are multiple streams and how they are established remain unanswered.\n\n\n\nFrom the perspective of a neuroscientist, a striking result from the investigation of branch specialization by Voss and her colleagues is that robust branch specialisation emerges in the absence of any complex branch-specific design rules. Their analyses show that specialisation is similar within and across architectures, and across different training tasks. 
The implication here is that no specific instructions are required for branch specialisation to emerge. Indeed, their analyses suggest that it even emerges in the absence of predetermined branches. By contrast, the intuition of many neuroscientists would be that specialisation of different areas of the neocortex requires developmental mechanisms that are specific to each area. For neuroscientists aiming to understand how perceptual and cognitive functions of the brain arise, an important idea here is that developmental mechanisms that drive the separation of cortical pathways, such as the dorsal and ventral visual streams, may be absolutely critical.\n\n\n\n\nWhile the parallels between branch specialization in artificial neural networks and neural circuits in the brain are striking, there are clearly major differences and many outstanding questions. From the perspective of building artificial neural networks, we wonder if branch specific tuning of individual units and their connectivity rules would enhance performance? In the brain, there is good evidence that the activation functions of individual neurons are fine-tuned between and even within distinct neural circuits. If this fine tuning confers benefits to the brain then we might expect similar benefits in artificial networks. From the perspective of understanding the brain, we wonder whether branch specialisation could help make experimentally testable predictions? If artificial networks can be engineered with branches that have organisation similar to branching pathways in the brain, then manipulations to these networks could be compared to experimental manipulations achieved with optogenetic and chemogenetic strategies. Given that many brain disorders involve changes to specific neural populations, similar strategies could give insights into how these pathological changes alter brain functions. For example, very specific populations of neurons are disrupted in early stages of Alzheimer’s disease. 
By disrupting corresponding units in neural network models one could explore the resulting computational deficits and possible strategies for restoration of cognitive functions.", "date_published": "2021-04-05T20:00:00Z", "authors": ["Chelsea Voss", "Gabriel Goh", "Nick Cammarata", "Michael Petrov", "Ludwig Schubert", "Chris Olah"], "summaries": ["When a neural network layer is divided into multiple branches, neurons self-organize into coherent groupings."], "doi": "10.23915/distill.00024.008", "journal_ref": "distill-pub", "bibliography": [{"link": "https://www.researchgate.net/profile/Aaron_Courville/publication/265022827_Visualizing_Higher-Layer_Features_of_a_Deep_Network/links/53ff82b00cf24c81027da530.pdf", "title": "Visualizing higher-layer features of a deep network"}, {"link": "https://arxiv.org/pdf/1602.03616.pdf", "title": "Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks"}, {"link": "https://distill.pub/2017/feature-visualization", "title": "Feature Visualization"}, {"link": "https://www.jneurosci.org/content/7/11/3378", "title": "Segregation of form, color, and stereopsis in primate area 18"}, {"link": "https://www.jneurosci.org/content/24/13/3313", "title": "Representation of Angles Embedded within Contour Stimuli in Area V2 of Macaque Monkeys"}]} {"id": "8ca56371f32f6a2e55c8306b5fe94e7d", "title": "Self-Organising Textures", "url": "https://distill.pub/selforg/2021/textures", "source": "distill", "source_type": "blog", "text": "### Contents\n\n\n[Patterns, textures and physical processes](#patterns-textures-and-physical-processes)\n* [From Turing, to Cellular Automata, to Neural Networks](#from-turing-to-cellular-automata-to-neural-networks)\n* [NCA as pattern generators](#nca-as-pattern-generators)\n* [Related work](#related-work)\n\n\n[Feature Visualization](#feature-visualization)\n* [NCA with Inception](#nca-with-inception)\n\n\n[Other interesting findings](#other-interesting-findings)\n* [Robustness](#robustness)\n* [Hidden States](#hidden-states)\n\n\n[Conclusion](#conclusion)\n\n\n\n![](images/multiple-pages.svg)\n\n This article is part of the\n [Differentiable Self-organizing Systems Thread](/2020/selforg/),\n an experimental format collecting invited short articles delving into\n differentiable self-organizing systems, interspersed with critical\n commentary from several experts in adjacent fields.\n \n\n\n[Self-classifying MNIST Digits](/2020/selforg/mnist/)\n[Adversarial Reprogramming of Neural Cellular Automata](/selforg/2021/adversarial/)\n\nNeural Cellular Automata (NCA We use NCA to refer to both *Neural Cellular Automata* and *Neural Cellular Automaton*.) are capable of learning a diverse set of behaviours: from generating stable, regenerating, static images , to segmenting images , to learning to “self-classify” shapes . The inductive bias imposed by using cellular automata is powerful. A system of individual agents running the same learned local rule can solve surprisingly complex tasks. Moreover, individual agents, or cells, can learn to coordinate their behavior even when separated by large distances. By construction, they solve these tasks in a massively parallel and inherently degenerate Degenerate in this case refers to the [biological concept of degeneracy](https://en.wikipedia.org/wiki/Degeneracy_(biology)). way. 
Each cell must be able to take on the role of any other cell - as a result they tend to generalize well to unseen situations.\n\n\nIn this work, we apply NCA to the task of texture synthesis. This task involves reproducing the general appearance of a texture template, as opposed to making pixel-perfect copies. We are going to focus on texture losses that allow for a degree of ambiguity. After training NCA models to reproduce textures, we subsequently investigate their learned behaviors and observe a few surprising effects. Starting from these investigations, we make the case that the cells learn distributed, local, algorithms. \n\n\nTo do this, we apply an old trick: we employ neural cellular automata as a differentiable image parameterization .\n\n\nPatterns, textures and physical processes\n-----------------------------------------\n\n\n\n\n![](images/zebra.jpg)\nA pair of Zebra. Zebra are said to have unique stripes.\n\nZebra stripes are an iconic texture. Ask almost anyone to identify zebra stripes in a set of images, and they will have no trouble doing so. Ask them to describe what zebra stripes look like, and they will gladly tell you that they are parallel stripes of slightly varying width, alternating in black and white. And yet, they may also tell you that no two zebra have the same set of stripes Perhaps an apocryphal claim, but at the very lowest level every zebra will be unique. Ourp point is - “zebra stripes” as a concept in human understanding refers to the general structure of a black and white striped pattern and not to a specific mapping from location to colour.. This is because evolution has programmed the cells responsible for creating the zebra pattern to generate a pattern of a certain quality, with certain characteristics, as opposed to programming them with the blueprints for an exact bitmap of the edges and locations of stripes to be moulded to the surface of the zebra’s body.\n\n\nPut another way, patterns and textures are ill-defined concepts. The Cambridge English Dictionary defines a pattern as “any regularly repeated arrangement, especially a design made from repeated lines, shapes, or colours on a surface”. This definition falls apart rather quickly when looking at patterns and textures that impart a feeling or quality, rather than a specific repeating property. A coloured fuzzy rug, for instance, can be considered a pattern or a texture, but is composed of strands pointing in random directions with small random variations in size and color, and there is no discernable regularity to the pattern. Penrose tilings do not repeat (they are not translationally invariant), but show them to anyone and they’ll describe them as a pattern or a texture. Most patterns in nature are outputs of locally interacting processes that may or may not be stochastic in nature, but are often based on fairly simple rules. There is a large body of work on models which give rise to such patterns in nature; most of it is inspired by Turing’s seminal paper on morphogenesis. \n\n\nSuch patterns are very common in developmental biology . 
In addition to coat colors and skin pigmentation, invariant large-scale patterns, arising in spite of stochastic low-level dynamics, are a key feature of peripheral nerve networks, vascular networks, somites (blocks of tissue demarcated in embryogenesis that give rise to many organs), and segments of anatomical and genetic-level features, including whole body plans (e.g., snakes and centipedes) and appendages (such as demarcation of digit fields within the vertebrate limb). These kinds of patterns are generated by reaction-diffusion processes, bioelectric signaling, planar polarity, and other cell-to-cell communication mechanisms. Patterns in biology are not only structural, but also physiological, as in the waves of electrical activity in the brain and the dynamics of gene regulatory networks. These gene regulatory networks, for example, can support computation sufficiently sophisticated as to be subject to Liar paradoxes See [liar paradox](https://en.wikipedia.org/wiki/Liar_paradox). In principle, gene regulatory networks can express paradoxical behaviour, such as that expression of factor A represses the expression of factor A. One result of such a paradox can be that a certain factor will oscillate with time. . Studying the emergence and control of such patterns can help us to understand not only their evolutionary origins, but also how they are recognized (either in the visual system of a second observer or in adjacent cells during regeneration) and how they can be modulated for the purposes of regenerative medicine.\n\n\nAs a result, when having any model learn to produce textures or patterns, we want it to learn a generative process for the pattern. We can think of such a process as a means of sampling from the distribution governing this pattern. The first hurdle is to choose an appropriate loss function, or qualitative measure of the pattern. To do so, we employ ideas from Gatys et. al . NCA become the parametrization for an image which we “stylize” in the style of the target pattern. In this case, instead of restyling an existing image, we begin with a fully unconstrained setting: the output of an untrained, randomly initialized, NCA. The NCA serve as the “renderer” or “generator”, and a pre-trained differentiable model serves as a distinguisher of the patterns, providing the gradient necessary for the renderer to learn to produce a pattern of a certain style.\n\n\n### From Turing, to Cellular Automata, to Neural Networks\n\n\nNCA are well suited for generating textures. To understand why, we’ll demonstrate parallels between texture generation in nature and NCA. Given these parallels, we argue that NCA are a good model class for texture generation.\n\n\n#### PDEs\n\n\nIn “The Chemical Basis of Morphogenesis” , Alan Turing suggested that simple physical processes of reaction and diffusion, modelled by partial differential equations, lie behind pattern formation in nature, such as the aforementioned zebra stripes. Extensive work has since been done to identify PDEs modeling reaction-diffusion and evaluating their behaviour. One of the more celebrated examples is the Gray-Scott model of reaction diffusion (,). This process has a veritable zoo of interesting behaviour, explorable by simply tuning the two parameters. We strongly encourage readers to visit this [interactive atlas](http://mrob.com/pub/comp/xmorphia/) of the different regions of the Gray-Scott reaction diffusion model to get a sense for the extreme variety of behaviour hidden behind two simple knobs. 
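For readers who would rather poke at the system in code, here is a minimal NumPy sketch of the Gray-Scott update; the two knobs are the feed rate `F` and kill rate `k`, and the particular values below are just one commonly used setting, not taken from the atlas:

```python
import numpy as np

def laplacian(a):
    # 5-point discrete Laplacian with wrap-around (toroidal) boundaries.
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

def gray_scott(n=128, steps=5000, Du=0.16, Dv=0.08, F=0.035, k=0.065, seed=0):
    """Simulate the Gray-Scott reaction-diffusion system on an n x n grid."""
    rng = np.random.default_rng(seed)
    u = np.ones((n, n))
    v = np.zeros((n, n))
    # Seed a noisy square of the second chemical to break the symmetry.
    s = slice(n // 2 - 8, n // 2 + 8)
    u[s, s], v[s, s] = 0.50, 0.25
    v += 0.02 * rng.random((n, n))
    for _ in range(steps):
        uvv = u * v * v
        u += Du * laplacian(u) - uvv + F * (1.0 - u)
        v += Dv * laplacian(v) + uvv - (F + k) * v
    return u, v  # visualize e.g. with plt.imshow(v)
```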
The more adventurous can even [play with a simulation locally](https://groups.csail.mit.edu/mac/projects/amorphous/jsim/sim/GrayScott.html) or [in the browser](https://mrob.com/pub/comp/xmorphia/ogl/index.html).\n\n\nTo tackle the problem of reproducing our textures, we propose a more general version of the above systems, described by a simple Partial Differential Equation (PDE) over the state space of an image:\n\n\n$$\\frac{\\partial \\mathbf{s}}{\\partial t} = f(\\mathbf{s}, \\nabla\\_\\mathbf{x} \\mathbf{s}, \\nabla\\_\\mathbf{x}^{2}\\mathbf{s})$$\n\nHere, $f$ is a function that depends on the gradient ($\\nabla\\_\\mathbf{x} \\mathbf{s}$) and Laplacian ($\\nabla\\_\\mathbf{x}^{2}\\mathbf{s}$) of the state space and determines the time evolution of this state space. $\\mathbf{s}$ represents a k-dimensional vector whose first three components correspond to the visible RGB color channels.\n\n\nIntuitively, we have defined a system where every point of the image changes with time, in a way that depends on how the image currently changes across space with respect to its immediate neighbourhood. Readers may start to recognize the resemblance between this and another system based on immediately local interactions.\n\n\n#### To CAs\n\n\nDifferential equations governing natural phenomena are usually evaluated using numerical differential equation solvers. Indeed, this is sometimes the **only** way to solve them, as many PDEs and ODEs of interest do not have closed form solutions. This is even the case for some deceptively simple ones, such as the [three-body problem](https://en.wikipedia.org/wiki/Three-body_problem). Numerically solving PDEs and ODEs is a vast and well-established field. One of the biggest hammers in the metaphorical toolkit for numerically evaluating differential equations is discretization: the process of converting the variables of the system from continuous space to a discrete space, where numerical integration is tractable. When using an ODE to model the change in a phenomenon over time, for example, it makes sense to advance through time in discrete steps, possibly of variable size.\n\n\nWe now show that numerically integrating the aforementioned PDE is equivalent to reframing the problem as a Neural Cellular Automaton, with $f$ assuming the role of the NCA rule.\n\n\nThe logical approach to discretizing the space the PDE operates on is to discretize the continuous 2D image space into a 2D raster grid. Boundary conditions are of concern, but we can address them by moving to a toroidal world where each dimension wraps around on itself.\n\n\nSimilarly to space, we choose to treat time in a discretized fashion and evaluate our NCA at fixed-size time steps. This is equivalent to explicit Euler integration. However, here we make an important deviation from traditional PDE numerical integration methods, for two reasons. First, if all cells are updated synchronously, initial conditions $\\mathbf{s}\\_0$ must vary from cell to cell in order to break the symmetry. Second, the physical implementation of the synchronous model would require the existence of a global clock, shared by all cells. One way to work around the former is by initializing the grid with random noise, but in the spirit of self organisation we instead choose to decouple the cell updates by asynchronously evaluating the CA. We sample a subset of all cells at each time-step to update. 
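A minimal sketch of this asynchronous update scheme, with `f` standing in for the local rule and the fraction of cells updated per step chosen arbitrarily for illustration:

```python
import numpy as np

def async_step(state, f, update_prob=0.5, rng=None):
    """One asynchronous CA step on a (height, width, k) grid of cell states.

    `f` maps the whole grid to a per-cell state increment of the same shape.
    Only a random subset of cells applies its increment this step, which both
    breaks the symmetry between identical cells and removes the need for a
    global clock shared by all cells.
    """
    rng = np.random.default_rng() if rng is None else rng
    delta = f(state)
    update_mask = rng.random(state.shape[:2]) < update_prob   # (height, width)
    return state + delta * update_mask[..., None]
```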
This introduces both asynchronicity in time (cells will sometimes operate on information from their neighbours that is several timesteps old) and asymmetry in space, solving both aforementioned issues.\n\n\nOur next step towards representing a PDE with cellular automata is to discretize the gradient and Laplacian operators. For this we use the [Sobel operator](https://en.wikipedia.org/wiki/Sobel_operator) and the [9-point variant](https://en.wikipedia.org/wiki/Discrete_Laplace_operator) of the discrete Laplace operator, as below.\n\n\n$$\\begin{array}{ccc}\n\\begin{bmatrix} -1 & 0 & 1 \\\\ -2 & 0 & 2 \\\\ -1 & 0 & 1 \\end{bmatrix} &\n\\begin{bmatrix} -1 & -2 & -1 \\\\ 0 & 0 & 0 \\\\ 1 & 2 & 1 \\end{bmatrix} &\n\\begin{bmatrix} 1 & 2 & 1 \\\\ 2 & -12 & 2 \\\\ 1 & 2 & 1 \\end{bmatrix} \\\\\nSobel\\_x & Sobel\\_y & Laplacian\n\\end{array}$$\n\nWith all the pieces in place, we now have a space-discretized version of our PDE that looks very much like a Cellular Automaton: the time evolution of each discrete point in the raster grid depends only on its immediate neighbours. These discrete operators allow us to formalize our PDE as a CA. To double check that this is true, simply observe that as our grid becomes very fine and the asynchronous updates approach uniformity, the dynamics of these discrete operators will reproduce the continuous dynamics of the original PDE as we defined it.\n\n\n#### To Neural Networks\n\n\nThe final step in implementing the above general PDE for texture generation is to translate it to the language of deep learning. Fortunately, all the operations involved in iteratively evaluating the generalized PDE exist as common operations in most deep learning frameworks. We provide both a Tensorflow and a minimal PyTorch implementation for reference, and refer readers to these for details on our implementation.\n\n\n### NCA as pattern generators\n\n\n#### Model:\n\n\n![](images/texture_model.svg)\nTexture NCA model.\n\nWe build on the Growing CA NCA model, complete with built-in quantization of weights, stochastic updates, and the batch pool mechanism to approximate long-term training. For further details on the model and motivation, we refer readers to this work.\n\n\n#### Loss function:\n\n\n![](images/texture_training.svg)\nTexture NCA training setup.\n\nWe use a well-known deep convolutional network for image recognition, VGG (Visual Geometry Group Net), as our differentiable discriminator of textures, for the same reasons outlined in Differentiable Parametrizations. We start with a template image, $\\vec{x}$, which we feed into VGG. Then we collect statistics from certain layers (block[1...5]\\_conv1) in the form of the raw activation values of the neurons in these layers. Finally, we run our NCA forward for between 32 and 64 iterations, feeding the resulting RGB image into VGG. Our loss is the $L\\_2$ distance between the Gram matrix (for a brief definition of Gram matrices, see [here](https://www.tensorflow.org/tutorials/generative/style_transfer#calculate_style)) of the activations of these neurons with the NCA output as input and the Gram matrix of their activations with the template image as input. We keep the weights of VGG frozen and use ADAM to update the weights of the NCA.\n\n\n#### Dataset:\n\n\nThe template images for this dataset are from the Oxford Describable Textures Dataset. 
The aim of this dataset is to provide a benchmark for measuring the ability of vision models to recognize and categorize textures and describe textures using words. The textures were collected to match 47 “attributes” such as “bumpy” or “polka-dotted”. These 47 attributes were in turn distilled from a set of common words used to describe textures identified by Bhusan, Rao and Lohse . \n\n\n#### Results:\n\n\nAfter a few iterations of training, we see the NCA converge to a solution that at first glance looks similar to the input template, but not pixel-wise identical. The very first thing to notice is that the solution learned by the NCA is **not** time-invariant if we continue to iterate the CA. In other words it is constantly changing! \n\n\nThis is not completely unexpected. In *Differentiable Parametrizations*, the authors noted that the images produced when backpropagating into image space would end up different each time the algorithm was run due to the stochastic nature of the parametrizations. To work around this, they introduced some tricks to maintain **alignment** between different visualizations. In our model, we find that we attain such alignment along the temporal dimension without optimizing for it; a welcome surprise. We believe the reason is threefold. First, reaching and maintaining a static state in an NCA appears to be non-trivial in comparison to a dynamic one, so much so that in Growing CA a pool of NCA states at various iteration times had to be maintained and sampled as starting states to simulate loss being applied after a time period longer than the NCAs iteration period, to achieve a static stability. We employ the same sampling mechanism here to prevent the pattern from decaying, but in this case the loss doesn’t enforce a static fixed target; rather it guides the NCA towards any one of a number of states that minimizes the style loss. Second, we apply our loss after a random number of iterations of the NCA. This means that, at any given time step, the pattern must be in a state that minimizes the loss. Third, the stochastic updates, local communication, and quantization all limit and regularize the magnitude of updates at each iteration. This encourages changes to be small between one iteration and the next. We hypothesize that these properties combined encourage the NCA to find a solution where each iteration is **aligned** with the previous iteration. We perceive this alignment through time as motion, and as we iterate the NCA we observe it traversing a manifold of locally aligned solutions. \n\n\nWe now **posit** *that finding temporally aligned solutions is equivalent to finding an algorithm, or process, that generates the template pattern*, based on the aforementioned findings and qualitative observation of the NCA. We proceed to demonstrate some exciting behaviours of NCA trained on different template images. \n\n\n\n\n\nAn NCA trained to create a pattern in the style of **chequered\\_0121.jpg**.\n\nHere, we see that the NCA is trained using a template image of a simple black and white grid. \n\n\nWe notice that: \n\n\n* Initially, a non-aligned grid of black and white quadrilaterals is formed.\n* As time progresses, the quadrilaterals seemingly grow or shrink in both x⃗\\vec{x}x⃗ and y⃗\\vec{y}y⃗​ to more closely approximate squares. Quadrilaterals of both colours either emerge or disappear. 
Both of these behaviours seem to be an attempt to find local consistency.\n* After a longer time, the grid tends to achieve perfect consistency.\n\n\nSuch behaviour is not entirely unlike what one would expect in a hand-engineered algorithm to produce a consistent grid with local communication. For instance, one potential hand-engineered approach would be to have cells first try and achieve local consistency, by choosing the most common colour from the cells surrounding them, then attempting to form a diamond of correct size by measuring distance to the four edges of this patch of consistent colour, and moving this boundary if it were incorrect. Distance could be measured by using a hidden channel to encode a gradient in each direction of interest, with each cell decreasing the magnitude of this channel as compared to its neighbour in that direction. A cell could then localize itself within a diamond by measuring the value of two such gradient channels. The appearance of such an algorithm would bear resemblance to the above - with patches of cells becoming either black, or white, diamonds then resizing themselves to achieve consistency.\n\n\n\n\n\n\nAn NCA trained to create a pattern in the style of **bubbly\\_0101.jpg**.\n\nIn this video, the NCA has learned to reproduce a texture based on a template of clear bubbles on a blue background. One of the most interesting behaviours we observe is that the density of the bubbles remains fairly constant. If we re-initialize the grid states, or interactively destroy states, we see a multitude of bubbles re-forming. However, as soon as two bubbles get too close to each other, one of them spontaneously collapses and disappears, ensuring a constant density of bubbles throughout the entire image. We regard these bubbles as ”[solitons](#an-aside-solitons-and-lenia)″ in the solution space of our NCA. This is a concept we will discuss and investigate at length below.\n\n\nIf we speed the animation up, we see that different bubbles move at different speeds, yet they never collide or touch each other. Bubbles also maintain their structure by self-correcting; a damaged bubble can re-grow.\n\n\nThis behaviour is remarkable because it arises spontaneously, without any external or auxiliary losses. All of these properties are learned from a combination of the template image, the information stored in the layers of VGG, and the inductive bias of the NCA. The NCA learned a rule that effectively approximates many of the properties of the bubbles in the original image. Moreover, it has learned a process that generates this pattern in a way that is robust to damage and looks realistic to humans. \n\n\n\n\n\n\nAn NCA trained to create a pattern in the style of **interlaced\\_0172.jpg**.\n\nHere we see one of our favourite patterns: a simple geometric “weave”. Again, we notice the NCA seems to have learned an algorithm for producing this pattern. Each “thread” alternately joins or detaches from other threads in order to produce the final pattern. This is strikingly similar to what one would attempt to implement, were one asked to programmatically generate the above pattern. One would try to design some sort of stochastic algorithm for weaving individual threads together with other nearby threads.\n\n\n\n\n\nAn NCA trained to create a pattern in the style of **banded\\_0037.jpg**.\n\nHere, misaligned stripe fragments travel up or down the stripe until either they merge to form a single straight stripe or a stripe shrinks and disappears. 
Were this to be implemented algorithmically with local communication, it is not infeasible that a similar algorithm for finding consistency among the stripes would be used.\n\n\n### Related work\n\n\nThis foray into pattern generation is by no means the first. There has been extensive work predating deep-learning, in particular suggesting deep connections between spatial patterning of anatomical structure and temporal patterning of cognitive and computational processes (e.g., reviewed in ). Hans Spemann, one of the heroes of classical developmental biology, said “Again and again terms have been used which point not to physical but to psychical analogies. It was meant to be more than a poetical metaphor. It was meant to my conviction that the suitable reaction of a germ fragment, endowed with diverse potencies, in an embryonic ‘field’… is not a common chemical reaction, like all vital processes, are comparable, to nothing we know in such degree as to vital processes of which we have the most intimate knowledge.” . More recently, Grossberg quantitatively laid out important similarities between developmental patterning and computational neuroscience . As briefly touched upon, the inspiration for much of the work came from Turing’s work on pattern generation through local interaction, and later papers based on this principle. However, we also wish to acknowledge some works that we feel have a particular kinship with ours. \n\n\n#### Patch sampling\n\n\nEarly work in pattern generation focused on texture sampling. Patches were often sampled from the original image and reconstructed or rejoined in different ways to obtain an approximation of the texture. This method has also seen recent success with the work of Gumin .\n\n\n#### Deep learning\n\n\nGatys et. al’s work , referenced throughout, has been seminal with regards to the idea that statistics of certain layers in a pre-trained network can capture textures or styles in an image. There has been extensive work building on this idea, including playing with other parametrisations for image generation and optimizing the generation process . \n\n\nOther work has focused on using a convolutional generator combined with path sampling and trained using an adversarial loss to produce textures of similar quality . \n\n\n#### Interactive Evolution of Camouflage\n\n\nPerhaps the most unconventional approach, with which we find kinship, is laid out in *Interactive Evolution of Camouflage* . Craig Reynolds uses a texture description language, consisting of generators and operators, to parametrize a texture patch, which is presented to human viewers who have to decide which patches are the worst at “camouflaging” themselves against a chosen background texture. The population is updated in an evolutionary fashion to maximize “camouflage”, resulting in a texture exhibiting the most camouflage (to human eyes) after a number of iterations. We see strong parallels with our work - instead of a texture generation language, we have an NCA parametrize the texture, and instead of human reviewers we use VGG as an evaluator of the quality of a generated pattern. We believe a fundamental difference lies in the solution space of an NCA. A texture generation language comes with a number of inductive biases and learns a deterministic mapping from coordinates to colours. Our method appears to learn more general algorithms and behaviours giving rise to the target pattern.\n\n\nTwo other noteworthy examples of similar work are Portilla et. 
al.'s work with the wavelet transform, and work by Chen et al. with reaction diffusion.

Feature visualization
---------------------

![](images/butterfly_eye.jpg)
A butterfly with an "eye-spot" on the wings.

We have now explored some of the fascinating behaviours learned by the NCA when presented with a template image. What if we want to see them learn even more "unconstrained" behaviour?

Some butterflies have remarkably lifelike eyes on their wings. It's unlikely the butterflies are even aware of this incredible artwork on their own bodies. Evolution placed these there to trigger a response of fear in potential predators, or to deflect attacks from them. It is likely that neither the predator nor the butterfly has a concept of what an eye is or does, much less any [theory of mind](https://en.wikipedia.org/wiki/Theory_of_mind) regarding the consciousness of the other, but evolution has identified a region of morphospace for this organism that exploits pattern-identifying features of predators to trick them into fearing a harmless bug instead of consuming it.

Even more remarkable is the fact that the individual cells composing the butterfly's wings can self-assemble into coherent, beautiful shapes far larger than an individual cell - indeed, a cell is on the order of $10^{-5}$ m across, while the features on the wings grow to as large as $10^{-3}$ m. Because cell-to-cell communication is purely local, the coordination required to produce these features implies self-organization across hundreds or thousands of cells, all to generate a coherent image of an eye that evolved simply to act as a visual stimulus for an entirely different species. Of course, this pales in comparison to the morphogenesis that occurs in animal and plant bodies, where structures consisting of millions of cells will specialize and coordinate to generate the target morphology.

A common approach to investigating neural networks is to look at what inhibits or excites individual neurons in a network. Just as neuroscientists and biologists have often treated cells, cell structures, and neurons as black-box models to be investigated, measured, and reverse-engineered, there is a large contemporary body of work doing the same with artificial neural networks - see, for instance, the work by Boettiger.

We can explore this idea with minimal effort by taking our pattern-generating NCA and asking what happens if we task it with entering a state that excites a given neuron in Inception. One common result is an NCA that produces eyes and eye-related shapes - such as in the video below - likely because Inception had to learn to detect various animals in ImageNet. In the same way that cells form eye patterns on the wings of butterflies to excite neurons in the brains of predators, our NCA's population of cells has learned to collaborate to produce a pattern that excites certain neurons in an external neural network.

An NCA trained to excite **mixed4a\_472** in Inception.

### NCA with Inception

#### Model:

We use a model identical to the one used for exploring pattern generation, but with a different discriminator network: an ImageNet-trained Inception v1 network.

#### Loss function:

Our loss maximizes the activations of chosen neurons when evaluated on the output of the NCA. We add an auxiliary loss to encourage the outputs of the NCA to lie in $[0, 1]$, as this is not inherently built into the model.
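A minimal sketch of this combined objective might look as follows; the `inception_activations` helper and the example layer/channel indices are assumptions for illustration, not the reference code.

```python
import torch

def excitation_loss(nca_rgb, inception_activations, layer='mixed4a', channel=472):
    # inception_activations(x, layer) is assumed to return the [batch, h, w, channels]
    # activation tensor of the named layer for a batch of input images.
    acts = inception_activations(nca_rgb, layer)
    # Maximize the mean activation of the chosen channel (minimize its negative).
    neuron_term = -acts[..., channel].mean()
    # Auxiliary term pushing the NCA's RGB output back into [0, 1].
    overflow = (nca_rgb - nca_rgb.clamp(0.0, 1.0)).abs().mean()
    return neuron_term + overflow
```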
We keep the weights of the Inception frozen and use ADAM to update the weights of the NCA.\n\n\n#### Dataset:\n\n\nThere is no explicit dataset for this task. Inception is trained on ImageNet. The layers and neurons we chose to excite are chosen qualitatively using OpenAI Microscope.\n\n\n#### Results:\n\n\nSimilar to the pattern generation experiment, we see quick convergence and a tendency to find temporally dynamic solutions. In other words, resulting NCAs do not stay still. We also observe that the majority of the NCAs learn to produce solitons of various kinds. We discuss a few below, but encourage readers to explore them in the demo. \n\n\n\n\n\n\nAn NCA trained to excite **mixed4c\\_439** in Inception.\n\nSolitons in the form of regular circle-like shapes with internal structure are quite commonly observed in the inception renderings. Two solitons approaching each other too closely may cause one or both of them to decay. We also observe that solitons can divide into two new solitons.\n\n\n\n\n\n\nAn NCA trained to excite **mixed3b\\_454** in Inception.\n\nIn textures that are composed of threads or lines, or in certain excitations of Inception neurons where the resulting NCA has a “thread-like” quality, the threads grow in their respective directions and will join other threads, or grow around them, as required. This behaviour is similar to the regular lines observed in the striped patterns during pattern generation.\n\n\nOther interesting findings\n--------------------------\n\n\n### Robustness\n\n\n#### Switching manifolds\n\n\nWe encode local information flow within the NCA using the same fixed Laplacian and gradient filters. As luck would have it, these can be defined for most underlying manifolds, giving us a way of placing our cells on various surfaces and in various configurations without having to modify the learned model. Suppose we want our cells to live in a hexagonal world. We can redefine our kernels as follows:\n\n\n\n\n![](images/hex_kernels.svg)\nHexagonal grid convolutional filters.\n\nOur model, trained in a purely square environment, works out of the box on a hexagonal grid! Play with the corresponding setting in the demo to experiment with this. Zooming in allows observation of the individual hexagonal or square cells. As can be seen in the demo, the cells have no problem adjusting to a hexagonal world and producing identical patterns after a brief period of re-alignment.\n\n\n \n\n\n\n![](images/coral_square.png)\n![](images/coral_hex.png)\nThe same texture evaluated on a square and hexagonal grid, respectively.\n\n#### Rotation\n\n\n\n\n![](images/mond_rot0.png)\n![](images/mond_rot1.png)\n![](images/mond_rot2.png)\n![](images/mond_rot3.png)\nMondrian pattern where the cells are rotated in various directions. Note that the NCA is not re-trained - it gen-\neralises to this new rotated paradigm without issue.\n\n\nIn theory, the cells can be evaluated on any manifold where one can define approximations to the Sobel kernel and the Laplacian kernel. We demonstrate this in our demo by providing an aforementioned “hexagonal” world for the cells to live in. Instead of having eight equally-spaced neighbours, each cell now has six equally-spaced neighbours. We further demonstrate this versatility by rotating the Sobel and Laplacian kernels. Each cell receives an innate global orientation based on these kernels, because they are defined with respect to the coordinate system of the state. 
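To make this concrete, the square-grid perception step can be sketched as a per-channel (depthwise) convolution with these fixed kernels, with an optional angle that mixes the two gradient responses. This is a minimal PyTorch-style sketch assuming a `[batch, channels, height, width]` state tensor, not the exact reference implementation.

```python
import math
import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
SOBEL_Y = SOBEL_X.t()
LAPLACIAN = torch.tensor([[1., 2., 1.], [2., -12., 2.], [1., 2., 1.]])

def depthwise(state, kernel):
    # Apply one 3x3 kernel independently to every channel of the state.
    c = state.shape[1]
    weight = kernel.to(state.dtype).reshape(1, 1, 3, 3).repeat(c, 1, 1, 1)
    return F.conv2d(state, weight, padding=1, groups=c)

def perceive(state, angle=0.0):
    # state: [batch, channels, height, width]
    gx = depthwise(state, SOBEL_X)
    gy = depthwise(state, SOBEL_Y)
    lap = depthwise(state, LAPLACIAN)  # the Laplacian is (nearly) rotation-invariant
    # Rotating the cells' coordinate system only mixes the two gradient responses
    # (one common sign convention; others are equally valid).
    c, s = math.cos(angle), math.sin(angle)
    gx, gy = c * gx - s * gy, s * gx + c * gy
    return torch.cat([state, gx, gy, lap], dim=1)
```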
Redefining the Sobel and Laplacian kernel with a rotated coordinate system is straightforward and can even be done on a per-cell level. Such versatility is exciting because it mirrors the extreme robustness found in biological cells in nature. Cells in most tissues will generally continue to operate whatever their location, direction, or exact placement relative to their neighbours. We believe this versatility in our model could even extend to a setting where the cells are placed on a manifold at random, rather than on an ordered grid.\n\n\n#### Time-synchronization\n\n\n\n\n\nTwo NCAs running next to each other, at different speeds, with some stochasticity in speed. They can communicate through their shared edge; the vertical boundary between them in the center of the state space.\n\nStochastic updates teach the cells to be robust to asynchronous updates. We investigate this property by taking it to an extreme and asking *how* *do the cells react if two manifolds are allowed to communicate but one runs the NCA at a different speed than the other*? The result is surprisingly stable; the CA is still able to construct and maintain a consistent texture across the combined manifold. The time discrepancy between the two CAs sharing the state is far larger than anything the NCA experiences during training, showing remarkable robustness of the learned behaviour. Parallels can be drawn to organic matter self repairing, for instance a fingernail can regrow in adulthood despite the underlying finger already having fully developed; the two do not need to be sync. This result also hints at the possibility of designing distributed systems without having to engineer for a global clock, synchronization of compute units or even homogenous compute capacity. \n\n\n\n\n\nAn NCA is evaluated for a number of steps. The surrounding border of cells are then also turned into NCA cells. The cells have no difficulty communicating with the “finished” pattern and achieving consistency. \n\nAn even more drastic example of this robustness to time asynchronicity can be seen above. Here, an NCA is iterated until it achieves perfect consistency in a pattern. Then, the state space is expanded, introducing a border of new cells around the existing state. This border quickly interfaces with the existing cells and settles in a consistent pattern, with almost no perturbation to the already-converged inner state.\n\n\n#### Failure cases\n\n\nThe failure modes of a complex system can teach us a great deal about its internal structure and process. Our model has many quirks and sometimes these prevent it from learning certain patterns. Below are some examples.\n\n\n\n\n![](images/fail_mondrian.jpeg)\n![](images/fail_sprinkle.jpeg)\n![](images/fail_chequerboard.jpeg)\nThree failure cases of the NCA. Bottom row shows target texture samples, top row are corresponding NCA outputs. Failure modes include incorrect colours, chequerboard artefacts, and incoherent image structure.\n\nSome patterns are reproduced somewhat accurately in terms of structure, but not in colour, while some are the opposite. Others fail completely. It is difficult to determine whether these failure cases have their roots in the parametrization (the NCA), or in the hard-to-interpret gradient signals from VGG, or Inception. Existing work with style transfer suggests that using a loss on Gram matrices in VGG can introduce instabilities , that are similar to the ones we see here. We hypothesize that this effect explains the failures in reproducing colors. 
The structural failures, meanwhile, may be caused by the NCA parameterization, which makes it difficult for cells to establish long-distance communication with one another.\n\n\n### Hidden states\n\n\nWhen biological cells communicate with each other, they do so through a multitude of available communication channels. Cells can emit or absorb different ions and proteins, sense physical motion or “stiffness” of other cells, and even emit different chemical signals to diffuse over the local substrate . \n\n\nThere are various ways to visualize communication channels in real cells. One of them is to add to cells a potential-activated dye. Doing so gives a clear picture of the voltage potential the cell is under with respect to the surrounding substrate. This technique provides useful insight into the communication patterns within groups of cells and helps scientists visualize both local and global communication over a variety of time-scales.\n\n\nAs luck would have it, we can do something similar with our Neural Cellular Automata. Our NCA model contains 12 channels. The first three are visible RGB channels and the rest we treat as latent channels which are visible to adjacent cells during update steps, but excluded from loss functions. Below we map the first three principle components of the hidden channels to the R,G, and B channels respectively. Hidden channels can be considered “floating,” to abuse a term from circuit theory. In other words, they are not pulled to any specific final state or intermediate state by the loss. Instead, they converge to some form of a dynamical system which assists the cell in fulfilling its objective with respect to its visible channels. There is no pre-defined assignment of different roles or meaning to different hidden channels, and there is almost certainly redundancy and correlation between different hidden channels. Such correlation may not be visible when we visualize the first three principal components in isolation. But this concern aside, the visualization yields some interesting insights anyways.\n\n\n \n\n\n\n\n**Left:** RGB channels of NCA. **Right:** Intensities of top three principal components of hidden states. \n \n An NCA trained to excite **mixed4b\\_70** in Inception. Notice the hidden states appear to encode information about structure. “Threads” along the major diagonal (NW - SE) appear primarily green, while those running along the anti-diagonal appear blue, indicating that these have differing internal states, despite being effectively indistinguishable in RGB space.\n\nIn the principal components of this coral-like texture, we see a pattern which is similar to the visible channels. However, the “threads” pointing in each diagonal direction have different colours - one diagonal is green and the other is a pale blue. This suggests that one of the things encoded into the hidden states is the direction of a “thread”, likely to allow cells that are inside one of these threads to keep track of which direction the thread is growing, or moving, in. \n\n\n\n\n\n**Left:** RGB channels of NCA. **Right:** Intensities of top three principal components of hidden states. \n \n An NCA trained to produce a texture based on DTD image **cheqeuered\\_0121**. 
Notice the structure of squares - with a gradient occurring inside the structure of each square, evidencing that structure is being encoded in hidden state.\n\nThe chequerboard pattern likewise lends itself to some qualitative analysis and hints at a fairly simple mechanism for maintaining the shape of squares. Each square has a clear gradient in PCA space across the diagonal, and the values this gradient traverses differ for the white and black squares. We find it likely the gradient is used to provide a local coordinate system for creating and sizing the squares. \n\n\n\n\n\n**Left:** RGB channels of NCA. **Right:** Intensities of top three principal components of hidden states. \n \n An NCA trained to excite **mixed4c\\_208** in Inception. The visible body of the eye is clearly demarcated in the hidden states. There is also a “halo” which appears to modulate growth of any solitons immediately next to each other. This halo is barely visible in the RGB channels.\n\nWe find surprising insight in NCA trained on Inception as well. In this case, the structure of the eye is clearly encoded in the hidden state with the body composed primarily of one combination of principal components, and an halo, seemingly to prevent collisions of the eye solitons, composed of another set of principal components.\n\n\nAnalysis of these hidden states is something of a dark art; it is not always possible to draw rigorous conclusions about what is happening. We welcome future work in this direction, as we believe qualitative analysis of these behaviours will be useful for understanding more complex behaviours of CAs. We also hypothesize that it may be possible to modify or alter hidden states in order to affect the morphology and behaviour of NCA. \n\n\nConclusion\n----------\n\n\nIn this work, we selected texture templates and individual neurons as targets and then optimized NCA populations so as to produce similar excitations in a pre-trained neural network. This procedure yielded NCAs that could render nuanced and hypnotic textures. During our analysis, we found that these NCAs have interesting and unexpected properties. Many of the solutions for generating certain patterns in an image appear similar to the underlying model or physical behaviour producing the pattern. For example, our learned NCAs seem to have a bias for treating objects in the pattern as individual objects and letting them move freely across space. While this effect was present in many of our models, it was particularly strong in the bubble and eye models. The NCA is forced to find algorithms that can produce such a pattern with purely local interaction. 
This constraint seems to produce models that favor high-level consistency and robustness.\n\n\n\n![](images/multiple-pages.svg)\n\n This article is part of the\n [Differentiable Self-organizing Systems Thread](/2020/selforg/),\n an experimental format collecting invited short articles delving into\n differentiable self-organizing systems, interspersed with critical\n commentary from several experts in adjacent fields.\n \n\n\n[Self-classifying MNIST Digits](/2020/selforg/mnist/)\n[Adversarial Reprogramming of Neural Cellular Automata](/selforg/2021/adversarial/)", "date_published": "2021-02-11T20:00:00Z", "authors": ["Eyvind Niklasson", "Alexander Mordvintsev", "Michael Levin"], "summaries": ["Neural Cellular Automata learn to generate textures, exhibiting surprising properties."], "doi": "10.23915/distill.00027.003", "journal_ref": "distill-pub", "bibliography": [{"link": "https://distill.pub/2020/growing-ca", "title": "Growing Neural Cellular Automata"}, {"link": "http://arxiv.org/pdf/2008.04965.pdf", "title": "Image segmentation via Cellular Automata"}, {"link": "https://distill.pub/2020/selforg/mnist", "title": "Self-classifying MNIST Digits"}, {"link": "https://distill.pub/2018/differentiable-parameterizations", "title": "Differentiable Image Parameterizations"}, {"link": "https://doi.org/10.1098/rstb.1952.0012", "title": "The chemical basis of morphogenesis"}, {"link": "http://dx.doi.org/10.1016/j.gde.2012.11.013", "title": "Turing patterns in development: what about the horse part?"}, {"link": "http://dx.doi.org/10.1038/ncomms5905", "title": "A unified design space of synthetic stripe-forming networks"}, {"link": "http://dx.doi.org/10.1016/j.devcel.2017.04.021", "title": "On the Formation of Digits and Joints during Limb Development"}, {"link": "http://dx.doi.org/10.1126/science.1252960", "title": "Modeling digits. Digit patterning is controlled by a Bmp-Sox9-Wnt Turing network modulated by morphogen gradients"}, {"link": "http://dx.doi.org/10.1016/j.ydbio.2019.10.031", "title": "Pattern formation mechanisms of self-organizing reaction-diffusion systems"}, {"link": "http://dx.doi.org/10.1098/rsif.2017.0425", "title": "Bioelectric gene and reaction networks: computational modelling of genetic, biochemical and bioelectrical dynamics in pattern regulation"}, {"link": "http://dx.doi.org/10.1101/336461", "title": "Turing-like patterns can arise from purely bioelectric mechanisms"}, {"link": "http://dx.doi.org/10.1098/rsta.2017.0376", "title": "Dissipative structures in biological systems: bistability, oscillations, spatial patterns and waves"}, {"link": "http://dx.doi.org/10.1002/bies.200900072", "title": "Gene networks and liar paradoxes"}, {"link": "http://arxiv.org/pdf/1505.07376.pdf", "title": "Texture Synthesis Using Convolutional Neural Networks"}, {"link": "http://dx.doi.org/10.1007/BF02459572", "title": "The chemical basis of morphogenesis. 
1953"}, {"link": "http://dx.doi.org/10.1126/science.261.5118.192", "title": "Pattern formation by interacting chemical fronts"}, {"link": "http://dx.doi.org/10.1126/science.261.5118.189", "title": "Complex patterns in a simple system"}, {"link": "http://arxiv.org/pdf/1409.1556.pdf", "title": "Very Deep Convolutional Networks for Large-Scale Image Recognition"}, {"link": "http://arxiv.org/pdf/1412.6980.pdf", "title": "Adam: A Method for Stochastic Optimization"}, {"link": "http://arxiv.org/pdf/1311.3618.pdf", "title": "Describing Textures in the Wild"}, {"link": "http://doi.wiley.com/10.1207/s15516709cog2102_4", "title": "The texture lexicon: Understanding the categorization of visual texture terms and their relationship to texture images"}, {"link": "http://dx.doi.org/10.1039/c5ib00221d", "title": "Re-membering the body: applications of computational neuroscience to the top-down control of regeneration of limbs and other complex organs"}, {"link": "http://dx.doi.org/10.1097/00000441-193811000-00047", "title": "Embryonic Development and Induction"}, {"link": "https://linkinghub.elsevier.com/retrieve/pii/B9780125431057500129", "title": "Communication, Memory, and Development"}, {"link": "https://github.com/mxgmn/WaveFunctionCollapse", "title": "WaveFunctionCollapse"}, {"link": "http://arxiv.org/pdf/1603.03417.pdf", "title": "Texture Networks: Feed-forward Synthesis of Textures and Stylized Images"}, {"link": "https://openaccess.thecvf.com/content_cvpr_2018/papers/Xian_TextureGAN_Controlling_Deep_CVPR_2018_paper.pdf", "title": "TextureGAN: Controlling deep image synthesis with texture patches"}, {"link": "http://dx.doi.org/10.1162/artl_a_00023", "title": "Interactive evolution of camouflage"}, {"link": "https://www.cns.nyu.edu/pub/eero/portilla99-reprint.pdf", "title": "A parametric texture model based on joint statistics of complex wavelet coefficients"}, {"link": "http://dx.doi.org/10.1109/TPAMI.2016.2596743", "title": "Trainable Nonlinear Reaction Diffusion: A Flexible Framework for Fast and Effective Image Restoration"}, {"link": "https://www.researchgate.net/publication/227464385_The_evolutionary_significance_of_butterfly_eyespots", "title": "The evolutionary significance of butterfly eyespots"}, {"link": "http://dx.doi.org/10.1371/journal.pone.0128332", "title": "Live Cell Imaging of Butterfly Pupal and Larval Wings In Vivo"}, {"link": "http://dx.doi.org/10.1186/s40064-016-2969-8", "title": "Focusing on butterfly eyespot focus: uncoupling of white spots from eyespot bodies in nymphalid butterflies"}, {"link": "https://openai.com/blog/microscope/", "title": "OpenAI Microscope"}, {"link": "http://dx.doi.org/10.1073/pnas.0810311106", "title": "The neural origins of shell structure and pattern in aquatic mollusks"}, {"link": "http://dx.doi.org/10.4161/cib.2.6.9260", "title": "Emergent complexity in simple neural systems"}, {"link": "https://arxiv.org/pdf/1409.4842.pdf", "title": "Going deeper with convolutions"}, {"link": "http://arxiv.org/pdf/1701.08893.pdf", "title": "Stable and Controllable Neural Texture Synthesis and Style Transfer Using Histogram Losses"}, {"link": "http://dx.doi.org/10.1073/pnas.1618239114", "title": "Stem cell migration and mechanotransduction on linear stiffness gradient hydrogels"}]} {"id": "e0055470e63b5b83359c358c17b108f9", "title": "Visualizing Weights", "url": "https://distill.pub/2020/circuits/visualizing-weights", "source": "distill", "source_type": "blog", "text": "![](images/multiple-pages.svg)\n\n This article is part of the [Circuits 
thread](/2020/circuits/), an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks.\n \n\n\n[Curve Circuits](/2020/circuits/curve-circuits/)\n[Branch Specialization](/2020/circuits/branch-specialization/)\n\nIntroduction\n------------\n\n\nThe problem of understanding a neural network is a little bit like reverse engineering a large compiled binary of a computer program. In this analogy, the weights of the neural network are the compiled assembly instructions. At the end of the day, the weights are the fundamental thing you want to understand: how does this sequence of convolutions and matrix multiplications give rise to model behavior?\n\n\nTrying to understand artificial neural networks also has a lot in common with neuroscience, which tries to understand biological neural networks. As you may know, one major endeavor in modern neuroscience is mapping the [connectomes](https://en.wikipedia.org/wiki/Connectome) of biological neural networks: which neurons connect to which. These connections, however, will only tell neuroscientists which weights are non-zero. Getting the weights – knowing whether a connection excites or inhibits, and by how much – would be a significant further step. One imagines neuroscientists might give a great deal to have the access to weights that those of us studying artificial neural networks get for free.\n\n\nAnd so, it’s rather surprising how little attention we actually give to looking at the weights of neural networks. There are a few exceptions to this, of course. It’s quite common for researchers to show pictures of the first layer weights in vision models (these are directly connected to RGB channels, so they’re easy to understand as images). In some work, especially historically, we see researchers reason about the weights of toy neural networks by hand. And we quite often see researchers discuss aggregate statistics of weights. But actually looking at the weights of a neural network other than the first layer is quite uncommon – to the best of our knowledge, mapping weights between hidden layers to meaningful algorithms is novel to the circuits project.\n\n\n\nWhat’s the difference between visualizing activations, weights, and attributions?\n---------------------------------------------------------------------------------\n\n\nIn this article, we’re focusing on visualizing weights. But people often visualize activations, attributions, gradients, and much more. How should we think about the meaning of visualizing these different objects?\n\n\n* **Activations:** We generally think of these as being “what” the network saw. If understanding a neural network is like reverse compiling a computer program, the neurons are the variables, and the activations are the values of those variables.\n* **Weights:** We generally think of these as being “how” the neural network computes one layer from the previous one. In the reverse engineering analogy, these are compiled assembly instructions.\n* **Attributions:** Attributions try to tell us the extent to which one neuron influenced a later neuron.We often think of this as “why” the neuron fired. We need to be careful with attributions, because they’re a human-defined object on top of a neural network rather than a fundamental object. They aren’t always well defined, and people mean different things by them. 
(They are very well defined if you are only operating across adjacent layers!)\n\n\nWhy it’s non-trivial to study weights in hidden layers\n------------------------------------------------------\n\n\nIt seems to us that there are three main barriers to making sense of the weights in neural networks, which may have contributed to researchers tending to not directly inspect them:\n\n\n* **Lack of Contextualization:** Researchers often visualize weights in the first layer, because they are linked to RGB values that we understand. That connection makes weights in the first layer meaningful. But weights between hidden layers are meaningless by default: knowing nothing about either the source or the destination, how can we make sense of them?\n* **Indirect Interaction:** Sometimes, the meaningful weight interactions are between neurons which aren’t literally adjacent in a neural network. For example, in a residual network, the output of one neuron can pass through the additive residual stream and linearly interact with a neuron much later in the network. In other cases, neurons may interact through intermediate neurons without significant nonlinear interactions. How can we efficiently reason about these interactions?\n* **Dimensionality and Scale:** Neural networks have lots of neurons. Those neurons connect to lots of other neurons. There’s a lot of data to display! How can we reduce it to a human-scale amount of information?\n\n\nMany of the methods we’ll use to address these problems were previously explored in [Building Blocks](https://distill.pub/2018/building-blocks/) in the context of understanding activation vectors. The goal of this article is to show how similar ideas can be applied to weights instead of activations. Of course, we’ve already implicitly used these methods in various circuit articles, but in those articles the methods have been of secondary interest to the results. It seems useful to give some dedicated discussion to the methods.\n\n\nAside: One Simple Trick\n-----------------------\n\n\nInterpretability methods often fail to take off because they’re hard to use. So before diving into sophisticated approaches, we wanted to offer a simple, easy to apply method.\n\n\nIn a convolutional network, the input weights for a given neuron have shape `[width, height, input_channels]`. Unless this is the first convolutional layer, this probably can’t be easily visualized because `input_channels` is large. (If this is the first convolutional layer, visualize it as is!) However, one can use dimensionality reduction to collapse `input_channels` down to 3 dimensions. We find one-sided NMF especially effective for this.\n\n\n\n![](images/screenshot_1.png)\n\n\n[1](#figure-1):\n NMF of input weights in InceptionV1 `mixed4d_5x5`, for a selection of ten neurons. The red, green, and blue channels on each grid indicate the weights for each of the 3 NMF factors.\n \n\n\n\nThis visualization doesn’t tell you very much about what your weights are doing in the context of the larger model, but it does show you that they are learning nice spatial structures. This can be an easy sanity check that your neurons are learning, and a first step towards understanding your neuron’s behavior. We’ll also see later that this general approach of factoring weights can be extended into a powerful tool for studying neurons.\n\n\nDespite this lack of contextualization, one-sided NMF can be a great technique for investigating multiple channels at a glance. 
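As a sketch of what this trick looks like in practice: note that NMF requires non-negative entries, so this example factors the absolute values of the weights, which is one reasonable choice but not necessarily the exact preprocessing used here.

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_weight_image(weights):
    # weights: [width, height, input_channels] input weights for one neuron.
    w, h, c = weights.shape
    flat = np.abs(weights).reshape(w * h, c)
    # Factor the input_channels axis down to 3 components ("one-sided" NMF).
    model = NMF(n_components=3, init='nndsvd', max_iter=500)
    spatial = model.fit_transform(flat)        # [w*h, 3]
    rgb = spatial.reshape(w, h, 3)
    return rgb / (rgb.max() + 1e-8)            # normalize for display as an RGB image
```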
One thing you may quickly discover using this method is that, in models with global average pooling at the end of their convolutional layers, the last few layers will have all their weights be horizontal bands.\n\n\n\n![](images/screenshot_2.png)\n\n\n[2](#figure-2):\n Horizontally-banded weights in InceptionV1 `mixed5b_5x5`, for a selection of eight neurons. As in Figure 1, the red, green, and blue channels on each grid indicate the weights for each of the 3 NMF factors.\n \n\n\n\n\nWe call this phenomenon [*weight banding*](/2020/circuits/weight-banding/). One-sided NMF allows for quickly testing and validating hypotheses about phenomena such as weight banding.\n\n\nContextualizing Weights with Feature Visualization\n--------------------------------------------------\n\n\nOf course, looking at weights in a vacuum isn’t very interesting. In order to really understand what’s going on, we need to *contextualize* weights in the broader context of the network. The challenge of contextualization is a recurring challenge in understanding neural networks: we can easily observe every activation, every weight, and every gradient; the challenge lies in determining what those values represent.\n\n\nRecall that the weights between two convolutional layers are a four dimensional array of the shape:\n\n\n`[relative x position, relative y position,\n input channels, output channels]`\n\n\nIf we fix the input channel and the output channel, we get a 2D array we can present with traditional data visualization. Let’s assume we know which neuron we’re interested in understanding, so we have the output channel. We can pick the input channels with high magnitude weights to our output channel.\n\n\nBut what does the input represent? What about the output?\n\n\nThe key trick is that techniques like feature visualization (or deeper investigations of neurons) can help us understand what the input and output neurons represent, contextualizing the graph. Feature visualizations are especially attractive because they’re automatic, and produce a single image which is often very informative about the neuron. As a result, we often represent neurons as feature visualizations in weight diagrams.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n[3](#figure-3): Contextualizing weights.\n \n\n\n\nThis approach is the weight analogue of using feature visualizations to contextualize activation vectors in [Building Blocks](https://distill.pub/2018/building-blocks/) (see the section titled “Making Sense of Hidden Layers”).\n\n\nWe can liken this to how, when reverse-engineering a normal compiled computer program, one would need to start assigning variable names to the values stored in registers to keep track of them. 
Feature visualizations are essentially automatic variable names for neurons, which are roughly analogous to those registers or variables.\n\n\n### Small Multiples\n\n\nOf course, neurons have multiple inputs, and it can be helpful to show the weights to several inputs at a time as a [small multiple](https://en.wikipedia.org/wiki/Small_multiple):\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n[4](#figure-4): Small multiple weights for [`mixed3b` 342](https://microscope.openai.com/models/inceptionv1/mixed3b_0/342).\n \n\n\n\nAnd if we have two families of related neurons interacting, it can sometimes even be helpful to show the weights between all of them as a grid of small multiples:\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n[5](#figure-5): Small multiple weights for a variety of [curve detectors](https://distill.pub/2020/circuits/curve-detectors/).\n\n\n\n\n\nAdvanced Approaches to Contextualization with Feature Visualization\n-------------------------------------------------------------------\n\n\nAlthough we most often use feature visualization to visualize neurons, we can visualize any direction (linear combination of neurons). This opens up a very wide space of possibilities for visualizing weights, of which we’ll explore a couple particularly useful ones.\n\n\n### Visualizing Spatial Position Weights\n\n\nRecall that the weights for a single neuron have shape `[width, height, input_channels]`. In the previous section we split up `input_channels` and visualized each `[width, height]` matrix. But an alternative approach is to think of there as being a vector over input neurons at each spatial position, and to apply feature visualization to each of those vectors. You can think of this as telling us what the weights in that position are collectively looking for.\n\n\n\n![](images/screenshot_6.png)\n\n\n[6](#figure-6). **Left:** Feature visualization of a car neuron. **Right:** Feature visualizations of the vector over input neurons at each spatial position of the car neuron’s weights. As we see, the car neuron broadly responds to window features above wheel features.\n \n\n\n\nThis visualization is the weight analogue of the [“Activation Grid” visualization](https://distill.pub/2018/building-blocks/#ActivationVecVis) from Building Blocks. It can be a nice, high density way to get an overview of what the weights for one neuron are doing. 
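In code, this amounts to treating the weight slice at each spatial position as a direction - a linear combination of input-layer neurons - and rendering a feature visualization of that direction. A rough sketch, where `feature_vis_direction` stands in for whatever feature-visualization routine is available (for example, one built on the Lucid library):

```python
def spatial_position_visualizations(weights, model, input_layer, feature_vis_direction):
    # weights: [width, height, input_channels] for a single output neuron.
    # feature_vis_direction(model, layer, vector) is a stand-in for a routine that
    # optimizes an image to excite the given direction in the given layer.
    w, h, _ = weights.shape
    grid = []
    for x in range(w):
        row = []
        for y in range(h):
            direction = weights[x, y]   # what this position is collectively looking for
            row.append(feature_vis_direction(model, input_layer, direction))
        grid.append(row)
    return grid  # a width-by-height grid of images, as in Figure 6
```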
However, it will be unable to capture cases where one position responds to multiple very different things, as in a multi-faceted or polysemantic neuron.

### Visualizing Weight Factors

Feature visualization can also be applied to factorizations of the weights, which we briefly discussed earlier. This is the weight analogue of the "Neuron Groups" visualization from Building Blocks.

This can be especially helpful when you have a group of neurons, like [high-low frequency detectors](/2020/circuits/frequency-edges/) or black and white vs color detectors, that are all mostly looking for a small number of factors. For example, a large number of high-low frequency detectors can be significantly understood as combining just two factors – a high-frequency factor and a low-frequency factor – in different patterns.

[7](#figure-7): NMF factorization on the weights (excitatory and inhibitory) connecting six high-low frequency detectors in InceptionV1 (`mixed3a` units 136, 108, 132, 88, 110, 180) to the layer `conv2d2`, summarized as a high-frequency (HF) factor and a low-frequency (LF) factor.

These factors can then be decomposed into individual neurons for more detailed understanding.

[8](#figure-8): Neurons (shown by their feature visualizations) that contribute to the two NMF factors from [Figure 7](#figure-7), plus the weighted amount each contributes (for example, ×0.93, ×0.73, ×0.66, ×0.59, ×0.55 for the top `conv2d2` contributors to the HF factor, and ×0.44, ×0.41, ×0.38, ×0.36, ×0.34 for the LF factor).

Dealing with Indirect Interactions
----------------------------------

As we mentioned earlier, sometimes the meaningful weight interactions are between neurons which aren't literally adjacent in a neural network, or where the weights aren't directly represented in a single weight tensor. A few examples:

* In a residual network, the output of one neuron can pass through the additive residual stream and linearly interact with a neuron much later in the network.
* In a separable convolution, weights are stored as two or more factors, and need to be expanded to link neurons.
* In a bottleneck architecture, neurons in the bottleneck may primarily be a low-rank projection of neurons from the previous layer.
* An intermediate layer may simply not introduce much non-linear behavior, leaving two neurons in non-adjacent layers with a significant linear interaction.

As a result, we often work with "expanded weights" – that is, the result of multiplying adjacent weight matrices, potentially ignoring non-linearities.
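In the simplest case - two adjacent 1×1 convolutions, as in a bottleneck - "multiplying out" the weights is literally a matrix product. A toy sketch with made-up layer sizes:

```python
import numpy as np

# Toy example: two adjacent 1x1 convolutions, treated as plain matrices.
# w1 maps layer A (64 channels) to a bottleneck B (16 channels);
# w2 maps the bottleneck B to layer C (128 channels).
w1 = np.random.randn(16, 64)    # [B_channels, A_channels]
w2 = np.random.randn(128, 16)   # [C_channels, B_channels]

# Ignoring the nonlinearity in between, the "expanded" A -> C weights are just
# the matrix product, giving a direct linear map between non-adjacent layers.
expanded = w2 @ w1              # [C_channels, A_channels]

# expanded[j, i] says how strongly channel i of layer A excites or inhibits
# channel j of layer C through the (linearized) bottleneck.
```

For spatial convolutions the composition of two linear layers is itself a convolution, with a kernel given by convolving the two kernels, which is one reason it is usually more convenient to obtain expanded weights by differentiating through a linearized model, as described next.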
We generally implement expanded weights by taking gradients through our model, ignoring or replacing all non-linear operations with the closest linear one.\n\n\nThese expanded weights have the following properties:\n\n\n* If two layers interact **linearly**, the expanded weights will give the true linear map, even if the model doesn’t explicitly represent the weights in a single weight matrix.\n* If two layers interact **non-linearly**, the expanded weights can be seen as the expected value of the gradient up to a constant factor, under the assumption that all neurons have an equal (and independent) probability of firing.\n\n\nThey also have one additional benefit, which is more of an implementation detail: because they’re implemented in terms of gradients, you don’t need to know how the weights are represented. For example, in TensorFlow, you don’t need to know which variable object represents the weights. This can be a significant convenience when you’re working with unfamiliar models!\n\n\n### Benefits of Expanded Weights\n\n\nMultiplying out the weights like this can sometimes help us see a simpler underlying structure. For example, [`mixed3b` 208](https://microscope.openai.com/models/inceptionv1/mixed3b_0/208) is a black and white center detector. It’s built by combining a bunch of black and white vs color detectors together.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n[9](#figure-9). `mixed3b` 208 along with five neurons from `mixed3a` that contribute the [strongest weights](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_208.html) to it.\n \n\n\n\n\nExpanding out the weights allows us to see an important aggregate effect of these connections: together, they look for the absence of color in the center one layer further back.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n[10](#figure-10). Top eighteen [expanded weights](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_208.html) from `conv2d2` to [`mixed3b` 208](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_208.html), organized in two rows according to weight factorization.\n \n\n\n\n\nA particularly important use of this method – which we’ve been implicitly using in earlier examples – is to jump over “bottleneck layers.” Bottleneck layers are layers of the network which squeeze the number of channels down to a much smaller number, typically in a branch, making large spatial convolutions cheaper. The [bottleneck layers](https://microscope.openai.com/models/inceptionv1/mixed3a_5x5_bottleneck_0?models.op.feature_vis.type=channel&models.op.technique=feature_vis) of InceptionV1 are one example. Since so much information is compressed, these layers are often polysemantic, and it can often be more helpful to jump over them and understand the connection to the wider layer before them.\n\n\n### Cases where expanded weights are misleading\n\n\nExpanded weights can, of course, be misleading when non-linear structure is important. 
A good example of this is [boundary detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_boundary). Recall that boundary detectors usually detect both low-to-high and high-to-low frequency transitions:\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n[11](#figure-11). Boundary detectors such as [`mixed3b` 345](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_345.html) detect both low-to-high and high-to-low frequency transitions.\n \n\n\n\n\nSince high-low frequency detectors are [usually](/2020/circuits/frequency-edges/) excited by high-frequency patterns on one side and inhibited on the other (and vice versa for low frequency), detecting both directions means that the expanded weights cancel out! As a result, expanded weights appear to show that boundary detectors are neither excited or inhibited by high frequency detectors two layers back, when in fact they are *both* excited and also inhibited by high frequency, depending on the context, and it’s just that those two different cases cancel out.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n[12](#figure-12).\n\n Neurons two layers back (such as [`conv2d2` 89](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_89.html)) may have a strong influence on the high-low frequency detectors that contribute to `mixed3b` 345 (top), but that influence washes out when we look at the expanded weights (bottom) directly between `conv2d2` 89 and `mixed3b` 345.\n \n\n\n\n\nMore sophisticated techniques for describing multi-layer interactions can help us understand cases like this. For example, one can determine what the “best case” excitation interaction between two neurons is (that is, the maximum achievable gradient between them). Or you can look at the gradient for a particular example. Or you can factor the gradient over many examples to determine major possible cases. These are all useful techniques, but we’ll leave them for a future article to discuss.\n\n\n### Qualitative properties\n\n\nOne qualitative property of expanding weights across many layers deserves mention before we end our discussion of them. Expanded weights often get this kind of “electron orbital”-like smooth spatial structures:\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n[13](#figure-13). Smooth spatial structure of some expanded weights from [`mixed3b` 268](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_268.html) to `conv2d1`.\n\n Although the exact structures present may vary from neuron to neuron, this example is not cherry-picked: this smoothness is typical of most multiple-layer expanded weights.\n \n\n\n\n\nIt’s not clear how to interpret this, but it’s suggestive of rich spatial structure on the scale of multiple layers.\n\n\nDimensionality and Scale\n------------------------\n\n\nSo far, we’ve addressed the challenges of contextualization and indirection interactions. But we’ve only given a bit of attention to our third challenge of dimensionality and scale. Neural networks contain many neurons and each one connects to many others, creating a huge amount of weights. 
How do we pick which connections between neurons to look at?\n\n\nFor the purposes of this article, we'll set aside the question of which neurons we want to study, and only discuss the problem of picking which connections to study. (We may be trying to comprehensively study a model, in which case we want to study all neurons. But we might also, for example, be trying to study neurons we've determined are related to some narrower aspect of model behavior.)\n\n\nGenerally, we choose to look at the largest weights, as we did at the beginning of the section on contextualization. Unfortunately, there tends to be a long tail of small weights, and at some point it becomes impractical to look at them all. How much of the story is really hiding in these small weights? We don't know, but polysemantic neurons suggest there could be a very important and subtle story hiding here! There's some hope that sparse neural networks might make this much better by getting rid of small weights, but whether such conclusions can be drawn about non-sparse networks remains speculative.\n\n\nAn alternative strategy, which we've touched on a few times, is to reduce the weights to a small number of components and then study those factors (for example, with NMF). Often, a very small number of components can explain much of the variance. In fact, sometimes a small number of factors can explain the weights of an entire set of neurons! Prominent examples of this are high-low frequency detectors (as we saw earlier) and black and white vs color detectors.\n\n\nHowever, this approach also has downsides. Firstly, these components can be harder to understand and may themselves be polysemantic. For example, if you apply the basic version of this method to a boundary detector, one component will contain both high-to-low and low-to-high frequency detectors, which makes it hard to analyze. Secondly, your factors no longer align with activation functions, which makes analysis much messier.
Finally, because you will be reasoning about every neuron in a different basis, it is difficult to build a bigger picture view of the model unless you convert your components back to neurons.\n\n\n\n![](images/multiple-pages.svg)\n\n This article is part of the [Circuits thread](/2020/circuits/), an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks.\n \n\n\n[Curve Circuits](/2020/circuits/curve-circuits/)\n[Branch Specialization](/2020/circuits/branch-specialization/)", "date_published": "2021-02-04T20:00:00Z", "authors": ["Chelsea Voss", "Nick Cammarata", "Gabriel Goh", "Michael Petrov", "Ludwig Schubert", "Swee Kiat Lim", "Chris Olah"], "summaries": ["We present techniques for visualizing, contextualizing, and understanding neural network weights."], "doi": "10.23915/distill.00024.007", "journal_ref": "distill-pub", "bibliography": [{"link": "http://yosinski.com/media/papers/Yosinski__2015__ICML_DL__Understanding_Neural_Networks_Through_Deep_Visualization__.pdf", "title": "Understanding neural networks through deep visualization"}, {"link": "https://arxiv.org/pdf/1311.2901.pdf", "title": "Visualizing and understanding convolutional networks"}, {"link": "https://doi.org/10.23915/distill.00010", "title": "The Building Blocks of Interpretability"}, {"link": "https://doi.org/10.23915/distill.00024.001", "title": "Zoom In: An Introduction to Circuits"}, {"link": "https://doi.org/10.23915/distill.00024.002", "title": "An Overview of Early Vision in InceptionV1"}, {"link": "https://doi.org/10.23915/distill.00024.003", "title": "Curve Detectors"}, {"link": "https://distill.pub/2017/feature-visualization", "title": "Feature Visualization"}, {"link": "https://arxiv.org/pdf/1602.03616.pdf", "title": "Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks"}, {"link": "https://arxiv.org/pdf/1506.02078.pdf", "title": "Visualizing and understanding recurrent networks"}, {"link": "https://www.researchgate.net/profile/Aaron_Courville/publication/265022827_Visualizing_Higher-Layer_Features_of_a_Deep_Network/links/53ff82b00cf24c81027da530.pdf", "title": "Visualizing higher-layer features of a deep network"}]} {"id": "63e78aeacd5b314498232de7a27d4381", "title": "High-Low Frequency Detectors", "url": "https://distill.pub/2020/circuits/frequency-edges", "source": "distill", "source_type": "blog", "text": "![](images/multiple-pages.svg)\n\n This article is part of the [Circuits thread](/2020/circuits/), an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks.\n \n\n\n[Naturally Occurring Equivariance in Neural Networks](/2020/circuits/equivariance/)\n[Curve Circuits](/2020/circuits/curve-circuits/)\n\nIntroduction\n------------\n\n\nSome of the neurons in vision models are features that we aren’t particularly surprised to find. [Curve detectors](https://distill.pub/2020/circuits/curve-detectors/), for example, are a pretty natural feature for a vision system to have. In fact, they had already been discovered in the animal visual cortex. It’s easy to imagine how curve detectors are built up from earlier edge detectors, and it’s easy to guess why curve detection might be useful to the rest of the neural network.\n\n\nHigh-low frequency detectors, on the other hand, seem more surprising. They are not a feature that we would have expected *a priori* to find. 
Yet, when systematically characterizing the [early layers](https://distill.pub/2020/circuits/early-vision/) of InceptionV1, we found a full [fifteen neurons](https://distill.pub/2020/circuits/early-vision/#group_mixed3a_high_low_frequency) of `mixed3a` that appear to detect a high frequency pattern on one side, and a low frequency pattern on the other.\n\n\n\nBy “high frequency” and “low frequency” here, we mean [spatial frequency](https://en.wikipedia.org/wiki/Spatial_frequency) — just like when we take the\n Fourier transform of an image.\n\n\n\n\n One worry we might have about the [circuits](https://distill.pub/2020/circuits/zoom-in/) approach to studying neural networks is that we might only be able to understand a limited set of highly-intuitive features.\n\n High-low frequency detectors demonstrate that it’s possible to understand at least somewhat unintuitive features.\n \n\n\n[Function\n----------](#function)\n\n How can we be sure that “high-low frequency detectors” are actually detecting directional transitions from low to high spatial frequency?\n We will rely on three methods:\n \n\n\n* [**Feature visualization**](#feature-visualization) allows us to establish a causal link between each neuron and its function.\n* [**Dataset examples**](#dataset-examples) show us where the neuron fires in practice.\n* [**Synthetic tuning curves**](#tuning-curves) show us how variation affects the neuron’s response.\n\n\n\n Later on in the article, we dive into the mechanistic details of how they are both [implemented](#implementation) and [used](#usage). We will be able to understand the algorithm that implements them, confirming that they detect high to low frequency transitions.\n \n\n\n[### Feature Visualization](#feature-visualization)\n\n A [feature visualization](https://distill.pub/2017/feature-visualization) is a synthetic input\n optimized to elicit maximal activation of a single, specific neuron.\n Feature visualizations are constructed starting from random noise, so each and every pixel in a feature visualization\n that’s *changed* from random noise is there because it caused the neuron to activate more strongly. This\n establishes a causal link! 
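In code, the core of this procedure is just gradient ascent on the input. The sketch below is a bare-bones illustration, not the article's actual setup: `forward_to_layer` is a hypothetical stand-in for running the model up to the layer of interest, and the regularizers Lucid adds in practice (transformation robustness, a decorrelated image parameterization, and so on) are omitted.

```python
import torch

def feature_visualization(forward_to_layer, channel, steps=256, lr=0.05):
    """Start from noise and ascend the gradient of one channel's mean activation.
    `forward_to_layer(img)` should return that layer's activations, [1, C, H, W]."""
    img = (0.01 * torch.randn(1, 3, 224, 224)).requires_grad_(True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        acts = forward_to_layer(img)
        loss = -acts[0, channel].mean()   # negate: we want to *maximize* the activation
        loss.backward()
        opt.step()
    return img.detach().clamp(0, 1)       # real implementations keep the image valid throughout
```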
The behavior shown in the\n feature visualization is behavior that causes the neuron to fire:\n \n\n\n\n .gallery {\n display: grid;\n grid-template-columns: repeat(auto-fit, minmax(28px, 64px));\n grid-gap: 0.5rem;\n justify-content: start;\n }\n\n ul.gallery {\n padding-left: 0;\n }\n\n .gallery img,\n .gallery-img {\n max-width: 100%;\n width: unset;\n object-fit: none;\n object-position: center;\n border-radius: 8px;\n }\n\n @media screen and (min-width: 768px) {\n .gallery {\n grid-template-columns: repeat(7, minmax(28px, 96px));\n justify-content: left;\n }\n }\n @media screen and (min-width: 1180px) {\n .gallery {\n grid-gap: 1rem;\n }\n }\n \n\n[![](diagrams/1.1-feature-vis/neuron-136.png \"Unit 136\")\n3a:136](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_136.html)\n[![](diagrams/1.1-feature-vis/neuron-108.png \"Unit 108\")\n3a:108](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_108.html)\n[![](diagrams/1.1-feature-vis/neuron-132.png \"Unit 132\")\n3a:132](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_132.html)\n[![](diagrams/1.1-feature-vis/neuron-88.png \"Unit 88\")\n3a:88](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_88.html)\n[![](diagrams/1.1-feature-vis/neuron-110.png \"Unit 110\")\n3a:110](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_110.html)\n[![](diagrams/1.1-feature-vis/neuron-180.png \"Unit 180\")\n3a:180](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_180.html)\n[![](diagrams/1.1-feature-vis/neuron-153.png \"Unit 153\")\n3a:153](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_153.html)\n[![](diagrams/1.1-feature-vis/neuron-186.png \"Unit 186\")\n3a:186](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_186.html)\n[![](diagrams/1.1-feature-vis/neuron-86.png \"Unit 86\")\n3a:86](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_86.html)\n[![](diagrams/1.1-feature-vis/neuron-117.png \"Unit 117\")\n3a:117](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_117.html)\n[![](diagrams/1.1-feature-vis/neuron-112.png \"Unit 112\")\n3a:112](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_112.html)\n[![](diagrams/1.1-feature-vis/neuron-70.png \"Unit 70\")\n3a:70](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_70.html)\n[![](diagrams/1.1-feature-vis/neuron-106.png \"Unit 106\")\n3a:106](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_106.html)\n[![](diagrams/1.1-feature-vis/neuron-113.png \"Unit 113\")\n3a:113](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_113.html)\n\n\n[1](#figure-1):\n Feature visualizations of a variety of high-low frequency detectors from InceptionV1′s [`mixed3a`](https://microscope.openai.com/models/inceptionv1/mixed3a_0?models.op.feature_vis.type=neuron&models.op.technique=feature_vis) layer.\n \n\n\n From their feature visualizations, we observe that all of these high-low frequency detectors share these same\n characteristics:\n \n\n\n* **Detection of adjacent high and low frequencies.** The detectors respond to *high frequency* on one side, and *low frequency* on the other side.\n* **Rotational equivariance.**\n The detectors are rotationally 
[equivariant](https://distill.pub/2020/circuits/equivariance/): each unit detects a high-low frequency change along a particular angle, with different units spanning the full 360º of possible orientations.\n We will see this in more detail when we [construct a tuning curve](#tuning-curves) with synthetic examples, and also when we look at the weights [implementing](#implementation) these detectors.\n\n\nWe can use a [diversity term](https://distill.pub/2017/feature-visualization/#diversity) in our feature visualizations to jointly optimize for the activation of a neuron while encouraging different activation patterns in a batch of visualizations.\n\n We are thus reasonably confident that if high-low frequency detectors were also sensitive to other patterns, we would see signs of them in these feature visualizations. Instead, the frequency contrast remains an invariant aspect of all these visualizations. (Although other patterns form along the boundary, these are likely outside the neuron’s effective receptive field.)\n\n\n\n\n![](diagrams/1.1-feature-vis/fv-mixed3a-136-diversity-0.png \"Unit mixed3a:136 optimized with a diversity objective.\")\n![](diagrams/1.1-feature-vis/fv-mixed3a-136-diversity-1.png \"Unit mixed3a:136 optimized with a diversity objective.\")\n![](diagrams/1.1-feature-vis/fv-mixed3a-136-diversity-2.png \"Unit mixed3a:136 optimized with a diversity objective.\")\n![](diagrams/1.1-feature-vis/fv-mixed3a-136-diversity-3.png \"Unit mixed3a:136 optimized with a diversity objective.\")\n![](diagrams/1.1-feature-vis/fv-mixed3a-136-diversity-4.png \"Unit mixed3a:136 optimized with a diversity objective.\")\n![](diagrams/1.1-feature-vis/fv-mixed3a-136-diversity-5.png \"Unit mixed3a:136 optimized with a diversity objective.\")\n![](diagrams/1.1-feature-vis/fv-mixed3a-136-diversity-6.png \"Unit mixed3a:136 optimized with a diversity objective.\")\n\n\n[1-2](#figure-1-2):\n Feature visualizations of high-low frequency detector mixed3a:136 from InceptionV1′s [`mixed3a`](https://microscope.openai.com/models/inceptionv1/mixed3a_0?models.op.feature_vis.type=neuron&models.op.technique=feature_vis)\n layer, optimized with a diversity objective. You can learn more about feature visualization and the diversity objective [here](https://distill.pub/2017/feature-visualization/#diversity).\n \n\n[### Dataset Examples](#dataset-examples)\n\n We generate dataset examples by sampling from a natural data distribution (in this case, the training set) and selecting the images that cause the neurons to maximally activate.\n\n\n Checking against these examples helps ensure we’re not misreading the feature visualizations.\n\n \n\n\n\n\n[![](diagrams/1.1-feature-vis/neuron-136.png)](https://microscope.openai.com/models/inceptionv1/mixed3a_0/136)\n![](diagrams/1.2-dataset-examples/placeholder.png)\n\n\n[2](#1.2.0-dataset-examples):\n Crops taken from Imagenet where [`mixed3a` 136](https://microscope.openai.com/models/inceptionv1/mixed3a_0/136) activated maximally,\n argmaxed over spatial locations.\n \n\nA wide range of real-world situations can cause high-low frequency detectors to fire. 
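Before looking at what these situations have in common, here is roughly how such dataset examples can be selected (a sketch using the same hypothetical `forward_to_layer` helper as in the feature-visualization sketch above; a real pipeline would also record where the spatial maximum occurred, so that the matching crop can be displayed):

```python
import torch

def top_dataset_examples(forward_to_layer, images, channel, k=12):
    """Rank images by the peak activation of `channel` anywhere in its spatial
    map ("argmaxed over spatial locations") and return the indices of the top k."""
    with torch.no_grad():
        acts = forward_to_layer(images)                       # [N, C, H, W]
        peak = acts[:, channel].flatten(1).max(dim=1).values  # [N]
    return peak.topk(min(k, len(images))).indices
```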
Oftentimes it’s a highly-textured, in-focus foreground object against a blurry background — for example, the foreground might be the microphone’s latticework, the hummingbird’s tiny head feathers, or the small rubber dots on the Lenovo ThinkPad [pointing stick](https://en.wikipedia.org/wiki/Pointing_stick) — but not always: we also observe that it fires for the MP3 player’s brushed metal finish against its shiny screen, or the text of a watermark.\n\n\nIn all cases, we see one area with high frequency and another area with low frequency. Although they often fire at an object boundary,\n\n they can also fire in cases where there is a frequency change without an object boundary.\n\n High-low frequency detectors are therefore not the same as [boundary detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_boundary).\n\n\n[### Synthetic Tuning Curves](#tuning-curves)\n\n**Tuning curves** show us how a neuron’s response changes with respect to a parameter.\n\n They are a standard method in neuroscience, and we’ve found them very helpful for studying artificial neural networks as well. For example, we used them to demonstrate [how the response of curve detectors changes](https://distill.pub/2020/circuits/curve-detectors/#radial-tuning-curve) with respect to orientation.\n\n Similarly, we can use tuning curves to show how high-low frequency detectors respond.\n \n\n\n\n To construct such a curve, we’ll need a set of *synthetic stimuli* which cause high-low frequency detectors to fire.\n\n We generate images with a high-frequency pattern on one side and a low-frequency pattern on the other. Since we’re interested in orientation, we’ll rotate this pattern to create a 1D family of stimuli:\n \n\n\n\n![](diagrams/1.4-tuning-curves/orientation.png)\n\n The first axis of variation of our synthetic stimuli is *orientation*.\n \n\n\n But what frequency should we use for each side? How steep does the difference in frequency need to be?\n To explore this, we’ll add a second dimension varying the ratio between the two frequencies:\n \n\n\n\n![](diagrams/1.4-tuning-curves/ratio.png)\n\n The second axis of variation of our synthetic stimuli is the *frequency ratio*.\n \n\n\n\n (Adding a second dimension will also help us see whether the results for the first dimension are robust.)\n \n\n\n\n Now that we have these two dimensions, we sample the synthetic stimuli and plot each neuron’s responses to them:\n\n\n\n\n Each high-low frequency detector exhibits a clear preference for a limited range of orientations.\n\n As we [previously found](https://distill.pub/2020/circuits/curve-detectors/#synthetic-curves) with curve detectors, high-low frequency detectors are rotationally [equivariant](https://distill.pub/2020/circuits/equivariance/): each one selects for a given orientation, and together they span the full 360º space.\n \n\n\n\n\n[Implementation\n--------------](#implementation)\nHow are high-low frequency detectors built up from lower-level neurons?\n\n One could imagine many different circuits which could implement this behavior. To give just one example, it seems like there are at least two different ways that the oriented nature of these units could form.\n \n\n\n* **Equivariant→Equivariant Hypothesis.** The first possibility is that the previous layer already has precursor features which detect oriented transitions from high frequency to low frequency. 
The extreme version of this hypothesis would be that the high-low frequency detector is just an identity passthrough of some lower layer neuron. A more moderate version would be something like what we see with curve detectors, where [early curve detectors](https://distill.pub/2020/circuits/early-vision/#group_conv2d2_tiny_curves) become refined into the larger and more sophisticated [late curve detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3a_curves). Another example would be how edge detection is built up from simple [Gabor filters](https://distill.pub/2020/circuits/early-vision/#group_conv2d0_gabor_filters) which were already oriented.\n\n We call this [Equivariant→Equivariant](https://distill.pub/2020/circuits/equivariance/#equivariant-to-equivariant) because the equivariance over orientation was already there in the previous layer.\n* **Invariant→Equivariant Hypothesis.** Alternatively, previous layers might not have anything like high-low frequency detectors. Instead, the orientation might come from spatial arrangements in the neuron’s weights that govern where it is excited by low-frequency and high-frequency features.\n\n\nTo resolve this question — and more generally, to understand how these detectors are implemented — we can look at the weights.\n\n\nLet’s look at a single detector. Glancing at the weights from [`conv2d2`](https://distill.pub/2020/circuits/early-vision/#conv2d2) to [`mixed3a` 110](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_110.html), most of them can be roughly divided into two categories: those that activate on the left and inhibit on the right, and those that do the opposite.\n \n\n\n\n\n\n\n[4](#figure-4):\n Six neurons from conv2d2 contributing weights to mixed3a 110.\n\n \n\n\n\n\n The same also holds for each of the other high-low frequency detectors — but, of course, with different spatial patternsAs an aside: The 1-2-1 pattern on each column of weights is curiously reminiscent of the structure of the [Sobel filter](https://en.wikipedia.org/wiki/Sobel_operator). on the weights, implementing the different orientations.\n \n\n\n\n Surprisingly, across all high-low frequency detectors, the two clusters of neurons that we get for each are actually the *same* two clusters! One cluster appears to detect textures with a generally high frequency, and one cluster appears to detect textures with a generally low frequency.\n \n\n\n\n\n\n\n\n![](diagrams/HF-LF-clusters-amd-weight-structure.png)\n\n\n[5](#figure-5):\n The strongest weights on any high-low frequency detector (here shown: [`mixed3a` 110](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_110.html), [`mixed3a` 136](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_136.html), and [`mixed3a` 112](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_112.html)) can be divided into roughly two clusters. 
Each cluster contributes its weights in similar ways.\n \n\n\n\n Top row: underlying neurons [`conv2d2` 119](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_119.html), [`conv2d2` 102](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_102.html), [`conv2d2` 123](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_123.html), [`conv2d2` 90](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_90.html), [`conv2d2` 89](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_89.html), [`conv2d2` 163](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_163.html), [`conv2d2` 98](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_98.html), and [`conv2d2` 188](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_188.html).\n \n\n\n\n\n\n\n This is exactly what we would expect to see if the Invariant→Equivariant hypothesis is true: each high-low frequency detector composes the same two components in different spatial arrangements, which then in turn govern the detector’s orientation.\n \n\n\n\n These two different clusters are really striking.\n\n In the next section, we’ll investigate them in more detail.\n \n\n\n### High and Low Frequency Factors\n\n\nIt would be nice if we could confirm that these two clusters of neurons are real. It would also be nice if we could create a simpler way to represent them for circuit analysis later.\n\n\nFactorizing the connectionsBetween two adjacent layers, “connections” reduces to the weights\n between the two layers. Sometimes we are interested in observing connectivity between layers that may not be\n directly adjacent. Because our model, a deep convnet, is non-linear, we will need to approximate the\n connections. A simple approach that we take is to linearize the model by removing the non-linearities. While\n this is not a great approximation of the model’s behavior, it does give a reasonable intuition for\n counterfactual influence: had the neurons in the intermediate layer fired, how it would have affected neurons in\n the downstream layers. We treat positive and negative influences separately. between lower layers and the high-low frequency detectors is one way that we can check whether these two clusters are meaningful, and investigate their significance. Performing a one-sided non-negative matrix factorization (NMF)We [require](/2020/circuits/visualizing-weights/#one-simple-trick) that the channel factor be positive, but allow the spatial factor to have both positive and negative values. separates the connections into two factors.\n\n\nEach factor corresponds to a vector over neurons. Feature visualization can also be used to visualize these linear combinations of neurons. Strikingly, one clearly displays a generic high-frequency image, whereas the other does the same with a low-frequency image.In InceptionV1 in particular, it’s possible that we recover these two factors so crisply in part due to the *[3x3 bottleneck](https://microscope.openai.com/models/inceptionv1/mixed3a_3x3_bottleneck_0?models.op.feature_vis.type=neuron&models.op.technique=feature_vis)* between conv2d2 and mixed3a. 
Because of this, we’re not here looking at direct weights between conv2d2 and mixed3a, but rather the “expanded weights,” which are a product of a 1x1 convolution (which reduces down to a small number of neurons) combined with a 3x3 convolution. This structure is very similar to the factorization we apply. However, as we see later in [Universality](#universality), we recover similar factors for other models where this bottleneck doesn’t exist. NMF makes it easy to see this abstract circuit across many models which may not have an architecture that more explicitly reifies it. We’ll call these the *HF-factor* and the *LF-factor*:\n\n\n\n\n .upstream-neurons {\n display: grid;\n grid-gap: 1em;\n margin-bottom: 1em;\n }\n\n h5 {\n margin-bottom: 0px;\n }\n\n .upstream-neurons .row {\n display: grid;\n grid-template-columns: 1fr min-content 1fr min-content 1fr min-content 1fr min-content 1fr min-content 1fr min-content min-content;\n grid-column-gap: .25em;\n column-gap: .25em;\n align-items: center;\n }\n\n .units,\n .weights {\n display: grid;\n grid-template-columns: repeat(6, 1fr);\n grid-gap: 0.5rem;\n grid-column-start: 3;\n }\n\n img.fv {\n display: block;\n max-width: 100%;\n border-radius: 8px;\n }\n\n img.full {\n width: unset;\n object-fit: none;\n object-position: center;\n image-rendering: optimizeQuality;\n }\n\n img.weight {\n width: 100%;\n image-rendering: pixelated;\n align-self: center;\n border: 1px solid #ccc;\n }\n\n .layer-label {\n grid-row-start: span 2;\n }\n\n .layer-label label {\n display: inline-block;\n /\\* transform: rotate(-90deg); \\*/\n }\n\n .annotation {\n font-size: 1.5em;\n font-weight: 200;\n color: #666;\n margin-bottom: 0.2em;\n }\n\n .equal-sign {\n padding: 0 0.25em;\n }\n\n .ellipsis {\n padding: 0 0.25em;\n /\\* vertically align the ellipsis \\*/\n position: relative;\n bottom: 0.5ex;\n }\n\n .unit {\n display: block;\n min-width: 50px;\n }\n\n .factor {\n box-shadow: 0 0 8px #888;\n }\n\n .unit .bar {\n display: block;\n margin-top: 0.5em;\n background-color: #CCC;\n height: 4px;\n }\n\n .row h4 {\n border-bottom: 1px solid #ccc;\n }\n\n\n\n#### mixed3a → conv2d2\n\n\n\n![](diagrams/2.1-upstream-nmf/conv2d2-hi.png)\n=\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d2-component=0-channel_index=119.png)\n+\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d2-component=0-channel_index=102.png)\n+\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d2-component=0-channel_index=123.png)\n+\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d2-component=0-channel_index=90.png)\n+\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d2-component=0-channel_index=173.png)\n+\n…\nHF-factor\n\n\n\n × 0.93\n\n\n\n\n × 0.73\n\n\n\n\n × 0.66\n\n\n\n\n × 0.59\n\n\n\n\n × 0.55\n\n\n\n\n![](diagrams/2.1-upstream-nmf/conv2d2-lo.png)\n=\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d2-component=1-channel_index=89.png)\n+\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d2-component=1-channel_index=163.png)\n+\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d2-component=1-channel_index=98.png)\n+\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d2-component=1-channel_index=188.png)\n+\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d2-component=1-channel_index=158.png)\n+\n…\nLF-factor\n\n\n\n × 0.44\n\n\n\n\n × 0.41\n\n\n\n\n × 0.38\n\n\n\n\n × 0.36\n\n\n\n\n × 0.34\n\n\n\n\n\n#### mixed3a → 
conv2d1\n\n\n\n![](diagrams/2.1-upstream-nmf/conv2d1-hi.png)\n=\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d1-component=0-channel_index=30.png)\n+\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d1-component=0-channel_index=41.png)\n+\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d1-component=0-channel_index=55.png)\n+\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d1-component=0-channel_index=51.png)\n+\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d1-component=0-channel_index=16.png)\n+\n…\nHF-factor\n\n\n\n × 0.86\n\n\n\n\n × 0.81\n\n\n\n\n × 0.64\n\n\n\n\n × 0.53\n\n\n\n\n × 0.52\n\n\n\n\n![](diagrams/2.1-upstream-nmf/conv2d1-lo.png)\n=\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d1-component=1-channel_index=4.png)\n+\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d1-component=1-channel_index=46.png)\n+\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d1-component=1-channel_index=1.png)\n+\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d1-component=1-channel_index=33.png)\n+\n![](diagrams/2.2-upstream-neurons/layer_name=conv2d1-component=1-channel_index=29.png)\n+\n…\nLF-factor\n\n\n\n × 0.49\n\n\n\n\n × 0.48\n\n\n\n\n × 0.45\n\n\n\n\n × 0.43\n\n\n\n\n × 0.42\n\n\n\n\n\n[6](#figure-6):\n NMF recovers the neurons that contribute to the two NMF factors plus the weighted amount they contribute to\n each factor. Here shown: NMF against both `conv2d2` and a deeper layer, `conv2d1`. The\n left side of the equal sign shows feature visualizations of the NMF factors.\n \n\n\nThe feature visualizations are suggestive, but how can we be sure that these factors really correspond to high and low frequency in general, rather than specific high or low frequency patterns? One thing we can do is to create synthetic stimuli again, but now plotting the responses of those two NMF factors.\n \n\n\n\n\n Since our factors don’t correspond to an edge, our synthetic stimuli will only have one frequency region for each stimulus. To add a second dimension and again demonstrate robustness, we also vary the rotation of that region. (The frequency texture is not exactly rotationally invariant because we construct the stimulus out of orthogonal cosine waves.)\n \n\n\n\nUnlike last time, these activations now mostly ignore the image’s orientation, but are sensitive to its frequency. We can average these results over all orientations in order to produce a simple tuning curve of how each factor responds to frequency. As predicted, the HF-factor responds to high frequency and the LF-factor responds to low frequency.\n\n\n\n\n\n\n\n\n[8](#figure-8):\n Tuning curve for HF-factor and LF-factor from `conv2d2` against images with synthetic frequency, averaged across orientation. Wavelength as a proportion of the full input image ranges from 1:1 to 1:10.\n \n\n\n\n\n\n\n Now that we’ve confirmed what these factors are, let’s look at how they’re combined into high-low frequency detectors.\n \n\n\n### Construction of High-Low Frequency Detectors\n\n\n\n NMF factors the weights into both a channel factor and a spatial factor. So far, we’ve looked at the two parts of the channel factor. 
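Before turning to the spatial factor, here is a rough sketch of what such a one-sided factorization computes. This is an illustrative alternating least-squares scheme, not the article's actual code: the channel factor is kept non-negative (via `scipy`'s `nnls`) while the spatial factor is left free to take both signs, matching the description above.

```python
import numpy as np
from scipy.optimize import nnls

def one_sided_nmf(weights, rank=2, iters=50, seed=0):
    """Factor weights [n_detectors, C, H, W] into a non-negative channel factor
    F [rank, C] and signed spatial factors S [n_detectors, rank, H, W], so that
    weights[n, c] is approximately sum_k S[n, k] * F[k, c]."""
    n, C, H, W = weights.shape
    M = weights.transpose(1, 0, 2, 3).reshape(C, -1)   # channels x (detector, y, x)
    F = np.random.default_rng(seed).random((rank, C)) + 0.1
    for _ in range(iters):
        S, *_ = np.linalg.lstsq(F.T, M, rcond=None)    # spatial factor: unconstrained
        F = np.stack([nnls(S.T, M[c])[0] for c in range(C)], axis=1)  # channel factor: >= 0
    S, *_ = np.linalg.lstsq(F.T, M, rcond=None)        # refresh S to match the final F
    return F, S.reshape(rank, n, H, W).transpose(1, 0, 2, 3)
```

With `rank=2`, the rows of `F` play the role of the HF- and LF-factors, and each detector's slice of `S` is its spatial arrangement of those two factors.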
The spatial factor shows the spatial weighting that combines the HF and LF factors into high-low frequency detectors.\n\n \n\n\n\n\n Unsurprisingly, these weights basically reproduce the same pattern that we’d previously been seeing in [Figure 5](#figure-5) from its two different clusters of neurons: where the HF-factor inhibits, the LF-factor activates — and vice versa.\n \n As an aside, the HF-factor here for InceptionV1 (as well as some of its NMF components, like [`conv2d2` 123](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_123.html)) also appears to be lightly activated by bright greens and magentas. This might be responsible for the feature visualizations of these high-low frequency detectors showing only greens and magentas on the high-frequency side.\n \n\n\n\n\n .upstream-nmf {\n display: grid;\n grid-row-gap: 2rem;\n margin-bottom: 2rem;\n }\n\n .upstream-nmf .row {\n display: grid;\n grid-template-columns: min-content 1fr 6fr;\n grid-column-gap: 1rem;\n grid-row-gap: .5rem;\n\n }\n\n .units,\n .weights {\n display: grid;\n grid-template-columns: repeat(6, 1fr);\n grid-gap: 0.5rem;\n grid-column-start: 3;\n }\n\n img.fv {\n max-width: 100%;\n border-radius: 8px;\n }\n\n div.units img.full {\n margin-left: 1px;\n }\n\n img.full {\n width: unset;\n object-fit: none;\n object-position: center;\n image-rendering: optimizeQuality;\n }\n\n img.weight {\n width: 100%;\n image-rendering: pixelated;\n align-self: center;\n border: 1px solid #ccc;\n }\n\n .annotated-image {\n display: grid;\n grid-auto-flow: column;\n align-items: center;\n }\n\n .annotated-image span {\n writing-mode: vertical-lr;\n }\n\n .layer-label {\n grid-row-start: span 2;\n border-right: 1px solid #aaa;\n text-align: end;\n }\n\n .layer-label label {\n display: inline-block;\n margin-right: .5em;\n writing-mode: vertical-lr;\n }\n\n .layer-label.hidden {\n border-color: transparent;\n }\n\n .layer-label.hidden label {\n visibility: hidden;\n }\n\n\n\n\n\nmixed3a\n\n![](diagrams/1.1-feature-vis/neuron-136.png \"Unit 136\")\n![](diagrams/1.1-feature-vis/neuron-108.png \"Unit 108\")\n![](diagrams/1.1-feature-vis/neuron-132.png \"Unit 132\")\n![](diagrams/1.1-feature-vis/neuron-88.png \"Unit 88\")\n![](diagrams/1.1-feature-vis/neuron-110.png \"Unit 110\")\n![](diagrams/1.1-feature-vis/neuron-180.png \"Unit 180\")\n\n\n\nmixed3a → conv2d2\n\nHF-factor\n![](diagrams/2.1-upstream-nmf/conv2d2-hi.png)\n\n\n![](diagrams/2.1-upstream-nmf/neuron=136-layer=maxpool1-factor=1.png)\n![](diagrams/2.1-upstream-nmf/neuron=108-layer=maxpool1-factor=1.png)\n![](diagrams/2.1-upstream-nmf/neuron=132-layer=maxpool1-factor=1.png)\n![](diagrams/2.1-upstream-nmf/neuron=88-layer=maxpool1-factor=1.png)\n![](diagrams/2.1-upstream-nmf/neuron=110-layer=maxpool1-factor=1.png)\n![](diagrams/2.1-upstream-nmf/neuron=180-layer=maxpool1-factor=1.png)\n\n\nLF-factor\n![](diagrams/2.1-upstream-nmf/conv2d2-lo.png)\n\n\n![](diagrams/2.1-upstream-nmf/neuron=136-layer=maxpool1-factor=0.png)\n![](diagrams/2.1-upstream-nmf/neuron=108-layer=maxpool1-factor=0.png)\n![](diagrams/2.1-upstream-nmf/neuron=132-layer=maxpool1-factor=0.png)\n![](diagrams/2.1-upstream-nmf/neuron=88-layer=maxpool1-factor=0.png)\n![](diagrams/2.1-upstream-nmf/neuron=110-layer=maxpool1-factor=0.png)\n![](diagrams/2.1-upstream-nmf/neuron=180-layer=maxpool1-factor=0.png)\n\n\n\nmixed3a → 
conv2d1\n\nHF-factor\n![](diagrams/2.1-upstream-nmf/conv2d1-hi.png)\n\n\n![](diagrams/2.1-upstream-nmf/neuron=136-layer=conv2d1-factor=1.png)\n![](diagrams/2.1-upstream-nmf/neuron=108-layer=conv2d1-factor=1.png)\n![](diagrams/2.1-upstream-nmf/neuron=132-layer=conv2d1-factor=1.png)\n![](diagrams/2.1-upstream-nmf/neuron=88-layer=conv2d1-factor=1.png)\n![](diagrams/2.1-upstream-nmf/neuron=110-layer=conv2d1-factor=1.png)\n![](diagrams/2.1-upstream-nmf/neuron=180-layer=conv2d1-factor=1.png)\n\n\nLF-factor\n![](diagrams/2.1-upstream-nmf/conv2d1-lo.png)\n\n\n![](diagrams/2.1-upstream-nmf/neuron=136-layer=conv2d1-factor=0.png)\n![](diagrams/2.1-upstream-nmf/neuron=108-layer=conv2d1-factor=0.png)\n![](diagrams/2.1-upstream-nmf/neuron=132-layer=conv2d1-factor=0.png)\n![](diagrams/2.1-upstream-nmf/neuron=88-layer=conv2d1-factor=0.png)\n![](diagrams/2.1-upstream-nmf/neuron=110-layer=conv2d1-factor=0.png)\n![](diagrams/2.1-upstream-nmf/neuron=180-layer=conv2d1-factor=0.png)\n\n\n\n\n[9](#figure-9):\n Using NMF factorization on the weights connecting six high-low frequency detectors in InceptionV1 to the\n two directly\n preceding convolutional layers, `conv2d2` and `conv2d1`.\n\n\nTheir spatial arrangement is very clear, with LF factors activating\n areas in which high-low frequency detectors expect low frequencies, and inhibiting areas in which they expect high frequencies. The two\n factors\n are very close to symmetric. Weight magnitudes normalized between -1 and 1.\n\n\n\n\n\nHigh-low frequency detectors are therefore built up by circuits that arrange high frequency detection on one side and low frequency detection on the other.\n\n\n\n There are some exceptions that aren’t fully captured by the NMF factorization perspective. For example, [`conv2d2` 181](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_181.html) is a texture contrast detector that appears to already have spatial structure.\n\n This is the kind of feature that we would expect to be involved through an Equivariant→Equivariant circuit.\n\n If that were the case, however, we would expect its weights to the high-low frequency detector [`mixed3a` 70](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_70.html) to be a solid positive stripe down the middle.\n\n What we instead observe is that it contributes as a component of high frequency detection, though perhaps with a slight positive overall bias.\n\n\n Although `conv2d2` 181 has a spatial structure, perhaps it responds more strongly to high frequency patterns.\n\n \n\n\n\n\n![](diagrams/2d2-181-3a-70-weight.png)\n\n The weights from `conv2d2` 181 to `mixed3a` 70 are consistent with `conv2d2` 181 contributing via the HF-factor, not via the existing spatial structure of its texture contrast detection.\n \n\n\nNow that we understand how they are constructed, how are high-low frequency detectors used by higher-level features?\n\n\n\n\n[Usage\n-----](#usage)\n\n\n\n[`mixed3b`](https://distill.pub/2020/circuits/early-vision/#mixed3b) is the next layer immediately after the high-low frequency detectors. Here, high-low frequency detectors contribute to a variety of features. 
Their most important role seems to be supporting [boundary detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_boundary), but they also contribute to [bumps](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_bumps) and [divots](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_divots), [line-like](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_bar_line_like) and [curve-like](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_curves_misc.) shapes,\n and at least one each of [center-surrounds](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_281.html), [patterns](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_372.html), and [textures](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_276.html).\n \n\n\n\n\n\n![](diagrams/usage-1.png)\n\n\n\n[10](#figure-10):\n Examples of neurons that high-low frequency detectors contribute to: (1) [`mixed3b` 345](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_345.html) (a boundary detector), (2) [`mixed3b` 276](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_276.html) (a center-surround texture detector), (3) [`mixed3b` 314](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_314.html) (a double boundary detector), and (4) [`mixed3b` 365](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_365.html) (an hourglass shape detector).\n \n\n\n\n These aren’t the only contributors to these neurons – for example, `mixed3b` 276 also relies heavily on certain center-surrounds and textures – but they are strong contributors.\n \n\n\n\n\n\n\n Oftentimes, downstream features appear to ignore the “polarity” of a high-low frequency detector, responding roughly the same way regardless of which side is high frequency. For example, the vertical boundary detector [`mixed3b` 345](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_345.html) (see above) is strongly excited by high-low frequency detectors that detect frequency change across a vertical line in either direction.\n \n\n\n\n Whereas activation from a high-low frequency detector can help detect boundaries between different objects, inhibition from a high-low frequency detector can also add structure to an object detector by detecting regions that must be contiguous along some direction — essentially, indicating the absence of a boundary.\n \n\n\n\n\n![](diagrams/usage-2.png)\n\n\n[11](#figure-11):\n Some of [`mixed3b` 314](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_314.html)’s weights, extracted for emphasis. Orientation doesn’t matter so much for how these weights are used by [`mixed3b` 314](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_314.html), but their 180º-invariant orientation does!\n\n\nYou may notice that strong excitation (left) is correlated with the presence of a **boundary** at a particular angle, whereas strong inhibition (right) is correlated with **object continuity** where a boundary might otherwise have been.\n\n\n\n\n\n\nAs we’ve mentioned, by far the primary downstream contribution of high-low frequency detectors is to *boundary detectors*. 
Of the top 20 neurons in `mixed3b` with the highest L2-norm of weights across all high-low frequency detectors, eight of those 20 neurons participate in boundary detection of some sort: [double boundary detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_double_boundary), [miscellaneous boundary detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_boundary_misc), and especially [object boundary detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_boundary). \n\n\n#### Role in object boundary detection\n\n\n[Object boundary detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_boundary) are neurons which detect boundaries between objects, whether that means the boundary between one object and another or the transition from foreground to background. They are different from edge detectors or curve detectors: although they are sensitive to edges (indeed, some of their strongest weights are contributed by lower-level edge detectors!), object boundary detectors are also sensitive to other indicators such as color contrast and high-low frequency detection.\n\n\n\n![](diagrams/usage-boundary.png)\n\n[12](#figure-12): `mixed3b` 345 is a boundary detector activated by high-low frequency detectors, edges, color contrasts, and end-of-line\n detectors. It is specifically sensitive to vertically-oriented high-low frequency detectors, regardless of their\n orientation, and along a vertical line of positive weights.\n \n\n High-low frequency detectors contribute to these object boundary detectors by providing one piece of evidence that an object has ended and something else has begun. Some examples of object boundary detectors are shown below, along with their weights to a selection of high-low frequency detectors, grouped by orientation (ignoring polarity).\n\n\nIn particular, note how similar the weights are within each grouping! This shows us again that the later layers ignore the high-low frequency detectors’ polarity. 
Furthermore, the arrangement of excitatory and inhibitory weights contributes to each boundary detector’s overall shape, following the principles outlined above.\n\n\n\n\n\n /\\* TODO: Optimize smaller breakpoints by hand \\*/\n \n![](diagrams/usage-boundaries.png)\n\n[13](#figure-13): Four examples of object boundary detectors that high-low frequency detectors contribute to: [`mixed3b` 345](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_345.html), [`mixed3b` 376](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_376.html), [`mixed3b` 368](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_368.html), and [`mixed3b` 151](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_151.html).\n \n\n\n\n\n\n Beyond `mixed3b`, high-low frequency detectors ultimately play a role in detecting more sophisticated object shapes in `mixed4a` and beyond, by continuing to contribute to the detection of boundaries and contiguity.\n \n\n\n\n\n So far, the scope of our investigation has been limited to InceptionV1.\n\n How common are high-low frequency detectors in convolutional neural networks generally?\n\n\n\nUniversality\n------------\n\n\n### High-Low Frequency Detectors in Other Networks\n\n\nIt’s always good to ask if what we see is the rule or an interesting exception — and high-low frequency detectors seem to be the rule.\n High-low frequency detectors similar to ones in InceptionV1 can be found in a variety of architectures.\n \n\n\n\n\n strong d-cite {\n font-weight: normal;\n }\n\n figure.columns {\n display: grid;\n grid-gap: 0.5em;\n grid-template-columns: repeat(4, 1fr);\n overflow: hidden;\n }\n\n .columns .model {\n display: grid;\n /\\* position: relative;\n grid-row-gap: 0.5em;\n grid-auto-flow: row;\n align-content: start;\n grid-template-rows: min-content; \\*/\n }\n\n .model picture {\n position: relative;\n }\n\n .model img {\n box-sizing: border-box;\n position: relative;\n width: unset;\n object-fit: none;\n z-index: 1;\n outline: 6px solid white;\n border-radius: 8px;\n }\n\n .model figcaption>\\* {\n display: block;\n }\n\n /\\*\n HAAAACKY\n \\*/\n .model:first-of-type picture::before {\n content: \"\";\n display: block;\n position: absolute;\n height: 1px;\n width: 350%;\n left: 0;\n top: 48px;\n background-color: #aaa;\n z-index: 0;\n }\n\n .columns>figcaption {\n grid-column: 1 / span 4;\n }\n\n\n\n\n\n**InceptionV1**\nLayer `mixed3a` \nAt ~33% CNN depth\n\n\n\n\n![](diagrams/1.1-feature-vis/neuron-106.png \"InceptionV1's layer mixed3a, Unit 106\")\n\n\n![](diagrams/1.1-feature-vis/neuron-110.png \"InceptionV1's layer mixed3a, Unit 110\")\n\n\n![](diagrams/1.1-feature-vis/neuron-180.png \"InceptionV1's layer mixed3a, Unit 180\")\n\n\n![](diagrams/1.1-feature-vis/neuron-132.png \"InceptionV1's layer mixed3a, Unit 132\")\n\n\n![](diagrams/1.1-feature-vis/neuron-112.png \"InceptionV1's layer mixed3a, Unit 112\")\n\n\n![](diagrams/1.1-feature-vis/neuron-108.png \"InceptionV1's layer mixed3a, Unit 108\")\n\n\n\n\n**AlexNet**\nLayer `Conv2D_2` \nAt ~29% CNN depth\n\n\n\n\n![](diagrams/4.1-similar-directions/alexnet/Conv2D_2-183.png \"AlexNet's layer Conv2D_2, Unit 183\")\n\n\n![](diagrams/4.1-similar-directions/alexnet/Conv2D_2-175.png \"AlexNet's layer Conv2D_2, Unit 175\")\n\n\n![](diagrams/4.1-similar-directions/alexnet/Conv2D_2-215.png \"AlexNet's layer Conv2D_2, Unit 215\")\n\n\n![](diagrams/4.1-similar-directions/alexnet/Conv2D_2-230.png \"AlexNet's 
layer Conv2D_2, Unit 230\")\n\n\n![](diagrams/4.1-similar-directions/alexnet/Conv2D_2-161.png \"AlexNet's layer Conv2D_2, Unit 161\")\n\n\n![](diagrams/4.1-similar-directions/alexnet/Conv2D_2-205.png \"AlexNet's layer Conv2D_2, Unit 205\")\n\n\n\n\n**InceptionV4**\nLayer `Mixed_5a` \nAt ~33% CNN depth\n\n\n\n\n![](diagrams/4.1-similar-directions/inceptionv4/channel-82.png \"InceptionV4's layer Mixed_5a, Unit 82\")\n\n\n![](diagrams/4.1-similar-directions/inceptionv4/channel-178.png \"InceptionV4's layer Mixed_5a, Unit 178\")\n\n\n![](diagrams/4.1-similar-directions/inceptionv4/channel-77.png \"InceptionV4's layer Mixed_5a, Unit 77\")\n\n\n![](diagrams/4.1-similar-directions/inceptionv4/channel-18.png \"InceptionV4's layer Mixed_5a, Unit 18\")\n\n\n![](diagrams/4.1-similar-directions/inceptionv4/channel-52.png \"InceptionV4's layer Mixed_5a, Unit 52\")\n\n\n![](diagrams/4.1-similar-directions/inceptionv4/channel-76.png \"InceptionV4's layer Mixed_5a, Unit 76\")\n\n\n\n\n**ResNetV2-50**\nLayer `B2_U1_conv2` \nAt ~29% CNN depth\n\n\n\n\n![](diagrams/4.1-similar-directions/resnetv2/channel-118.png \"ResNetV2-50's layer B2_U1_conv2, Unit 118\")\n\n\n![](diagrams/4.1-similar-directions/resnetv2/channel-41.png \"ResNetV2-50's layer B2_U1_conv2, Unit 41\")\n\n\n![](diagrams/4.1-similar-directions/resnetv2/channel-58.png \"ResNetV2-50's layer B2_U1_conv2, Unit 58\")\n\n\n![](diagrams/4.1-similar-directions/resnetv2/channel-50.png \"ResNetV2-50's layer B2_U1_conv2, Unit 50\")\n\n\n![](diagrams/4.1-similar-directions/resnetv2/channel-45.png \"ResNetV2-50's layer B2_U1_conv2, Unit 45\")\n\n\n![](diagrams/4.1-similar-directions/resnetv2/channel-53.png \"ResNetV2-50's layer B2_U1_conv2, Unit 53\")\n\n\n\n[14](#figure-14). High-low frequency detectors that we’ve found in AlexNet, InceptionV4, and ResnetV2-50 (right), compared to their most similar counterpart from InceptionV1 (left). These are individual neurons, not linear combinations approximating the detectors in InceptionV1.\n \n\n\nNotice that these detectors are found at very similar depths within the different networks, between 29% and 33% network depth!Network depth is here defined as the index of the layer divided by the total number of layers. While the particular orientations each network’s high-low frequency detectors respond to may vary slightly, each network has its own family of detectors that together cover the full 360º and comprise a rotationally [equivariant](https://distill.pub/2020/circuits/equivariance/) family.\n Architecture aside – what about networks trained on substantially different *datasets*? In the extreme case, one could imagine a synthetic dataset where high-low frequency detectors don’t arise. For most practical datasets, however, we expect to find them. 
For example, we even find some candidate high-low frequency detectors in AlexNet (Places): [down-up](https://microscope.openai.com/models/alexnet_caffe_places365/conv2_Conv2D_0/37), [left-right](https://microscope.openai.com/models/alexnet_caffe_places365/conv2_Conv2D_0/91), and [up-down](https://microscope.openai.com/models/alexnet_caffe_places365/conv2_Conv2D_0/104).\n\n\n\nEven though these families are from three completely different networks, we also discover that their high-low frequency detectors are built up from high and low frequency components.\n\n\n### HF-factor and LF-factor in Other Networks\n\n\n\n As we did with InceptionV1, we can again perform NMF on the weights of the high-low frequency detectors in each network in order to extract the strongest two factors.\n \n\n\n\n .upstream-nmf {\n display: grid;\n grid-row-gap: 2rem;\n margin-bottom: 3rem;\n }\n\n .upstream-nmf .row {\n display: grid;\n grid-template-columns: min-content 1fr 6fr;\n grid-column-gap: 1rem;\n grid-row-gap: .5rem;\n\n }\n\n .units,\n .weights {\n display: grid;\n grid-template-columns: repeat(6, 1fr);\n grid-gap: 0.5rem;\n grid-column-start: 3;\n }\n\n img.fv {\n max-width: 100%;\n border-radius: 8px;\n }\n\n div.units img.full {\n margin-left: 1px;\n }\n\n img.full {\n width: unset;\n object-fit: none;\n object-position: center;\n image-rendering: optimizeQuality;\n }\n\n img.weight {\n width: 100%;\n image-rendering: pixelated;\n align-self: center;\n border: 1px solid #ccc;\n }\n\n img.factor {\n /\\* padding-right: 0.75rem; \\*/\n }\n\n .annotated-image {\n display: grid;\n grid-auto-flow: column;\n align-items: center;\n }\n\n .annotated-image span {\n writing-mode: vertical-lr;\n }\n\n .layer-label {\n grid-row-start: span 2;\n border-right: 1px solid #aaa;\n text-align: end;\n }\n\n .layer-label label {\n display: inline-block;\n margin-right: .5em;\n writing-mode: vertical-lr;\n\n }\n\n .layer-label.hidden {\n border-color: transparent;\n }\n\n .layer-label.hidden label {\n visibility: hidden;\n }\n\n\n\n\n#### AlexNet\n\n\n\n\nlayer\n\n[![](diagrams/6.0-universality/AlexNet-0.png \"Unit 55\")](https://microscope.openai.com/models/alexnet/Conv2D_2_0/55)[![](diagrams/6.0-universality/AlexNet-1.png \"Unit 47\")](https://microscope.openai.com/models/alexnet/Conv2D_2_0/47)[![](diagrams/6.0-universality/AlexNet-2.png \"Unit 87\")](https://microscope.openai.com/models/alexnet/Conv2D_2_0/87)[![](diagrams/6.0-universality/AlexNet-3.png \"Unit 33\")](https://microscope.openai.com/models/alexnet/Conv2D_2_0/33)[![](diagrams/6.0-universality/AlexNet-4.png \"Unit 77\")](https://microscope.openai.com/models/alexnet/Conv2D_2_0/77)[![](diagrams/6.0-universality/AlexNet-5.png \"Unit 102\")](https://microscope.openai.com/models/alexnet/Conv2D_2_0/102)\n\n\n\nConv2D\\_2 → conv1\\_1\n\n\nHF-factor\n![](diagrams/6.0-universality/AlexNet-hi.png)\n\n\n![](diagrams/6.0-universality/AlexNet-0-hi.png \"Unit 55\")\n![](diagrams/6.0-universality/AlexNet-1-hi.png \"Unit 47\")\n![](diagrams/6.0-universality/AlexNet-2-hi.png \"Unit 87\")\n![](diagrams/6.0-universality/AlexNet-3-hi.png \"Unit 33\")\n![](diagrams/6.0-universality/AlexNet-4-hi.png \"Unit 77\")\n![](diagrams/6.0-universality/AlexNet-5-hi.png \"Unit 102\")\n\n\nLF-factor\n![](diagrams/6.0-universality/AlexNet-lo.png)\n\n\n![](diagrams/6.0-universality/AlexNet-0-lo.png \"Unit 55\")\n![](diagrams/6.0-universality/AlexNet-1-lo.png \"Unit 47\")\n![](diagrams/6.0-universality/AlexNet-2-lo.png \"Unit 87\")\n![](diagrams/6.0-universality/AlexNet-3-lo.png \"Unit 
33\")\n![](diagrams/6.0-universality/AlexNet-4-lo.png \"Unit 77\")\n![](diagrams/6.0-universality/AlexNet-5-lo.png \"Unit 102\")\n\n\n\n#### InceptionV3\\_slim\n\n\n\n\nlayer\n\n[![](diagrams/6.0-universality/InceptionV3_slim-0.png \"Unit 82\")](https://microscope.openai.com/models/inceptionv3_slim/InceptionV3_InceptionV3_Conv2d_4a_3x3_Relu_0/82)[![](diagrams/6.0-universality/InceptionV3_slim-1.png \"Unit 83\")](https://microscope.openai.com/models/inceptionv3_slim/InceptionV3_InceptionV3_Conv2d_4a_3x3_Relu_0/83)[![](diagrams/6.0-universality/InceptionV3_slim-2.png \"Unit 137\")](https://microscope.openai.com/models/inceptionv3_slim/InceptionV3_InceptionV3_Conv2d_4a_3x3_Relu_0/137)[![](diagrams/6.0-universality/InceptionV3_slim-3.png \"Unit 139\")](https://microscope.openai.com/models/inceptionv3_slim/InceptionV3_InceptionV3_Conv2d_4a_3x3_Relu_0/139)[![](diagrams/6.0-universality/InceptionV3_slim-4.png \"Unit 155\")](https://microscope.openai.com/models/inceptionv3_slim/InceptionV3_InceptionV3_Conv2d_4a_3x3_Relu_0/155)[![](diagrams/6.0-universality/InceptionV3_slim-5.png \"Unit 159\")](https://microscope.openai.com/models/inceptionv3_slim/InceptionV3_InceptionV3_Conv2d_4a_3x3_Relu_0/159)\n\n\n\nConv2d\\_4a → Conv2d\\_3b\n\n\nHF-factor\n![](diagrams/6.0-universality/InceptionV3_slim-hi.png)\n\n\n![](diagrams/6.0-universality/InceptionV3_slim-0-hi.png \"Unit 82\")\n![](diagrams/6.0-universality/InceptionV3_slim-1-hi.png \"Unit 83\")\n![](diagrams/6.0-universality/InceptionV3_slim-2-hi.png \"Unit 137\")\n![](diagrams/6.0-universality/InceptionV3_slim-3-hi.png \"Unit 139\")\n![](diagrams/6.0-universality/InceptionV3_slim-4-hi.png \"Unit 155\")\n![](diagrams/6.0-universality/InceptionV3_slim-5-hi.png \"Unit 159\")\n\n\nLF-factor\n![](diagrams/6.0-universality/InceptionV3_slim-lo.png)\n\n\n![](diagrams/6.0-universality/InceptionV3_slim-0-lo.png \"Unit 82\")\n![](diagrams/6.0-universality/InceptionV3_slim-1-lo.png \"Unit 83\")\n![](diagrams/6.0-universality/InceptionV3_slim-2-lo.png \"Unit 137\")\n![](diagrams/6.0-universality/InceptionV3_slim-3-lo.png \"Unit 139\")\n![](diagrams/6.0-universality/InceptionV3_slim-4-lo.png \"Unit 155\")\n![](diagrams/6.0-universality/InceptionV3_slim-5-lo.png \"Unit 159\")\n\n\n\n#### ResnetV2\\_50\\_slim\n\n\n\n\nlayer\n\n[![](diagrams/6.0-universality/ResnetV2_50_slim-0.png \"Unit 118\")](https://microscope.openai.com/models/resnetv2_50_slim/resnet_v2_50_block2_unit_1_bottleneck_v2_conv2_Relu_0/118)[![](diagrams/6.0-universality/ResnetV2_50_slim-1.png \"Unit 41\")](https://microscope.openai.com/models/resnetv2_50_slim/resnet_v2_50_block2_unit_1_bottleneck_v2_conv2_Relu_0/41)[![](diagrams/6.0-universality/ResnetV2_50_slim-2.png \"Unit 53\")](https://microscope.openai.com/models/resnetv2_50_slim/resnet_v2_50_block2_unit_1_bottleneck_v2_conv2_Relu_0/53)[![](diagrams/6.0-universality/ResnetV2_50_slim-3.png \"Unit 44\")](https://microscope.openai.com/models/resnetv2_50_slim/resnet_v2_50_block2_unit_1_bottleneck_v2_conv2_Relu_0/44)[![](diagrams/6.0-universality/ResnetV2_50_slim-4.png \"Unit 25\")](https://microscope.openai.com/models/resnetv2_50_slim/resnet_v2_50_block2_unit_1_bottleneck_v2_conv2_Relu_0/25)[![](diagrams/6.0-universality/ResnetV2_50_slim-5.png \"Unit 50\")](https://microscope.openai.com/models/resnetv2_50_slim/resnet_v2_50_block2_unit_1_bottleneck_v2_conv2_Relu_0/50)\n\n\n\nB2\\_U1\\_conv2 → B2\\_U1\\_conv1\n\n\nHF-factor\n![](diagrams/6.0-universality/ResnetV2_50_slim-hi.png)\n\n\n![](diagrams/6.0-universality/ResnetV2_50_slim-0-hi.png \"Unit 
118\")\n![](diagrams/6.0-universality/ResnetV2_50_slim-1-hi.png \"Unit 41\")\n![](diagrams/6.0-universality/ResnetV2_50_slim-2-hi.png \"Unit 53\")\n![](diagrams/6.0-universality/ResnetV2_50_slim-3-hi.png \"Unit 44\")\n![](diagrams/6.0-universality/ResnetV2_50_slim-4-hi.png \"Unit 25\")\n![](diagrams/6.0-universality/ResnetV2_50_slim-5-hi.png \"Unit 50\")\n\n\nLF-factor\n![](diagrams/6.0-universality/ResnetV2_50_slim-lo.png)\n\n\n![](diagrams/6.0-universality/ResnetV2_50_slim-0-lo.png \"Unit 118\")\n![](diagrams/6.0-universality/ResnetV2_50_slim-1-lo.png \"Unit 41\")\n![](diagrams/6.0-universality/ResnetV2_50_slim-2-lo.png \"Unit 53\")\n![](diagrams/6.0-universality/ResnetV2_50_slim-3-lo.png \"Unit 44\")\n![](diagrams/6.0-universality/ResnetV2_50_slim-4-lo.png \"Unit 25\")\n![](diagrams/6.0-universality/ResnetV2_50_slim-5-lo.png \"Unit 50\")\n\n\n\n\n[15](#figure-15):\n NMF of high-low frequency detectors in\n\n \n \n AlexNet’s\n [Conv2D\\_2](https://microscope.openai.com/models/alexnet/Conv2D_2_0?models.op.feature_vis.type=channel&models.op.technique=feature_vis) with respect to [conv1\\_1](https://microscope.openai.com/models/alexnet/conv1_1_0?models.op.feature_vis.type=channel&models.op.technique=feature_vis),\n \n \n InceptionV3\\_slim’s\n [Conv2d\\_4a](https://microscope.openai.com/models/inceptionv3_slim/InceptionV3_InceptionV3_Conv2d_4a_3x3_Relu_0?models.op.feature_vis.type=channel&models.op.technique=feature_vis) with respect to [Conv2d\\_3b](https://microscope.openai.com/models/inceptionv3_slim/InceptionV3_InceptionV3_Conv2d_3b_1x1_Relu_0?models.op.feature_vis.type=channel&models.op.technique=feature_vis),\n \n and \n ResnetV2\\_50\\_slim’s\n [B2\\_U1\\_conv2](https://microscope.openai.com/models/resnetv2_50_slim/resnet_v2_50_block2_unit_1_bottleneck_v2_conv2_Relu_0?models.op.feature_vis.type=channel&models.op.technique=feature_vis) with respect to [B2\\_U1\\_conv1](https://microscope.openai.com/models/resnetv2_50_slim/resnet_v2_50_block2_unit_2_bottleneck_v2_conv1_Relu_0?models.op.feature_vis.type=channel&models.op.technique=feature_vis),\n \n showing activations and inhibitions.\n \n\n\nThe feature visualizations of the two factors reveal one clear HF-factor and one clear LF-factor, just like what we found in InceptionV1. Furthermore, the weights on the two factors are again very close to symmetric.\n\n\nOur earlier conclusions therefore also hold across these different networks: high-low frequency detectors are built up from the specific spatial arrangement of a high frequency component and a low frequency component.\n\n\nConclusion\n----------\n\n\nAlthough high-low frequency detectors represent a feature that we didn’t necessarily expect to find in a neural network, we find that we can still explore and understand them using the interpretability tools we’ve built up for exploring circuits: NMF, feature visualization, synthetic stimuli, and more.\n\n \n\nWe’ve also learned that high-low frequency detectors are built up from comprehensible lower-level parts, and we’ve shown how they contribute to later, higher-level features.\n\n Finally, we’ve seen that high-low frequency detectors are common across multiple network architectures.\n \n\n\nGiven the universality observations, we might wonder whether the existence of high-low frequency detectors isn’t so unnatural after all. We even find [approximate](https://microscope.openai.com/models/alexnet_caffe_places365/conv2_Conv2D_0/91) high-low frequency detectors in AlexNet Places, with its substantially different training data. 
Beyond neural networks, the aesthetic quality imparted by the blurriness of an out-of-focus region of an image is already known as to photographers as [*bokeh*](https://en.wikipedia.org/wiki/Bokeh). And in VR, visual blur can either provide an effective depth-of-field cue or, conversely, can induce nausea in the user when implemented in a dissonant way. Perhaps frequency detection might well be commonplace in both natural and artificial vision systems as yet another type of informational cue.\n\n\nNevertheless, whether their existence is natural or not, we find that high-low frequency detectors are possible to characterize and understand.\n\n\n\n![](images/multiple-pages.svg)\n\n This article is part of the Circuits thread, a collection of short articles and commentary by an open scientific collaboration delving into the inner workings of neural networks. \n\n\n\n\n[Naturally Occurring Equivariance in Neural Networks](/2020/circuits/equivariance/)\n[Curve Circuits](/2020/circuits/curve-circuits/)", "date_published": "2021-01-27T20:00:00Z", "authors": ["Ludwig Schubert", "Chelsea Voss", "Nick Cammarata", "Gabriel Goh", "Chris Olah"], "summaries": ["A family of early-vision neurons reacting to directional transitions from high to low spatial frequency."], "doi": "10.23915/distill.00024.005", "journal_ref": "distill-pub", "bibliography": [{"link": "https://www.biorxiv.org/content/early/2019/10/20/808907", "title": "Discrete neural clusters encode orientation, curvature and corners in macaque V4"}, {"link": "https://doi.org/10.23915/distill.00024.002", "title": "An Overview of Early Vision in InceptionV1"}, {"link": "https://arxiv.org/pdf/1409.4842.pdf", "title": "Going deeper with convolutions"}, {"link": "https://doi.org/10.23915/distill.00024.001", "title": "Zoom In: An Introduction to Circuits"}, {"link": "https://www.researchgate.net/profile/Aaron_Courville/publication/265022827_Visualizing_Higher-Layer_Features_of_a_Deep_Network/links/53ff82b00cf24c81027da530.pdf", "title": "Visualizing higher-layer features of a deep network"}, {"link": "https://arxiv.org/pdf/1311.2901.pdf", "title": "Visualizing and understanding convolutional networks"}, {"link": "http://yosinski.com/media/papers/Yosinski__2015__ICML_DL__Understanding_Neural_Networks_Through_Deep_Visualization__.pdf", "title": "Understanding neural networks through deep visualization"}, {"link": "https://arxiv.org/pdf/1506.02078.pdf", "title": "Visualizing and understanding recurrent networks"}, {"link": "https://distill.pub/2017/feature-visualization", "title": "Feature Visualization"}, {"link": "https://doi.org/10.1109/CVPR.2009.5206848", "title": "ImageNet: A large-scale hierarchical image database"}, {"link": "https://arxiv.org/pdf/1511.07543.pdf", "title": "Convergent learning: Do different neural networks learn the same representations?"}, {"link": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf", "title": "ImageNet Classification with Deep Convolutional Neural Networks"}, {"link": "http://arxiv.org/pdf/1602.07261.pdf", "title": "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning"}, {"link": "http://arxiv.org/pdf/1512.03385.pdf", "title": "Deep Residual Learning for Image Recognition"}]} {"id": "e3fcaaea154f7c22f0e1e2cc814837d5", "title": "Naturally Occurring Equivariance in Neural Networks", "url": "https://distill.pub/2020/circuits/equivariance", "source": "distill", "source_type": "blog", "text": "![](images/multiple-pages.svg)\n\n This 
article is part of the [Circuits thread](/2020/circuits/), an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks.

[Curve Detectors](/2020/circuits/curve-detectors/)
[High-Low Frequency Detectors](/2020/circuits/frequency-edges/)

### Contents

[Equivariant Features](#features)
* [Rotation](#rotation-features), [Scale](#scale-features), [Hue](#hue-features), [Hue-Rotation](#hue-rotation-features), [Reflection](#reflection-features), [Miscellaneous](#miscellaneous-features)

Equivariant Circuits
* [High-Low](#circuit-hilo), [Contrast→Center](#circuit-greenpurple), [BW-Color](#circuit-BW), [Line→Circle/Divergence](#circuit-circlestar), [Curve→Circle/Evolute](#circuit-circle-evolute), [Human-Animal](#circuit-person), [Invariant Dog Head](#circuit-dog), [Hue→Hue](#circuit-hue), [Curve→Curve](#circuit-curve), [Contrast→Line](#circuit-coloredge)

[Equivariant Architectures](#architectures)
[Conclusion](#conclusion)

Convolutional neural networks contain a hidden world of symmetries within themselves. This symmetry is a powerful tool in understanding the features and circuits inside neural networks. It also suggests that efforts to design neural networks with additional symmetries baked in (e.g. ) may be on a promising track.

To see these symmetries, we need to look at the individual neurons inside convolutional neural networks and the [circuits](https://distill.pub/2020/circuits/zoom-in/#claim-2) that connect them. It turns out that many neurons are slightly transformed versions of the same basic feature. This includes rotated copies of the same feature, scaled copies, flipped copies, features detecting different colors, and much more. We sometimes call this phenomenon “[equivariance](https://en.wikipedia.org/wiki/Equivariant_map),” since it means that switching the neurons is equivalent to transforming the input.

The standard definition of [equivariance](https://en.wikipedia.org/wiki/Equivariant_map) in group theory is that a function $f$ is equivariant if for all $g \in G$, it's the case that $f(g \cdot x) = g \cdot f(x)$. At first blush, this doesn't seem very relevant to transformed versions of neurons.

Before we talk about the examples introduced in this article, let's talk about how this definition maps to the classic example of equivariance in neural networks: translation and convolutional neural nets. In a conv net, translating the input image is equivalent to translating the neurons in the hidden layers (ignoring pooling, striding, etc.). Formally, $g \in Z^2$ and $f$ maps images to hidden-layer activations. Then $g$ acts on the input image $x$ by translating it spatially, and acts on the activations by also spatially translating them.

Now let's consider the case of curve detectors (the first example in the Equivariant Features section), which have ten rotated copies. In this case, $g \in Z_{10}$ and $f(x) = (\mathrm{curve}_1(x), \ldots)$ maps a position in an image to a ten-dimensional vector describing how much each curve detector fires. Then $g$ acts on the input image $x$ by rotating it around that position, and $g$ acts on the hidden layers by reorganizing the neurons so that their orientations correspond to the appropriate rotations. This satisfies, at least approximately, the original definition of equivariance.
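To make the definition concrete, here is a minimal numerical sketch of the "permute the neurons ↔ transform the input" correspondence. It uses four 90-degree-rotated copies of a toy filter (rather than ten rotated curve detectors) so that the rotation action on images is exact; the filter and image are made up for illustration, not taken from the article's model.

```python
import numpy as np
from scipy.signal import correlate2d

# A toy asymmetric filter and its four 90-degree rotated copies.
base = np.array([[0., 1., 0.],
                 [1., 0., 0.],
                 [0., 1., 0.]])
filters = [np.rot90(base, k) for k in range(4)]

def f(x):
    """Map an image to a 4-vector: each rotated filter's response at the image center."""
    responses = [correlate2d(x, w, mode="valid") for w in filters]
    c = responses[0].shape[0] // 2
    return np.array([r[c, c] for r in responses])

rng = np.random.default_rng(0)
x = rng.normal(size=(9, 9))

# g acts on the image by rotating it about the center, and on the
# 4-vector of responses by cyclically permuting its entries.
for k in range(4):
    lhs = f(np.rot90(x, k))   # f(g . x)
    rhs = np.roll(f(x), k)    # g . f(x)
    assert np.allclose(lhs, rhs), (k, lhs, rhs)
print("f(g.x) == g.f(x) for all four rotations")
```

The same check, with finer rotation steps and real curve-detector activations in place of the toy filter, is essentially what the rotated-stimuli experiments discussed below measure.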
\n \n This transformed neuron form of equivariance is a special case of equivariance. There are many ways a neural network could be equivariant without having transformed versions of neurons. Conversely, we’ll also see a number of examples of equivariance that don’t map exactly to the group theory definition of equivariance: some have “holes” where a transformed neuron is missing, while others consist of a set of transformations that have a weaker structure than a group or don’t correspond to a simple action on the image. But this general structure remains.\n\n\nEquivariance can be seen as a kind of ”[circuit\n motif](https://distill.pub/2020/circuits/zoom-in/#claim-2-motifs),” an abstract recurring pattern across circuits analogous to motifs in systems biology .\n It can also be seen as a kind of larger-scale “structural phenomenon” (similar to [weight banding](/2020/circuits/weight-banding/) and [branch\n specialization](https://distill.pub/2020/circuits/branch-specialization/)), since a given equivariance type is often widespread in some layers and rare in others.\n\n\n\n In this article, we’ll focus on examples of equivariance in InceptionV1 trained\n on ImageNet, but we’ve observed at least some equivariance in every model\n trained on natural images we’ve studied.\n \n \n \n\n\n\n\n---\n\n \n \n\nEquivariant Features\n--------------------\n\n\n**Rotational Equivariance:** One example of equivariance is rotated versions of the same feature. These are especially common in [early vision](https://distill.pub/2020/circuits/early-vision/), for example [curve detectors](https://distill.pub/2020/circuits/curve-detectors/), [high-low frequency detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3a_high_low_frequency), and [line detectors](https://distill.pub/2020/circuits/early-vision/#group_conv2d2_line).\n \n \n\n\n\n\n\n\nRotational Equivariance\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSome rotationally equivariant features wrap around at 180 degrees due to symmetry.\n\n(There are even units which wrap around at 90 degrees, such as hatch texture detectors.)\n\n\n\n\n\n\n\n\nCurve Detectors\n\n\n\n\n\n\n\n\nRotational Equivariance (mod 180)\n\n\n\n\n\n\nHigh-Low Frequency Detectors\nEdge Detectors\nLine Detectors\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nOne can test that these are genuinely rotated versions of the same feature by taking examples that cause one to fire, rotating them, and checking that the others fire as expected. The [article](https://distill.pub/2020/circuits/curve-detectors/) on curve detectors tests their equivariance through several experiments, including rotating stimuli that activate one neuron and seeing how the others respond.\n \n \n \n\n\n[![](images/curve-rotate-small.png)](https://distill.pub/2020/circuits/curve-detectors/#joint-tuning-curves)\nOne way to verify that units like curve detectors are truly rotated versions of the same feature is to take stimuli that activate one and see how they fire as you rotate the stimuli. [Learn more.](https://distill.pub/2020/circuits/curve-detectors/#joint-tuning-curves)\n\n**Scale Equivariance:** Rotated versions aren’t the only kind of variation we see. It’s also quite common to see the same feature at different scales, although usually the scaled features occur at different layers. 
For example, we see [circle detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_circles_loops) across a huge variety of scales, with the small ones in early layers and the large ones in later layers.

[Figure: Scale Equivariance — Circle Detectors]

**Hue Equivariance:** For color-detecting features, we often see variants detecting the same thing in different hues. For example, [color center-surround](https://distill.pub/2020/circuits/early-vision/#group_mixed3a_color_center_surround) units will detect one hue in the center, and the opposing hue around it. Units can be found doing this up until the seventh or even eighth layer of InceptionV1.

[Figure: Hue Equivariance — Color Center-Surround Detectors]

**Hue-Rotation Equivariance:** In early vision, we very often see [color contrast units](https://distill.pub/2020/circuits/early-vision/#group_conv2d2_color_contrast). These units detect one hue on one side, and the opposite hue on the other. As a result, they have variation in both hue and rotation. These variations are particularly interesting because there's an interaction between hue and rotation: cycling hue by 180 degrees flips which hue is on which side, and so is equivalent to rotating by 180 degrees.

In the following diagram, we show orientation rotating the whole 360 degrees, but hue only rotating 180. At the bottom of the chart, it wraps around to the top but shifts by 180 degrees.

[Figure: Rotational and Hue Equivariance — Color Contrast Detectors; (hue+180, orientation) = (hue, orientation+180)]

**Reflection Equivariance:** As we move into the mid layers of the network, rotated variations become less prominent, but horizontally flipped pairs become quite prevalent.

[Figure: Horizontal Flip — dog snout detectors, S-curve detectors, and human-beside-animal detectors]

**Miscellaneous Equivariance:** Finally, we see variations of features transformed in other miscellaneous ways. For example, short vs long-snouted versions of the same dog head features, or human vs dog versions of the same feature. We even see units which are equivariant to camera perspective (found in a Places365 model).
These aren’t necessarily something that we would classically think of as forms of equivariance, but do seem to essentially be the same thing.\n \n \n\n\n\n\n\n\n\n\nSnout Length\nHuman vs Dog\nPerspective\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n\n---\n\n \n \n\nEquivariant Circuits\n--------------------\n\n\nThe equivariant behavior we observe in neurons is really a reflection of a deeper symmetry that exists in the weights of neural networks and the [circuits](https://distill.pub/2020/circuits/zoom-in/#claim-2) they form.\n \n \n\nWe’ll start by focusing on rotationally equivariant features that are formed from rotationally invariant features. This “invariant→equivariant” case is probably the simplest form of equivariant circuit. Next, we’ll look at “equivariant→invariant” circuits, and then finally the more complex “equivariant→equivariant” circuits.\n \n \n \n\n\n\n**High-Low Circuit:** In the following example, we see [high-low frequency detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3a_high_low_frequency) get built from a high-frequency factor and a low-frequency factor (both factors correspond to a combination of neurons in the previous layer). Each high-low frequency detector responds to a transition in frequency in a given direction, detecting high-frequency patterns on one side, and low frequency patterns on the other. Notice how the same weight pattern rotates, making rotated versions of the feature.\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npositive (excitation)\nnegative (inhibition)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n respond to a high-frequency neuron factor on one side and low frequency on the other. Notice how the weights rotate:\nHigh-low frequency detectors\n\nThis makes them rotationally equivariant.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n**Contrast→Center Circuit:** This same pattern can be used in reverse to turn rotationally equivariant features back into rotationally invariant features (an “equivariant→invariant” circuit). In the following example, we see several green-purple [color contrast detectors](https://distill.pub/2020/circuits/early-vision/#group_conv2d2_color_contrast) get combined to create green-purple and purple-green [center-surround detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3a_color_center_surround).\n \n Compare the weights in this circuit to the ones in the previous one. It’s literally the same weight pattern transposed.\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nRotational equivariance can be turned into invariance with the transpose of an \ninvariant -> equivariant circuit.\n\nHere, we see (rotationally equivariant) combine to make (rotationally invariant). 
Again, notice how the weights rotate, forming the same pattern we saw above with high-low frequency detectors, but with inputs and outputs swapped.\ncolor contrast unitscolor center surround units\n\npositive (excitation)\nnegative (inhibition)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSometimes we see one of these immediately follow the other: equivariance be created, and then immediately partially used to create invariant units.\n \n \n \n\n\n\n**BW-Color Circuit:** In the following example, a generic color factor and a black and white factor are used to create black and white vs color features. Later, the [black and white vs color features](https://distill.pub/2020/circuits/early-vision/#group_mixed3a_bw_vs_color) are combined to create units which detect black and white at the center, but color around, or vice versa.\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nBut then one major use of the equivariant units is combining them into rotationally invariant center surround units.\nFirst, rotationally equivariant “black and white vs color” units are formed from mostly invariant features.\n\npositive (excitation)\nnegative (inhibition)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n**Line→Circle/Divergence Circuit:** Another example of equivariant features being combined to create invariant features is very early line-like [complex Gabor detectors](https://distill.pub/2020/circuits/early-vision/#group_conv2d1_complex_gabor) being combined to create a small circle unit and diverging lines unit.\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npositive (excitation)\n\nnegative (inhibition)\n\n\n\nA is created by detecting early edges to a normal line.\ncircle detectorperpendicular\n\n\nA is created by detecting early edges to a normal line.\ndiverging line detectorparallel\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n**Curve→Circle/Evolute Circuit:** For a more complex example of rotational equivariance being combined to create invariant units, we can look at [curve detectors](https://distill.pub/2020/circuits/curve-detectors/) being combined to create [circle](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_circles_loops) and [evolute detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_evolute). This circuit is also an example of scale equivariance. The same general pattern which turns small curve detectors into a small circle detector turns large curve detectors into a large circle detector. 
The same pattern which turns medium curve detectors into a medium evolute detector turns large curves into a large evolute detector.\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nconv2d -> mixed3a\nSmall circle from curves \n\n\n\nmixed3a -> mixed3b\nMedium evolute from curves \n\n\n\nmixed3b -> mixed4a\nLarge circle and evolute from curves \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n**Human-Animal Circuit:** So far, all of the examples we’ve seen of circuits have involved rotation. These human-animal and animal-human detectors are an example of horizontal flip equivariance instead:\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npositive (excitation)\n\n\nnegative (inhibition)\n\n\n\nHuman detectors excite the human side of each unit and inhibit the other.\nOther units (mainly dog detectors) inhibit the human side of each unit and excite the other.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n**Invariant Dog Head Circuit:** Conversely, this example (part of the broader [oriented dog head circuit](https://distill.pub/2020/circuits/zoom-in/#claim-2-dog)) shows left and right oriented dog heads get combined into a pose invariant dog head detector. Notice how the weights flip.\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n### “Equivariant→Equivariant” Circuits\n\n\nThe circuits we’ve looked at so far were either “invariant→equivariant” or “equivariant→invariant.” Either they had invariant input units, or invariant output units. Circuits of this form are quite simple: the weights rotate, or flip, or otherwise transform, but only in response to the transformation of a single feature. When we look at “equivariant→equivariant” circuits, things become a bit more complex. Both the input and output features transform, and we need to consider the relative relationship between the two units.\n \n \n \n\n\n\n**Hue→Hue Circuit:** Let’s start with a circuit connecting two sets of hue-equivariant center-surround detectors. 
Each unit in the second layer is excited by the unit selecting for a similar hue in the previous layer.\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nEach unit is excited by the unit with the same hue in the previous layer.\nThe weights between all selected color center-surround units. Units are excited by units with the same, and tend to be inhibited by those with slightly different hues. (In early layers, very different hues inhibit; by later layers very different colors are already distinguished and inhibition focuses on similar colors.)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nTo understand the above, we need to focus on the relative relationships between each input and output neuron — in this case, how far the hues are apart on the color wheel. When they have the same hue, the relationship is excitatory. When they have close but different hues, it’s inhibitory. And when they are very different, the weight is close to zero. The units used to illustrate hue equivariance here were selected to have a straightforward circuit. Other units may have more complex relationships. For example, some units respond to a range of hues like yellow-red and have correspondingly more complex weights.\n \n \n\n\n\n**Curve→Curve Circuit:** Let’s now consider a slightly more complex example, how early curve detectors connect to late curve detectors. We’ll focus on four [curve detectors](https://distill.pub/2020/circuits/curve-detectors/) that are 90 degrees rotated from each other.Again, the curve detectors presented were selected to make the circuit as simple and pedagogical as possible. They have clean weights and even spacing between them, which will make the pattern easier to see. A forthcoming article will discuss curve circuits in detail.\n\n\nIf we just look at the matrix of weights, it’s a bit hard to understand. But if we focus on how each curve detector connects to the earlier curve in the same and opposite orientations, it becomes easier to see the structure. Rather than each curve being built from the same neurons in the previous layer, they shift. Each curve is excited by curves in the same orientation and inhibited by those in the opposite. 
At the same time, the spatial structure of the weights also rotate.\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nWeights from each of the four curve detectors displayed to each of the other four. Notice how the diagonal is excitatory ( ), while the off-diagonal is inhibitory ( ). The weights shown to the left are a subset of these.\nEach curve inhibits the curve in the opposite orientation in the next layer along its tangent. Notice how the weights rotate:\nEach curve excites the curve in the same orientation in the next layer along its tangent. Notice how the weights rotate:\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n**Contrast→Line Circuit:** For a yet more complex example, let’s look at how [color contrast detectors](https://distill.pub/2020/circuits/early-vision/#group_conv2d2_color_contrast) connect to [line detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3a_lines). The general idea is line detectors should fire more strongly if there are different colors on each side of the line. Conversely, they should be inhibited by a change in color if it is perpendicular to the line.\n \n \n\nNote that this is an “equivariant→equivariant” circuit with respect to rotation, but “equivariant→invariant” with respect to hue.\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n roughly arranged by orientation and hue.\nColor contrast detectors\n\nOne of the downstream roles of color contrast detectors is to make line detectors respond to changes in color across the line. 
In this circuit, we will see how this is done.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nA (slightly tilted) is excited by horizontal color contrasts of all hues, and inhibited by vertical ones.\nhorizontal line detector\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nA is excited by vertical color contrasts of all hues, and inhibited by horizontal ones.\nvertical line detector\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n\n---\n\n \n \n \n\nEquivariant Architectures\n-------------------------\n\n\n\n Equivariance has a rich history in deep learning.\n Many important neural network architectures have equivariance at their core, and there is a very active thread of research around more aggressively incorporating equivariance.\n But the focus is normally on designing equivariant architectures, rather than “natural equivariance” we’ve discussed so far. \n How should we think about the relationship between “natural” and “designed” equivariance?\n As we’ll see, there appears to be quite a deep connection.\n \n \n\n\n Historically, there has been some interesting back and forth between the two.\n Researchers have often observed that many features in the first layer of neural networks are transformed versions of one basic template.Features in the first layer of neural networks are much more often studied than in other layers. This is because they are easy to study: you can just visualize the weights to pixel values, or more generally to input features. \n This naturally occurring equivariance in the first layer has then sometimes been — and in other cases, easily could have been — inspiration for the design of new architectures.\n \n \n \n \n\nFor example, if you train a fully-connected neural network on a visual task, the first layer will learn variants of the same features over and over: Gabor filters at different positions, orientations, and scales. Convolutional neural networks changed this. By baking the existence of translated copies of each feature directly into the network architecture, they generally remove the need for the network to learn translated copies of each feature. This resulted in a massive increase in statistical efficiency, and became a cornerstone of modern deep learning approaches to computer vision. 
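As a concrete picture of what "variants of the same feature over and over" means, the sketch below generates a small bank of Gabor filters from a single template by sweeping orientation and scale. The formula is the standard Gabor parameterization; the specific size, wavelengths, and sigma values are illustrative only. A fully-connected first layer has to learn each of these filters weight-by-weight, while an equivariant architecture shares one template across the transformations.

```python
import numpy as np

def gabor(size, wavelength, theta, sigma, gamma=0.5, psi=0.0):
    """A single Gabor filter: an oriented sinusoid under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates by theta
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength + psi)
    return envelope * carrier

# One template, many transformed copies: 8 orientations at 2 scales.
orientations = np.linspace(0, np.pi, 8, endpoint=False)
scales = [(4.0, 2.0), (8.0, 4.0)]   # (wavelength, sigma) pairs, illustrative values
bank = np.stack([gabor(15, wl, th, sg)
                 for (wl, sg) in scales
                 for th in orientations])
print(bank.shape)  # (16, 15, 15): 2 scales x 8 orientations
```

Convolution shares each of these templates across spatial positions; the group-convolution approaches discussed below extend the same kind of sharing to orientation and scale.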
But if we look at the first layer of a well-trained convolutional neural network, we see that other transformed versions of the same feature remain:

![](images/slim_conv1.png)
The weights for the units in the first layer of the TF-Slim version of InceptionV1 . We show the first-layer conv weights of the tf-slim version of InceptionV1 rather than the canonical one because its weights are cleaner. This is likely due to the inclusion of batch norm in the slim variant, causing cleaner gradients. Units are sorted by the first principal component of the adjacency matrix between the first and second layers. Note how many features are similar except for rotation, scale, and hue.

Inspired by this, a 2011 paper subtitled "One Gabor to Rule Them All" created a sparse coding model which had a single Gabor filter translated, rotated, and scaled. In more recent years, a number of papers have extended this equivariance to the hidden layers of neural networks, and to broader kinds of transformations . Just as convolutional neural networks enforce that the weights between two features be the same if they have the same relative position:

$$W_{(x_1,~y_1,~a) ~\to~ (x_2,~y_2,~b)} ~~=~~ W_{(x_1+\Delta x,~y_1+\Delta y,~a) ~\to~ (x_2+\Delta x,~y_2+\Delta y,~b)}$$

… these more sophisticated equivariant networks make the weights between two neurons equal if they have the same relative relationship under more general transformations:

$$W_{a ~\to~ b} ~~=~~ W_{T(a) ~\to~ T(b)}$$

For our purposes, it suffices to know that these equivariant neural networks have the same weights when there is the same relative relationship between neurons. The following background is for the benefit of readers who may wish to engage more deeply in the enforced equivariance literature, and can be safely skipped.

Group theory is an area of mathematics that gives us tools for describing symmetries and sets of interacting transformations. To build equivariant neural networks, we often borrow an idea from group theory called a group convolution. Just as a regular convolution can describe weights that correctly respect translational equivariance, a group convolution can describe weights that respect a complex set of interacting transformations (the group it operates over). Although you could try to work out how to tie the weights to achieve this from first principles, it's easy to make mistakes. (One of the authors participated in many conversations with researchers in 2012 where people made errors on whiteboards about how sets of rotated and translated features should interact, without using convolutions.) Group convolutions can take any group you describe and give you the correct weight tying.

For an approachable introduction to group convolutions, we recommend [this article](https://colah.github.io/posts/2014-12-Groups-Convolution/).

If you dig further, you may begin to see papers discussing something called a group representation instead of group convolutions. This is a more advanced topic in group theory. The core idea is analogous to the Fourier transform. Recall that the Fourier transform turns convolution into pointwise multiplication (this is sometimes used to accelerate convolution). Well, the Fourier transform has a version that can operate over functions on groups, and also maps convolution to pointwise multiplication. And when you apply the Fourier transform to a group, the resulting coefficients correspond to something called a group representation, which you can think of as being analogous to a frequency in the regular Fourier transform.

This is, at least approximately, what we saw conv nets naturally doing when we looked at equivariant circuits! The weights had symmetries that caused neurons with similar relationships to have similar weights, much like an equivariant architecture would force them to.

Given that we have neural network architectures which mimic the natural structures we observe, it seems natural to wonder what features and circuits such models learn. Do they learn the same equivariant features we see naturally form? Or do they do something entirely different? To answer these questions, we trained an equivariant model roughly inspired by InceptionV1 on ImageNet. We made half the neurons rotationally equivariant (with 16 rotations), and made the others rotationally invariant. Since we put no effort into tuning it, the model achieved abysmal test accuracy but still learned interesting features. Here is the full set of features learned by the equivariant model. Half are forced to be rotationally equivariant, while half are forced to be rotationally invariant.

![](images/equiv.png)

Looking at mixed3b, we found that the equivariant model learned analogues of many large rotationally equivariant families from InceptionV1, such as [curve detectors](https://distill.pub/2020/circuits/curve-detectors/), [boundary detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_boundary), [divot detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_divots), and [oriented fur detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_generic_oriented_fur):

[Figure: Naturally occurring rotationally equivariant features alongside analogous features found in a model where some units are forced to have 16 rotated copies. Curve detectors respond to curves in various orientations; oriented fur detectors detect fur parting in a particular way; boundary detectors use multiple cues to detect oriented boundaries of objects; divot detectors look for sharp curves sticking out.]

The existence of analogous features in equivariant models can be seen as a successful prediction of interpretability. As researchers engaged in more qualitative research, we should always be worried that we may be fooling ourselves.
\n Successfully predicting which features will form in an equivariant neural network architecture is actually a pretty non-trivial prediction to make, and a nice confirmation that we’re correctly understanding things.\n \n \n\n\n Another exciting possibility is that this kind of feature and circuit analysis may be able to help inform equivariance research.\n For example, the kinds of equivariance that naturally form might be helpful in informing what types of equivariance we should design into different layers of a neural network.\n \n \n\n\n\n\n\n---\n\n\nConclusion\n----------\n\n\nEquivariance has a remarkable ability to simplify our understanding of neural networks. When we see neural networks as families of features, interacting in structured ways, understanding small templates can actually turn into understanding how large numbers of neurons interact. Equivariance is a big help whenever we discover it.\n \n \n\nWe sometimes think of understanding neural networks as being like reverse engineering a regular computer program. In this analogy, equivariance is like finding the same inlined function repeated throughout the code. Once you realize that you’re seeing many copies of the same function, you only need to understand it once.\n \n \n\nBut natural equivariance does have some limitations. For starters, we have to find the equivariant families. This can actually take us quite a bit of work, poring through neurons. Further, they may not be exactly equivariant: one unit may be wired up slightly differently, or have a small exception, and so understanding it as equivariant could leave gaps in our understanding.\n \n \n\nWe’re excited about the potential of equivariant architectures to make the features and circuits of neural networks easier to understand. This seems especially promising in the context of early vision, where the vast majority of features seem to be equivariant to rotation, hue, scale, or a combination of those.\n \n \n\nOne of the biggest — and least discussed — advantages we have over neuroscientists in studying vision in artificial neural networks instead of biological neural networks is translational equivariance. By only having one neuron for each feature instead of tens of thousands of translated copies, convolutional neural networks massively reduce the complexity of studying artificial vision systems relative to biological ones. This has been a key ingredient in making it at all plausible that we can systematically understand InceptionV1.\n \n \n\nPerhaps in the future, the right equivariant architecture will be able to shave another order of magnitude of complexity off of understanding early vision in neural networks. If so, understanding early vision might move from “possible with effort” to “easily achievable.”\n \n \n \n \n\n\n\n\n \n\n\n![](images/multiple-pages.svg)\n\n This article is part of the Circuits thread, a collection of short articles and commentary by an open scientific collaboration delving into the inner workings of neural networks. 
\n\n\n\n\n[Curve Detectors](/2020/circuits/curve-detectors/)\n[High-Low Frequency Detectors](/2020/circuits/frequency-edges/)", "date_published": "2020-12-08T20:00:00Z", "authors": ["Chris Olah", "Nick Cammarata", "Ludwig Schubert", "Gabriel Goh"], "summaries": ["Neural networks naturally learn many transformed copies of the same feature, connected by symmetric weights."], "doi": "10.23915/distill.00024.004", "journal_ref": "distill-pub", "bibliography": [{"link": "https://arxiv.org/pdf/1109.6638.pdf", "title": "The statistical inefficiency of sparse coding for images (or, one Gabor to rule them all)"}, {"link": "http://proceedings.mlr.press/v48/cohenc16.pdf", "title": "Group equivariant convolutional networks"}, {"link": "https://arxiv.org/pdf/1602.02660.pdf", "title": "Exploiting cyclic symmetry in convolutional neural networks"}, {"link": "https://arxiv.org/pdf/1612.08498.pdf", "title": "Steerable CNNs"}, {"link": "https://arxiv.org/pdf/1802.08219.pdf", "title": "Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds"}, {"link": "https://arxiv.org/pdf/1804.04656.pdf", "title": "3D G-CNNs for pulmonary nodule detection"}, {"link": "https://doi.org/10.1201/9781420011432", "title": "An introduction to systems biology: design principles of biological circuits"}, {"link": "https://arxiv.org/pdf/1409.4842.pdf", "title": "Going deeper with convolutions"}, {"link": "https://www.researchgate.net/profile/Li_Jia_Li/publication/221361415_ImageNet_a_Large-Scale_Hierarchical_Image_Database/links/00b495388120dbc339000000/ImageNet-a-Large-Scale-Hierarchical-Image-Database.pdf", "title": "Imagenet: A large-scale hierarchical image database"}, {"link": "https://arxiv.org/pdf/1610.02055.pdf", "title": "Places: An image database for deep scene understanding"}, {"link": "https://ai.googleblog.com/2016/08/tf-slim-high-level-library-to-define.html", "title": "TF-Slim: A high level library to define complex models in TensorFlow"}]} {"id": "9025e9a91810edf582669aac3764b350", "title": "Understanding RL Vision", "url": "https://distill.pub/2020/understanding-rl-vision", "source": "distill", "source_type": "blog", "text": "### Contents\n\n\n[Introduction](#introduction)\n[Our CoinRun model](#coinrun)\n[Model analysis](#analysis)\n* [Dissecting failure](#dissecting-failure)\n* [Hallucinations](#hallucinations)\n* [Model editing](#model-editing)\n\n\n[The diversity hypothesis](#diversity-hypothesis)\n[Feature visualization](#feature-visualization)\n[Attribution](#attribution)\n[Questions for further research](#questions)\n\n\n\n\n In this article, we apply interpretability techniques to a reinforcement learning (RL) model trained to play the video game CoinRun . Using attribution combined with dimensionality reduction as in , we build an interface for exploring the objects detected by the model, and how they influence its value function and policy. We leverage this interface in several ways.\n \n\n\n* **[Dissecting failure](#dissecting-failure).** We perform a step-by-step analysis of the agent’s behavior in cases where it failed to achieve the maximum reward, allowing us to understand what went wrong, and why. For example, one case of failure was caused by an obstacle being temporarily obscured from view.\n* **[Hallucinations](#hallucinations).** We find situations when the model “hallucinated” a feature not present in the observation, thereby explaining inaccuracies in the model’s value function. 
These were brief enough that they did not affect the agent’s behavior.\n* **[Model editing](#model-editing).** We hand-edit the weights of the model to blind the agent to certain hazards, without otherwise changing the agent’s behavior. We verify the effects of these edits by checking which hazards cause the new agents to fail. Such editing is only made possible by our previous analysis, and thus provides a quantitative validation of this analysis.\n\n\n\n Our results depend on levels in CoinRun being procedurally-generated, leading us to formulate a [diversity hypothesis](#diversity-hypothesis) for interpretability. If it is correct, then we can expect RL models to become more interpretable as the environments they are trained on become more diverse. We provide evidence for our hypothesis by measuring the relationship between interpretability and generalization.\n \n\n\n\n Finally, we provide a thorough [investigation](#feature-visualization) of several interpretability techniques in the context of RL vision, and pose a number of [questions](#questions) for further research.\n \n\n\n\nOur CoinRun model\n-----------------\n\n\n\n CoinRun is a side-scrolling platformer in which the agent must dodge enemies and other traps and collect the coin at the end of the level.\n \n\n\n\n\n\n\nOur trained model playing CoinRun. **Left**: full resolution. **Right**: 64x64 RGB observations given to the model.\n\n\n CoinRun is procedurally-generated, meaning that each new level encountered by the agent is randomly generated from scratch. This incentivizes the model to learn how to spot the different kinds of objects in the game, since it cannot get away with simply memorizing a small number of specific trajectories .We use the original version of CoinRun , not the version from Procgen Benchmark , which is slightly different. To play CoinRun yourself, please follow the instructions [here](https://github.com/openai/coinrun).\n\n\n\n\n Here are some examples of the objects used, along with walls and floors, to generate CoinRun levels.\n \n\n\n\n\n\n| | | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| Full resolution | \n | | | \n | \n | | |\n| Model resolution | \n | | | \n | \n | | |\n| | The agent, in mid air (left) and about to jump (right). The agent also appears in beige, blue and green. | Coins, which have to be collected. | Stationary buzzsaw obstacles, which must be dodged. | Enemies, which must be dodged, moving left and right. There are several alternative sprites, all with white trails. | Boxes, which the agent can both move past and land on top of. | Lava at the bottom of a chasm. | The velocity info painted into the top left of each observation, indicating the agent’s horizontal and vertical velocities.Painting in the velocity info allows the model to infer the agent’s motion from a single frame. The shade of the left square indicates the agent’s horizontal velocity (black for left at full speed, white for right at full speed), and the shade of the right square indicates the agent’s vertical velocity (black for down at full speed, white for up at full speed). In this example, the agent is moving forward and about to land (and is thus moving right and down). |\n\n\n\n\n There are 9 actions available to the agent in CoinRun:\n \n\n\n\n\n\n| | | | |\n| --- | --- | --- | --- |\n| ← | → | | Left and right change the agent’s horizontal velocity. They still work while the agent is in mid-air, but have less of an effect. 
|\n| ↓ | | | Down cancels a jump if used immediately after up, and steps the agent down from boxes. |\n| ↑ | ↖ | ↗ | Up causes the agent to jump after the next non-up action. Diagonal directions have the same effect as both component directions combined. |\n| A | B | C | A, B and C do nothing.The original version of CoinRun only has 1 “do nothing” action, but our version ended up with 3 when “A” and “B” actions were added to be used in other games. For consistency, we have relabeled the original “do nothing” action as “C”. |\n\n\n\n\n We trained a convolutional neural network on CoinRun for around 2 billion timesteps, using PPO , an actor-critic algorithm.We used the standard PPO hyperparameters for CoinRun , except that we used twice as many copies of the environment per worker and twice and many workers. The effect of these changes was to increase the effective batch size, which seemed to be necessary to reach the same performance with our smaller architecture. The architecture of our network is described in [Appendix C](#architecture). We used a non-recurrent network, to avoid any need to visualize multiple frames at once. Thus our model observes a single downsampled 64x64 image, and outputs a value function (an estimate of the total future time-discounted reward) and a policy (a probability distribution over the actions, from which the next action is sampled).\n \n\n\n\n\n\n\n\n\n\n\n\nobservation\n\nCNN\n\n\n\n\n\nvalue function\n\n\n\n\n\n\n\n\n\n\nlogits\n\nsoftmax\npolicy\n\nSchematic of a typical non-recurrent convolutional actor-critic model, such as ours.\n\n\n Since the only available reward is a fixed bonus for collecting the coin, the value function estimates the time-discountedWe use a discount rate of 0.999 per timestep. probability that the agent will successfully complete the level.\n \n\n\nModel analysis\n--------------\n\n\n\n Having trained a strong RL agent, we were curious to see what it had learned. Following , we developed an interface for examining trajectories of the agent playing the game. This incorporates attribution from a hidden layer that recognizes objects, which serves to highlight objects that positively or negatively influence a particular network output. By applying dimensionality reduction, we obtain attribution vectors whose components correspond to different types of object, which we indicate using different colors.\n \n\n\n\n Here is our interface for a typical trajectory, with the value function as the network output. It reveals the model using obstacles, coins, enemies and more to compute the value function.\n \n\n\n\n\n\n\n\n### Dissecting failure\n\n\n\n Our fully-trained model fails to complete around 1 in every 200 levels. We explored a few of these failures using our interface, and found that we were usually able to understand why they occurred.\n \n\n\n\n The failure often boils down to the fact that the model has no memory, and must therefore choose its action based only on the current observation. It is also common for some unlucky sampling of actions from the agent’s policy to be partly responsible.\n \n\n\n\n Here are some cherry-picked examples of failures, carefully analyzed step-by-step.\n \n\n\n\n\n| | |\n| --- | --- |\n| \n **Buzzsaw obstacle obscured by enemy**\n\n\n **Stepping down to avoid jumping**\n\n\n **Landing platform moving off-screen**\n | \n\n The agent moves too far to the right while in mid-air as a result of a buzzsaw obstacle being temporarily hidden from view by a moving enemy. 
The buzzsaw comes back into view, but too late to avoid a collision.\n \n\n The agent presses down in a bid to delay a jump. This causes the agent to inadvertently step down from a box and onto an enemy.\n \n\n The agent fails to move far enough to the right while in mid-air, as a result of the platform where it was intending to land moving below the field of view.\n \n |\n\n\n\n\n\n\nPrev \n\nStart\n\n\n►\n\n\nNext \n\nEnd\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n### Hallucinations\n\n\n\n We searched for errors in the model using generalized advantage estimation (GAE) ,We use the same GAE hyperparameters as in training, namely γ=0.999\\gamma=0.999γ=0.999 and λ=0.95\\lambda=0.95λ=0.95. which measures how successful each action turned out relative to the agent’s expectations. An unusually high or low GAE indicates that either something unexpected occurred, or the agent’s expectations were miscalibrated. Filtering for such timesteps can therefore find problems with the value function or policy.\n \n\n\n\n Using our interface, we found a couple of cases in which the model “hallucinated” a feature not present in the observation, causing the value function to spike.\n \n\n\n\n\n| | |\n| --- | --- |\n| \n **Coin hallucination**\n\n\n **Buzzsaw hallucination**\n | \n\n At one point the value function spiked upwards from 95% to 98% for a single timestep. This was due to a curved yellow-brown shape in the background, which happened to appear next to a wall, being mistaken for a coin.\n \n\n At another point the value function spiked downwards from 94% to 85% for a single timestep. This was due to the agent, colored in gray-blue and crouching against a mottled background, being mistaken for a buzzsaw obstacle. An actual buzzsaw was also present in the observation, but the main effect was from the misjudged agent, as shown by the larger red circle around the agent (hover over the first legend item to isolate).\n \n |\n\n\n\n\n\n\n\n\n### Model editing\n\n\n\n Our analysis so far has been mostly qualitative. To quantitatively validate our analysis, we hand-edited the model to make the agent blind to certain features identified by our interface: buzzsaw obstacles in one case, and left-moving enemies in another. Our method for this can be thought of as a primitive form of [circuit](https://distill.pub/2020/circuits/)-editing , and we explain it in detail in [Appendix A](#model-editing-method).\n \n\n\n\n We evaluated each edit by measuring the percentage of levels that the new agent failed to complete, broken down by the object that the agent collided with to cause the failure. Our results show that our edits were successful and targeted, with no statistically measurable effects on the agent’s other abilities.The data for this plot are as follows. \nPercentage of levels failed due to: buzzsaw obstacle / enemy moving left / enemy moving right / multiple or other: \n- Original model: 0.37% / 0.16% / 0.12% / 0.08% \n- Buzzsaw obstacle blindness: 12.76% / 0.16% / 0.08% / 0.05% \n- Enemy moving left blindness: 0.36% / 4.69% / 0.97% / 0.07% \nEach model was tested on 10,000 levels.\n\n\n\n\n \\*{stroke-linecap:butt;stroke-linejoin:round;} Original modelBuzzsaw obstacleblindnessEnemy moving leftblindness0%2%4%6%8%10%12%Percentage of levels failedFailure rate by causeCausesBuzzsaw obstacleEnemy moving leftEnemy moving rightMultiple or other\nResults of testing each model on 10,000 levels. 
Note that moving enemies can change direction.\n\n\n We did not manage to achieve complete blindness, however: the buzzsaw-edited model still performed significantly better than the original model did when we made the buzzsaws completely invisible.Our results on the version of the game with invisible buzzsaws are as follows. \nPercentage of levels failed due to: buzzsaw obstacle / enemy moving left / enemy moving right / multiple or other: \nOriginal model, invisible buzzsaws: 32.20% / 0.05% / 0.05% / 0.05% \nWe tested the model on 10,000 levels. \nWe experimented briefly with iterating the editing procedure, but were not able to achieve more than around 50% buzzsaw blindness by this metric without affecting the model’s other abilities. This implies that the model has other ways of detecting buzzsaws than the feature identified by our interface.\n \n\n\n\n Here are the original and edited models playing some cherry-picked levels.\n \n\n\n\n\n\n| | | | |\n| --- | --- | --- | --- |\n| \n **Level 1**\n\n\n **Level 2**\n\n\n **Level 3**\n\n\n\n►\n\n\n►\n\n\n►\n\n | \n\n\n\n\n\n\n\n\n\n\n\n\n\n | \n\n\n\n\n\n\n\n\n\n\n\n\n\n | \n\n\n\n\n\n\n\n\n\n\n\n\n\n |\n| | Original model. | Buzzsaw obstacle blindness. | Enemy moving left blindness. |\n\n\n\nThe diversity hypothesis\n------------------------\n\n\n\n All of the above analysis uses the same hidden layer of our network, the third of five convolutional layers, since it was much harder to find interpretable features at other layers. Interestingly, the level of abstraction at which this layer operates – finding the locations of various in-game objects – is exactly the level at which CoinRun levels are randomized using procedural generation. Furthermore, we found that training on many randomized levels was essential for us to be able to find any interpretable features at all.\n \n\n\n\n This led us to suspect that the diversity introduced by CoinRun’s randomization is linked to the formation of interpretable features. We call this the diversity hypothesis:\n \n\n\n\n> \n> Interpretable features tend to arise (at a given level of abstraction) if and only if the training distribution is diverse enough (at that level of abstraction).\n> \n\n\n\n Our explanation for this hypothesis is as follows. For the forward implication (“only if”), we only expect features to be interpretable if they are general enough, and when the training distribution is not diverse enough, models have no incentive to develop features that generalize instead of overfitting. For the reverse implication (“if”), we do not expect it to hold in a strict sense: diversity on its own is not enough to guarantee the development of interpretable features, since they must also be relevant to the task. Rather, our intention with the reverse implication is to hypothesize that it holds very often in practice, as a result of generalization being bottlenecked by diversity.\n \n\n\n\n In CoinRun, procedural generation is used to incentivize the model to learn skills that generalize to unseen levels . However, only the layout of each level is randomized, and correspondingly, we were only able to find interpretable features at the level of abstraction of objects. At a lower level, there are only a handful of visual patterns in the game, and the low-level features of our model seem to consist mostly of memorized color configurations used for picking these out. 
Similarly, the game’s high-level dynamics follow a few simple rules, and accordingly the high-level features of our model seem to involve mixtures of combinations of objects that are hard to decipher. To explore the other convolutional layers, see the interface [here](https://openaipublic.blob.core.windows.net/rl-clarity/attribution/demo/interface.html).


### Interpretability and generalization


To test our hypothesis, we made the training distribution less diverse, by training the agent on a fixed set of 100 levels. This dramatically reduced our ability to interpret the model’s features. Here we display an interface for the new model, generated in the same way as the one [above](#interface). The smoothly increasing value function suggests that the model has memorized the number of timesteps until the end of the level, and the features it uses for this focus on irrelevant background objects. Similar overfitting occurs for other video games with a limited number of levels .

We attempted to quantify this effect by varying the number of levels used to train the agent, and evaluating the 8 features identified by our interface on how interpretable they were. The interfaces used for this evaluation can be found [here](https://openaipublic.blob.core.windows.net/rl-clarity/attribution/finite_levels/index.html). Features were scored based on how consistently they focused on the same objects, and whether the value function attribution made sense – for example, background objects should not be relevant. This process was subjective and noisy, but that may be unavoidable. We also measured the generalization ability of each model, by testing the agent on unseen levels . The data for this plot are as follows.
- Number of training levels: 100 / 300 / 1,000 / 3,000 / 10,000 / 30,000 / 100,000
- Percentage of levels completed (train, run 1): 99.96% / 99.82% / 99.67% / 99.65% / 99.47% / 99.55% / 99.57%
- Percentage of levels completed (train, run 2): 99.97% / 99.86% / 99.70% / 99.46% / 99.39% / 99.50% / 99.37%
- Percentage of levels completed (test, run 1): 61.81% / 66.95% / 74.93% / 89.87% / 97.53% / 98.66% / 99.25%
- Percentage of levels completed (test, run 2): 64.13% / 67.64% / 73.46% / 90.36% / 97.44% / 98.89% / 99.35%
- Percentage of features interpretable (researcher 1, run 1): 52.5% / 22.5% / 11.25% / 45% / 90% / 75% / 91.25%
- Percentage of features interpretable (researcher 2, run 1): 8.75% / 8.75% / 10% / 26.25% / 56.25% / 90% / 70%
- Percentage of features interpretable (researcher 1, run 2): 15% / 13.75% / 15% / 23.75% / 53.75% / 90% / 96.25%
- Percentage of features interpretable (researcher 2, run 2): 3.75% / 6.25% / 21.25% / 45% / 72.5% / 83.75% / 77.5%
Percentages of levels completed are estimated by sampling 10,000 levels with replacement.

*Figure: two panels sharing an x-axis of the number of training levels (10² to 10⁵, log scale). Left panel, “Generalization”: percentage of levels completed, train and test. Right panel, “Interpretable features”: percentage of features interpretable.*
Comparison of models trained on different numbers of levels. Two models were trained for each number of levels, and two researchers independently evaluated how interpretable the features of each model were, without being shown the number of levels. Our methodology had some flaws.
Firstly, the researchers were not completely blind to the number of levels: for example, it is possible to infer something about the number of levels from the smoothness of graphs of the value function, since with fewer levels the model is better able to memorize the number of timesteps until the end of the level. Secondly, since evaluations are somewhat tedious, we stopped them once we thought the trend had become clear, introducing some selection bias. Therefore these results should be considered primarily illustrative. Each model was tested on 10,000 train and 10,000 test levels sampled with replacement. Shaded areas in the left plot show the range of values over both models, though these are mostly too narrow to be visible. Error bars in the right plot show ±1 population standard deviation over all four model–researcher pairs.\n\n\n Our results illustrate how diversity may lead to interpretable features via generalization, lending support to the diversity hypothesis. Nevertheless, we still consider the hypothesis to be highly unproven.\n \n\n\nFeature visualization\n---------------------\n\n\n\n[Feature visualization](https://distill.pub/2017/feature-visualization/) answers questions about what certain parts of a network are looking for by generating examples. This can be done by applying gradient descent to the input image, starting from random noise, with the objective of activating a particular neuron or group of neurons. While this method works well for an image classifier trained on ImageNet , for our CoinRun model it yields only featureless clouds of color. Only for the first layer, which computes simple convolutions of the input, does the method produce comparable visualizations for the two models.\n \n\n\n\n\n\n\n| | ImageNet | CoinRun |\n| --- | --- | --- |\n| \n First layer\n | \n\n | \n\n |\n| \n Intermediate layer\n | \n\n | \n\n |\n\n\nComparison of gradient-based feature visualization for CNNs trained on ImageNet (GoogLeNet ) and on CoinRun (architecture described [below](#architecture)). Each image was chosen to activate a neuron in the center, with the 3 images corresponding to the first 3 channels. Jittering was applied between optimization steps of up to 2 pixels for the first layer, and up to 8 pixels for the intermediate layer (mixed4a for ImageNet, [2b](#architecture) for CoinRun).\n\n\n\n Gradient-based feature visualization has previously been shown to struggle with RL models trained on Atari games . To try to get it to work for CoinRun, we varied the method in a number of ways. Nothing we tried had any noticeable effect on the quality of the visualizations.\n \n\n\n* **Transformation robustness.** This is the method of stochastically jittering, rotating and scaling the image between optimization steps, to search for examples that are robust to these transformations . We tried both increasing and decreasing the size of the jittering. Rotating and scaling are less appropriate for CoinRun, since the observations themselves are not invariant to these transformations.\n* **Penalizing extremal colors.**By an “extremal” color we mean one of the 8 colors with maximal or minimal RGB values (black, white, red, green, blue, yellow, cyan and magenta). 
Noticing that our visualizations tend to use extremal colors towards the middle, we tried including in the visualization objective an L2 penalty of various strengths on the activations of the first layer, which successfully reduced the size of the extremally-colored region but did not otherwise help.\n* **Alternative objectives.** We tried using an alternative optimization objective , such as the caricature objective.The caricature objective is to maximize the dot product between the activations of the input image and the activations of a reference image. Caricatures are often an especially easy type of feature visualization to make work, and helpful for getting a first glance into what features a model has. They are demonstrated in [this notebook](https://colab.research.google.com/github/tensorflow/lucid/blob/master/notebooks/misc/feature_inversion_caricatures.ipynb). A more detailed manuscript by its authors is forthcoming. We also tried using dimensionality reduction, as described [below](#dataset-examples), to choose non-axis-aligned directions in activation space to maximize.\n* **Low-level visual diversity.** In an attempt to broaden the distribution of images seen by the model, we retrained it on a version of the game with procedurally-generated sprites. We additionally tried adding noise to the images, both independent per-pixel noise and spatially-correlated noise. Finally, we experimented briefly with adversarial training , though we did not pursue this line of inquiry very far.\n\n\n\n As shown [below](#dataset-examples), we were able to use dataset examples to identify a number of channels that pick out human-interpretable features. It is therefore striking how resistant gradient-based methods were to our efforts. We believe that this is because solving CoinRun does not ultimately require much visual ability. Even with our modifications, it is possible to solve the game using simple visual shortcuts, such as picking out certain small configurations of pixels. These shortcuts work well on the narrow distribution of images on which the model is trained, but behave unpredictably in the full space of images in which gradient-based optimization takes place.\n \n\n\n\n Our analysis here provides further insight into the [diversity hypothesis](#diversity-hypothesis). In support of the hypothesis, we have examples of features that are hard to interpret in the absence of diversity. But there is also evidence that the hypothesis may need to be refined. Firstly, it seems to be a lack of diversity at a low level of abstraction that harms our ability to interpret features at all levels of abstraction, which could be due to the fact that gradient-based feature visualization needs to back-propagate through earlier layers. Secondly, the failure of our efforts to increase low-level visual diversity suggests that diversity may need to be assessed in the context of the requirements of the task.\n \n\n\n### Dataset example-based feature visualization\n\n\n\n As an alternative to gradient-based feature visualization, we use dataset examples. This idea has a long history, and can be thought of as a heavily-regularized form of feature visualization . In more detail, we sample a few thousand observations infrequently from the agent playing the game, and pass them through the model. 
We then apply a dimensionality reduction method known as non-negative matrix factorization (NMF) to the activation channels .More precisely, we find a non-negative approximate low-rank factorization of the matrix obtained by flattening the spatial dimensions of the activations into the batch dimension. This matrix has one row per observation *per spatial position* and one column per channel: thus the dimensionality reduction does not use spatial information. For each of the resulting channels (which correspond to weighted combinations of the original channels), we choose the observations and spatial positions with the strongest activation (with a limited number of examples per position, for diversity), and display a patch from the observation at that position.\n \n\n\n\n\n![](images/feature_vis_dataset/layer_2b_feature_0.png)Short left-facing wall\n![](images/feature_vis_dataset/layer_2b_feature_1.png)Velocity info or left edge of screen\n![](images/feature_vis_dataset/layer_2b_feature_2.png)Long left-facing wall\n![](images/feature_vis_dataset/layer_2b_feature_3.png)Left end of platform\n![](images/feature_vis_dataset/layer_2b_feature_4.png)Right end of platform\n![](images/feature_vis_dataset/layer_2b_feature_5.png)Buzzsaw obstacle or platform\n![](images/feature_vis_dataset/layer_2b_feature_6.png)Coin\n![](images/feature_vis_dataset/layer_2b_feature_7.png)Top/right edge of screen\n![](images/feature_vis_dataset/layer_2b_feature_8.png)Left end of platform\n![](images/feature_vis_dataset/layer_2b_feature_9.png)Step\n![](images/feature_vis_dataset/layer_2b_feature_10.png)Agent or enemy moving right\n![](images/feature_vis_dataset/layer_2b_feature_11.png)Left edge of box\n![](images/feature_vis_dataset/layer_2b_feature_12.png)Right end of platform\n![](images/feature_vis_dataset/layer_2b_feature_13.png)Buzzsaw obstacle\n![](images/feature_vis_dataset/layer_2b_feature_14.png)Top left corner of box\n![](images/feature_vis_dataset/layer_2b_feature_15.png)Left end of platform or bottom/right of screen?\n\nDataset example-based feature visualizations for 16 NMF directions of layer [2b](#architecture) of our CoinRun model. The grey-white checkerboard represents the edge of the screen. The labels are hand-composed.\n\n\n Unlike gradient-based feature visualization, this method finds some meaning to the different directions in activation space. However, it may still fail to provide a complete picture for each direction, since it only shows a limited number of dataset examples, and with limited context.\n \n\n\n### Spatially-aware feature visualization\n\n\n\n CoinRun observations differ from natural images in that they are much less spatially invariant. For example, the agent always appears in the center, and the agent’s velocity is always encoded in the top left. As a result, some features detect unrelated things at different spatial positions, such as reading the agent’s velocity in the top left while detecting an unrelated object elsewhere. To account for this, we developed a spatially-aware version of dataset example-based feature visualization, in which we fix each spatial position in turn, and choose the observation with the strongest activation at that position (with a limited number of reuses of the same observation, for diversity). This creates a spatial correspondence between visualizations and observations.\n \n\n\n\n Here is such a visualization for a feature that responds strongly to coins. 
The white squares in the top left show that the feature also responds strongly to the horizontal velocity info when it is white, corresponding to the agent moving right at full speed.

![](images/feature_vis_spatial.png)

Spatially-aware dataset example-based feature visualization for the coin-detecting NMF direction of layer [2b](#architecture). Transparency (revealing the diagonally-striped background) indicates a weak response, so the left half of the visualization is mostly transparent because coins never appear in the left half of observations.


Attribution
-----------


Attribution answers questions about the relationships between neurons. It is most commonly used to see how the input to a network affects a particular output – for example, in RL – but it can also be applied to the activations of hidden layers . Although there are many approaches to attribution we could have used, we chose the method of integrated gradients . We explain in [Appendix B](#integrated-gradients) how we applied this method to a hidden layer, and how positive value function attribution can be thought of as “good news” and negative value function attribution as “bad news”.


### Dimensionality reduction for attribution


We showed [above](#dataset-examples) that a dimensionality reduction method known as non-negative matrix factorization (NMF) could be applied to the channels of activations to produce meaningful directions in activation space . We found that it is even more effective to apply NMF not to activations, but to value function attributions, working around the fact that NMF can only be applied to non-negative matrices. As before, we obtain the NMF directions by sampling a few thousand observations infrequently from the agent playing the game, computing the attributions, flattening the spatial dimensions into the batch dimension, and applying NMF. Our workaround for the non-negativity requirement is to separate out the positive and negative parts of the attributions and concatenate them along the batch dimension; we could also have concatenated them along the channel dimension. Both methods tend to produce NMF directions that are close to one-hot, and so can be thought of as picking out the most relevant channels. However, when reducing to a small number of dimensions, using attributions usually picks out more salient features, because attribution takes into account not just what neurons respond to but also whether their response matters.

Following , after applying NMF to attributions, we visualize them by assigning a different color to each of the resulting channels. We overlay these visualizations over the observation and contextualize each channel using feature visualization , making use of [dataset example-based feature visualization](#dataset-examples). This gives a basic version of our interface, which allows us to see the effect of the main features at different spatial positions.

*Figure: three panels (observation, positive attribution as “good news”, negative attribution as “bad news”), with a legend (hover to isolate): buzzsaw obstacle, coin, enemy moving left, agent or enemy moving right.*

Value function attribution for a cherry-picked observation using layer [2b](#architecture) of our CoinRun model, reduced to 4 channels using attribution-based NMF. The dataset example-based feature visualizations of these directions reveal more salient features than the visualizations of the first 4 activation-based NMF directions from the preceding section.
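As a minimal sketch of the factorization just described, assume the value function attributions have already been computed and stored as an array of shape (frames, height, width, channels). The use of scikit-learn’s NMF and all variable names are illustrative choices rather than the code behind the interface.

```python
import numpy as np
from sklearn.decomposition import NMF

def attribution_nmf(attributions: np.ndarray, n_directions: int = 4):
    """Find NMF directions over channels from value-function attributions.

    attributions: array of shape (frames, height, width, channels), containing
    both positive and negative values.
    Returns (directions, reduced), where directions has shape
    (n_directions, channels) and reduced has one row per input row.
    """
    n_channels = attributions.shape[-1]
    # Flatten the spatial dimensions into the batch dimension:
    # one row per frame per spatial position.
    flat = attributions.reshape(-1, n_channels)
    # NMF requires a non-negative matrix, so separate the positive and
    # negative parts and concatenate them along the batch (row) dimension.
    non_negative = np.concatenate(
        [np.maximum(flat, 0.0), np.maximum(-flat, 0.0)], axis=0)
    model = NMF(n_components=n_directions, init="nndsvd", max_iter=500)
    reduced = model.fit_transform(non_negative)  # (2 * rows, n_directions)
    directions = model.components_               # (n_directions, channels)
    return directions, reduced

# Illustrative usage with random numbers standing in for real attributions.
fake_attr = np.random.default_rng(0).normal(size=(100, 8, 8, 32)).astype(np.float32)
directions, reduced = attribution_nmf(fake_attr, n_directions=4)
```

The rows of `directions` give weights over the original channels (in practice close to one-hot), and projecting each frame’s attributions onto them yields the small number of colored channels shown in the figure above.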
For the [full version](#interface) of our interface, we simply repeat this for an entire trajectory of the agent playing the game. We also incorporate video controls, a timeline view of compressed observations , and additional information, such as model outputs and sampled actions. Together these allow the trajectory to be easily explored and understood.


### Attribution discussion


Attributions for our CoinRun model have some interesting properties that would be unusual for an ImageNet model.

* **Sparsity.** Attribution tends to be concentrated in a very small number of spatial positions and (post-NMF) channels. For example, in the figure above, the top 10 position–channel pairs account for more than 80% of the total absolute attribution. This might be explained by our [earlier](#feature-visualization-discussion) hypothesis that the model identifies objects by picking out certain small configurations of pixels. Because of this sparsity, we smooth out attribution over nearby spatial positions for the full version of our interface, so that the amount of visual space taken up can be used to judge attribution strength. This trades off some spatial precision for more precision with magnitudes.
* **Unexpected sign.** Value function attribution usually has the sign one would expect: positive for coins, negative for enemies, and so on. However, this is sometimes not the case. For example, in the figure above, the red channel that detects buzzsaw obstacles has both positive and negative attribution in two neighboring spatial positions towards the left. Our best guess is that this phenomenon is a result of statistical [collinearity](https://en.wikipedia.org/wiki/Multicollinearity), caused by certain correlations in the procedural level generation together with the agent’s behavior. These could be visual, such as correlations between nearby pixels, or more abstract, such as both coins and long walls appearing at the end of every level. As a toy example, supposing the value function ought to increase by 2% when the end of the level becomes visible, the model could either increase the value function by 1% for coins and 1% for long walls, or by 3% for coins and −1% for long walls, and the effect would be similar.
* **Outlier frames.** When an unusual event causes the network to output extreme values, attribution can behave especially strangely. For example, in the [buzzsaw hallucination](#hallucinations) frame, most features have a significant amount of both positive and negative attribution. We do not have a good explanation for this, but perhaps features are interacting in more complicated ways than usual. Moreover, in these cases there is often a significant component of the attribution lying outside the space spanned by the NMF directions, which we display as an additional “residual” feature. This could be because each frame is weighted equally when computing NMF, so outlier frames have little influence over the NMF directions.

These considerations suggest that some care may be required when interpreting attributions.


Questions for further research
------------------------------


### The [diversity hypothesis](#diversity-hypothesis)


1.
**Validity.** Does the diversity hypothesis hold in other contexts, both within and outside of reinforcement learning?\n2. **Relationship to generalization.** What is the three-way relationship between diversity, interpretable features and generalization? Do non-interpretable features indicate that a model will fail to generalize in certain ways? Generalization refers implicitly to an underlying distribution – how should this distribution be chosen?For example, to measure generalization for CoinRun models trained on a limited number of levels, we used the distribution over all possible procedurally-generated levels. However, to formalize the sense in which CoinRun is not diverse in its visual patterns or dynamics rules, one would need a distribution over levels from a wider class of games.\n3. **Caveats.** How are interpretable features affected by other factors, such as the choice of task or algorithm, and how do these interact with diversity? Speculatively, do big enough models obtain interpretable features via the double descent phenomenon , even in the absence of diversity?\n4. **Quantification.** Can we quantitatively predict how much diversity is needed for interpretable features, perhaps using generalization metrics? Can we be precise about what is meant by an “interpretable feature” and a “level of abstraction”?\n\n\n### Interpretability in the absence of diversity\n\n\n1. **Pervasiveness of non-diverse features.** Do “non-diverse features”, by which we mean the hard-to-interpret features that tend to arise in the absence of diversity, remain when diversity is present? Is there a connection between these non-diverse features and the “non-robust features” that have been posited to explain adversarial examples ?\n2. **Coping with non-diverse levels of abstraction.** Are there levels of abstraction at which even broad distributions like ImageNet remain non-diverse, and how can we best interpret models at these levels of abstraction?\n3. **Gradient-based feature visualization.** Why does gradient-based feature visualization [break down](#feature-visualization) in the absence of diversity, and can it be made to work using transformation robustness, regularization, data augmentation, adversarial training, or other techniques? What property of the optimization leads to the clouds of [extremal colors](#extremal-colors)?\n4. **Trustworthiness of dataset examples and attribution.** How reliable and trustworthy can we make very heavily-regularized versions of feature visualization, such as those based on [dataset examples](#dataset-examples)?Heavily-regularized feature visualization may be untrustworthy by failing to separate the things causing certain behavior from the things that merely correlate with those causes . What explains the [strange behavior](#attribution-discussion) of attribution, and how trustworthy is it?\n\n\n### Interpretability in the RL framework\n\n\n1. **Non-visual and abstract features.** What are the best methods for interpreting models with non-visual inputs? Even vision models may also have interpretable abstract features, such as relationships between objects or anticipated events: will any method of generating examples be enough to understand these, or do we need an entirely new approach? For models with memory, how can we interpret their hidden states ?\n2. **Improving reliability.** How can we best identify, understand and correct rare [failures](#dissecting-failure) and [other errors](#hallucinations) in RL models? 
Can we actually improve models by [model editing](#model-editing), rather than merely degrading them?\n3. **Modifying training.** In what ways can we train RL models to make them more interpretable without a significant performance cost, such as by altering architectures or adding auxiliary predictive losses?\n4. **Leveraging the environment.** How can we enrich interfaces using RL-specific data, such as trajectories of agent–environment interaction, state distributions, and advantage estimates? What are the benefits of incorporating user–environment interaction, such as for exploring counterfactuals?\n\n\n### What we would like to see from further research and why\n\n\n\n We are motivated to study interpretability for RL for two reasons.\n \n\n\n* **To be able to interpret RL models.** RL can be applied to an enormous variety of tasks, and seems likely to be a part of increasingly influential AI systems. It is therefore important to be able to scrutinize RL models and to understand how they might fail. This may also benefit RL research through an improved understanding of the pitfalls of different algorithms and environments.\n* **As a testbed for interpretability techniques.** RL models pose a number of distinctive challenges for interpretability techniques. In particular, environments like CoinRun straddle the boundary between memorization and generalization, making them useful for studying the [diversity hypothesis](#diversity-hypothesis) and related ideas.\n\n\n\n We think that large neural networks are currently the most likely type of model to be used in highly capable and influential AI systems in the future. Contrary to the traditional perception of neural networks as black boxes, we think that there is a fighting chance that we will be able to clearly and thoroughly understand the behavior even of very large networks. We are therefore most excited by neural network interpretability research that scores highly according to the following criteria.\n \n\n\n* **Scalability.** The takeaways of the research should have some chance of scaling to harder problems and larger networks. If the techniques themselves do not scale, they should at least reveal some relevant insight that might.\n* **Trustworthiness.** Explanations should be faithful to the model. Even if they do not tell the full story, they should at least not be biased in some fatal way (such as by using an approval-based objective that leads to bad explanations that sound good, or by depending on another model that badly distorts information).\n* **Exhaustiveness.** This may turn out to be impossible at scale, but we should strive for techniques that explain every essential feature of our models. If there are theoretical limits to exhaustiveness, we should try to understand these.\n* **Low cost.** Our techniques should not be significantly more computationally expensive than training the model. We hope that we will not need to train models differently for them to be interpretable, but if we do, we should try to minimize both the computational expense and any performance cost, so that interpretable models are not disincentivized from being used in practice.\n\n\n\n Our proposed questions reflect this perspective. One of the reasons we emphasize diversity relates to exhaustiveness. If “non-diverse features” remain when diversity is present, then our current techniques are not exhaustive and could end up missing important features of more capable models. 
Developing tools to understand non-diverse features may shed light on whether this is likely to be a problem.\n \n\n\n\n We think there may be significant mileage in simply applying existing interpretability techniques, with attention to detail, to more models. Indeed, this was the mindset with which we initially approached this project. If the diversity hypothesis is correct, then this may become easier as we train our models to perform more complex tasks. Like early biologists encountering a new species, there may be a lot we can glean from taking a magnifying glass to the creatures in front of us.\n \n\n\nSupplementary material\n----------------------\n\n\n* **Code.** Utilities for computing feature visualization, attribution and dimensionality reduction for our models can be found in `lucid.scratch.rl_util`, a submodule of [Lucid](https://github.com/tensorflow/lucid). We demonstrate these in a [![](images/colab.svg) notebook](https://colab.research.google.com/github/tensorflow/lucid/blob/master/notebooks/misc/rl_util.ipynb).\n* **Model weights.** The weights of our model are available for download, along with those of a number of other models, including the models trained on different numbers of levels, the edited models, and models trained on all 16 of the Procgen Benchmark games. These are indexed [here](https://openaipublic.blob.core.windows.net/rl-clarity/attribution/models/index.html).\n* **More interfaces.** We generated an expanded version of our interface for every convolutional layer in our model, which can be found [here](https://openaipublic.blob.core.windows.net/rl-clarity/attribution/demo/interface.html). We also generated similar interfaces for each of our other models, which are indexed [here](https://openaipublic.blob.core.windows.net/rl-clarity/attribution/index.html).\n* **Interface code.** The code used to generate the expanded version of our interface can be found [here](https://github.com/openai/understanding-rl-vision).", "date_published": "2020-11-17T20:00:00Z", "authors": ["Jacob Hilton", "Nick Cammarata", "Shan Carter", "Gabriel Goh", "Chris Olah"], "summaries": ["With diverse environments, we can analyze, diagnose and edit deep reinforcement learning models using attribution."], "doi": "10.23915/distill.00029", "journal_ref": "distill-pub", "bibliography": [{"link": "https://openai.com/blog/quantifying-generalization-in-reinforcement-learning/", "title": "Quantifying generalization in reinforcement learning"}, {"link": "https://arxiv.org/pdf/1312.6034.pdf", "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps"}, {"link": "https://arxiv.org/pdf/1311.2901.pdf", "title": "Visualizing and understanding convolutional networks"}, {"link": "https://arxiv.org/pdf/1412.6806.pdf", "title": "Striving for simplicity: The all convolutional net"}, {"link": "http://openaccess.thecvf.com/content_ICCV_2017/papers/Selvaraju_Grad-CAM_Visual_Explanations_ICCV_2017_paper.pdf", "title": "Grad-CAM: Visual explanations from deep networks via gradient-based localization"}, {"link": "http://openaccess.thecvf.com/content_ICCV_2017/papers/Fong_Interpretable_Explanations_of_ICCV_2017_paper.pdf", "title": "Interpretable explanations of black boxes by meaningful perturbation"}, {"link": "https://arxiv.org/pdf/1705.05598.pdf", "title": "PatternNet and PatternLRP--Improving the interpretability of neural networks"}, {"link": "https://arxiv.org/pdf/1711.00867.pdf", "title": "The (un)reliability of saliency methods"}, {"link": 
"https://arxiv.org/pdf/1703.01365.pdf", "title": "Axiomatic attribution for deep networks"}, {"link": "https://doi.org/10.23915/distill.00010", "title": "The Building Blocks of Interpretability"}, {"link": "https://openai.com/blog/procgen-benchmark/", "title": "Leveraging Procedural Generation to Benchmark Reinforcement Learning"}, {"link": "https://openai.com/blog/openai-baselines-ppo/", "title": "Proximal policy optimization algorithms"}, {"link": "https://arxiv.org/pdf/1506.02438.pdf", "title": "High-dimensional continuous control using generalized advantage estimation"}, {"link": "https://doi.org/10.23915/distill.00024", "title": "Thread: Circuits"}, {"link": "https://arxiv.org/pdf/1802.10363", "title": "General Video Game AI: A multi-track framework for evaluating agents, games and content generation algorithms"}, {"link": "https://arxiv.org/pdf/1902.01378.pdf", "title": "Obstacle Tower: A Generalization Challenge in Vision, Control, and Planning"}, {"link": "https://arxiv.org/pdf/1912.02975.pdf", "title": "Observational Overfitting in Reinforcement Learning"}, {"link": "https://doi.org/10.23915/distill.00007", "title": "Feature Visualization"}, {"link": "https://www.researchgate.net/profile/Aaron_Courville/publication/265022827_Visualizing_Higher-Layer_Features_of_a_Deep_Network/links/53ff82b00cf24c81027da530.pdf", "title": "Visualizing higher-layer features of a deep network"}, {"link": "https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Nguyen_Deep_Neural_Networks_2015_CVPR_paper.pdf", "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images"}, {"link": "https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html", "title": "Inceptionism: Going deeper into neural networks"}, {"link": "http://openaccess.thecvf.com/content_cvpr_2017/papers/Nguyen_Plug__Play_CVPR_2017_paper.pdf", "title": "Plug & play generative networks: Conditional iterative generation of images in latent space"}, {"link": "http://www.image-net.org/papers/imagenet_cvpr09.pdf", "title": "Imagenet: A large-scale hierarchical image database"}, {"link": "https://arxiv.org/pdf/1409.4842.pdf", "title": "Going deeper with convolutions"}, {"link": "https://arxiv.org/pdf/1812.07069.pdf", "title": "An Atari model zoo for analyzing, visualizing, and comparing deep reinforcement learning agents"}, {"link": "https://arxiv.org/pdf/1904.01318.pdf", "title": "Finding and Visualizing Weaknesses of Deep Reinforcement Learning Agents"}, {"link": "https://arxiv.org/pdf/1312.6199.pdf", "title": "Intriguing properties of neural networks"}, {"link": "https://arxiv.org/pdf/1711.00138.pdf", "title": "Visualizing and understanding Atari agents"}, {"link": "https://nikaashpuri.github.io/sarfa-saliency/", "title": "Explain Your Move: Understanding Agent Actions Using Specific and Relevant Feature Attribution"}, {"link": "https://rmozone.com/snapshots/2017/10/rmo-at-google/#chew", "title": "Video Interface: Assuming Multiple Perspectives on a Video Exposes Hidden Structure"}, {"link": "http://www.cs.columbia.edu/~djhsu/papers/biasvariance-pnas.pdf", "title": "Reconciling modern machine-learning practice and the classical bias--variance trade-off"}, {"link": "https://gradientscience.org/adv/", "title": "Adversarial examples are not bugs, they are features"}, {"link": "https://doi.org/10.23915/distill.00019", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features'"}, {"link": "https://deepmind.com/blog/article/capture-the-flag-science", 
"title": "Human-level performance in 3D multiplayer games with population-based reinforcement learning"}, {"link": "https://openai.com/blog/solving-rubiks-cube/", "title": "Solving Rubik's Cube with a Robot Hand"}, {"link": "https://openai.com/projects/five/", "title": "Dota 2 with Large Scale Deep Reinforcement Learning"}, {"link": "https://arxiv.org/pdf/1802.01561.pdf", "title": "IMPALA: Scalable distributed deep-RL with importance weighted actor-learner architectures"}]} {"id": "dcf03df9a5ecaa60883a9e6fcf18ac7f", "title": "Communicating with Interactive Articles", "url": "https://distill.pub/2020/communicating-with-interactive-articles", "source": "distill", "source_type": "blog", "text": "Computing has changed how people communicate. The transmission of news, messages, and ideas is instant. Anyone’s voice can be heard. In fact, access to digital communication technologies such as the Internet is so fundamental to daily life that their disruption by government is condemned by the United Nations Human Rights Council . But while the technology to distribute our ideas has grown in leaps and bounds, the interfaces have remained largely the same.\n \n\n\n\n Parallel to the development of the internet, researchers like Alan Kay and Douglas Engelbart worked to build technology that would empower individuals and enhance cognition. Kay imagined the Dynabook in the hands of children across the world. Engelbart, while best remembered for his “mother of all demos,” was more interested in the ability of computation to augment human intellect . Neal Stephenson wrote speculative fiction that imagined interactive paper that could display videos and interfaces, and books that could teach and respond to their readers .\n \n\n\n\n\n More recent designs (though still historical by personal computing standards) point to a future where computers are connected and assist people in decision-making and communicating using rich graphics and interactive user interfaces . While some technologies have seen mainstream adoption, such as Hypertext , unfortunately, many others have not. The most popular publishing platforms, for example WordPress and Medium, choose to prioritize social features and ease-of-use while limiting the ability for authors to communicate using the dynamic features of the web.\n \n\n\n\n In the spirit of previous computer-assisted cognition technologies, a new type of computational communication medium has emerged that leverages active reading techniques to make ideas more accessible to a broad range of people. These interactive articles build on a long history, from Plato to PHeT to explorable explanations . They have been shown to be more engaging, can help improve recall and learning, and attract broad readership and acclaim,For example, some of *The New York Times* and *The Washington Post’s* most read articles are interactive stories. yet we do not know that much about them.\n \n\n\n\n In this work, for the the first time, we connect the dots between interactive articles such as those featured in this journal and publications like *The New York Times* and the techniques, theories, and empirical evaluations put forth by academic researchers across the fields of education, human-computer interaction, information visualization, and digital journalism. 
We show how digital designers are operationalizing these ideas to create interactive articles that help boost learning and engagement for their readers compared to static alternatives.\n \n\n\n\n\n\n Today there is a growing excitement around the use of interactive articles for communication since they offer unique capabilities to help people learn and engage with complex ideas that traditional media lacks. After describing the affordances of interactive articles, we provide critical reflections from our own experience with open-source, interactive publishing at scale. We conclude with discussing practical challenges and open research directions for authoring, designing, and publishing interactive articles.\n \n\n\n\n This style of communication — and the platforms which support it — are still in their infancy. When choosing where to publish this work, we wanted the medium to reflect the message. Journals like *Distill* are not only pushing the boundaries of machine learning research but also offer a space to put forth new interfaces for dissemination. This work ties together the theory and practice of authoring and publishing interactive articles. It demonstrates the power that the medium has for providing new representations and interactions to make systems and ideas more accessible to broad audiences.\n \n\n\n\nInteractive Articles: Theory & Practice\n---------------------------------------\n\n\n\n Interactive articles draw from and connect many types of media, from static text and images to movies and animations. But in contrast to these existing forms, they also leverage interaction techniques such as details-on demand, belief elicitation, play, and models and simulations to enhance communication.\n \n\n\n\n While the space of possible designs is far too broad to be solved with one-size-fits-all guidelines, by connecting the techniques used in these articles back to underlying theories presented across disparate fields of research we provide a missing foundation for designers to use when considering the broad space of interactions that could be added to a born-digital article.\n \n\n\n\n We draw from a corpus of over sixty interactive articles to highlight the breadth of techniques available and analyze how their authors took advantage of a digital medium to improve the reading experience along one or more dimensions, for example, by reducing the overall cognitive load, instilling positive affect, or improving information recall.\n \n\n\n\n\n Because diverse communities create interactive content, this medium goes by many different names and has not yet settled on a standardized format nor definition.However, one is taking shape. Researchers have proposed artifacts such as explorable multiverse analyses , explainables , and exploranations to more effectively disseminate their work, communicate their results to the public, and remove research debt . In newsrooms, data journalists, developers, and designers work together to make complex news and investigative reporting clear and engaging using interactive stories . Educators use interactive textbooks as an alternative learning format to give students hands-on experience with learning material .\n \n\n\n\n Besides these groups, others such as academics, game developers, web developers, and designers blend editorial, design, and programming skills to create and publish explorable explanations , interactive fiction , interactive non-fiction , active essays , and interactive games . 
While these all slightly differ in their technical approach and target audience, they all largely leverage the interactivity of the modern web.\n \n\n\n\n We focus on five unique affordances of interactive articles, listed below. In-line videos and example interactive graphics are presented alongside this discussion to demonstrate specific techniques.\n \n\n\n\n\n\n### Connecting People and Data\n\n\n\n As visual designers are well aware, and as journalism researchers have confirmed empirically , an audience which finds content to be aesthetically pleasing is more likely to have a positive attitude towards it. This in turn means people will spend more time engaging with content and ultimately lead to improved learning outcomes. While engagement itself may not be an end goal of most research communications, the ability to influence both audience attitude and the amount of time that is spent is a useful lever to improve learning: we know from education research that both time spent and emotion are predictive of learning outcomes.\n \n\n\n\n Animations can also be used to improve engagement . While there is debate amongst researchers if animations in general are able to more effectively convey the same information compared to a well designed static graphic , animation has been shown to be effective specifically for communicating state transitions , uncertainty , causality , and constructing narratives . A classic example of this is Muybridge’s motion study that can be seen in [3](#horse): while the series of still images may be more effective for answering specific questions like, “Does a horse lift all four of its feet off the ground when it runs?” watching the animation in slow motion gives the viewer a much more visceral sense of how it runs. A more modern example can be found in OpenAI’s reporting on their hide-and-seek agents . The animations here instantly give the viewer a sense of how the agents are operating in their environment.\n \n\n\n\n\n Passively, animation can be used to add drama to a graphic displaying important information, but which readers may otherwise find dry. Scientific data which is inherently time varying may be shown using an animation to connect viewers more closely with the original data, as compared to seeing an abstracted static view. For example, Ed Hawkins designed “Climate Spirals,” which shows the average global temperature change over time . This presentation of the data resonated with a large public audience, so much so that it was displayed at the opening ceremony at the 2016 Rio Olympics. In fact, many other climate change visualizations of this same dataset use animation to build suspense and highlight the recent spike in global temperatures .\n \n\n\n\n\n By adding variation over time, authors have access to a new dimension to encode information and an even wider design space to work in. Consider the animated graphic in *The New York Times* story “Extensive Data Shows Punishing Reach of Racism for Black Boys,” which shows economic outcomes for 10,000 men who grew up in rich families . 
While there are many ways in which the same data could have been communicated more succinctly using a static visualization , by utilizing animation, it became possible for the authors to design a unit visualization in which each data point shown represented an individual, reminding readers that the data in this story was about real peoples’ lives.\n \n\n\n\n Unit visualizations have also been used to evoke empathy in readers in other works covering grim topics such as gun deaths and soldier deaths in war . Using person-shaped glyphs (as opposed to abstract symbols like circles or squares) has been shown not to produce additional empathic responses , but including actual photographs of people helps readers gain interest in, remember , and communicate complex phenomena using visualizations. Correll argues that much of the power of visualization comes from abstraction, but quantization stymies empathy . He instead suggests anthropomorphizing data, borrowing journalistic and rhetoric techniques to create novel designs or interventions to foster empathy in readers when viewing visualizations .\n \n\n\n\n Regarding the format of interactive articles, an ongoing debate within the data journalism community has been whether articles which utilize scroll-based graphics (scrollytelling) are more effective than those which use step-based graphics (slideshows). McKenna et al. found that their study participants largely preferred content to be displayed with a step- or scroll-based navigation as opposed to traditional static articles, but did not find a significant difference in engagement between the two layouts. In related work, Zhi et al. found that performance on comprehension tasks was better in slideshow layouts than in vertical scroll-based layouts . Both studies focused on people using desktop (rather than mobile) devices. More work is needed to evaluate the effectiveness of various layouts on mobile devices, however the interviews conducted by MckEnna et al. suggest that additional features, such as supporting navigation through swipe gestures, may be necessary to facilitate the mobile reading experience.\n \n\n\n\n\n\n The use of games to convey information has been explored in the domains of journalism and education . Designers of newsgames use them to help readers build empathy with their subject, for example in *The Financial Times’s* “Uber Game” , and explain complex systems consisting of multiple parts, for example in *Wired’s* “Cutthroat Capitalism: The Game” . In educational settings the use of games has been shown to motivate students while maintaining or improving learning outcomes .\n \n\n\n\n As text moves away from author-guided narratives towards more reader-driven ones , the reading experience becomes closer to that of playing a game. For example, the critically acclaimed explorable explanation “Parable of the Polygons” puts play at the center of the story, letting a reader manually run an algorithm that is later simulated in the article to demonstrate how a population of people with slight personal biases against diversity leads to social segregation .\n \n\n\n\n### Making Systems Playful\n\n\n\n Interactive articles utilize an underlying computational infrastructure, allowing authors editorial control over the computational processes happening on a page. This access to computation allows interactive articles to engage readers in an experience they could not have with traditional media. 
For example, in “Drawing Dynamic Visualizations”, Victor demonstrates how an interactive visualization can allow readers to build an intuition about the behavior of a system, leading to a fundamentally different understanding of an underlying system compared to looking at a set of static equations . These articles leverage active learning and reading, combined with critical thinking to help diverse sets of people learn and explore using sandboxed models and simulations .\n \n\n\n\n Complex systems often require extensive setup to allow for proper study: conducting scientific experiments, training machine learning models, modeling social phenomenon, digesting advanced mathematics, and researching recent political events, all require the configuration of sophisticated software packages before a user can interact with a system at all, even just to tweak a single parameter. This barrier to entry can deter people from engaging with complex topics, or explicitly prevent people who do not have the necessary resources, for example, computer hardware for intense machine learning tasks. Interactive articles drastically lower these barriers.\n \n\n\n\n Science that utilizes physical and computational experiments requires systematically controlling and changing parameters to observe their effect on the modeled system. In research, dissemination is typically done through static documents, where various figures show and compare the effect of varying particular parameters. However, efforts have been made to leverage interactivity in academic publishing, summarized in . Reimagining the research paper with interactive graphics , as exploranations , or as explorable multiverse analyses , gives readers control over the reporting of the research findings and shows great promise in helping readers both digest new ideas and learn about existing fields that are built upon piles of research debt .\n \n\n\n\n Beyond reporting statistics, interactive articles are extremely powerful when the studied systems can be modeled or simulated in real-time with interactive parameters without setup, e.g., in-browser sandboxes. Consider the example in [4](#simulation-vis) of a Boids simulation that models how birds flock together. Complex systems such as these have many different parameters that change the resulting simulation. These sandbox simulations allow readers to play with parameters to see their effect without worrying about technical overhead or other experimental consequences.\n \n\n\n\n\n This is a standout design pattern within interactive articles, and many examples exist ranging in complexity. “How You Will Die” visually simulates the average life expectancy of different groups of people, where a reader can choose the gender, race, and age of a person . “On Particle Physics” allows readers to experiment with accelerating different particles through electric and magnetic fields to build intuition behind electromagnetism foundations such as the Lorentz force and Maxwell’s equations — the experiments backing these simulations cannot be done without multi-million dollar machinery . “Should Prison Sentences Be Based On Crimes That Haven’t Been Committed Yet?” shows the outcome of calculating risk assessments for recidivism where readers adjust the thresholds for determining who gets parole .\n \n\n\n\n\n The dissemination of modern machine learning techniques has been bolstered by interactive models and simulations. 
Three articles, “How to Use t-SNE Effectively” , “The Beginner’s Guide to Dimensionality Reduction” , and “Understanding UMAP” show the effect that hyperparameters and different dimensionality reduction techniques have on creating low dimensional embeddings of high-dimensional data. A popular approach is to demonstrate how machine learning models work with in-browser models , for example, letting readers use their own video camera as input to an image classification model or handwriting as input to a stroke prediction model . Other examples are aimed at technical readers who wish to learn about specific concepts within deep learning. Here, interfaces allow readers to choose model hyperparameters, datasets, and training procedures that, once selected, visualize the training process and model internals to inspect the effect of varying the model configuration .\n \n\n\n\n Interactive articles commonly communicate a single idea or concept using multiple representations. The same information represented in different forms can have different impact. For example, in mathematics often a single object has both an algebraic and a geometric representation. A clear example of this is the definition of a circle . Both are useful, inform one another, and lead to different ways of thinking. Examples of interactive articles that demonstrate this include various media publications’ political election coverage that break down the same outcome in multiple ways, for example, by voter demographics, geographical location, and historical perspective .\n \n\n\n\n The Multimedia Principle states that people learn better from words and pictures rather than words or pictures alone , as people can process information through both a visual channel and auditory channel simultaneously. Popular video creators such as 3Blue1Brown and Primer exemplify these principles by using rich animation and simultaneous narration to break down complex topics. These videos additionally take advantage of the Redundancy Principle by including complementary information in the narration and in the graphics rather than repeating the same information in both channels .\n \n\n\n\n\n\n While these videos are praised for their approachability and rich exposition, they are not interactive. One radical extension from traditional video content is also incorporating user input into the video while narration plays. A series of these interactive videos on “Visualizing Quaternions” lets a reader listen to narration of a live animation on screen, but at any time the viewer can take control of the video and manipulate the animation and graphics while simultaneously listening to the narration .\n \n\n\n\n Utilizing multiple representations allows a reader to see different abstractions of a single idea. Once these are familiar and known, an author can build interfaces from multiple representations and let readers interact with them simultaneously, ultimately leading to interactive experiences that demonstrate the power of computational communication mediums. Next, we discuss such experiences where interactive articles have transformed communication and learning by making live models and simulations of complex systems and phenomena accessible.\n \n\n\n\n### Prompting Self-Reflection\n\n\n\n Asking a student to reflect on material that they are studying and explain it back to themselves — a learning technique called self-explanation — is known to have a positive impact on learning outcomes . 
By generating explanations and refining them as new information is obtained, it is hypothesized that a student will be more engaged with the processes which they are studying . When writing for an interactive environment, components can be included which prompt readers to make a prediction or reflection about the material and cause them to engage in self-explanation .\n \n\n\n\n While these prompts may take the form of text entry or other standard input widgets, one of the most prominent examples of this technique used in practice comes from *The New York Times* “You Draw It” visualizations . In these visualizations, readers are prompted to complete a trendline on a chart, causing them to generate an explanation based on their current beliefs for why they think the trend may move in a certain direction. Only after readers make their prediction are they shown the actual data. Kim et al. showed that using visualizations as a prompt is an effective way to encourage readers to engage in self explanation and improve their recall of the information . [5](#you-draw-it) shows one these visualizations for CO₂ emissions from burning fossil fuels. After clicking and dragging to guess the trend, your guess will be compared against the actual data.\n \n\n\n\n\n\n\n In the case of “You Draw It,” readers were also shown the predictions that others made, adding a social comparison element to the experience. This additional social information was not shown to necessarily be effective for improving recall . However, one might hypothesize that this social aspect may have other benefits such as improving reader engagement, due to the popularity of recent visual stories using this technique, for example in *The Pudding’s* “Gyllenhaal Experiment” and *Quartz’s* “How do you draw a circle?” .\n \n\n\n\n Prompting readers to remember previously presented material, for example through the use of quizzes, can be an effective way to improve their ability to recall it in the future . This result from cognitive psychology, known as the testing effect , can be utilized by authors writing for an interactive medium . While testing may call to mind stressful educational experiences for many, quizzes included in web articles can be low stakes: there is no need to record the results or grade readers. The effect is enhanced if feedback is given to the quiz-takers, for example by providing the correct answer after the user has recorded their response .\n \n\n\n\n\n The benefits of the testing effect can be further enhanced if the testing is repeated over a period of time , assuming readers are willing to participate in the process. The idea of spaced repetition has been a popular foundation for memory building applications, for example in the Anki flash card system. More recently, authors have experimented with building spaced repetition directly into their web-based writing , giving motivated readers the opportunity to easily opt-in to a repeated testing program over the relevant material.\n \n\n\n\n### Personalizing Reading\n\n\n\n Content personalization — automatically modifying text and multimedia based on a reader’s individual features or input (e.g., demographics or location) — is a technique that has been shown to increase engagement and learning within readers and support behavioral change . The PersaLog system gives developers tools to build personalized content and presents guidelines for personalization based on user research from practicing journalists . 
Other work has shown that “personalized spatial analogies,” which present distance measurements in terms of regions readers are geographically familiar with, help people more concretely understand new distance measurements within news stories.

Personalization alone has also been used as the standout feature of multiple interactive articles. Both “How Much Hotter Is Your Hometown Than When You Were Born?” and “Human Terrain” use a reader’s location to drive stories relating to climate change and population densities, respectively. Other examples ask for explicit reader input, such as a story that visualizes a reader’s net worth to challenge their assumptions about whether they are wealthy (relative to the greater population), or one that predicts a reader’s political party affiliation. Another example is the interactive scatterplot featured in “Find Out If Your Job Will Be Automated”. In this visualization, professions are plotted by their likelihood of being automated against their average annual wage. The article encourages readers to use the search bar to type in their own profession and highlight it against the others.

An interactive medium has the potential to offer readers an experience other than static, linear text. Non-linear stories, where a reader can choose their own path through the content, have the potential to provide a more personalized experience and focus on areas of user interest. For example, the *BBC* has used this technique both in online articles and in a recent episode of “Click”, a technology-focused news television program. Non-linear stories present challenges for authors, who must consider the myriad possible paths through the content and the different experiences the audience would have when pursuing different branches.

Another technique interactive articles often use is segmenting content into small pieces to be read in between or alongside other graphics. While we have already discussed cognitive load theory, the Segmenting Theory, the idea that complex lessons should be broken into smaller, bite-sized parts, also supports personalization within interactive articles. Providing the ability to play, pause, and scrub content allows readers to move at their own pace, comprehending the information at a speed that works best for them. Segmenting also engages a reader’s essential processing without overloading their cognitive system.

Multiple studies have shown that learners perform better when information is segmented, whether only within an animation or within an interface with textual descriptions. One excellent example of using segmentation and animation to personalize content delivery is “A Visual Introduction to Machine Learning,” which introduces fundamental concepts within machine learning in bite-sized pieces while transforming a single dataset into a trained machine learning model. Extending this idea, in “Quantum Country,” an interactive textbook covering quantum computing, the authors implemented a user account system, allowing readers to save their position in the text and consume the content at their own pace. The book further takes advantage of the interactive medium by using spaced repetition to help improve recall.

### Reducing Cognitive Load

Authors must calibrate the level of detail at which they discuss ideas and content to their readers’ expertise and interest so as not to overload them.
When topics become multifaceted and complex, a balance must be struck between a high-level overview of a topic and its lower-level details. One interaction technique used to prevent cognitive overload in a reader is “details-on-demand.”

Details-on-demand has become a ubiquitous design pattern. For example, modern operating systems offer to fetch dictionary definitions when a word is highlighted. When applied to visualization, this technique allows users to select parts of a dataset to be shown in more detail while maintaining a broad overview. This is particularly useful when a change of view is not required, so that users can inspect elements of interest on a point-by-point basis in the context of the whole. Below we highlight areas where details-on-demand has been successfully applied to reduce the amount of information present within an interface at once.

#### Data Visualization

Details-on-demand is core to information visualization, and concludes the seminal Visual Information-Seeking Mantra: *“Overview first, zoom and filter, then details-on-demand.”* Successful visualizations not only provide the base representations and techniques for these three steps, but also bridge the gaps between them. In practice, the de facto standard for details-on-demand in data visualization is the tooltip, typically summoned on a cursor mouseover, which presents extra information in an overlay. Given that datasets often contain multiple attributes, tooltips can show the attributes that are not currently encoded visually, for example, the map in [6](#details-vis) that shows where different types of birdsongs were recorded and what they sound like.

#### Illustration

Details-on-demand is also used in illustrations, interactive textbooks, and museum exhibits, where highlighted segments of a figure can be selected to display additional information about that particular segment. For example, in “How does the eye work?” readers can select segments of an anatomical diagram of the human eye to learn more about specific regions, e.g., rods and cones. Another example is “Earth Primer,” an interactive textbook for tablets that allows readers to inspect the Earth’s interior, surface, and biomes. Each illustration contains segments the reader can tap to learn about and explore in depth. [7](#details-illustration) demonstrates this by pointing out specific regions in machine-generated imagery to help people spot fake images.

#### Mathematical Notation

Formal mathematics, a historically static medium, can benefit from details-on-demand, for example, to give a reader intuition about a particular algebraic term, present a geometric interpretation of an equation, or help a reader retain high-level context while digesting technical details. (See this [list of examples](https://github.com/fredhohman/awesome-mathematical-notation-design) that experiment with applying new design techniques to mathematical notation.) For example, in “Why Momentum Really Works,” equation layout is done using Gestalt principles plus annotation to help a reader easily identify specific terms. In “Colorized Math Equations,” the Fourier transform equation is presented in both mathematical notation and plain text, and the two are linked through a mouseover that highlights which term in the equation corresponds to which word in the text.
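
For reference, one standard (continuous) form of the Fourier transform, the equation that “Colorized Math Equations” annotates, is written out below; the plain-text glosses in the comments are our own paraphrase of the kind of term-by-term description such a mouseover links to, not the article’s exact wording.

```latex
% One standard (continuous) form of the Fourier transform.
% Each comment is the kind of plain-text gloss a mouseover in
% "Colorized Math Equations" attaches to a highlighted term (our wording).
X(\omega) = \int_{-\infty}^{\infty} x(t) \, e^{-i \omega t} \, dt
% X(\omega)        : how strongly frequency \omega is present in the signal
% x(t)             : the original signal, as a function of time t
% e^{-i \omega t}  : a unit-length "probe" spinning at frequency \omega
% \int \dots \, dt : accumulate the agreement between signal and probe over all time
```
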
Another example that visualizes mathematics and computation is the “Image Kernels” tutorial, where a reader can mouse over a real image and observe the effect, and the exact computation, of applying a filter over the image. Instead of writing down long arithmetic sums, the interface lets readers quickly see the summation operation’s terms and output. In [8](#details-math), one of Maxwell’s equations is shown. Click the two buttons to reveal, or remind yourself, what each notation mark and variable represents.

#### Text

While not as pervasive, text documents and other long-form textual mediums have also experimented with letting readers choose a variable level of detail to read. This idea was explored as early as the 1960s in StretchText, a hypertext feature that allows a reader to reveal a more descriptive or exhaustive explanation of something by expanding or contracting the content in place. The idea has resurfaced in more recent examples, including “On Variable Level-of-detail Documents”, a PhD thesis turned interactive article, and the call for proposals of *The Parametric Press*. One challenge that has limited this technique’s adoption is the burden it places on authors to write multiple versions of their content. For example, drag the slider in [9](#details-text) to read descriptions of the Universal Approximation Theorem in increasing levels of detail. For other examples of details-on-demand for text, such as its application in code documentation, see this small collection of examples.

#### Previewing Content

Details-on-demand can also be used as a method for previewing content without committing to another interaction or change of view. For example, when hovering over a hyperlink on Wikipedia, a preview card is shown that can contain an image and a brief description; this gives readers a quick preview of the topic without clicking through and loading a new page. This idea is not new: work from human-computer interaction explored fluid links within hypertext that present information about a particular topic in a location that does not obscure the source material. Both older and modern preview techniques use perceptually-based animation and simple tooltips to ensure their interactions feel natural and lightweight to readers.

Challenges for Authoring Interactives
-------------------------------------

*If interactive articles provide clear benefits over other mediums for communicating complex ideas, then why aren’t they more prevalent?*

Unfortunately, creating interactive articles today is difficult. Domain-specific diagrams, the main attraction of many interactive articles, must be individually designed and implemented, often from scratch. Interactions need to be intuitive and performant to achieve a pleasant reading experience. Needless to say, the text must also be well written and, ideally, seamlessly integrated with the graphics.

The act of creating a successful interactive article is closer to building a website than writing a blog post, often taking significantly more time and effort than a static article, or even an academic publication. (As a proxy, see the number of commits on an [example *Distill* article](https://github.com/distillpub/post--building-blocks).) Most interactive articles are created using general-purpose web-development frameworks which, while expressive, can be difficult to work with for authors who are not also web developers.
Even for expert web developers, current tools offer lower levels of abstraction than may be desired to prototype and iterate on designs.

While there are some tools that help alleviate this problem, they are relatively immature and mainly reduce the necessary programming tedium. Tools like Idyll can help authors start writing quickly and even enable rapid iteration through various designs (for example, letting an author quickly compare between sequencing content using a “scroller” or “stepper” based layout). However, Idyll does not offer design guidance, help authors think through where interactivity would be most effectively applied, or highlight how their content could be improved to increase its readability and memorability. For example, Idyll encodes no knowledge of the positive impact of self-explanation; instead, it requires authors to be familiar with this research and how to operationalize it.

Designing an interactive article successfully requires a diverse set of editorial, design, and programming skills. While some individuals are able to author these articles on their own, many interactive articles are created by a team of members with specialized skills, for example, data analysts, scripters, editors, journalists, graphic designers, and typesetters, as outlined in prior work. The current generation of authoring tools does not acknowledge this collaboration. For example, editing only the text of *this article* requires one to clone its source code using git, install project-specific dependencies using a terminal, and be comfortable editing HTML files. All of this complexity is incidental to the task of editing text.

Publishing to the web brings its own challenges: while interactive articles are available to anyone with a browser, they are burdened by rapidly changing web technologies that could break interactive content after just a few years. For this reason, easy and accessible archival of interactive articles is important so that authors know their work can be confidently preserved indefinitely to support continued readership. (This challenge has been [pointed out](https://twitter.com/redblobgames/status/1168520452634865665) by the community.) Authoring interactive articles also requires designing for a diverse set of devices, for example, ensuring bespoke content can be adapted for desktop and mobile screen sizes and varying connection speeds, since accessing interactive content demands more bandwidth.

There are other, non-technical limitations to publishing interactive articles. For example, in non-journalism domains, there is a misaligned incentive structure for authoring and publishing interactive content: why should a researcher spend time on an “extra” interactive exposition of their work when they could instead publish more papers, the metric on which their career depends? While different groups of people seek to maximize their work’s impact, legitimizing interactive artifacts requires buy-in from a collective of communities.

Making interactive articles accessible to people with disabilities is an open challenge. The dynamic medium exacerbates this problem compared to traditional static writing, especially when articles combine multiple formats like audio, video, and text. Therefore, ensuring interactive articles are accessible to everyone will require alternative modes of presenting content (e.g.,
text-to-speech, video captioning, data physicalization, data sonification) and careful interaction design.

It is also important to remember that not everything needs to be interactive. Authors should consider the audience and context of their work when deciding whether interactivity would be valuable. In the worst case, interactivity may distract readers or the functionality may go unused, the author having wasted their time implementing it. However, even in a domain where the potential communication improvement is incremental (in reality, multimedia studies show large effect sizes for improvements in transfer learning in many cases), at scale (e.g., delivering via the web) interactive articles can still [have impact](https://twitter.com/michael_nielsen/status/1031256363458916352?lang=en).

Critical Reflections
--------------------

We write this article not as media theorists, but as practitioners, researchers, and tool builders. While it has never been easier for writers to share their ideas online, current publishing tools largely support only static authoring and do not take full advantage of the fact that the web is a dynamic medium. We want that to change, and we are not alone. Others from the explorable explanations community have identified design patterns that help share complex ideas through play.

To explore these ideas further, two of this work’s authors created *The Parametric Press*: an annually published digital magazine that showcases the expository power that interactive, dynamic media can have when effectively combined. In late 2018, we invited writers to respond to a call for proposals for our first issue, which focused on exploring scientific and technological phenomena that stand to shape society at large. We sought to cover topics that would benefit from the interactive or otherwise dynamic capabilities of the web. Given the challenges of authoring interactive articles, we did not ask authors to submit fully developed pieces. Instead, we accepted idea submissions and collaborated with the authors over the course of four months to develop the issue, offering technical, design, and editorial assistance to the authors who lacked experience in one of these areas. For example, we helped a writer implement visualizations, a student frame a cohesive narrative, and a scientist recap history and disseminate it to the public. Multiple views from one article are shown in [10](#parametric).

We see *The Parametric Press* as a crucial connection between the often distinct worlds of research and practice. The project serves as a platform through which to operationalize the theories put forth by education, journalism, and HCI researchers. Tools like Idyll, which are designed in a research setting, need to be validated and tested to ensure that they are of practical use; *The Parametric Press* facilitates this by allowing us to study its use in a real-world setting, by authors who are personally motivated to complete their task of constructing a high-quality interactive article and who care about the tooling being used only secondarily, if at all.

Through *The Parametric Press*, we saw the many challenges of authoring, designing, and publishing firsthand, as both researchers and practitioners.
[2](#research-x-practice-table) summarizes interactive communication opportunities from both research and practice.

As researchers, we can treat the project as a series of case studies in which we observed the motivations and workflows used to craft the stories, from their initial conception to their publication. Motivation to contribute to the project varied by author. Where some authors had personal investment in an issue or dataset they wanted to highlight and raise awareness of broadly, others were drawn to the medium, recognizing its potential but not having the expertise or support to communicate interactively. We also observed how research software packages like Apparatus, Idyll, and D3 fit into the production of interactive articles, and how authors must combine these disparate tools to create an engaging experience for readers. In one article, “On Particle Physics,” an author combined two tools in a way that allowed him to create and embed dynamic graphics directly into his article without writing any code beyond basic markup. One of the creators of Apparatus had not considered this type of integration before, and upon seeing the finished article [commented](https://twitter.com/qualmist/status/1128157840672051200?s=20), *“That’s fantastic! Reading that article, I had no idea that Apparatus was used. This is a very exciting proof-of-concept for unconventional explorable-explanation workflows.”*

We were able to provide editorial guidance to the authors, drawing on our knowledge of empirical studies from the multimedia learning and information visualization communities to recommend graphical structures and page layouts, helping each article’s message be communicated most effectively. One of the most exciting outcomes of the project is that we saw authors develop interactive communication skills like any other skill: through continued practice, feedback, and iteration. We also observed the challenges inherent in publishing dynamic content on the web and identified the need for improved tooling in this area, specifically around the archiving of interactive articles. Will an article’s code still run a year from now? Ten years from now? To address interactive content archival, we set up a system to publish a digital archive of all of our articles at the time they are first published to the site. At the top of each article on *The Parametric Press* is an archive link that allows readers to download a WARC (Web ARChive) file that can be “played back” without requiring any web infrastructure. While our first iteration of the project relied on ad hoc solutions to these problems, we hope to show how digital works such as ours can be published confidently, knowing that they will be preserved indefinitely.

As practitioners, we pushed the boundaries of the current generation of tools designed to support the creation of interactive articles on the web. We found bugs and limitations in Idyll, a tool originally designed to support the creation of one-off articles, which we used as a content management system to power an entire magazine issue. We were forced to write patches and plugins to work around the limitations and achieve our desired publication. (Many of these patches have since been merged into Idyll itself: the power of modular open-source tooling in action.)
We were also forced to craft designs under a more realistic set of constraints than academics usually deal with: when creating a visualization, it is not enough to choose the most effective visual encodings; the graphics also had to be aesthetically appealing, adhere to a house style, have minimal impact on page load time and runtime performance, be legible on both mobile and desktop devices, and not be overly burdensome to implement. Any extra hour spent implementing one graphic was an hour not spent improving some other part of the issue, such as the clarity of the text or the overall site design.

There are relatively few outlets that have the skills, technology, and desire to publish interactive articles. From its inception, one of the objectives of *The Parametric Press* has been to showcase the new forms of media and publishing that are possible with the tools available today, and to inspire others to create their own dynamic writings. For example, Omar Shehata, the author of *The Parametric Press* article [“Unraveling the JPEG,”](https://parametric.press/issue-01/unraveling-the-jpeg/) told us he had wanted to write this interactive article for years yet never had the opportunity, support, or incentive to create it. His article drew wide interest and critical acclaim.

We also wanted to take the opportunity, as an independent publication, to serve as a concrete example for others to follow and to represent a set of best practices for publishing interactive content. To that end, we made available all of the software that runs the site, including reusable components, custom data visualizations, and the publishing engine itself.

Looking Forward
---------------

A diverse community has emerged to meet these challenges, exploring and experimenting with what interactive articles could be. The [Explorable Explanations community](https://explorabl.es/) is a “disorganized ‘movement’ of artists, coders & educators who want to reunite play and learning.” Their online hub contains 170+ interactive articles spanning art, natural sciences, social sciences, journalism, and civics. The curious can also find tools, tutorials, and meta-discussion around learning, play, and representations. Explorables also hosted a mixed in-person and [online Jam](https://explorabl.es/jam/): a community-based sprint focused on creating new explorable explanations. [11](#jam) highlights a subset of the interactive articles created during the Jam.

Many interactive articles are self-published due to a lack of platforms that support interactive publishing. Creating more outlets that allow authors to publish interactive content will help promote their development and legitimization. The few existing examples, including newer journals such as *Distill*, academic workshops like VISxAI, open-source publications like *The Parametric Press*, and live programming notebooks like Observable, help, but they currently target a narrow group of authors, namely those with programming skills. Such platforms should also provide clear paths to submission, quality and editorial standards, and authoring guidelines. For example, news outlets have clear instructions for pitching written pieces, yet this is underdeveloped for interactive articles. Lastly, there is little funding available to support the development of interactive articles and the tools that support them.
Researchers do not receive grants to communicate their work, and practitioners outside of the largest news outlets cannot afford the time and implementation investment. Providing more funding for interactive articles would incentivize their creation and could contribute to a culture in which readers expect digital communications to better utilize the dynamic medium.

We have already discussed the breadth of skills required to author an interactive article. Can we help lower the barrier to entry? While there have been great practical strides in this direction, there is still opportunity to create tools for designing, developing, and evaluating interactive articles in the wild. Specific features should include supporting mobile-friendly adaptations of interactive graphics, creating content for platforms besides the web, and allowing people to create interactive content without code.

The usefulness of interactive articles is predicated on the assumption that they actually facilitate communication and learning, yet there is limited empirical evaluation of their effectiveness. The problem is exacerbated by the fact that large publishers are unwilling to share internal metrics, and laboratory studies may not generalize to real-world reading trends. *The New York Times* provided one of the few available data points, stating that only a fraction of readers interact with non-static content, and suggested that designers should move away from interactivity. However, other research found that many readers, even those on mobile devices, are interested in utilizing interactivity when it is a core part of the article’s message. The statement from *The New York Times* has nonetheless solidified into a rule of thumb for designers, and many choose not to utilize interactivity because of it, despite follow-up discussion that contextualizes the original point and highlights scenarios where interactivity can be beneficial. This means designers are potentially choosing a suboptimal presentation of their story because of this anecdote.
More research is needed to identify the cases in which interactivity is worth the cost of creation.

We believe in the power and untapped potential of interactive articles for sparking readers’ desire to learn and for making complex ideas accessible and understandable to all.
"https://ieeexplore.ieee.org/abstract/document/8370186?casa_token=rvOSVHyPfCcAAAAA:hpGZrF1ytQIJ4x0jS8J99ok7v16vM-OC_keXvKCbQIuHsACTLXxLr-cutvgZ9PyC3MiAEbNb5Q", "title": "Exploranation: A new science communication paradigm"}, {"link": "https://distill.pub/2017/research-debt", "title": "Research debt"}, {"link": "https://ieeexplore.ieee.org/abstract/document/7274435", "title": "More than telling a story: Transforming data into visually shared stories"}, {"link": "https://srcd.onlinelibrary.wiley.com/doi/full/10.1111/j.1750-8606.2009.00095.x", "title": "\"Concrete\" computer manipulatives in mathematics education"}, {"link": "https://link.springer.com/chapter/10.1007/978-3-642-25289-1_37", "title": "Interactive non-fiction: Towards a new approach for storytelling in digital journalism"}, {"link": "https://ieeexplore.ieee.org/abstract/document/5350160", "title": "Active essays on the web"}, {"link": "https://www.tandfonline.com/doi/full/10.1080/21670811.2018.1488598", "title": "Simply bells and whistles? Cognitive effects of visual aesthetics in digital longforms"}, {"link": "https://www.tandfonline.com/doi/abs/10.1080/00220671.1980.10885233", "title": "Learning as a function of time"}, {"link": "https://psycnet.apa.org/doiLanding?doi=10.1037%2Fa0026609", "title": "Emotional design in multimedia learning"}, {"link": "https://dl.acm.org/doi/abs/10.1145/3206505.3206552", "title": "Hooked on data videos: assessing the effect of animation and pictographs on viewer engagement"}, {"link": "https://www.sciencedirect.com/science/article/abs/pii/S1071581902910177", "title": "Animation: Can it facilitate?"}, {"link": "https://ieeexplore.ieee.org/abstract/document/4376146", "title": "Animated transitions in statistical data graphics"}, {"link": "https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0142444", "title": "Hypothetical outcome plots outperform error bars and violin plots for inferences about reliability of variable ordering"}, {"link": "https://www.nature.com/articles/025605b0", "title": "The horse in motion"}, {"link": "http://arxiv.org/pdf/1909.07528.pdf", "title": "Emergent tool use from multi-agent autocurricula"}, {"link": "https://www.climate-lab-book.ac.uk/spirals/", "title": "Climate spirals"}, {"link": "https://www.bloomberg.com/graphics/hottest-year-on-record/", "title": "Earth's relentless warming sets a brutal new record in 2017"}, {"link": "https://climate.nasa.gov/vital-signs/global-temperature/", "title": "Global temperature"}, {"link": "https://www.nytimes.com/interactive/2017/07/28/climate/more-frequent-extreme-summer-heat.html", "title": "It's not your imagination. 
Summers are getting hotter."}, {"link": "https://www.nytimes.com/interactive/2018/03/19/upshot/race-class-white-and-black-men.html", "title": "Extensive data shows punishing reach of racism for black boys"}, {"link": "https://youtu.be/Kx6VNVLBgFI", "title": "Disagreements"}, {"link": "https://fivethirtyeight.com/features/gun-deaths/", "title": "Gun deaths in america"}, {"link": "http://www.fallen.io/ww2/", "title": "The fallen of World War II"}, {"link": "https://dl.acm.org/doi/abs/10.1145/3025453.3025512", "title": "Showing people behind data: Does anthropomorphizing visualizations elicit more empathy for human rights data?"}, {"link": "https://ieeexplore.ieee.org/abstract/document/7192646", "title": "Beyond memorability: Visualization recognition and recall"}, {"link": "https://ieeexplore.ieee.org/abstract/document/6634103", "title": "What makes a visualization memorable?"}, {"link": "https://source.opennews.org/articles/what-if-data-visualization-actually-people/", "title": "What if the data visualization is actually people"}, {"link": "https://arxiv.org/pdf/1811.07271.pdf", "title": "Ethical dimensions of visualization research"}, {"link": "https://ieeexplore.ieee.org/abstract/document/8640251", "title": "A walk among the data"}, {"link": "https://onlinelibrary.wiley.com/doi/full/10.1111/cgf.13195", "title": "Visual narrative flow: Exploring factors shaping data visualization story reading experiences"}, {"link": "https://onlinelibrary.wiley.com/doi/full/10.1111/cgf.13719", "title": "Linking and layout: Exploring the integration of text and visualization in storytelling"}, {"link": "https://www.wired.com/2009/07/cutthroat-capitalism-the-game/", "title": "Cutthroat Capitalism: The Game"}, {"link": "https://www.jstor.org/stable/pdf/jeductechsoci.8.2.54.pdf", "title": "Combining software games with education: Evaluation of its educational effectiveness"}, {"link": "https://ieeexplore.ieee.org/abstract/document/5613452", "title": "Narrative visualization: Telling stories with data"}, {"link": "http://ncase.me/polygons", "title": "Parable of the polygons"}, {"link": "https://vimeo.com/66085662", "title": "Drawing dynamic visualizations"}, {"link": "http://worrydream.com/ScientificCommunicationAsSequentialArt", "title": "Scientific communication as sequential art"}, {"link": "https://flowingdata.com/2016/01/19/how-you-will-die/", "title": "How you will die"}, {"link": "https://parametric.press/issue-01/on-particle-physics/", "title": "On Particle Physics"}, {"link": "https://fivethirtyeight.com/features/prison-reform-risk-assessment/", "title": "Should prison sentences be based on crimes that haven’t been committed yet"}, {"link": "https://distill.pub/2016/misread-tsne/", "title": "How to use t-SNE effectively"}, {"link": "https://idyll.pub/post/dimensionality-reduction-293e465c2a3443e8941b016d/", "title": "The beginner's guide to dimensionality reduction"}, {"link": "https://pair-code.github.io/understanding-umap/", "title": "Understanding UMAP"}, {"link": "http://arxiv.org/pdf/1901.05350.pdf", "title": "Tensorflow.js: Machine learning for the web and beyond"}, {"link": "https://design.google/library/designing-and-learning-teachable-machine/", "title": "Designing (and learning from) a teachable machine"}, {"link": "http://distill.pub/2016/handwriting", "title": "Experiments in handwriting with a neural network"}, {"link": "https://poloclub.github.io/ganlab/", "title": "Gan lab: Understanding complex deep generative models using interactive visual experimentation"}, {"link": 
"https://distill.pub/2017/aia/", "title": "Using artificial intelligence to augment human intelligence"}, {"link": "https://projects.fivethirtyeight.com/2016-election-forecast", "title": "Who will win the presidency"}, {"link": "https://www.nytimes.com/interactive/2016/upshot/presidential-polls-forecast.html", "title": "Who will be president"}, {"link": "https://www.washingtonpost.com/2016-election-results/us-presidential-race/?itid=sf_2016-election-results-washington", "title": "Live results: Presidential election"}, {"link": "https://www.sciencedirect.com/science/article/pii/S0079742102800056", "title": "Multimedia learning"}, {"link": "https://www.3blue1brown.com/", "title": "3Blue1Brown"}, {"link": "https://www.youtube.com/primerlearning", "title": "Primer"}, {"link": "https://psycnet.apa.org/record/2008-05694-008", "title": "Revising the redundancy principle in multimedia learning"}, {"link": "https://eater.net/quaternions", "title": "Visualizing quaternions: An explorable video series"}, {"link": "https://www.nytimes.com/interactive/2017/04/14/upshot/drug-overdose-epidemic-you-draw-it.html", "title": "You draw it: Just how bad is the drug overdose epidemic"}, {"link": "https://www.nytimes.com/interactive/2017/01/15/us/politics/you-draw-obama-legacy.html", "title": "You draw it: What got better or worse during Obama's presidency"}, {"link": "http://mucollective.co/theydrawit/", "title": "They draw it!"}, {"link": "https://ieeexplore.ieee.org/abstract/document/8019830", "title": "Data through others' eyes: The impact of visualizing others' expectations on visualization interpretation"}, {"link": "https://pudding.cool/2019/02/gyllenhaal/", "title": "The Gyllenhaal experiment"}, {"link": "https://qz.com/994486/the-way-you-draw-circles-says-a-lot-about-you/", "title": "How do you draw a circle? We analyzed 100,000 drawings to show how culture shapes our instincts"}, {"link": "https://journals.sagepub.com/doi/full/10.1111/j.1745-6916.2006.00012.x", "title": "The power of testing memory: Basic research and implications for educational practice"}, {"link": "https://www.khanacademy.org/", "title": "Khan Academy"}, {"link": "https://science.sciencemag.org/content/319/5865/966.abstract", "title": "The critical importance of retrieval for learning"}, {"link": "https://ncase.me/remember/", "title": "How to remember anything for forever-ish"}, {"link": "https://quantum.country/", "title": "Quantum country"}, {"link": "https://psycnet.apa.org/doiLanding?doi=10.1037%2F0022-0663.88.4.715", "title": "Intrinsic motivation and the process of learning: Beneficial effects of contextualization, personalization, and choice."}, {"link": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1839536/", "title": "Authoring and generation of individualized patient education materials"}, {"link": "http://www.cond.org/persalog.html", "title": "PersaLog: Personalization of news article content"}, {"link": "https://dl.acm.org/doi/abs/10.1145/2858036.2858440", "title": "Generating personalized spatial analogies for distances and areas"}, {"link": "https://www.nytimes.com/interactive/2018/08/30/climate/how-much-hotter-is-your-hometown.html", "title": "How much hotter is your hometown than when you were born"}, {"link": "https://pudding.cool/2018/10/city_3d/", "title": "Human terrain"}, {"link": "https://www.nytimes.com/interactive/2019/08/01/upshot/are-you-rich.html", "title": "Are you rich? 
This income-rank quiz might change how you see yourself"}, {"link": "https://www.nytimes.com/interactive/2019/08/08/opinion/sunday/party-polarization-quiz.html", "title": "Quiz: Let us predict whether you’re a democrat or a republican"}, {"link": "https://www.bloomberg.com/graphics/2017-job-risk/", "title": "Find Out If Your Job Will Be Automated"}, {"link": "https://www.bbc.com/news/health-30500372", "title": "Booze calculator: What's your drinking nationality"}, {"link": "https://www.bbc.com/news/technology-48867302", "title": "Click 1,000: How the pick-your-own-path episode was made"}, {"link": "https://psycnet.apa.org/record/2003-09576-012", "title": "Multimedia learning in an interactive self-explaining environment: What works in the design of agent-based microworlds?"}, {"link": "https://psycnet.apa.org/record/2002-02073-016", "title": "Pictorial aids for learning by doing in a multimedia geology simulation game."}, {"link": "http://www.r2d3.us/visual-intro-to-machine-learning-part-1/", "title": "A visual introduction to machine learning"}, {"link": "https://ieeexplore.ieee.org/abstract/document/1509067", "title": "Beyond guidelines: What can we learn from the visual information seeking mantra?"}, {"link": "https://ieeexplore.ieee.org/abstract/document/545307", "title": "The eyes have it: A task by data type taxonomy for information visualizations"}, {"link": "https://ieeexplore.ieee.org/abstract/document/981847", "title": "Information visualization and visual data mining"}, {"link": "https://www.nytimes.com/interactive/2014/06/05/upshot/how-the-recession-reshaped-the-economy-in-255-charts.html", "title": "How the recession shaped the economy, in 255 charts"}, {"link": "https://idyll.pub/post/the-eye-5b169094cce3bece5d95e964/", "title": "How does the eye work"}, {"link": "https://www.earthprimer.com/", "title": "Earth primer"}, {"link": "http://arxiv.org/pdf/1710.10196.pdf", "title": "Progressive growing of gans for improved quality, stability, and variation"}, {"link": "https://openaccess.thecvf.com/content_CVPR_2019/html/Karras_A_Style-Based_Generator_Architecture_for_Generative_Adversarial_Networks_CVPR_2019_paper.html", "title": "A style-based generator architecture for generative adversarial networks"}, {"link": "http://distill.pub/2017/momentum", "title": "Why momentum really works"}, {"link": "https://betterexplained.com/articles/colorized-math-equations/", "title": "Colorized math equations"}, {"link": "http://setosa.io/ev/image-kernels/", "title": "Image kernels"}, {"link": "http://symbolflux.com/lodessay/", "title": "On variable level-of-detail documents"}, {"link": "https://parametric.press/issue-01/call-for-proposals/", "title": "Call for proposals winter/spring 2019"}, {"link": "https://kayce.basqu.es/blog/information-control?curator=theedge", "title": "A UI that lets readers control how much information they see"}, {"link": "https://www.mediawiki.org/wiki/Page_Previews", "title": "Wikipedia Preview Card"}, {"link": "https://dl.acm.org/doi/10.1145/276627.276633", "title": "Fluid links for informed and incremental link transitions"}, {"link": "https://dl.acm.org/doi/abs/10.1145/513338.513353", "title": "Reading and writing fluid hypertext narratives"}, {"link": "https://idyll-lang.org/", "title": "Idyll: A markup language for authoring and publishing interactive articles on the web"}, {"link": "http://aprt.us/", "title": "Apparatus: A hybrid graphics editor and programming environment for creating interactive diagrams"}, {"link": "https://observablehq.com/", "title": 
"Observable"}, {"link": "https://ncase.me/loopy/", "title": "LOOPY: a tool for thinking in systems"}, {"link": "https://webstrates.net/", "title": "Webstrates: shareable dynamic media"}, {"link": "http://neuralnetworksanddeeplearning.com/", "title": "Neural networks and deep learning"}, {"link": "https://blog.ncase.me/how-i-make-an-explorable-explanation/", "title": "How I make explorable explanations"}, {"link": "https://blog.ncase.me/explorable-explanations-4-more-design-patterns/", "title": "Explorable explanations: 4 more design patterns"}, {"link": "https://www.microsoft.com/en-us/research/publication/emerging-and-recurring-data-driven-storytelling-techniques-analysis-of-a-curated-collection-of-recent-stories/", "title": "Emerging and recurring data-driven storytelling techniques: Analysis of a curated collection of recent stories"}, {"link": "https://parametric.press", "title": "Issue 01: Science & Society"}, {"link": "https://parametric.press/issue-01/the-myth-of-the-impartial-machine/", "title": "The myth of the impartial machine"}, {"link": "https://d3js.org/", "title": "D3 data-driven documents"}, {"link": "https://parametric.press", "title": "Launching the Parametric Press"}, {"link": "https://dl.acm.org/doi/abs/10.1145/3313831.3376777", "title": "Techniques for flexible responsive visualization design"}, {"link": "https://ieeexplore.ieee.org/abstract/document/8805428", "title": "A Comparative Evaluation of Animation and Small Multiples for Trend Visualization on Mobile Phones"}, {"link": "https://ieeexplore.ieee.org/abstract/document/8440812", "title": "Visualizing ranges over time on mobile phones: a task-based crowdsourced evaluation"}, {"link": "https://github.com/archietse/malofiej-2016/blob/master/tse-malofiej-2016-slides.pdf", "title": "Why we are doing fewer interactives"}, {"link": "https://onlinelibrary.wiley.com/doi/full/10.1111/cgf.13720", "title": "Capture & analysis of active reading behaviors for interactive articles on the web"}, {"link": "https://vis4.net/blog/2017/03/in-defense-of-interactive-graphics/", "title": "In defense of interactive graphics"}]} {"id": "e2c468fc2ab939af3d98f18583f395d1", "title": "Thread: Differentiable Self-organizing Systems", "url": "https://distill.pub/2020/selforg", "source": "distill", "source_type": "blog", "text": "Self-organisation is omnipresent on all scales of biological life. From complex interactions between molecules\n forming structures such as proteins, to cell colonies achieving global goals like exploration by means of the\n individual cells collaborating and communicating, to humans forming collectives in society such as tribes,\n governments or countries. The old adage “the whole is greater than the sum of its parts”, often ascribed to\n Aristotle, rings true everywhere we look.\n \n\n\n\n The articles in this thread focus on practical ways of designing self-organizing systems. In particular we use\n Differentiable Programming (optimization) to learn agent-level policies that satisfy system-level objectives. 
The cross-disciplinary nature of this thread aims to facilitate the exchange of ideas between the machine learning and developmental biology communities.

Articles & Comments
-------------------

Distill has invited several researchers to publish a “thread” of short articles exploring differentiable self-organizing systems, interspersed with critical commentary from several experts in adjacent fields. The thread will be a living document, with new articles added over time. Articles and comments are presented below in chronological order:

### [Growing Neural Cellular Automata](/2020/growing-ca/)

By [Alexander Mordvintsev](https://znah.net/), Ettore Randazzo, [Eyvind Niklasson](https://eyvind.me/), and [Michael Levin](http://www.drmichaellevin.org/) ([Google](https://research.google/), [Allen Discovery Center](https://allencenter.tufts.edu/)).

Building their own bodies is the very first skill all living creatures possess. How can we design systems that grow, maintain, and repair themselves by regenerating damage? This work investigates morphogenesis, the process by which living creatures self-assemble their bodies. It proposes a differentiable cellular automata model of morphogenesis and shows how such a model learns a robust and persistent set of dynamics to grow an arbitrary structure starting from a single cell. [Read Full Article](/2020/growing-ca/)

### [Self-classifying MNIST Digits](/2020/selforg/mnist/)

By Ettore Randazzo, [Alexander Mordvintsev](https://znah.net/), [Eyvind Niklasson](https://eyvind.me/), [Michael Levin](http://www.drmichaellevin.org/), and [Sam Greydanus](https://greydanus.github.io/about.html) ([Google](https://research.google/), [Allen Discovery Center](https://allencenter.tufts.edu/), [Oregon State University and the ML Collective](http://mlcollective.org/)).

This work presents a follow-up to Growing Neural CAs, using a similar computational model for the goal of digit “self-classification”. The authors show how neural CAs can self-classify the MNIST digit they form. The resulting CAs can be interacted with by dynamically changing the underlying digit, and they respond to perturbations with a learned, self-correcting classification behaviour. [Read Full Article](/2020/selforg/mnist/)

### [Self-Organising Textures](/selforg/2021/textures/)

By [Eyvind Niklasson](https://eyvind.me/), [Alexander Mordvintsev](https://znah.net/), Ettore Randazzo, and [Michael Levin](http://www.drmichaellevin.org/) ([Google](https://research.google/), [Allen Discovery Center](https://allencenter.tufts.edu/)).

Here the authors apply Neural Cellular Automata to a new domain: texture synthesis. They begin by training NCA to mimic a series of textures taken from template images. Then, taking inspiration from adversarial camouflages which appear in nature, they use NCA to create textures which maximally excite neurons in a pretrained vision model.
These results reveal that a simple model combined with well-known objectives can lead to robust and unexpected behaviors. [Read Full Article](/selforg/2021/textures/)

### [Adversarial Reprogramming of Neural Cellular Automata](/selforg/2021/adversarial/)

By [Ettore Randazzo](https://oteret.github.io/), [Alexander Mordvintsev](https://znah.net/), [Eyvind Niklasson](https://eyvind.me/), and [Michael Levin](http://www.drmichaellevin.org/) ([Google](https://research.google/), [Allen Discovery Center](https://allencenter.tufts.edu/)).

This work takes existing Neural CA models and shows how they can be adversarially reprogrammed to perform novel tasks: MNIST CA can be deceived into outputting incorrect classifications, and the patterns in Growing CA can be made to have their shape and colour altered. [Read Full Article](/selforg/2021/adversarial/)

#### This is a living document

Expect more articles on this topic, along with critical comments from experts.

Get Involved
------------

The Self-Organizing Systems thread is open to articles exploring differentiable self-organizing systems. Critical commentary and discussion of existing articles is also welcome. The thread is organized through the open `#selforg` channel on the [Distill slack](http://slack.distill.pub). Articles can be suggested there, and will be included at the discretion of previous authors in the thread or, in the case of disagreement, by an uninvolved editor.

If you would like to get involved but don’t know where to start, small projects may be available if you ask in the channel.

About the Thread Format
-----------------------

Part of Distill’s mandate is to experiment with new forms of scientific publishing. We believe that reconciling faster and more continuous approaches to publication with review and discussion is an important open problem in scientific publishing.

Threads are collections of short articles, experiments, and critical commentary around a narrow or unusual research topic, along with a slack channel for real-time discussion and collaboration. They are intended to be earlier-stage than a full Distill paper, and allow for more fluid publishing, feedback, and discussion. We also hope they’ll allow for wider participation. Think of a cross between a Twitter thread, an academic workshop, and a book of collected essays.

Threads are very much an experiment. We think it’s possible they’re a great format, and also possible they’re terrible.
We plan to trial two such threads and then re-evaluate our thoughts on the format.

Self-classifying MNIST Digits
-----------------------------

### Contents

[Model](#model)
[Experiments](#experiment-1)
* [Self-classify, persist & mutate](#experiment-1)
* [Stabilizing classification](#experiment-2)

[Robustness](#robustness)
[Related Work](#related-work)
[Discussion](#discussion)

This article is part of the [Differentiable Self-organizing Systems Thread](/2020/selforg/), an experimental format collecting invited short articles delving into differentiable self-organizing systems, interspersed with critical commentary from several experts in adjacent fields.

[Growing Neural Cellular Automata](/2020/growing-ca/)
[Self-Organising Textures](/selforg/2021/textures/)

Growing Neural Cellular Automata demonstrated how simple cellular automata (CAs) can learn to self-organise into complex shapes while being resistant to perturbations. Such a computational model approximates a solution to an open question in biology, namely, how do cells cooperate to create a complex multicellular anatomy and work to regenerate it upon damage? The model parameterizing the cells’ rules is parameter-efficient, end-to-end differentiable, and illustrates a new approach to modeling the regulation of anatomical homeostasis. In this work, we use a version of this model to show how CAs can be applied to a common task in machine learning: classification. We pose the question: *can CAs use local message passing to achieve global agreement on what digit they compose?*

Our question is closely related to another unsolved problem in developmental and regenerative biology: how do cell groups decide whether an organ or tissue pattern is correct, or whether the current anatomy needs to be remodeled (anatomical surveillance and repair toward a specific target morphology)? For example, when scientists surgically transplanted a salamander tail to its flank, it slowly remodeled into a limb, the organ that belongs at this location. Similarly, tadpoles with craniofacial organs in the wrong positions usually become normal frogs because they remodel their faces, placing the eye, mouth, nostrils, etc. in their correct locations. Cell groups move around and stop when the correct frog-specific anatomical configuration has been achieved. All of these examples illustrate the ability of biological systems to determine their current anatomical structure and decide whether it matches a species-specific target morphology.
Despite recent progress in the molecular biology of the genes necessary for this process, there is still a fundamental knowledge gap concerning the algorithms sufficient for cell collectives to measure and classify their own large-scale morphology. More broadly, it is important to create computational models of swarm intelligence that explicitly define and distinguish the dynamics of the basal cognition of single cells versus cell collectives.

### The self-classifying MNIST task

Suppose a population of agents is arranged on a grid. They do not know where they are in the grid, and they can only communicate with their immediate neighbors. They can also observe whether a neighbor is missing. Now suppose these agents are arranged to form the shape of a digit. Given that all the agents operate under the same rules, can they form a communication protocol such that, after a number of iterations of communication, *all of the agents know which digit they are forming?* Furthermore, if some agents were removed and others added to form a new digit from a preexisting one, would they be able to identify the new digit?

Because digits are not rotationally invariant (e.g., 6 is a rotation of 9), we presume the agents must be made aware of their orientation with respect to the grid. Therefore, while they do not know *where* they are, they do know where up, down, left, and right are. The biological analogy here is a situation where the remodeling structures exist in the context of a larger body, with a set of morphogen gradients or tissue polarity indicating directional information with respect to the three major body axes. Given these preliminaries, we introduce the self-classifying MNIST task.

*A visualisation of a random sample of digits from MNIST, each shaded by the colour corresponding to its label.*

Each sample of the MNIST dataset consists of a 28x28 image with a single monochrome channel that is classically displayed in greyscale. The label is an integer in [0, 9].

Our goal is for all cells that make up the digit to correctly output the label of the digit. To convey this structural information to the cells, we make a distinction between alive and dead cells by rescaling the values of the image to [0, 1] and treating a cell as alive if its value in the MNIST sample is larger than 0.1. The intuition here is that we are placing living cells in a cookie cutter and asking them to identify the global shape of the cookie cutter. We visualize the label output by assigning a color to each cell, as you can see above. We use the same mapping between colors and labels throughout the article. Please note that there is a slider in the interactive demo controls which you can use to adjust the color palette if you have trouble differentiating between the default colors.

Model
-----

In this article, we use a variant of the neural cellular automata model described in Growing Cellular Automata. We refer readers unfamiliar with its implementation to the original [“Model”](https://distill.pub/2020/growing-ca/#model) section. Here we describe a few areas where our model diverges from the original.

### Target labels

The work in Growing CA used RGB images as targets and optimized the first three state channels to approximate those images. For our experiments, we instead treat the last ten channels of our cells as a pseudo-distribution over the possible labels (digits).
During inference, we simply pick the label corresponding to the channel with the highest output value.\n\n\n### Alive cells and cell states\n\n\nIn Growing CA we assigned a cell’s state to be “dead” or “alive” based on the strength of its alpha channel and the activity of its neighbors, similar to the rules of Conway’s Game of Life. In the Growing CA model, “alive” cells are cells which update their state, while dead cells are “frozen” and do not undergo updates. In contrast to biological life, what we call “dead” cells aren’t dead in the sense of being non-existent or decayed, but rather frozen: they are visible to their neighbors and maintain their state throughout the simulation. In this work, meanwhile, we use the input pixel values to determine whether cells are alive or dead, and we perform computations with alive cells only (as introduced in the previous section, cells are considered alive if their normalized grey value is larger than 0.1). It is important to note that the values of the MNIST pixels are exposed to the cell update rule as an immutable channel of the cell state. In other words, we make cells aware of their own pixel intensities as well as those of their neighbors. Given 19 mutable cell state channels (nine general-purpose state channels for communication and ten output state channels for digit classification) and an immutable pixel channel, each cell perceives 19 + 1 state channels and only outputs state updates for the 19 mutable state channels.\n\n\n**A note on digit topology.** Keen readers may notice that our model requires each digit to be a single connected component in order for classification to be possible, since disconnected components are unable to propagate information between themselves. We made this design decision in order to stay true to our core biological analogy, which involves a group of cells that is trying to identify its global shape. Even though the vast majority of samples from MNIST are fully connected, some aren’t. We do not expect our models to classify non-connected minor components correctly, but we do not remove them (this choice complicates comparison between the MNIST train/test accuracies of neural network classifiers and CAs, but such a comparison is not in the scope of this article).\n\n\n### Perception\n\n\nThe Growing CA article made use of fixed 3x3 convolutions with Sobel filters to estimate the state gradients in $\vec{x}$ and $\vec{y}$. We found that fully trainable 3x3 kernels outperformed their fixed counterparts, and so we use them in this work.\n\n\n**A note on model size.** Like the Growing CA model, our MNIST CA is small by the standards of deep learning - it has less than 25k parameters. Since this work aims to demonstrate a novel approach to classification, we do not attempt to maximise the validation accuracy of the model by increasing the number of parameters or any other tuning. We suspect that, as with other deep neural network models, one would observe a positive correlation between accuracy and model size.\n\n\nExperiment 1: Self-classify, persist and mutate\n-----------------------------------------------\n\n\nIn our first experiment, we use the same training paradigm as was discussed in Growing CA: we train with a pool of initial samples to allow the model to learn to persist, and then perturb the converged states. However, our perturbation is different. Previously, we destroyed the states of cells at random in order to make the CAs resistant to destructive perturbations (analogous to traumatic tissue loss).
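Before turning to how the perturbation is used differently here, the following is a minimal sketch of the per-cell state layout, perception and update step described in the preceding sections. It is our own simplification in PyTorch, not the authors' implementation: the channel counts, the 0.1 alive threshold and the trainable 3x3 perception kernels follow the text, while the hidden width, the fire rate and the depthwise grouping of the perception convolution are guesses.

```python
import torch
import torch.nn as nn

N_MUTABLE = 19               # 9 communication channels + 10 label channels (as described above)
N_PERCEIVED = N_MUTABLE + 1  # plus the immutable pixel-intensity channel

class SelfClassifyingCA(nn.Module):
    """Simplified sketch of the per-cell update rule; not the authors' implementation."""

    def __init__(self, hidden=80):
        super().__init__()
        # Fully trainable 3x3 perception kernels, three per perceived channel (depthwise).
        self.perceive = nn.Conv2d(N_PERCEIVED, 3 * N_PERCEIVED, kernel_size=3,
                                  padding=1, groups=N_PERCEIVED, bias=False)
        # Small per-cell network applied at every grid location via 1x1 convolutions.
        self.update = nn.Sequential(
            nn.Conv2d(3 * N_PERCEIVED, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, N_MUTABLE, 1),
        )

    def forward(self, state, pixels, noise_std=0.0, fire_rate=0.5):
        # state: (B, 19, 28, 28) mutable channels; pixels: (B, 1, 28, 28) rescaled to [0, 1].
        alive = (pixels > 0.1).float()               # dead cells never update or communicate
        perception = self.perceive(torch.cat([state, pixels], dim=1))
        dx = self.update(perception)                 # residual update for the mutable channels
        if noise_std > 0:                            # optional noise on the residual updates
            dx = dx + noise_std * torch.randn_like(dx)
        # Stochastic per-cell update mask, as in Growing CA (the rate here is a guess).
        mask = (torch.rand_like(alive) < fire_rate).float()
        return (state + dx * mask) * alive

def read_labels(state):
    """The last ten mutable channels act as a pseudo-distribution over the digits 0-9."""
    return state[:, -10:].argmax(dim=1)
```

A rollout would repeatedly apply `state = ca(state, pixels)` for a number of steps before a per-cell loss is applied to the last ten channels.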
In this context, perturbation has a slightly different role to play. Here we aim to build a CA model that not only has regenerative properties, but also *has the ability to correct itself when the shape of the overall digit changes.*\n\n\nBiologically, this corresponds to a teratogenic influence during development, or alternatively, a case of an incorrect or incomplete remodeling event such as metamorphosis or rescaling. The distinction between training our model from scratch and training it to accommodate perturbations is subtle but important. An important feature of life is the ability to react adaptively to external perturbations that are not accounted for in the normal developmental sequence of events. If our virtual cells simply learned to recognize a digit and then entered some dormant state and did not react to any further changes, we would be missing this key property of living organisms. One could imagine a trivial solution in the absence of perturbations, where a single wave of information is passed from the boundaries of the digit inwards and then back out, in such a way that all cells could agree on a correct classification. By introducing perturbations to new digits, the cells have to be in constant communication and achieve a “dynamic homeostasis” - continually “kept on their toes” in anticipation of new or further communication from their neighbours.\n\n\nIn our model, we achieve this dynamic homeostasis by randomly mutating the underlying digit at training time. Starting from a certain digit and after some time evolution, we sample a new digit, erase the states of all cells that are not present in both digits, and bring alive the cells that were not present in the original digit but are present in the new digit. This kind of mutation teaches CAs to process new information and adapt to changing conditions. It also exposes the cells to training states where all of the cells that remain after a perturbation are misclassifying the new digit and must recover from this catastrophic mutation. This in turn forces our CAs to learn to change their own classifications to adapt to changing global structure.\n\n\nWe use a pixel-wise (cell-wise) cross-entropy loss on the last ten channels of each pixel, applying it after letting the CA evolve for 20 steps.\n\n\nA first attempt at having the neural CAs classify digits. Each digit is a separate evolution of the neural CA, with the visualisations collated. Halfway through, the underlying digit is swapped for a new one - a “mutation”.\n\nThe video above shows the CA classifying a batch of digits for 200 steps. We then mutate the digits and let the system evolve and classify for a further 200 steps.\n\n\nThe results look promising overall and we can see how our CAs are able to recover from mutations. However, astute observers may notice that often not all cells agree with each other. Often, the majority of the digit is classified correctly, but some outlier cells remain convinced they are part of a different digit, switching back and forth in an oscillating pattern and causing a flickering effect in the visualization. This is not ideal, since we would like the population of cells to reach stable, total agreement. The next experiment troubleshoots this undesired behaviour.\n\n\nExperiment 2: Stabilizing classification\n----------------------------------------\n\n\nQuantifying a qualitative issue is the first step to solving it.
We propose a metric to track **average cell accuracy**, which we define as the mean percentage of cells that have a correct output. We track this metric both before and after mutation.\n\n\nAverage accuracy across the cells in a digit over time.\n\nIn the figure above, we show the mean percentage of correctly classified pixels in the test set over the course of 400 steps. At step 200, we randomly mutate the digit. Accordingly, we see a brief drop in accuracy as the cells re-organise and eventually come to agreement on what the new digit is.\n\n\nWe immediately notice an interesting phenomenon: the cell accuracy appears to decrease over time after the cells have come to an agreement. However, the graph does not necessarily reflect the qualitative issue of unstable labels that we set out to solve. The slow decay in accuracy may be a reflection of the lack of total agreement, but it doesn’t capture the stark instability issue.\n\n\nInstead of looking at mean accuracy, perhaps we should measure **total agreement**. We define total agreement as the percentage of samples from a given batch wherein all the cells output the same label.\n\n\nAverage total agreement among cells across the test set of MNIST, over time.\n\nThis metric does a better job of capturing the issues we are seeing. The total agreement starts at zero and then spikes up to roughly 78%, only to lose more than 10% agreement over the next 100 steps. Again, behaviour after mutation does not appear to be significantly different. Our model is not only unstable in the short term, exhibiting flickering, but is also unstable over longer timescales. As time goes on, cells become less sure of themselves. Let’s inspect the inner states of the CA to see why this is happening.\n\n\nAverage magnitude of the state channels and residual updates in active cells over time in the test set.\n\nThe figure above shows the time evolution of the average magnitude of the state values of active cells (solid line) and the average magnitude of the residual updates (dotted line). Two important things are happening here: 1) the average magnitude of each cell’s internal states is growing monotonically on this timescale; 2) the average magnitude of the residual updates stays roughly constant. We theorize that, contrary to 1), a successful CA model should stabilize the magnitude of its internal states once cells have reached an agreement. In order for this to happen, its residual updates should approach zero over time, unlike what we observe in 2).\n\n\n**Using an $L_2$ loss.** One problem with cross-entropy loss is that it tends to push raw logit values indefinitely higher. Another problem is that two sets of logits can have vastly different values but essentially the same prediction over classes. As such, training the CA with cross-entropy loss neither requires nor encourages a shared reference range for logit values, making it difficult for the cells to effectively communicate and stabilize. Finally, we theorize that large magnitudes in the classification channels may in turn lead the remaining (non-classification) state channels to transition to a high-magnitude regime.
More specifically, we believe that *cross-entropy loss causes unbounded growth in the classification logits, which prevents the residual updates from approaching zero, which means that neighboring cells continue passing messages to each other even after they reach an agreement; ultimately, this causes the magnitude of the message vectors to grow without bound*. With these problems in mind, we instead try training our model with a pixel-wise $L_2$ loss and use one-hot vectors as targets. Intuitively, this solution should be more stable since the raw state channels for classification are never pushed out of the range $[0, 1]$, and a properly classified digit will have, in each cell, exactly one classification channel set to 1 and the rest to 0. In summary, an $L_2$ loss should decrease the magnitude of all the internal state channels while keeping the classification targets in a reasonable range.\n\n\n**Adding noise to the residual updates.** A number of popular regularization schemes involve injecting noise into a model in order to make it more robust. Here we add noise to the residual updates by sampling from a normal distribution with a mean of zero and a standard deviation of $2 \times 10^{-2}$. We add this noise before randomly masking the updates.\n\n\nNeural CA trained with $L_2$ loss, exhibiting less instability after converging to a label.\n\nThe video above shows a batch of runs with the augmentations in place. Qualitatively, the result looks much better: there is less flickering and more total agreement. Let’s check the quantitative metrics to see if they, too, show improvement.\n\n\nComparison of average accuracy and total agreement when using cross-entropy and when using $L_2$ loss.\n\n\n\n| Model | Top accuracy (%, at step) | Accuracy at step 200 (%) | Top agreement (%, at step) | Agreement at step 200 (%) |\n| --- | --- | --- | --- | --- |\n| CE | **96.2 at 80** | **95.3** | 77.9 at 80 | 66.2 |\n| $L_2$ | 95.0 at 95 | 94.7 | 85.5 at 175 | 85.2 |\n| $L_2$ + Noise | 95.4 at 65 | **95.3** | **88.2 at 190** | **88.1** |\n\n\n\nThe figure and table above show that cross-entropy achieves the highest accuracy of all models, at roughly 80 steps. However, its accuracy at 200 steps is the same as that of the $L_2$ + Noise model. While accuracy and agreement degrade over time for all models, $L_2$ + Noise appears to be the most stable configuration. In particular, note that the total agreement after 200 steps of $L_2$ + Noise is 88%, an improvement of more than 20 percentage points over the cross-entropy model.\n\n\n### Internal states\n\n\nAverage magnitude of state channels over time for $L_2$ loss and cross-entropy loss.\n\nLet’s compare the internal states of the augmented model to those of the original. The figure above shows how switching to an $L_2$ loss stabilizes the magnitude of the states, and how the residual updates quickly decay to small values as the system nears agreement.\n\n\nVisualisation of internal state channel values during mutations. Note the accelerated timeline after a few seconds, showing the relative stability of the channel values.\n\nTo further validate our results, we can visualize the dynamics of the internal states of the final model.
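As a reference point before looking at those dynamics, here is a compact sketch of the two metrics and the two per-cell losses compared above. This is our own PyTorch rendering of the definitions given in the text, not the authors' code; it reuses the state layout and the -1-for-dead-cells convention from the earlier sketches.

```python
import torch
import torch.nn.functional as F

# Assumed shapes: state (B, 19, H, W) with the last ten channels as label outputs,
# targets (B, H, W) integer labels with -1 for dead cells, alive (B, H, W) float mask.

def average_cell_accuracy(state, targets, alive):
    # Mean fraction of alive cells whose highest label channel matches the digit's label.
    pred = state[:, -10:].argmax(dim=1)
    correct = (pred == targets).float() * alive
    return correct.sum() / alive.sum()

def total_agreement(state, alive):
    # Fraction of samples in which every alive cell outputs the same label.
    pred = state[:, -10:].argmax(dim=1)
    agreements = []
    for p, a in zip(pred, alive > 0.5):
        labels = p[a]
        agreements.append(float(labels.numel() > 0 and bool((labels == labels[0]).all())))
    return sum(agreements) / len(agreements)

def cross_entropy_loss(state, targets, alive):
    # Per-cell cross-entropy on the label channels (Experiment 1).
    per_cell = F.cross_entropy(state[:, -10:], targets, reduction="none", ignore_index=-1)
    return (per_cell * alive).sum() / alive.sum()

def l2_loss(state, targets, alive):
    # Per-cell L2 loss against one-hot targets (Experiment 2).
    one_hot = F.one_hot(targets.clamp(min=0), 10).permute(0, 3, 1, 2).float()
    per_cell = ((state[:, -10:] - one_hot) ** 2).sum(dim=1)
    return (per_cell * alive).sum() / alive.sum()
```

For the noise-augmented variant, one would additionally pass something like `noise_std=2e-2` to the update step sketched earlier, so that Gaussian noise is added to the residual updates before the random update mask is applied.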
For visualization purposes, we have squashed the internal state values by applying an element-wise $\arctan$, as most state values are less than one but a few are much larger. The states converge to stable configurations quickly, and the state channels exhibit spatial continuity with the neighbouring states; more specifically, we don’t see any stark discontinuities in the state values of neighbouring pixels. Applying a mutation causes the CA to readapt to the new shape and form a new classification in just a few steps, after which its internal values are stable.\n\n\nRobustness\n----------\n\n\nRecall that during training we used random digit mutations to ensure that the resulting CA would be responsive to external changes. This allowed us to learn a dynamical system of agents which interact to produce stable behavior at the population level, even when perturbed to form a different digit from the original. Biologically, this model helps us understand the mutation insensitivity of some large-scale anatomical control mechanisms. For example, planaria continuously accumulate mutations over millions of years of somatic inheritance but still always regenerate the correct morphology in nature (and exhibit no genetic strains with new morphologies).\n\n\nThis robustness to change was critically important to our interactive demo, since the cells needed to reclassify drawings as the user changed them. For example, when the user converted a six into an eight, the cells needed to quickly re-classify themselves as an eight. We encourage the reader to play with the interactive demo and experience this for themselves. In this section, we want to showcase a few behaviours we found interesting.\n\n\nDemonstration of the CA successfully re-classifying a digit when it is modified by hand.\n\nThe video above shows how the CA is able to interactively adjust to our own writing and to change its classification when the drawing is updated.\n\n\n### Robustness to out-of-distribution shapes\n\n\nIn the field of machine learning, researchers take great interest in how their models perform on out-of-distribution data. In the experimental sections of this article, we evaluated our model on the test set of MNIST. In this section, we go further and examine how the model reacts to digits drawn by us and not sampled from MNIST at all. We vary the shapes of the digits until the model is no longer capable of classifying them correctly. Every classification model contains certain inductive biases that make it more or less capable of generalizing to out-of-distribution data. Our model can be seen as a recurrent convolutional model, and thus we expect it to exhibit some of the key properties of traditional convolutional models, such as translation invariance. However, we strongly believe that the self-organising nature of this model introduces a novel inductive bias which may have interesting properties of its own. Biology offers examples of “repairing to novel configurations”: 2-headed planaria, once created, regenerate to this new configuration, which was not present in the evolutionary “training set”.\n\n\nDemonstration of some of the failure cases of the CA.\n\nAbove, we can see that our CA fails to classify some variants of 1 and 9. This is likely because the MNIST training data is not sufficiently representative of all writing styles.
We hypothesize that more varied and extensive datasets would improve performance. In these situations, the model often oscillates between two attractors (of competing digit labels). This is interesting because such behavior could not arise from static classifiers such as traditional convolutional neural networks.\n\n\nDemonstration of the inherent robustness of the model to unseen sizes and variants of numbers.\n\nBy construction, our CA is translation invariant. But perhaps surprisingly, we noticed that our model is also scale-invariant for out-of-distribution digit sizes, up to a certain point. Alas, it does not generalize well enough to classify digits of arbitrary lengths and widths.\n\n\nDemonstration of the behaviour of the model with chimeric configurations.\n\nIt is also interesting to see how our CA classifies “chimeric digits”, which are shapes composed of multiple digits. First, when creating a 3-5 chimera, the classification of the 3 appears to dominate that of the 5. Second, when creating an 8-9 chimera, the CAs reach an oscillating attractor where sections of the two digits are correctly classified. Third, when creating a 6-9 chimera, the CAs converge to an oscillating attractor, but the 6 is misclassified as a 4.\n\nThese phenomena are important in biology as scientists begin to develop predictive models for the morphogenetic outcome of chimeric cell collectives. We still do not have a framework for knowing in advance what anatomical structures will form from a combination of, for example, leg-and-tail blastema cells in an axolotl, heads of planaria housing stem cells from species with different head shapes, or composite embryos consisting of, for example, frog and axolotl blastomeres. Likewise, designing information signals that induce the emergence of desired tissue patterns from a chimeric cellular collective, in vitro or in vivo, remains an open problem.\n\n\nRelated Work\n------------\n\n\nThis article is follow-up work to Growing Neural Cellular Automata, and it is meant to be read after the latter. In this article, we purposefully skim over details of the original model and refer the reader to the Growing Neural Cellular Automata article for the [full model description](https://distill.pub/2020/growing-ca/#model) and [related work](https://distill.pub/2020/growing-ca/#related-work) sections.\n\n\n**MNIST and CA.** Since CAs are easy to apply to two-dimensional grids, many researchers have wondered whether they could be used to classify the MNIST dataset. We are aware of work that combines CAs with Reservoir Computing, Boltzmann Machines, Evolutionary Strategies, and ensemble methods. To the best of our knowledge, we are the first to train end-to-end differentiable neural CAs for classification purposes, and the first to introduce the self-classifying variant of MNIST wherein each pixel in the digit needs to coordinate locally in order to reach a global agreement about its label.\n\n\nDiscussion\n----------\n\n\nThis article serves as a proof of concept for how simple self-organising systems such as CAs can be used for classification when trained end-to-end through backpropagation.\n\n\nOur model adapts to writing and erasing and is surprisingly robust to certain ranges of digit stretching and brush widths.
We hypothesize that self-organising models with constrained capacity may be inherently robust and have good generalisation properties. We encourage future work to test this hypothesis.\n\n\nFrom a biological perspective, our work shows we can teach things to a collective of cells that they could not learn individually (by training or engineering a single cell). Training cells in unison (while communicating with each other) allows them to learn more complex behaviour than any attempt to train them one by one, which has important implications for strategies in regenerative medicine. The current focus on editing individual cells at the genetic or molecular signaling level faces fundamental barriers when trying to induce desired complex, system-level outcomes (such as regenerating or remodeling whole organs). The inverse problem of determining which cell-level rules (e.g., genetic information) must be changed to achieve a global outcome is very difficult. In contrast and complement to this approach, we show the first component of a roadmap toward developing effective strategies for communication with cellular collectives. Future advances in this field may be able to induce desired outcomes by using stimuli at the system’s input layer (experience), not hardware rewiring, to re-specify outcomes at the tissue, organ, or whole-body level .\n\n\n\n![](images/multiple-pages.svg)\n\n This article is part of the\n [Differentiable Self-organizing Systems Thread](/2020/selforg/),\n an experimental format collecting invited short articles delving into\n differentiable self-organizing systems, interspersed with critical\n commentary from several experts in adjacent fields.\n \n\n\n[Growing Neural Cellular Automata](/2020/growing-ca/)\n[Self-Organising Textures](/selforg/2021/textures/)", "date_published": "2020-08-27T20:00:00Z", "authors": ["Ettore Randazzo", "Alexander Mordvintsev", "Eyvind Niklasson", "Michael Levin", "Sam Greydanus"], "summaries": ["Training an end-to-end differentiable, self-organising cellular automata for classifying MNIST digits."], "doi": "10.23915/distill.00027.002", "journal_ref": "distill-pub", "bibliography": [{"link": "https://doi.org/10.23915/distill.00023", "title": "Growing Neural Cellular Automata"}, {"link": "https://doi.org/10.1007/bf02159624", "title": "The transformation of a tail into limb after xenoplastic transplantation"}, {"link": "https://doi.org/10.1002/dvdy.23770", "title": "Normalized shape and location of perturbed craniofacial structures in the Xenopus tadpole reveal an innate ability to achieve correct morphology"}, {"link": "https://doi.org/10.1098/rsif.2016.0555", "title": "Top-down models in biology: explanation and control of complex living systems above the molecular level"}, {"link": "http://www.jstor.org/stable/24927642", "title": "MATHEMATICAL GAMES"}, {"link": "http://jmlr.org/papers/v15/srivastava14a.html", "title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting"}, {"link": "http://papers.nips.cc/paper/4329-practical-variational-inference-for-neural-networks.pdf", "title": "Practical Variational Inference for Neural Networks"}, {"link": "http://www.sciencedirect.com/science/article/pii/S1084952117301970", "title": "Planarian regeneration as a model of anatomical homeostasis: Recent progress in biophysical and computational approaches"}, {"link": "http://www.sciencedirect.com/science/article/pii/S001216060901402X", "title": "Long-range neural and gap junction protein-mediated cues control polarity during planarian 
regeneration"}, {"link": "https://doi.org/10.1089/soro.2014.0011", "title": "Bioelectrical Mechanisms for Programming Growth and Form: Taming Physiological Networks for Soft Body Robotics"}, {"link": "https://doi.org/10.1016/j.gde.2018.05.007", "title": "Interspecies chimeras"}, {"link": "https://doi.org/10.1587/nolta.9.24", "title": "Asynchronous network of cellular automaton-based neurons for efficient implementation of Boltzmann machines"}, {"link": "http://www.sciencedirect.com/science/article/pii/S2212683X18300203", "title": "Biologically inspired cellular automata learning and prediction model for handwritten pattern recognition"}]} {"id": "c5dbee2ef12d51ed75670e597b7a1ab7", "title": "Curve Detectors", "url": "https://distill.pub/2020/circuits/curve-detectors", "source": "distill", "source_type": "blog", "text": "### Contents\n\n[A Simplified Story of Curve Neurons](#a-simplified-story-of-curve-neurons)[Feature Visualization](#feature-visualization)[Dataset Analysis](#dataset-analysis)[Visualizing Attribution](#visualizing-attribution)[Human Comparison](#human-comparison)[Joint Tuning Curves](#joint-tuning-curves)[Synthetic Curves](#synthetic-curves)[Radial Tuning Curve](#radial-tuning-curve)[The Curve Families of InceptionV1](#the-curve-families-of-inceptionv1)[Repurposing Curve Detectors](#repurposing-curve-detectors)[The Combing Phenomenon](#the-combing-phenomenon)[Conclusion](#conclusion)
This article is part of the [Circuits thread](/2020/circuits/), an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks.\n\n[An Overview of Early Vision in InceptionV1](/2020/circuits/early-vision/)[Naturally Occurring Equivariance in Neural Networks](/2020/circuits/equivariance/)\n\nEvery vision model we’ve explored in detail contains neurons which detect curves. Curve detectors in vision models have been hinted at in the literature as far back as 2013 (see figures in Zeiler & Fergus), and similar neurons have been studied carefully in neuroscience. We [briefly discussed](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_curves) curve detectors in our earlier overview of early vision, but wanted to examine them in more depth. This article is the first part of a three-article deep dive into curve detectors: their behavior, how they’re built from earlier neurons, and their prevalence across models.\n\nWe’re doing this because we believe that the interpretability community disagrees on several crucial questions. In particular, are neural network representations composed of meaningful features — that is, features tracking articulable properties of images? On the one hand, there are a number of papers reporting on seemingly meaningful features, such as eye detectors, head detectors, car detectors, and so forth. At the same time, there’s a significant amount of skepticism, only partially reflected in the literature. One concern is that features which seem superficially to be meaningful may in fact not be what they appear. Several papers have suggested that neural networks primarily detect textures or imperceptible patterns rather than the kind of meaningful features described earlier. Finally, even if some meaningful features exist, it’s possible they don’t play an especially important role in the network. Some reconcile these results by concluding that if one observes, for example, what appears to be a dog head detector, it is actually a detector for special textures correlated with dog heads.\n\nThis disagreement really matters. If every neuron was meaningful, and their connections formed meaningful circuits, we believe it would open a path to completely reverse engineering and interpreting neural networks. Of course, we know not every neuron is meaningful (as discussed in Zoom In, the main issue we see is what we call polysemantic neurons, which respond to multiple different features, seemingly as a way to compress many features into a smaller number of neurons; we’re hopeful this can be worked around), but we think it’s close enough for this path to be tractable. However, our position is definitely not the consensus view. Moreover, it seems too good to be true, and rings of similar failed promises in other fields (for example, genetics seems to have been optimistic in the past that genes had individual functions and that the human genome project would allow us to “mine miracles,” a position which now seems to be regarded as having been naive) — skepticism is definitely warranted!\n\nWe believe that curve detectors are a good vehicle for making progress on this disagreement. Curve detectors seem like a modest step from edge-detecting Gabor filters, which the community widely agrees often form in the first convolutional layer. Furthermore, artificial curves are simple to generate, opening up lots of possibilities for rigorous investigation. And the fact that they’re only a couple of convolutional layers deep means we can follow every string of neurons back to the input. At the same time, the underlying algorithm the model has implemented for curve detection is quite sophisticated. If this paper persuades skeptics that at least curve detectors exist, that seems like a substantial step forward. Similarly, if it surfaces a more precise point of disagreement, that would also advance the dialogue.\n\n[A Simplified Story of Curve Neurons\n-----------------------------------](#a-simplified-story-of-curve-neurons)\n\nBefore running detailed experiments, let’s look at a high-level and slightly simplified story of how the 10 curve neurons in 3b work.\n\n![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_342.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_388.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_324.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_340.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_330.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_349.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_406.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_385.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_343.jpg)\n\nEach neuron’s ideal curve, created with feature visualization, which uses optimization to find superstimuli.\n\nEach curve detector implements a variant of the same algorithm: it responds to a wide variety of curves, preferring curves of a particular orientation and gradually firing less as the orientation changes. Curve neurons are invariant to cosmetic properties such as brightness, texture, and color.
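The figure below shows this kind of orientation tuning for one unit (3b:379). As a rough, self-contained illustration of how such a measurement could be set up (this is not the authors' code), here is a numpy sketch that renders synthetic curve stimuli at a range of orientations; `activation_fn` is a hypothetical stand-in for running the rendered image through InceptionV1 and reading out a single curve neuron's activation.

```python
import numpy as np

def render_curve(orientation, radius=60, size=224, thickness=4):
    """Render a single arc-shaped curve, bulging in the direction `orientation` (radians)."""
    yy, xx = np.mgrid[0:size, 0:size] - size / 2.0
    # Put the centre of curvature behind the image centre so the arc passes through the centre.
    cy, cx = -radius * np.sin(orientation), -radius * np.cos(orientation)
    dist = np.abs(np.hypot(yy - cy, xx - cx) - radius)
    ang = np.arctan2(yy - cy, xx - cx)
    arc = np.abs(np.angle(np.exp(1j * (ang - orientation)))) < np.pi / 3  # keep a 120-degree arc
    img = ((dist < thickness) & arc).astype(np.float32)
    return np.stack([img] * 3, axis=-1)  # white curve on black, 3 channels

def tuning_curve(activation_fn, n_orientations=64):
    """Sweep the curve's orientation and record the (hypothetical) unit's response."""
    angles = np.linspace(0, 2 * np.pi, n_orientations, endpoint=False)
    return angles, np.array([activation_fn(render_curve(a)) for a in angles])
```

Sweeping the orientation over a full circle and plotting the recorded activations gives a tuning curve like the ones below, which peak at the unit's preferred orientation and fall off as the curve rotates away from it.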
\n\n![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)\n\n3b:379 Activations by Orientation\n\nLater in this article we’ll look in depth at activations to synthetic curve images. Curve detectors collectively span all orientations.\n\nCurve Family Activations by Orientation
VjTLspOTk06nc3l5GVyuVvT8/LxOp5ubm1NDrkp0PB53uVwTExMsy6rhh39PC4KwsrKCIMj4+DiO4+B+nU6FMb60tISi6OzsLLj5N8A7fX9/H41G+/r6wuEwrPktwNHxeDwWi42Ojra2tsKa3wIZLcvy9va2x+MZGhoC1L4HMjqZTJIkGQqFOjo6ALXvgYw+ODgQBCEQCAA6PwQsWpIkiqK8Xq/P54NyfgZY9MnJSTqd7unpUetufgNY9NnZGU3TnZ2dUEIFwKIZhnE4HC0tLVBCBcCis9lsU1OTw+GAEioAE/36+prL5Ww2W0NDA4hQGZhojuMKhQKO43q9HkSoDEy0IAiCIKAoCmIrCUy0KIqiKBoMBhBbSWCif70YtTkbOqhoBEEQBCkWiyC2ksBEm0wms9n8+PgIYisJTDSGYRiGcRwHYisJTLTRaLRYLPl8nmVZEKEyYBPRbrdns9mrqysooQJg0U6n8/LykmEYKKECYNFtbW02m42maSihAmDRLperq6srnU5rcPGBRWMY5vF4jo6OUqkUlPMzIP+IPp+P47j9/X1A54dARvf39xMEQZJkPp8H1L7nJ/FTcYntL7mEAAAAAElFTkSuQmCC)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAADaklEQVR4nO3Yz0vycADH8e3BIemKNXAJRWYoEZWBmCC2QqhDdCkKOnSJoHt07R/o2FnoEN06CIlUIlGGRT8YRT8wI6ZgFJqblEM0F3sOXh7o6eH7fP26CPY+uw+vfS9zwxVFwX5av74bAJOGVisNrVYaWq00tFppaLXS0GqlodVKQ6uVhlYrDa1WPxKt+24AhmGYLMv5fF6SpPf3d0VRCIIwGAwURTU0NPz199+DrlQqNzc3iUQimUym0+lMJlNFl8vlKtpoNFIUZTKZWltbrVar3W7v6elpbm6uXo6r+bFGUZRYLHZycsJx3PX1dTwet1gs7e3tZrOZpmmSJPV6ffWWisViPp/PZrOPj4/JZJJhmN7eXqfT6Xa7vV6vSuhcLre7u7u3t3d4eChJ0sDAQH9/f3d3t81ms1gsZrMZx/HPV4mimE6neZ6/u7u7urriOE4QBJZl646WJCkQCGxtbe3s7DidTp/P5/V63W43TdP/tSPL8vn5+fHxcTQaxZR6Fg6H5+bmSJIcGRlZXV2Nx+O1b2az2XqdtCiKa2tr6+vrBEHMzMxMTU3Z7XZk67Xf+ucuLi4WFhYwDJufnz84OEC+jx4diUTGx8c7OjpWVlYEQUC+ryBHB4NBlmVdLtfGxgba5T9DiQ6FQh6PZ2hoKBgMIpz9HDJ0NBr1+XyDg4Pb29uoNr8KDTqRSExPTzscjkAggGTw3yFAVyqVpaUlk8nk9/trXwMJAdrv9zc1NS0vL9c+BVitaI7jWJadnJzkeR4JCKRaXwI2Nzd5np+dnbVarSiedWDVcseRSMRmsy0uLqI6QsBqOulQKEQQxMTEBKIDBA0efXR0FA6Hx8bGhoeHEYJAgkfv7++/vb2Njo4i1AAGic5kMrFYjGVZ9Y8Zg0afnZ2dnp56PJ6vXpjrGiT68vKSoiiXy4VWAxgM+uPj4/b2tq+vz+FwIAeBBIN+eHi4v7/v6upqbGxEDgIJBp1KpVKpVGdnJ3INYDDop6enUqnU1taGXAMYDPrl5YVhGIZhkGsAg0G/vr5SFEVRFGoMaDDoYrFoMBiMRiNyDWAwaFmWdTqdTvdtn4lh0DiOV/8iItcABoPW6/XlcrlUKiHXAAaDJkmyUCgUCgXkGsBg0DRNi6IoCAJyDWAw6JaWllwu9/z8jFwD2G9A0XSFkHGobgAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAADl0lEQVR4nO3Yz0siUQDA8RQrpoZCQedQBo2BKI4kTaAJWgSZEF26FEIRdAj6AzwWXSKIgrn1D6SH6BeCWJRSRmhQRkU/PFSWCDlmIYqO5uxhYdmFTZ/VvGVhvseZ4c1nHu/w5glYlq363xL+a8Bn4tGw4tGw4tGw4tGw4tGw4tGw4tGw4tGw4tGw4tGw4tGw4tGw4tGwEnH9Apqmw+Hww8NDNBqNx+Nvb2+ZTKZQKAgEgtraWhRFJRIJhmFyuRzHcaVSKRSWn0cBRweQ19fXwWAwFApdXl7e3t7e3983NTXJZLLGxsb6+nqRSMSybC6XS6VSLy8vsVisWCwqFAqVSkUQREdHh16vb2hogIRmGGZ7e9vn8/n9/kAgoNPpCIJQKpU4jjc3N8tkMrFYXFdXV11dzbJsNptNpVKJRCIWi0UikXA4fHV1dXZ2VigUDAaDyWTq7e0lCIJD9Pv7+/r6usvl8ng8EonEZDIZDAaSJNVqNfggr6+vp6enwWDw8PDQ6/VqNBqr1To4ONje3v7Hc+x35PV6JycnpVKpXq+fnZ09Ojr64oDxeNzpdI6Pj0ul0q6ursXFxVgs9uvuV9GJRGJhYUGr1arV6unp6VAo9MUBf49hmNXV1ZGREaFQODw87Ha7f17/Evr4+HhiYqKqqmpsbGxnZ+c7nH+JpmmKokiS1Gg0FEXl8/nPo10uV19fn0KhmJ+fp2n6G5V/zefz2Ww2FEXtdvsn0Q6HgyRJo9HodDq/F1eiSCRit9sRBPkMemVlhSCI/v5+7pbER+Vyubm5uYrRGxsbOp3OarXu7+9zwQKpMrTf7+/p6TGbzXt7exyBQKoA/fj4aLPZ1Gr12toadyCQKkDPzMwgCEJRFHcawEDRm5ubOI5PTU1lMhlOQSAB7adpmnY4HBiGjY6OIghSwY6Eo0C+bHl5WSQSLS0tcTyDoJWf6Wg0urW1ZbFYhoaGIEwiSOXRHo/H7XYPDAzI5XIIIJDKoHO53O7ubnd3t8VigQMCqQza7/cfHByYzebW1lY4IJDKoAOBQLFYNBqNcDSAlUJns9mTkxOSJDs7O6GBQCqFvri4OD8/12q1JX6M/0ml0Dc3N+FwWKVSQdMAVgp9d3eH43hbWxs0DWCl0E9PT3K5vKWlBZoGsA/R+Xz++fkZwzAMw2CCQPoQnUwmk8mkWCyGqQHsQ3Q6nU6n0yiKwtQAVmp5MAxTU1MDUwPYf3k+/QPJbtT6sfbhTgAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAAEHUlEQVR4nO3Yy0sqXwDA8bnjz0sufFQDmopapEao1IgGgUSIswhNbVerQkH/Eh8rhUJXunOR62xTIT4wQiTTQCIrlXyUj9AWrtK76HJ/3W6llc3cC/PdOZzjfBzGM4f51u12gX8tEGvAR8LRaIWj0QpHoxWORiscjVY4Gq1wNFrhaLTC0WiFo9EKR6MVjkYrHI1W/yT6vwF+193dXblcrtfr9/f37Xb74eGBQCAMDQ1RKJTR0VEGgzEyMjKQE30Wnc/nU6
lUJpPJZrP5fL5cLtdqtVar1W63O50OCIIkEolCoUAQNDY2xuFwJicnp6enJRIJl8v98Em/fexVb61WC4VCh4eH8Xg8mUxSKBQ+n8/j8ZhMJgRBVCqVRCKBINjpdNrtdrPZrNVqpVIpl8udn583m82ZmRmZTDY/P7+wsABB0Jejr66udnd39/b2gsEgl8uVy+UwDItEIoFAwGAw3p57c3NzdnZ2enp6fHx8dHSUy+UWFxdVKtXS0tLExMQ7EN2+q1arbrcbQRAikahWqx0ORyKR6H/6sxKJhMPh0Gg0RCIRQRCXy1WtVvuc2y86EAisra0RCAStVuvxeMrl8ke1v1WpVLxer06nA0FwdXV1Z2enn1m90Y1Gw263C4VCmUzmcDiur68/TX1esVh0Op1yuVwoFNpstnq9/vb4Huh0Om02mwEAMBgM0Wh0cM4XikajRqMRAACTyZRKpd4Y+RY6Eono9XoWi2WxWHr++oHUaDSsViubzdbr9eFw+LVhr6IPDg4QBBGLxR6P52uEr+b1eiUSiUql2t/ff3HAy+hIJIIgCAzD29vbX8l7Nb/fL5VKVSrVi9f7BXQ6nV5ZWRGLxViJH/P7/RKJRKfT/Xl/P0c3Gg2z2cxisdC/K/7M6/Wy2WyTyfTsH/UcbbfbAQCwWCwo2t7KarUCAGCz2Z4e/A0dCASmpqYMBgM6a0U/NRoNo9EoEAiePnf+30/X63Wfz0cmk9fX1we1h/x8w8PDGxsbNBrN5/NVq9WfR3/x3W43gUBwOBwYXM9eOZ1OEARdLtfjx5/oy8tLBEG0Wu1XPKU/X7FY1Ol0KpXq4uKi+wu9tbVFJBL/hhXjtbxe7/fv3zc3N7uP6FqtptVq1Wr1oPZuX1GlUtFoNMvLy7e3tyAAAKFQKBgMKpXKnrt4DKPT6UqlMhgMhsNhEACAWCzG4XAUCgXWsB4pFIrx8fFYLAYWCoV4PD43NyeVSrFW9QiGYblcHo/HwZOTk2QyOTs7izWpr2AYTiaTYCaTIZPJIpEIa09fiUQiGo0GZrNZPp8vFAqx9vSVQCDg8/lgoVDg8Xh/87rxNDqdzuVywVKpxGQysca8IxaLBdbr9Q+848EwCILAVqtFpVKxlrwjCoUCttttEomEteQdkUikH3HlhBNOv3EuAAAAAElFTkSuQmCC)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAADsklEQVR4nO3Yy0syXQDH8TNWZhehTMui0rKiNDRJLFBoaKEYLoI2JURBiwhaB62KFi1yY7jpHwiDFoZBF8oEDcTAbirkQqMyKi2NLlpOdJ7FA+/78qCTT9k0L8x3O79z+CCCgwiEEPzfov004DNRaKKi0ERFoYmKQhMVhSYqCk1UFJqoKDRRUWiiotBERaGJikITFYUmqtwMdxBCv98fDAYvLi5ubm6i0ejj42MymXx/f8/LyysoKCgpKeFwOFVVVTwer7Gxkc1m/xj64eHB5XK53e7j4+OTk5NAIIAgCJfLZbFYTCYzPz+fRqNhGBaPx+/v7yORyOXlJZ/Pb2pqEolEbW1tcrm8ubk562gk3b+mXq/XarXa7Xan00mj0SQSiVAobGho4PF4lZWVZWVlf6BjsVg4HA6FQsFg0O/3ezyeg4ODjo4OpVKJoqhKpaLT6d+IPjo6slgsa2trXq8XRVGFQiGXy6VSaWlpaeb3+nw+t9vtdDrtdns0GlWr1Vqttre3Nzc30y8kXvA/XV9fGwwGhULBZrOHh4dNJlM4HIZfy+l0zszMdHZ2cjic0dHRnZ2dL14IIfwXvbGxMTAwgCBIf3//8vLy6+vr12//p8PDw6mpKaFQKBaL9Xr93d3dV24DEMK3tzej0SgWi9vb2+fn5yORSJaof7a1tTU0NAQAGBkZ2dvb+/Q94OrqanJykslk6nQ6m82WPWHqbm9v5+bmBAKBSqVaXV393CVgbGyMwWBMTEycnZ1l14fT0tKSUqmUyWQmk+kTxwGLxZqdnU0kElmX4be9va3RaFpbWxcXF//2LNDr9d9hyiSHw9HT0yOVSs1m818dBB9PvjObzYaiKIqiDocj81M/jIYQms1mkUik0+nOz88zPPLzaAih0WgsLCycnp7OcE8KdCKRGB8fr6urW1lZyWRPivdpBoMxODjI5XJNJlMkEvn4wHd/iplnMBhycnIWFhY+XJLik/5dX1+fRqOxWCyhUAh/SSJ0dXW1VqtdX1/f3NzEX5IIDQBQq9Xd3d1Wq/Xl5QVnRi40n8/v6upyOBy7u7s4M3KhAQAKhQIA4HK5cDakQ8vlcplMtr+/H4/H021Ihy4uLpZIJB6Px+fzpduQDg0AaGlpCQQCfr8/3YCMaIFAUF9ff3p6mm5ARnRtbW1NTQ3OTwwZ0eXl5RUVFeFwOJlMphyQEQ0AYLFYsVgsFoulfEpSdFFR0dPT0/Pzc8qnJEXT6XQMwzAMS/mUpGgEQX6/haZ8+gtxeMiyF7503gAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAADcklEQVR4nO3YwUv6fBwH8O1pK8kVajUvBjkCC3KHLMhwjsgIDALrGkRF97r3B3SMjnX05KEOJYEFNWgdshoFWZFiYlQTma0yyRZ8f4fneeD58XueH8+m7kuw91Xeb17uIO6DAgCQ75Y/YAO0xEDrFQOtVwy0XjHQesVA6xUDrVcMtF4x0HrFQOsVA61XviUa+81nHx8fsiyXSiVFUVAUra+vJwjCarXW1dXp5vvX/ISWZTmRSCSTybu7u4eHh3w+L8vy+/v7P9E2m81utzscDoqiXC5XT08Phv3um9ciKADg+fn56OgoHo8LgnB5eSmKotPpdDgcJElarVaz2YzjOACgXC4Xi8VCoSCK4v39fTab7erqomm6t7d3YGDA5/OhKKoTenV1dX9///Dw0Gq1ejwet9vd3d1NUVR7e3tLS8uvBQDA09NTNptNpVJXV1cXFxenp6cEQbAsOzw8PDo62traWnO0yWQKBAIsyw4ODvb39+M4rqovSdLJyQnP8xzHCYIQDAbHx8cnJiYIgqiRGEEQJBwO53I5UHGur69XVlYCgQBBEDMzM7FYrPLN/woKqnqATCaTGxsbkUhEUZTp6em5uTmbzVbF/b9SiyfBcdzs7CyCIPPz84IgVH2/JmgAgCRJy8vLHR0dY2Nje3t71R2vFfrPhMPhvr4+hmG2traqOFtbNABge3vb7/d7vd5oNFqtzZqjAQA7Ozs+n29oaIjjuKoM6oEGAGxubtI0PTk5eXNzU/maTmgAwPr6eltb2+LioqIoFU7phwYALC0tNTc3r62tVbijKzqdTodCIYZhzs7OKtnR9SXA6XROTU2l0+lIJFLRULWe4v/PwsJCZ2fn7u6u5gUIr1uhUAjH8Wg0qnkBAtrv9weDwVgsxvO8tgU4L7YjIyPFYvHg4EBbHQ6aZVmGYXieF0VRQx0O2mQyeb3
e4+PjeDyuoQ7t7uHxeGw22/n5uYYuNDRN0263O5FIfH19qe1CQzc1Nblcrtvb21QqpbYL8yxGUVQmk8lkMmqLMNEOh+Pz8/Px8VFtESaaJEmSJPP5vNoiTLTFYrFYLC8vL2qLMNFms7mxsbFUKqktwkRjGIZh2Hf6yUP+/les4dYKE10ul8vlckNDg9oiTPTr6+vb25uG+ypMtCRJhUJBw4USJloUxVwuZ7fb1RZ/AHQIZcSc+H5eAAAAAElFTkSuQmCC)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAADhUlEQVR4nO3ZwUsqWwDH8ebWoJmZjViWlLMwKSYqo7SYJhgIXOQiKmjXImgfbfsDjP4Ed9GiTQRJWCFBk4shYgg0KBfpwlmkNY6VU9hMnLvw8bhw3+Pe25w5ccHfH/Dlw6zOmYMBABr+tn37asBnVkejWh2NanU0qtXRqFZHo1odjWp1NKr9legmnudFUby/v5dl+eXlpVqtAgCamposFktbW5vT6ezu7iZJ0uv1NjY2frX2n2EejyefzzscDofD0draajKZMAzTNE1RlKenp2KxaDabSZL0+XwURY2MjAQCgc7Ozi9GRyKR3t7erq6uGtpsNmMYpqrq6+truVwuFAqiKOZyuUwmk06ny+VyMBicmppiWZam6S9D/+Yd8fn5OZVKCYLA83wymbTZbKFQKBwOz8zMGE38eb+L/ndvb28cxyUSiXg8/vHxMT8/v7S05Pf7DfL998Bnd3Z2tra25na7GYaJRqOqqn469af7PLq2vb29ubk5m822vr6eyWSgmH45vWgAwN3d3cbGhtPpXFxc5DhOf/CXg4CuLRqNDg0NsSx7eHgIq/l/g4YGAOzv79M0PTk5GYvFIGZ/Hkw0ACAej09PTzMMk0gk4JZ/HGQ0ACAWi42Njc3Ozl5dXUGP1wYfDQDY2dkhSXJ1dVWSJCP6hqABAJubmw0NDVtbW0bEjUJLkrSyskJR1MnJCfS4UedpgiCWl5dxHN/d3a1UKpDr0D/Dj4tEIlardXt7G27W2JvLwsLCxMTEwcHB4+MjxKyx6L6+vnA4fHR0dHx8DDFr+B0xFAqNjo6enp4CeI87hqP7+/tZlj0/P08mk7CaKG7jNE1XKpWLiwtYQRToQCAwPj4uCIKqqlCCKNAEQQwPD6fT6evrayhBRD9rBgYGbm5uMpkMlBoitNfrJUkym81CqSFCezyenp4eURSh1BChXS6Xy+UqFAqapumvIUJjGEYQRKlUkmVZfw3dX1Or1aooCpQTHzq0yWSqVqvv7+/6U+jQGIbVDpb6U+jQqqriOI7juP4UOrSiKC0tLRaLRX8KHVqWZbvdbrfb9acQoUulUrFYdDqdzc3N+muI0Pl8XhRFt9sNpYYInc1mc7kcSZJQaojQt7e3HR0dPp8PSg0FWtO0VCo1ODhIURSUIAr05eWlIAh+v7+9vR1KEAWa53lJkoLBIKyg4eiHhweO4xiGgfju+B0MqGqH3LSMEAAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAAD7klEQVR4nO3YS0h6aRzGcRWzTCGzi4G1OYaSdBDrgEphVHTRQKiVUOCmXdvAZbQriAI3bVp3IYoKg0JQC4XSoAuEkV21UEi7WEh6lPe/cBiaasqp35sM9Oz9+lEO6PsyEUKM/9tYuQZ8Zb/on9ov+qf2i/6psc/Pz2maZjAYHA6Hx+MJBIK8vLxcqz4Zk6Kol+ji4mKRSCQWiwmCkEqltbW1BQUFuUa+HnNwcJDD4TAYjGQy+fT0dHd3Fw6Hr66uTk9PpVIpSZJ1dXUqlaqxsTE/Pz/X2r/GfPszHg6Hg8HgycmJz+c7ODjY2dlhsVharbalpaWjo0MsFucE+nLvoF8uFot5vV632+10Ojc2NnQ6ncFg6OnpKS0t/THiO0PZ7ezsbHJysquri81mG43G5eXlLF+IY9miMwsEAhMTE2q1miCI4eHhYDCIifXx/hs6s62trYGBAS6X29vb63K5wE2f7itohFA8HrdYLHK5vLm5eWlpCdb06b6IzmxxcbGpqUmpVE5PT0OBstm30Aghu92u0+lIkpyZmQEBZbPvohFCm5ubnZ2dFEVZrdbv17IZABohZLPZGhoa2tvbvV4vSPDjwaARQrOzsxKJpL+/PxqNQjX/bWBohNDo6CiDwRgbGwNsvjtIdCQSMZlMCoXC4XAAZt8O8hBQUlLS19dH0/Tc3Fw6nQYsvx741zA0NFRWVjY/Pw9e/nvwx63u7m6JRGK1WpPJJHg8M3i0QqHQ6/Xr6+s2mw08nhmWg21bW5tQKHQ4HDjiDExotVqt1WpdLtfR0RGOPq4rBI1Gs7297fF4cMRxoSmKUiqVe3t7OOK40HK5nCTJw8PDSCQCHsd4wySTyY6Pj/1+P3gZI5ogiIuLi8vLS/AyRnRlZaVYLL6+vgYvY0SLRKLy8vKbmxvwMka0QCAQCAQPDw/gZYxoHo9XWFgYj8fByxjRbDabzWanUinwMkZ05m8kk8kEL2NEJxKJRCKB44IYIzoWiz0+PvL5fPAyRnQ0Gr29vRUKheBljOhQKBQOh0UiEXgZIzoQCKTT6aqqKvAyRrTf75dIJARBgJdxoe/v730+X01NjUwmA4/jQu/u7u7v75MkyWLBvwUutMfjSaVS9fX1OOJY0JFIxO12azQalUqFo48FbbfbnU6nVqstKirC0YdH0zS9trYml8tbW1vB45nBo1dWVlZXV/V6PUmS4PHMgNHRaHRhYaG6utpgMMCW/zHY+0yLxcJiscbHx2GzrwaJdjqdFEUZjcZQKASYfTuwxyMYDE5NTT0/P5tMpoqKCqjs+wP56IlEwmw28/l8i8UCEvx4MOiRkREul2s2m2maBgl+vD/lRcKClvexfgAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAAEeElEQVR4nM2azUsyWxyAT2NELjIwA03UIgsLUdF0IYRGjJJUSrtqExToX+LHxtyUK7NNC11GCVEiZQVSaJgQZpSR+RFjagWudO5C6MZbV63O8b3PcmAenjn8mDMMpw0AQKFQqFQqjUZjMBgsFovL5fL5/NHRUZFIxOVywf+PNo/HU6lUyuVyqVQiCCKdTieTyUQi8fb2JhaL5XK5UqlUqVQ9PT1/O/Vf2kiS/ONSNpuNx+OxWCwSiYRCoYeHB7VajeO4TqcbGBj4K5V/Qtbl/PzcbrdPT0+3t7drtVqn00kQRP1bWkCD6BqZTMblcun1egqFsrCwsLu7izqrPk1F10ilUna7XS6XCwQCm832/PyMLqs+34iuEQwGl5eXAQAmk+ny8hJFU0O+HU2SZD6fN5vNbDZ7bm4uGAxCb2rIT6JruFwuoVCo1Wr9fj/EoGb4eTRJkh6PRyqVarXaF
q/3r6JJkvR4PEKhcG5urpXz/dtokiRdLhebzTaZTC17n0CIJknSbDYDAGw2GxRbQ+BE5/P55eVlgUDQmn0Hg/ItQKfTl5aWurq6tra2CIKA4qwHxAVYXV2lUChOpxOi80tgRqdSKb1er9Fobm9vIWo/A2c8arDZ7NnZ2UAg4PP5IGo/AzMaAKDT6bRa7f7+PtLJhhzNZDInJycDgcDR0RFc80cgRwMAxsfHeTze6ekpdPM78KNlMplCoTg7O7u/v4curwE/GgAglUovLi6i0SgKOUAULRQKaTTa1dUVCjlAFD08PDw0NHRzc4NCDhBFM5nM/v7++/v7QqGAwo8kGgDQ19eXyWQymQwKOapoBoNBEEQ+n0chRxXd3d398vLy+vqKQo4qmkqllsvlcrmMQo4qGsOwarVaqVSQyFFIAQDVahXDMAqFgkKOKrpcLlOp1M7OThRyVNGlUolGo9FoNBRyVNEEQTAYDES/4lFFp9NpFovFZDJRyJFE53K5ZDLJ5XLpdDoKP5LoeDyeSCT4fD4KOUAUHYvFSqXSyMgICjlAFB2JRCQSiVgsRiEHKKLD4XAoFJLL5TweD7q8BvzoYDCYTCaVSiV08zuQo3O5nN/vn5iYUKlUcM0fgRzt8/n29vZwHGcwGHDNH4EZnU6nt7e3VSqVTqeDqP0CiP8FHQ4HhmHr6+sQnV8CLfr4+FihUMzPzz89PcFy/hdwxqNQKGxubhaLxcXFxd7eXijOekB5dIvFAgCwWq1QbA2BEL2xscHhcIxGYz6f/72tGX4b7fV6RSKRwWCIRqNQgprhV9Fer1cmk+E4fnh4CCuoGX4e7Xa7RSIRjuMHBwcQg5rhJ9HPz88Wi4XD4RgMhhavcY1vR5+cnKysrAAAjEZjK+f4I9+Ifnx8dDgcCoVieHjYarW27F3xmaais9ms2+02GAwYhs3Pz+/s7KDOqk+D6HA47HA4ZmZmOjo6cBxfW1trwS7dkC/O5eVyuevr6/dzeXd3d2q1WqPRTE1NDQ4OIt+im6DN6/VWq9XPJyCLxaJEIhkbG6udgGzFF0XT/APoU9W4mP8pNgAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAAEAklEQVR4nO2ZS0uqWxiAl6bfV5ZgonRBzLAb2sUSlFLCEBSjGtTAcBQURNBYaNSsQU4SJw0bFDUzahBhNxRSJCmTEEGNtCCtLEgUNfIM3Bw60TbP7l27c2A/P+Dh4WUtWLwL5b+MyWRis9kzMzO3t7dft5UCArEsLCyQJDk3N/fy8gIiLA5MdDqdNhqNTCbTYrGACIsDE53P56+urgwGQ2dn5+7uLpTzZ1AREHw+f2pqik6nr6ysxGIxKO3HwM7AbDZTKJSlpSVY7TuAo+Px+Pj4uEKhODs7gzW/Bex4FOByuWNjY4FAYGtrC9b8D8DHkMlkJiYment7fT4fuLwA8KQRQgRBaLXa8/Pz/f19cHkB+GiEkFqtHhgYsNvtz8/POPxYorlcrkKhcDqdLpcLhx9LNEJIJpNRqVSPx4NDjiu6u7u7q6urcBfB5biiq6urRSKR3+8PBALgclzRCKGmpqZQKBQOh8HNGKP5fD6FQolGo+BmjNF1dXW1tbU4Hk8YozkcDpvNTiQS4GaM0Uwmk8lkJpNJcDPGaJIkSZLMZDLgZozRVCqVSqW+vr7Cm8GNf5PL5XK5HJ1OBzdjjE6lUqlUqqKiAtyMMfrx8fHp6YnFYoGbMUbH4/G7uzsulwtuxhh9fX19c3NTX18PbsYYHQ6HBQJBQ0MDuBljdCAQaGlpaW5uBjfjir64uPD5fGKxmMPhgMtxRXs8ntPTU4lEgkOOK9rpdMrlcplMhkOOJdrlctntdqVS2dbWhsOPJdpmsyUSCZVKhUOOcER7vd6dnR2tVqvRaMDlBeCjrVZrMBgcGhoiCAJc/gPYLZvNZhOJRNPT07lcDtb8FshJPzw8rK6u0mg0vV5Po9EAze8BHMDi4iJCyGQyATo/BCx6Y2NDKBROTk7e399DOX8GTPTe3p5SqdRoNG63G0RYHIBoh8Oh0+mkUun29vbXbaXw1ejDw8PBwcH29vb19XWQoFL4UrTValWpVBKJZG1tDSqoFH4xOp1OWywWsVisUqmsVito0uf8SrTb7Z6dnWUwGAaDweFwgDd9yr+LjkajZrO5r69PIBDMz89HIhFMWcUpNfry8nJ5eXl4eLisrEyv129ubmLNKg4lX/R7IZlMut3u4+Pjo6Ojg4MDnU43MjIyOjqKYzFQOh9Ex+PxSCQSCoX8fr/X6z05Ocnn8/39/Wq1WqvV8ni8bwl9C8VoNBbekNlsNplMJhKJWCwWjUbD4bBQKOzo6Ojp6ZHL5Uqlsry8/Ltrf0CRSqXZbBYhRBBEVVUVi8Wqqanh8XiNjY2tra1isZjBYHx35HsowWAwl8shhAiCqKysZLFYJEl+d9UnfHIR/5tg3DDh40/07+JP9O/ifxn9F39IDqMJ8h8PAAAAAElFTkSuQmCC)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAADiUlEQVR4nO3YzUsqexzH8ZmbjGFTMZpaYSlGJVQiJj0xJtKDUBFUi9q2aF/7/gBp2bJ2uZYgQwghRkQKEaOkMAyDscgKp4FGUSaas+hyOefce8+5nfnO79yFnz/gzYuBmfnNYJLCCwQC9fX129vbgM0/MIU3PT09OTl5fHz88vIC1VQcbTAYPB5PLBaLx+NQTcXRGIaNjY1RFJVIJKCCKNAul2twcPDs7IzneZAgCjRBEAMDA+l0+vLyEiSIAo1hmM1me3h4yGazIDVE6K6uLqvVent7C1JDhO7o6DCZTPf39yA1RGidTmcwGJ6fnyuVivwaIjSGYRRF8TwP8gBBh9ZoNKVSqVwuy0+hQxMEIYqiKIryU+jQkiThOI7juPwUOnS1WlWr1QRByE+hQwuC0NDQQJKk/BQi9Pv7O8dxWq2Woij5NUToQqFQKBSMRmNdXZ38GiI0y7L5fN5kMoHUEKFvbm5YlrVarSA1ROirqyubzdbb2wtSQ4EuFovn5+d2u72/vx8kiAKdSCSSyaTT6VSpVCBBFOh4PE6S5MjICFRQcXQmk2EYxuPx0DQN1VQcfXR0lEqlJiYmQE4dH1MWnc1mDw8PZ2ZmfD4fYFZZdDAYPD09nZ+fb2lpgewC/mL7bgzDOByO1dXV19dX2LJSV5rjuL29PVEUV1ZWQE523wz2Gvw1v9+PYdjW1pYScUXQgUDAYrGsra0Vi0Ul+vDoUCjkcrlmZ2dTqRR4/GPA6HA4PD4+7na7I5EIbPnrQaL39/dpmh4dHT04OADM/n1g6N3dXbvd7vV6Q6EQVPPfBoDO5XKbm5t6vX5paYlhGPnBn04uOhgMLiwsNDU1bWxsZDIZENNP9+voaDS6vr5uMpncbvfOzo4oioCsHw+XJOlTL6NKpRKNRiORSDgcFkVxcX
FxeXnZ6XQCv/N+uP/6KSEIwsXFRTKZPDk5icViJEn6fL65ubmpqSlFff843O/3m83mtrY2nU7X2NioVqtxHH97eyuVSjzPPz093d3d5XK56+vrdDrNcdzw8DBN016vF/BQ/2l0Z2cny7JGo1Gr1X6NLpfLH2iCICwWS09PT19fn8PhGBoaam1t/V3cP9HxeDyfzz8+PnIcJwhCtVqVJEmlUmk0mubmZr1e397ebjabu7u7oT5L5e/TN+L/Yej+mgKuhka1GhrVamhUq6FRrYZGtRoa1WpoVPsC5jxpoAGNgL4AAAAASUVORK5CYII=)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAADc0lEQVR4nO3Yz0vyYAAH8G1kVi5JV0llzAb9ohJCK8o0JKGiIIigY1H0D3Tp0D06evaal6AiJIQQwpEVGCL9QqwoQ6NaNLViUaP2Xt4fEPTSM+eey743D98vnz2K4oMI8LK4uEiS5N7eHmgRQ+CFJMlkMplKpUCLMNFVVVUEQdzf34MWYaIJgiAIgmVZ0CJMdGlpKY7jz8/PoEWYaLVarVar39/fQYsw0RiGYRj2+fkJXMyH5ofheZ7neZVKBVqEieY4juO44uJi0CJMdDqdzmQyZWVloEWYaIZhHh4eysvLQYsw0alUqqioqLq6GrQIE311dWUymUwmE2gRGvrp6Skejzc0NNTX14N2oaGPjo6Oj49bWlowDNgADR2JRLLZbHt7u4guHPTr6+v+/n5nZ2dHR4eIOhw0TdM7Ozu9vb0Gg0FEHQ46EAhotVqn0ymuDgFN07Tf7x8YGLDZbOIWIKA3NjY+Pj5GRkbET+TjH+t/srq6WlNTMz8/n8uIrCd9eXnp9XopipqYmMhpSKoj/EkWFha0Wq3H48lxRz60x+OpqKiYm5vjeT7HKZnQ6+vrZrN5fHw8Ho/nviYH2u/322w2p9NJ07Qkg3lH+3w+h8PR3d29ubkp1WZ+0cvLy1ar1W63+3w+CWfzhX58fFxaWiJJcnh4OBAISDueF3QwGJyenkYQZHZ2NhqNSr6PCoIgzS8HgiAIcn5+vra2trKywvP85OTkzMyMXq+XcP93vF4vwzC5P30sFnO73S6XS6PRTE1NbW1t5b75XdCSkhKXy9XX19fT02O1WgsKCoCemWXZcDi8u7sbDAYjkcjQ0NDo6OjY2BiO49If8J+gbrd7e3s7FAoRBGGxWMxmc1NTE0VRtbW1372zt7e319fXFxcXsVjs8PDw4OAAx3GHw9Hf3z84OCjiHgMYLQgCy7KhUCgcDkej0ZOTE4Zh6urqjEZjZWWlTqfTaDQqlUoQhLe3t5eXF5Zl7+7ukslkIpFobm5ua2uzWCxdXV12ux1F0Xxz/6H/vkin06enp2dnZ4lE4ubmhmGYTCbDcRzP8yiKFhYW4jiu1+sNBoPRaKQoqrGxsbW1VcQNopToL+E4LpvNfkHrdDrQz73kkfgrT57AvBYTHQUtVxS0XFHQckVByxUFLVcUtFxR0HJFQcsVBS1XFLRc+QWmW2isuFqIagAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAADt0lEQVR4nO3Y3yt7fxzA8WOzMcvPmd9pwmimWexHtjClIS7cbVFKij9AcSeuKJT9C0pTSFrEwpwwmmWsxi52Lnb82sxpNOSQ873w7fPp82lnxmfO5/2p87x9v97vHu3ivN8NIoBsYGBAJpMhCBJ2lQEBGZvNxnH85eUl7Cqg6MgBisZxnM1ms9nssKuAokOhEJfL5XK5YVcBRWMYlp6enpaWFnYVRPT19bXP58vOzmaxWGEHQER7vV4URQsKCsgGQER7PB4EQYqKisgGQESfnp6WlZUJhUKyAeDQ9/f3x8fHYrFYLBaTzQCHttlsdru9uro6MTGRbAY49O7uLoPBUCgUEWbAQiMIYrFY6urq1Gp1hDGw0Ovr69vb242NjWQX+HsAoVEUNZlMra2tWq028iRA6IWFhbW1tfb29vz8/A9GKX7dk7W/v69UKnU63c3NzYfDQPzST09PMzMzfr9fr9dnZmZ+vIGCX/HDDAYDh8MZGRmJcv7voxcXF0UiUWdnJ4qiUW75y+jNzc36+nqNRrOzsxP9LmhycvL7TJGDYbilpUUqlS4tLX1qI8Tj8cbGxp6fn79JRpbZbG5ubq6srJydnf3sXqivr4/D4QwNDXm93u/Ahc1oNKpUqpqaGqPR+IXt0OXl5eDgIJfL7erqslgsMff9ViAQGB8fLy4u1mq1JpPpa4dABEHgOD49PS0Wi2UymcFgCAQCMXX+zGw2d3d3QxDU29trs9m+fM7Pr8fq6qpOp2MwGHq9fn5+HsfxWDj/z+FwDA8Pi0QiiUQyMTFxe3v7J6f98sm7urqampqqra3Nysrq6emZm5uL5lKNnNVqHR0dVSqVfD6/v79/a2vrDw8kCCKOIIjf7kiHw7G8vLyysuJyuRoaGlQqlVwul0qlZP9ChM3lch0eHlqtVhiGMQzTarVtbW0dHR1MJvMT9ztJYdDvOZ3OjY0NGIb39vZYLJZEIhGJRCUlJYWFhbm5uTweLyUlJSEhIS4u7vX19eHhIRgM+ny+8/NzBEHcbrfT6Tw6OlIoFGq1WqPRNDU1RX4ixwb93t3d3cHBgd1uPzk5OTs783g8TCYzJycnIyMjOTn5B/rx8TEYDPr9/ouLC4FAIBQKKyoqqqqq5HJ5eXl5rKzRon/09vbmdrsRBEFR1OfzYRgWCoXer6T4+PikpKTU1FQ+n5+XlycQCEpLS6N6rH03GqiAeE9/NhpNVTSaqmg0VdFoqqLRVEWjqYpGUxWNpioaTVU0mqpoNFXRaKr6J9H/AR9gyqp6kJWVAAAAAElFTkSuQmCC)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAAEP0lEQVR4nO3Wy0sqXwDA8eNIgYuSUIkS06A3Yhaa4KLHYhSih7kqWxblX1K6KTflynKZte0hVIZKQQ+zTIg0SnuMYo6mBbqxcxde+vXr2lW73ZkuzHfnOTOcz4yHmQE0Gm1lZQX+Oy0tLSEAgEwmA/6dMpkMwmAwUqkU2ZIiSqfTSHl5eSKRIFtSRIlEAmGxWNFolGxJEUWjUaS6uhrDMLIlRYRhGFJTUxMIBCKRCNmYggqFQoFAAKmrq/P7/RcXF2R7Csrn8/n9fqSlpeXx8dHr9ZLtKSiv1/v8/IyIRKLW1la32022p6DcbrdYLEYEAoFUKt3f3//+bpfLtb+/L5FIEACAXC6/vr52OBxkq/LkcDhub2/lcjkCAOjq6uru7rbZbN/5GRIKhWw2W3d3d1dXFwIA4HA4KIpardb19XWybR+2sbFhtVoVCgWLxQLZT6fLy0sURYeGhjAMI/cjLme3t7cDAwNKpfLq6gpCCF4n5ubmaDSawWAgz/Zh
MzMzdDrdaDRmf/6HjkQiw8PDMplsd3eXJFvunE6nVCrVaDTRaDQ7grxuGg6HMzo6Go/HzWZzPB4ncfu+Dcdxs9n89PQ0OjrKYrF+jr67LJ1OBwCYnp4m/IbmbmpqCgCg1+vfDr5H4zg+MTHB4/EWFxeJo32QyWTicrlarTYej78df4+GEHo8HpVKJRKJlpeXCdLlymKxCIVCtVp9dnb2bioHGkJot9tRFJVIJGS5LRZLe3u7Uql0Op2/zuZGQwi3trZQFBWJRMTvE5PJJBQKlUrl9vZ2zgM+REMI7Xa7SqXi8XjT09OxWOzvCP8XjuNTU1NcLletVue8x9l+h4YQejyeyclJAMD4+Pjffn47nc6xsTEAgFar/XUfvy0PGkKI47hOp2toaOjo6DAYDPf391/n/Nnd3d3s7KxUKm1qatLr9Xn/1fzobKurqyMjIwiCqFSqhYWFcDj8x1QIIQyFQiaTaXBwkE6nazSatbW1Qs4qFA0hjEQi8/PzKIqWlpb29/cbDAaXy/VZLTw6Opqdne3r6yspKVEoFEaj8fUtnTcahLCo9+rV1dX6+vrm5ubOzo5AIJDJZG1tbUKhsLGxsbKy8vfnhsNhn8/n9XqPj48PDg6CwWBPTw+Kor29vbW1tYUbikZne3h4cDgce3t7h4eHJycnTCazvr5eIBBUV1ez2Wwmk8lgMBAEeXl5SaVSiUQiGo1iGBYIBPx+fzKZFIvFUqlULpd3dnay2exiV/8k+rVgMHh6enp+fn55eXlzc4NhGI7jyWQylUq9vLwgCMJgMMrLy9lsdlVVFZ/Pr6ura25uFolEfD7/04v+KfptsVgsHA5n0el0OpPJ0Ol0BoNRVlbGYrGqqqoqKiq+ZKGvRBMWkv+Q7xeFJioKTVQUmqgoNFFRaKKi0ERFoYmKQhMVhSYqCk1UFJqoKDRRUWii+gGq6iBYtO3SrgAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAADr0lEQVR4nO3YTUsyWwDAcQ3ScBJMMhdNixSS0hnKWZQlJgiFLWqZuBJa9A2EVkKLFrUp/AZBSC0yKBdTYYJSNBDYIoxMMBFTm4xeKMYRz10Icbk8o6fS88Rl/suZw+E3h3k5jAQIdHt7azabvV6v0IC/WJtEIJ7neZ6XyWRCA/5igujfnCBaJpPJZLJyuYxSA5kgGsMwDMPe3t5QaiATRKtUqq6urqenJ5QayATRcrlcq9Xm8/lisYgSBFO9BxHH8Ww2m8lkkGkgq4fu7+9PpVKpVAqZBrJ66IGBAb1en0gkkGkgq4c2mUwEQVxeXv62d0g9tEKhMJvNFxcXDMMgA8HU4Is4OjparVZPT0/RaCBrgLZarTabLRKJpNNpJB6oGqA7OjocDkc4HKZpGg0IpsYbpunpaafTGQqFstksAhBMjdE4js/OzoZCod3dXQQgqGA23cVicX5+fnx8nGGYVm/wYYLaT2s0GpfLlcvlNjc3OY5r9To2Dv76fD6fQqHw+/2tW0LIvoDOZDJut9toNAaDwZZ5oPoCGgAQjUbtdrvdbj85OWmNB6qvoQEAwWBweHh4ZmYmGo22AgTTl9EAgK2tLZPJ5HQ6j4+Pmw6q38fHx8rKynfQAIBAIEBRlNVq3d7ebi6rTul02uv1yuXyb6IBAPv7+1NTU3q9fnV1lWXZJuL+WDgcdrvdSqVyaWnp+2gAAMMwCwsLEonE4/EcHR01y/efisXixsYGRVEkSfr9/kql8iM0AIBl2bW1NZIkh4aGfD5fPB5vCrQWx3E7Ozsul0sqlbrdbpqma8d/iq4VDocXFxc1Go3FYlleXj47O/vhhIVCIRAIeDye7u7uiYmJ9fX1QqHweVYKAGjKl7VSqezt7R0cHNA0rVarbTabxWKhKMpoNMJPUiqV4vE4wzCxWCwSiZAk6XQ65+bmSJL897CmoWuVy+XDw8NIJBKLxc7Pz0dGRgiCMBgMOp0Ox/Genh6VSoVhWHt7e7Va5Tju5eXl8fHx/v4+k8kkk8lEIlG7wSwWy+TkpMPh+OM1Nxn92fX1NcMw8Xj86urq5uYmnU739vZqNBqVSqVQKD7Rr6+vpVIpn88DAPR6/eDgIEEQFEWNjY0plUqhyVuF/oxl2WQyeXd3l8vlHh4enp+f39/feZ5va2uTy+WdnZ1qtVqr1fb19el0OoPBIJVKG87ZcnQr+n/9n/7NiWhUiWhUiWhUiWhUiWhUiWhUiWhUiWhUiWhUiWhUiWhUiWhU/QMA79mYpE8pmQAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAADh0lEQVR4nO3YTUsyXRjAcR9RDBtRkkwtG4MUeqE3zIjGMAha1M6Faxd9gDat+gAuWxctora9oGCUBNMg4UJmMQjpIls4i7RGBTVopjrPQri5uV8eTser6VnMf+25/HkW45mjQ58vk8l4PJ54PE6wFiS97vOJolgul2maJlgLEgm6UqkMDAy4XC5wDWYk6FqtZrfb7XY7uAYzEnSz2aQoymKxgGswI0HLsmwymUwmE7gGMxL0x8eHXq/X60nWgkTyxUajUVEURVHANZiRoM1m88vLS7vdBtdgRoK2Wq2NRqPRaEBjcCNB9/f3Pz09VatVcA1mJGi3220wGERRBNdgRoKmaXpkZKRUKoFrMCNB+3w+v99fLBZbrRY4CCcStMFgmJiYyOfzgiCAg3Ai/IOYmZmRJCmXy8FqMCNEz8/PLywsZLPZ19dXWBBOhGiXy8UwDMdxLMuCerAiPz+srKxQFJVOpwE1mJGjGYZZW1tLpVIcxwGCcOrqpLaxsaEoyvn5ORAGuy7fMbe3t4eGhk5OTkDeWDHr9kwcjUa9Xu/x8fHDwwPIJmLV/e/e29uzWCw7Ozvdj8IMAC3L8tbWlsPh2N/f734aTgBohFChUIhEIlNTU2dnZyAD/zsYNEKIZdlwOMwwzMXFBdTMvwWGRgglk8nFxcXl5eVkMgk49vcg0QihRCIRCoUCgcDR0RHs5J8DRiOE0un0+vq61+uNx+OSJIHPR1+BRgjxPL+5uanT6WKxGMuy4PP/QQh9xeO/VqsdHBwcHh4ajcZoNBqJRHw+H9h08G34uaurq1gsRlHU6urq7u7u3d1d9zMfHx+/aqd/1Gq1Tk9PE4lEKpWam5sLh8NLS0vBYPCzl66yLOdyudvb25ubmy9Hd3p+fr68vLy+vuY4rtlsBgKB6enpsbGx0dFRmqadTucfbwYlSSqXy/f394VCQRAEnufr9XooFFIJ3QkhlMlkstksz/OCIBQKheHhYY/H43Q6+/r6KIrq3MQqitJutxuNRrVaFUWxVCq53e7JycnZ2dlgMMgwjKroH729veXz+WKxWCqVRFGsVCr1er3VanXeOI1Go9lsttlsDodjcHDQ6/X6/f7x8XGbzdZZ/j3oX3p/f++gZVnW6XQGg6G3t9dqtfb09
Pzx8/8L9Gf7tovxbtLQaqWh1UpDq5WGVisNrVYaWq00tFppaLXS0GqlodXqXz44dGHF6aPIAAAAAElFTkSuQmCC)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAADhUlEQVR4nO2Yz0vyYBzAt3JGOkyHawcHkR3awUYwISy0IomKoA5dunXpHHTtH4hOnT0WHYLyICOQoowYkRUrkH5JZEVZxoZiWDLweQ9C8L696cv7frfeg5/7Pnz4wp7v9mAIlMXFRYvFsrKyAqv9hToMDkVRtre3g8Hg4OAgoPYzkNGSJO3t7fX29jqdTkDtZyCj4/E4TdPd3d2Azt8CFq2qqizLgiB4vV4o51eARZ+dnSUSiY6ODpPJBOX8CrDoq6urTCbDcRyUsAJg0alUyu12t7W1QQkrABb98PDgcrlYloUSVgAmulAoZDIZhmEoigIRVgYmOpfLZbNZu90OYqsK2KQLhYLVagWxVQUmWtM0TdMIggCxVQUmGsdxHMcRQiC2qsBEm81ms9lcLBZBbFWBiSZJkiTJ19dXEFtVYKIdDgdFUaqqgtiqAhNtMpkYhnl6ekqn0yDCyoBtRJZl7+/v7+7uoIQVAIt2u92pVCqZTEIJKwAW3d7eznHc+fk5lLACYNEej4fn+dPTUwNeR7BogiAEQTg6OorH41DOr4D8R+zq6rJarZIkATp/C2S03+8PBAKxWOzi4gJQ+xnIaBzHBwYGjo+Po9EooPYzkNEYhg0NDQ0PD4uiqOvZBxztdDrHxsb29/fX19dhzT8BftGWz+enpqY6OztjsRi4vAzwpDEMI0lycnJS07SlpSW9zmydhrGwsIBh2Pz8vB5yvaIVRZmenm5paVleXgaX6xWNEJJleXR01Ov1RiIRWLOO0Qihzc3N8sbZ2NgA1OobjRCKRCI+n6+npyccDkM5dY9GCImi2N/fz/N8KBQCERoRjRDa3d2dmJigaXpubu76+vofbQZFI4QuLy9nZ2dtNtv4+Pja2tq/qIyLRghpmhYKhfx+v8vlmpmZ+euVady10AeyLK+urobD4fr6+pGRkWAw2NfX19jY+OeGb4gus7W1JYpiNBrN5XKBQMDn8wmCwPO8zWar+uy3RZeRJGlnZ0eSpIODg6amJo/Hw3Fca2sry7LNzc0Oh8NisRAEUSqVisViPp9XFCWdTn9zdJnn5+fDw8OTk5NEIpFMJm9ubt7f32mattvtv0Srqqooyn8R/UGpVEomk7e3t4+Pjy8vL9ls9u3tTdO0urq6hoYGkiQpimIY5v+K/kPgv6cNoBZtFLVoo6hFG0Ut2ihq0UZRizaKWrRR/ACNtnaECXxszwAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAADwAAAA8CAIAAAC1nk4lAAAD5UlEQVR4nO3ayUtqfRzH8eN1wsQKM5JoEBsoK2xCK6M0CItatYmGVQhF/0Bt2xU0UEH7IMJNk0QSYouDlNIgKUlncYzQJCWHQCXt4O9ZBM8iuM9zb/2+1oX72f/evDyI4jkSKFsLBAKTk5MSiWRpaemTKQIL6H+XTqdnZmYEAsH8/Pzna1lCr66uCoXC2dnZVCr1+Vo20Gazub6+fnx83OfzYQn+IID3+Pi4ubkpEAgMBkNJSQmeKJaX/h9bXl5ms9nr6+sYm7Bop9PZ3t4+Ojr69PSEMQv79jCZTDRNDw0NFRQU4OxivADv5nK51Gr1xMTE6+sr3jLglbZarR6PR6/XczgcvGUo9PPzM0mSWq22p6cHexwK7XA4Tk9PNRqNRCLBHodCX15ecrlclUoFEQdBZzIZl8ulVCqbmpog+iBoiqJub28VCkV+fj5EHwTt9Xppmq6srISIE0Bon8/HZrPLy8sh4gQQOhgMSqVSqVQKESeA0JFIRCwWQ3zYvQ0EHY/HRSKRSCSCiBNA6FQqxefz+Xw+RJwAQiOEWCwWi8WCiBNAaA6HwzAMwzAQcQIInZOTk0wmE4kERJwAQufl5cVisVgsBhEngNCFhYWhUCgYDELECSB0cXHxw8OD3++HiBNAaJlMJpPJvF4vRJwAQldVVVVXV1MUBREngNASiaSurs7tdns8Hog+1C+XxsZGp9N5cXEBEYdCq1QqtVp9dnYGEYdC19TUdHZ2kiRpt9uxxwHve+h0ukgkYrFYsJcB0b29vXq9/ujo6Pr6Gm8ZEM3j8QYHB2ma3tvbw5zGe5ft3RiGmZqaUigUFosFYxb2rimbzR4eHuZyuVtbW+FwGFsX4wX42RYXFwmCWFhYwBXMBjocDhsMBrlcbjQasQSz9HTr/Pxcr9drNBosb+4soRFCh4eHra2tfX19JEl+MpU9NELIaDQ2NDT09/efnJx8ppNVNEJoe3u7ubm5u7t7Z2fnw5FsoxFC+/v7Op1OoVCsra0lk8kPFL4AjRCy2WxjY2MCgWB6etput//u8a9BI4T8fv/c3JxcLm9ra1tZWbm/v//1s1+GftvBwcHIyAiHwxkYGNjY2KBp+ldOsRBC2L5dP7RwOLy7u2symcxmc1dXl1ar7ejoUKlUubm5Pzvy9ei3BQKB4+Njq9VKkmQmk2lpaVEqlbW1tRUVFWVlZe9udX8X9NvS6bTNZnM4HFdXV263m6IouVxeWlpaVFQkFouFQiGPx2OxWN8L/e9eXl5ubm4oirq7u/P7/aFQKBqNxuPxtyfW3xT9bgzDRKPRRCLxJ6HfDfyfNRD7i87W/qKztT8S/Q9AvSg7LHfzLwAAAABJRU5ErkJggg==)Activations are normalized by the neuron’s maximum activation. 
We’ll examine activations by neuron in more depth when we explore synthetic stimuli.

Reproduce in a Notebook

Curve detectors have sparse activations, firing in response to only 10% of spatial positions (receptive-field sized crops of an image) across ImageNet, and usually activating weakly when they do fire. When they activate strongly, it’s in response to curves with similar orientation and curvature to their feature visualization.

![](./_next/static/images/image-96adbbbaeb9c2532d6e88407befe2c09.png)
The images from the dataset that activate 3b:379 all contain curves that are similar to its ideal curve.

---

It’s worth stepping back and reflecting on how surprising the existence of seemingly meaningful features like curve detectors is. There’s no explicit incentive for the network to form meaningful neurons.
It’s not like we optimized these neurons to be curve detectors! Rather, InceptionV1 is trained to classify images into categories many levels of abstraction removed from curves, and somehow curve detectors fell out of gradient descent.

Moreover, detecting curves across a wide variety of natural images is a difficult and arguably unsolved problem in classical computer vision. (This is our sense from trying to implement programmatic curve detection to compare against curve neurons. We found that practitioners generally had to choose between several algorithms, each with significant trade-offs such as robustness to different kinds of visual “noise” (for instance, texture), even in images much less complex than the natural images in ImageNet. For instance, [this answer on StackOverflow](https://stackoverflow.com/questions/8260338/detecting-curves-in-opencv) claims “The problem [of curve detection], in general, is a very challenging one and, except for toy examples, there are no good solutions.” Additionally, many classical curve detection algorithms are too slow to run in real-time, or often require intractable amounts of memory.) InceptionV1 seems to learn a flexible and general solution to this problem, implemented using five convolutional layers. We’ll see in the next article that the algorithm used is straightforward and understandable, and we’ve since reimplemented it by hand.

What exactly are we claiming when we say these neurons detect curves? We think part of the reason there is sometimes disagreement about whether neurons detect particular stimuli is that there are a variety of claims one may be making. It’s pretty easy to show that, empirically, when a curve detector fires strongly the stimulus is reliably a curve. But there are several other claims which might be more contentious:

* **Causality:** Curve detectors genuinely detect a curve feature, rather than another stimulus correlated with curves. We believe our feature visualization and visualizing attribution experiments establish a causal link, since “running it in reverse” produces a curve.
* **Generality:** Curve detectors respond to a wide variety of curve stimuli. They tolerate a wide range of radii and are largely invariant to cosmetic attributes like color, brightness, and texture. We believe that our experiments explicitly testing these invariances with synthetic stimuli are the most compelling evidence of this.
* **Purity:** Curve detectors are not polysemantic and they have no meaningful secondary function. Images that cause curve detectors to activate weakly, such as edges or angles, are a natural extension of the algorithm that InceptionV1 uses to implement curve detection. We believe our experiments classifying dataset examples at different activation magnitudes and visualizing their attributions show that any secondary function would need to be rare. In the next article, exploring the mechanics of the algorithm implementing curve detectors, we’ll provide further evidence for this claim.
* **Family:** Curve neurons collectively span all orientations of curves.

Feature Visualization
---------------------

[Feature visualization](https://distill.pub/2017/feature-visualization/) uses optimization to find the input to a neural network that maximizes a given objective. The objective we often use is to make the neuron fire as strongly as possible, but we’ll use other objectives throughout this article. One reason feature visualization is powerful is that it tells us about causality. Since the input starts with random noise and optimizes pixels rather than a generative prior, we can be confident that any property in the resulting image contributed to the objective.
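To make the procedure concrete, here is a minimal sketch of pixel-space feature visualization by gradient ascent. It assumes a PyTorch setting and a hypothetical `curve_unit_preact` function that returns the pre-ReLU activation of a single curve unit (such as 3b:379) at the center of its feature map; real feature visualization pipelines typically add regularizers such as transformation robustness and special image parameterizations, which this sketch omits.

```python
import torch

def feature_visualization(curve_unit_preact, steps=512, lr=0.05, size=224):
    # Start from low-amplitude random noise: any structure that emerges must
    # have been created because it increases the objective.
    img = (torch.randn(1, 3, size, size) * 0.01 + 0.5).requires_grad_(True)
    optimizer = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        objective = curve_unit_preact(img)   # scalar: how strongly the unit fires
        (-objective).backward()              # minimize the negative = gradient ascent
        optimizer.step()
        with torch.no_grad():
            img.clamp_(0.0, 1.0)             # keep pixels in a displayable range
    return img.detach()
```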
![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_342.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_388.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_324.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_340.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_330.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_349.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_406.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_385.jpg)![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_343.jpg)
Each neuron’s ideal curve, created with feature visualization, which uses optimization to find superstimuli.

Reading feature visualizations is a bit of a skill, and these images might feel disorienting if you haven’t spent much time with them before. The most important thing to take away is the curve shape. You may also notice bright, opposite-hue colors on each side of the curve: this reflects a preference for a change in color at the boundary of the curve. Finally, if you look carefully, you will notice small lines perpendicular to the boundary of the curve. We call this weak preference for small perpendicular lines “combing” and will discuss it later.

Every time we use feature visualization to make curve neurons fire as strongly as possible we get images of curves, even when we explicitly incentivize the creation of different kinds of images using a [diversity term](https://distill.pub/2017/feature-visualization/#diversity). This is strong evidence that curve detectors aren’t polysemantic in the sense we usually use the term: [roughly equally preferring](https://distill.pub/2020/circuits/zoom-in/#claim-1-polysemantic) different kinds of stimuli.

Feature visualization finds images that maximally cause a neuron to fire, but are these superstimuli representative of the neuron’s behavior? When we see a feature visualization, we often imagine that the neuron fires strongly for stimuli qualitatively similar to it, and gradually fires more weakly as the stimuli exhibit those visual features less. But one could imagine cases where the neuron’s behavior is completely different for non-extreme activations, or cases where it does fire weakly for messy versions of the extreme stimulus but also has a secondary class of stimulus to which it responds weakly.

If we want to understand how a neuron behaves in practice, there’s no substitute for simply looking at how it actually responds to images from the dataset.

Dataset Analysis
----------------

As we study the dataset we’ll focus on [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379) because some experiments required non-trivial manual labor. However, the core ideas in this section will apply to all curve detectors in 3b.

How often does [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379) fire? When it fires, how often does it fire strongly? And when it doesn’t fire, is it often strongly inhibited, or just on the verge of firing? We can answer these questions by visualizing the distribution of activations across the dataset.
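As a rough sketch of how one might collect such a distribution (the names `unit_preacts` and `loader` are hypothetical stand-ins, not the article’s actual tooling): run the network over dataset images, record the unit’s pre-ReLU activation at every spatial position, and look at how often it is positive.

```python
import numpy as np

def activation_distribution(unit_preacts, loader):
    """unit_preacts(batch) -> array of the unit's pre-ReLU activations,
    one value per spatial position of each image in the batch."""
    values = []
    for batch in loader:
        values.append(np.asarray(unit_preacts(batch)).reshape(-1))
    values = np.concatenate(values)
    firing_fraction = (values > 0).mean()   # fraction the ReLU lets through
    # For the tail check: on a log-probability plot an exponential distribution
    # appears as a straight line, i.e. log p(x) is roughly linear in x.
    hist, edges = np.histogram(values, bins=200, density=True)
    return values, firing_fraction, (hist, edges)
```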
When studying ReLU networks, we find it helpful to look at the distribution of pre-activation values. Since the ReLU just truncates the left-hand side, it’s easy to reason about the post-activation values, and the pre-activation distribution also shows us how close the neuron is to firing when it doesn’t. (Looking at pre-activation values also avoids the distribution having a Dirac delta peak at zero.) We find that [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379) has a pre-activation mean of about -200. It fires in just 11% of cases across the dataset, since negative values will be set to zero by the ReLU activation function.

If we look at a log plot of probability, we see that the activating regime follows an exponential distribution, corresponding to a straight line in the plot. (The observation that neural network activations generally follow an exponential distribution was first made to us by Brice Ménard, who observed it to be the case over all but the first layer of several networks. This is mildly surprising both because of how closely they seem to follow an exponential distribution, and also because one often expects linear combinations of random variables to form a Gaussian.) One consequence of this is that, since probability density decays as $e^{-x}$ rather than the $e^{-x^2}$ of a Gaussian, we should expect the distribution to have long tails.

![](./_next/static/images/exponential-pdf2-088c43850b29bc9780fe5c751bf169ee.jpg)
By looking at pre-ReLU values for 3b:379 activations, we see that both positive and negative values follow an exponential distribution. Since all negative values will be lifted to zero by the ReLU, 3b:379 activations are sparse, with only 11% of stimuli across the dataset causing activations.

To understand different parts of this distribution qualitatively, we can render a quilt of images by activation, randomly sampling images that cause [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379) to activate different amounts. The quilt shows a pattern. Images that cause the strongest activations have curves that are similar to the neuron’s feature visualization. Images that cause weakly positive activations are imperfect curves, either too flat, off-orientation, or with some other defect. Images that cause pre-ReLU activations near zero tend to be straight lines or images with no arcs, although some are of curves about 45 degrees off orientation. Finally, the images that cause the strongest negative activations have curves with an orientation more than 45 degrees away from the neuron’s ideal curve.
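The bucketing behind such a quilt can be sketched in a few lines; this is a minimal illustration, assuming a hypothetical iterable `crops_with_acts` of (crop, pre-ReLU activation) pairs, with a bucket width of 100 chosen simply to match the labels in the figure below.

```python
import random
from collections import defaultdict

def sample_quilt(crops_with_acts, bucket_width=100, per_bucket=6, seed=0):
    """Group (crop, pre-ReLU activation) pairs into activation buckets and
    randomly sample a few crops from each bucket."""
    buckets = defaultdict(list)
    for crop, act in crops_with_acts:
        # e.g. an activation of -143 falls into the bucket starting at -200
        buckets[int(act // bucket_width) * bucket_width].append(crop)
    rng = random.Random(seed)
    return {label: rng.sample(crops, min(per_bucket, len(crops)))
            for label, crops in sorted(buckets.items())}
```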
[Figure: Random Dataset Images by 3b:379 Activation, arranged from more negative activations (left) to more positive activations (right), in activation buckets from -800 to 900+, with six randomly sampled dataset crops and their attribution visualizations per bucket.]

Quilts of images reveal patterns
Quilts of images reveal patterns across a wide range of activations, but they can be misleading. Since a neuron's activation to a receptive-field sized crop of an image is just a single number, we can't be sure which part of the image caused it. As a result, we could be fooled by spurious correlations. For instance, since many of the images that cause [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379) to fire most strongly are clocks, we may think the neuron detects clocks rather than curves.

To see *why* an image excites a neuron, we can use feature visualization to visualize the image's attribution to the neuron.

[Visualizing Attribution
-----------------------](#visualizing-attribution)

Most of the tools we use for studying neuron families, including feature visualization, can be used *in context* of a particular image using attribution.

There is a significant amount of work on how to do attribution in neural networks. These methods attempt to describe which pixels or earlier neurons are responsible for causing a neuron to fire. In the general case of complex non-linear functions, there is a lot of disagreement over which attribution methods are principled and whether they are reliable. But in the linear case, attribution is generally agreed upon, with most methods collapsing to the same answer. In a linear function $w \cdot x$, the contribution of the component $x_i$ to the output is $w_i x_i$. The attribution vector (or tensor) describing the contribution of each component is $(w_0 x_0,\ w_1 x_1,\ \ldots)$.

Since a neuron's pre-activation value (its weighted sum of the previous layer, before the bias and nonlinearity) is a linear function of the neurons in the layer before it, we can use this generally agreed-upon attribution method. In particular, a curve detector's pre-activation value in 3b is a linear function of 3a. The attribution tensor describing how all neurons in the previous layer influenced a given neuron is the previous layer's activations pointwise multiplied by the weights.

We normally use feature visualization to create a superstimulus that activates a single neuron, but we can also use it to activate linear combinations of neurons. By applying feature visualization to the attribution tensor, we create the stimulus that maximally activates the neurons in 3a that caused [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379) to fire. Additionally, we will use the absolute value of the attribution tensor, which shows us both features that caused the neuron to fire and features that inhibited it. This can be useful for seeing curve-related visual properties that influenced our curve neuron, even if that influence was to make it fire less.

Combining this together gives us $\text{FeatureVisualization}(\text{abs}(W \odot h_{prev}))$, where $W$ is the weights for a given neuron and $h_{prev}$ is the activations of the previous hidden layer. In practice, we find it helpful to parameterize these attribution visualizations to be grayscale and transparent, making the visualization easier to read for non-experts.
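As a rough illustration (not the article's notebook code), here is a minimal NumPy sketch of this linear attribution computation; the shapes, the variable names, and the `feature_visualization` placeholder are assumptions for illustration.

```python
import numpy as np

# Hypothetical shapes for illustration (not the real InceptionV1 dimensions):
# h_prev: activations of the previous layer (3a) feeding one 3b unit,
#         flattened over the kernel's spatial extent and input channels.
# w:      the corresponding weights of that 3b curve-detector unit.
h_prev = np.random.rand(5 * 5 * 256).astype(np.float32)
w = np.random.randn(5 * 5 * 256).astype(np.float32)

# Linear-case attribution: each input's contribution is its activation
# times its weight (the pre-activation is the sum of these, plus a bias).
attribution = w * h_prev             # elementwise product, same shape as h_prev
pre_activation = attribution.sum()   # + bias, omitted here

# For visualization we take the magnitude of each contribution, so that
# both excitatory and inhibitory inputs show up.
attribution_abs = np.abs(attribution)

# A feature-visualization library would then optimize an image so that 3a's
# activations align with this tensor, e.g. (placeholder, library-dependent):
#   feature_visualization(objective=lambda acts_3a: (acts_3a * attribution_abs).sum())
```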
Example code can be found in the notebook.

We can also use attribution to revisit the earlier quilt of dataset examples in more depth, seeing why each image caused [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379) to fire. You can click the figure to toggle between seeing the raw images and the attribution-vector feature visualizations.

[Figure: Random Dataset Images by 3b:379 Attribution, grouped into bins from -800 to 900+; each image is shown alongside its attribution feature visualization.]
Reproduce in a Notebook.

While the above experiment visualizes every neuron in 3a, attribution is a powerful and flexible tool that could be applied to studies of circuits in a variety of ways. For instance, we could visualize how an image flows through each [neuron family in early vision](https://distill.pub/2020/circuits/early-vision/) before 3b, visualizing the image's activation vector and attribution vector to curve neurons at each family along the way.
Each activation vector would show what a family saw in the image, and each attribution vector would show us how it contributed to activating [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379).

In the next section we'll look at a less sophisticated technique for extracting information from dataset images: blindfolding ourselves from the neuron's activations and classifying images by hand.

[Human Comparison
----------------](#human-comparison)

Nick Cammarata, an author of this paper, manually labelled over 800 images into four groups: curve, imperfect curve, unrelated image, or opposing curve. We randomly sampled a fixed number of images from [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379) activations in bins of 100. (We did not cherry-pick from these bins; the data in this paper is from our first sampling of images.) While labeling, Nick could only see the image's pixels, not additional information such as the neuron's activations or attribution visualizations. He used the following rubric in labeling:

* **Curves**: The image has a curve with a similar orientation to the neuron's feature visualization. The curve goes across most of the width of the image.
* **Imperfect Curve**: The image has a curve that is similar to the neuron's feature visualization, but has at least one significant defect. Perhaps it is too flat, has an angle interrupting the arc, or the orientation is slightly off.
* **Unrelated**: The image doesn't have a curve.
* **Opposing Curve**: The image has a curve that differs from the neuron's feature visualization by more than 45 degrees.

After hand-labeling these images, we compared our labels to [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379) activations across the same images. Using a stackplot, we see that the different labels separate cleanly into different activation ranges. (Interestingly, during labeling Nick felt it was often difficult to place samples into groups, as many images seemed to fall near the boundaries of the rubric. We were surprised to see that the labels nonetheless separate clearly by activation level.)
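As a rough sketch of how this comparison could be drawn, assuming we already have the sampled activations and the corresponding hand labels, the snippet below bins images by activation (bins of 100, following the text) and plots the per-bin fraction of each label. The stand-in data and names are hypothetical, not our actual analysis code.

```python
import numpy as np
import matplotlib.pyplot as plt

label_names = ["curve", "imperfect curve", "unrelated", "opposing curve"]

# Stand-in data so the sketch runs: replace with the real 3b:379 activations
# and the corresponding hand labels for the ~800 sampled images.
rng = np.random.default_rng(0)
activations = rng.exponential(scale=200.0, size=800)
labels = rng.choice(label_names, size=800)

bin_width = 100
edges = np.arange(0, activations.max() + bin_width, bin_width)
bin_index = np.clip(np.digitize(activations, edges) - 1, 0, len(edges) - 2)

# For each activation bin, count how many images received each label,
# then normalize to per-bin fractions for the stacked plot.
counts = np.zeros((len(label_names), len(edges) - 1))
for lbl, b in zip(labels, bin_index):
    counts[label_names.index(lbl), b] += 1
fractions = counts / np.maximum(counts.sum(axis=0, keepdims=True), 1)

centers = (edges[:-1] + edges[1:]) / 2
plt.stackplot(centers, fractions, labels=label_names)
plt.xlabel("3b:379 activation")
plt.ylabel("fraction of hand labels")
plt.legend()
plt.show()
```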
Still, there are many images that cause the neuron to activate but aren't classified as curves or imperfect curves. When we visualize attribution to [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379) we see that many of the images contain subtle curves.

[Figure: examples labelled "unrelated" that nonetheless activate 3b:379, shown alongside their attribution visualizations.]

Dataset examples that activate 3b:379 but were labelled "unrelated" by humans often contain subtle curves that are revealed by visualizing the image's attribution to curve neurons. Nick found it hard to detect subtle curves across hundreds of curve images because he started to experience the [afterimage effect](https://en.wikipedia.org/wiki/Afterimage) that occurs when looking at one kind of stimulus for a long time. As a result, he found it hard to tell whether subtle curves were simply perceptual illusions. By visualizing attribution, we can reveal the curves that the neuron sees in the image, showing us curves that our labeling process missed.
In these cases, it seems [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379) is a superhuman curve detector.

**How important are different points on the activation spectrum?**

These charts are helpful for comparing against our hand labels, but they give an incomplete picture. While [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379) seems to be highly selective for curve stimuli when it fires strongly, this is only a tiny fraction of the cases where it fires. Most of the time, it doesn't fire at all, and when it does, it usually fires only weakly.

To see this, we can look at the probability density over activation magnitudes from all ImageNet examples, split into the same per-activation-magnitude (x-axis) ratio of classes as our hand-labelled dataset.

From this perspective, we can't even see the cases where our neuron fires strongly! Probability density decays exponentially as we move right, so these activations are rare. To some extent, this is what we should expect if these neurons really detect curves, since clear-cut curves rarely occur in images.

Perhaps more concerning: although curves are a small fraction of the cases where [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379) fires only weakly or doesn't fire, this graph seems to show that the majority of stimuli classified as curves also fall in these cases, because strong firing is many orders of magnitude rarer. This seems to be at least partly due to labeling error and the rarity of curves (see discussion later), but it makes things hard to reason about. This is why we haven't provided a precision-recall curve: recall would be dominated by the cases where the neuron didn't fire strongly, and hence by potential labeling error.

It's not clear that probability density is really the right way to think about the behavior of a neuron. The vast majority of cases are cases where the neuron didn't fire: are those actually important to think about? And if a neuron frequently barely fires, how important is that for understanding its role in the network?

An alternative measure for thinking about the importance of different parts of the activation spectrum is *contribution to expected value*, $x \cdot p(x)$. This measure can be thought of as an approximation of how much a given activation value influences the output of the neuron, and by extension network behavior. There's still reason to think that high-activation cases may be disproportionately important beyond this (for example, in max pooling only the highest value matters), but contribution to expected value seems like a reasonable estimate. (If one wanted to push further on exploring the importance of different parts of the activation spectrum, they might take some notion of attribution, that is, methods for estimating the influence of one neuron on later neurons in a particular case, and estimate the contribution to the expected value of the attribution to the logit. A simple version of this would be to look at $x \cdot \frac{d\,\text{logit}}{dx} \cdot p(x)$.)
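To make the measure concrete, here is a minimal sketch of estimating contribution to expected value from a sample of activations with a histogram; the stand-in data and bin count are assumptions for illustration, not the analysis behind the figures above.

```python
import numpy as np

# Stand-in sample of 3b:379 activations over many ImageNet crops.
rng = np.random.default_rng(0)
activations = rng.exponential(scale=50.0, size=1_000_000)

# Estimate p(x) with a histogram over activation magnitude.
hist, edges = np.histogram(activations, bins=200, density=True)
centers = (edges[:-1] + edges[1:]) / 2
bin_width = np.diff(edges)

# Probability mass and contribution to expected value per bin.
prob_mass = hist * bin_width            # fraction of cases in each bin
contribution = centers * prob_mass      # x * p(x), up to binning
contribution /= contribution.sum()      # normalize so the bins sum to 1

# Summing x * p(x) over bins recovers the expected activation.
print("estimated E[x]:", (centers * prob_mass).sum())
```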
When we looked at probability density earlier, one might have been skeptical that [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379) was really a curve detector in a meaningful sense. Even if it's highly selective when it fires strongly, how can that be what matters when those cases aren't even visible on a probability density plot? Contribution to expected value shows us that, even by a conservative measure, curves and imperfect curves account for 55% of the contribution to expected value. This seems consistent with the hypothesis that it really is a curve detector, and that the other stimuli causing it to fire are labeling errors or cases where noisy images cause the neuron to misfire.

Our experiments studying the dataset have so far shown us that [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379) activations seem to correspond roughly to a human judgement of whether images contain curves. Additionally, visualizing the attribution vectors of these images tells us that they cause the neuron to fire because of the curves they contain, and that we're not being fooled by spurious correlations. But these experiments are not enough to defend the claim that curve neurons detect images of curves. Since images of curves appear infrequently in the dataset, using it to systematically study curve images is difficult. Our next few experiments will focus on this directly, studying how curve neurons respond to the space of reasonable curve images.

[Joint Tuning Curves
-------------------](#joint-tuning-curves)

Our first two experiments suggest that each curve detector responds to curves at a different orientation. Our next experiment will help verify that they really do detect rotated versions of the same feature, and characterize how sensitive each unit is to changes in orientation.

We do this by creating a **joint tuning curve**
of how all curve detectors respond if we rotate natural dataset examples that caused a particular curve detector to fire. (In neuroscience, tuning curves, charts of neural response to a continuous stimulus parameter, came to prominence in the early days of vision research. Observation of receptive fields and orientation-specific responses in neurons gave rise to some of the earliest theories about how low-level visual features might combine to create higher-level representations. Since then they have been a mainstay technique in the field.)

Each neuron has a Gaussian-like bump surrounding its preferred orientation, and as each one stops firing another starts firing, jointly spanning all orientations of curves.

[Figure: Neuron Responses to Rotated Dataset Examples, for the curve detectors 3b:379, 3b:406, 3b:385, 3b:343, 3b:342, 3b:388, 3b:340, 3b:330 and 3b:349. Methodology: we collect dataset examples that maximally activate a neuron, rotate them in increments of 1 degree from 0 to 360 degrees, and record activations. The activations are shifted so that the points where each neuron responds are aligned, and the curves are then averaged to create a typical response curve.]

While tuning curves are useful for measuring neuron activations across perturbations in natural images, we're limited by the kinds of perturbations we can apply to these images. In our next experiment we'll get access to a larger range of perturbations by rendering artificial stimuli from scratch.

[Synthetic Curves
----------------](#synthetic-curves)

While the dataset gives us almost every imaginable curve, these images don't come labelled with properties such as orientation or radius, making it hard to answer questions that require systematically measuring responses to visual properties. How sensitive are curve detectors to curvature? What orientations do they respond to? Does it matter what colors are involved? One way to get more insight into these questions is to draw our own curves. Using synthetic stimuli like this is a common method in visual neuroscience, and we've found it to also be very helpful in the study of artificial neural networks. The experiments in this section are specifically inspired by similar experiments probing for curve-detecting biological neurons.

Since dataset examples suggest curve detectors are most sensitive to orientation and curvature, we'll use these two properties as the parameters of our curve renderer. We can use this to measure how changes in each property cause a given neuron, such as [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed3b_379.jpg)3b:379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379), to fire.
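As an illustration of what such a renderer might look like, here is a minimal sketch that draws a single arc parameterized by orientation and curvature and sweeps a grid of both. The geometry, the parameter ranges, and the `neuron_activation` stand-in are assumptions, not our actual rendering code.

```python
import numpy as np
from PIL import Image, ImageDraw

def render_curve(orientation_deg, curvature, size=224, stroke=6):
    """Draw one dark arc on a white canvas.

    orientation_deg sets the tangent direction at the arc's midpoint;
    curvature is 1/radius in pixels, so small values approach a straight line.
    """
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    radius = 1.0 / max(curvature, 1e-6)
    theta = np.radians(orientation_deg)
    # Put the circle's center perpendicular to the tangent so the arc
    # passes through the middle of the image.
    cx = size / 2 + radius * np.sin(theta)
    cy = size / 2 - radius * np.cos(theta)
    mid_angle = np.arctan2(size / 2 - cy, size / 2 - cx)
    span = min(np.pi, size * curvature)   # angular extent, roughly an image-width of arc
    angles = np.linspace(-span / 2, span / 2, 200) + mid_angle
    points = [(float(cx + radius * np.cos(a)), float(cy + radius * np.sin(a)))
              for a in angles]
    draw.line(points, fill="black", width=stroke, joint="curve")
    return np.asarray(img, dtype=np.float32) / 255.0

def neuron_activation(image):
    """Stand-in for a forward pass through InceptionV1 reading out 3b:379.

    Returns a trivial placeholder (mean darkness) so the sketch runs end to end.
    """
    return float(1.0 - image.mean())

# Sweep orientation (x-axis) and curvature (y-axis) to build an activation grid.
orientations = np.linspace(0, 360, 64, endpoint=False)
curvatures = np.linspace(0.002, 0.02, 32)
heatmap = np.array([[neuron_activation(render_curve(o, c))
                     for o in orientations] for c in curvatures])
```

In the real experiment, `neuron_activation` would be the model's response at the arc's location; the placeholder here only keeps the sketch executable.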
We find it helpful to present this as a heatmap, in order to get a higher-resolution perspective on what causes the neuron to fire.

**Figure: Activation heatmap over synthetic curves varying in orientation and curvature** (color scale: 0σ to 25σ, in standard deviations of dataset activation).

We find that simple drawings can be extraordinarily exciting. The curve images that cause the strongest excitations — up to 24 standard deviations above the average dataset activation! — have similar orientation and curvature to the neuron's feature visualization. Reproduce in a Notebook.
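As a rough sketch of how such a heatmap can be produced, the renderer below draws a single arc parameterized by orientation and radius (curvature) and sweeps both parameters. The `get_act` function stands in for a hypothetical helper returning one unit's activation for an image, and dividing by the unit's dataset standard deviation gives the σ scale used in the figure.

```python
import numpy as np
from PIL import Image, ImageDraw

def render_curve(orientation_deg, radius, size=224, span_deg=120, width=6):
    """Draw one arc whose midpoint sits at the image center and which bulges
    in the direction given by `orientation_deg`."""
    img = Image.new("RGB", (size, size), (128, 128, 128))
    theta = np.deg2rad(orientation_deg)
    # Offset the circle's center by `radius` so the arc passes through the middle.
    cx = size / 2 - radius * np.cos(theta)
    cy = size / 2 - radius * np.sin(theta)
    bbox = [cx - radius, cy - radius, cx + radius, cy + radius]
    ImageDraw.Draw(img).arc(bbox, orientation_deg - span_deg / 2,
                            orientation_deg + span_deg / 2, fill=(0, 0, 0), width=width)
    return np.asarray(img)

def curve_heatmap(get_act, dataset_std, orientations=range(0, 360, 10),
                  radii=range(20, 200, 10)):
    """Rows vary radius (curvature), columns vary orientation; values are in σ."""
    return np.array([[get_act(render_curve(o, r)) / dataset_std
                      for o in orientations] for r in radii])
```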
Why do we see wispy triangles?

The triangular geometry shows that curve detectors respond to a wider range of orientations in curves with higher curvature. This is because curves with more curvature contain more orientations: a line contains no curve orientations, while a circle contains every curve orientation. Since the synthetic images closer to the top of the heatmap are closer to lines, their activations are narrower.

The wisps show that tiny changes in orientation or curvature can cause dramatic changes in activations, which indicates that curve detectors are fragile and non-robust.
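One way to quantify this fragility, reusing the hypothetical `render_curve` and `get_act` helpers from the sketch above, is to nudge a stimulus by a degree or two and report the largest swing in activation, measured in dataset standard deviations.

```python
import numpy as np

def max_swing(get_act, dataset_std, orientation, radius, max_delta=2.0, steps=9):
    """Largest activation change (in σ) caused by perturbing a synthetic curve's
    orientation by at most `max_delta` degrees. Helpers are the sketches above."""
    base = get_act(render_curve(orientation, radius))
    deltas = np.linspace(-max_delta, max_delta, steps)
    acts = np.array([get_act(render_curve(orientation + d, radius)) for d in deltas])
    return np.abs(acts - base).max() / dataset_std
```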
Sadly, this is a more general problem across neuron families, and we see it as early as the [Gabor family](https://distill.pub/2020/circuits/early-vision/#group_conv2d1_complex_gabor) in the second layer ([conv2d1](https://microscope.openai.com/models/inceptionv1/conv2d1_0)).

Varying curves along just two variables reveals barely-perceptible perturbations that sway activations by several standard deviations. This suggests that the higher-dimensional pixel space contains even more pernicious exploits. We're excited about the research direction of carefully studying neuron-specific adversarial attacks, particularly in early vision. One benefit of studying early vision families is that it's tractable to follow the whole circuit back to the input, and this could be made simpler by extracting the important parts of a circuit and studying it in isolation. Perhaps this simplified environment could give us clues into how to make neurons more robust, or even how to protect whole models against adversarial attacks.

In addition to testing orientation and curvature, we can also test other variants, like whether the curve shapes are filled or whether they have color. Dataset analysis hints that curve detectors are invariant to cosmetic properties like lighting and color, and we can confirm this with synthetic stimuli.

**Figure: Curve detectors are invariant to fill, as well as to color** (color scale: 0σ to 25σ standard deviations of dataset activation).

Synthetic Angles
----------------

Both our synthetic curve experiments and dataset analysis show that although curve detectors are sensitive to orientation, they have a wide tolerance for the radius of curves. At the extreme, curve neurons partially respond to edges in a narrow band of orientations, which can be seen as curves with infinite radius. This may cause us to think curve neurons actually respond to lots of shapes with the right orientation, rather than to curves specifically.
While we cannot systematically render all possible shapes, we think angles are a good test case for studying this hypothesis.

In the following experiment we vary synthetic angles similarly to our synthetic curves, with radius on the y axis and orientation across the x axis.

**Figure: Activation heatmap over synthetic angles** (color scale: 0σ to 25σ standard deviations of dataset activation).

The activations form two distinct lines, with the strongest activations where they touch. Each line is where one of the two lines in the angle aligns with the tangent of the curve the neuron detects. The two lines touch where the angle is most similar to a curve with an orientation that matches the neuron's feature visualization. The weaker activations on the right side of the heatmap have the same cause, but with the inhibitory half of the angle stimulus facing outwards instead of inwards.

The first stimuli we looked at were synthetic curves, and the second were synthetic angles. In the next examples we show a series of stimuli that transition from angles to curves. Each column's strongest activation is stronger than the column before it, since rounder stimuli are closer to curves, causing curve neurons to fire more strongly.
Additionally, as each stimulus becomes rounder, its "triangle of activation" becomes increasingly filled in as the two lines from the original angle stimulus transition into a smooth arc.

**Figure:** We transition from angles on the left to curves on the right, making the stimuli rounder at each step. At each step we see the maximum activation for each neuron increase, and the activation "triangle" fill in as the two lines forming the original angle become a single arc (color scale: 0σ to 25σ standard deviations of dataset activation). Reproduce in a Notebook.
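As a rough sketch of how such transition stimuli can be rendered (not the renderer used for the figure), the helper below blends a sharp angle into a circular arc with a single `roundness` parameter by linearly interpolating point positions between the two shapes.

```python
import numpy as np
from PIL import Image, ImageDraw

def render_angle_to_curve(roundness, orientation_deg, radius=60.0,
                          half_span_deg=60.0, size=224, width=6, n=64):
    """Stimulus that interpolates between an angle (roundness=0) and a circular
    arc (roundness=1), rotated by `orientation_deg`. Illustrative sketch only."""
    phi = np.deg2rad(half_span_deg)
    s = np.linspace(-1.0, 1.0, n)
    # Points along a circular arc with its midpoint at the origin.
    arc = np.stack([radius * np.sin(s * phi),
                    radius * (1.0 - np.cos(s * phi))], axis=1)
    # The matching angle: two straight segments from the arc's endpoints to the origin.
    ends = np.where(s[:, None] < 0, arc[0], arc[-1])
    angle_pts = np.abs(s)[:, None] * ends
    pts = (1.0 - roundness) * angle_pts + roundness * arc  # blend the two shapes
    # Rotate and move to the image center.
    t = np.deg2rad(orientation_deg)
    rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    pts = pts @ rot.T + size / 2
    img = Image.new("RGB", (size, size), (128, 128, 128))
    ImageDraw.Draw(img).line([tuple(p) for p in pts], fill=(0, 0, 0),
                             width=width, joint="curve")
    return np.asarray(img)
```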
The heatmap interface above is useful for seeing how different curve neurons respond to changes in multiple stimulus properties, but it's bulky. In the next section we'll be exploring curve families across different layers, and it will be helpful to have a more compact way to view the activations of a curve neuron family. For this, we'll introduce a *radial tuning curve*.

Radial Tuning Curve
-------------------

**Figure: Radial tuning curves of the curve detector family.**

The Curve Families of InceptionV1
---------------------------------

So far we've been looking at a set of curve neurons in 3b. But InceptionV1 actually contains curve neurons in four contiguous layers, with 3b being the third of these layers.

**Figure: The layers of InceptionV1, from input to softmax, with the four curve-detecting layers (conv2d2, 3a, 3b, 4a) highlighted.**

### conv2d2

"conv2d2", which we sometimes shorten to "2", is the third convolutional layer in InceptionV1. It contains two types of curve detectors: concentric curves and combed edges.

Concentric curves are small curve detectors that have a preference for multiple curves at the same orientation with increasing radii. We believe this feature has a role in the development of curve detectors in 3a and 3b that are tolerant of a wide range of radii.

Combed edges detect several lines protruding perpendicularly from a larger line. These protruding lines also detect curves, making them a type of curve detector. These neurons are used to construct later curve detectors and play a part in the [combing effect](#combing-effect).

Looking at conv2d2 activations, we see that its curve detectors respond to one contiguous range of orientations like the ones in 3b, but also weakly activate to a range on the opposite side, 180 degrees away. We call this secondary range **echoes**.

### 3a

By 3a, non-concentric curve detectors have formed. In many ways they resemble the curve detectors in 3b, and in the next article we'll see how they're used to build 3b curves. One difference is that the 3a curves have echoes.

### 3b

These are the curve detectors we've been focusing on in this article. They have clean activations with no echoes.

You may notice that there are two large angular gaps at the top of the radial tuning curve for 3b, and smaller ones at the bottom. Why is that?
One factor is that the model also has what we call [double curve detectors](https://distill.pub/2020/circuits/early-vision/#group_mixed3b_double_curves), which respond to curves in two different orientations and help fill in the gaps.

### 4a

In 4a the network constructs many complex shapes, such as spirals and boundary detectors, and it is also the first layer to construct 3d geometry. It has several curve detectors, but we believe they are better thought of as corresponding to specific worldly objects rather than abstract shapes. Many of these curves are found in 4a's [5x5 branch](https://microscope.openai.com/models/inceptionv1/mixed4a_5x5_0), which seems to specialize in 3d geometry.

For instance, [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed4a_406.jpg)4a:406](https://microscope.openai.com/models/inceptionv1/mixed4a_0/406) appears to be an upwards-facing curve detector with confusing secondary behavior at two other angles. But dataset examples reveal its secret: it's detecting the tops of cups and pans viewed from an angle. In this sense, it is better viewed as a tilted 3d circle detector.

We think [![](https://distill.pub/2020/circuits/early-vision/images/neuron/mixed4a_406.jpg)4a:406](https://microscope.openai.com/models/inceptionv1/mixed4a_0/406) is a good example of how neural network interpretability can be subjective. We usually think of abstract concepts like curves and worldly objects like coffee cups as being different kinds of things — and for most of the network they are separate. But there's a transition period where we have to make a judgement call, and 4a is that transition.

Repurposing Curve Detectors
---------------------------

We started studying curve neurons to better understand neural networks, not because we were intrinsically interested in curves. But during our investigation we became aware that curve detection is important for fields like aerial imaging, self-driving cars, and medical research, and there's a breadth of literature from classical computer vision on curve detection in each domain. We've prototyped a technique that leverages the curve neuron family to do a couple of different curve-related computer vision tasks.

One task is *curve extraction*, the task of highlighting the pixels of the image that are part of curves. Visualizing attribution to curve neurons, as we've been doing in this article, can be seen as a form of curve extraction. Here we compare it to the commonly used Canny edge detection algorithm on an x-ray of blood vessels known as an angiogram, taken from Figure 2.1 of the cited work.

**Figure: Angiogram example, from Figure 2.1 of the cited work.**

The attribution visualization clearly separates and illuminates the lines and curves, and displays fewer visual artifacts. However, it displays a strong [combing effect](#combing-effect) — unwanted perpendicular lines emanating from the edge being traced. We're unsure how harmful these lines are in practice for this application, but we think it's possible to remove them by editing the circuits of curve neurons.

We don't mean to suggest we've created a competitive curve tracing algorithm. We haven't done a detailed comparison to state-of-the-art curve detection algorithms, and believe it's likely that classical algorithms tuned for precisely this goal outperform our approach.
Instead, our goal here is to explore how leveraging internal neural network representations opens a vast space of visual operations, of which curve extraction is just one point.

### Spline Parameterization

We can access more parts of this space by changing what we optimize. So far we've been optimizing pixels, but we can also create a differentiable parameterization that renders curves, similar to prior explorations of differentiable parameterizations. By backpropagating from the attribution through the input into spline knots, we can now *trace curves* — obtaining the best-fitting spline equations that describe the curves in the image.

We created an early prototype of this approach. Since curve neurons work in a variety of settings (as we explored above, they are robust to cosmetic properties like brightness and texture), our spline parameterization does too.

**Occlusion.** Our splines can trace curves even if they have significant occlusion. Furthermore, we can use attribution to construct complex occlusion rules. For instance, we can strongly penalize our spline for overlapping with a particular object or texture, disincentivizing the spline from connecting visual curves that are occluded by these features.

**Subtle Curves.** Since curve neurons are robust to a wide variety of natural visual features, our curve tracing algorithm can be applied to subtle curves in images.

**Complex Shapes.** Several examples of tracing more complex shapes.

Reproduce in a Notebook.
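As a very rough sketch of the idea (not the prototype described above), one can build a soft, differentiable renderer over spline knots and ascend a hypothetical `curve_attribution` objective by gradient descent on the knots.

```python
import torch

SIZE = 112  # small canvas keeps the toy distance computation cheap

def catmull_rom(knots, samples_per_segment=32):
    """Sample a dense polyline from spline knots (uniform Catmull-Rom)."""
    p0, p1, p2, p3 = knots[:-3], knots[1:-2], knots[2:-1], knots[3:]
    t = torch.linspace(0, 1, samples_per_segment, device=knots.device)[None, :, None]
    pts = 0.5 * (2 * p1[:, None]
                 + (-p0 + p2)[:, None] * t
                 + (2 * p0 - 5 * p1 + 4 * p2 - p3)[:, None] * t ** 2
                 + (-p0 + 3 * p1 - 3 * p2 + p3)[:, None] * t ** 3)
    return pts.reshape(-1, 2)

def soft_render(pts, size=SIZE, sigma=1.5):
    """Differentiable 'ink': pixel intensity falls off with distance to the polyline."""
    ys, xs = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).float().reshape(-1, 1, 2)
    d2 = ((grid - pts[None]) ** 2).sum(-1).min(dim=1).values
    return torch.exp(-d2 / (2 * sigma ** 2)).reshape(size, size)

# curve_attribution(image) -> scalar attribution to the curve family (hypothetical helper).
knots = (torch.rand(8, 2) * SIZE).requires_grad_(True)
opt = torch.optim.Adam([knots], lr=1.0)
for _ in range(200):
    canvas = soft_render(catmull_rom(knots))
    image = canvas.unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)  # (1, 3, H, W)
    loss = -curve_attribution(image)  # maximize attribution to curve units
    opt.zero_grad()
    loss.backward()
    opt.step()
```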
### Algorithmic Composition

One seemingly unrelated visual operation is image segmentation. This can be done in an unsupervised way using non-negative matrix factorization (NMF) of the activations. We can visualize attribution to each of these factors with our spline parameterization to trace the curves of different objects in the image.

**Figure: Source image, its tracing, and tracings of individual NMF components.**

Instead of factoring the activations of a single image, we can jointly factorize lots of butterflies to find the neurons in the network that respond to butterflies in general. One big difference between factoring activations and normal image segmentation is that we get groups of neurons rather than pixels.
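A minimal sketch of the joint factorization step, assuming a hypothetical `get_acts` helper that returns a layer's (height, width, channels) ReLU activations for an image; each row of the returned factor matrix is a candidate group of neurons.

```python
import numpy as np
from sklearn.decomposition import NMF

def butterfly_neuron_groups(images, get_acts, n_factors=4):
    """Jointly factorize activations from many butterfly images.
    get_acts(img) -> (H, W, C) ReLU activations (non-negative; hypothetical helper)."""
    acts = [get_acts(img) for img in images]
    flat = np.concatenate([a.reshape(-1, a.shape[-1]) for a in acts], axis=0)  # (sum HW, C)
    nmf = NMF(n_components=n_factors, init="nndsvd", max_iter=500)
    spatial = nmf.fit_transform(flat)   # how strongly each position uses each factor
    channel_groups = nmf.components_    # (n_factors, C): each row is a group of neurons
    return spatial, channel_groups
```

A factor whose channel loadings concentrate on butterfly-selective units can then serve as the objective driving the spline optimization sketched earlier.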
These neuron groups can be applied to find butterflies in images in general, and by composing this with the differentiable spline parameterization we get a single optimization, applicable to any image, that automatically finds butterflies and gives us the equations of splines that fit them.

**Figure:** We traced 23 butterfly images and chose our 15 favorites. Reproduce in a Notebook.

In the example above we manipulated butterflies and curves without having to worry about the details of either. We delegated the intricacies of recognizing butterflies of many species and orientations to the neurons, letting us work with the abstract concept of butterflies.

We think this is one exciting way to fuse classical computer vision with deep learning.
There is plenty of low-hanging fruit in extending the technique shown above, as our spline parameterization is just an early prototype and our optimizations use a neural network that's half a decade old. However, we're more excited by investigations of how users can explore the space between tasks than by improvements in any particular task. Once a task is set in stone, training a neural network for exactly that job will likely give the best results. But real-world tasks are rarely specified with precision, and the harder challenge is to explore the space of tasks to find which one to commit to.

For instance, a more developed version of our algorithm that automatically finds the splines of butterflies in an image could be used as a basis for turning video footage of butterflies into an animation (using a [shared parameterization](https://distill.pub/2018/differentiable-parameterizations/#section-aligned-interpolation) to maintain consistency between frames). But an animator may wish to add texture neurons and change to a soft brush parameterization to add a rotoscoping style to their animation. Since they have full access to every neuron in the render, they could manipulate attribution to fur neuron families and specific dog breeds, changing how fur is rendered on specific species of dogs across the entire movie. Since none of these algorithms require retraining a neural network or any training data, in principle an animator could explore this space of algorithms in real time, which is important because tight feedback loops can be crucial in unlocking creative potential.

The Combing Phenomenon
----------------------

One curious aspect of curve detectors is that they seem to be excited by small lines perpendicular to the curve, both inwards and outwards. You can see this most easily by inspecting feature visualizations. We call this phenomenon "combing."

Combing seems to occur across curve detectors from many models, including models trained on Places365 instead of ImageNet.
In fact, there’s some weak evidence it occurs in biological neural networks as well: a team that ran a process similar to feature visualization on a biological neuron in a Macaque monkey’s V4 region of the visual cortex found a circular shape with outwardly protruding lines to be one of the highest activating stimuli.\n\nAlexNet[![](data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCABbAFsDASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwDvA+KmjkGetZs0pjUmqi6uqSohOBnnPes6l0rmkFzOx1CYI61HMhUZ7Vz7a88TFljb5mEUa/3mJ7fhWzDqkF4lxDAwLwfu2J6bz2rmdS60NFDWwxX8zO3BA60sTbmOOcdfaql5FLFPb6VaP88imSeT+4vrUk9wsMdvZWK+ZLNwH7YHU1Kqa2ZTpt7F5OalAqvcXEOnBEkYNIcZA649atBCAXc8dq2Uu5ixpFRNT5JkDFQwz6UxjxmumCuZNkTGmZpzGo81dhXGG18wHI4way77w1HcrcSJOYWCqIwem4nvXQxuqkVY2286r5wyqkNj3HStE47MzkpNe67HmlzZeIPDMyytbm7gSViqgbgDj73tUOlavbzfZIIJGhnts3EyueZ5uy/TPb0r1ARPbQNLNN9ojLEiNhksxJwB+GBiuU8Q+DbW5jeeKIW+oyfMdvQd/wAfc1jVwVOp71N2f9fl/WpH16rQ0rK8e/b+v89CbRdX+1NPY3g8rVronzXYcCP/AGf8K0YkTQy+ULswEdr3P+7/AFrzuG5uHuRpGsu8N0p/cXnVt3ZSe6n1rudE1M6xDLZagoTU7P5do68fxL715NRSi2pdP6v/AFr36HqUa8Jq97p/1/XkTJDHptjcX2pkS6hIhLR53d+MCn3E+pakLyzVUtzGY2TPVkYdf8+lCt5U66s6+dcqfIkToAOgYVDqF7D4ble/uC97qlwBGkanhV9PpWlGbvZ7l4inpc1YbKCxhMksmW/ikkNUpNSgkbbE24etY39ka5qszXesTfZoBlwjPwgI7Dt71aSKxtsJBIZjjlweK9ahGNtdzzmnJ+6i/wCZupM1BG+fpU2atoka8+O9NXUmj3dG+p6VXlzis+bI7Vg5G6ib1vrFt5onnYl0+WKJeSSa203bQZlD30w+6BkRj0+g/WvOHGG855Coj5G0c5rQ0DX9StpWkMEtzaFgruWyw9K1ipP4X95E6cWrWsTeMfDK3ljI6ZE8eWRvQ9wfauKttYvIRBqURI1PT8LKD0nh9ffHQ+2K9sYw3tss0YEkbr8y/wB4f415J4u01tG8QIAQtrdghJegQn+E1jioKpHmtqv6/r5PocFBPDzdNfDuvI7tLiCcwXcXNjqceBuH3XI4z/L61S1KGR/D98mA9/aoGRiOSAQQf0NY3w4uze6Pf6Ddk+ZbuZIgfvBSefyNdVIrSywJOQGuLeWGQYwNwHBNeJH3Kjh1j+R9DCfPT5X/AF/S0MOeWXW7iC7tYIrxZbNPMnnciKJ8ndx9aheKe0KpNcWhz0SBs4qGxvDH4dtNIsrNryfc6zAqQnB9QRnjFS2+h3VrK0kyW8ak5CI+4D2Fe7h58p575ldXNC3OQKtVDGuBgDFTVpKWpnyiNFkVVkt8ZNa4jzSNAD2rmZ0Rdjn3tN+EIGDyfeq66ENUmaW4lFpZwjG4DB/CuiNqoOSOKzdXM16qWUBKRZG4DvWsNtTZw5l7qL+g6rACbK0+0TQR8ebKP8/rUnijS49Z0WaFsb1BZD1wexqlpmgTxXKSSzPBaRc+X5n3j9ewrU/tK0e7ltoWDgLyqjI/Otm+bWx5mJhrds8u8IXcum+Mrdn3bXYwyE/xZ4H616jeiOO7spsDCXh3fQqc1wUtp5PigAJtxcKw49816BdqskJJPRXb8cV4uLio11Lrb/gHdQleCZz8JuhFDFALhII7iViVO1WQnge9JDDtdgqADPYk/rU2nJ5ttEX3PIkW3LHgc9hV1Lcr1NdVOVgkhka8VJtqQR4pdtb8xnYsLUgxioVapA1ILjJwWUgGqKosMnmFckdKvtyKgdAaqOhaqNKxlXjXV+3ll32njaDgVq6ZpsNhDwB5h6mlijCngVbQYFXOq7aGDgpPUw30/wAzWzPtODzk1rTj9yVHORipiozmmOK86pT5pXZ0w0VinbReVGFx2qwFp22jpWsUEmMIpKcabWyM2IKeDTBTxVEgaYaeaaaAHLUwaoFqQVDKRJupDTaKixVwNNNOphq0iWxhNJmlNNqyT//Z)](https://microscope.openai.com/models/alexnet/conv3_1_0/348)InceptionV1[![](data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCABzAHMDASIAAhEBAxEB/8QA
HwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwDv80ZpmacKskcKdSAU/bTsK5GaYxqRhioGNOwhrGoy1DtULNRYLkm6gNUBegPUtDuWQakBqsr1MhqbFEwNOzTBTqAHZoptFACU9abigHFWiWWEGamCcVVWQKMmkl1vTLQYuLpUbOMEE/yrWNOUtIq5NyaRKrOhpY9b0e7Zlh1KJmX7yn5SPzq2IknjDwSxygjOUYEUSpyh8SsNoy3XrVBbqGWUxxyK7g4IBraSL/S0ikVsSZAIFeS6Tqoj8Xx+Sp5nKys5OCSSAP8ACp5W2kJHfs4BI7imb+a5a58QY8XC2MyRwy3HlZkfAGOM4+ua2E1GB9RltEuIpjG2A8Ryp+lS17vMNb2NeN6txmqEVXoqzbLsWVp+KYtSCgBMUU7FFAARTGFWNtMkXahY9qFKw+W5yni/UtR0zSGu7GLfslVGOeBuPH51UsfFWsJdTR39rBFJBKVdBEOD0x1OTmtfX9Osr6weK7+2MkuOLZC3PY4HesZtIsTLGVtdXZsEF1tnZh9TnrXZDF0acUpNfM3p4KNW/M7ff/kTeLPFc+gaxZ6feaPZ373dv9ojdF+YDB+XjqQMndgZ6e9T6L4g8O61o0FwNM8t5ATstHKuCDjnOBk+gJrk9UitH16C6TWtRt7izVkSS5tPuq4IxnHK4yPbNOtNHuZ/Do0+x1HS79LWwm+yRW0uycTM5Kn/AGm5xx247VtHMaEly/5/52sOrldemm4u66dfw7/qd0JbOW72af4ims7hSFFnqS4Vjnpv6c+2fWuRufCur6VrE1/qcaSW8k8c7XcTbo8hhnJHTj29K567XUYNauNOispnjNlD5ena2rbpJTt3JGcgg5ztIIJHANM/4SS48O3slsBqnh26AHmWk4NxbMPQRyZKnPOcnpVRnQlLm/q359OqaXVmMsPNxtfX8Px/z1My6hluvGl9fBBLb2+pId8nKhZZDsyO4Nb51GBviJq9oI0t2W+kRAo2rkN/OtTTtc0XWljbVY4tNvGu0uP7UslDQTOvTeh+7j8u9Emj2VpqN/d6srtea3qhlt57UZjFuzfM245wOcjvVSwHPGMab7/12fy26o4p1J0Zv2it27f8Dr/mzr7c7WCuRnHUVpRpxXHzpfeGIorqO8h1bQZ2xFdRHmNv7p/AV2lmY57SGaNtySKGBrwvfjJxkj0PdcU0OAp4p22gCtUZsKKXFFMROBUiqO9Ril34rGTNYosNcx28TO3AA7Cl0PW4by8S3SRmJHAA9PpWTqiyT2bJGSG7GovCtr9hvTe3BwIVYk5xuOK4K06ynFQdlfXRbdT0KVGk6EpSfvdAl8VW97r2p2UzxulncLCy7Ay9CeCep6ce3FaMGj6brWnuRaaXdxhs/vIAmD0JyBkNxj8K8EtL+VbvVLyU75bm4Z35xzk44/AV6b8OdVuYfh/e3lwSHub9zH7LtUcfiDV4idSip13L3VsvXpf1MaU5JQpQ+LuX9T8HwTGSES6hawvw1reO1zaOpOcE8heeQeqnkdK5PXfCl1Fc2mjzRNfpcRySx6fPMJPJjTAzbTE5XlgcHG7Hzdq7Cz+IAsSVvsyRDglP4vwPFbUTaT4mtr68025UX91a+QhLZSNW+8yccE9/90dKiGIhKCqQTg97N3i/Jbav1TOmqpxlyVVfz/W+/wCZ86X2jXmhTfbLKSWSzzhvOTEsR/uSoP5jj0rsPCHiktbtYWYBMoaOSwnbMDhuDtPVSevvXW+JtCSxhaSYANClvZ2MpG5JVKhCs464DAtu7buM9K8v1vwxe6PqUhsEZb22YCa37sOzgjqD39K9TA5nyNQlrF/18tdnovQzrYSNaDtv/W3fzW689DtrzTLi7t9P8OeGvMOny3SzXVpMNsluf48t3XHfp0r0eWC2gu/ItTiKFFiC49On6YrifCOrx65p8TeYsOuQoVDYIZo++f8AD6V1Njclk/fAiYnGD2A4r1MVFVrT3t/V/wCvmeJyToyaez/qxdK03bT80VwONjRSuMxRTqKixQZpDRQKyaNUw2561HdWCXlo8K3LWsnVZByM+hHcetTilbBGKn2Se5XtXHVM4S68BTS+Z+4ginkYlntvmjceuM5U/jXR2uhT2nh+20yODZBbIQuZQSxJJY/matyWKk5QsnOfkOKrtojTEnzHJIxl2LEfQ9qpYGE48s5e72M6mMqc3NFe93PM/HFlPGLeytonM9xKESHux6A49O1XNTs9R8IsJLWd42T/AFhzgO2Bk8cDPp7V6Pp3hXSrG9W9a2NzdjGJZ23be+QD0PvWV4v0pL9fMvpIkjXJWOMEs1TUoRUoqPwx/H1KpV6jUubVsi8L+NLLxLaPFqIVrgDYyMuTKD0wO5z+XUUNpTeGrufV7lftRuh5USMdwgXsGPdR0J79a8ugsLiw18XUStHFaHdlCcJnpu9SfTvXtmga1B4h0nytRhV/MXZIhGc/5P61x16bTc4dd/0X36+tjsoV3D3Zar+r28+nmjzbXLN9DuLbxHpLHyJ5Nqvn78nc49O1dvbX0eoWlrq8OB5qBJUz9yQdc1lXtlBo2rs2rrJcWOnxvLp1qOVZz0H4VQ8H/bbK4az1SDyF1dWu0iP/ACzLMcYHv/LFd+BxVlyyf9dR4yimuZbM7qOTcoOc1IGqhbbkBRuq8GrQauqdrnlqNibNFR5orIofmlBpopRUWKuOzRmkoq0iWSL1q1FiqYbFTxyYrQzL4AxVDUoUeJmMYdgOKsLLxUFxJkVlNXNIHnOv6bLOiRXE0NlaI+8qozk9c4HVqh8NamkmpzTWySpZFxEhfjnoD9fWuqv7WKZiXQN9ayxpe+4gw2yGE5SNRgfl6+9c8o3VjdPsdDrVn/bOkrKoH2uA7kbtx1H0rm4oGuL1b6SaSW6GMFv4cdMV1tpLsXb/AAnqKorYJDdSuvRjkVyRThUO2nVXspQl8iRTuO4jBNSCk24pQK9CMmzgkh2aKKKdyLD
xSg02lBqiR1GaSkNNCFzTleojRmqFYsiTio5XyKj3U1jkVLGipMMmokXmrDjNNC81k0apk8RwKmJzUCcVJms+UvmAiiiiriiGxaKKKskKWiiqJHCkNFFMQ0000UUxBTTRRSY0MakFFFQy0PFPFFFSMWgUUU0IWiiiqEf/2Q==)](https://microscope.openai.com/models/inceptionv1/inceptionv1_0/385)InceptionV3[![](data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCABnAGcDASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwDsQOlSFDtzikRMjmp8fLjv0Fa2JKhQEc4z6VlXjCKUg1vtBjnGK5fUQ0moiMZOW6Cm1pcSepHeDbEr5+8KyZJCFzXQapBBEEBYqwGKwLqaIbhGCcGuVSuzW2hVZjgnBpYiX6A0qiV0HGN3fFXtJ01ruVgXAHXNdEFcym0lcrKMY4q/GpxwRTZbO3SRwLkPtbGF71uLpqR2aMQQWAI9aKr5VcUHzGYisBuq0gPHFFyVhKxjBOOxqRFJA7DFZwdypaE0ajFFPRcCitCLmvDjIz+tW38uMgE89az0zvAxznip1UyTAt9PrWnKVHUddnbACo3Owz9K5FRNJq6BTl92Miul1S5aNGiUEjADcVg6VKG15HIJ5PGKmppA1hFXuWLzR7u6uySruOO+AKdH4YmiyzMhwNvzJxg/1roJby4NyY44f3YIBbNUr+5vktE2tGjNKY9xyQB2NcVOnKb0Mala7sjMvdPktbbT1zbAbygD/KMEf4VY0S0lEsyNPaOEGNseO/rVTV2lOpWFpPFE5hUPIWOVP19BV3SkQw3s7abZhdp+aGUMCMdPauuT5KZMaTbTZmyxutndym1tY4xLsDo3Oc9BWxeRNPDZhSAdgz9cVzkCWUmnRt5jQSu5Ozk7vQD2rp7wILGIM5Q7RlieR+NceKraqCN4RUIOZmrZ2du2+adXkB6ZqVnjbpg+gFZaSafHIfJjmnkPG73q/Z2jI5lnOG/hTvXTTjpqcUpNslAO47uMUVLtBOW4J7UVtYtMtjAcAn3qzGygkk8DvWfKxBD+lQySyGJgGPXtWyjc0ixdX1KGO2baFZy38qxPD83na+CCBuzznGKsto93fjbGvB5BJwB9amg8JxW7I11qSRMT0Xgg+xrGsop2OnlaFubmyW8m3amwTf8AMqk5Ax/n86rw3umi7sGfUZ1EaHcXTA+8evXBrbj8LaRNDMY53dZMLkMOOeafc+FtNEk8y2HnBYhtQSEbmHGPbjHNR9ZvoiY0IQ82c8b4rNquqJeRvglIQwzuXoKs3Uf9l6J5JgtYry6XjYx/ed8j3/xrOv8Aww1pGEVnBiG6RipxI5PyKB3PbNVVuroXTWep7pba0lE0rJ8zRY6hT3GSOPasasZtXeppGUW+U6GMeVp6WwbDRqA1tIg35PPDZ96uatDLLd2dssWYuCzE8fSqWnSvrccss5XMMu63uGUYZeyY9cVq3hTVNNS588wW6gMrhfmLehH+ea8xfxfeKqwvGxgxXLxalLZWsKKd33z2rVmH2bbklnPU4rE0yNptYDRqyqTzJKfmNdRdq0jiOKLEWPmkbv7CvXpSVkcE6PLsZocyMWbg0U4IMnvjv60V0E2JmUsnHenW1uJM/KOvNN2twAaS6vxYWbyMvI+77mq57IpNoreI9aexjFlaZWQj55B1Ueg96zrHTo4bJdTv3LmcldjDJ543f1pdI1GzupbiPUFBa4bO8/yrXfRZxD5dnMksB5VZTyv0rzcQ5y0idtGCunN6GNqOq2sd/Fas1zLHbpiTLHBYYK/lUNtrkTvGFlu49zNNO0chyx7D6YApZ/CusNNK26LM3DkNxj3/ACpsvhDUooCftkQZyu4E4U46ZNYLDq2+pq8TGOjWh0dlrMN5Av2t1uTM48mMIFkiHcn6dc/lWXruiNawxMpzaW3zmc9ZATypxzmsq2eV5J7e8t5Gv4wEjXhTCO5B9O+K7HRrwX1rLZXTxyTwt5cpj5Ugjgj8M1WHxNTD1LS1Xn/X5meIoRaU4dNjlYU/s/V7e4eBodKnZntIppQQr4HJ545rrNIlaaeVpbr7Q7rk7F/dIfQHof51x97Yx28lxbXrSMbY/wCiwIdxZMk4PoK1fDGry3lxJE8SJCiARxRrhVxV4zDqElUjsTTqOULPcjiiluvECl7h5grZBRCFA9BmuiuJYX+9KzbeCqj+dZ1y10NWt5J7mCOEZxGnP51N9rZN6edEiljjC5Jrpw1KTXMzOvUjayEkVeAoOfeikaVJP4txorrOW5BLdFVyBnHpXP6jcTX8gQA7c8DNasqvKNq1NZaaiTK7EZ69a5JSOqEF1OestPae5Fuo+Yc1fS8vdOvvJFw21TyvWtHSIlHiC5PYA4p109np+utJdKTG65z1qonRdWsSanelIrS5Jdkf5Ww2Kzl1UGzcuC/lybGBOfkP9RWpdm21vRJY7JeY2yueMGsvT9AIDCe4XY4wQvNX7lveV0cE6bb3MzXZSNQiuQSZWjXOCcOuPX6cVr6UZLO9sXk8uBJV8tLeMc465Y9+n61auRpVlbQpIVeSDJjZ8nB9COuPasVrx7yWIxRSbjcK52gkHHvXDil7WSsjrptxp8rNbxOwtdWtrj7isfmEY+dz6k+mO1ZVk9zaag/mg20XO2IAZx6eprb8Sky+QESbJbLGJMn86xnty+pblt5V3ZJJUg/rXdGCmo8xwucld
LzJTIvmb0s3D54Ltk1bhkuZMOIYlX1Ycmq01uDNtEMiYxy0mSTVsYYbBFllGNxrWVZfDEFTe7JQ7AkkDJ9KKaenQg0VHMVykoTbzQJGB44xTgMdKPKJz71ztHSnYrabG8d7LMe+c1Lqdvp2oMrPd+VIOPWpY18ssQeCKzLyzWQl1PU5xTQ7mlafZdPsZEtJfNYg5IHWube7u7iZkLsOegOKuQM0QK5IFUrhgJd8Zw3c+tDArXccsSgyEAnnJ5qtbSXfnBoJHUpzuB6VZnIkhG/LHOcUkbrHAEGQW+8KFZiLCeINTVgDcE47+taNv4gmd/3wJPrnoPasQQ+W/wB4HPPrUkaYfPTn1rVIlu50Bu7Oc8RSZPJY96sRzxgYiGc9zWOkny4Rdq/Xk1YgbDc8U3YzLzY/CimqaKgZaXrUmcUUVJZFI2VIxVZztWiigaM+fnPWqLqM0UUhldieAeg7UHHp1ooqokMM89qmQA89/WiiqEWI+OKtxDnPeiilcRaVuKKKKBn/2Q==)](https://microscope.openai.com/models/inceptionv3_slim/InceptionV3_InceptionV3_Mixed_5d_concat_0/79)Monkey V4![](./_next/static/images/monkey-c01c4127783d38ea1000f128ee3232d0.png)A number of potential explanations for combing have been proposed, with no clear forerunner.\n\nOne hypothesis is that many important curves in the modern world have perpendicular lines, such as the spokes of a wheel or the markings on the rim of a clock.\n\n![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAePElEQVR4nD2ayZNl13Heczp3elMNXVU9Nxo9oEESEE0MFBUEraDksDYOK+QIS0uHQ1YopL/AEsMLeaW17YWXEr2SLNqmGAQFCRBpQiIJkHTYBAgQIIYGgUZ3A93V3fWGe8/JwYtb5FvUpqJevXdPnvy+75eJ//XPv5JScgCEYMK2qdWUU0WYwrCuG0RQLQDWD31Vpa89+3UkZEFTferJp86fPRuOJW/W/drMgYiJq5QQwMOJiJmZGFGIGZlUFYHd7Kv/86spVYh46tSphy9dBIiq4pSEiAEwAkWklKHrWtWSUhrKEBDM9O5b77788ve3d/dTJY8//hghRs5rBrWS3WHTb8wsFKwYgA3Der1ej+9YVZISX33k6sd374hISumV//ejoR9Ulbjp2i2SGgFV8zCsAYBEpKqQyQA4SRKppe7qtk7VYmvn4NSpO4d3225awlPbcpK6aShVlKRuu6quU0rTyYyI6qZGIUkJMVg4NTUK103NzNPphIAREm+8GAVzECMSeCigBxggEJGZIWLTdAD46KOP7p/YOzw8TCKHh3ffeeedpmmYqKoqYZYk44uZI0K1IBIiaM6qVkpZrdY5D/366Orly8sHR6rl3r3DCEMKUwP3UHUtTMAMxBBhCJCYKpFKkqAQoA2lqVszc3VKVY1MQRgIqsXM3B0gAAzAmYAImAggSilmgQGf//znD+/eNfOtre3vf//7OWdiAvCqrglJSNyh73sAcA9X9WJa1FTdHREj3Mx2drfM84Pl4f0Hh6vVg5QSACBARJgVxHC3nHsk9HA3d3dmCY8kSc02mzUTmRbpupYG7IeNVMwBEEBEABABROwRCAEYAY6AiGhmJw8OvvDMF77/8stnz5wRke985ztf+MIzEZBE0OtSBsRg4vCAcGBmZgRwdy8ZIEQkwufz6f7+icN7dxZbi5u3buzvnVjev2+WzM3Ukoi5uXtgEBIRMgtEpKrKOW82vVqxoVRVJQQwa1s0K6UEIgCaepASCTNEGAB4AICLJFUjgNz3T37mibfefPPe4b2Dg/3r19+9cePh06dPC6UwdxcEBAQzIwQzA4KuaZGZRFR16PtNvzm8d5eEbt36cLqY/sM/vPjaq68e3rnn5rkUNytaSlF3M3dEJMQkQswpJXcPgIfi4s8++Nk3//e3hJEgYjrpch6GPEBgRAQ4gLsZErIAYoxHDxGSBABd/eknn/6b557b2dG2bV559ZX9/X1hBAAicY1h2LBwPWlTVYX7cr2+e3j3xs0Pr19/79atW4d3D4+Oju4c3tnbO2FuH9768MUXX6SgCAgIN5fEZg4IhOjhSIRIZqaqAfHII4/0ZfPBjZ+98MLz4hCIAIBVUwPF0GtEEEKAA7p7EKaACAhzDwREAoBcyvkLD+3t7d+4+eGpU6du3771/vvvX7hwUdW6tpPJzKEcHT14/4MP3n777TfffPPmjZuH9++vNqu+7909IpKkrBkgTty+3U0mkkhIiAUizA0A2D1VlVoGSgAIQBKBpbjZcrNarpZZC2AIEyIgAABFlRoMHYYhPAJcQ4lwyJuURCQRUXikipEIciDAF7/4q//5v/ynvRO78+nspe999zOf+QyjLJfLH/3oR3//rRc++OCD9z+8sV6vEnMgNm2jpqmpiFlNEZF6W/WbD2/dms9mVdNoUQ9HQCAGBGFGhAigQAgU4T5njFAtw9AvV0cBMZ3NxSMAgohEhJCYknvknIkQAFStaSUAInyzGZq6BUREbKo653ywv//000+/9L3v/fJnP3fr1u0vf/nLpZTn/+6Fjz7+2Cyj8GQ6rZu6qRskBARHMHdzA8Tw8AgRPjo6MrO27daxBGBwCHcIAMRwZGJTi4CSCwAABhM9eHD/9kcfbTYDAEoAEFFADEMWZCJqmsbMIsLd67ouJTNT31tdN7kMiJiSbDY9IVZV9cf//o9/99/97vX33rty5cpffeUr169f39ndmW/PA61YIUZCcrSSC6cKkMwUECECkWbdZNOvGIACOFwCAEPNhSUiAMDNTZ2ZkSIoUCjnzCmJ8PLBquTiZviXz74AOH5hqClFOLO42XK1Om7b4AAQEURERHXdEOJsNkek69ffffbZZ19+6aW9g/1r1x5B5m9+65vupq6c6oAws8RkquouJPDzai25gEcZBkkcAUwUVhDBHZhpGAZEcA8iJuKAIKGIYOZScikKgFvbu3nIhCCIGECUkCLCAwFNlUWaphmGwd2TJHMVEXdDBCIgppdefvnZr3/9pZe+32/6xfZMMaRKly5dWmzPDw8PhUTNgJBFzBwCEYkIwc09hmEDAMKc2AQZkIg5wInRPZJw2yZCdjciBoCmaQFpGHrV3DWVFjXzYbNmQDUVLUosYSAiBAgAhOgebTsRqdfro6KKCGYmIpvN+sc//vHXvv6NH/7wh8yp7ZrJolOz5Wr11jtvZS1ERMxZB6YqEAgBCSPAzcDCcjYIRkAIDp3M2kpaojSZzOpaqqoiQhEkRqZk7u5uZhGYJHnYKIKllAiAQC
QCAPned7/7K59/hhDUDIMAYBRtM6uqCqAb8hDhd+9+fP369eeff/7Nn/50KGVre95OJqqWrZh5yZrLcOPDG1KxqiIQIgBEv16HKQFEOEtVpTpVUtVSJenaGsARKy0GgEOfH9x/UEoe8iYl7vtcig5DRiIAJKQIQwIiHE+TUYgTU5J33nk3CH/tn/1av9kgMwRUVQUAAICMs9l8+Pj2D374gxdffPGnb7256ddtV8+2p/1QAIOF0UpK4mHuPmx6jDpcI6IfVuqKEW1Tt03dto1wqquGmXIe+n5Yr4aj+3c1XDglrps6VVXd1NV81rEwAiGReyRJiCgiEeahm34zehwIzFnzUGSxtfP6T17PVn79138tVbWVIsRFS9N1Q8mv/d9Xn/vbv3n9jdceLI/qppp3swBQME4MQKaWSLSUROhAZN4fLRHD3NzgxO7OfDHtJq0IL5fL5XJ1585hzkOEV6mez7ZOnjrXzZrZdLI1mydmIkIkEUGiVNWIREgQ4GYexkJVLeu+R2JCrlLqNxuwkLquTp089fpPXjcv/+o3fzNnE+Su6956++2vf/3Z13782rpfA8FisWVhFtnCwt0MhAhiFGY0K1rMECXx1vbW/v5e20xVdSibu3fu3Lu/3PSbJDSfzc+fPTuZTmfT6fb2tps2k9rdakmMFA5N00pdiVRVXaeqhoAkwkhEGOgs5OGlmIdjYO57U8Uv/ovfvvLIZSf74Mb7j33yE7/xz39js+yf+8Zzz//dC4eH91JdBQRgAJO5OxTzMpZgv+lDAwJWyyWGt227vbX18MMXq6oahuHWjY9u3b5pXpLwfGuxt783nbRt0zV1R8R1087nU0mcknRt23XtrJtOJ9O6bjlVUlVmkXMmpKqqXK2uEwkChh0b8tEi+3q1lOl0+trrP/70E7+0e2L32y/+79u3b7/z5juvv/ZGneq2aYPBwqqqLqaA6KbowIj96si0YIAVW0yb3d2Ds2fOpiqtlsufvvnWxx9/HIPPF/ML586ePnuw2Nmqm7qu6qImUk0ns+3tne3txc7OtqTUtA0TEzITqzkxR0S/GYpaABY1ItKACijnzCIB6OF11ZqWqpnif3/2m1/6D3+EFTz5xBMe8Q/f/vZ6uenazhWYOQhFCNC1hHkBsGGzxggzCy8pyfnz58+ePmsu77333vvvv394eL+u6oO9vasXLp85e2a+PUMKYgrEtmt3dk7snjgxncy7rkt1VcpgwcxMAIjIJESsbkgQEZt1D+7uRkRE3HaNlgIY7lFVtQdFhGvG//W3397e2/q9P/i9rmuuXnmkkvSDH/yfo6MjN/CIqqkRwk2JyEopZcMYrgoYW1tbjzxyra6b119747333l+v113XTqfTxz71+MmDUzuzHWZKVaKEs/n80pXLu7u7BFxU+zyUUsZUDUHMhMSEFBGIbGFIWEoehuzu7k5EgiSJRSQQ3U0kQZAIDZteLPy5557TIV+/fZtczl04f+bcuTfeeMPBXN1MEQIhNOdhvZTEiLG9s3Xx4oWm6X723gc/feud+/dXk0l96tTBxYsXz549u3dir6lb8rS9s71/sL+7vzObzwOAkEy9zwMCiIiFUyB6ADgjApKZMSMiVCkRESFtNhtiJmZTFUqAiAiI6G6JSHNhZvzt3/2Dr/71V4NiOpkS8nwxP3ny5Gp1dHR0pKV4OCMN67VbTgnm8/liPlvMt+7du//229fX6zURbW1vnzt//sK58zu7O+G2tbWzt7d/YvfUib3dpmlKyYEIQMLCwsvlMsLykIlZi7ddwyw4WjGMlBIAIoS5m9kw5DwMkpIDVClViT0sECOCqSFEcMftC5cDrOnanEvJykTTblLXNVLknHPuhZOVMp+2O7vzxWJx7/DezZu3b966xVTt7GyfPn361KmT+3v7k8mkbZu9vf29/YP9/YMkTSllDFPMDIAAmJJoKTn3uaiZE5JIYiIkEhHE8Ahh9gh3R4RSdBgGRPIIJmyamggDAInMUFgwQJq2ESEzczN386KFk+UM6HVXgzug7p/YOTi5jxg3b968+eHtBw+WO9t7J3/+mky6tm5OHhzs7O7unDhRVU0ArTd9XddMCABI5BYBEREpJQ8z9wgYjSOmlIjMLFVMDh4RERCOiMzITBFAAaO7FqncnQAtPNwQSIBQzSACIgiYKFyLhwI5bIY6tadOnprP57dufvjg6P6DB0eVNBfOP3zh/PkTeyeqqtpabO3v7+9sb83n8+l0ZgF9X1iCScxckD0AxnNAiohAFBE3R/SSdTwZZh6GgQyIeExXDoEEDJSS5KzEBBHhYcWAEAA9ggEsTJipZNNcIFAQiQLCCJ0Fdne39ndPA9Dbb791eHjopvPZ1sWHLp0+dbpuurqu9/b2PvGpT8zns1IyAuZigASICEDhRClGc3ts7gkwRlBQ1TXk4hYAMEb1lMTdEQPi2MBCEDEmoJLVzQlxDNMYwMwJED0AQUpRUwPA8GBEYSglV2116eEL27vbH37w0Y0bN5erlQifPHX6ofMP7+6cmM3nWvziw5cee/yxxfZCmO8f3R82GdzHJ4pARGxmjAweQCgoxw0kws1SSiJiaoiUcy5Fm6YOcwNLkpg5IJCOe85xlggQkQiAAHcnRGAIB9Gi4a5Fw5QSm+nW1vTK1StM9OZrb310504u1nXdlStXthfb0+minUzarnv02qNXH31UmAMwiJp2EsD9elOlSosioKoiorszc+KECBAACCKSh2JmzMzM7hERpWREYGYEcgeAgAAEdHdEZhYzR0BTJ0YeUSexuTGTNKnuN5tJ3TDFgweHJ08ePPqJRw4PD9955931amND3tneuXz58tbWtlQNS3X+7IWnnnpqOp8tVytmJuJ+yAw8bacUvFouAwLMmQlHXIM03lck0lJEQlJFxG6eJOVSWBiZIAIJ3D2JEIMZIYmp1izMFYCDm4cSChFpKAVGBCFKyUWI6lpWq6Mz585eu3r1xo0bP/nJT8ZSOHXy1CPXrhHRbDEH4Mce//STTz5dii6X46cnIjQAgDDztm0AYrlcAYCZRhAgRZA7BDoBAwQiqvkIjOuqDkA1RSQtwwjizQIQmMXdRMhMUxpDEkSEmnoJYo4wIXJTEQIiyrk/ferkyVMnX3n11Vs3b4lw2zTnzp45f+qsVGk6m88X2596/NNXr147OloBIDEjUXiQsLkBYESYRUpV18Uw9B6OFBGmGkgsIhAhLKO3iQBmAURmbpp2KAOTaFFMSMBuTsQRAR6EISkVtTz0HsHEEYGEoZ5qAQiJsJLLlcuXtnd33njjjTt37qSUtheLq1eunDl9UrMtthYnT51+9JOPbe/sL5crZB6vlbsTo5kxjsUCiMAsAKGqXopZEAESjlZK5JjwMTEEiHAphTlhuEgq2jOzqXItAVBKlsQWziKqSkJsFMXNzMMDoG0qVTVTaep0cHAmVdUrP3r17r3DOqXpdHrt2qO721tJ6sWsO//QQ5euXAFKD44etO1EPQDRwgHBwo+rNgjguCfWdc0s9++rh7ubmjOTu6sWQkRERxSWUpRZiIhB1Kyq6
nBXcNWcqsotIggARwkRCjNmD0QksFETAICI5Pz5c8PQv/fez+4/OEKgM2fOnD93bjadNk27s73z8OWrZ86ecwCLwJGzArhqVVdF88hqAuDnnz5SSu5RVVXXTYah7wcD8BiBcbgFMDEhuYeIIHJEMHFKVen7gEDEUpQljZZBJIV5RCATExU3PG4NEA5EaO7SbzY3b91eLtdt05082D995mTXtW1bnzx5cO3qJ7ZO7LOkcE/MMeJpd4hwLYQIhAFOyOEAAElkbJ2llLqpAN1czTTCzTTCxq5VSkmSACDCVQ2ZRMSI3Z0IAZKbAzGCIyoTmioCMY3QZ1QIUrWUhJDl3evvbTabtp1cvHhhb2+3riokPHP23Gf+yZOTbrbuh2KWUgJ3jwA3DPTw0PERUrgTY0AQoYeBW0BAgCDVdcWMR0dLU0dEtxBmJlKz8WGPCAoiMLxpqlIw5yEiStGqQggMR3MgIkQMBEkpzAHQAxgxIpBINn3e2tr+5Cc/OZ1O3EpKcunhy089/fR8th0BNULOOZeBiEcmY+7jlKGUTEQpJQREQncHABIKDyAY4aZI6rrJ0dERIrnZaOKYaRxkIRKREI2oFEXE3SPU3UopdV0REjON5i4CJCWDUrJFAEQQIRHJwd7BtWuPzOdTDKequnLlyjPPfMEDVa1u2kXbrZbLnIdx4DPyolLK+OHcj987gNSMRQiRkBBgtBJEkFK1WGytVitzAwMYIiVGZAISOb76I0gemZqIqCJiuI9QPIAQ4vgFMbItMi1qUCeWhx46V9eVMLVtd+Xy1SeefGoYhqrpiHnImc26rmvadrU6KqW4GyLVdV1KEUl93w/DAIFV2wGMWQnCAxDcy6hWTMzMdV2P3zzC3DEJA0LRIszuMMJ9d6/rahh0PL2cMxEhOiIgITERswHAyLqZRnhMi635fDYhgnPnzj3x5FPDkNtu6g55KAhkZkPORDibLbpuUtctAPT9BpFVNUki5KauSz+ABTpSECMl4rpqECgCWAQRm6bpui4lQcRh6HMZzBQRipbxbFWViJglJWmaGhFEBOD453GhHuuPERHQ8d2Q6aT18Md/6dOf++yvDFnrpkOSlICEPSLUIGCzGZqm7rqJqkJALkVVk1REJEKlFBEah2ju6GAIiIZMxFyNro6JqE4RjWphYmIeJ/KIRESpErfjMhnZ4y+6mZlK4lIGYhFJFg4QgGDmRBAR0rbttWuf+tznPt/32X00X5hLIQ8WjsBxZqpFVbWu6+l0rqp936uqiJhZhEOAqZtaqhIxMo8qy+7GLEhoroTYNPXRkZpbLioi48AhIlRJRLTYaAHHCz1edHOrKLlbkNkwEEBK1firiFBVefji5cce+6V+0BEqASARsQjRsRv3CERQNSIqWeu6Sal2B2Ydhn70/sKiYWZmhh5obsIU4SJpPJnwUPCUUtM0qjYcj8FdWIhY1RBQhJEQEFPUpgoQ7g6IQ87CBGYRUdU1EW7Wmel4iiOPPvpJNSMxD3PFsc+Eu/svZgUOcDxqVnWIDAiTrlMrCJFLMVMmQkImIgIPU/UIEhJCBiJEIqZRtoSZiCJ8GPJoqCKQeVQxwjEPMSZMOQcJj9ppbkJJmM2dkDhV1tuYtoU4qRqyItJI9d1t3BIQYTNLKXkExBi00MPDY8iZmdquS6pD3+eckRBpTIRIyOFeXEeKJiweQSwYHhHChEDhkEtGIAAlQjMAMEEARAIGdCQkQHMIBAcAJB9NLwAxE4WHewSVYkTSD7kU1WIRgMgjAomAY3FxRwBVw1H/cDTPhkhVVU2m07quI3xs50wIIxJ3V9VSivnxPRmNEASmVLVtNw7zxn/BzBHu5qoGgIiYRJgoTBEhIlRtLEVEImR1RUZ3IxaxCJFk5mpWitrxzxi7BAAhMhKrjjU5llkgomoppaSUFluLbjIdP0RWRUIRqeoaEUspOZfxbVUNgNwh5yKcZrPFz1XZzJWIxvUC1TyqCoBDuBUFd0JgpHAPNQSo6kpVmYWYxR2IRpePEaFFI0DV3K3kTETHg3JJZo6IIkSEY6Uh4pjK57P5fD6vqpqJRjKOiMyEBKpFVd0DRqCQqrE1AUDXTcZhVM55PFJ37/MACEhYVVXbda7q5qaah2HsKxBQVWkxnaXRkZNbyZmZ3SDC3K1KlburOgugEgSM5vvYFRuMaoKA4V7ckcgRuq4DovV6BQSmHhGAQIQQ5A6qCiKoNmb2CEDElFLdNEO/0VJMrWm6QIjwnAcMSInruh7qSlUjHIgjvBRPKaFH3TSlFIkIZhHhCFB35gQgY15mZiIbci+SQt2Jxlp3DzMHQCBHJGa2ADO/f/9+09TTyXTdrwxMVZFwlKSUji/JZrMRqUY7yEwRXleVWTancFQtyNS2LUYgIBGVUtq2barazNfrjamKpDBrJu1msyEiqbq6qBZVNPhFOxuNp2pxQKZkasBsZcg6MHMl1ehtiJCFci7kRsBJGNyReTFdrGEddWz6DQBUtUSomTsGUBgMaOoRiWoOQMLpdJ71EMBLyeiCpInAI1hqqSoMZJFiAzEbaLYCCJRZrYximgARGNDA3CJg3I9KcrycM0oVIDAzgINDBNjY9iyKAjOPI9sYu495uLddm0tusd1sVsNgiIDECIBE4F6siCRTJeawAMDZbH50dITq0+lkXHQgwr4fIqxr2n6T3d20CBMgjql1MpmsVivpS66S1HUNDuAx5B4RVXXUVRHx8OKWUFQtMYzHSogs4m7M/HNcQuHgHojjF+tTkrZtI8zMV6u1JPSAClNxh0ArSswlglkQMCLms3muhsVisTpa9v0GAQHc1TabTdd1qppSQqSua6eTKTIfHS2HvhdmKmYQA1hULG3bEtHYE8aFEhyXxMYtByQkJAAANFMAVDVVI2KmAERmEhZzF2EzZea6bkZTmXPJJQOnERARs5s5goMmFDcjkbpucj+MV9yKAUQSSSnN51tmdv36ezs725vN8NJLP3jnnbdffeUVYhIIBAgaFwIiwDRCmCWlYwefrYSru41GBSIAjpkhAOacR2RZ1H4uER4R5OP2miJRRDRNKyI8cB6KqYmImzkEuCFEyc7EVpSI1aMUI0qqOYksFvP5fP7Tt95+7rnn/vEfv3Pp0qUPP7zx8UcfLRaz6XSGSELIxASAEe7hAAwQ4R4MiNS0LSl7mJbiZh4xrjQS8UgWRnMGEGOuGoaCiCJJpILACEAA1SKSAKhpWkIZ+qHfbAJ9Mp0AIx7/qZs5MzBL27alaNu289mUCL785f/2d88/f/vWzf0TB21d/fJTTz/00IXJdDKdzQBAwpyYxm2ahGIeEWHmAYgI6B4eQtxMqlKKu+ehuEVKaSyz0b26ldHJHrsMLZtNpCQAo2WW8HB3ljSZTKtUm9nR6n4X7c7WTr/elKwYGIQegeHz6UKEv/GNbyThv/iLv9ysj5588omn/u2/OdjfX8zm41JqMXd3QBCICA2CIEQHd3BAgkSAyCShCu7mQQgEWNVtU7WlmKqJjAnEiZApqbmIQJiFeXioqaoIM+txbCEKs+IgSRaLBTEuj1bz+QwA
AAGZrFeI2NnePn3y5J/8xz/5yv/4q4P9vWee+fwXf/Wf7u3uRISaEYHF8XzHwiJgTBUMpgGhlokZCXykamBWhpQEkcwcAEpWImyaphRt22YY+giPcFUVJvdAwsRiZkRSSjbVADf1cSm6qqoIH8Pu1tZW27aHh/dSqkaIvVqtzp45c3Cw9/u//3sv/P23fuu3/uXv/M6/XsxnptmLjnadmYo5HoMQIERxVbWCiNW4ocp03HsAiFhjXD1WdxepI0DVk4SZRvBsNlPNOecxRYAZIqkqMzmG1AkBEMLJAcHD+n4zcjuiZBZd1zRN3fcbNXvw4MFDF87t7+3/4R/+wfPPP/+nf/qnn/3s08vl0dHRfSYSYggYhlJVSEwQMORBBBFDEKFiRkkA4ZrDHYUhAgC09ElISyHmqqpH8SLC9WbNLIiwXq+Yqa7rqqqHfoMAxIyIv1h4G3IeKbxFEAQRMx4/DkTIuRep6rrZ3L939fLl7373u9/5zj8ul0df+9pfz2bTe/fuEUJVJQD0wDCr63qckzCxiIcbElIQukeEl1yICIkwEAGEeRRXFhl5/zEhixBht2JWxtIahsFMJ9Np23VE1NQ1M1WSKpG6rkVYSzGzgDFLhIflMqiNYqlqevHCQ3/2Z3/+pS/98Waz/tKX/qhp6nv37qV0vHapakgUQIgMJAhsFgA0to3/D+jIlJIkqULfAAAAAElFTkSuQmCC)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAgmElEQVR4nKV62bNux1XfWj3t8Zu/M4930r0armyEkGzHNlBUGYwZAklRoSqEpICHVPKWPCUveUte+QN4STkkISRUwIBNmGzJGCFb1jzcQfdKZz7fOd+4px5XHs6VLMmyi0r6qffevbt/a63fWj2sRoyXiAgAEPGi8n4dEQHgQYWA4MHXD5aLNj/o/fsdvt/bBwsRASAypHDRjAAAvr+/94fFDz8CAID4IWjeH/7Bzx+H//sev4f4+1t/XMH3//heFwQXQOhiAAREIELA97T44AMggvig4uH77PBDxPu48vcE/cFOiRAA6cEIgAhI4AGBECAAICLiBWwERkCIRACMIRERgPiIqj6I/iM0+EibjzXXD2fUxxV6X52AQEREgMAJAAEBCfBBG0REQCJijBERIgMIiPghCn2EqX9vGvw/FkQEYBeMQaQH8IGA2PfYSBSAAAiBBUJCdmGNQASAQCjgB6vnfaX+EJr9f8sAdAEPGVEAJAAiYpwpAo5cMpScS84FIkPkyBCRLkwVPFAI4iNAP9ZbPwL3YwX+ftN98M0HZf4gSwMJQCFEIqOUc6mkFIIzlkiVWQcyyrmIlIylioAgeIdAQnDBmbU2eICLKISIF2YkogcquXB99iFNv+dJDyIsvUdYhA+4yodFIwLGLnRMF2RnKDhLuIiEjIRUQgyYTLKkreIsSXMppOBcqQS5aKwPyJM0ZUhKCu+csxYgUAiRUt45o22SJOI99Bf8gxDCA5E+Evffj8FIFOCBxR+I9YF2BIAIQO9JcmFPwXjEZRzJKI5akRyoqJe2enGWC9XiiK0stcEmWVLXtXdOcukopHE6r+o4S8E5yVEIkXfaSqnFYiGlFExOJ7Os00aWLH/IuBf+//28Z0CBiBAYwgX0C2wf4wvsQfxjQshYipTLbpYuq7Tf6/TzPGUIgYSQkUyEsczb0GrlTDARxQhccCGkQI6CASKISFAgQCJ6n3uolHQuXOARHx0dPzqJXtQpIBAgAgUCZIDhQ5PNRUwABhAYF4JHgscq7nf6G93OdpS1ORdJQgwjQlmVjYyzKEu7wzYDkajEeSsj4ShQ8CF4IgcA2hgg8A04/2DGBiDn3UUYZcgZoif6gTPxR9YXiAwRQ3CAwBhBQCJJhAwJAIh5QBRSMcranY1Bd6fbW45bbZSu0iQiUVda8n6v306TRImYMfDgrSEH86KZWgfQCG2ddx4IOHltdByn86LwwQF4IRGAO8ekEoKzyWSSJKkQvKo0smT5g4h/kAAfsA9AYIgcwAG7mPQFwyTL1na2b7baq/3+0FJhnJdRzKQiyobLvQC2LOsoVt57aywChUBWkw2NYLysSk9EjJzVTbXgEIwx1tK1qw8fHh0vFuXq6jICOzw56XTaKlLn5+fWmt3t7ePjkwcCPEBJ7znhh0PhgzpHCgQEQijvPYBnXHI+XN94bHfn0Xanp22FjIApbWj70m6SpzY0xnhrnbGOuZqCqI0pdaGN9t4DuWAxBDoenW1ubLtgTo/3yWulRKSiydkoElKppDdcub93yBhfXhqOx+PxZPLoI49Mp5Ojw4P1jVXEeOkjWv8wc/C96ELIJAROZJgIIfA8v7y5+dj2zkNKxcZYYhClcmlpyJVE5meLsq7h8Gixtr7pvSmr6WJx0jS2rGfWFeBDqlqtpM2SSMp4MpkzIYF8Xc7Ba+cM51iXi8Vsksb5ytqm9jQ6O+108yiKpuMJIvU67fPzMyBCiIbfQ/mDl3GIGC4+U9zvX71y7ZOD5dWiKuMobrcGQskkj1SkDg5HQPzoaB9Y6PT7xgkUAYgRQTMZW2eB42h8RqCRAke1ur7mPRWlttYlaTIbnzSLKZDz3mnrkyzttrrlYgEBkjyZV8ViOtvc2pxMptXkvDccOK8RoiF8jzYXNP9A9CRkKAI5QOKqO+hf31i7vrS2yiRM51WvtzQY9oTiTMi33rpTLBZKCsbIAxLxum4CQJ7HGKhaFOen+5cuX6l1vXe0H8gmSVZVutNbzrL0cP/A6nplOKirkoJ13pV1jVIAYTvJIxVp77RuyHttDBc8ilQ5nzEGCCAAGRASEWB4sPSD70V3QvLgpEqXlx9bWbkWJTEwqE3JKVrf3GGMn4zOa133BoMoSsbnk3beMqZyttKaGm1lLAj5op4cHe+h9cdn+1EcIbPFbFJWc85EsbCLBatrQ44m52NrqhAMBRAq8oFaWVrO5pjmIok0YK1dHCdxkhBhlLGyKCIlBCAABQAE4IAekUO4MEYABMbVYHBzdfV6pzsAYFwwR74/WGZSNsZwwYu6qmptwxiJBoN+VVa6brJ22tRF8K4qKy4sBSM4KCUR3HxW6rpCT8FYLtn8fOEJAAQCq8qaI3CuQDBjXRQnTaEHvcFsNuu28l4/c36cZlmaZkVRrq3tFEVFwYkLxSMSogghZswHdEQAILrda2vrD3f6A+BcxVkUy1ZHOQp3bt9bXdsdnZ33hl1AQMaLoowiob0djU6RmIgjgmDMotYLpwUQRaQijovJdDabhRAAUQpF1gkuOcM0zxGRESQqCt4BV9b7drs9m4yZELtXrtRWI+PtPFlbXQaiZjFbXxnWPVsWpYAA7y3SkHMK5Al5p7sxGFzr9Td7wyWhYgc+zzMpxdHJvvdwPp6L6MyRns5OvPW6CYQoha/rhXV18DSZBs5804zrcq4JvSXJFDKw1qg4QsYYIiBJqZSSBMwTdFrdSEqGwejKo0hZxBnduH55dHLqXOWchkDoK+YLb023xQVWWcRi
qQQEBhgQASAECmm81B/e6PXXB0trACpO2iIWQgnOwnwxPzmecS76g14UI2kM3uu6NMZba4Ei57T3tRCiXoyMrpu6BCBgHMB78FJEeZLEWep8uFgLcs6C0d1uBxhY0wTUTJAxk8bUjCEDEJTX5aI8L30IwfngnK3uO2OQ8+n5LWSMMSYAiKEg8EKobmd3deWhNF8jxtvdHjARRSkTgojKcnZ2dq4UT9IYEItFKZUqi6KcTQChrEqjVSBX14UQ3NfaWcuYQCa4VEIFIZjkIoojAoxE7H1giFrXYK03JRNuPjkG0kC+1pU1GoEY4ujIA4D3LoQQAgHRYvpg7RtCQIZAJAAJkLXy4fLK5X7/UhS3agOtVifNe1xxH6xzvmnMdDau63kIrihM01jGBTfV+fmIkXdO66Y2NRIF7xvHIRBTSiRxwkXUaNNud4ypY6FCsAiUxqJYzHVTWlcF15zWPpAxugzOOucCANBFVA8XG5KLTX8gIAR8sFFhiIiEFyt11e/vbG0+2u2vO2DGUt7trG5sIEdtjWkabaqqqmaLiXW+LGrrXBRHsZKT6TmCRURrzcVOUDIeLEQqMuSzPIviKEnbqTGrS/3D/T3GjalmSG6hR8ViWldzRGet8T4AISJhuHBGBLzYMMH3TosIATgAA2CBEANHFBQCY1JsbtxY3Xik1VoLwAWP837W6eWEvmjKpqxMXTV1VVVlZYwP5CBESZRmUVHMvNcIoSwbhjyKFGKQnMVpHKxXPLTytowkQ563U2fGVp9W9cKZymprbB28h4s9+YNzIYQHu0EpZC/Lc0RM4zzL0iiWSkkpEyGUkAIAjXGMCSVjxgQCE1evP0lcGh/StN/pd0SEja7LqqqasinLuiqNNhdzm7FlCI3WWBSNNTUAGR0EU0JKyTnjxDl2Wr3JeBxnUSBLtprPZ2NTTedHdVMwD0AQiF2QFogARcyjJO/EaTtJszzLpMi63evL60Nrq16n18mzKEKpuJIJoOPShxCqqmQIUvJWngfvRa0jlbR7S30h0fh6em6caxaL2aKaI/mmqp0P7XbHe7dYmHJROFcTeAAOBJxzqbgQBECCKSVEcCZvybKeuLJw+qwsp9Z6AI8BgyQMnBEXIu52h3k7iePuxnB5Y/NKmi+12nmvFyvBozhGrObTQ2fPyvmds/1JWRbGlEWxaGpjrWPIvLUh+DxPvfcCRaczHDAJ2tiymlWVN7r2XpN3xjRc4mQ6DcEJzsl7oJBEKlBAkFGUMMaNta08UoLpSutyVtGCqDCLxawoAQMHLoUCMsSkUINut9XrLi0vb1+/dnNjs91u8Y3VpbqZ7e3fOz587f5b+9NJcXJ6Nj8dY7iI7EiIKpZckZIxg8hZFykFBErKYlJbY8X69hoxMyvmVof5dBECAIRIKW+qsqm1bYgcki/mRVHOkkgIJq11UcQEp6XhoKoXWZKV83G5OGqa86YsrLUXHhhHieARAG+1up321WvXfubTn9l4/JHtPKvv3Hrx1mvffeGvXtx/995s0lgdOGdx1FZZlHfaa1c3s0RFMVNRhMiFigBC4zwxqRsDRK08t1ZzJGutmBYjRDY9H9d1IXjUNKapF60sBecYQTGbM85m2ngfEpVyxhhnl9Y2i2KapUk7lVVRHu7dXiyOm7KgQEg8ilKvTNDIRZymm5966ue++NM//fjN6+3O5OXvfvUP/8fvfPfvXjjYnzCM8ywe9Ic7O2m71UkUxqkErixFAYX3HFjWhMgGgSbmMsYoccSwzSKlTHBBuECBpYQ/9au/PZmezscTyRVgqJra6EoC5Gk2W0zPJ2POOQuQtfIoybX2S0sdqwtnGwF6dHZvPD0gBxx5CEEptIEYKMnbmxs/9TNf/KVPf/rK5qp9642//s7z3/jrP3tWNyHLuoNBt9vtddsq4tK7tqamCV1ky4HHKl2K0jWIMpXKVqbSNGEgBDJAzxl65xA9ZwHJIPjgdSDAy0//a8FjhVBWp4tFHQC8r9IoEoQHRydpnkUqiZVqTNNf7uWt1ny+0NXM2bOz4/tONwAguGScrHNEkrGtX/z5f/GlX/qFp39kePD2N7/6v3/v63/554wxxrr9djro9tudAXHQPiLWlXJFdHa7S1dV3onyjKFiWAtZzMcHEY8mo3ldzev6NBEoRcSQa9MAhLJaxErGSuZJOp9P8akv/QetddPU5WLsdOWspsBtY4wtO50+QtKYMk5pfWPDWHl+OgK/fz66bU0gYIAO/MWxCtve+vFf/pXf+NxnHlvuT5999k/+7I//9GRvlGed5eEgTaMsjTwk2uY8vdRdvrm6fb270jZhxhjVRVlMJsHUdTETPCB6NHXEuBIcmVOxpOAZEkdf1cXF8YiQomksIGPIcHjj11fXlr1rqnJ+drLP0FOQDFHKBAmA2TzrIooo4Y2enBzd1tU4OMsIkYMngBBf2nrqt37jtz73kw+NRre++kf/61tff2ZS0tb65vpKJxEiUb3apYH1V69+ob20tHZlh4Stpif6/JiKxWI6N02dRCySwJkNQRN6MqYpK2OaxWKmEjVfTL1zPIA31mjtyHnvOFf+YgPZvfqPlFJZFpfzWVOVgrMQKI5zZ30cqzzNjK4Za6bTd2fTU+c0hQigBgKG0cb6Y7/8S7/ymacfLYq7z/z5H7/54h30st2LkvYwklGs1qN8N1/eGqyvZp2ou7QzOn5bL06p0U1RGVPGKQrnkAK4RpdzU5dNU3nnQghN03DOGEeRRJyzLE2QM0BgDIzVrU5e100URWVZY+/qL3jvO5325HzMkUcqcsYBUqsdZ2nfNk7rg+nsnikX4GOL3lsHYPrdJ770pZ/+saeun528+fYrL9+/e5+46ub9WNluZ83yXpRfWdv95MrmRpY6EUanB7cnc+7qmnnDfI1QW1tpoyWYxWzMOecMpRBRmikZxzmPEglAPnhLIYSgtbbWVboJ3mmnhRB1U+dZToSYb/0ckUUEBEqimAJJgesb601jrK0m5/dnkxNrG47EkLlAWfLwz37xZz71xCdZdPrCC8+9/HfPgfe97kqnm7bzVZnvZMMnllc3VzZ6vQFnvjp7d+90/x3TNOQaDk5X02Ar75vG1AGhlUV5nnDJVaQcwbyqF0VVaV1VVVEUWmvjnfeeI5IHxsSDLAhRIGLIApEQnFtrgUIcR8H7OI4lYxi8NWcHe29pXVgdOMcAIMTyJ68/9bnP/MTNxztvvPQ3L7/83MnxPFHt3lIrzvqD4Y2V1SeXtq72t1qbfbmYHUwPz6bjxXQ00sVUoHbzIxNsCFpEnClsDfpMqtqb86qYns6KsqxqXTZGGxtAcuRAEEKQKjbWSM6BvHceLpJNgIwpGyAEwtbWF7lADBRJmec5Ywhe19V0Ojsoi6kQEtFbA+urD3/2H3zhkz9ytZ7tv/HKM4cH40hkSsZJ3huu3cz661u7D29t7ea5LRcHi8l4ejap5xNXzfTiPLjCuprzRZzkSdbS1owm56Wx87Jpaues086FgIEQuAgEWlsEYAQuOGTMWsMZOqOBfABChnGUMKEQmQ+A/cu/6JyJJE8i1e2
2rGnOT+/PZ6cX6x9rfRK3L1+5+ZmnPrW2FJ2e3r3z+q3FpOz3h93Bepwux71Lm5c/0Rtk7TwoaQ/fPZyPRnU192ZuqzPmaw5OKoY8uKBrEyZFPV0U87KyLlhLAbjgwoUgOA/BByJjra4b8h4JtNMBAZEE56ZuOHKCEIiSJAXO4zi21mFr+0t5FuVpAmSMHhfzyXwy8qQ5Co6snV97+sknrj60Ibl++80Xj/dPFM/7/Y2l9a328vX20u5wdXU4jG0xmhwf1tViPJrpeqxEyaBsqnkURcj5oq4WxXw+b0pjCmOtJ4acAQcPtdPeO+MMUAjBeWOsMUAMAQmCd9YHzwVHJGc8EBCBD15KYT0laeK9E4hMSMHQN/X49OReXVUcOOdorbz66Oc///Rn1/v+/PzO67fvlNOqlQ6GS5davc3VyzdXd6+2unEMzfTk/vnxcbOY+2bO7FiEWRzLKEoaW5wX86IxJ+NZWRvnQCgJTAEGH6hqKqLQVKXWDbHgnQOiYH2wDpERAgNiyCB4DgyQEiEvctsAFMgrZMx7RiDyNNZ1U05PTHluyppxQUB5tHPz0596+rM/uZxNXvnbZ0enI/JqfeVKq7PSXb526cbjnUE3yezs7GgyWUymC6un6Mbg50parrwBGI8Xe4en55OFCRAYF1EqmQsh1GXhnQ0hOHLEsNGVMRoZeusgUMyFihPBeaQiBhhFEedCCYkMJRdKSCBkDELwjGEA9J4E0tjqej4+BGcEV56xldWHPv/kz3/ux6/vHzz/zNe+NRuXg8Fap7XZ6l9du/RQf3VpZ2d9dHx3sj8/PZ4xbiNRxjAp9VljmyhOTyeLk8ns+OjcOa5UEkUSma+qopxXZdMwhkJyCoGAtDZeG8W4FErFSnHRy1tZEidREqvUO6+UElwy5ESEgIwJIGIIITgAACYQEQc7D5+PzjBIxn0cdbc2n/jCF/7xj35i+Mq3v/LMXzzbzgcrq5urq7tp99LGpUc3t5aAptPR+OCdPfKAvM5zNz191zYNSnUwGR2MTg6Pz9KkxRgTQhpjddN4a4wxnty8KgGQI4u55IGUEL20k+c5FypLW7GQSgpGQUaJdYEzTgCccbxIEQtJiEQkOCvLBTImpFQyRoiWEAGZivPW44/85K/84q8N+vM//9p/ufXyrW53/fKV6yu9la0rjw+2r0VxCM38/p23jw5GrTbv5exo/3ZTz1Eyz6PX7757Ni+c9UIoJXAyPTe28d55F4InxrjTdQghlnE7aQ3yTr/VybNMqCRJMus8ZxIRrGkYULvXI8TgvQ8+jiMEvMjXMiEvBPDeaq25lEIoxHiVM+Bq8Kmn/+E//dV/wvGtP/jd/3Y+OlztXN3c3Fra3L60+9Dy1g4qHB0f3r9z2Cz8o4+2j/dvv/X6S5vb6yxPX7799pt375eFXV/eETycjPYq7axtJEfyTleVqXUcxcOk0213V4ar7bQdRzEXMjC0HtIsF0JyIQCpKBaCYavd0dpOp+Nupyslv5hqq7pmjHMmnDVpHtd1bYxXUYS8NWQ0ePJH/9m/+be/drD3R3/y+79vm2R7daMT9TZ2r19/8rOdpQRM/cZrbx4ezR65sZup4tlvfIW83Hnoypvv3H/m2y8FVN1ON4uj8+OTolhUdYEydaYKugpaZyoZdpdWVzaWW0MhFGNSScWF5FGEkrPApZSIyDgigm4qZBgAEZmzFgCd1SqKnHOcv5cjBSjKORFwISkA8uzSY4/++r/79//y/jv//Q++/OWe3Njd3OjlyyuXbw4vP7m6opnxzz/3nGL8yScee+feS996/i/Xd294Ef7ymW8dHE7WV7fSOJpN57NqBtzouna1KWZT5jGP8rWl9Y21rThOAmGWtoWKVZIoLjgy75wQ0gcUkhNdUMYG7yl4gsA4cs69p+C9895aB0AhuIuEewiWMc6E9C6Ixz/xm//xP/3zb//17/zX//x7S73NldVee7C5+/hPdYZrN3byd+/uPfs3Lz78yKWHdjtf/cqXXbBPPP3Ut15+9Zm//S6XvRsP/8hsfnxwcg6eIhRkw2wyn04W24ON3e3dQX8guADGslZXxVm737FNItGBrwFYUNISekM+OKIguGUQkLyzTrvQGG1NHSnhrRMqBSYBFCCL0zaXGAsfR10eg7YOv/Z3e3/6u7/9J1/5P1s7164tr+S9raWtJx57+NFr11aee+6bL377pc88fUOXx3/7za9fvXEtqOQrf/FXJ+PZ7u5l7/D09NS7pqkbArJNsxhNYsEff+zmoL/MkAmmut01IdMAVFbFcDCoG+PRIjjF8hAMoPOBBa+tQV3UdW1koqIsYQJVHMVZG6NIe8c4NE5XlSnmjbUuS62gejGeRXHU7a/hb/7av3rplVdXVlc72XB1+frmI0+vba1s9JLnv/FMWc4fubF0740XR2dn1x+7fnvvnWe/86pMO3Gc10WhTV0UBUdAoKYovbY7a1u7G5ea2kRp1moNAEUUxwGsUJwAwFFR+aTFrHZgfFPUxmpTh/YgznpDmaZx2mrsfFaeNRam83A0WvjgjT49OXyzWkyyXq8zuKTkoJgcvnv3+XbSbued0+k+Pv5jP781SLeStWztkY2bT21u5ixUr3/nzQ6Gq7vR17/xtThtr13Z/ParL79196DVW7aO5tMZBRKSF0VhqioiWun2dze2AwltcXltM1E5l7IxJlDI8tzaBsmRJ63NvCiK+ZwIs1a2vLqKSe4Qzmaz2eLw/t3XTw/fIVsX2g5Xrw2Xr/lg33zz2aoYgQfG5MrGzvbWjVdefO5HP3G53+pZR4a02Fi5tJnJlcuf3n70Zq8Titn5/Vv3M1YutcNf/dnXh7tbLnZ//I2vn09tu7ditJtMJxSc9/74YGIrvTpcun7p0lK3p2u9vLreGq4H5E5b701AI6SgoJ1trNa+5otqHLVaVx55RESqdOXx9Gj/eDQd43xiwR9Pz+5cv7TTTtom6O++8mY7T6azWTEbEXkIwnsanx5GgryfcojX1rpf+NnP//7//KZYi2Djxk9s3fxEL6fzw/2Dt++vdVU9Hb/00t2rTzy+Pz3+5jPPOWiJuF2WTbUom8ZEiTo9POI6PH714a2NnUhJ5MgSGbU6rU7vdDRWgiPGKhAXNJtNZvO55NH6xuWN3pWT2fjOyeno/J29d29Nxqc2mF7/Uq+zvffu8cbKyvbmRr/b2j+Z9IfDt++91u/1wHsAAPCAXlsXALiQUsTHR5MXvvOds9MjcfnqT2x+4jN5XBTjs+O9vUFO9fSdo6OjnZuPv7r39quv3wpBBgxlMXfaeuOtsfPJrJ92H715Y9AZyihttVtZK6sb0ziazWZ1VUGchhDKwlRVreL4kcceUonYH+2//tato7NK2+L08PWDe3e8tYioSxNHwrhaqWEIzrtS12USp3Vlele39g6mnDPGyVkvoqg73DkbLV5587vbK1tHR8eVtvjlP6x7aTE7OXz77p1V2bj6eDw9yZaXXr7/zgsvvTFcWp+Vi2I+8wFcAK8tWr+9un55YzdN8ihtR3mWpSkANHUjuJzOZwDIUTS1SZ
Ks02sz5d7ef+v1t14bT8609+3uDufi7u3n59MRAABxAN8frsZxJtBtbHSbqkESB8ezwAdbVx47HZ02dZGnMTmSsWx3e9WiODl8SaHRjUeRiEHfT+7d2b93krM66JPx7Fz22i+8+cYbd49bnTXtab4oIQAR6rLqquTh64+003arNZRJnuTtKJW6qauqtsYwFER8Ph3n7cHSxnqQdDQd3XrxhXfvvdZv95c6vdKVJ6e3W61ucBZJIPcMhfPgCJaWdw/27ty+u8dBoIg8S7d3H0nT4c5OB6nhGIIzLhjvfL/dSsRVaxcupDZkYnb09t7dO8xqLovT4sQq9ea9d956Zz/K+ijVbDIOLtgQnDa9JLu0vLG9us1kknb7jEcEBIHI+YgrQ3o0Hsdxvry+rlrpWTE+2hudjo5efenbl1a7j1+7ury68u1Xb52ezYtyEqddrbOs3crjaDpbDJe2VLTWWxJVPZWSp0kUJ7kQtqn3nCvBN86auimtbbwlJBTcy0imeSeLh+Kd23fQLkQoF8XCCnln7/D+4VnS6jfaLUbHGJg2vq6r9W7/2sZON+txlbSHKygVhOCMBcdiGRky3uneMO/0hsD9nf2X9/ZOtOZFNRUcVlfXtK1araIp60Fv7XQ67g+2RWTXttZiydXxuNvrBybbg40WbgU0wk2qxfh8fOj8gwgWAlljiDwBIgFQkJHo9sr+MhNcT8hNJ+WCpfHJeLZ3dK7itjG2nC+88d4BmbA73Li+fambtVWURVmbR3FV1wIBAbwj57QHv7S+3AS9MKPvvPCtW3duMaa6nVVPJpBfNFVVzx2d1ToUVcN5q9vZylomzyMLmPTaKIiC8zYwFgOrT05uj472mqaEAPjeVQ5EICJgFzclmdVuOj7n8lSk2Lx9cgLtfHw+2ds/i6JeWdV1UwEx65oo8GvruztbO3Gacybb3W4A1E1NzmnvIIALpBLOlNDIbr/z9muvPXfnzsveMQSYTccbW5uc8b2DozyNDg8sys54dr62/hBiw6kc7R1oBKkEE6nV6L0VPDLmZDI6Mk2NAd+7EwvICD+QELy46Gl0XRZTcXR80BDWRXHn3QMu203VlEWJjBVlwwJcXtu8eelhzxTFadbKBWdG66ZcWBcoUBRHSTsWqbh/uP/KG/eapjo6OnEmXKRKG10URdPtbo3PDwVH7Sy5otftKhHG4zeK+aiumkA+Tlud1qaDnLjlwum6bmpLhBe3QAn8xW3ci3wxIlHA8N7lXGfN/wXD/2gb95vvzAAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAcaElEQVR4nD26abMkyXUlds697hGRy1vqVVV3bb1UdwNoECBAAhiSIAccymCzmMmGY/ogzQxFycb0k8cko0wCgUZv1V37W/JlRrjfe/Qhi8r8lGGR4e7hfs8999zDz3/7f2RYtySdvqC7MojJfJYF+wAmXV0w0AmypgXpCjA92EtxRAcSIjCYAzCopRlRoAPgMnfBmABIdhDUQAMEUFI6Ko1ggzJplpFZrbihJSpIyGnFFRkLIs0MxayUrrQSEDLgDhIAjCCLshIJB0CCDlIyANK8LAbSlOrqNAIAi6CAhbEoJHUwi7nIUAJFSCAMxcWgpQICAEESQFBwWQ+RPnqp8FQMKGHdwTRGMQWrTAwpECgJHJ8C60onkubgrHQoxU4QKs6RkiOhhugF1RxAJwpYgCYQYhIAe2YhXEygS5npJNCF42ikGZkpiKSQII09RYlUqUYAUkOAZGkSlYrIBjrNWaHiyDQVAEiQIIg0Ie34wimlpxAJB4xzikqwJOBwAg6EqTgEkjApUzDSYEBCAOkIgMYkgSTAVMAypQKXSQpJCjcRyEgWaWQ1ZlgSpSZhYGGPzEgzpGUiEjCU4mJKICgzmNESHQCoZIiQijCnoHQnISQlKiKMIrKjO5CgQKaANDPRg0EQsgIaQadYgAQyxd5DlkVeORbHeuK6cDVMk/l6KiYNg5l1pUazMCGVmQrBraUikqRUi3ScbQF4PIp6900lBDgdoNIgD3QKBJmZmTJPSyCFQrkpoZCjKZ0FxuiN7qAEFjEyKRTjRL9/sroz+vl2NU62GTGiuQ7FerR9T81alvng0DIfDoElG8VpHIdSsqvQhjq5G8wL6MZ3ywAUGUISLGYSAAnLMdDMnCzSDBPlTsBEtwiikZbhAV88KzMb5kKTFEmBQjp1p9YH6/X9jV9cDO/fHZmH/f7m8vLV/Hr//M3bee6Xl5fZebm7FblfGsIyW7EJbK336qVWEyGpFK8OuBWZMUU0QIKJMKtgl+Q+iD0QmU4DuQA8LtJNBosM6FBYujFBRxgVSNAdYubKODDuDHZvM94/Gc/WfrKy3e7V25cv/q8vby5fXF1eXd/u2nxYQgx4AktKCIlu3nMWDGXPQCak5gRJAKm5uluxQoUgSOZQpqQOWBzfeSMEAhSIjuZykgADabDj6QqF891dijpYnWgbz5OSZxMvNnXyZrq82V//4eu3u6v926ubm9tDX3o0C2ckRGZGIGCWpJGZIOIdxiZIwQgeUSslkjZDaq0oRZJ0KaEgjoFgoEhBBGQQQcAIg+hEKvLdw8ONmWHgAK3Z7g1x/3R9vqqFh+XmbX+zf/b27dX17vXVzfWuzy0kdOOcMKCnSJEQREIJWmZaMiVSBoTBIR0PuReSkojjP4yFIo5ZVMkjgECCRAkEzGCQABL1OFT25sfABsOSaSvk3QF3xzircl1j//bt1f5w2F2/ub29nneHZd+iK2aVTqN1JWlIxrswE1MGSWqgUqQZMo9DwDsFSVIy4TzCJl2EsQQXkwYbG7tIgzsyLSVPudE7sqI3iGZOViLoaUjJc9mST07HJyfD3a21uHr9/NmrN9evX1/d3M773luL1hWimIdDT0sQThFIJMkAkhACSprDK4ORLQBkiVQKckKBd3uAyCjGyTmYOVjM0sRgIAErx/PmsG7HzQgh4SxilxIZoGDKfm755Gz46M7JyUlCefX2zTdff/3tV9/eHvqcaLCeziQRst7VO2HKpJJgemRpTKQnuhlnoUWn+jtyITi7GwZHMS+OUoqRyRjMTrbjUNKN6igGe8dDikGAaCpQN0WFEtlhGcWMBT0TC7lyfLDqPzyzizWfXz/73fPn373dvXpzM1/PsWTqeBa7LJxMUIJxKsVazA1aQplJHulSUHS3cbB7J37ntF7c2V7c2d6/OD09256cTZvBp+q2TvdSitNdkgGK7K3tbw9F4WIYsicAujKPW5BQMlFkaOgOc/RNyfNiT054PrblcP1Pz96+eHV1edvfzq1HZGvGKkHKCNDYya6UWaB3p5yrWi4K68BauarDtF4/ef/04ZOLD54+efjw4dm2TqerYSpDoQ8VzN5urfckQZhbqWPmkbx6BuVWEgQsIgkTEUjjkRlpIZTB7A4/Mb635fsrlLjp+5s/fPX61auby5v9kgTVQAAhNLVj0HcgCCFaMJrc872tXVxs37+7fvjgzp
MP7j18cnF+sd1s1yebzXiyXd05K7X0w3Xvh2y30VufM5VSy34EvaRhviUAGpxOM9CLmCABWhJQUhIQ+BdGZAPybo0PN35vw2W5+v67F6/e3Ly92u1bn3s2uBUCikgRUgZTZBBLaOmytLsn/vlnF5//4OLjD9//4OlHn/3483v3zxI3cz/0ufXeoN18dXm7tIhD9kVakBlCxjGJKyPBfgQsM0vQzI0GoqTBlGYCZASllIxWUAKxxfzxGo/OvB9eP//67TcvL6+uD3PTPjMBK6zIQxAZMrB4D0RoTkrajPbJ49OffH7vpz998NkPH7/38N7J+Xa1nbzgcv9tny+1zG3pkRG5UMFMRLZsiZYpBixBQIkwS3aTTBAtIRkDIqyQzswOI3uKoJXi2VUYT2p8dlq2df/yzfcvnr99eXl4u1sOrRnYwQb2SEqd7vBIdcSywLvdvVN+/MP3/9UvH/30zz5+9Oje5nTNwlSPdnP1/NUSvWePOCBmKtAzI1oGezKys0WCXcG0NME8I4l+LKsAh1IiQIBmpfQ5hc6KKG5Zzdi0sv2jaf/Z2WCx/+Lr7797eX1706/nZUkF0RgkLGAD07jfRxEjYeD9O/VXP330b/7m6Z/+7JOL9+9q8Ii+xKFf7Vq7YQQUEZHHOfeUlGrqS/ZEWgaEEOmJRiFSQJGCAKCEEA7Gu0AwlCyMdPOqUAGRzDz35Qcn8/0N3r5+8eXX37/d6XqOlr0xZSCMwSA0OCAmordS7cMHq7/4s/t/+5vPP/rh4/N759XHaPN8uIzDZS5LW/rSbiRzQdGiR8YxpSAlBNAJKZQOF7NBKUkSsnfIcCyxoAhJboIZRKEkKCVhTFXtH479gynWufvqi8tnr65vbud9z/2Rp0qpolQyRe+RoRx8+PGHJ7/4xYO//ItPfvj5g7O7GzrbfHtz87K3Xcbc90soUi2WRbIeyGwRDXLIA5E9DASQyYQ6lOqilBRCKaSHEQoihcxMduO7j0pSSQ7CiPnJ2B6ulna4/t2zF2/eLrvMg6JJeeSIqgF1ZgJLi+L24aOzv/zFk7/4Vz/4/PPHJxersFjaTrvbdrs/7G8iblt061Rmi8weFAWmekQKUGbLjtbcLahIQoWC2IOy9BAkQP8/tkAUw0yUQUxKhaQpB86frZb3x2V3ffni1dWrq9u3XRIaElBNLsiW2ToOgVC+dz7+zS+f/A9/96Of/vzTO/dOjFza25ivcp6XZbfMu+VwiOy9yxoiMkWRGSkpIno/hgy6gkGGgkgGUk6nrBsoSJ6JbvIu6JhYaUewzDyurLSIU/SfXOQH2/HZNy9//+z57T4Ps9J70BUSLIrdRu+ZtfDOpE8+OP13v/3pX/3Njx59+J5Pw7J7My+7WK56P0RvEa23pS+R6ZGJUO/oyogeyASQps6EkkhYCkySSXPBm8RkAMxQQonu2RVqoYQIMztSWANJlHuGPznD+fr6D7+/fPH6+vWN5gBMGX5102GU96Vh5eXDi+Hxw/rLn3/469/86IPPnvq4bW13++qP+91e7RB9tyyH2Idaj3cvLyIjkpFqraWlxERaplkNeM+W7wrCJlksoh8yrJvh0GUZIQZaJhQxUwTppMy7Gd0dKOXHd/K03n7xh+9fPp93c0uFgyG0DCsIqgtnK3v6cPz8s4tf/uoHP/3Fn7z36L0ldzfX38y3b7PvtZ+XZW6tZe/sjPRQJpbIbKGIRM9MKQQowU5LAIlM9LYw1DODvYcUjrRAzgHAmZ6yRutaLB2OJAln7waikIhS9OaPX7x4+Wa+2rclFxQmLAEOpSN64HyNn//o5M//5NEv/+rzp5//YFqP+8Pr/c3rw+FNLLfR5jwsS+uRRMoScyiSzIyMiMywTO/oiSJ1gJJF9oylN7SIpSE0BhRSy0GiG5qZgQbrhjSLHB2QZ9Kc1dQtgDQJ5eU3z757tb9ZslmmGaDOnmYd7MmTAX/1Z/f+7u8+/+mf/+je4zuR/fLqWbu5avPN0nbZD9nmXLxHpExJ9GgZAi20ZO9CJKIjU8nMVAZ6y9ajw7tKkyJq2gA7CpdMc4MZCMJEVxSis8AkprMaBRqdBidUXr3Y3S7oBFy9Z0Ld0dLmQ24m+9tfP/lP//HPf/yzP9mcDcvyan/9an992Q63y2Efy4IMRfSwHhE6ZnxrYCrUFVFa5tJaNGaUzDxktvTImprCPc3NSK8w0Y+/lUYXlUcVj65KIDOiI5NuRiaKmVVDFrG8nltQQlla9JSci3x/28839p/+/Y///n/6609//DR4fbh91g43/eZ62V/Ny3W0ngsQ1puCCyIj1UChRFdPKYGFreGwaMkyR5HQMQYchTAHaSqDW3jSVVTcPUWy02gUDP0dHUVPtOgZIPtQUY6CNnJJlJQF29xbEPLSMw637cE9+2//8Ov/+Pe/uf/Rg7fXz29efVm0i0NbDtd9v9chKapj6b1nSMW69eiLRaJFc3RTxyHnXdhtruYsaQO8VAwOydJsAFgImAwmmqyEQpEMSxeKS0mZw7OYGWoxEWQpQHGL9BTSe+nAMnO3KCtgTeLnn53/t//1V//j3/92defk9fMv3r74qs+7w/422tzmhlRPhkwKpDKV6g0KC0v2eezRD63dzn3WkLbuWQl3s8FrMs0JFCNwVFeNhDmtFDtq1k4LJYsBR8pmbogUNLU2j6vKlIWBHiypXjoyjM2UgcHtVz9/8I//5V//9j/8dT3hs69+d/3m2zhct91N9p5tYVekIrMjI9CCSxRD9ECjq9dlyetl3t/kosnXp2Zjdeo4N2dhPeootDQUK/TRSxlIq6UAANLo5ShlA0qRhbSC6D0javWJBZSlugHgWAKpaj1QxV//4uE//Oe//te//ZVt5md//OerN6/a4Tr3uza33nv0kBjBJbJBGerqPZPdcrEmHrr2i8257UOpw2BWzYqTIrNINBxzJ0t1J1iGgkr3AtgxrZLqIcC8EkpDgYpZpXVaT8FLKcUlIitgmVk62dJH5N/95eP/+o+/+dkvf3RYXjz7/Zdxs2v7fdsfYm7ZsRwzrJSJ3tmkaL33WLKzD22xXcMNamIaSpkMMKQ56EedxFXlcOcw1FpGgPv9fhyqFXM30kkDRLLtDynUOiG70kAzJllqYaa8AAaI5gaAgbLrKNb+7W8e/eM//O0PfvZ0wdv9i+d52Pd+mA+37bBkpyS1pNihTPXeeove1cOXqLvw1nxmyRhKKaxm+U5Rosto1b3WwQu9GK2WYgD2C83rNFZCoBtNgKShDhlh5kf9CBatt+obMx/H6qQE0jKPMlcpVP7mrx7/7//bv/n0J5+m3uyvXs7zlWWP5dD2rffMHiFm9Ax1IAO9Yw715CH8ZsEhJmBiLSvnUKzRG0DEaJalTMMwDiUrBzOymnkpZlYiSXOvRZHRE++6SjEMQyiN1pTuhUAImd2suBX3IiUAdyOpLOVvfvngH//nX3/2k6dz3uwuX8Z8hb4sy6E3sBOpgCLbDGRK0tLVm7WYbg+4Dp+5GtzJyc1g3gEprWgoUx1Hc7fqxwgd6gCEV6cVyrcbb70FQPfe+uhj7
3MxSwFJGK0YYUirpcDn6IAsxVIMhMIAZPbyX/+Xv/zhTz66jav95cv5sOuH/TLfLnNjc7ADShwRTQJ7i2XREn6zoGVx20w2BBsVCRSlWdZhKtXKWGGVfqwKWUcXQDICyIi+lDJ4cXVIWTmogzLAneYOd05lIhyAF3c/AaiUuZlTyQgp+7Jk+cmffQy0w9WrdnsThzmWzCZ1KJRAT7UQAupYGrLb3HHda8NoVsyraAOrwQQWxzCshvVAK/Rq7jS1rogueiEiaMWUVt3BbjYo3VzuLO7Ok1JI0ovX6sMwmLk7ax0iIqXifHfuS5r7Yb/s93OB6/b2ct5dxbxb5jkbFVQiM1IMKTNzwZK+dOy77brtMVhZGxwETJCZFa9lrMUrWEqtA+heCqlSsLRmZr1FHScA5qW4mY/G0dyH0YahljJMw1DKccZ1GN2skO+00C+/+D6yTVO5utqdnpxd3D0fpzIfekQv8+51m2/74RDLIXvLzt5jQSADyRaIzh6YI69bmVsVy8Di7gZHyiI4DnWoMJZxMJcPYyluViLlXvsyj+PgPopZaqllqKW6m1kdhmkYWMca0S7unE/jYDzqmjI347v27e3tvkefpqktvZbxvffvTVMBNY4UvLT5ts+37faAPMo12aMnmR1A64kQd8mbPXYx0dYDrdKIDHYUr7Yat6tSfWmRyZOTM3kgTamM9GFAobub11KH4vQyTlMdBq43J6tpPBz2//z/frPerh49fLBerVMLSUOVEiYkScu+X6+n9WZN8Pr6qlSN49h7wgREacucMTOjtd6SmSaxRc8kZJGYm725mRYZvMgsvKbRMmHp47CeNlazuNdxPNzeAMxMBQA72Z4BPo2DmQk+VF+vN6vVNK2mzXZ68/rNP/3TF08//nhzMn366ePNSbUkUQA4Xe+65AS53qy//PLb3vu9e3drrfOhn587ePQ8lKLlWsrIpWfvSXVLMBJHXDksuN5zxkSr7sVpAnr2Yr5ebYapWk33wb26V+pkfziMqzHV3V3BYRrcVQrHaXO63azXKy88poOTk1PDdHpytt6sv/322+32dDXROBBHQgEzAklis1m999570fP8/HS9GWstSpodmZLK4faQXaHoiDj20Xse+3y7OS73Psd6GCrIYqZUAKX4elqthtGLqb4L4oxey1A8IfWemYtb3VhZb8bNZtpsJnd3N4k0gbHdTGdnq3me71xsXj4vV2+vVw/PlCkQTEhKAgRB8oMP3wco5bhyKcCeoaO0VNSYgWbqaano2aJTWN3OuNz7rMHq5KV0MpUgyziMUxnG6malDnJSzFQtbjSzYX+4dS+bzeb8/M7Z2clq7eNYax2UFJJArbUO1a1cXJyTKsUfP3nfWHU0DwBIB0SjEgDIlHRsWhiZaSRIuAHw0rO1jBCWHiZklABbs8s9Z43mK3cDdSxDptUwjINXFrdaB7MK937I7XadCMgUOa0262l9erqVtFoNm43VYaAmCeBCs+LVzEA9eHhX6WZcbyaiQE4DIZgDwLE8hoCgHf0cdvQVZCYZgEgvLbMh1EEyOqS6iG/32ueatZDePWkYwqdhGjaVBZC11ksFTZv1WLdjZPRARk7r8e7di1Ltyz9+vV6dPnmyHUeVUskiCXD3AjhQyKy1ZogwK5bZhCQHI4U48hxQXghB8Mxjq7fQLLORPO5SUSJE9N7FTC6yt3vs+sSyYnWDpTrMyjROJ2uaipVaayYTcef8nDo6aNLdt+v1/fv3r66vrq8Ovdn67nZ7sjJfiIG0zG42SHkk0ADJpAvwUjwyJDnJfyEuInuPtl9W6y2CVOrYR6K8wMxBV6oEkKEOZaLlcL0vh7lyWJmXQkdktVrrtDpd+VirF0sbSwU5rKc6jH3u0VXqsN5uz09PI/LFi7ePHz/89LPPvvrqj1988cVnn31USiU8xKPfzGg0QcyU+7tLzgLZ8QaaSYLyqCCaSTA6IAFGHmV3HFv0pan3bD0M4fuZu6V63dRqCaYSjnEaV9v1WOpQhuKV9KFUIQjN+9lUavWT8/XZ2ek4lnnpZ2enMATmjz95/N23b3orw2CpLObHfjuN7p6pQqNBChxtbY4jhmbiuAvDMI7jSmihGcfwThwObRwHt2PfkiUDEZ5RDjNvZrbixc3NkSDSig9jGYdavLhXQMW9Z3dHa81Yh6mcnW+2220p1Z0X59NUx9/989ffPnv5y198/slnT9wQ2WlGDlCTJSwlmgEIokoUQCtCLG2e98t+t5ydn07rSpqEw2H58psXiu7GDL1+/faTTz++f/cOmTCV1hlRl8WvZwZXk69Ra5IGeB2GsWRK4DgOpTqEw2Ffq5tVZ5lWq4uL8+12RYO7xqHWWrYndv+98/u8N46r4lXqSpoVcEYOQCF6or9zu1Hu6NFfvnxzc33be6f59dX+MPenTx8fzVe1jPOhv3jxioABxXl1dXP34k4ttcVcMj1z2C2cVauvJ1ShzAgffLOtw1iRvtvd3jk/L6VGtt57a+3sbHWyPb1zcbZaDWYogxnhpYBWCp88ed+LZTIBc9BoxgTIZiTgZpSEbrJOU/Q4HPowreJ2P60LZFfXh9Y1jgmzoZ6sNyu+5HZ7Wpi1lpPttriTILyo+/6Qc6x9WHEYFsAya/XTs63XI9Ed795dL0s7OT199XJ3dXU9DmvjcPfexWYzkl5qKSXc3bwYLXKppbpXUbCIXJwDQeQY2JOoXpR+ONxE02q1iVjK6Pfu33n54s0fvvgGso+fPhSue96OXEO43b95cP+i7Q8fffThZjsCMgQAUavVuuwP2i01yjB4oQAzkutxqnUwH4rV9TTVoa5PVvvby1ptHCdjbW1p7eC+GoYJ6KTRzM1oZjZKBDqdmXCuzBykeCg2tGX58puvdtf91avXH3/88OR0rayQ977f3c7vPXx/ux4+/fTp//1P/8+L51ebj9aG+vVX3+72h6EO7uYFhegBieTw/Pnrcp02+6oMxYsBlGFajeNmXepYq61X4zSu5tZaNGgwyydPHi/L8vrV2+fP/ezsdL12UVKSJqWyuxUaRRFWi0Uv+S+U4OXL7188f/P7f/4uup+dnVxd7m9vl9U0GnK9HtebsbR8e3nz5dfflWFUWuscBz1+8t6z716fn52tVxOzH6H32DWeD1maF6IaEQiw1LEO21IHr6WsVuMwTi3CzJbDwb1s1idn55vf/e73R3/L99+/rIOvVpM7wUyEIgG5j5CBIYnHGsUCaa9e3Tz79rVYesYSLRLz4TDWwWrJHm1ube7b1baa3X/w3mo1lmIStyfjZ5snECMbwEzgnS+1PXhwVszWjkFAKmot63G1KmUcxmm1qrX0aITt9wth4zSNQ7m6vL57997Nze7t5S4TpJ5+8oEUPOqHIARlmBnJTJnD320tLy4uXr+6Gdc2TStFfPfsxenpuN1spRwGPH50L7Jv1qe1ullR6kj+lAKPbU1kgu8M2BDkpRhssuJ0s+LjOIxjAW2cptVqAtytLMtcSpmmcbOZ3ly+3u+Xp0+f/OnPfrReTz3mm+v5m2fPj3ZHiLXWzDzMM0nC3KYj
3kNm7hd3Th49er+6r0Zfr+qdO3dOT9bj6G50t5PTk+12U7yCFLqYpMRFxzUgzN3MzAwM0o5Fc4E7rKN5qXWzmUotIFvvw2poV3NrnSwAzs5PT05Xb15f7272h8O+jvjTn39yc324ud69ePFys5ku7p6SAsyLH+V9CQYaHeyQMjBOfPjoTmv58OGjzXaoZTDGkf2nlJFuo7GQBh6Olm4SBiYa0oRGK0f0BEQWoReDR2Yd/GR74u7b7ebk9DwTNze3bnZ0726269Ozk2kcfvijT/77f/8/v/nm+2kaPv30o3v31mbowvffv5qm6ex0kxLp4zgYTdJRL4QsMYMZHZAePDw/P5/q4MhUupAwMhYhAYM1oBKggRgkO5ZXx1gy86Pn3cxJSv7/AZ9lOJ/KY41AAAAAAElFTkSuQmCC)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAihElEQVR4nHV6d5Ckx3Xfex2+OHlmc969vXw4HO4OkQRAIhJggEhLssRo0n+IlqqksuQqllSyy6n8h23ZlqvEkmiVypbEBIIgIQYwgAGBOODuANzt5bB5Z3dnw+T5Unc//zG7J4Cguqa6vvlC93uvX/j1e40b1ToiAgAAAhLQP/ZkAJDglxoKREEIAPS23y819rZrg+963G27n+28jMSMNr9iQgTAt09CAAi4M7HQzH7n67s9EjD2ywwQGGCGEAjeSToBEO1OhMRoZyRGwJB+JQu4SwvsvkwcCH75XQrDIOh03sk4dgcgAFHtJAwZIrxtKNpdje4KvOO+Jq2oSzjtPiACkEJ0XyEEpN0VINad521UAt0S07uaURrhl2YEIRynkMJd4m+pSHcoMbeyzhjuatE7eX/XahKgBthl4N1PAQnpFu87d3/V0IiA79SL3dvv1iCjmTH8ndq9ywCC+L9f+65tW0KId8+itekO8TZ5sIQwMfhuqow2uwzgrWkAOBB/p8AJAKQQyDm+czUIgDF8l8UgAHvXbAyAABHACJkqSMeRUt4S4q3VVkrtDnGLDQbEyDAA9ksrHUcREQGxrmLtCJIEkPVOmyYAQMd+u8i6gxgkbnFCMu9kII7jMIzeyVL3MwQgEYE0mkVACIiIuGMPDBESAwBAOw26nSZMQBAxAgJDhqhLrtaCiJDY2xYaASSCbQwY6lo8MkRkqBIpSHT1lnbtyKCWQsKO1YAxRiuttQEQaEkgMl0KzA490hIIIJjlEmOmSztDQLYzCSJC0pUPGUPGGGMImCELwCZiZMiAMTvDAjLrncsFAAjECSWZ7qwGEYkhMqYZpx0tJkQupOCMC0bCAmBERFobY4xKtEoSAuCc70gXGIodA2rU64a0MLgresSugLo9IhpkAEAEBsAAGAQiRsCJJAAjMN0nXVsneHuPt64NKGIIsOt6ETUY17IYwziOojgiIqmEtCzOaKuyxjkKwRnnCIhEwhjX9XzfBezSjQg79JZSNoG5pYg7PpFRt0dCvOUlb0UDvNURIjEkINpxo0iM0LzTE4MBY0gz7FosAhCRMYY6nbArNilACiEE45wEmtJIr2TU/Q9ERmujdBCGnfq61jqO4ziOkzhWidJGAyIACARiYAAAgSEY3Al8iLQrWyAkYtA1AkIwjMhQ9xNC6DKx69aIuiGkewdJg46YYIILREZktFKkVbPWQMR8Ll/IF7KpFOMsSRIK2xjWVdKptzuNRn17e7tarbYazSiKlFJaqSSO4zg2cQxKgTbAGCAIRjtOEQEYASIyAgREQCTTFRojY2jHtBh1tQkMEXRtqssDmF0F2lUpNEAxUsCBCZSMM2MMQEKUDA/kPc/hjJPpVNY2qlubW5vbnfrWlTMvg0l2BtEajQFjUAhmdQ2MGKBAYIKBBY7tACPBjWFdxQJCA91rhgiAHEzXC+Huj6jrlQwBdsXf5WpXD3/pwnCIDDUFcMkszrgBjRiTidq1emMzbjdbtWp1c3Nju7qdtDqoYwaJFCCltGzLklJyzhhP4iiKIgTgiAwZY8gZB4R6o6FJCwIwoAENQwDkBAyBG2OAMUNAyABAd5WXiIgREwQsSRQRcY5cgkpMEsWIRnLOGBijGCDnqLUyupOSmkGCURArFUVR2OnEUXx5ZmZrfY0TMQBANJxZtm17FoWm64TisN1pKRXHSilbStdxEIB23bxGhkhp3wZGIiGDlBApICJAJEbAkRiBkNwhA2SYMWg0GAIi4NI2XLY6NSKTTfuuZ7eazUatyoCyGQ8Bgk7Tc+1MKht0kna74Upe3aysra5WKuu1ajWKQoYIBLZknHFLMCBKkiRu1psqISIuhGM7nuf5vue6rmVZqVQqlUpxzjjnnHMhBGOMI7QbVTJK0A6w3A2dCEBgEIFQd/UfwBAawK7qqCQhAtd3jVZBHHTCFhqVyXqklJQIKmFMg46Cdq1Vb6wvz781f60TtMKgY5RGRFtKIYRRSRyEQRR2tLGkcF03m886jtM/PIyMCSGElJaUjm27tr21tbW0vKS1TpIkjuMkSZRSZFTUqhMlwlAXMd7CjzvmQICkDQGQ4UQaTHcFMIoCFbaFFGRMFAVGJa4tfN91hCsFmiQG1Wlsb81V1pcXFtqVMueacbSkxWymlEriOOh0GJAleKlUTOVyuWwum8mkUinLstr1WhRGjWp1u1qt1uthGCKRUSpRCliXMyGlFEIIzpAzMFwYTYx1fU/XMTLoIngEMIYAwEA3lHbxPxpjdBLGAYKRnFmeZQnGSOk4rqysrSwstDZWg06QGMXAFIt5gqTTboatJilFnDueVyoVC4WC73uWZSFiGAQbW5tLi4smjtc3NrpxXxttCJCh4Dydz6X9FCECY5xzLrjgnDNEUwLSgowBYkAcb4kfGBBDQGMIjCEiMHSLAVswack4jpDQsYUlWBIH7Ub7ysWZanml1azZlpBCaJUAYzEzneqmJm07Tr6nJ5VJ+5lMNpOJoqjValUqlUat1mk2wzDUSQLG5AYGOBdSCtu2bdvuCpszhowZY7TW2hijNREprVUUA2hBRDthdYcBgC6eMmi0RgIygAS4w4MhIm7xtCPI6E67VqluN2vVKGiXF24mzbrnuyPDA2D0ytJCa7PKGXppv7evb2xsrNRTUolar6wvLS5tlMudVktppYks1+7p7y2WSr6XarbbxkDXWwOiUkapqBMEQSdgnHHGhRBccMEFY+D4PiIJIgBijHjXjFk3XHTduOlaMRlj2C71nVZLgyqVikYnlZWFm5cudjY3COHYyTtWlhbWF+Zm65uAZJK4kM8fOXLkwYceWl1bvX79xqu/eHW7UkmCQBEJx3GzWc/3HMcRkhNRu9OpNVq+l4Idl226jQxxxnP5XBd6MMZuAbdavWaMEpJbJqE4SoDejrUBANLpVLlcjqKokM95rhMEQRRHA73FTNa7fv3am6df21xecny/Z6gviYK3XvqJnU4NjQwGrUarWZ+amnj88ccmJqe+/o1vzi/Mr5fLKlGWn0qXSo7j9PT22pYVRlEQBkprBLAsx3VFHCdd6jjj
gosudHMcx7btJEnCKAqDIAjDOIqUUql0iksLP/pvn2YaUHcREDH6x8jKONq2dF03CNory4tRFI2NjyUq+PnPf7K2MJ/OpCf3TNpSbK2vtRr1saHB+bkb9Y3KkaNHnvjA40T6e//wnTOvnwEmUVrpQnFwaKhQLHEh4jgur6xEUeT6nuu6QghjTBzHSqm+vgFjTKKSJE6iKEqSRGlltAEAxlnXeoUQnHNADOPYAAgyBMQZ8W4wwC71aACop1hcXl7cWi/39BQnx0fW19evX54JwxaLQ2GSXMq1GawvLazcuO6lvctbqydOHL/vM5+Iws6Pnv/u2TOvk6ZcqRCj7aezpVKpUCgIIZrNZrVatR0nlUkDgDEmjCKVJHGSGK23trd3nDkRAAgphZSu6/q+b4zpwtOuYhkCTsAQ8ak/+ZrUUhiOhAAGkJAMoAHQBLrZqtmWyOUy7Vbj8uWLczeupzwr31toNWo6Uc1GLWk2bEscOLh/bHSktr25WVnfrKy3201LCtfznFTOyvbECpRKlNJaq64fyReKUsog6ARBYIwRUrqOY1k2Mk60A7mN2dl6NZvNer3e3fjtbMYACMGAAMYFaYMEXXAPwAD0rXxNGHSKuZxji5Xy8tnTr64uLeZ7ikyy5YVZUkkYBCnPPXz82PBgPwdoVrfnb1zb3towxviePTg44KdSrchst1oauNZaKYWItm27rhsEQbPZEkI4jscYAwCtTavVrjcaOxasd42YyPO8np4eIYRlWV33alkWCmGYZZjoulFAQryVVNhBw3Tb4cNvnHnt6qVzjXZDG9Xb32vZVhx22o0GB1PM5/ftnZ6aGAvbrbfeODt385rn2lOTE6mUW69Wo6DNEDTKIIidVM71PKVUFEZJorRuM8a7bp4xlsRJs9Ws1+tBEAyPjAghHMdxPddzd7CQ49iWZb9dtQjIAONOGrgQZDSgATCAgGAQNIAB1AhqeXH28qXz1y/NpDJ+32AfAtUbtXazJhkMDg3dftttvufOnHtzZXGh2WoOjw5LzjQZbUyhWIzicHNrqxFq9EuEqLSJE6UJLMdN+T4ZE3SCre3tKAiQiUwms2//wXQ63dffD4gMkXHOEBnjiBjHURBFu7TvIHoDLOlUDROCYdzsBFk/59h2ZW2VcxgbHSINl2cuXpp5S+loaLiU8h00YatVT1pN35ETU/v6hwYbreqFmbMLc7O+n9pz6AAiMQBjzHYYR1Fday0dz3WstWqQ5m4m7XhpVxtKVNLohK1WEwEzucLoZCGbzadSKduykXMEgZyRoTCOozBMlNrNL3DYwdLdNCJyZEJYmkAgRV7KAaFaUQBSua7V7Gwtzl4/8/rLjs19R/QUM0h6aWGp3Wz0D/RN7N1jBJ+du7G4vERa+8W8n0qFRluWbAVBFEUA4HkpW0oAZEaO5YaVAaVUpx0kSmulDFFPT39PT0+xVCoUCo7tJknSbDbbndCyLc4sZGgAFRkNTDDBhGCMMca44KIbijlDxuJEAZFQKhHC7oQtRjAw2Ndp1V979ZWFi+eCOOyfnvAdu16vtuq1Tqfd29uzf/8+J5M+d+ni7PUbfjo9NjVJRM1Gs9FoGiDf9weGhtvtdrvdDqIkk8mk09l6S3eCuNMJACCVShUHB/O5fE9Pj+3YxlAYRM1mWyVKKWU0RRTZNgoppRQMPWRoW7Y2OgxDItPdOe26Imq3WlwIoZVeWlxwXX98bBTBXDh/7uabbyQmOXDkULtZtQWvrG+EzfrBIwcffOCBxaWlV0+9trqx4efywyMjQLi2ukZEg4ODzVZLCEGEnHNErnWSJDqq1m/OldO5Um9vb19/X7FQTKfTlmUppeKom2GItdaIKISQwlKKAAiMIaONVmBAIVq27WQzO1TvJtIMUTad5lIIAGNx5juyWd26euXy1TfPKtsa7B1ApFqttl5eSfne9MFDI6NjK+XyzIWLy3OLPUMDe48cMdrMzc0lSvf09PrpTE/vwNp6+fy5mXwhNzI8atlyo7JVrdUO3XY0Xyj19vam0xkiaDVblY3NlJ9yXMdxhef5txAOAIZBIoQEQhXGURAkKmaMx3EchqEQ0nFsz/Ucz3Esx7YlEiqj8EN/8D8K2VyzXj/72mvLV6/Yucy+fXuDVuPK+TczhXwUtE+eOHnP3Xedfv3Uz154IVco9A0P256XSqdXyuVmozk0NOSn/fX1SqPR6OvrK5VKrVZrZWWFiPZMT49N7HFSBQJOREYbpXcQsRSSC86QAUL3kdbaGEJCz/VTns+lkFzYruN7fjqTdmxbKd3dkMVxpJXWRtuWHatYSCGCVuPG5Qtrc9ecjD8xOZbEQWV9VQDFUTy5Z08ml7tw8dKlS1ddN5XNFfKl3k4YbG5XuZA9fX2ErNFocy4El2EYSy6iIMpmc7lcbmpyT+/AyFa9o7r0kWHIbNvmXBCR0TpRyS2cIy0pmfBsR0qLMx7HUb1Z19tGCtndUoZB2Go1G41mo1lPOgHEEUkbGYg4Cq9evjh35RJINjI25Nji+tVrre3NwsBAJpUaG5+sbGyc+cWrfiaz58hRaVlb1UaSqCAM+vr6HNtZWl7SSk9NTY2PTp6fmTn7i9Nuyv/gr310enq6vLo6v7CYLfQikWRSCIEM4yhuNhuZdMbxvDAKVZL4fqpYLNqOreLEhHHY6SwtLmxXq3Ozs9tbW3EcG629dEorzRiTUrqW5VqykC9UazXP8/Dwk78zd+Eto6JCqWg7drvdbjdbiFgqFR986JFz589fnJkxscr09uVzeWW00QYBlFJdlKu1AgLHccqLi+l8fmpqanBw0PM8BDAAxLhGGcax1loKYdsO5xzIdDpBd3eLiLlcLpVKLS0vXT8/s7ayrLVemJvPF4utZjOXzWYzmXK5XOzt9RzH8/1MNgsAtm1PTEwsLCwUCgXMHXiU6dASJDiXUm5XtxvV+tDY+IMPPnDmrXP1erMThLbjZ3I5LmQnCFuNRjadGhke3t7e3tzcLBTynuuvrq6Mj0/29JR6e/pSaZ8h1zoBZLHWjSBMpTPG6M3NrWajkc6kR4ZHXNchQ1EUVWu1yvp6ZWlprlzWjXqrWh0ZHws7nVyhkMvlJiYne3t6mq1WX19fX1+f4zhbW1vpdDqfz1+7ejWdTiutRavRGCimfZcHQadW3eq0Wtlsenh4KJPL37x4WaQy+VJvJl8Q0ooSZZC7fopzq1ZrWtLJ5YrNRisMVf/A8Nj4ZCadchzXGIjiSGslhCSGQNho1BliPpft7++VUjKGRLS+vnbp4qX5hfnNSsW02l5vz/6DB9KcTU1NcsaHhoa66aD+/v4kSU6fPn3ujbPHjh1LwuDi/Gx/b//WZqWytsoFF9lCgUsjOYuBOvWqsOyjt9/e2z/45ltvKQDHti3HA8bDWIeJJhR+yqMkWV/fyOXzru21Wx3bco8cvl1IYQwGQZwkSRxFyJjrMunYrudGceQ4dm9Pr+u5m5WNG9du3LhxbXO13Gx3/FRqz549mXSqWOoZHx7cPzEetBpAUCwV18prP/j+d1N+6s677rx65dLlS5f2TU8N9PU0G9V
cNj001Ddz/sI9994lSsVCWFtrNYMkCoHM0ODA9N7pZqvz1qlTPYOD0vEIodUJlQaDTEorSowklssX6/V6gxpTU1PTe6Z934/jGHfrGowLKaTjeLbnEMdsLsMFr1QqZ8+evXnz5ubGZqO+7dj2wODA8NDw/gP7JyenPNdVUcdG3Gw2jTGuYydJdOXSRcF5X3+vJfjw8BBnaFtydGRECjF748b+fdM6USIMwziOwsZ2Ekdj46OHjxxJ4mhhYSGI46neviDRndgoIsZty3aktEwYEmkpbc6l53kTk3uGhkcq6xUhrK5RSqlUojjntu3Yts1toZRaKa+cP3fu8oWZRqPheJ5t2XfccfzBBx8sFguZVKpQKMzPz79x5syhycls2vf9lNLqrpMnatub8/PzSRi89cbZza1NS7AH7n+gVCoxzvZMTQwODn7tq19j9fUVNEqpuB0G0wf27zu4f2Vt9cbcbN/wsGU7SEhacwDfsdOe51pWLp8zHG8uzmeK+TvvvcdO+YvlFWZJbkkmuyhGMsaItFYJR3Alf/PMa99+5utvnj7lWOLA/ulD+6f/zR/+/n/5T184eezQj3/w/X/3p3/8rW8+Xcyln/zAY+1mfd/0dDadKi8v6yQu5vPLS0v79+79l5/7bD6bJa17ioXVleUL586dPHH4Rz98/tTpU5gfvR0ECVvEUfyhjzx18+bcmVOvvf+xJze3GyvL5cGh0ViZlZVVLuTAwBAyXl5djkyy77Yj03umXccJO0EcxoLxwwcOnD1zJu16h/YfWC2Xg3Z7YnKiXt/84Q+ei5Nwc2MjDMPJqamHH3ro7rvvjqL46ae/fvjIkUcfefRP/uSPL1y4+Of/63/2lUrXLlzoKRSz2WwURYuLi9dv3FhZWTly+PATTz5ZyGcti6+tbURR0tdX+vJXv/4Pz3/f8TwcPnjPVqOazmff9/6HWs3Wa6+fdV2fMV4s9G1tVW3LAZRKaylszkUQRiEz2f7S4MhwqVBiiCpOLCFTnlcpr05P7Um73vL8fNr39u6Zvn7l8refe6a2XW616mPjEw88+MDhQ4f9lC+42Nzc/Nu/+9vlpeXHHn8MAJ599tnh4eFnvv7lS29cuHnzZrVaHR8bO3HyeF9vsdlst9uh1vrcufOnT5++du3qeqVSLBRn5+YSpFjFwrIs13EG+gf279//7DefbTabR2+/45WXf5HNFB3HabU6luVls7lEmXq1pgH6xobH9k65nscZxlECZCxLpnx/JY57isXZa9eJ1MMP3/3M099/9itfHp8eHxw4ZNvyyG1Hjhw+Agirq6tRFI2Ojn78t3/rb/76b374/A/+8I/+qNFo/OLFl/7yr/7mgTvvvP+9780X8p1288aN+W8+863r16/fuHkjCAIAyKTThw4dklJWKpV0OpUuFe66+25Rq1X9XGbP9J7Z2dlKpVIsFmvVqmVJ6BY3ATnnRKZWrTbq9aHR0fHx0UI+h4IbpcEYo5EBqSh68IH7v/vct3/tg08dPjj+p1/498srS4duP3ruzdMPvv++Y8eODg0PNVvNVrNlO3Z/f79jOw+9/33G0P/50pcWFhY++9nPnjxxIpPO7N8/vbRU/vELL5x69dTi4qLnuWNj4/fff//4+JhjO2Nj47fddvDpbzzz5S9/VRnjud6hw4dFu9Wa2jddKvX87IUXVJLs27PvyqUr/f39WmultGVJzlmr1Wo2GpZljY6M9vf2JaAZEBeMo22U5mTisHPtyqWnPvJ4NuX/xf/+y5WFOWU0Ej38yMMPPXz/wEBfu9MOgqDU05PJpGu1+tbWllJqas+eT37q0/V6va+3b/SDT5x65bU/+7M/X1tdq9VquXzuA0984NChQ4ODg5aUliVbrZbjehubW5cvXVldXTWG7nrPfb7viXyh0NPbW14tVyqVdC6PiGSM53mddhyGseBWGEZBEPqp1MjI6MDAgJRCGUVaIaLFOHdsBoAE2WIh7jS/8u3vXjk3MzI+trQw71ryYx/7aK228eKLP4/CcHp6eurgeDqTqddrKlGXLl/OxtknPvBovdG6cuXyG2fPfuubz04ODR8/fvzkyZO9Pb1xEqtEIeL2drXdabuu67je3NzCyko5k8k0m83x8XHP9cTeffvyhcKZ06+ls9lMLru8vDwyPt5ut+PIRFFMEtqdmIjGxyePHDnsOG4cRbYnojjSWgtp2bblOY7kYqiv/9lvPHP29dclwGZl/YNPPnH06O3Vrc1qbfM7zz0Xx/GBAwdefeUVxtnJkyfHxseHBgfy+fz21taLL778/e99Txvzu//qdw5O70/7Kcu2giAIOoHWGhCEEJlMJpfL+b7fVa18Ph9G0dDwEONMHDh4cHFlaa2yUezp6eZekiQxxjiOG0UJ51wKwbno6enp7x+I47ge1IzS9e2tYqHoSFHb2jj+vvfqhL7291+5ef1Gp9Uc6u99/LFHbr/tCCJL+Znhkb77H7j/5z//+cbGRr1e397eyucLpZ7eQwf33Zyd/6svfrHVCR5+5OEDBw+dPH7H6vxSFEXNZtMYwzkXUnQzipZldzodIcSZM6dT6dTq6momk52cnIpUIjjn5XKZEDUZpZS0ZBAEiBwRLMvSxggh8vlCNptj3cq8MXEYF3KZtGeTNqPDA2srKxcvXFpZmt/eXAOT3HbbkYcfeqDTas/NzvlpT5n0hz/0wdmbs9Va7Z577nFc94477ti3d+9//2//9ec/+9nJO+96+I47Hn300Xwh+7WvPH1was9OhYLtnqQAIETGGee82WhWq7WU75e1FpbkTBAlYntra2t723Edxng3JUdExugEEtu2q9s110sPDw/3lkpJnMRhKBjjljUyOrS9uRXF4W2H7vzWN7/50x//ZLCvb3J0NLXff9973jM2PHzj+nXPtXt7ehRopc1DDz/08ksvDwwOPPHEE9evX//DP/j97Wrtk5/+zCOPPKKNOXXqlGVbH3zyA3NXb/zKg1fdbcMbZ892gg5nrFAoMM6iKFRKscXFxSAIXNfjnHdzkQBIRN1UfZIo3/PHxkYLhYJKlEqULWWpWLAkj8JOJp1qNRvbmxv5rD/z1pvTU+Of+uTH+3pLL7/4s7feOGtLsWdqIpXyVldXT548+dsf//jo6Bjn4qt/9/cv/PSnH3rqqX/+W7/BGPb29u7dt7dRb9TrTfyVDQARU6nU7OxsGIRxnGQzWdt2uuVKsTI/b3Fu25YhMsYgE0IwpbTSJgxDz/cGBvqLxZKU0hjNODNGMTD17e1cOjU1NfnCj36wsrjgOTaqeHx0ZGJ89OLMhVazPjTQZ1tiZuY8WuLgwUOr5fLY2OjW1ta/+NSnVlbL/+/v/jaK4tOnzx48ePBnP/2pNubee++9culyby7/7gVAQwyZ0WZ1bW2nEEnkp/wuwaLabBT7+xiyRMVEYEkppERkZEy1Wpuc2DM+PmHbVhCEAOC6bqxarWZTSj42MsIB3jx7enlh0RLyw0995K677mRIKc+9/ciRoaG+2dnFazdvBkG72WyMjY/X67VnvvHM4vLyJz75ib6+gVTKr1Q2vvGNZ6
5evToyMjI8PHT8xInlm7PvViGDiAzrjXq73bZsS3BRq1V7BwekEJFKRPdgAgEppQEALZRSInJjdLvRGBjoHxsb7SbpGeee66YtK4xq6bQPQC+99KIxxvd9o9RnPvNpKcR3nnuuXC5Xq9X5+ZIUYv/+fbV2q9luZ8bTT3/9a88///znP//5Rx99tNPpbFQ2vv2tb50/f/5zn/vcPffcc+3atQszM/lU+t0MIDMIWN2ukjGu6yLA1tZWqluljUPh+SkuRGSUNpoxjgyRMQBjtPYsq1AopFKpzc3tThBk0hnGmO/agsWFbH5zfWPm3Hkk9D3/5MnjuUJxaXGhsl09ffaNl189lctmh4aGPvyRp/ZMT2fSme9/57tnTr3+3nvuPXnsuGvZQav5V3/xxVqt/ru/8/kjR440tmsWF/liWseqe1CKdhJw1K3iCMFr9VoQhlLIOInjJOm02kEQpHyf+am0kFb3gCJjCAy10VEcdTqd3pGRdDqdJCoIOmSMEEJr3ag1BHDfdstLK5LLSmUjDKOPfezXt6q1heXlf/abv/mvv/CFE/fcw23nyo0bW9UtNLQ0O/f95/6hkMn+1q//xt7JyajV/vY3vlVeXHryscdG+gfqm9uozeLN2b5Sr+CcM949NgawU6xUStm21Ww2tre2ASEMQ8/z5hcWCIxWStiuw7kQQhgizhgZSoxSSaK1LhQKlm0jopTSOExKKQRXcWhJp91qr62tNhoNIeSxY3dYlr22tv6lL/11b09pZHjEGJMYPTYxcdfdd+f89PdmZpRSD73/ob3Te4j0K6+88tJLLx47duzee+8OOmEYRWOjY57rNeq1d1sw7B4wI0OGDAJwxrp4Z3R05NVTrzFLWt2KvmVZXAhjTBLH3dM0g4ODjuMQGSktz3Vt27akFFKk0+lGo75aXq1Wq67jnjh5QmsdhsHv/d7vfvgjH3r99Os//tGPMpn0fffdlyRqdm7uhRdeKOTzR28/6nnuhQsXfvTDH5ZXysduPzYxPtpqtboHQKUltTG/kgGGiMi6NSdAZJwjojamUW9MTk4KbbQyBhClEIZIJSZOlDGGMTY0NGRJSynFObOkJaUAAN/3U6nU7Oz1eqPeaXcmJiYmxieajSZj/Pjx4319pf379r708qsbm5tHbz9aKOR/9L3nG43GJz/x8Z6eYhyrmfMX1iuVAwcP7N033Wx1OkEwMlJqNBqcc8d2kjj5FSvAGCIarbXSiCg4Z4ha6eWVlT179zKldBCGWitkjDPWDcPdalxfXx/nLIwiAJBSIjIA8j1fG1Mur3bNZmJyIpVKdTqdqampmZmZ//gf/rPt+J/+1Mc/+tGP2Zb92muv/+SFF0ZGRu677z1KmevXb7zxxhtKqduPHi3kC+XymuDC87x6o5HLZv+JA83AkCFj2hitNQJyIZAxY7TjOJcuXWQEEEVRnCQ7552IiEAI7jhOLpcDgDAItTaIjIi01pZlbW5szs/NCc4d1zl48CBjDBlbWFi4++6Tjz3++MbG5ptvXajXa1NTE0LIcnn1xIkTcZy0Wq3LVy6XV1fDIFRKGyIyJl/IN5vNJI5Hx8bW19b/SRsANNpooxGBc86QGUOWtIQQ/x/Os0lzG1fAhQAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAcPElEQVR4nI16aYxlWXLWF3HOXd7+cq2szKqurqp2z0xP0+7psZkxtpGRsQAZLCQLJGuEfyIkhOAP/sEPDBKyQLJk+QcWNhKLbCEhBIMx49F4Yaa9zXh6mZ5eq6ura83KPfPtdzvnRPDj3peV3TMezVVm6uV7990TcSLii4jvBB1OCnx/l6IQKQAPsGqACoQ/dg8Rnb9UqECNkkKDL0OoxDtVYSIFMxtwTAQQgQBlVUA9IGVZ9VbWjWlDLUAAETEAqAoqVS/iKSjEVdXMGv64EOcSNGIDgAAAGMzL92n5c/EyFz5aag0FAgKLMIJlAESqBGOJIjCj1lkZCiipSMQaW0vEQL3c+aJimiUYCFBNktiqChFUiYiW4jaiq2q9XP1lVVYoAc32AwQlEFSVVEGAJ2IoAAJR/XVFbQmVoOSDMoFIhEiFOEAUzKSAqigBosGJKrQWJdTCEBgQQJVUVSCBEKA+qNpmjUb25U43PvDEfVQhApBnkKjTIAQwDIigolAlVoCIoQQFMUnzXAU8gvO+gq+YGcRBiA0xC8jAGBKFQJQJLkjlnUJVKQAMJQIrCaCAqIqoQDwhIPggnpe7/hddWl+AEiuRAkIKIqpf1x/VolK9D81i9T3EAGutLZ0/8tzMSx3D+UcEaozWrF3fo8v/Lj5DAdjlOx/zZgAggn5cOwIUdP5sBQQQEFQDK9daKClUiVRqzQFVbVxdleqohaqi/iK0lphUhaDNNtd3Nht/caPr3QGgEP2LIrgJP7qgGCkIqiJab0/zSFU6v13O36alIrXKKkE1gEBExES01HTpp8RUL9BYii66sC5/ls+vEQ6K78TB77DCBY1UAUiz8R/78HwdVVIigKAgJUENLSoieuE5ev5Stdmn2jK1yejjDiHL9WuznKONfme8flT0j9rhIx5FOPdJveCfy50HSOtQoSWuiqpIUJWL91NjFqalGh+Rnup7qP5LIAKfi0IEuzSYLMHnIztKy4WJYECiysxQJlVRKNdbhtovGAoVC1JVYSEyaKBRCQTDpGKNURFiBgnIMquCleHFsSqDg4JhNEhgsWQAITIKQ4BCmFihxihZIAQ21tZQTh93h+921WnzHKPrb9aWoDqfkjZJYPmKlt7bhDXXn55blYgUGiQYsAQBcwiGSFTValvVECQER8YrBEpKolwCgRBAglDY7yHwX6AGCFTHEEGW/6OG7lqxesehog0sNsCqSqrL8FQwEUSV1BDV4WJUVdh5BhJFRPAAsUkCKh9ywwL1ZT4j9cGV6nOBWoVQk5+/H+llmfQUpARWVVISqIIIUhctRDVCghSgWmqq8bQxZePpBLBRKLOIjwypOGul9JWXudUqVFjkp48e3V5MJk9d2Wy3uCy9jTrGGEWIkpTJLC3wfXgQlilFIRfqpBrOFdAAMqp0DtTLSG3+SFCISB2LrOKJlWAURrxjBrNWWkoolUpXjqejRWTZVXzz6pa9sVFU1cnpbL4I2TzLs/l0Onrn3feCd3YpiX4fSsgFHBJgCcZ0XkFpoyIRSAGui0hiIABSZ0BZxr6qOhXhyDgXjEE2HwMlaUhiUUOddnc0mt+5fXdv72B8enp8fDI5m2ZZ4apyPptHcVp5DzbntVCNNxeB6Fyren0oMRQEUUCaGo9Q13dQUlJIXbwQuCkXiKDKQFAloohFoWwigC0pRQlxhCBcLYrZwhoJbBhmPpn98R/9ya1379+9s1u5ADIirMpERqESYJJOUGKO4qT1vYP4olmW+bb5TwkQkdpX6tTYeHaD43XWliY1NTU4S/DEpUEswTMH8XlZLCyTaXGe42u/+5WDg6MPP7i/WBTBQZVEVBG892
ziIsvjOAkCgFuttvfBf+9a6Lu8SYAsg5G0aUWocRtSUOPjtCwpVEWgAgIQFAA4OHHIQiAjsLZtDT3efe/Pv/HKo3sHZyeT2bTKMpdllYIAUQiYkjRRrQarMRFHUQ8As1VRNWQBIvD3rkiXiBK09nKSZSlS66hQJWhtCFp2Elqr0WQAUqIQKpJIOA7exWnP2JV333ntz/7o9+fj0w9u3ckmGdu2DxYEAW9sbcwmkyi2nU4rSWM2SFpxEK9CRVEVRVG5Cmpsk1C+ewzrRxypFoWhdawonkA/AChxkxZUqanwVBisbJzzArKmI2lkq7w1fGo+y379l//1b//2l55++tL8LGfDsNyCkHUm4jRmtsW1G5v9/iBbVMdHp6L08MFxnmch+DRtiWiSpIaFjmYl4buD0LJZW/ZrVKicAaqoWIRUzjMw1wDUPIGUWBVEJKIskBBEPVjjOEaw4zm98qd/8L/++29N5xkrXCFBRdTGlpJ2PFwZtFombcWLRS6qx8cnWeagNrItImqlLSJxrlLRqvLeOzqaFQRqyvSL+44aYLQp50BEpeopNKg6VqmbGSIC19sOIWraESIQqxKTRQgiYOsIaRC99fYHv/flL77xrVfLzBvmEGDZQCWO405n2Bt2FaHIZ6dnR96BjSVmZgNQnhXGmCxfRIatNSBIkKTVtgoBTFOUXTTDMlc15U7jL6wI2sD/MkyVFFp34EpG4QGloIZjYoCsuinpSlaF3/pPv/7w7u39R4985tMkKcuynaRJK7WJdjs9X+Lk5CB4ATS2HQ0SvPe+ICIfqjRNOp2k1eqRotvpAkFU48hY0idA2AjeVGDNL9F59fbEOsQKOS/Cl500KaSybEUIJKKVgfV+Qdh6vHfy7/7lPyudm03ODEeG7aDTc2ky6K/EaeRR7D5+nNpWXrg0armq2n38uN3qrwyHSSvp9Vo2FjJSVUXftpm422kZw1VVxjamo2lGZM4z/rJXA+rmvHlFBFLNgTHUBymZAgurgLjuU4jBTmEM+cpHUeoktyQElGX7W69/67/+xq/4yriyZMNERkVbaZy2YiIzmU2cOGIOFUKgKi+ybDoY9NbXV6FEjHa7zYQkTZnIOxecr6pCVZyvgpOPJbKLfA6WGNn0eiCuUYcAEn5Sg1PTVkVMLhDF8aIYGxVrk73dg9/78pf/8P99xWgrn2cbl7aK0kVRFMcmSaLR6MywLfMiTtM8K8SLtWZra4XQjyPb7aUhiKtcVSyq0lfFiTEmiE/iKI5smiatJAGr1e9An/POqmmrahQHoAZNUwcC12QPEeoqhyAiAZ6LRQ6qonb3/ge3/v2v/ofDw3GZt1gLY1xezDqdYRLHhnE6Oo3jRINAdDaddlqt4cpQ4Hu9liHjXZhPFydHIzbUbiXra4O01QfUGtPttIIP1po4iSeTiV1CvNSIoksVtOlXiJma9MVGtabL6JxkQs1oQYhI5kWFrDcYEgZnR/u/8ev/OZvNq2kpSojhiK6s90mtOn90eNTtry7yfH9/b3VlkMZ2Y23YjiO2aZzYhw8ePd59vHFp5fqN7Svb29PJOE6jTisti6LdSsuySNM4jiJic/vRQzqeFY2bNNnTKFQhomAwNdSKEESoIpkIJgRUzsVRx3AqwRG3Cp+RmxlrD/cODu686+aLL/7f3z86m8xnWZK04jhtt9N2uxPH0fHxIYSIzGh0WhTF9vZOt9uBhHa7BdXdx7vqysFw8PTNa4mNTk6Ptrcvq2I+nfZ63aLI83y+vr62u/to0O2trq7t7+0/iQH6LlynLovkUDeQCmbiAGet1aCQoEgASZmR9t965Wt/8Lu/c3mwef+Dh5OTrCyl2+1671dW+0mcnp2Nq8oxBw0yncwGw8Ha6urq6jBN4vF4dLD3mJm3L6+vrw4V0uskVZHPp2fY2kAIRTbrddJWEiVRPzi3c3nL2iiIu7S1eSGI6WMRjGXnJUsjkKoBWYJTZRGjpmcMEKazo7t7p8e/9qu/9qM/8tKdDx7t7Z0djiar62sKt7NzOU2Sg4OTxbwcDIbHJ49JZWNjbXVlBYoksvuPd/N8fu2pq512u9WOI8sHB7uDfpqmpp3GRTbrtNqb66tpbFVEGGyIGKpiIgvgiQudb79C6hTASxxS9UwCsIZcMVXKgyNjOiJO8nL/vZfPRvNf/Ff/8W/8rc9PF+WHd+/nmWxuromWw8EgTeOHD3dns/mgv/rw4aPLl1aj2KytrlpjgvPvvff2zvb2Jz/1A6RqjR1NR+urw/HkrN/vsiqTMcY4V7WTRESImQlRZEPDBSoTfbwfUBU0FcH5FRpWEIGoJgtja5mhCOHOO3+ahPDyH371c5+9adXcff9e6X2/16+y+eWdtcq5hw+PvPNJlB7s729uDFfXhkliIbKYzY6O9j//uc/2+t3R2cnq6rDdSY+OCxK/vrZqrEWQw4Oj0dnZi595oSoyG7E1ViQAQiLMrApjjP2I8A1nxudVMjUUem0cr1SRArDMlE9Pju+9+8X/8ZsnZxP1yfra1tdffbOVpF3Dlhera0OisL+3a01HPDlX9nvt4bCTJDYEf3x4sLoyeP75T3Z7aZKY6XS0ttpzZX51Z0u8Y6iIRGw67d6dOx+6yoEhEjiyIYgKGTa1pMF7fkLz1QRJXTkoqTKBoYFArBHEEBiwykxUkRvNDm6Xo0eff+FTVeWv33hmb/+0k7Zbse12zObGyualrfsPDzq9FSduUcwHK72dq5fbnc58Nt59cP/mjaefunKllabzyTgUxbM3rqdxxKwILolNbNigNjSe+8QzlpDY2DBLCMtWSZnADZv2UdS5QPmJwiuCqiqRMggxYFQr0uzs8M58tD86OmTYv/zS507Pzsqq6A+6cSs21qyvb37j66/EcbpYZOPxuN/vbl5ab3dbeTafjsfPPffJqipXVgatOCqyvJUmEkKNF4apFtSoWMLKcHBlZ4sIEgIzK2CYoyRmIgmBCIaZjmbFEiJrugo4b9DIEQhqACicoaA6IR3Pzx4c3n//bO9wcnQ2Grs/efXdYAyU4iQm8ObG+oMH96tKbBIf7u3duHkzSSwz5vN5WS6euX5t2O892n301JUrpKGVJM6V1jCIrY2YVCTUh14hSGQi1dAgDJGxBmiOUwCICuGCBajhWUHUJDBqlGFtyNNCdermJ9ODx/np6Xw0rUL09q2HgdKsEJMmAVWr19o/PApiuv3+6OR058pOr9teWx2eHR+mMT9z/VqnHRvWZ565/ujhXWvYh8oaMkxMIIJozUKrqDBDScAw1oAhKt57H0JT+XN9ivVRHqVh4FXqIxPVc8YnAFUIGYXF9ORgdnQwPT0enU5u3dk9nhaVFy86moz7/d54Mjo4PmZjP7xzZ3t7K43jJOK33nx9Y33lyvallUH39nvvGIYlXLtyddAftFvtyFo2TAQJXrxnZoUycxTZ89Mz57yxlo0NPkDBxhhj2+0OW3NRAXqSwaDQOhsoSBQVIWdk2eQgH59NzkanJ+PZrHrt27cojufzKZNura/PZ3NXlWmano1GO9vbnU57c3316HD/8ubG6rC3t
jKII37xhecjQ1VeDIeDl19+Oc9LVdUQSCUyxlhDADMzUQgeBGO4FrcsSgIxWyYbvIrX11791qA/uMBHqDREbcNaEhE3hZo6hDmKo2p6enywf3o6rUL8wf3dK09dKatyOOy0UhuZaDqeq9eqzA3LysogjeOTs2OC3rx+rZVEt955O2Zqp6mKxFEkzh/snzIbBRsbGWsVIYqsQiEqokTGi/OiReWMtZXD472DsnKnk9OyqKyxbPjDuw/4o2UDLuRjAIZhWAlSkptUxUl2dEpBTqfFw4Pp1Hm13GpFlc+GvfTk+ASwZe4TY69e3QT80dHheHS2s7MdWZPG8bM3r6sXV7jYxCRKxJ/85DPW2rrlFRERCcEH71SbU6zIGiLy3gP2/v1dgA/2joLzbGPRcPMTz/YH6ZNj7icESV35UH1UB6BQN5JsVp6dTkan9x7tLir59nt30t5wni1UwsbqWqicIWolqYpubmz0Op2drc18MXn+uU+FUB0fH1rDIGFmNtY7b6wF8OKLf0mC89477yWAman2GQZBJ5OZL+KHd+/t3jlMI/vcJ5/d2rx8/frVze3rQr5AaZSIPhIDXDfAyxMuKAi6AA5ID4vZ0fjo5Ph0NJ9Xt96/WwWo8mwys2RbUXsyWVgbnZycrKwMkjRO4vi1V7/5wvOf6nbi7Usb3Xar1Y6jyBJL3YsBgUiyxYxIjKlPmNhQFBxcFVQZGlmTvvHm7SgxTz/Tn45z0YXIwjs1Id9e2by6eilN2fJHWsqP03NEAIL4RSin88nReDQ/Op0JkpPTWavdmY6nK/0hgh6dnJGJT89G3W5n0Ou2kuTxo92rO1uRQZllca+9vr6qEow1hggqILJEqt5aCkFCCETM1vjgiSlJYlUFo91uPfupGyuDKIRMQgpIuSjRMl97Zff+/UcF2Wevrv745z7BF5qA2mOoJp8VIDhIqc7Px/PpeLy/fzyZFW++eUsCFWXlvXeVM5HxIajCEFaH/eGwV5ZFHMc7OzvdXrssclXxrhQRQ+ydDxLq88qahjKGjaHIMkhFPRkVhMpXxAoOw0Eosnmvd/nB4zOhdrs/dJT8ztfe+sIXfvaf/sO/+/pb9wer299RjTa+JEpCWiKMXX7m82w6nh0dHoPSbF512oN5KIqq6K+vTafjVrt7dnyy1usX+QK09vrrr332sy9d3t4enx3tXL3iXQn11hgmMsYARMxMpKqGiZhEJIRgmAGBRkVRRTYWYRfcuJysrdz8pV/50sOjoxD4F//Fz/a63Mfi3/7yb+7Ny5/+sZvzrLBa85mkBhAyqiB4UhvIKeUqCxPCww8f3Lv3iGx6sDuqYKejCRtpd+KyqMgk2WLRbiXrl9biyLz91vuf+cEXjbjbb7197foOqVg2hkhVyiCRiSQECQHMQYQYpETMpF5FjLEgpDGRtfChlfRC8A/v7kVR+j9/89+8/NVX/st/+8Nf+oUvbD1940c+/fTW+vD/fPUbkMXyCF9JlVTDctJBCJ40C9nZ6Gg/my3msyr3YZYtgoSiyEPwpXOLLJcgVVkNBgMmCkLDfvvqzsozz1yPk+jw8CTLy5OjST2zYYiDCBl+UrGARFRERNQYK6IELb0JEmcUf+WP34/MIHh+4cWbL3/tjfdvH4+Pssn47NM3WteutVbX6a/9yFY5fczNQRzOyTmpg4LJh+KUUUzPjqRS4uTB430BtdMkiS0RWq0WkymLcm11DdB2O71398HWVj82CYLf3t5aWRvG1oqEWkprDC3J6+UgD1HTfNTpiOM4ubS5U3rfMe0//tNbSdLf2Fr/6tdff/PtWz/4wtbf/qkXxpP58z/wFMRMptl6d3NR+nMYJWg9zdSwWIxg4PLR0Ww83jvYH8/n03nuQ3C+unRpfTgcQtFOWsE5Jr26c3k2nxkO/V5fQ3Xvzv0o1mG/c3iwOxi2gcBMKmqYz1nkhp4Rqc/SRNDu9O89WBzuHb7w3Kf2Zm71knn55XfWNpNPP936mZ98sd2unnq6Nc+Kyod5lruARfDeJ3Q0zWoTAwxytTIqHvIYiw9vv/Ynd28/fvRwdDAdj+d5VQYgRtDZfNpud/Jssb6xvnVpzbB+cPv955/7xEq/G7FRCZ1unBeZYWtNrBSYNPhgbBxEmmEHESKqZwCYyIt4Ty+/8uCLX/rmT/3ED165es3I4ivfePef//yPcaSnM99WKfxUuAs2QFQsZFEeW77QUqp6Zq/KBKNaBTeTxTSxUZ654Mg7hMpVlUvTSEX7nW5RFUkUafDBV4t8vr6+7l2e5bw6bBmGc94yM3MIni2JQAQUpCGzl2Y3hkm1dAWz9SH8zF//7OsfPLp06cob7z/8oRvtf/L3XypzX+YuUhSGCmdni9nxyWmWOQk5IbVR3DAPCgVbaLw83Jr7alwU89FoPJ3NHUEEs9nM2jgEraqqqqoQ0O131tdXW2k6Go23Lm/1er2HD+8nSSx1v6GkCsPMYChZY+uMCwWDDDHV54QqgCHiyNJ0MfmFf/zTH9764B/9/F/dvrJi4niWL0Amc3L//uE3X/vgay+/LoJer7OxsXV5+9LOTs0LkRI4EEiMogTEuxmknE+nh4dHYDPP52ejUZy2iGxZlJYIxKJudW0tjqko8yCSRDaK6TOfebHIy/qQ27BR1aChpt+NMcF7EFljVNWHYEx9g7Axqiq+BLliUv7cz/3o2eGhIT06OVlf33nwaPfVV99JkvTy5esv/dDnjw/v93rdPCuVwmSysM1ZL4RBRCwSmJwrR6jmx4fHs2keghY+KNnIUn2+6Yo8iGu308n4tL21ee/ew53traLMOq1BVRRpGkGEmMUHorq2J63HXIxRIEioh55EpCk8CQox1oqHSpaXhS+DMT1Q9NWXX9/fP3rq2uWnr93M8sXuow/T1I7H4zhKINJqp3bJ5iqBl/xEbnSRzUbj00me+7xC5X1ZOVVrIxR5FlubZfOV9ZXhoJsXi1Yr3thYnU7H04kO+12oqqhlAl2o1es5MlUQMTGI6qEb5npIkH0QUfFeQcGVUMa9B49fe+P2+vrllz77Upzq4emDNEmNUWOYEIkEYgQJFiCQQAPBqhKRBDensJidnhXzcjZzhRjvffDBpm1ViaLYO5emSa/XSdJ4MppcurTebqdpshqxtZZEpJ4RqkUXkUYDY6SGIGYVUUBCqAc+VVWEgsCrFKW3Jn3/gwff/PO7P/YTL1kTVeW8mJStNK7Ji7IqmYhADIN6VG05AkSqbMCMioPLp4vJaFFkwdhk/+AgSZI0TUUCKIQQLl26pJBet3N2duacW8znkbXMTICqGMMhhCACgJmNMbUjGWNsFHnnVNUawzWLQuScF1EROGdU1t54986XvvTWT/3NH6/KSZ6fgMQaciVBjYQQ28gYa9gwWxXi5hgVNTaoqHPlbD6ZzCfZyclZt78yGo0Nm3ar5UMoyyqOY2IE8YZ5/+DQWrO6MsyyGaCGG2KemIjImCelrtTZGKpQG0XGsAJ14+uDqGoIWlbBhfDhvUfvvbP3hX/wd/LiwFiFqooLTpPIloWzNhIREV+5SiRUpeNmsEGN
QMkIk9NqUs7nk/HCq+Su9M63kjYbu8iyyMYAWUvtdtxpp6PTk+3LlzvteHtrwxBpUECNMUGEzHLCrZ4LZq6LBy9BGUE1+GA4slEMVoWrfFEEun3v+M++/vaP/pXPlMVplcOyNYaYkaSR8z5N07J0BERRnCRpjQ1Wocy0ZEWDhhLiZpNJVhSzxaLypYJFtayqKIqcq+bzebeT9no9Ih2PztaGvXbaiqxxpe/1B84XzhXLCe/m9DyEwMaICLiOYDCxaBChosyJuXLBiWZF8dYbd3/27/3k8dFeVbk0bgfvaoARCRKkKNzqykrpstlkFkI9xpewNoektSW8cxlcmM8Wo/F4kWU2jrM8DyHUmd97ly1mi/m0KIqT4+PBsH/58tbrr73+3ru32q3uu++8N5suVMgaW7nKGpvEMRHZKBIRw0zNBIvUY0NFNTcmqjy8xqX3v/O/v/78p68f7B14p8ZGlVsQGGAFVc5HUbK+vrn7+PDxwTEZ22p1FeybudGG6QJQkhZVlrtS8qK68QPPHh+flpWDkvehKvNWmsZx9MyNG+00FQk729v9Xu+Hf/iHtzYvici3v/2O4YjIiIDZeu8r50II3nutR2+YFCISqqpSFTLqRSsfFkXxzT9/0O+trG8MyHhj2ZqQpqlCrLUhwNjWbOFeffXNWZa1Wp3ZLBvPZllRnByf/X+Os9prZu7tAAAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAmW0lEQVR4nAXBCZit50EY5m///vXs58ycWe6dubt0tVu6EraEJWRA3rCxwQQb6rgU6pK2IbTJ45YlBZ4+pGkpCU8ciEnMEhODZLAlEaHakuUryZItWZulu+ouM3f2mbP9+//tfV+4v/U68o3DnY3H/+jx3/obx11RQoktEeDYvNlNqDVwrummpRkeJnZEYl/PmEYApwot9sm7V2y/JXmXFqVqWWo5nB4YH4Fh7A4EaA3waNvEK87mCCHnk3iW5LZAVajcnrN9hDHyLECeMRmwvs23KSGIB8ZnQFSAtIBxCBitEtBrsT3lPA/5UFdTAxmBSk8NQ4ilANF0+7mv/c5XiGdwbhjVR5tscZFriYRmNdA7CO5KYg9wBl0VGjtCjNLDLbuzr1pUxhxFGRwOaC7qqItabekN5DjXQRxMhYn8IB5F3Idp5iUbicqFG7jAGBYgqMGqr1VCgrFpL8dAE78pFhYshAoFJurytrS+1LiAfo9MtZTC7UzL/bFFmtY1SFTY4TXcmb1q0uypL/yz739nf8BtK9DCwQaB0OKtKVqe05ZjhbSc0YBhiusgZDkUOMGcY2U0CxE3WFpTGUMp0RbXleDA422QlHW3idPKtCk+SGA3ttYBybFyFk2BYLYJKehIuIVAB3rG7VWOWFBBzDTgAI1K1/FgzXWMYD5zTrqRc36NQMvFDmlpWAxwx0cEdV/+93907vmdRavagStTQHyQIje1yA91XuCauu13XDM27Q7zPF4zyTlLPCwqzSnarshGYWtlkA8RMNrqQwsUhsrVCmhQjZ3dxdK4eMF1+ySDGhnEE7yvTAXd2GqVwdLgaWJSaYxDApM6M0kFpjMVUXWQGlCR0kLigTLAgDjQ0qR2tbGKAoEgFhVJ1/76ja++fHQBwMIZ4rIQJhs86lqn1AKFNgSTDXjsNoekNTjbHruGYSi2c11C5uzsGml0bKM0eMmzWxrPM6rF1gwzq1XkFqCXOOMvWBVH6XYJjQY8SJWItPGUb6c6pGSt0AOilCZ55qS1XkM7D/vOAoFnyvLQAW29iZ5RyzFS1sHKr5GFvkEjyCs7bUH4b+8bzrZRzcFcVJeF31qwOEfbUqPCeQu4nZht5ygEnoGYuimA2KLFnlGaVZUCBHtEhj0MJVG1AQYohOqJiudJA7hsClEIMTDc87YTYTkwSgeciJlDmqFQqoJKJIlCBjo5gTQAZMnJEaQxwDPnd4CYYBUrDyNUuFpAR1VVoWqMUoCaxDSbjmKI9ogTSHEnJjLSTeulGgzUQgM1OORjVjDYRDZook6Aj80Rb4YONaya2gDBo4uwH+AAB7WGlsDRfgwdDgkiEYmYzUcgahNOXFnB3USBwjS0oxC4EaQ1grRSGmdO2wbWVFYphX3nLwG5BU0FkAMS4ol1gCszASaF1gO0obFPA8rn58hSV5ceTAU2pYP/ctiBAZkh1qdVGHBC3fY+XJpzroC1dblx7ZgCWyLpt/zaAWoJ1AQarWoLbY78DhDWAEEcMo3QWOCVI9PsYC8wlUGmsAYCg0mWqMh3xgEaU5yZolAmQnpmAAxqq1lgyhy2LKg8AwDhXBtDqXFpZSQA7Y7LU4wD1EPYSLNTwRBCSsFWqUiISTCAQIFSm77Hph40G6QxEIDB2QTNnYLtKShro7RLna22aDi0iDMsNQ2jeZInPmpzakgxnrp9Y5c7VFWud0hlACiAaKEyByGEnhNIOqEBgQ4oVDitaoB9bIGeuIprJFPdYDTXgGHItQkF3Kn1rOYwdnNN7VJqczsB+G0Fdy3Y2lcVhso5IZzGDv72sU4rJCFCmYOhc9uCgXF9/HZ4MPbjqEoqGAUmm1GqgG25ECK+IJmOpK7rmQk8gscWdrhrCZdCn7kqY8pp3tH1nmEN6AJcT5HOXamhCZW2yK8A8JEzyJbGYWgohFONugg5k5UkxiYNIa9QHDu/sqWiPxjTawadL1UFKQojRLnm3XbrlGs4KALmMfin9/Q2HOk7Y2mkTNYCpMZYlCYi4FKB5mIbaMuNQwHDbe0ySGOQSx4TfWNLtDzeXwEU6kIw39naWQQN6/vmAIK4RBVWAE5KbXNoC2sJwdQI4GxFkW+CyhIEjEeUNdISp0Cl9XwDyBrdKMjV0mw4t5ZA1735pptWol5PstbF/asst2U1jif5VKSNdH+OWjJWjcAq4FxnSYz20eaOOX1P9NZraTAEt3lR4YrxFsADxyq7tECnQudSMoE4cytLUTgQIHMi4HRSt05E++8WvGW33lDtRYW34LRBCKkAcLV2jQaxVOKSpUSyhhbSeG1WjmTFZHuf66EAe2Suh99J2Dd3q8uVdIidvPP9H/rQx47C4tmn//b5p54RJXwvzsGQn9+Cuu+VI9T20A8Fg196pKtKrFOSU9l0VPoWTQGat60azwCRmW528MEV1VwGoADEww4YQ3AzoLxrOTGygIDCgII8w7RXuwmTRhAKVYHqHOAQAuOkQaq0s8TiBtK5s8w2EJqVThNGmY6ZmaFwYsSzW/DijMWtxdbyLWyl3cr3xm+eNVbyFHVj85YITgPWmUuWYyy29Q8KGEPysXlJvJF/kOpWB/S0n1MBStbpqMpyrYyVklhW7xnXgLEHSuakxE3KkkLIOeQLqRxGDmCOMKi9tme23chTSz4/KJSUkEAkaksdNJEDztgUUuYiBwqFk4r5rToyZl/qr27jq0U+sR5fvu9YNwrMWjh68cKFqQ4Zr12O6O2LxBmTVMkdnN7tgyCqtlo8KUTRrGmTkBKIdhcB
ZMZ7wBCweJMJPCY2KtanKA1pVVzfAkeOwcpAD1vniZGig3kzvm7YTchzptq1rEaKospIbamH8eYuClwtYoMgl8oAjuB1kFDW4lYJNaHYAyZolkXKvp/Zb6REA+zFx4fLt4jN16C4Bri36wB38VZW/XhXP9SSLYbeuIYvE34gVdV3pCCopS/cgLP9+D5iCXCmkJhaRKABTZjdcAVyOMQoc0UhOx4+fJI0G1R6AhdYVq4fWCCQ10G8csqDaN7hHCdUBtgxCitsa6gpZrF00CmLubZCtYlXSMdxVbImNnHM/mHTvnSgdzTtBawVxAGezu19/cUycCS6BZhPHCbv7mXPziRUOA5AIRzjtk6rdYLppCOX6MmQfhyM93bIh3+xR3LtO6dSZXsnkRuZwmPO0XleS+edOGSzPTBDliMwD8SlNcqcEk3G26KTGAyETLxprSNIkXICEhYSOLYrh1E6BqbBfZU1oEqmtgJIB24iVMfgPRr83g/Laa0dXegQ9b4jZZlNSI2va2uNuydQv3wnG01VNq9cEu1nNrlWlB1/7pbFT+1tBMPGA584NlicU3i69hX2bprmpEMIqmfAi6FWwmJMB84kSOdT0mrqJENBYBagRTt6rxXOL4hRAXCIFKF4oJhjsq5aJGg2wVQGoa6soZiD2QZ2zMLt3PSCYqpSwZwzUKDI4RcMePJSakl/yNs5m7acqWq0NVMLEb4bUiCtb/W1TAhDp7seIeD4sebDHz/WXo2KmX7iWfeD61PvlbrQb0824Le3s8nu+I++VpPcEZWCznHfprLyjJrCKqi7nCsIfa9WB9S0XXyCqV1ZCr3YJZPSYWKpA+NEJzPU9Oo9CxsBBJFjBM608WJTl5AFkFpgfGhK22RyF/t/fk6+W8O5xtLRjiXZzsCH/3UPzG/a/+YIUIHZL6yamdc1eA9cOHqruO2BwL4wOtVCkx79xtPZhRt7HudVHR7YYn3dehEZMLsL2KnVkAw9NA5UkQIQy1h6u9w1nLGBtkCPN0ijDzE14y1BjWEhqyopMO5YIEa4wrBFpfUJBSaduMCPRl7Zij1cWKtMDWxalxT53Y54Yafx5fO1w6TptU6FuZU6te6tfRSass/JLHO+lGhGQ24P0eBjH/SbyydvJOW3nkieuXrQuJzccXj12G2rm+uTV3avnd6/4wMnxoNDKHgPv3KDnr6bw985Nj+cd6MJ6gWIxtoCoCSRDgTIEQpSbdsMWIZ2d8EgdLYJcWVnDoTSKgvCAFrjpLOsxWAqGUIJIN1AW4qggJa6vOBP7NmnLhmfMp97EdJLvMoF/9Ge4FS/vAmlsB8/To7c3Aj7jYub06de1g893Lh6xTy3Pj2Y8eMd/dA9i6dWQDvgTKGySofzDFVAqXJcIGR0KRISATxJnB+DilkS4IMdtDqopeYQylr4zHO1H5pJcvgITaaIKicAFMoxQRQxlitdB57VUexqTXRlPR9QZ+ocUajWbeOP3hHbtYzChb7beei4euwd3K/UHzwYPv2ORgjslLLb8W/9seHqcjQ4TP/d7++eTdTzX6usU7ctRp//sb5KzacerC+MAirLEtlIuM2rhTCGwCwZQRDXekThf3xkbraHY6Vzy2hHEwPDEBYZZJ4KABQcuhlQHvQZsAQQqNMUAgkB1zSkTJkMMit17GOcK7zguUJxYLCPzh6A//A6lsYse3S+AQkG45Hat+gRZk4uoJgir03WAMlKeN8dg6zb/v6Lo7cu751LC2vtP72jce8xYyrjd/y9SeTTqdBW02anW5UTvxCEEyHJQswgjkKysQYDDmof9CO9tgNOHsb7xoXMghBahLFwztO1gTlwEcXTKQ+VnBiHEtI8hOSU0rryCN0rTFw4tik9DFtH+Z++g5+8KJWVzuveMy9fOYAPLdQfJ/a/juBris4l9oHPnFg9vXz5+savfPHCRtC99ZhxTayQgg7FiE/36t0QlqUFAlbTvRhqzJrRks3M4XClPTd/s9/pEticVcpkU2Iy67Vsp0kroYZD4vVF/yDKgAjHsOyoAEF/mYFSckkzWWOFSYQXGlYQns5KNOPKIWe0byhg0MWIRPL3vw3OjmvKQQ8RBurvTYpfuj28O9IbU+uNqAzR6t39o3ec3IDmDx69QQC9dRVnBfilB5c7axvb+bEaru0ilrlQ55n2vbnThzwexM0Itrr+/O39zmBv68LupYt2JkUYdfkS6R4BBqLdzNgUjyRIFQ8ilWOFGcAgLJjyDNA1qktc1qjdVJqh7IBQVjtKAt/VlFapsNga7PwI/Mm54IVxNfBZg9JAiwzXXJKVSkwwDjz+wPuJ9249yap//Y03qSan770pfnd7yeMPPmDXL+00e3i4s36Y8HZMSuumCydvPzK/cvJwGsQ8NUampBDIK1aOnbnt/o9Hc4uUecZh+OVHlpk10y07omSxZXQLsjWpeh4l1gkx3aeUYunXzRBmhsUjIwIkWtbPXBe6jDnCvGLqPAD7TfXbF7y3R1kT0eMx3cvBSQ/8aIf8zaQ4ge2n742Onul7HfL5Lx14zc6nPnLq2eeu/q//7Yd6bTM9/8yFS0RN9vbW6iuVubnXvRa2Di0cYvP3NfSNarq5sHrnsfe/v7NyDBPfYkKAk2JDrZ/fn+4V33mB7B4ol+qGz0NY1RljFPiLnWqcZgRFkA1XyWysKAZaMqAg60EBLD1wDruDEDCNKNSNAYSa/Oar6HyeEYA5awgr7juqf/Mh++SbbncEVyw98uPLXm/4Z4/vIA15Ue9eu/YvP39S7V96591zu+ecCB2sGTk+fGR1wYC5YWSJYPGw6i29d/HWO12jYSfjyZVXs6uX1t54TV//gS0nVqhUmSZB8Aun5+aRBSFo+tYKWoUgyCAhQPpAl85jxnncgDqtmBfaYuKocxg5hxDILYpQKW0H4T++js5norLgrja/UZohJX/8o6A+jLYumCfOoxb3j78nfGtDLC7NTVI7CMuPvbc3OtgvkrwoNMx7Kmqhhj93+01VETWq8fDMA51DJ4L+IaXT9Nz3rj3z1PTqxenmjsWyVTrQdphiAwzyQVMzMuwAjUhd23lMRhyoQk0N5gBJowOOUh+LbRlGoaireubmcVBDoYCqLULWeRINmPn318mbs9pCeNiDMcGVBJ1QXw9gB6rh7a1PLKF/9vXi1iudD3xkpRqrX//czVubVy5//3yluZEMD3vNbjdcXASdnr64dfy+w8sP/nfWSpHLg6e+ePbRv9Xnri+eAS6JVlbdwQ6Qy47llnquSkAU0spJohNcQxtgMxKoTBEOEQmcqw2S1GEstwXvYGzywMMiwnmpa2yaDNUGBQT6Fjy2Td+aVgCi436rsGIQm5/S9rXMPPmS/88/1e48cOff/+HLTa+++Wh8c9tfOVmff+n5dzcP0pHX7YTxUnNudbWUHQHMfO/4wi9/gnYa62cfvfrtl/GVN8+tpYNFNXcXU5KSUM9GJuA2naKgjQrj2vMhsIWlAVk6LG5cZpMQ+totLbrSWVzjkmHJNNgQjTkwSV2B/AZWyCeGG3rAcFuhXLuee3YD/sMWgACsxjQTya/
9CL2rD//zcyoxrk/w4s8+/Bu//vQ99x1/4tIbty2EC/7Bzgsqp8iTgi+GraVjjaWVycZu93jn9AOfILGfXH3xhf/jd2fndqvQ9FtssacUimxiRNcOGuAAGDNBTkABELdG6NrSOI4L+O8e6jkbkMIkyDWYs9ZgBkoFpQHcJ5K6oLDCASZBxFBSaAApiIUS+FIGv3wVAKgGnFqAjiP2j0+IYAW/+qZ+bQK0A/3D7PYTx8NG+ckfO7y7XY2vXC8dqVXdba/wk7chySRCt957Jjx2Klm7uv7Nr24884SHSMkhzaSEDgriUTIWdS8ITGwMddXUtQJbV1AZF7eIRVR6hiCHuqjKAsoM0b5AChHIIa6ttTWT3V085SDYdbLlQNc0CnwQ2y5BV3L26HqNgA0B+rEBfnLbLDOTaCdvuIcfmp9cSs++nd+62Fies+87OXj3h+/u3MiM0oYE3e7N3Zvv3l+7snLqnvkH7/IC+vaX/9Wlbz4fg0pa0OgZzXsEj9WYgKatjeEEk4bUwucg9zyvTq2krskx5FLXmO4bUk6d6/MGcbQudBkZUZWzSndB2+BsgrIWVAdirw/nQTDb09jXkYIgJF96V1XGUco+3EQ/ODAPxPCVWb2coR//mcHqrd3dp3YFwGp/dPsDwxtX3r6asLISFPDBLTcNhkdH167c9fBHm3edyndG3/knnx5dmsUnQaxwqx9NCFKwpij0Vmp3w2YABcoViasbNpY4m0rcpa221bVMsngxFGErIF6X4NpsY8MkIw2dVK7TB7UBCiKGtSgcp6hIjRlKJJFxUHDwl+dtopSHwfua7lKt/qcT+LUD+2IS5Bx1BvQL/+ryoeXhnMs/dIa+8cJ50HDFbnDoSDRYOYPoHBbq1p//cDh/6trjf7n+949Pqll0kydxlSHfEeDXAKa2xNJXdowolQqERLT5HKo0IZEvEXOU0FKIdlOaiu2VBQmlFgo2E5+189L4czFwGuYJjaxVDegsnCHcxw5XRnioae1z+9HruylE9LY4SID5g5+QGwVsbLolVl7Nya/83/vvPd5bisZ3nJl7+6X1/ZEj6+jUHXDh3gdM4hOtTnzsk4iqd//4V9ffuEx8RhRzgfYqoIAJhYJNyzQy2FqMzVQiiKtKd6itDNEOuSn1h35t6zAOUFmBABzpEzStiWki069TgJDSWsMxsX6onK/K0lkJuJOzBhYaFgdyTbBvbCSW2kOeO1e6rBKX93lemfsenv/gSfzijphrgDLdXw3d9Ze293d0qQgetFrNI2ofNjm89Vf+UTa69uI//x/fOP+2kjqbpCFwEcXQoIDVIKakQDLyKGQY0YVTjXCRtI54jiDsY8dsp+MrrXzmI+xgw2e+tz1xSC8YWxoq7UEK5RSm0vEpqipsLAqwQ9aELUSJsxQutcmfvltLA2KIHljAyNYcgrV9M9/r3HVn/9wBshC8tCa8kbp+wynHLIHDQXT6vbckxaR388rRX/z0+Ny3Xv/9363FtCcC4GEMEerQbKoZhg43UGJF5ayxIeehj5QuPIjVAZhmVkwqVgMJS44hViAnlUqcpdIgjcjUU8BmhobYWY1IAJk2FQCV7whEcRs2DBxvE4+6xzbJvnTWoVsBevoGua+tJwX8++tscHr1m/vuxT0NIEfA/NW+ueLopJCrtw5W3nNGHBTv+9QvHnrwoxvf+fNX/p8/pFFRjhVBtW9gNCSyBToewg5myVS1UKODAygOdtPRrPYNi2JQp5KtNvwuRQHGxORKjPIqyrhqSVbrRoZIFFi9h6VngYPWsy7Hs8gNEZQ1DLCbCWRqe+dR9+YuePpAQADvW6Amd797U311F7+D9OJc/cRrl14/L+9tsjeSWiH/OC45Vc3FtgmGYmfn7p/5QHzmpot//X9ufudZgD1RONYOtQAoEoSGKrUFMtinQURgZkWsmA+b7RBpZWblzGvE85SrSlSmgsYGkMeU1VpL7UmUaEMQQGlZ1wzkAkUSW2Uqo+MA8kB6zo40phlY8L1Om//tDQsQwNaVtZ5DmkK63DH/6Di9VvKXr9ObB8FnH6EfPQLnnVqJA5/RqDmXra/f/pnPHLr/A5PHfnPju2eNR/2WawWs3WGsCbwogCwitesvxFC6ACDSFI2lPnYhi2SBnN+1LC4kDgPjHIMV91EBTapcDTQxsy0kxgRKSrRABlLM7EFp4iZtUyEEldo0u7BLgUkpCcSX3pFb0nEK5yN+daZ/xIMK4Vve1+uT4M/+eOsWihc6qNlqnpkvAkOavF45fBwB8/Ff/nzv1uH5v/ut5NVNrxOiiQnbDYNTZCpolXGcg5x0zMZmTbGuSuLZ1uRSWmNAI96InQs5SGXL5DsO+AC3lISYEKsRZUZgz5cSw6QiqI0BsABKRDtEKFkJBJ3kyELjNq9BiOs1Tb65bhgCQ0e6GFDsfuDs3LCxcmr5YL96cJ69uVbdtlheWp/sK8A67PSZZUOyex+4f+H9t2889ofi2tVW1/nQ6PlAorGyvi4Nwp4X2bpCFDkvDgdDMxiGmFNNFREgMALHVqXadyhoeZFEFgPqu6rCtsaGlBoASVhDW0EJqijtNFwOod7WoKCMUmZ4VdL0APcbwqfeX77prAERQtypEQA9Rq5IuJ3bS/v6W9+pPCi/8InOteu62lIwZcMICR7ffuLMwoceePvPfmN84ZrYmlnIIBdNXAkA89GEurAkqFSUM0A7HV5Vk0loKih1xUHIQ1JXtt5UwAsLS3cmGWgojlUILCZVOnN2DLGPdGlmMfSIJNKAibbOAdm2rHSSAM4k1ySYN1aRKzlYr52BYBhDWuP//Qz9i5dECsn33q6fuLZ2aEjvP+GzvdHz17yolfaODeZuWuqo+PjPfuTg7JeTzbHX8kHLs42gZQfjbCK1jbuIBYinklGsK2GTnM6FqFYixRbISQEhQTgztB3W05lPUbzYcbWqyqKoMO1IioEXcpVC4iNCiDIC+ZVrChBm2vnUhBBqld6ABLgs8TVsPHpJKWBb3J/H9n2H3MZa9fBd4EdPwid3s9UenZcwDOy1idONstkkqzcdFTvizP/8j0evfvXs154Xu1W+D23tZ2mCDIOwisAcZ4zcmMZyAL1IIgX1AOdKTgDktfV7ocYxtJ2eH4raCssiqHdgYYSpCcSKy3AGjSmqulUTRiZWmJohiZDUoGqDNrI2k0kJ2kPAm64bVFe3iqtCA+CgVOsJOdo0J4bozHsWW9QyCg725AMfUnsHs31pDzXY8r3v0fvJR/7pP5ldOrf97e+EUV81QgA09NIOhZnaRV7UbDdC1AAnTmu0a5O0lmBKjaEIB2SUGQ8lrAdl7namdqZBJ25A7mHumoXFEfAtTqXsQTAWRGyjuirrCd86cEgSQ6mxGaY5RB5uOeQKojOcE/SX2xoB1Kfs5ADk1l685vp39vx5tpf5p1qs1feun9c31mHgk/mV5XJ3cvMjnyqLzR889hUbcc9lDEVh1/MQz0SzKiEq69qOJ7aQs0K4htI2KjwM96zROEAsjrgO8lxX2FqmgIVVpoqZFU5pQ1WqxwDnNREAtSBCLST8aD
dFGzUgGzk8LAjuMlUUVQVtSDTRng3ePFeOJUAIrEDw5oFa8PCTU/TjI3t+Z6uHyZEF9shPdV58MsGI9fzewvFh2/bn7ui88IVfj5q9dGawHSB/t8zna1Uomg36kUIAbJQ8DBirpJMIgaTTXIT+mEpZzOYRHWUKECviRjiWnQ6HPatqMZ2AvcR1fdABEKDy4q7/5qa8lsJzRQGA0wiTY9xW1gW2tAEJnImpyyrIY/lcSSjQhyJoDPjVRfbNHZVi9+RbcqLBkRB+9CPxpR/u17Xg3Fs4Hcx20rv/t89e+LMv2kYvwyZyiLUy6PfqakSCNge6rp3D1O9ZhFWCgZYa5ThuJqlEoshh6c18wziHCNB9Wfkm1YXYQHqiGCYeI5en7K3d6odjnhWqNlAC7QDmEN3co0SnwIV2nJJlq3KL9rXrxO7i1GwI0IR2lYXElIegf9dt5uol/jcX8l+4pXWkh5JJMs5BuxG0T6xm0/Thz/zc+JUn8+tXA2gsWtXgAiiCEtmo0VNsbzoDYcbDtldXkQuyHoMy9BMz84C/ORWMUr/Bajf1Rk215OACA7tpUlgUwyD2n7mSP7MLJ3WFjPFj6mE8Rm5g4dGGm3WOPtTLSMUBrd1cw00qTDFq5roRhS9cKTF0OY4uKP3JBqFR9dNnlp7TO1fPgcqWJ850t69UtaiX55vHWhgGN6MhOff1swRHBlRQXiQ4Zg1uLOFlzuZDpTugmSHoeT7YTgDDpYWmprEsZCMy2GNVwhzt115FNtw4mcZ9IGT4/XPqtbHazAB2ohN2AlQjBgkHd7r2535Czx3tfPsVOTfskaSyfoQJAsEeQnMwWPDGorgioENgyPCsKCYYnXhoQFci9KbfDGoszd7GbGvD9EEUnAjXzm/8xO9+8vLffWmUSFbLOBr4XaaNVVJ7DSMtNzk07MArTQV1hlQExCRrQGdDXFeAcyyzSmIOI0enMTN2Fjebr4zUsxfKzanxGjRuMJGCpUD/7F29O27pv/x2ubs/O/NzD6IeH14/K+MWmYOWhh5F9WQOdTo2z+Hj68Q43cL2JwfF1677L1XVZ4OorsfZDX2yQT7w0YXr526AFNVH5vxW68xn7i82frj13JgdIyaXLt8teyQkAKFwLHbawaKuSwKYLnIVJbhsCcytzhPj+cIbzHM18iM5UTGZigzn5qAk/+nZ/IIRc5gxjAF0D/XBvXcMX6+Sn3zw6ODOQ+HJNx7/W/K1757/4je21q9NPrg6JcmAoploGOaFSmWUtcE754Fz7kSTfWdbdXl9oNw3Xt1lF9n8ifbDp9p2MvVzDhtieQDk1qjxuXte/tf/FxkgsG9LqfZId7FukQaopklIDlVYcUFhQJTHkZpKSmQ9DhDqEW27TAKk2wdVBhtK0Tr+u7eKb1+ufd872gkHBCjIP3em99FfuHMsxltffP3VGzfeOT/d254+9f3Nh5D76QeXD/3cTYsRIz50C77LKjMb2+Yh+901UhiIkdzN9d0DpnL9YgYeuyTvWWGHQdm+t3vx4mQsm8sBT5PqgV/87PbfPZoCqkLDYRRQgIOKQA10TGLiJaVsRsaZNLnhVw3t1ZXOmqjnsO/1cBPDKptN0lYjrtKr8Pe/O7taiphGzugPngx//sOHf+tLa82bl4so/ou/+u5fvJaAN9Jf+uDKx+8//OmHhwudqhgFRZE2WZOsOrvXYjSwzdgzRL1UIQjqk77XYuYMgcP3s+Zb6L+s1fMevPNYY6/Y1gKENBnes8TKODwUfe/pjah0B94q76Uh5d6shdG4KADFLuUSJ0iBwky4sVmZtvpDRn2rpSZlKXFfVpTH4q3L5Z9/s0g9djpuesQcO8r/xa/9nOhFP/nGV19749pff287v5Jm2gQYfeL0uMySZDt6cy1pEV40OmD7h6SOYEPq2iKZyR3lrU+Ug3S7rn2ATewtHg0eOs6f+Dc3vn2p+On3+ntb5bSwx9vxxtuTD//aR64+8QTJYt02fpb7FjtI0bLNx4tRs3AyZ0UseGl2BcDMdr24rgxkjhNlDfRDP6pE5X/j7OhblxX2yZwP33tbfGTo3xixMZSP/sMPLq5rHNDPPRijOxtr/zYTDLzyQgECjH0TLaxEh+YDdTjUl4jNMGk5IF0Wu3P7tXYAAjAz4eUqr6Fudpt4U/zEAveGNCngZBPqmvsnvDhpsYjsbEwG88uFuYJaLV1JrH1Z7s73F0vVyHWjiq+006aMtVacjg/w3JGZnnasN0qCw30wLvP/9I39s1N5/7D15sbs8x8++sn//pPb++v/+SvP/Yv/98XThxY//jMn7z3kzLXk8n7+Gz/lq5TO3bTY7FDGEOk+4rorzSO3Nqwle1DFFlcVHGL6WO4A1AOkMmctIl/fdR9MZ8+8UEah+dSdvc39Kod2cYVU2fT9n/30+isvAR7NsgoHEYQt150CAQhdzSYGc8PBlqt8RSvnyMCnFQ+1nDBXpxUeePnGVfAfni72CjMfhdi3p5c6x47Ob0P89f/vyrdeV7/16zffdpiI3d0bF3ecsKg3f3gFAbfYGXRx0DTBcDJJdr7/9uS/PLp77lWCMAyd9Xt415irGaTIKUyOAqqMTWbm9/5qEoTB6bkQI5EFFTSmc6zppi3aCnYubzA/4GEvxygIWuRA7jeWB5PzlNkc95ha1fgtNWqWAUBlbHs10mGrADsFILb524+vJ7npNvF7j7f/l//h4Ue//crZq9sX33rsrvnun/ze8QGWxfa7o81NB1te23XbIW2/j932I5df+OHbX368XD/AcGIj5xmmDSSt0DLAitLubztjIXEYYfDLt7pXtuRTI/jKAf4Qqlb9YG1U0Ey3mi15kD/4k++58fRT4zFhcRv00raiuspdMxrykeL9sbK03suLKSZBY0g82K/GB6BuEVcLjyUz8wf/cK0DvJMLbNvZ3/nNT+rm3NxLbz5/Dvz8J07cvWrS/fXNAvVD0ug0MJvn/iI/dteN65ef/oVflZMEYHdowAmglZViAhueITxCexugEdnvZhZA5yBaxM730ed/aQF9ZfeZLfP8DsyS+viCbQfk+O0YucA7cfjSq2t8Cfl7U7I/AKtW1dhJ1YnCXTwJECwFMtG8TNaLvRB0DijrB2FtZ6Kk3l99c5uHYVGK+x86dfbs1fXra4+/8w7P8P0/MrypLzau58hIarYBY/7STcHC8WvfXXvrK39Sb+22qQV9YGtcGt2ncFzgcLEuS/j/AwW47bSKRG9LAAAAAElFTkSuQmCC)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAkOElEQVR4nCXY569t6XkY9ud5+1prr933afecc+feO72QMxwOOaIMShZlC5JomYEsO1GLYDkRHCAJkG4jSGQYieMgDpAvCRIblmGJEqwgpq3CGYqkxJFYphdyhlPvndvOPXX3vcpbn3zw//D79MNf+fV/9MLR6Vhmea/HTdYfTijG8d7WJoa6Wguts7L7zmtvXN27vLd/5ej8ImIcDifz5WqxmiegXtHNZX5nud
7/vvvh6953Xt3/2/s/qUbV/PHv3+r37N3fm0+bq1sa7b19rl/PCcUzIVjFq0jmay7EzDUshQfYZJEIiU8sZcD6fMHIVyquXVmNHi0X2HAwhSWy6i6rinCG16pDJgvvdkSexojfY230Qm6NR/8a1d94vQp80aDBxoXBFFbwV6F2VgEgsAiHjct7sX5zUxfDP/uqrd++u7T6fpvzlravFe2/emHanl6X/z3/58eWbW199+7hZeu/JBcwpm5Cn0tTQyi53wRWmEFGVMoNINvLufDoBAOft2s7WxWQhZITMRERhPpMQamBcykKdagZqzs/rtd7Jq4ez45dFFe5+8IveaAzqwWlV1P3QK8teFXqGPRAwBXSFGKBgAv/992dVMMLy86fHi0Uz8KffP362f3D0yx+/9/ZPfjRYH6LQ6nirPyoFQDTFjGKiEgmIsPWOmNOgcs4ksKIBARLafBljzIHD1ubWgycPiFacY8/BMC8WszKUYDCbLCmXBUdauf3e4avl+cFe3R/dfPOfluOSqew5gqLyRFz5svRUUO2TL53nYJoCcVasHf3Rz+69eHX67PnRrY31t9/Y+u7Z+flF+7/8t3+EvfAXv/6P5+etEQ7G9dpKL2UVMXCIQAmrZGTgCByTM1XPznOPqSBShzadTqNGQuz1e18/fErWueAIKYtJh45CNlm2SQGzZTo/eDB5+ZVz/dXrH4x3tkzH5DH6UIWiDoVzBYAHYzSvIFkjOeyS3rgyvn9n7dHunol++N41I8uz9kd3t1+/ufNq0k4Wi6ML2Ts6TJ0Oi+LG1R21BKhISpArlwuIvQrEtI1t08UOoM1tykrIqGSac6YQfBvzZIaoqdcfMloWRVLHoGBJyFBSrOj4h+eEMl6/deO1txbaYZ6yIDr0DggBwQM2IAGzISRPlrp8dbO/uR3+4+9e5ug+ePfulw/3719Z+2f/5A+e753+f1+9ODi8YEFZtt8/frXI89Lj9tb65krPFHOrSbFL5oerzSJbTiWbOYLY9ZkMrc3UmjUZp/MFAZxfzEtXZA4FSlYEBOcAKYBhWbhQ62rIJGlSji5df+/HWecVOChKKLUuggt9q/oBM1Ig32WXJLMa3ruzNRiUf/U3z+/sbK2uuL/49ed/8tEbf/iLj/fPz9A7y3owa7lg3wtHp9PTi0XKurYyuH/vmkgiF5KQKE1mF0aBncUI2mBHbtqRmDeKgfHiguYLMdBQFgqacgIC77hp4vmkYYdmMjk/Sy215Kni8evvfyxuqFSQK8hX3g88FwSeqTSGjKQps4Qmwu+9eyPNpp8/PPzw9Y1QxE8fHP/jn79z90evodJouPbBO+9trUMznTPWrija1D19/sOyTVXhblzd6ffYjAAJCUGcckRjKF0i8R6d69Alr15VBz2+OD8VBF8VvkAAG62M1FQURIjIcpf3T48IGDHT6tsf8nitwiWhGUvloCiKUIRxz/W8lL4orUTfE6fv39+JGg8u8J3bmy3zt8/md7cHWRZfP356nprg+Pq1zbfffSc3M/CaWgGtnzy5mEyWjmhjvHLz2oanDCgIiNhJ8mgEyRHVmNmMTMSBekSJtlgs2KBiGhUFi1bsEMDUqrIggOUyx44sNdNstHPrPmUyLtlx4IrdkJg9EyIoACK70Dno/vjD++fz2cuX5z/94PLpZPJk96Tn5XyR/+1vnnz52cMvHzw8P58Nq/K1Gzfu3t6ZX5yvDGuE9uhssn983EUd1O7O1atVAQhmIFkdsyYgSIm1VZIuG0EpkFVysuzLwiEVPqyMBwBIzAaQYxz02HtvLA5wis4nJVP0FJCQnXMuFF76XFLhkjMPZUYRCe/e2f726ZOTs3z/3rUXT5vjWHr1YsWkibXzj6fxP/3Dw2Vuu5yHVX3j3vXtra219X5MCXD46OmL2aIlwu2N9bogQmfglUTVd9r5ukrZ50QlmEIXLXRYVRXfvLK9bLPzPBiODLXs1Waac+pXjpBym2KzDMZLcKRGQsmxFRTqwrugHZjm7A2MsyZ4/frgopOXe+3b99aePtt7dnjIKTGLA6ycb7KWkk4a+OyLxwcHRzF2l8e9jdXVlUE/eIhx+fjpwauDCzHoj9fu373FZGgRNBCklYEvXGJIALkzx6qeaVQoml+tq6OLUwAreh4sBnaEuEypCD1F6lJq1TsUZ0AFJRSP2DNHzCFDD4nBqkYAAt/eWGUMT/eXv/zo1qP908PjFj0DKJpDn0E5sZI6Z+77J/v/6ctHT1/sEgVHsL29eeNKn1mWXTw82Guj+WBb25vDAoGRICH1TPiiwcylGgKigBLLeDzU2E7i+au94y6qCCF5USOytsnOOc+4aDvFHJEYlITHyOyCeu+M2TGUJOx4FMq+6fW74/2zs9uXeffo4mKZM3PORJiKik0RuWMMhIyQouiDJ0dH55PJvBWRna3ND959f9zH4Hnv8Px0Og0Am2sr29sjsFx6bzzrOmDLBcaS1KsRBXa8dzr78WtXDg7Pls0yeHf50rZZLIuQhWOaD0Z9M4wxqpUs0dATcw6eve/3fIVclI5cgVVJDtK9ty5///j0/rXthO6Hw7mzAhUcSumqZRsNEcR5ABeSQCS0GOGrb3ef7u2dzdthxdeur/38g7crTwen5y/29qWzlX59/95Nj+x8BBCDnBCjcVafXBRxKVNs2nfee+P57rEaIOi4V46qUBeeSHLUuqrJdNkmsEwGGZSKbOAGxAFRCydGxkDk3ds/uvr3f/+8W3aJYXe3YWnVYhJCLpdpjiTOGFnBNCYFxixlq+n8Qj759Jujo5kq9ov6zo0bb715W5K+2j+YptZ7f2Xr0sZKyMl5K0RdyIgaAZW1FLSAVBXFxtr45fHkYt4Bal26fh2YGAwkN/3+gABjNwNISySFTFgxBQghogtEkKEwqq5v+N2Xr8jx7ev1J5+/IK8Je6xQhmy5YyqTQURNmMyiSEkWIFPlnQK/Oml+9/3z0+nEI6wM67fefOO1uzdevjw5mUzAbDzs3bp+mR2JgSPL6JjJu2DcrfaBNV/Zck0rwyq0zZyQiqJcXxs5D0hODcaDAhA1OqOywugMaHPsVwpgrGNWIV/BcmNU9nrl+Sz8o4+u/903J0hVEy3n5DyBkmHAjAgVkwSribhAy85ctej3SuQFsn/+4vnp+UUTW8H59trgrTdvli48efaq7XJR4cblLUTx5CwTSDZTkIVK3bVyabv8H/7rP5hcnL5+a+PtuzdythTl4/ffG/X7prK1ubq1scYOYpo7ajMMM7P76cc/XbbTL7991QmSKAZ35+bq3/z24db25m+/fGxQZhBi9gZLQ9SOUROCI0apBSOBojNIQFQ3TedwlDHG1i4uLnrssCoH5YxxefvWxrO9vdPbdzb9YGdj69a1jeVy8fjFPKE6oCTgWduIG6ujTz77Lks+PD6+vKOHv/0H54k9v5rOggtocnDeEHbXrm72K/jz3zxxWR06unr9hiT5/tHzjgY/fmPz5auDq9vrvQHvHZTCVGASUAFGBYRa1AEsRIhpIlqhWcYMRik55mgmGVQVFwtd9psnj57/
6I1ba2tX+r3N3vDFdHK4suJGg+qPfvHeb379aZcWzlFdhX6wqiy8o6Zp2yhALOCe7p6wOuckW1SuQKViePhof2VU9Qvc37+IKoHZPf728EfvlZd3Ln+3uzew6eGunwPev7f2b/7i8/FozbIqsAvYSaocxmxq0TvO2lLuOY6dFSEH4oW4ZZIaqcU8QjdtY6vQ/+rh04318a3hcH11SMXdi5NXpfdts/z0d59/8t2zSxsr46L2NRDS8XHbptzELIaSoiMXRQ0skLAzhI6ZCsLhyJ9NzyxBcHD/xpXnRydub3K0dtDf2l6xyD/7xVt/9tdf3Ll25bNvD9ZWR10mB8tMNWRl4ygKRJgxi6oxsqkSQRIQUge5QBZTF2iWc3Fysrhz+0rbpt9+8aish/dubK0P/WJCf/m3n85ms5Th7fvXjg7mx3HWHFPU5JRExYyVusJVKSdVIohimMXU2iSOUM8WXVkwORtXPe/TpdW+e/2NK68OTrbXq2sb/X/3m2/J0GN7cJ7XvbTQAgBiBsgMPmNGQ0IDIo1kuAQgQDUHIIZOUICRBBAIskRnPBysnBwcsnQZ4q9+/cn+0/PNcU+Zf3h5NumyE2TPZopWJItMTqgF9W0kchmoQKs8pYwZxAf0zNGrA+zaud+fX6DBqAj4f//uaRX8n//l3925uv13Xz7++K07//DF/us3x1/sHqpWaAicRYgQGTJC+s+JUTQjeAJRQoMMimqEJp40AxPx2iD8yR++92LvkGUe+r3HP5xOTi/G68ODo/OjpWMwB2QmmcSUyTIhEAIaAeakJIYghL5FYEjBWEjbpM4TknkwiSYGxNK53f2jR189u3V188nhZLMcfvHw+CfvXKkLL0/PGUgwm5IjA0tmDrGHFgUNICB3WQGUADKzAzVQzkAKHUo1X+ZRr+adlX/97x6HQT30OEl4sHuu4Eq2Tk0wx0wue8SoRgqGhkAGUiAgmSgKSxHYWsqipNBDyhnVU6NY+AxiXeRATbeczpcff/Dm7Z2tjz+81S0Xv/+Tu7998D0DEWS1jgA0e4JS0WUQZQ/i0EXKbMBoBtDLGTSTgqo5hBKo3Rjz/sGL/+Nf/g0WtRf4ZreJEQlqzpJjB0qERJANfNaSOJIKKpIgqKpBAhQEQWq1SmolLxnabCAACgWopmyMjBTxf/t//tpTb7xaLBbtJ7/75r/66Qf1sHq2f/6b336lMFBYOiwVxQDQCE0dS5e8Z1BFo0jKGYWMEUCBUAE4oUmvrAhha6Pa3T9ZdEzgFdVR7KQIXkEQMioIMJplMKcq6DIKMwEaGYhayBbBgAGFPGsE5EJTgx40A6CBEhk9+PrRsxeP//yvvo7T1LXl44ODk7PFlc3xzcurWHZRfQQBRa8BDBE4m3p0CsqkYJ6IDQAAwQjNEFoENiPv+N7tla++e6WK4FBMWLhLvgfkRVU1ugZYxDxABWSIrMkBgQJmrSJZBnFQALhkHGCRAVVpCmiiESBhEpOkQm/ee2PvRK9trn354PGlzfrTL/a+fLR/Np/dvX25FPROgqEPPktLyAgAVppvkUHAIUkyZSgRI6EwA7tKRDZXhjvb1W8/O676I8DgsjGAsjKlhnKjCQkwIiiCinVdNvYiBKSqkgqjJYqhEmAHpkwxW/CQMjcEjs1KExZPiAz8/wPDwv7jmC614wAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAfoklEQVR4nJV555Ndx3Xn6XT7xpcmY3LAzGAAggjMMEmJstaSqyTvai19Xv9X+8216w9b611XeddrWiZIURZFEHkATMbkHF4ON3fYDw8zHJAAbXe9uu+97nu7f+ec3wndF/3X338Fpw1pQFqD1loDACAEgBBGSAMgAA2vaBpAoVcNACCEXjmilFJaAJzNigAp0BiQAkReMQ8g0Ei/cnkAKoU4+4MRIgij762Mzl2/g/6V/T/cEAIECJA+naAtQ1sepQEA8EvoEUII4XOdLwkAL4PV7dnOdZ6iRN9Hil76+rcLgAgiAK9Qtnq1mUGD1q8ZopScm0hrUG1NtMVGAKC11lr/u/UM8BoGtbG8Gs2Z7fU5xpyq9NWzUYTPmUap9tTotP2raABea4HXCgCgX8MHfMqHMwnbkiCEEcKvFJrCy96BvucB3+/5HtIfGnzl7a974vxKp9ARgFZaaaFeI8B3Z/ju8+2mlPr3wfwhoyH0mpCCECAArfUZe9vdAErDawT4rr5fN+srY84P6v41IF8EBK0BQL+gjNanlFXtBZXWABoh0BpAKwRACDnTyTkk6GUB0MugUDvUadCg27e3dXfumbaiXi3Aa/oRxoCwkBK0JhghDUprQAoTqiUAaCWllCkCoIQggFQKqTWhBGNCMAYApZTWgBAghF6i0A9z/XWj/8Z+/ULnbZNipBRCgDFGWiut0lQkaULaCUFpqVLQAFphjAkCjBHCCGPACAABRkhprZVSStOXlvqeD30fzb/i0N975gXusytqE0ArLZVUQgqMEYAmhABSIkkJRYZBCWbtjPrCHlJpLZWUSGFEMMVEYQDAUqtTCrW/zoLYOaDnosGLwfOS/ZA0WuvvCaABQEkNCiFECAZQABohjREAgOOYABqU1looAW0TMYpNzhBGSZKkcRqnSZDKVKYiEalMzvnAC0m+i+q7TEAvkV4DvC6knH8cIdSOLRpAgdagCMWAMGiEtEYIMGiNlNYKQFOCMKagQaRJFAZJmkRRFPpBGIWBHwSBH4VRlERpLFKZ0Jdo88LK5+hy+ltp3cbdhnNGMPSDCevsnm8VgRBGWoJWWikhtFYUY8YYxqCVlELEcdiK4zRJ4ihqNuq1WrXZbLaarTDwlZRCSCmF0lprhQAD0vR8sm0bGWlox+Hz+RhJCS/F5n+96bPUcWqEFxEZAUIaa8AIY0wYIVqrZqPZbNRazWatVqmWK4HfTOMkSeIkjkUqEAAhmOD2h7WVgjHBCFF1xh0EoKG9CDot3pCGU8UjgHY4RWfXMyVjjAFBO9lhjAG0UgohiOJYSckMg3OOEFJSAmiMACEFWCmpRJw0/aBSrRwdHJZLJ41qLU3jNE2RVJQSgzGTUoVIG66SSiYyTVMhhBJSSaU1UH3KmjZHyWkld0oCfd6Jz+M+lQQIQhhjdf5u0EorSgjGSCmNQKN2BQRt48skChqNaqlUKpdK9WrNb7ZCP0yTmCJiGNTilpZSSwmJFDJJ4iRJpZQ6TYRIUgBg1DANbjGbMUZP+aoRtPMe+tbtAKCdML4XUr8tNAABRlIrrVWbdlq11YyYQZVmjBHDMJCGZrPRaDakkHESFEtH5dJJqVgMWj7SwCnj1LAsihTSWupYJHEchVESxVJI0ACIEsxNaltZ03GcjJPxPM91HJPxlxPZOfRw6q9wlvROq5SzKwAoraTQCCFKMSFEaSVSAQCEvshRhGDL5FEQHB7ur6+th2GQJlG9XlFSYoQ90zWooZUWURIlcRJEcRQpITBCBBGGDM8xPcdznKxpuQYzTc65
wTnjhsE4Mxil9DRKtNUH5wX44daWQYNWWhJMACGllRBpKlKMMQaaJrFtmhjjZr22vbX1fGX5YH8/jmJGsEEpIYwRihFOg9RvteIglFK63GaaYWx4rpv1spZp25bluRnL8gjhjBqMGYwRSqhhGBY3DcOg+MxjX861Z/UgOh80X1Z/m0KYYoSQUEJJqZUkBFNKKCEEoTSJisXi5sbGzvZ2vVYFpWyLm9RgQEM/aPj1MAxFLAlCuWw2n89f6OpLo1gL7Tpuzstaps0NwzQtg9uEmtwwLcvi3CCUEEwQAoQwxacbmjM3PW+BM8RwLiXrc47RjreAtJZSaWkwZppcSxmGYRyFmxsb62tr1WpFCdF2d4qQCONGvRr7sZSSc97Z0dmRL1zo7e3q7mGapHGKAbm2k81kXds1TZMbJjVMjamQSqRpmqZRHEsplRJanyunv408ry/Ozo4rTn9pAAVIYYQ1QgRjxihBqFJv7O/tLS0tVMuVJAoNRm3LllIEvt+sN9MgSVspRbSQLwwPjfT3XfAclzFKEXMsxywYruN4bsbiFmhIkjROkqZfi6XSCBDCjFHDNByDYdyuRs+2qKjNaIVOXQG3b0EIISSkNAwDtJZSIoy1UmmaAmiEoV3MEIIYYQjpvZ3t+fn5g739wPcJQp7lUEKSOK7XG0GrpYWymX2h/0JHvrOzsyufzbuO4zmubduc8Z7uHqRACJkmolZrRFGklGaMccfyXJtxw+CcUgJICymESJWU9DSot7MYtLfIWmulv80ACCFGKSVESCmEUEq1q1mMEQbAGDhjBGO/1VpfW3/6ePakeJJx3Xwmq4RKgrDRrPmtltK6kMsPXBjoyfe43DMNy7Itz/Fy2Wwum7NtmyBcrdTCIEzTlBBicbvQ2WG2XYYhgZUCHasoijVgTQkxbMaY/d0d2YvYf1YHtbfTGBNCgjAUaXrKNA0ASmlCkGUaoOTh4cHC3Pzq87U0jjOWR4FErVClMmi2Aj/IZbKTExfHRsc8J4MkIsAc085ms7lcjnMehuHh4VGjXs/lcpZr5a0CwVgqUEpGkBKlQWoAwBQTihVAksT1ut9qNcMoomeRHr0oLTUghDE+I1Kb9GEYtlotSqnjulqpJEkopabJDYqajfLC3LO1tfV6taY15HM5Tnmz3oiaQaVYoYhMXZy8PD3TUShgRAhQz/Py2bzruoZhhGG4v7/vh2Emk5m8fIlSClpLKVIhNAFMGACkMhVpGgdRuVY5Pj4+Pj4qlk4q1Uqz3gijiJIXJebpfoC8IL0G0EoJIdqcEUIwxizLQgjFSaK1ZoaBERzs784/fbSxvh6FYcb1DMplIsqlRq1SAaE9yx4ZGLsyc6W/pw8UYIS7Oru6unoAoFgsNpp1xliuUBgYGaaUaAQKtJRCgE60SkUStsJqrVYul/Z2d7e3N+qNWqVSaTabUkpCiEEMTAk9H/UVaCUVxhifekK7KaUMwzA4xwjFcUwZcx0niqKFhaX5Z49Kx/uWaXR2dGipmvV60AojP2o1Wt2FrquXr06OTtmmQzTtv3ChI9+ZxOnR/pEfBplcZmhoyLRtRLHSKhUiFjFhLIyDaqN2dHy4ubO9sbm+f7Bfr9eTMNJCEgBCiMkMzAnSSGstE02/pTsAKC2lVOpbGRhjhFKtFCEEAJIkQRhbphnF8erq6oMH9yvFg46so2Qah6FSyvf9crGcsTM33rw2OTbl2RnHcjqynR25DpPxSrFardYQQf39A45nM86UUhoBYwwIjlWytLa0uLzwdH5u/2CvFfpCCqU1xphjajCGFcIYc2Z4Xqaz0NnR0eG52ZecmBCMKdNnMRUQACYYIYK0lkIIg1LOjSBozc0/e/LkSatRd10bY2Rzt1wqH+4fQoq6C91jg2PjI+N9Xf02tzpynYV8p5aqWqkJqXr7+zL5jCKKGFhpJZXSWhcrxdnZ2Yezjza3N0qVchhFjDOTG4oxAM05txnPGHZfV8/wyNjw0Ehvb28+1+F5rmEYtL0bbZcKlAACiKKIGxahRhjGhFKTW1EUEkyETKjBAr8xN/d49sn9ZrOey2Yck0d+dHRwFDZDjs1cIXf10pvT4zMGGAPdQ45l26bd9Fu1Rt31nL6ebsJpqEJJUq21YRjFYvGPf/jq66/v7O7uBn7guR5W0JvrwJhKqbRSXV1dIyND46MjV2YuDQ0OMcoZ46Vi7ejoaHN9KwhbL1WjQggEyjS5lELEyjRNQlgYBmEYgpL9/T3VWuXhw7vPVxa0ELmsSwn4zXqt1PQbgYzU1NjUuzfey9g5i1iT45MYCDXYweGhAtU30Ge5lgKJiDRdHil1Uj6+f//+Z7/95+fLzz3Py2cLBmNY4bznSqE4M8Yujt688daVy1cGBwcsx6jWSrs7uzvbB/v7R5QYlLKjo0Op0pcEaJc9lBpCxEoLQlAch36z6Xq2Zdil4tHTp0+WFhaisOW6FgId+WHQDJs1H0v8xvSVW+992N8zoFIoePnOnu5auba3t+d4TkdXwfIszFCYpgLpwI9m5x598eUXz57MaaX6evujMGzUGh2FDqRQZ1fn5MTUrVu3xkfGKaXFk5PZp7Pl8kmlWoyTZGtzr173b936cGBgsFguJmH80uk0RgwBpKnEGFNKUxGHUcvgJJdzo8D/5ps7c3NPCUa5bE5rFfitWqUZNcPOTOe1K9dmpi873NFCTYxNOLa3vroRBEH/QL/j2shAYRoqIQXIhfn52198trK6HARBHMeOaVumySjllA8ODb5z8503rrxxaWomTdPny8/3dvZ83280avuHu8cnB0mS2qZ36dKlfC537c03/9ff/k9m0JecGCMCSkshKaUIaZnGnmsbhFbLJ7OPH2xurIASjuNhjRuNVqvpqwT6ewZvXrk+Ojiacwue4/V092oF29vbgPXI2DCiCAwIk0BocVQ++vIPX967d7dULkkpM5lMxslqpWQiM5nMh+9/+Mtf/LKrq7NYLN69e3dzczPn5fyW/3z1+c7OdrF41NXdMTUzM/v46cm9uz/9s589X18fGB56PPuIopcoBFohhDBCIKUgBGddu1IpPXzwzb37d21u57NZUDpsBrViPU3EyNDI21dvjl4YoojmM/m+7j4hxfHJsdaoq7eLm1wo0QqaiOH17bW//39///DRI6mkazm2Z1mmqaSyPOftmzd//OMf9/UOhEFw9+69leXl+flF0PCr//grznmykPb09IyMDQuViDSN4/iNN64LIRcXF3//+z9MTY2ff8WEQCOMASGiVYpBGxQXS4cL83MrywsgBWeEaNVqhY1ai2jS1zdwdfrG6MBE1nQ68wXP8+qVRrlSzncWOjsL1UYjEYlQaaLTh/cfffb5b3f2trhlGIbBKSOaylROjE989NGP3rx6lRCytLC0uLh0dHTYarXSROQLHQpBlCT1ZmNvf3d6etJx+L1798fHpmrV2tyzOUros6dPDw/3T19wIASgEUIEYQ1aCGlwLFWytLgw/+xp5Dd7ujqU0EkU+fVm2AgHB0Zv3nhnpH/Es7xCNs+pUavUwjjOF/KYoFK1YjkWt4zNnc3
Pfnf7m3t3yrWyaXLLthAgKVRXIf/Rhx+///77juMUi8X19fWV5dWtrS3OeavV8tzM5MWLjLLDw0OE0NDgUBxFc/NPn8w+mZ2dz+c6Pvrox2EQUUaLxSL9lj2ofS6vtZQEtIiS7e31+/e+icLAcyyb8VK1XK82tSRjI+PXrr41MTrp2RnPchgxarV6nERuxnMyThAHSRpxRBdXVz/950/v3b+nQOVyOa01SGRwY2R05JMPP7n+5jUp5Pyz+XK5sri4uPZ8fWNj4xe//IVjudtbW0oI13K21reW5pdc1zkpHR0c7VFqaK0ZM6IwarZajDIA/Z1yWmstNJK2xXd29m/f/m0cRZ7tGIQ26o3QD5RQk2NT773zYdbrMAjLuhmLmWGzGYQhNxnjNBGxYbJIBA+fPPq7//N3Ozs7hBLX8QjGURRzk7914+1PPv5ksHewWqksL690dBbKperqynpfX+/u7u7K0koun8/lckmcRGGkpdrf3Ws061Kl3GKul0kTmcQxKMAIaaExJvTb8x/QCANCIKWsVItzc0/L5XJfd5fDeRyGtXLNbwQTo1OffPxJLtMJinbku1zbDZqtJI0yOZdbnHCqsaq3ancf3Lv95e2DgwPTMimlSKM4TAr5jo8/+tGffHDL4vbKyuriwoLWMDU5HfiPOju7LG7/+j//5r//zd90d3W1Wi1O+aWLlzCgNI4NQk3PlioVccoNyyDUcxxGGUbYsS16dvCNQGslAIRIw5WVxaXFhe6uLoOyJEpCPwqDOJ/tePftd3OZvBY6YzsZy0njOAx9wzKcjA0YUpVWa5Wvvvnj7d/dPjg6yOXzhBBKWRql/X2Df/6zP3/rxluBHz59Mnfv7r1nT5/evPkWZRwQEkJUqtWRkZFcNlur1cqlEgHEGQOlHNOyLNNyTaHTKIxbrTifsR3bOdg/wghhQPRsCwagEdZKpifHByvLC2kaZ103jtMkCJu1Fgb23ju3ujt6Iz/uLnQXcvlmvR4Gge3ZhsOBaKHFcfnwzr1vPv/y8+PiUWd3FyFYpjqO44nRiZ/99M/fuflu2AyePXr21Z2vn83PCSmkVifF4tLiUjabvfrG1YP9/YnxcduyFhcWPNeTibAN8+qVN7q7uyQIjVUq5PLSGiXctqwoihBANpOlZ+81ECCMUavVXFqePzjYy2XzIklBaZmqMIgnR6dGh8dMbjvcKeTyaRS1GjXOuWlyCUohXW/WHj15/MW//K5ULXb2dBKCpdBKyanJ6V/8/C+uXnqzclKZvT97/8HDlefPS5WKwQ0AYIaxt7/f398/9+zZ4tKSxTnF+I3Lb8xMTzNCLk1NIa2LpZOm3wAGnZ3df3Lr1urzjUq54tr2QP/AT37yY4wRwvjFJwhaqyvLq8+XMUKWaSKMmk0/ipKsl7t+7QbBLONls9lcmqRhEDq27XkuIKWJKtXL92cffvXNV8Vy0ctmuGlFSRqG0cTExf/0F7+6dvV6tVxdnF/c3NiKgyjjZT0v2z75xhhfuXLFdd07d+6INB3oH1hZWdnd3eGcd3d1WZa1uLCwurJqW1bQDDbWN/L5/MylS36rRQn90UcfaSUxBk1AE6SVSA73dxfm5qIgzHqeTEQUxs2632qEUxdnBvpHXSfHqIWANpu+YfBsNocJQQxFKppfW/zsd7f3Dg8s2yaYiUSJWE2OTf7mV7+5OHqxfFJaWlycm3u6t7+dqjifz05evDg+OoYUgFCXLk4/uvfg6PDwww9u9fb2ZjKZlu+nUgBBiUgwRoTgnp6+S5evVOvNja2dqZmZTD5vu/bExYlPP/2UkvYrAKUatfL2xnqtVHYM06BGq+7Xqy2QqLOjZ2ryssntjkKXQXgcRpQYBucAgDAKRbS4ufy7r75c29ro7OjAGgOg2E/6uvr/8i9+fXnyjdJxcXtrZ2lpcWl5PgzCNBV+HHf3XZgYHx+8MICE5phVjkuFTLZULHb39Vy6PGNaZqrSk0oREzwxMd5oNCihNd9XClerdWpwoZSTcTdWD9I0xgQhjCBN4v293Z3tHUIIxkRLHQZhHEQGMz5491Z3R7dj2pZpa6nSJDFty+AGIigR6eLy0me3P1teXs5kMowxpXSSJK6T+c1f/ubKlTdL5XKlUsMYb29tra+v5XKZqamLWsnny0uhH4yPjaVxUq9VhgcHOwodDx482NjY4KZ5UjwuV6vNZnNraytOk0wme3x8srK8opV2XLdcqTDGarWa1rq3twenItVaNxr1ra3NSrVCGCOGEYaxEgqkGh8ZnR6fzHnZjO1FfpDGscENwggQjCg5LB798c7X83PzpsEt00wTEYYhJeyXv/zlrQ8+SOL4ztd3To6PCcb5XM6xbYubnR0d+VwuTdNGvY4xIZTsbG/fvHmjo1DAGG9vbS8uLi4sLNSqVQ1QKpe2Nrd293b39vbKxVLW80aGhra3NrOZTKVSSZNkfGwMa62FSA8O9g8ODxAhmDFAuOX7Usju7p5rV950Ldvhps24iBKCkec5CCOFdblRffjk8dLzZUoIxVRJraVK4uTn/+FnP/3pT5XUDx8+/pcv/yUIgmKxlMlkLs9czuVzpWKxVq1alkkIxgiESLa3tjjnN65dK+RyBqMEY9u220c9tmVzk0shWo2GzfmVS9MijueePBFJ4pjWk8ezkxMXMaWkXq9tbW22mk3LshAiUZREYaQ1XL18dXRw2KTM5VYaRRZjFjcoo5SzVKaLqysPnz6u1euMGQYzKMJRGN28dvPnf/bzjJtZWlz6p3/8dPX5KqMsieOV5eVqtVopl5eXl4rFk2w2OzIynMRRHEUiSRcXFi709V2+fBkBiuM4TVNMCGNGFEW+HwBAIZe7PHOpI5f//Rdf+I0G0joOg4Pd3VqlipWSu7s7Bwf7gAATIqQKo0Qr1NfTd/XKVc/xbMPkhEZBYFsmNwyMEKV4//jg0bNHB6UjYlCCiUENvxUUcvm/+i9/1dPdUy6Wf3f78/W1tVaj6di2krJcKh3s7x/s70dh2NXZOTo87LnO0eEBJcTgbHNjY2lp6cqlGc6NVqtRrVaTJBYiLZaKJ8UT0DA9PT0+NrqyvFQ6Pu4sFFzbEkkSR9GTx49wq9Xc3FxvNOuGwaSQaSKU1ISwK1eu9vde4NRkmEohLc65YXDOTdtMlVhYWZhbng+T0HZtgki5WDKZ8etf/Xp8bLx4fLK6vLK5vnFyfEIpNShLk4RzjjGKoijjedevXxscHFhfXYvDyDSMRq2exPHs48dbW1s3rl/PZDL4NDHFaer7vuM63d1djVq1eHw0OjLc191DEGKEGIzs7e7gra2No+PDF4dZCMdBHAVJd1fvxfFJlSoMyHO8NEo81yOEMG6Ylrm1s/l0/mm9VWMm0wBhECih/uT9Wz/58U8iP9xY2/zHf/jHm9dvupZNMRZpmkQJxcQ2rY58YWp6qq+nZ3935/BgjzFCMHJdB2OQUvzhD7+3Hfv6tWvtA0xucITAduz+CxdkKkonxbGRkeGhIddxMEJKyonx8f4LffT45CiKQkJAayWFiMOYAR0dGM66WUKYadipkEprwq
jl2AJkKtPnq8vbO5saFKFUipQAHhkYuvXerZ7O7qXFlaODg1azpTVcvjSzvrZmmRbBuNVsObYzPT3V09utECrkc5bBHcuilIwMDTLDkFopBDs7O9Mzl3YP9rnJq7Uq5/zqG1ez2awWYnx85OSkhJnR2dMLWo+NjRmMlE4OcKlUlFIQQpRScRRDqjqzHRMjEwwbtmlbltXyfW6ZxGCIEWzQ/aP9ucW5crVEGEYYwjCghH7w3gfXrl7bXN98vrzyf//+HwzKc17mg/c/KOQLIk7q1ZrF+fTk1OVLM6bBm/V6V2fn9RvXOjsKBiP9/X2h39rb3SkeH9mWlaTpjRs3erp7fD8YGR6bmrrETater5+cHEdxUDw5tm2Lc6Orq2t3d9dxHFwqFZWShGKtZBomHBtDvQMXei5QRBnjCrTGYGc9bNAgCYlBHs0+er62igk2OVUilWk6MjT800/+1DT47vbu57e/2NrcWpyfn3s2NzQ49NGHH5ncJBi//dbbU1OTvt/a2dnxW600iY4O9vt6ezKe12o2Hde+MjMzNzc3Ozu7t7c3Mjw8PDzc09N7/dr1np7e1bW1O3fvlMul8fEx13UYI/mOXBQF3GADg4O41Wq1XwgoKWWc2NQc7O13TZtRRgjxw4hbFlCagkaM7h8f3n1wr9Gs25aJNERB4DnOu2+9PTF2sVVv7W3tPLx7P+tlwpb/xe3PZx88vPHmtc6OwuTExYnx8eOjo6//+MfNjY2s544OD3cU8kIk+/u7g0MDP/r4o2qlorWuVMp37tzZ2d7JZDLj4+OmaT589HB2dlYIkYiUUDpz5TIAlMulpeXF6ekpjAErJQlBAEqIREuZtd2eji6GqGVaGJE4jglniUiFltyxvrl/b31rAxGsQYWhr6SYGBt77913RZIcHhyura2Njo56rmvbtud5y8vLlmVRSsfGR9M0XVpa2thYbzUbSRKD1p7rHR0dEYRc27ZtixCSy2alkATj7p5uy7IAtGVbIhVJnHieZ5ockM7mM67nUMpM23ry7GkYhZRQgjEWQiRJwg3jQm9fV67DwIbJuJSScoMwrgki3Dgsntx9cK/pNxknUoooigq53LtvvzM5MX1yUFx7vp7G4tb7t46Pj//5038aGRl5+623AQBjzDlnlOay2e7ubkA6SRNCMQLI5zK93T2zj2frjcb0pekwTUKRXrn+JsF4a2vLNqxysTQ8POzaVtCqdRYylm0dHhx09faZljk0NPjf/vqvJyfHKCUUA0pSKVNlW86FC/1uJsMYQwilcWJzk1LKbW66/JuHd3b3dxQohBgCCgr19FyYmprxo6hYqTCDbW6tW5wxDLc+eHdkZGRmZnpxcWFsdDSKQ9M2p6anuGXUGzUBanV7i2B08+23qpVaLusFfqt6cnTjjRmgRBO0vrqMEWTzHfuHhw2/aXoetzggcXB8Ytqemy1IjaTUFy9enJubw5RgkQidAgVmWU73hT5u24bJMcKRH2OJOMEElEHRw4ff+EGdmwbC2PdjRpzpi9f6+yfqflwL/Gqz+s7bNwymCUrfuj7DqHr27MHC4lMv7+0dHTzfWg9E3Dd0YXR6IiWokoT5wf5yq1Fr1RrNksPRcF+BpL5sVfKczT++7zeqFjcWVpY29w9qfrS2f9BQwLIdTaG3D06CRHx972FXZ2dHIU9Ba4KIBk0QdWzPdbPMMDFhSBOb26ZpIqUNStfWnu/ubimtMCEYIylVX+/A5UtXLTdzclzE1Gi2/Lv3v+krFIYH+xfmn3788Y/+x9/+bzeTtyyzXKnsHewNDfXv7u2XqqVcb0/Tbz16MpuzrdGens31VQuhrlwujPzxi5P16kl/V6G3I49UmkRhsdzoHxjijv3b259nsznP9a5ceuPWBzoOw9WlZdfC/x/CDPpXjhykrQAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAd+0lEQVR4nDWaWbNk13Gdc9r7nBru1N1AowGBGAiABAQO4iCCEinREuUwLUXIssLS/3P4xfoPDjmo0BAKkTJnYu65+/adb43n7MxcfqhWvVZFVO19Mleu9WXx333lexExnU1ba0Nr0/ls2GyNWMFEdHFxMdvrF9fX/XQy09n5ets5pkcHaCNM2SRzs0DmUIbZnvzRBxfvvX46rKOSPzv7iz/505jP//7H/xDjyFSVJ+kbqymCpGlmVhZqjFrH8PnB/mxv/kI91MlEq81mdSGj1a5jVRGOLLU6YTqdShuJedNaRBYp/L/e/v2+77RYZI7hsR0nXc9E7u4Zfd8P4yaizef7lFgNA48k015Vr9brm44mhO5g8e6blx+8/2heyqUfQMpe9943vnb3/v3P7j8UVavMaRiZIrteXbKtUbqawv3e3l6d7u3tlVqne7PrWFtyjr71DUXLYdupLa8XxFBRDkwP90mFpvXg5lGt3Y2jF41YSCwd6dmRcjcjQiZA0vfder02s77vt1s3LR11aSAneOZ643Vy8eIL+OD316+9eKFUVgOOT97+/ndp2v3iZ784Xa7MamfV0YSlFG1Ac2bVMrFJ388n06PD/ZhxjuNidf7o3qlfniwXy8V2tV1v6sZ9bIREwnMEs6klEQvbvHfl/cPDg8MX+X+++e2ImEwnq+vlbDYtXXe9uJ5MJvC8cfNGeCyXS1G5dfPFZ89O4N4f9utsKnW+zV/Myst/9d9/NkER7A1xefbsv/7tX334yacfffz5pMxUey/k4zif7m2WKyZm7YfwWzduHB3uRUYajxg//OWvNicnlw/uyzgg14noSGwVEwjaiASxkiSUJRNERFRnEyf2QHLl//3mByJcu35EBHI2m63X62EYjg4Pl9eLrpsAuL6+ntaOqmxomJu5ylXG9Qu3Dv7yvz3suoTH9fWU5N1vfPXTk8enD55ZrZpqWhqNpIXDwERMe/vzoqWb2eX1ybhYnX5+7/z+fdoMvlzUBtqOErppLdw362GjvM4NQwECg4Q9W5Uy1S4jeFKJ87CfWyMczA8Wi0VhnpSKjWvw0cGN1nwynW+323A/Ojoi5Nia1klNvSSN916f/OCbd6kScr06fenOF9575Xd+/cmHy2FbZpOO1d3cpEQmlQE829/rp7VSDMvrh5/cPX76eTu9LCdXfWsjJ0Z/ttgsr7fLdbseN7nf9dP5gUz7Wb8/P2qDa2dEZGoxpoldrzbHJydd0ZPTZ8Yln548OpwdtNaYZDqbJjBsB1FJoqkxzw6ul5fFhMA3rvuT/f3VN9/iP/jyg5XOFdunj29M+6+99+5PfvzPWbifTgcPtr5YDQQrC9nN/f1uVtr1+fGj+x/95mfj6eV8aCXpalhfbTdPrtcnxwtonR8e3P72a99/7Y3bk329vN67vCzc1qtxS9GJtrbN2FDppKuT268sXpzdevULw/4h/937f1hLt7hamJlpVTNhBqh5a60xsnS9dbrZbCf9rEm//P63lu+/fjJu9sv+1aMHR4fz3/vDP/jxP/4zwFSUuyJahjFU+q
7vN1hNQdb89OTp+eefn91/ULee4cvF6nS9OV6v1umH8/13X/nCF2++OGMuvuTTS396PZtOusqqQymdcFHWiEZEVsqYuW1OVdeRo1bzCPdtmU7adiuGdA93Avb39xdj25/M0wPCXOqS9eRbb+u7b5wzz2T69MGnd27efOP9L//9T/61dj2TBLFZnwETZdFh9Nrb5fWz409+e33v4a3TrRG218Px+eXHi5Oj+dF7r3zx9tH87f2JrVd68mgS3KG4KO7c6HUanCkws/REQrUn4czsi0oOndV98jYM1lm/2qzd234/Z2ImMhAADuz103EcVSgHPDG59d1vylffWTrNa137NQhvfP39n/7mt8TVrZuwMBABkWKcUup62J5/dvfhr/6fXFzOB18uNg/a9tPPHvT18INvfPtrt17cO7/K09O904tuWlmNtduj6QZDLaUNPjAJ07h1ACwQFUcWYW9JycPSIaT91IbN1rpuNptgNSZAHq21rusuLi/W6/WtV15cFlsvhtmPvnf361/qF8odxs3V2emTv/nrv/0///Rj41K1b42yYyYtzAEvZuvlxcPPPnn6yW/3H1/2ZMeLq39fPt4uxm+//877t+/cvlrLJ/cOdHJz8uIwo4wNUyrlFi0zh+26iBKPGVKKEWH0ABORgyyBrutGb6qy9tFK7S/apujedFo26w04xRQqpe9n1aTBUs++8668/6UcDdUIm4uL4x/84Hv/8G//UqVAappMawlPMNzH2pfL0+PPf/HL1dNH/fJ6HMYPn5zdH6/efuXlt187ekvr7Oz6pWo5K12dIoi9ERikDgYC4aJlZLIUMCECDBAJq5JFBIRcnCQyY6JmGx/m3SQ2w6oFgFnN9RqbNhRQwMfUqy/cPvijP75reGWgFptPf/2LP/vRDz+6d3dgVasKm5Q+mKshIiG4PD7/1U/+qZ2d9athuV5/fP/Risoff+Xr367TuDx9rfSz+d5GIsQXOahop+ZbZxEmCgopNYlBQRoersqZrFpGH1WKVo5ACEQtPAxsFE4DZnt7l9fLTEjVg252Tb5X6qLKM9Pxf/zxouGo2ajbx/fufvDBB6fnV56FIsqsFypozi22lavq2ePjX/7rj/P6enY5bi8Xv332cHZz/6/e+cbR2apv69v1oE4m6zZ6MeZahFrLxhIswgKAmMAgogCIeNJNB18Li4qCyIThEBIOdrixZrJV0Vk/WV8telMwKdm6MrYuG3tUefK3f37BUzbrjS6fnd24cVAmk8dPjrWbz/fkchwnfeEQU3aJR/c/+egff7q3Wcb1+ODps/ur86+8/voHR3f2zi8OprP9Og/kGCCrIEagK4UtE6lqwpKZXDnDRUghGdU9q1Zi9ciqpqrumZnC1qlSIimlq2UYNsN2ULP1dhtKAe/ms49ulYO/+dHyxs2OC1GullfDevvld7/+6Nm59Z2IDC2n/aQvJoZB/MHdjz7+l3/YX57FRfv8+OTTy+Pv/d433p8dvKx6VPv96cwzq5XIRsTIFFUwJ6WYiioJg4mJ1QRIIiqlK6UTNVNTViFGpBCbqKmqKhN1pZqH167bk7octgdHN68WlzZazMrq7VftS2/5widV95J+9dndr33rOw+Oz62bJEfXlWzsEevNpRIePnr45De/KWeLtt386tHlxfbqb777vZvnq9f2Dm9Jyf297WbDvY7DmibK7sUsgUALDhMjYYBEmTKEBULW9d7AxAkihpkRmJmJiIUzA4RalJOMAEpERLE6ttbvTR3l8dHeF/7oPz1Z2h4MZXjy4cdfevd314H1CDWt1Si4R13Esvlwefzs4c9/QY/PaI1fPzhBbP/snS+9vBpf0zpXZqHtdoPKgjayC9dGo6gYqbdmzIAXMxAyU5kpwVIyUoUDWWrJSGZhSGYSBRGpQVQoEOniYMocxsE9Zwf7074/Xa9u/OiHy70b1JVR2ur0ZLm4vnV4uLxaaNdLqVxEiOtsmknh+dHPfqbHZ7JoDx+eX7T1D996+4uD366Vb0zGQtdwV2mZwQKRiFDTqqUNYylFWFWZGMyEhJkSE4EUZKq7ymECMUBcqpVqZoJkhoDAxjKbzBNU9qal6OXx8clikLe/2L3y6qp5xtjX4dGje29/+SvHZ2ure8JEBBEhpuvlOiI++81vr4+P1duvTx5/tjr5y69+886ivXrjhfl0KlqJCyCtjSZlHLxYb6LMPGwHVVVRUwMRMYuIaQEKs5SOrKC1QdgBEmFVAbcMD8+I9BYRBEimiBBrrYvNUjn3J/OzyeGLf/kXj1rQpHQUT+7eq2Wyf/OFUGPT+XQy6ztOHsOXm6t7n3x49vFnB9Q9ePxsvbj60ZfffbnFC7duTmrPJEj2QGe11sJMpoaEaTXRSZ1Mai9gYxVRgABWVQIzCzOYYVWMo7ISChOxpCiLiLDV2pkqEauqDK0FcDSdd1ZOVF75z3/y9Oa+TzuuOev57PT6i29/5fj8QhTTziIGykRgHLanz+4+/M0v+8Xm+snZx/cefuf1N9+A3ILcPNxXRraxCAFOlCYQSmNhMAECMtVMiIiqqppZMTNmVksRpiyZRDwKC7GkC0FVjBmiQswgJhImUS2mVaP5/myi+weLw4ODb7znEl21KY0///lP94726/zg6uTscD4T1RZhpq21Ybl48vEv6eyUVn7/o4+//dZb7833bjndvvVCGwYrRghvLdnBIKDYBFAVYskEIqLvukjPBAArJVpTpaLpo6hMGKaF0QCQGptZEqe3Ygal1iIBNRMW8bFZqVvCg3Fz4wd/eJ+Gm9lPRvfrq+1i/dab71xcLiezuSibaC3F3cft8NmHv93ce9iPeffu3VdfvPn7e0c3W3tx/2CRUKtjNCZWkVJKwGunpUoxVVM1lVq0anKCUlSIItxLUVWNTOYEIhxt1CQmS7UgbjtDBCCTVEVFmBkE8dU6Ls4HlfGdL+Urt8NEYjic2r1PPr6zf9Sy1L6ycRKNHKWzbOOT+w8efvTxZBMPjk+e+vqHb79zOIx70z2qpRBAMDVSYWNmqXUGLh6Zki09AG8kWqAcjKQ0ssIqKTG6oAMbG5uxhhTtEkRCzIyEggiINnIiPRABhMx6m3T1JPDy9763GqWXEjVPj4+fPHry5u/+Lgl7DKxSalfUrhbr82H76NOPutXyfPTfHD/44LXXZzEc3DiY7+1HJoUPbQSQaEkAsZUKJlKwRFEkRa0V6ZHU9X2pRUmEmIlUjFlULJEAmMBgZlHRTERkAqbChGKlWCGAiaRJnpS+vvHlZwe21037luj17r27t195bQNt6bWrpUhmCquv4uPPPr08eXiwbR8enxzdPPz+/Gjrm735/rBtSAaJkJlVFrZSrOhzqyNaRAnEkcqSQDEbBo+AqALw5gRqrQFBCBAAiUhAiERBaipmDmKR0T2RIppI2bah3br10l//+arIwGMV7oDN9frOa28NA0qtYmosqnq5Wi6uLi9++wtdXD47u352/PT7d161aHdu3tYRwiIsTERErTWinbQrEYvo7mwsqloCISzKUkoVlcwmAivCSqKUBBHR/5BMBpo3J
MID6QwoM1MykRUjIpke7vmt/Z++IMUn42w6mP+///vjO4cv7B3dCi2170opCMqI5Xb1yf1f82cPDrb490f3/+zrX/1GTO3WwQvl0LqOJAOtVC2dqmmxDhFExAmAVDWJxAqJEpGoIDNjzHRREWE1JQKLhDsyMnOHRUhERa0rpZgWZWYWBjIzhjaKiNzn6dF/+eF8WaX2teMiKcb9nReIddoVAAyZz+becnl18eizD6dd//PjZ/Na3pPJtJQDmjfPURsxa7FgElVlNdYINjEzY2ICE8nOyoAIgEcTZlMGsnlrbcxwtWqliigxAi4szAywZyYjmcdwiEopaqYiTCzd+19vb7x6awFP79erB598PLLV/f0WgwCTfiJFGnK5XDy+d7c9PV8O4/3Tsz944/WjlNnePkUubEx34jQTZYiSFhVVESaQyq5JmUUiIawsUktRRinmnpFpKqoyRnhrQgARq0ISEkgXkUgQsbCaFWISFVYBgZjkhe/+3sWw1WoTNdm2zz/+fDLdn9S+dhqRqgrOIJxfnT366ONDKp9enb8+nX/n8PZMrbfSxEfOFBHh1kYCgASSgvpSlUWFhRHhHp4JAjLR2siEDN8VCGjHPPn5SVlAzMJgpCOB2KlQC2QgEpHNR2FGQhZ3bgSpzSeVaHO92psc3b7zmtbORKfTKRFNtH92dvrw7qd5cUaNPv/03p+++qZdLl/YO8xCVEDRGkGlFu2KdcV6ZRvbyCQAhmGAR61VmWutSWBigIgoM4gCQEQQ0aTvmRmZRARAZGegtKhVtaImsutrJqK+73eHtjzYvxH9emz7nX1y70FX50e3bjuBCSAgMa7b+uL86uGDbmiPT89/5+DGnVL6Ci0yiBOhFwtiYohKpKsyMZdSAKgpJYMZABOnu1mhDFZGpJl5wIwyNQnMnJkAYTcFMonQooFBz7G0ACDQzs8SMwCJkSgTjObby4vLkRDKte/7bjqdz4VLRJw/fHR29x6cP7989sM3v9RFHt7Yd2uJMBZTndVeTZhgpiAQZ9dZMhIQVTVNQFVFxFhEJCKs1Da6iiaBhMAUEZEoVlVVmEUJnFpNqooKs4iKqrGwiADYqZmUyG22Snn/s4+vx83r77zVGAk09xQdgYv18vTZExvj6mIRq/aeTqrVfj6LGGkMCXFCZGMmUSlmRCBkUrACHGBuEbS7QBZ3jwgrZRxbtUkGIgLEIBJVFQPtJgYT5W4yMDMxMTNATDtlTlElgEDCqqXotLfjx4+1727efqlaYVFS226HFL6+urp6+rSYnjw5+dadO9hupof7m6Gplk5L0apWIaTCqhzZzIQY7s5MmTspFxFBgjxMREDhyaKgNCssJqJEEggWuA/IxghEMIEJ4aNHECQjPON5DQkTa6TICJKI6aRGa5P5hJVMKKNpVfiwGTbHd+9fnJ3OSDariz+48zqth26/j2JGZloFZlABdr/V3TNzVwOZ4F2G1BIRzALAsyFTzYgpskVEJmdKhrrvGCgnh7JzIIcNRxZVFUmKJJRarBgLRzoJqRXpu35S6tXx6Wq5PLx5A5lMpMwK7rpueb0Yl9cFePLZvYNbh3uE33n5pWGz7YckL5ymbFU74SJqIsW0MrGIqFUhEzKTgoSIgCg4AJRaMlNYaq1IdFaLVmUVMhJAUxTEbFb62gsQzVmodtZ3fRsbE4cHkExhmgJ1kfHff/Jv68340p1XVYuwTbp50Zqe18vF8eOHN1p39/LqtflcxrHfn1SwgYlBjKQAg0RUDYCqEktEZMSuWYUFkcKSGcTEQi1HYRaCN8fO38UIBCGJchyHzCBiJNIFoZQcYw5DG8dRRcftQCBj4x1fEkZVevLk0XR2sH9wq9TOpBOyYh2xnj19vDw767d00cYvdwfTrgwMUR4tqQCWkBaUohz+fIYxk6oOw5aIhBlEAgnPXQMmggXb7YZZkjgoRIPYQaGKnQqJWERmMEHTRVBLqWZGwkwUEcVM1Ajc3KWSXV5cH924Md077Cdd13cQZub1ZnV+cTIuzmeOZ8+evXVj/2atk9m8ZTBTVwszK0NFd4pGIILsZhARizBFMEiSkVDVzPRsuwlVa0EmM9dSOIMzjAkRVSrnbooVYsnknfhmpvsows291DqOYxuH1pp7k5a+XFz3k/7Oa680TiKhbGYiE728Onl2/4FthkfLi/dfenlWdTadEZOaDjEKQUTAJEwEkqLMTMSZaK2VUtU0E8/jCgAQQKKESOUdiQAlNXdmZiZhdU/VDmEeSSKkJEVIQJy8e5gizGzCtZgW6SdFuq4ePzteb4bbr7zSMkik7/uAD9QWi8vF+SURXWwXL3aTviuiHEB6kAgT73iymBAyCZkZ4SQM5aD0RCAjU0QSLiIilUVMFQFVESJhSk8iZqJaukgmNmHryiQDEB49PZNJCIJ0NWEiEYkWGZEZNqnd6fH54moxmczFOhLZJafF+XJcbbDeLNu2KR2lzKZTd++6jqlxgEWYBUQAIxmBoiJqTo1B7sQcpXYZSTvcQ0Qk6ekUVou3pmYENusA5l3rIwnMxG10lcLEyQ5RYXA0FkKmR3RW1JRZmg+2XW3Wq+12iNpPwZwevdVxbDyOq9NzG9rTxfUbX3jx0GFdYQE8pCo8FaoiToiAsiRRpHuOrMSswjt0w+EDqxEnsSKJwEnp3ojZve0GNIGSEklmGjkiUolByBDwjlWTEFiojc4kzSEkiRQucn1yBo/54aFaIVBRbeNYRJcXF8PVFbtv1utbVm5Np6RKRDu/RYCyAMyswgoiBkyLWscigdhRk8wUVSahneUEs+huLgmjmACemcxatFoRUCNuaskW4EaUxciIGSxCrY1EBLCx0O6xp9jl6fnB3v7Bi7etFhESYyIdh/F6cX11ecEisR3evfkWN9Ku18bMIOG+n8CDSFSUiDKICeGRDNrVVpFoAAXv4CxzIQIJAi1HqyKszdOK+kgRu/XiQJKioiKRTExErkQEzqCUNCaAmQQcIMp0SNrZyVkkXnntCzuzxCSkCMRms7i4OOs8fGxHyf3BrAWAVFVPOFIIREiAQIlgUCaRiYgRJ9BEdMdtKYRJwcnIBKkpEC3JzAAQKVHuzGWRkpljkqg5USFGUgZAFKAiwkxq6j6Klq5IImSzWjjo8NYteLLodDIl4u2wGbfrttqs1itV20uWWjSEWKyIsEACLERCBKJUZWYlJhVFUHpSghgRHpmOCCKQsCkbt0SSiZUWOTqQBARRCGmmIJRTMznBGRyjKisrEzhADb7arCAcyNYCILterraZ0+leEhMlElbrdtxu1hvajsvt5kY/reCiVqxv45gZpQhRKndJrmJEjMaiO4e4exkABIuoiIFThN0b4J6pOsnMGLzvJ1qltSEhEcGcSWam4zCqFGUm4xibO2pffQfomKxKJtTEI4RMut4CwWqTyaTrS+6QRQQiqtNiszrq+7YZQBTjhngITgILVRYOUGbGuDP52cLHcUikPt/pZjievwnsnDALduWiaq15c48MIJmJWZjJERAKb0KUEWZmxVo0IBIUsct23FpT0cyUpGSBVXX3DJRa
u1JWy8XV5eXM+s1meOnwsJgFoRhKR6aMoAwlSysqEIKqFjEVFhBJEiKZVNWYOdPBmUhiFbXIUKVaLeFaZGjDc01TUVUitHEoRVVIhTxaRATi+cdIrFYSZrWkDApmlsjWT2rfF2ZSKaM3RNuu1+M4qhrGmIL7yQQqJsZsmVBl4kY5wFsEiDmyRSYIBiYgM4AEqNSiKv8RdpFJIoVZkVysgqX2fSlFTWlnpzJNlUWSGci+60UlIrrSMRNzAikiRFRKJUpw2mzabwI7E9b3/ejNikWM29W6Z+b0iT03Oe5EXNXACOYEQUzTDQR6joGYEKKSiN2+OiMYvEuxgeAERe6sMhGlOzERE7CLixAxREaksEYGE4jSihKxkgIZ7szJLBCYlcwm6+Wilm4YmpUyjGO12jKSctxsZIejOLvJxEVUGZkRuzJhT3GQVIlMIsCDsbvpIJCQsjARkwixuLeIZpJEyZRWRJVMWQgtPEEJah4OJwVRJmKXHIKSBInIfB5NxQQ7sSBCphwevCBcTSqRZGIcx73Z3nq13m2fRVDFipqZJTkrZYRQZdSifUYyw4qIFBC11nZfkcjMQKaHE1FEiGoppTUvpSAzM1trEdGaCxcWY7FSeiJWFaJEZqQyS0BaS6KsxsxsptlciBXqzZlIzk4uum5aShWRrtbZfB6BbNw2g7exVuuEJ7UMrZEos1gp7g5gHEPNWhuARCaQXdcRs0eIiHuYFRXdbSDDHYAwog1WJGJUY8CtCCSSGkk6RhLyjCSwaYKTVNhUOoZmeI6NEwxgdDQvIhQskTGdzUSViHYZnMBG1rajUE776cSsqpmpinkDk4KSMph5F2WYqLkTcUbyzvpnCnNk7NAYEakqkBFRio3jmJlIuDsRIkdREHvkCMDdQTuokSZGz0mKgERVATx/Dhk79GuiVDreQREmIuEckyCmRXk4mE170sJcS81t7BqVmXZcALtlLUXfF44E2JuzUCk1Ak4pLKBkZgY/x9cetfbuISKFRIQLZRu2tdS+lGFIk6IsjZpQZDoBbERggIOSMts4knItdTclZLG+UtHlaqmqVgoySy3p7uOwacOsmwGAiiarSe2VsLtsUjWGCLGyZGRkArBiqgZQhGfErrqYycORELFMcs+M3O0sCIJgYR2HcRyc2YWxC9OZzZRUiQkghySxAMQqWiTSI6NlGteOpHZ1RkgiUlHPkJKVsTXHWphlUJkGDdJ8hDAlkphiJFURFs8gETAxhTJFS2YQ7dTfmWnXM8X6cWiZxCzhLgIVQxIlC7RYAQFIJmdiIhMWCjCq0wjNRBgrRIQREiqSzVmKGEtElFIis5h2XaeqEIGylZKE2ndJHJ5g3uGaUguLlr4DCART5UQRUgJBxGhXNrugv1sGCz8f+aJQJatixcR4p3QstEO6IsasuyREDBbZbW6EqajyDjISZYR7JAmQlkMTz76U2bRGJrMk0CJ910mq2+0gIkKRGcwK5khiJkmw2S4ZmiooRQy5I3RUivnYWFlZARZwIkkrE0WMAFrz9N3mnZ7/j4YZyYASMRN7QzEOChaKCBPeCcwuoIanVm3u/x8NP9A5AWQ4QQAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAU+0lEQVR4nO16+a8kx5FeRB51V/X97jcznMdLFE+tVrAtCfbC1n+8kIFdQ7ZIiVqKS46G5FzvPvv1fXfdGeEf+r3hUBTMAQxwsIAChUaiOhsVX2V8GV9ENg7mDP+RTbxqB/5/7e8AXrX9HcCrtr8DeNX2dwCv2v7DA1Cv9OkrFYMvPfNv2CsEwMDw//DsBWCrOQTAt/e+xfzqADCAMQAELL7rEgIiMDICMvDqPhugFAWAEIAIiADAzMz4akPoBWMAxBsc/P2gQkQByKuZzExMhgGlfKUAEIEFoLh5/QwAeOs8MgDgbYQJwWwBMgAyMDEbRiKSQr1KAAzieeTw94IbEJiBEZCRgQkl4AsUYAbBhunVAUB8MVJW7/v5GG9jajWHGEsGFCBuIUsABZib8pUBYABCYP7eJoo33377iQAAyDeXWK0DAhC5SrzaEALCv5FKEeivUAkAgQDASAwrApsCqER4hSEEAKuN54anvLoQWKwGL2QIZAJgYAIyYAouclPmZPLFbPxjAWBmALj5YABgBCEQgAiYmQSDEIjAQAaA8CZQGJjzJDZpCkVapHGymCfxIs9iMgVQKbj4MQCQMcSMiAJvjJkBiEzGTMC04iry80zAlKZ5Emd5WmRpliRFvCjnI8EGgQBIMwk2CAbxRyExMxMZgYKFgFWgEDEXZBIpSUi5moWmhCIv8jSJ43g2Xc6nWZqwKYmM5rJiARWZMQUwaSUdx7ItLZX1YwBAISQAMxtjnt8UYJQ0QpQABsrSJHG8mKXL+XQ8TuO5KQuJbFvKtS2llAbhKZIoDZVZmsym0971YD6fFUX2YwAQiCwlEbExDCCFQCEQCjQ5J/NksVjOZ/FsGi9nRZqyKSVSYCnfczzHkohlkaXp4rTfzbJ4Pp9PxuPRaDiZjpdxTGTwR+vMMRExA7MQAgUCLaftb7JZbzoaL+czptLWynW0b1sSGUxhiixezEfDYa/Xmcwmi+VyGS8Xy2VRlMpSQRB4vu/53o/CASIGEIgSsSzLOI6TNKVs2D8+gGySZxmTcSwVuSr07TxN5rPJfDIaD/vjYX88HI7Ho7govGrNC8ONnd0wiizHZkRDZJhebgX4e+J8NcK/+i1+V0iuJhCZEphWREjj5XAw7HW7ybQtkstAG9dxtBJMpsiSIksuz8/63fZiOs7Spa1VNapUqpETRF69ZYRk4qwolnE8WywXSZJmL8EBpu/4vVLiN3kH6bswFSIYw2RAIElJwCWbFLgQCqAohtftbvuqyDPKcloOKh4ErqWVjONlt9O+urwYDnrT2Vgi16rRnZ17jXo9CHytpGExiZPxdD6eTJI0JRYoFQgplfVDK8BABARwK9UBAZAZgADo+QrcSl9dFKAEMAOy4XKJmAtRgiji6aB/fjzpXVOZ+47tWLZWxCbL82Q0Gl22Lzvtq8VyDsB3791ZX2/5vqttXRTZeDQeDPrzRZznxCCEUlrblu0IZYFQKPRLcWCVZf5aduG3Kv52DEDAAFIQCCYiNlmezaeT7rhzsRh2BBWNKIpCTzCSyZ8cHvT73W63n+ZJEPhvvvlGvV6rVKuGy/F41D8fzuezsiyFlMr2wlqktS2VBhSAQmrHsl2lnB/mAJlbP/EGzAtikVfL8Xx5uGQujBQGLWYTL0ZXnYvDQffcEtSIfC3A0dJS6vrq8unjR9ftSwbyAn99Y73Vagkt4zidzCbL5TJO0zTPQaDveWEUWbaHwtK2o5QFIAClZbuO66fZS2yjbP5W4X1LhZs5yAAgmCHPEBigzNLJqH856pxNRm0tTKsW1kJPAA17vdOjo9PTk8lw8Nrd3Ua9Xm81lWUtlotufzgaT0AK23Ftz3M8T9k2CsEMDMpyQgJRlpQk2XyxnM4W
cZwlaf5SAOD2hd+syYuFyM2YAECywSJGymeTfvfiuNM+McWyGtnNeqglpYvZeDTYf/zkYP9ZrVL98L1315rNLEmn89lwMo3TVCjbDUJp2dKyLMe1XFdoixGNgbzk8TTr9kfX7U5vMJpMJtPZfLlMjRE/TOJVO4Nvy1PCG9X7Ait4RWhFBWSzWefs7PjZoH+tpKlETjVybQ2jYffZk0fnp6cKcWdr+87unUoYTPrDXq83mc6049aaLTeoGBDKdrTrCWUXDJPFstsf9LqD3ijZP7qYzmaz2SzNSiJgFgAiz8uXAcArAhAgIxDcgLndlBiBEAjASM4vvvnzuHM6GvZsC5vNqufKPFt2rs4OD591rtuu47z7zk/v7d6dTienRyecG60tLwjDas3xQ0KZlKQd36AaTRdXne7hWffw+Pzy8mI8KzKDKIUQwAzGMLFSwiJ+CTVKphSIgAIRiZkAELCkkogcS7MpmAolOI0Xo97Fwy8+rXmiVQujyLMsnE4HZ6fH+88eLxaTu3fuvP3W21rqx0+eDPpDx3Ya1Ybnh34QScsuAQsSIOT+0cXB6eXRyfXh6WVvmOQElgWoBAtbKFsqKYW0LNvzfN8PivyHATCZAqVEIRGYDROAEEDGFHluCQYqwKRZFrfPjw8ePXAVVwOnWQsATbdzdXjw9Pj40NLy3XffXVtbm84WnXbXlLS2uROFVc8NBGrb9bXjThfLp/tHX3719OnBWX8yny05K0Fo5bha2QqUFUUtbTue63me67qeZdlKaYXyB0lMUBYgBKAEwKwkAhBCCAllnpbZ0rWQy/R8/5uLkwMLivvbjTybl3nSG/QeP/m6379utZp37uw4njeZzNrtrkK1t/dGo7EWx7m2A8fxF8v0ybPDz774+tnB2XASJzkYFMLStmd7fuQGoeW42g2amzuAAgGZmYmIgInL8gc5AHSrJZAYC2IGFAIFsikSk0wpnQ7ap0dPHlK6uLe7WY2c2XR8cPB0/2DfUL6xtdZoNouy6PeHeWFazY21tQ1EzSAdL1zEdHB8/uWDbx49Oe4OYkIhFCpLO54XVuthVHe8UNuusixQOiMCFMwMzGTIGGJDTPwSmZgZkJkFMyCuvAegUgkDULRPD06f/kWY5b31+lqku73O4fHJ0/0nZZnff+O1za3NxXIxGEyU5a5vb3he5PgVP6jOZ8uj8+7/+eOD/aPzk7OBYQxCO4xCENoPK2GlHoRV2/ENC0MAIAxhkqYghEAhBCKiEBJQML1EV4KJmImAQUghJOKqO5AXy+mwfdZvnziY391dW4+c+Xx4crh/cnYhpby/93ZYCa87vdlsrh0nqNYdrxJVmwzq6cHZgwdPv3py+OionbNkrR3H8iq1qN4Mo5q2XGXZSjslyywvsqJEEKQQhFy1hG4CBhmYbfslSkoCJr5peCAyMVOZ5fG8f3F6/uwhx+O9zcZa1Z52Lo6Ojr/4/KvWndfub2+DhF5/GCeJkCqq1l0/ygseT5dHR9f//Nt/2T8Y54CFlk5YiarVMKz4Xui4vm17gKIkLAtgIkMShRAoGKkoc6EAhRRCyNUyAOvb5u73ZD3c3kZAXO2+KAQyM5V5lixGvevz06MiWa6Frm/hpHN18uSrw9Mzz3drtToxdK57eVm01tertbrtB8pyrtr93/3z7z7908kyMVEEgeMmqhLU19c3NlrNllb2fBGPxxPbdrSyyQAASqWUlMwAXLjaEQqVlFpJKaVEFMBpHCu40cbf8/5GtglGxUhMBRNL5DKdZuPutH1cTHt3Nms1VyymvYuz08tOf0nwn//rfzu87JwdHyvLrtQall/3qhvLpPz8s798+qd/P7+aZCWS0qWQa+u7r999h4RNRMPx0tBcK11rtAAwy7KSSQhAKAVzWZaFKWxPM5ExBRU5EBORAEYARWwAzbcYWALIm0MHBkCI88JxrDiZKM49Vy27h4/+/GmRTNfqYWRRnsW98Xi/3R8nfP+tDz79+mvHr9jVGrF0qutOZfOss/z4k88+f/DVZJZlhfR8d3tjo7m25oYNQ3ZphEDpOp7SksnkRa61FoK1BVJAkiykklrr0uSSVZGXAKC1FhJBsmNbWisFuCpNVopHANK33t/GEQP5ritLky/G3YsjSscNX3va5MlsPJv82xdfGMC3fvaLq17XrtT6w7HtBm+88XYYtp7un/3xTw8ePTmaLVJle/VapdZoVup1N4iU5Za5RBYMQMSmNETGGGJO8zwjJq2VUKA0ABQMhgsKvUAIDMPA9Vxm9n3vr3qj3y1YnnctuURAJTmezdvHz05PjyBLnWYYVcNer/PlgwdJmtTXNnqDQZJm86yo1Ne3tnZRu19+9ejjTz5//PQ8ydDx3EZzfW1zO6pWpaWZoTCklAMEALTq1a22iqIsiW6qWMdxLK0B2bbc0K1UKrU8S/3AD8MwTRPbsXq9zgqAAIbvFLirEwREAJaSJRR5Nj8+3j96/FUyn9Q8BRLjNHl2dDgYjzd3d3OG/cOjoN4Q2t2+tyeE9ftPPvv443/v9TNCrDVr6xvb9eZ6WKmiVFlR5HluSFSiSDAwEwOv2hYoJbOxPU9KCUCu6yotHNsJvUhLy/fD8XhkaatSqbiuI6SYTCYKWACaFV9v3V/JY7Eq3l1LIsSD3sWzp191L0+2mpEX2NPF+On+5cHx0e7deyVBu9Ort9ZBO6+/+1FvMP3ssz/84Y9/mc3BC5xqpd5a326tbUnLJhR5XmRFjii0bZuyQFyFKwMTIkspgVU1jGzbzovUcR0EDPxga3MnWWaW1paybMtZb61FlUgIUa9W1O0LF/C8dke6PcBFBINQjAed85PDUe/KFKnjNgjMk6dPLi8volo1yYvRdJGXXAurW3f3SiM/+bcvP/nkL0kCri+9qNrautNobaC0s5LyIjNMUlqO6zi2vRgvhZAobh4tpVBSCoGVSsVxnDRNwzDMy9RSTq1a3Wh5ABiGUZaltu3u7uwKgRcX5y9y4IWjBmSAVSVmwKSH+18f7X8jZOn7VhLPp+n8snNFUrhhdN7uasevrW0u0zLOzG9/968PvtlPS6yvhUHQsN0grDWV5cVJXhhjmKVSqLUBTPKCgQQKKRBRoBCWZdm2LQTWqjXLsnzP39raWi4Xs9lcSX3v7j2JIkmT/f398WBY7O5KKWbjqboJntUu9GII3SxNkceT86Mn7fOjna0WSDUc98ejgbStShB2egPteKgcy4tsaX/86ef/+/ePSi5BCWH5rc3dem0tL2C2SEEoRqEUKq0AYL5cFmnaCCtKKaWUEEJK6di247pKyTAMlVIIuLOzM51O0zQzJUmUjuO4nsvE/f4wjlNL6729PSVQAhCAZDalKRHBmMJQqZSUUhiTfvH5Hye9ay1IC2KgyXS0jJd+4BcMy6KwpLu1vi20/8WXX/2vj58URenWwvXNnVqtqSx/kRYCLWV7htjkOUiRl0WWZZalW1ubWJQVPxyNRkEQ3L//GjMLIZM07XX6v/nNb+bz+Rt7b87n8z9/9nm/M2hfdH79q18Hvv/+ex90rtsmL8Na7e03f6IAVwR
gBKkVMhCgESCVEFmeTIbXvfaFSZcWcrpcxOk8jmNAMAx5VrhB1Qsay4zOj04e75/HSR7UK2F9Lao0HDcSqJkloywNZVleFIUtLN93bdsqinQ6nUSWOxj2a9X67u5uo9HwPE9K6fthr9d78ODBr371S6XU1dXVbDr/6P0PtbSyLNnYaJVlGQTR1dU1A4Rh8JwDeHN2D2xMgQIYeTQZPH30sNe+EFR4lsqS5WQ6zvPccpzc8DKjWnNNu5XT896fv3x0cTnRrr2+fcf1K15QUcoxhokQCcqyIMO2ZSkhkcl2tFZY5Ondu3ccpVpr67PptN/r/vwfft5aW/Ncv1Vv/Pa3//Nw/ygKKrvbu//9n/5HFEatRitLsjKjalTxHXfYH3Svrxu1d8TzFMaAhpjIEDEClWU2HHYPDp7MJmMF4GpFRVHmhZAShCwMo9RCu4PR/PGz0/b1lFE3WtvVSsvzIyktY7gsuTRkjCFDrmvX6zXb1lkSl0VerYS721v1WuT7nmUp3/dsyzbG+J4nBDabzX/8xS+u29fLReJ7wa9/9bPN9c3QDyfj8aef/mk2nRsix3FOjk/JfHtGhgiwSh9CaEQTp8vZfDybjbHIpW0jEZlSAiqpi5JI6CCqJxk9Ozg7OekWRq5t7NRqG7YTsBBMSMRMIAAlorK0a9uWkmSEdOxq6FcrgS1ls1mbjma2Vu/9489Ho8nDhw/3nz774MOPtja3fvEP79Wiqm3Zj755bNtqZ+seIlqWkyzjeBn7XnO9tXZ9dTUcjL5TDzARAQkBpcn7g26ne53nqSeAijylsswKIGCGkkh5tnb8q8vB8Wl3Ns9qza3W+q62XCW1uUniqLRUQkqhpBAInMZLy5bNza3WWg3BLKazeLF4//13G41mJaoURZkkcb/Xv39/L280JxPe2b1jSqpE0e9//4fFO9nPPvjwzu5uo1ZFIXxPpYna23s9SZ631xkYuDRkqLA0LOPl2fnpxfmpKQtbazZZWRZsDBkyTNpytO2OxtPDw7PBaOYGlVpjw3MrQluAiGSAAQGlUEopiVJKCWwEYrUSbqy36vUKQhkFfvei85O3fuI6zmKxkEr+8pe/1Mry/VAq9fjxszCKXrv32ttv7Rwf7Zyent6/+5rjqGarOhrOrzuDo6OjPCsq1Ui9eJwsBDIIIbjMs/GgOx31HYWOskRaGEOAyrAkFn5QA8s/PTg6Ox8Qq/X1zahSB6ldP0izhWHDBGKVX5kZCUFYlhUETrPRsLWm0qyttXa2t3ubvYcPv17O43ff/0BKy3ODSlQtilJra75YdPv9vb3Xr7uT//RfPnr44GmWpyhsQ+SH/rNnFweHh4Ph8KOffajK200ImQ0ZgYRIJl+KIgk1UxIneV617e54BJYL0hVCW27z+LK/f9RdZBBUmrXmhh9UGVReFsRMgChAamVZ1uqvDL7vO7ZlaSmERFA723df33tdMKy9vZUtzcGzYymcN998azSbXrd79167Ww28vdf3pvM5ATMqRvn+Rz8tMqOUBuCci6AevvPhu3mR7exsq/JWRgsAIRHILKaj9vnxtH8tyhQpNyYj1CVDmmTSiRyvtkjgvD0eTXPbq4a1lnZ8EoIJgAxIoYWQUiqppJKWpW1teZ77xt7rnuceHRwsMUmT8vL8OnC9il9976cfSrSUsFzH00qfHJ9dX3c//NmHd+7uXHf6WZFbjm2IpERQBAqEkEVZBNXQCR0Curi4+L8tDI+s31oc6wAAAABJRU5ErkJggg==)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAj1UlEQVR4nCXWya/m53Ug5vOed/qN33SHGm5NJEUWiypSHETRokxZ82zZiRsNd8vowUBnk0WQBhqdZNPIogMEyC5IkEWQZRK4F250G7LdtmxZMiWKokhRxanImodbd/zuN/ymdzqnF/0PPOtH/B//1/+pEI1VKBSiAqCUYqKYSAqBiABISkshyPce2AghmFhQQsmRPBFLZoESUbHg3nk/OFSSUFLbESSQcjlfeb8KiUsz2rpwYXO84VK/++jh4aO71koP+vjBPnfDxacuX3nhJZ3lx3v7t29dP1wenxyuu/Wwc257czaZbZ/WQigI+6vjZbvqFu3dB/vnzl9SKQUEFaNCQYiJEWJMDEIxMjEIIUkIRyCYExAEg1ogek4hJUgUE5NMKsZAnAQjcFWVpqicGxrfJO/WSzefNxohcpRpmXwr9ASiFYQuioeP1oeLtRiaSa6ciwIEAMTYxxCPD/uDg7nU0kHqUhyBUFL5ELwXu3dX+4dzH4N3UXGKBBggoRSCIjPERFJpEsApqSQikUCBApElIxGRMUYhOs8hBoAovFwOnY/rRNIoOZuURluJNgy+7/zQu84PKQYK0dXFzDGQBMmEFALPl13TreuM5ahKWQ5AghiIrFZ1WcWp1xoxJgZBCViKgRHZWqWnRcW20plSIaYYgzSsBQCDECAEIAohdUrJx0jJK61QW0TkyJ4ipIhCSqEdxASxc+vl4YHziVErySCmuppalQViBg7e7x0euJYFuTPqjBv6xFICIZjMZBsbdVmQFqh1IRWy1ErbopoUo3XVDj6VGoTSdjLezLNCGkAKQCIrdNKVBCUJVWRKKRhkAESUUkkUQiJKiY5lIgcAggXHAYREksTU+x6FkVJJpVPr+86vhuA9KyNiP6AwKlvmuWdK/8VELfNMRWGkVhEiKCER6nJcj1uXQAIJkrasphsbZT02qOvxxixCZJkXNQDmVbk927JGsyShEBJGCKWPnCRaVig4UoqRJGoAiVEIqxGEAEAEJgQQMQKTl1JKaZMbYgpC56iNRkxMKUI7+JNFQ6ho6CkKaQ6KUV1meSBEtON6LCz7hECgVaYFstFVXc3iZoikQSDY8cZkc7zF5B0RIhRZNd04W1V9gmS11gpBMgsobYFjrXTuk3MRtEpKAiIAJc9kGDAwqWgSAUBEFsTAxCkmAJIg56vjvu2KPJOBpUkyLzx7IPKrYX/v2PuYG7lRV4KCQSQSUuoQBLDpuZcERpWZrsgDaAjMaIt8NlOZFk5WRdb33XyPiElQHIYU04CQZIDIMSYhHTMxEkaUwGQFJQThWTFqqSGl4FynFKPIgugJZQBJRCklBojAwNCvV/vH83o07ROk0JbIOllG9CEopYs80ybUhSYRGIVAQckbnWd5YZ2PAkeV3tzezqvK+75v+8735KBkYitBIsd+tQrao5JSqKilRA7M1PgICcvKaAkKRNJKggDHC+dUE6IQCrUSEmKPzBRilGJANCQEB4FKRhZASQGESAcnh3sHCx9V73tjQWqsYZyb8kSc6DLbTKMAJDgJpQVmgVEIJoA8txs0nvA4y2BUlz655ngZdCiAbabERJfluF20u7tLaU09rYrcCilDEkNHMcHe/somMd44JS1Ig0WZA3HyvHwQu+Vi54lLSgMFjFIqRsnJuRS5S4yepcmgRKIkYQDqQ9uv290HhzduPCoL+9Tj53kAmMlS6tF0igQYwafkGSb
V1Oa1YEkxBu4Fx1GpU2KpnMQutMP2xE4mRkmFVpazTUDYk6K5uZ+BcH1slvvO+cXSD10/9H5Ik7Pnz924t1gd7unCFIWhFGJKq5YGL6r1ShFJQQHIpLRyjkQUrVhDDEpVZDpl6tqWwQ/rljwFVEmhy20OSCwoz42Qdkpjjkwsg2AtsChKKSm51jApPWRjwwjdSXt6eyQMHu13ZjRRNoWY+lW/d9itDps7N3YPTghLfe29G/vH89iv+zZ4Jhr6i1dfNqc2f/YffrhYnyALhQBKgNSz2elTF87f+U8/ViBzn9j7VYqMELU1yen58pDFWipdFb6wRlmbZVlly41J3VRZXlV9CkJbW5QxJUicgQ5VaSBmCiU78EOmcVKbcrJRGFDKhPNTmeFif3mw14RWLdLJ3vUHi5NVF9PJctG067zK60p/8OHt1EdlUGFpGLRUh2/fiHeOuhNXy7FFo6zp2hU75/v5/qq9tfdAKWKlbEpCs5ZKhjI7fnR/vVjOZpOVXybiejqpJ5vjja3OdcXgREOm541qcnp2BlIQyGgUm0FTo0FOcy2BD+YtZHWd5ZXNZd43Dd+7uXf91se3bx6Otz513Ms3fvn63sEjajvqw8RaK/MST52tnrj6te+cvnhpfPp8Naqqeiq1IQzGmNYNVVGZKvdd33WtRh369Xpx0re9WlCvExR10cXQ+N6u3N7J4mA+X6fkoz4zi8iYYSHLYXzqnMrz7mRNNJw6vZky1TcihFW/6mVMp6vxdKOupxaF4ttH+8uHJz1+fO/etffee/Rgudzz8+CCH+rqkaIh7LunT1+88LmnP3Xp2Y2nnjl/+enZuVNmonMNgAJD6t2K4xBiEBGgX+Zdo6vWassaNzcsKCPErBo94aJSYuhAVqjJeunXYQ0J2q5ZDMMynN7aLnQJQgogw3ZSYqm3h2LGFGSph3bZtb33Taa43szqjWJnczMvs4eL5d7JwfqRXizaX7917fb1e/PYptV6JPTO5PSlnedeevVbz37xtY1P7ZgcQII/OVbSxeE3dHf/eNn2vXPUqRSthpQ8ReEli+AJKMvLzqfucGkqU5bVnrAuKrU83jPEzcZMsl1xWqxOnILZ5hYhTrc3ZWGTVY6ikkapLHhJagiD4+O5TD6mMC5xPNLlyEpU9x/tP7xz8Ks3Prj1yc3JhStcZbdu3CkIXj17+cp/9ZWnXvjCk89/erqTC3R+tVg9+llzeNfEHgTneZGVuqX10LZdCKCTKEosTPKSKYV2ZYxBo11IQ+i8dlZJNHEYGiGEOtmf60KPdR40s1J6SKU2mcw8IjOFweuElg2AXvfd0J3Edi0lbc/GSouHD/YOespG4uDW3Rs35h/9+v7B0WIVO4Fp/u47j517/Ae/98ef+8bvPv3S4yCDa9Zhfaf56OGwXAR/nHzDiaDUJh8F7YXG1MbAKYVomDS4yDEQO0++iaSR2pCYFKNCm0KILgCrRKx6lQ+BZT9YkAqNns3yFYY+CoWgsmprG7PcEbth7duWhnlhuarys2dGxtiD5eLutVvLxfj9tz66f+tu6zsSclqOLz/98pe/9fsvf/Erky0Xu/bk/rV28UkuIbF3XR9T7MGrXCuOmGlQAoDAkyDFKaIgT8TBIYlAED3E4D07KwRLnaJIkUiIvvWBQwJWJ4ulVkpJYW2W57mSBWszCoAarc6quubom6ENYVULa7bKDCMAHDaLvftHP3v9g91Hh3psdvf3MtTPXHjh/JNXn3vp2ed/+5XRzPaHv7l16xPyawlBljKqApQVSgnps6R0wpAksg0EUjITa7YCiJEoksckgxACpcTCZikFU2SCRR965uRBDM4LAGW0enD37vbmmdlozAyZzYzWMlMSUAjWOpHvo6fEvc6MVb4o9cmy+/DDh4/uHt++eW//qMm1rtr6uUuffeXVr1x99fN5zSoes7uxvr0O/TJxhypEsiJSHJg1SlMiS2KK3ocYNWjFwJQC9yQAi1r3PaRkOAEKoSRQYi0ryJXNBjcQ9lpDYJRoEJk5qrKustImlpFEShGtLkXOFBNEFMHIoZCqHVJour7K7ny4/8FvPvjkg/tHXS9CnFZbTz559bWvffvq51/Kc+iXd9q9e1YwSJFnuTQmF5Kj99FxSl0Y0IOxRhpDIfhIQojISQrV+8CJTZ5nRc3SBD2wwhCC73sAsAVaUzOzZ4nCOsEucWVtVZqUvMqrXEkZUlCohGAlKDGybyWSNesqh6w0/ljcv73Ip3ztrXsfvX9r3TVnZxcuf/qF51969blXX9zc0qvDO8cPb/phCVkuM4uEw9ArIVHJRJJBJYo+eiNVIAC2KKXSVoAGQW6g+eIkujTZ2qhHCpCFkYH54f5xv+610bNJqcY9Sg6ub7qV87ym6BwJNbHKqtJUVstxXY5GY5uZmByHqEVXWmEzBI4Hjw7fv75/8/Z8c2t2//6NHNXVz37pt7/43ac++2JmFhjvr+7NXegBo1V5CDBwp5VNQghjpTASMxDMImkrAcGl0K1dnhcoEBQSieXJo/39pRAm0VwAlkoJEOtls7s377zQokeWZVFiSlpbbe38aLHo+Wjoh747NZ2pc2fOlWWVV1meWREipyFTaTIpM4zL1n14/fb19x4+2GtXTTvfvXfu9Kc++/1vfvbVV8yYwtG1k0e7SslqVEmrBBZRhNCuVeIICoCZKDPKKCNjopCAuW37rh9c70ejejYZJa26RXd8cnR42AlbunZujMy2t4IL/ao/mbd7u+tRqS5sTQYX6toiSALddP7ujQNKQ15slVKr2faWFqglk2+cG4SifFxoVPsHyzfefOuDT/aWKy+GYas+/fwrv/OFb3357Oktdo+6B7tDvwKICRQ0IhtlgDoxCQkUOTgvBCQRhNEqU0opL43z/Xyx7ruQ58XJycraQhC268WqD/cf7DbOvXh1xw1usVxRik2zxNiL0JbZJKYmxDqRbNvuaD4HHWJq6qryCfreKSGRyBPFwkYr42LZrlZyFd1b79z41bu3h9ZnaK48/dlXX/vWpadOj+q+P36bk2cJqBUzRGJ0AVZkMisFGaV75yNEiZhSiqtGCWUyQ0ot1/54vjw6WJVFZXQo66rkUaLYDQkwFJoIcd0NBjNTmAjRqjSaaVuothvK2q9W6751yScfYjFSiLxcnpwZz5SInVaQSV/kpDXsHbQ33l0qa+7eu9MuVpujc7/12pevvPTMdASueW/lImhh8nEmc21FIgwsXDeETkYmraULHGNKgtzghraNTMg0mk2ENK3r121/8859hfK55y650EywAJWXRk+nI9e0TeN2TtliOlou1n7gZoh94yGsWaiT5W0tpVF5sz45WvXzw478cvvstFuulcGQFXkF3nG8/dHhL35xff+oU5kcDrpXXvj6Z157eedsnvEJtOsoIQQhSbCInEVmmbznxBpkEKIbqEZlUCcdfb86mS84y/OqPl6sEJWuMucSQdQ2FBJD10XnI0qtlRToQ4qOJ2ZszPh493j58EBLuDjdNBNUWclCer8uMyuF9FxNQV9UVkQan70gZlJtzOpMppPVybW37r
7/zt2HbT90w7Ssf/ub333hc8+cmnK/uAcpASEpD4TEKjFFCYIgJd+lWOoMYkiUYqsVkpDofdzfX7Y8n2xMZ1JJqWdCFRIqAWfGI1SynGR1vd0ufbNcYeOf3NyePb0xnk2iT+c2xuOLtWSnVQZa3juig4P5M0+cL3QSkJQWMUJ0vGjjgmbWODWajBfLo7ffvPf2b26v+zZ6d3526Uvf/OpTT2+l5m6vJaDyjskMECNhJRi6trGZBamjUCl0B83QDmstldfDqJohpmXXt8ftJw/m29Mj9dx502JbG4367OZGlukgaGO0pROQhZ3x7OrFc0TD8nhx7aOPB+J5QUcH+wcPj8jF6N1xqKcbG7/88RG4LjMmq209qUGwKE8nVfzW5alKUd3bc/cerdYhKNDPPvvy1c+cO3sqhsVNF4fQ1JujMwqTc8LaKgpBwXHkFDujxp54vVg3zSCzUhvbx84kxWSdC60cVGawLNrVMCnPZKQDGsx5MxvXo7xZSaly2a1uvnfr1uGDmzdv7O3Ol2u/c/pcWHNN1eaZc0mBUWrDTjaLUyfrQy9Cu07hsI9uoLAi/XB0avrWn/y1+vnbH8fWH63bPOpXXv3q515+grJ70CyTAykS+8azY2tNQDkEJyKDUrZsOh5JxphaP9zbvcepOL19moV3kbOsLgxMR5XyjVQ8zsxoIkmU0nZiIL/CN9744OP79z//xa+//eZP3vvx6xvZxvnLz732jZee+txLF69cHu+My1KgZgAEYBKAQgIDMwlIDAJYMFAfZXPEt659IJ7+9MsxOVg3X3jlcy9883LRnTDEoEz0BE0PSFl1xhTThuYFgxuYUQ1DyAs1qaeDc3fu3/joZnf7+t3zp4rzl0fT2elJNtYkFu1q6Q62p+cvnH/SeArCvP2Ld/7+9bdu7d45mbcX8o3f+8YfffqFl1/61hdPX5pCziCYQYBbcrvEGMPxoeaGRYzBKWVFVCwFa0X5TGRjWZRgM+eizcbq6OHubJR/5ztf//SLF7TwVFepj0ikIfaFEQn72Co0eiVa8I5d1/XRR6WqVZMiYOeSxOONU1huliJsWFHmVRVjnJSntvFJ58W7r99/4etfv/nhr/79n/75BhT/7LV//Pnv/NfPfeXVYjNG4XVouX3HPbwhTx6R7zGQkoEzp6Rmn3GKSiRQkpWB4NABHhHYCgiYKsMRVK6uPPP8q1/7zNNPZJJzEU1w8yCiEuyAgYwE5hCGdd8T9zH1i2bZhVNnLs6bRq/7usozWVpbTkacEdgyTs7MKjOSnlaHh2998M6Hb7/74NHejQfHpxv5P/zLf/OF3/9OtZUpJjq4nt79RMg154GFzLJxms1Uc5hc6oTQbEWIICJqFcASJQxKCCVRYqkiOyE5DschdiVm6mu/981Nc5jagPUghYE2Z0oDOCWlkeR9TD0nHkChX60Pl6u794/2FrQ9lmfLjCpljD6zNfGzurb1qZ2NDbYFq9s37/7Zn/3w5t7RbHzqD7//T770zT94/utXMDl/dDd+dBd4xcNccoiiAJpIJuo70HkstyR60R71w2C0kpopBT90RKnINTKmgFFiBA9BBE6QRBuEsulujB0nzYQqU1kmRbQhkBQJhFy6FKNQkuos59AKL3nRPzr61fbzz7qU5SDLKpViajeqXNh+2ex6OnP1/N9/+NMujf7R97/3+z/4wcWnT6X+IN39qWj3YfUQCys3dshugmuTC9S2aBV1PQqlFJPItKyC4OgJE7BCleUSSSRKAEkH7z2ycTFKxQmGGJwKw95EjVdxAOqsspgpFApIxwRd2ysSWmuGmNKAkhRRNTVjPOto8DpI1lYrHMk8dnv3H/zo795uePq7rv7U9PH/7n/7Hy89O1XdEd3487S+hXkNRcXlVAgUTgABhUhDRzBIzhRaFsgEVGppqlGRpegZiWNQKQogIYQQQjJlKgTvK1tIiCGTISVVVVPvXGKOce1dZuwIpWbww8CUQFBwLrDA4IhByFJs44aAaPVkMhpVRhdKDavjn/zdu9feuxVM/fkXLr989coTf/R1O+zzB6+nuMfKQD4lSMKjEjaljtyKtWWhUClm9t6JQqNBigx9A5klmwudixQpdcCtD0LZAgGQiIQVqqUUITFLqVApiC6xhyiT8H23JKnzLEeGPvjk6Phw6QmzTKMgIcT2pIKRCTiMy81pPTk9PT3M53/941+//e7dx688/+3v/sH3vv+VtLo9fPJDFQ7FSHYo4nqJlvMylzIixDD4gEqbJKwhAexdCjEMzhKkxMENasjVuEZruUsITEIpiyghhihAAksmAyiAEzBRYgUoAC0L7wV61/XpOI23EogUY9f6h/tdx2x1OjUqptUoM1bYbKa2qoKO7j9azvH44b0HB/57f/BPvvTdL12YBHfzh3F5J0EXTJ1cP6wHkVxdGAU1kGCpkhSQoowKtTJKhWFwRCLEmIhRUxLdepXFlgShzkRKnLQyaX70CCA3thZARiJiFkLrnDfGKq0kA2EARwFIgeukXqtMMws39Ml3x/NVXemt0oAgYfK6KuvAN379yd/8/H1lNl568ZV/+sf/7ZWnzmXxA//R9VQJaXOKsu9axhBDzEwpAYUgoTUSWKETU/KDlCUTAYBI7FLngZQuUBkKcLQ/TykJaetxqTUd7S72do+FkOfPybw0rOWyWR4fHWfKTKa5IgEslIE4kBAgpJTeOQBCRg690HFaiqwwgZw2dpyPeDn/+198/ItfXcPi1Pe+/Y2vf/d7tr+1vv7vOmxnG1tCmxjj4EhTBIoICIxuCCqXCAwSpTYQEgNwZEaBWqveNetFjEHlfVFthcHf21sfrbpZLi7AWVR8fLJ+98PdjbranE3ZUCL18Y29mzfuPv/MeWNHikEHiIKTRTUkMXStcpGSTimQ1qa0pc7QwNbW6ctPfKoEePva7Z++8dGlJ1763j//42cvFfLhX80fvs9xZW0V2Cs2SGSlRZQ+AguOEVLvbc1KEDMLtMwtDR3HRmUZJer7YX48LJu+LvrJlnF9t2pWn3xy+9nHzq5WjdEqOe59h6Z0xEKYoW0OjtaLjtYulcQKEmACUsr18eC4pRiLwvtklZQ6q7a1LZTK6/LC5Kwe8PRzL6tP8Atf3fruD/7BFPZXH/9QDfs6q72vIPp+uapyg0rl2TiFIa7IhyZwLIpiGIaEpHWmgJv1XrscqnGZox36sOibTx7uPdpfvXj1vFqeeE/sw/a0RmukUbnNTxaHO2eyuhS5QcNwtFhbHbc3VfCpKK2KgYLgBNgO7v6DI5NPRoMv65jnpi7zzE610bM6v/H6m+/dcP944wvjnU+/9Nsvlsu/7Q+ui5ScEhqVNjmnSI65HbzONBJmCjukwceE7PLOtDkCkuyG9a0Hu8ybedNMB8TSzpdD1/qxMql3lE+ymkdRFtXUZCMk5UJp6yfOV09PRhvF5jSBsLOzZT+V5eHGbGqsVQIkMndu6JrUNf3Jgc8vzShZ8gbHWCgsVPXWj975yS+u8TLe+ejac58+nT38Sdd+LBAYcmBBYRBKywJjdOsmltmpJjVVPdWYZ
3bS+S4K0JBc1xmUfT+cLNP7169fulB8SsUJbmXSzqbjxgdRVrKqV21Wjy/LbFTuPDWaFrIqtrUGwSBsIhX6hStvGQ44zBzg3lqrZLT0nMhwXOa5UlmfZ+AHGFWqpKlSxa/+5pc/+eUHgvN/8T//m2fO9LD7H0NcYGlhQIIInhPGXE8UFcwkpQDXt81CKCuzsu0GSnK9WtNqPZ5sURF9cGkIIvSKi0hJlzLrpbSVlBtq8pQ9++zps2fyOkOt9hfBowvdcXN8BCKQQK0wxJXvO8K2o+bENVkaKYwpkeAEHuOoNL3X8yZsb9jJZLRzdmfv4/euf/BJWZ391//2fzm1udu++5eQBMrcDMEnpVQKAlFiEg6FVJmAbNR74Xwfl4dFPQaCZtmg0uDUYrnwRN6R9+H8TlVUGdv81s11Ex8bXX32ymNPjscFKRr65e7tD0VsIq986BJ4bBNqZmuCNCazggEMJplkoVx0yq27Lobo+lJOsLAWezsqT21uPHX2ic88+dzP2/LZ39l6+Xe+dsEenLz5F6KUhIJEEAw6UxC9YCbGEANFFaW0YNn3EkuRopUQQBwuh8N2WQELhHPnLxaV2nl8PD8BDzu9emny4iubtViv7z94+O58d0UxOAqBHDFMyonCyoWlYSQBBMhJMBEL1liUmtzgErN6eLAAK4yWMmOjKNXlKTna3Jz+4q/fpK3PhvqFz3z5ysXi5sm1P+3qEl1EoayxLCGSEyxyNB32NhmtjM5z9M4AuG43RejSqE2rluLuveWpUb25XZJJXYsn/YXtZ75x7vFn23D80fW/8PNdY0xEpKzAMk/E4LSgsIQTJS1HDjJqABwwBK8pC94DSpTEyRsl1IP50bgo1EjOkgKwM2HGO2c+efejP/nLn/7V6w/+p//1354Wv+k++jGMZvngEiqUhlEn70VIbFVPDkhJmQkLEjLisE6rhTOe/MHiQcaZkvbiGWuyQSIM/tPlpa+99MJn+/7ondf//6PDd6QNJh+pcicluU4ii03fJpQaQIsGssyhtj64FARHTRBNnUgbGqSSlcp4vXTq6LjTgqc4Hgo9McXZC2cf3Lj/dz9+84yp/5t//a/Ozh4t3v57kVv0fdKMIEGSiEMMPUvUMSGLaMhTLJNxIR0v5v16vX/Mrpfs7+w8dn42rVNdtcP09NXvZY9dbhcf/+g//u/96g5hn+cVJYukY4/eYZfmrrEpcFJEopMxpFSVRWX1ZNUcpeSywrqBpEIj9bodhkG4BKpkaarCUSqkKmb10Z39v/izv0qx+MF//6+uXHLdL/9Uloykg0UJxERp6Fddb8ussBYIok+KRFlbkLpvTuLAJ11/8+H+RjWelmOHvXAbW+d+55lXvrGe33/3p//3anVfSkI7ymUubZYQh0F03dBHEoFZ+ZA4BmSZ6lyvup5B2Dzrgm2GGBf91gQ3Z/ne0K7XtFp2GxOpNreq2IdqNNkczWaq/PkHH3YD/oM//KPfunKmef//ozErEgh9wYqEChR7z4yqzkZ5VvQ+CHA2K6S1bTv0fQNQey/q8VhBDJQnvnLqhe+ff/LSez/7yxt3f4Q4ZNl42SZMKzs9k5Jqu14mhZkYoosdSuGFKQlYJXPj7oGKwxNPXSKExbr74P2704kZF2fcoHzT37x+37l+PHpMuTiUo/G43Lxw+nEt8ljf+epXfv9LX3ymu/EfJHUIlUwmigalcP0ikvV+yLTph84UhbFGWMOAN+4eUYhGDUJlRV7MWBKqcvryk699W1v623/3/xx317CoyBeDG0Lv7aSOzN1ysV529WiCApt1/Pjm3vntUV5IcmyMv7N7b3t7u2VWLH3n6kxlSvsIjrFtAyKUVda3pOp6nGfZuCp+/ZuP1XSb0/gP/+G3xNHftKKNykY/ZImShHbZs4idi/q/jDVF770xuaNwvHf09ruPphvZY9tlUcSyMqKYjc+8+tiLXz58+MFPfvTvGd2kPhWIh9S4CHpUGbvZDPG4cSEgdQGD75qBu+HgBItBWGnjuj013tyws0qMR2Zc7GzsbDyeZVk1rhKnx86c2qrOrtctCKXqPJtNz9y+de9nf/urxx5/+h/98T/jk5+33QNUteCGPA+8gmSBoyOptbZZTjFaVESYCNeL5dGiWbf9dIyJJq3jKM9tXvzK9qdOv/nmn673PyzGWkBBvXcpRUJUmlI5uL7t+/2DJSeVQpRaB6dn49mkmO6cvTDbmBVltTnbnm5u2qpQGWa5AiGYU4KEaAUCIgqyfdur6WTWNfSr1z+UNvvyl7923t5odq9lZkbopdSCY0jM5FNEiagLZY0eICkUSomh771LIMWlx0e1VD6StM+cv/yayLKf//hPDg4+3jl7FpVZteu26WKEhLEfks1UZiU7EAnY9SYrNyZnzz52cbY5ffLJp9WodKLxg+PEvTtaHPVd02tlU/QMEQBRGiWtykWWZ0pKhWjf/c07q3b51S9/6cq5zt1/V9lMSERICJi0pJQNwTNTIodOSKW1yVIMRyeLNOCQyJh8hIoCwuYLF1/53W5+75d//f/2fFJmk6GJ1njf9t5F4rBs0kHjRnXaGhXec2WyU9tPPXv1hQuPPSEr2YXuoL9zsnvSdSehGwSpyB2JKFhJNMgkFaBUboiIWlnseq+lUtzGozv7O1ubLz1TyNWHgIITkmyJBBOCUsIkCJISCSGARUwKtXywt7p5/3g6qTYra3LWRQX1i6ef/9r+7rVf/+zPh7Sy01HXtWHoWPCqdcS684k86kRagB/CuZ3Llx+7Uoy3ypnYn98+uXfYtwuKQaqklFZaSCVyU7EUlBhBkQdrcmmEjl6hFDKzVQquU/n08dlo+6XL+dlR41NAYssp+MCsmR1KgwyUEqc4KkaQ2Yiia9bzdX+ybKvaMEllNhp7ZePJL924/vpHv/5zFoy2AM+Di6sQln2/PqEqL4RFRJlrFqQvPvXKU08/iTx8fOMN/KQzKLAwo0qlVDEDKoGIMQEE6bvkgxOClMmkNCCN0hYZmHVMISaj7p+kZ3bUC1dHhlwgYqEHSJqAOUZKnMIwsEgp18bmRZ9aH2TTe2Hp1FZVWCmmmw+ac5sXXv7kw//04NYbSbBADZ7DEKIw66Hvej5oRVZ4k6og5Hi8Pdua2Ly/f/uX68WcOU1mm9ONGehcW+P7wQ19iLA88TEBSlguXQhhZ2djXNRdn5rFUpt8VmeDHx7cPx6PCrV37S+//VvVxkh77zRKEtGnwKwTRECbInd9R5CMmRyv2kd787zISYJUdjIxxXh0f7eE8+f8w7+9e/MNDcrqqlORfRcDt1FS4kzKzQoi+Azs+fOP5YWwuO7njRaiNJktx+W4QFMqWYe+7XpuVm7V9PfuxbyEjc1ivkoJ/KbgPsH+YXu8v9rcmiqSi1W//6iTovjPCx4pqrH1kUoAAAAASUVORK5CYII=)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAdn0lEQVR4nE16W88t2VXdvK1VVXvvb3+3c+17t7vd4MSxwUgBF
CAKQoqihLznL/CQxzwEpEhRFCkviSJEEAJZuZAEBUEgDVGIQcaAsfDd4Hbcdrf7fs7pc/uuu6rWWnOOPNTXTkraUmnftNasOccca47B/+Sf/woRmJmICGAiLBdzIJhYiIhAREBEQIQBIqIgBAgAETEziIhp+R8RIRBAIsLEJCQsTMqsIqKqysIsKsLEoiIqlpTBwqJqKqIsKkpEyiIiyiLCQiLCzCLCJqKqImJEFAEWMDEDIAJR8NWiiRARRCBiIhAhggCAlm8CLEQEAhNRYPlZBJZdB7D8DmxMyhxEQeRgFVFiVVYAERGNQcxMQR4IZzGQEoswGA4HCTExhAgRqKAgUiIjQJiBABxMBMbVenlZ6VWQlzsiJsYSemKQIIgAZiam7++KgpiFiOHBDBULhIoHg1gDbEpwQlAoCVxCoroKu7DBmI0ZZKIqSqGkxEyipMqqRkJMorJcBoAJsgQHAK4WyCDAiZmIP4w4BcAUxHS1GwKWh0cAk4OIwMICEEfw8ggFAYRLIjAFc8A9khAlo6AQF5MI0+bMIcxSOCxT88uwnCVlcDBDyTi01Ui9iBLAEUJkTOQRwkwkFEFAAEQIeABgJkhELNslWj5d6iQQxCxOYASDGAEiYWUQAsTkRMQcBIDcKzFRhJESI9QDoTCxBGMGgzhHktagrY6qPMyeLr2thQGnCKJgDuLELEktmUJgsVwAM4N4KWBiBslSoswiygRyDyJ8WLdXeQaKJc+WcAch3IUFS0UwUYRHAA5ivkpoVimmQpGcQB7miRFu7SJ2GpqoTwwZJhrmg701Eq+6Td91Q059zqlbQinGqiJWW1vy+sO0XiqXQEzEAWICL0USQUx0BVfLPnjBHxABQcxEEgglgMjhABMFEQMuwQARC3MQRF1Tg7Z6gYs5tX6dDrYHR9vj69f3r107OBw2637VdUmTDZKYmcFgYmYmKoGpteLews0jPsTQD1+8xH65XzBmyZkFhnCFR0vgmYPIAyAnBrFEkJOGw4lJiRhCEhABgVjEdGZMvuPLuZs317fPPvfEx15+6oXbTx/3B4N0xklYrhDO2U0+d+fb77/73huvfvveq6/T2N6t4537909PHs1lrnMxXvJ6SQvmJdhX8EcEIDxAEGIm/X9LX+qYKHzZFAG8pNGCSCSkREpMIUxCJNFimsfGwRs5fHb78osfe/mFl37g9nOHOijRxTydPTr9YLr/3gd37753740H9/74L77w2p9/cTw/l+YUEFZmJhaAmUgBJUpMtnSrJX+IQsBBBII4iBlMxExgBzOBiBfcBJMTSAQCAlFQeDR3ElWRYAcTEYvllPNuuixc8lafeeapjz73/PNP3rh9dO24P2DIZZv/+Ftf/PaXv/Had157843vPX78+N233qVd4aSq1pGtVteG3AWBkhJzkgzAI4KIRIhgAcT3sycQBCcsz30JLS/RXRJoWSw4QEEEdxAFnJhFVYgjqDVXgok54tHF40mnax85/Duf+sQnX3j52t7xKq2N9Gyc/uTLX/qP//03/uqzf/7o7Ozi8cVadf/gSG11be8GDkCAN6gNzFxaU+HwxqIVzCgNTeSKElhdoJN4oRMCOC1tFyBcle0Civ8fpSBQEFUKEQUJBTUPI4h6ERUfxvHc9+cnP37zZz75ox997sXrw96u+cXFxRde/fwr/+N3P/PZz5y+f58Rx5vD/bx346knq9bmrZFGEIFTUrA3r8mMBcQBqsIRDqOUJBEIDiaY+/eB5wqLfGnqYAeCWXjBd/4QfIIRFNFEIOJANFdiRczhtZJQ6W/Eiz/+4qf+2t/42K2nt2l46/4HX//S1//0zz7/R5//3Otf/cuu69d59eytFwuQup7FWjCLZmvRKolxJK9gNLFEJJ0YOCSrR2RhAqtahAcgwlYaFhBaWAQT2lU/4CD2hRbFgpjfR88gBhDeAgEhBtEYZUTZXt986od/6Ec//vGnD29s8vqN+3d/8b/92uf+8H+///pb0/m06vdu3Hpa8pAlt7mRiDcKqiSSWYlVjdypEVGIaGJEuJMaABERVY8AALmiZUFsNZyIBARmZhKQUwQDUBCCeWEYEleg5AQIA6QemQ0UJcquztjTj3/i5b//Ez/x0ZtPKdt7Jyc//8v/9jd/5dPl3uMb1548uH6776fU94ls2o2QkggnGLVEttxGdHubEKXRk6p11sGceqY2eyPVaC7hKecJLgFlUe0BuDdbKGSAQBEACQVcVSs5HEwMBpjh8NpCCQwmZdXKMc2jJpNeX/rkR3/qUx//5PMvCqevff2vfuO3f/v3f+/3Tu892N8eHX7seQ920tx37q0grN+IqirdNAGo1kYkHKGA7K3cA4TCjcgcLKKJpaCS8FTm5u5lFhb3GgAAa94WNvlh6yIibhFMvIAVM3s4E5MyQEFCs7M3mNUU1293P/qJH/zJH/qR7d72K1/92iuvvPLKK6/4+ditN7efeqZLW7HB65w7iwCLUhBxI4nSWIlrnQjIlsk6S50wWyruXuY2TZet7Mo81nF0L9Vrq21h3nTVhIhAVpovjS8ADyjJcjBQVV96mIeoBAcCg3atoDHNPg3bzUeefepnf/LHP/6RF9++c/df/Mt/9dnP/AHPtL/ZlieO+rxHhWKhLSwRGtHUqMasThyRkomJ5a1IEoIpz7vpwYN7l5cP53nXWgs0IKI5fVihLEwgphBR9+V9MXenoKWXkXAEKEKIS5vV1MOJmCIgRERTnRsREt96+vonXn76Z37oR/p+/1d+/Tf+3b/51xcPH964+cT+M7crhKbWxhA1JqktcjcsCBYeOQ0E5NwJp9qCnebd2aPHd05P7pZxR0u3uQIVEdZsPUCaraGlZEnVQ1ardQDEIqLWvBEzMSOwkAACSFNp1YRbbSJMJORLo262Si89/8Tf/uFP/vDH/vof/+lnP/3vP/3Fz31+szm8+cQLq72jEqlFqJhxiDIJt4jWRjVWE2ZN1ompNz89O7k8f3hx9nB3eRZLOJlJmEIkaeqyac4p59x1qWPNqetS10WEad/3g7u7B4gsACbGFToRGM0bIyBRWhEADW7JnYziYNN/9Lkn/95P/20R/Pw/+4U/+P3/xaG3n365W+0BMpWmSjmlWiuYgiQxUTRhS9rl1Kul8/PzR3fePj/9YNpdeJuvGIxyZ0mlU0vDepX7dddtRIWSppSzdIDQchYWSV1ydwKEiABrIA6ScBeIGIgBY5dWQ7KTcCddAWqHzY39Tz7z5D/6Bz/7hS9/4Z/+ws+/9/adm9dvH60P0XchHUPWyUptYpo1e/PEPbPSsIYqKZ/vzu68/drZw7sLOC/tk0wtdev13v7mMFufh71u1beowtYqG3eJkySZ2+gSNUqb5vL4PKkBUUthYgsvzBxgNJ6pgskBQEw4SkDlTMEpnj/c+7Ef+IEf+1t/85f+w6/96i/+srE++8yL3bBxUpAqmSZrrSmLkgWIzdjYRAixu3z0/ntvnj66T2hEQSyslrP1m816tb/a7Kc0mHUsCudawZyrl9bqhMfzNNZSSiutllYKtQZTd2dmZiaQoRHYg5SD
PrjryhiGf/QXf3j3V78J620PNTkO7dDakskZCySRyCRRVtDESqSqmJRUWmCChMAWfGHzwnmCaF9vXQWY+fyltQe+9/3NC1tQtQltNjd4x1tuWF1eRERBIQCLVhRHvL05XHeptT1rfc+7rmo0kM0Xu2ZtlfEss3NZkZEzrWRgpza2WS9/Q/fKYrH/pXv/99999Wtrx88VjMGmSWh9dMoGVARTYiAwbRA04FQRhJlbjMTiyOSWutbkYK31CEiYSNX+Xx2K54br997/yIVzm9zEDLrDjD/xzhsPHtiDqKBEEBVVwYQkDdJgMOfDdKJVFYOIgHWq7Fya6wxG0nqDmYMim+O23pCNHe7KQ/nOh5957g/+6++deeYVHyQDKwQovswMsCBZRVAWC1Y1IQkoBgWBJBAzYwfOZ5YNcq6WCY0QALDhAGgVxCJdnIz+/u77N88OZ7Paojlfn/3oT737zVfuRxIFDAgExopBwNzADuP6876b7fjhmfPLRVZYUpBK4jRdbKnxxBa1ql7LzKxqOiu6r+86//UrX/78X/w1rI8chCYGgzlbQgOEpAYNAypGFdTGEESllkPHmIFRa4vc5A5aFQWyCsRci/EFqKI4KiwkPb529q6779k+M2wCcysXcfKWt93w07f+ZMRkwBjVHLDlpApqCEVKa88NR7XP6lQlSyIIiFEnM3NJJTC5aXOxxo0I5bxdfu3EuT+468uPPvR4nLUWASM5KpXUsyiqqDqkJCKaCESVIoMjXMyynMSQExVMDaIBVAURtA5KhKQqmqwQ2NPnz3/t3vvOvnpeW4spzqQ5euOhO97/3hlXzvoIiSC1mDalySLOZV1L7rV6eHZ7vbRZaOTCaKPsZJ2OVHq+4/JAyQLNWJfw6P786Fe/c89//8u/Hr62gSGChBgSCCmiYVFgBFCRiGpUQZVFPMJi7nMUIgR1SZWADCEiEyCBEWV2BsEm9ohocGa/df8jx350om1DQLUg85fP3fGhd6zTVh8zqzFhkDA7X62X2fKq3TWt6+PbJze3NodjbrKZgR5vzo67J3a6+X3ZdV0ZBIyayf7VfFKn3/+j//TXf3PnQtb3DSdtUBJHg1ZBWyZLAC0wE3rmGGuPZqnTLUwCTQlAFS0AOCUVVWJEQkI1CdEnFpcbgISJkezTzz4vtYSQ2EK+NPjg7e/kPiOEkdae8znojYGmddqVzxkCdK6datyaWYCLG+P5HpJnpyqxn5kuWLViY5Ljp078yef+7MH77l+CuaaZBmZRYDJCjAHUGICEwJjUATnSQVl0nbWqKWIko0QGFUBBLAoTAaJDRQIFtAyYOCgAADhwth7WqLbwLiyV1938tm5nMNusXJZLAcvFXKF57otJPlYzZem03FQ8nLRpOBrHWG1Lsn5+wR9c8jvBkgJG0OePv/qZP/+z577/rElZmwJZYqMg5MAmrcEIoqiAKBFqx+KcdwWoJomGBDVTVBVVTUhtSlbBOgusQBqVKTaBfYLEVKIDL0oh1S43p0v6+B2f+Gfvvc13Vk7NzGgSOqlwtgyIXijzrgf9jBwRNDoZhfXt2bZMbAITs/KKhYNLfsEAIuhLJ0/+zu///8d+8Gw+CzlrYxpJSpjQijpCUtEUuW21RoIVl+10zlgJpIxoEI0hQARBRKsGBaLNMgmtoeTUI/asL1qjHWvZAgE4Y6y3ndfM9FM/9/EPHD2oGq9ZXlztD84OX9uOYSdDhq6ByvtO3wwCN3PZ/GU792ytra/GleGwXl1eyLxLXKs4AXzkpR99+v/79PrxU8wxKrOFIioQiahoUGACMUQE1DWUU0KUGjy3SCiOFBMbJTWWQZKyEyptZmIwthDM0KK31HGr3lTJSKmUrG0sW8H6xje88eM3vd2gsjqR0PHtjoX85PjUjyaznhnUcdjLB2rAGhVtd8+7tYPZxrGi7PewrpNNFzYvrXR69zz06Gf+5DOTS0OTNCWOoiAqnMgSJOtUAoeKtWOpb6kw3gCkFKOJCQAZBVABDKAAIBCSAIAKAtrorXe2LEp0NpHAyv6lXYuDlXLnZXv2Hthpy/39X/rY+8hEAQrYzig4cKu0aOb9WnViAqPEtD1eKxb9ilkUlTlaumx5ZXRhC7Yk7xZNm5ayub//znf+86f/2EwCNClKTKrMgIlZRFN0YlrRhNTLcMmYDDWKGDEEjoEBuAFIQknAGiBhC4SKCZFUUpEnQ96mOk10eeeRW2/95Ec/cVkvj04k047N7C984o7Llg8woAHxYK1aJGOBd5mFha63mmEnO16/8uLGk6PujpVitcTck2WOmS9bxesO3/DSiy//0Z/+Ga9ts2IVA2oUlYKKGpOxRhGDMqem582AfAYsohYMmBAQhU3GhkxiAlFIoKgajSEgQxaQI7WV62T7D7/5vW+77d03vXX1yi6VRgOIVYOgYK/atyeqehVAQPCEoCpgLIM66KKwIT2S718osrOTk6eb8wW5aVU5N4DWXXvF4bWNtf/2l1+oz21DMgGCAVERC1CH2gAJaEgMIsuFW3Y2JGFVAAZVJONEBRQNeMKQOL5+ciVwQIokhqYWdx256p989KPXvO3ozqXFAXW66Eg1ohAJKgqqzV0OKiIKxiKoAiOiKJAAAKGhhEIKczLH5Y7NyBeHunZ6kmvvuqNXR+UvffHO88dOakKOjc06iWfGKQuTSuKUQC3ocua7DqMKvw7vYpRBWCFJcmQBDEMSaWLoOGdcweC001E3X4dXrv3JhU/e/u7XqrUl9n3jEAwqWHX4+gwAJITooAlcqSIygKoCMCohEKIiGbWInm3ZM7uO+CvesvrGvbuuOnDg4PxC9w8+96ePPPhISG2EVklabghRmZAJGFQSQ1jwfsl4BZSkjg0oOSUVAERyJCRgSa1x1haFK4vcqto+veuXP/57f/o/ev49d3/77gdO3LdQlH3rCCwKRklK1BhgZEAmYwihEOs303qFlQKAgFEEQgEGFUVBYde6XHtMFrW5du/y3oXeZ7/4v55/+MnCdjmCxIjMBA1DVBFOGkFRYafp9i21yI6RFWpUEgXFzFiLhIYyMok5IIKhruYKvcvedOPt//Tnjhy66toj+W/9P7+09eKRL3/5C6e3Tg6VBRgoGQsojMKKoijWoWXEXG20sBXXBrRYQklAoqKo9vW5TMgkhOoAmSwLfumb9975hS+XQRFYODEgkzVIlJg5ttAaSTs6vVKQIZJSQCDCiNIgdAQTiIBYMYLgjGXAcZTdh66+7X3vc10TKG4221WKH7z94Dfu+8DD3/mf9139td4H+j7fX6KP4CzAJDbW2J54IiKGaAR6UlrE7bhRQyXAKoqgogJKopwBOnCg6ACffu7lO//Xna6xoW7CbAipwTqoBOLIKbEAJlnIfEmM2BiFKCLKnrAbxEVtNDUQI2DDSZwTZ0exvu7tP3HHv/hnu3Yva5IO2o5Mz45fBkn/+t/89GTdPP2dx44df+KsnJ0CeJQhj87MLm6mabBgZ/qaoXILYkZJqG44jAUTdhyZTD0AJZgZIkEnKoJ4fjj55v3fbra3kWGSGJ2xRKos3BAUEZm
5nfO2JGoTGzWGEEEJsOUkBsSa1DKpiGWyNjGzt++742Pvu/32EOnihfVuv5NlZsbb5+tnYzy2f8+RX/t3n7rrz//ssQfvvXbfAS7syObn4qhTZvMOM002SabSi7ixzqcTNKDZLPEGrAFOFu2gMH1LhOALjBZ74yp896GHnn/yKWJN3Dh4HcvUWhdVKo6aUmFNz/kMlDExqSQ1BkGVyIIKiBqF1KoVmFGd71n61X/1G3sOLTY67XR2zg1kFAOCWK2zqANTTnDr1z/8kePPvnLfA1+59sb7rj/6Fm40aJfcagAXsbRVQtRqC6abVQJtcy8d6yC49XFzPp1Z7dvVwa7Crk5AO1w8+eyLf3fXl6abWygcIysQAqAkBo7ImtQbWPTOEERFx2gJEhBLVARFiiyZM2xYnFRtKA+s/OZ/+O2DBw+fHR3zpj9XlIu9pQsON7bOkoTC2yIzc3ig9PTL//hn/+Xv/ugLX7yv+LUddfQLnbjYHXGzMMrWKNnXan/C0BmE9UbGiBWbaXTnqbdO/exCa09Ot7bTRgYrJ85t3/eNb4/OTqrRLDSzlAKzqKiIJpaYNAcz570zIJyYRdEoABnwkOXifZSuI46M6CoOC1fv+fV/9//uHexglk5/v4XBxsb2aHPLWZjMtlvTpBwRl3rUA18cOXLwA+/80MuPjh/8+tMDylOls3Dx7PCl5y8+bQlnLTTTNLPe5STWNnNuJdjl0oWBLRs256rh+bV6cTF/5ImnHvuHB1KtNbNVVkkKkERFUFiRuczdwFnlAAoRoULOVTUxkSgoWGIGa21K4cAbD9zxS5+aK+anbeNb6peLraEZVS73p8+da3jay1yP+qXOAeJUp42Zfvh9b3jiHw4/8I3H9h0eXH/V9ayBnPcp2jk87BhbPAXlayXQoh5agYOkeUU7IpzPcVjlxLL/9Nntr/ztXePtmpIJsRJnDBpgYeWkrUrq23zREIJGRCEwqobFkgGDUVpyJjKEBGQgApRdm5ekGLkhmUpLU6BO1O3XNl7RYsKT6Mn0kEpuajPaakeVjLp79JYPvOlLnz3z4H2PHj54xWK2ExTUj20mS110825lBpcjjL0ZgBaEpsCCdDCh0FWrduUL9/z9xVfPOMiaZmbZqbWibAhiEga1qL0MFaKqLcC1EpDAvO5kNFq0IiQSTObUZfPl7kcffml99pl/+al/vbL7SEw116GhrYonZLmf99c21tdG2+U6tuCYhiqSqdmKw1veffT7//Dy2ScffuKmx/7RB3+Rm3pGjgiJiS3JABb6uieXjlEFjAYJcD7XXRkfPHVs4+H7vpkX3heUdSyWFiEASisaFUQht6YvJNEQUystGCQgQWajs6aZhRiCAqIovudj/+gXf/u3bnn3zxx/Kf7hn/zx95+9t4XpKG4nGfbmXG/Qn85qro2jxWkL57c21kfbKuB9N/OLpeM7PnXLbHvhnjuf/P4Tj29P2jALhACgCoiApGQUUKCJPGMVB2q0aKb+qSeeHg8vefIcAkIQiZAYCVGShjYH6JNvIbTIagQBVDVKQjTIyRoyZNXESZTbPvbPF3cuTcbn3nHb9bd/8GMbp8xf/vnffPvhO7NSirJLkabVdP3ilkE3N+h0vTGGywwdUuK2kWprtr64B3/6X7zjwov6ta/df+L88e3phBANKqESoIIigkUWRFCxkDTVdOb4hScfeQQka2JryKmgpWit4RQbjmqhdNZbCsqEiClatSSAxsxYppRasOpszfM/d8fv7nvHmy6ON0dN3YbpTTde/a9++3emW52v/NUjd935hViF2LqNV+vR+YuT2bkc49653bt6l6tJEzPcTttV3GLktcn4bR+5+dDVb3jh4dNnj73YzQpirUSjaFRJAIIABi1BAWIU3fakefrpH6xfuODIJq441QgO1SmAVULRAk0XLCd21mZgGaglDagcxQg5dN5TSvHIG5b+yS/cOry04ciTWBYat7Plnek//uFnjlx224Nfeu5/fPqPfvzMU/V00oLv9ldWs51LfrmDHUxuuN0OR7Oqbkhxsd+zdfj13/yFolh86L4nz24eI4agwAqSNCRsW6wDRdGoEOqUTr528YUXfsihYlYRr6oCosKkEEJrATpkPKIjNKKMqqoqgV6ncQOiqCQ0p+//px/aMuNCo/G5c97aQkma2C6v8K/81s/e8t4PPfO9M3f9xd9sTU5cfv2+7mIZ7ayR0TavD8MsKIADV1jfMa1Mtycvd1aqn/rwe869pA9/+0ki8AjEIAHbSiYVzCqYJp2hqYezyasnTp89uwYaOTKAQWWFVhWDpBkma7BwNhFHjQlFMSkAkrTSshHUWLBLYq5+28H5o92njj8dYhx0F5NoSLVFWlle6S+W2fzswz9/28d/9Tc2ztAXPnfXA9/56sKO3raF16Zro2ZzHC6p3RwUYaUL3cxUbV7H8uzWyze+c9+Rq974xDdPkSI2pp1iVWtUxC66HvUVaSLh9KXxi8/+mKeNswPv0VlKZI2xZFEEEKCn2Ff0Ah1xebKGWVSNuEKMJGnJNsR+OT903S2XzlbTtVlqokXolD1f5uWO7vxy0c271q80oO/++LW//Jnf9NnyY1985mt/c2c1uXhxdn7YTHqwaON8bIvRNK1NKg1YWvLdopzLrn/vYZiUltQYcTkYQx1DxgAhYGnzqnInj7167vR5TZUqKVuJTJIrtCxNTCFDLPNMVBCRLYqIQkLIDRhRRStAOI7NDW95Qzk3N9uWGMTkBnMuOjZPxaAsG2hjC5Nha4yOJ5s33LDT/s5v3PWHX/zePT9c31x7z0feNL9zT0xNkWVrsY0aAMyyXx4COZrarD30ph2H33qtJTAZGVUAUVJCowqsrKGVEyeOj4cXiUQSEEakBkFEAJCUoAQ0qMIimsQag4jJIdaZLdgatmRj2d0Xf+bn3zWehq2T1C1X1CSbS3e5O7w0rIf1zNpWmhSlhRArrpnz2Nzywbf/4OHsxIM/+uK5+z74ybcfPPrWsQBia1w2X5R13BaZlb4PrtjbXfylX7uSQBUQgBQIgBTgdWOQbWyNT5082dYsYplFJCIFgCQpcVJCWOoPEIAsWmskJWTVRNaW2wJTCAXJaTj5/l/8iauvWNqxV/YeWezNd9HFhdUsYSOaYkyxEiT1eSrAyxaMz2/N1ttm2r713dfd+v6PrB/L/+JP7n7qR4/szLoD1+93y0GvZzuwsNKHOdcr9+2m3ddddfj/AOHhhit73RkpAAAAAElFTkSuQmCC)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAApyklEQVR4nCXT95Md92Eg+P52zt0vzUsT3iRgBgMMMgiCAINIMYmiAkXK5m4tLcla2yrV6ry+tWx5q3y1V6492yvv6exzWFuSLXtFmaIiuSRFSCABgghEGMTJ6eXcr1/n9O3v/XCf/+EDzl+5kcmMVGuNTGYEYTGGIIETvf4gmVBhGMI4tt0AAOytt946dvQwzdDjoyWEgVqzmc9nCAz5gYcTpKKoutY
fyY60e/3BcJhM5DiGbbdrCVXq9nojmTzASUEUhrppWbZharzIAhg7lpnJZBQ1UW80dnd2Du5fvHbtWhiGhmWePHkqmUrV67XC2KjrOXumZ4zhEEMoCMJ79+/v27fPMAwc4Iok4ZVqVRsMFFkxDKNcLhum0eu2CQLTBr1uvxNFYUJRBZ5XVZGkULGQ9RzTNPV6vRxDaBiG6/gxhO1WMwiCtdU1juUkWWk1KltbqzD0R7LZpJp69913hoOeNbQwDCMJmFTFOMIYhqVIplAYi2LMsNyTDz4IMWx6dvbQkSPF0dFKrbK1s/Xd7/2jY9swhKbtQIgiGLuel1AVz3dZlpEE4ewvfwm++/3XgihyXS+Xy7MsY1tGQpZIihZlKQwDfWAYZpRQ+CC0cYDRFBP6UFQUQRLS6XStVoth7HkejGKaoXVdH8lmMYAv3Vw6fHixXN6dmZkVBcW2TU3rF4qjmtYnSazdblO0nEgocRynM5lGs51IJvRBP5lI4hgyDD0MA1VN2K5jmdb83rlGs5lIZYzBoN1pb29uJJKJZCrBMKzA8TBG+P4DB0VBymVzsiRl0qlapaIk5AiG5fJ2FIUkAQbDXqPZQDGgaNa2PUEUcYLQdWNzY4vAyVQqbVlOZmSkWBwtTZYgiiME1VRy6dat5eUVXR96vieJMkUStfJWt1UxdK3T6bI0jeJYlaXA9xzLWF1eCbwAx9DP3/jJ7ds3CQKgGKmyOj42pvX7oiD8y7/80/07tyWJm19YOHToGADE7s6uoiph5JFb2+sQRr7n5kaymqY9/cwznucqMuG6rus4FMkcWFwwdUOWZRzHczm+2+7YxrDVas/OzkUw3q2UeVGQkyqGYTKjxgTBMJSlD6vb24cOHlRTST6hrNy5vXn/biohKArPM/JoviCIfC6X1fr9wXBA4CCOI8e219ZXSqWx6elpWVRsy9IGXZ7jPTeQZdWx/bnFAzRF2qYpcpQscdR4bmNjeaJUwtfXV69fu6LrAxzgnhv4QWyaXgQBgTND3c7lxqIwwjCsUql89NFHfhAwHJvJjDz8yGMj2ZGxsYnR8Qld01xjGDhm5AVYEHVrzSAMT50+paiq57r1ajnwPYAiEkcwCkLfO3jwAEFTmj5odVuSLCUzycWD+w8d2h/DKJPOCLzc6w1FUYqiiOe5ycmJKIq+8Y1vDEyNQAC4xurS+/3GJsfQMISW6YCLN5cEVmBZzrJcUVF7Wl8RJRzDOu1mt9tJpTIkw0RBYJhGX9OmJqcmxsdN04ghFgPMdj0Yo1w2A6F3+/athJrGEJbPZzv6AEWRLIiVnV1JEAAW0xTodVv1eq0wNrZw4OhWpRnDKJ/PBb7LsFzgB65jj4+Pen4QhRjCCFkSdL3XaTdGMmltoDOCXBwbbe42I6e3vX17as9irrhnY2PzyOFF0vNxSRI03bA9rzEYFEeLnXaHZ1mAY5rWkxWlUmsgGB1YPOAHfr1RpyiyVq2JopjOZh3XXV5Zt133iSceK4yVRvOj1fJuX+tqmj6SyXR7vbn5OQqhja2NdHa0qxnp/ESxNDO0nPHxCXOom8Ohrms0zeTzeY5NmZYNMFIUFYhwDI8phmq1W6YxFOXEVHHU6g5wgBBD7T/+0EAPa53OwNRN0yG77SYJQATjTl8jaHp1ZX12vGgaA0niOZ6TJDmdKezubHW7HUWSZUWGMZZMJROJFA6QxDG8wF7+6FpmJHHy6JFKZZcMIzqKZsfGCY7v9vsbG2uObU5NTwOCmpnbRzOsbduSLPMcbeg9hMUAgMGgXywWRjK50LNxgiRI1nNd3/O9IDpy4tTWxiZBsJ12f3fjjsBzgGVoQhBpnhMJIp/WbQd8dOdOFMar6xuAxGOclHlZYGnfcxCCQRAVCmM4Qd28cW1merpQKMQxIkiq1+t0uv3JibEoCpVMlhPk8xd+OZoZYRja7vY37975+Kee+4u//svCxChLkvsP7McJGoNYKp3rdTWW43zfB8jXDX1h34EYxhjAaZqBEIMoWL7/0Wg+JycLQcgYti0KTOTZjh/mx6Y+OP/2xMRoKlfo1AYPHT+OAMQhaLRq4N0PLgY+JABSExJF8drABFicSCTv3r3b1/qqmjjz8Onz759XFTkM4YH9B3fK5WQi4fkez9OiwFeb7WQynUiI7WrTtG1o23q3+chTz25VtwcDTeD4dCbf7rSzmZxhmYqqOI6LYITHeLVWLYzlE8lEr6+tra6Pj01SLEtjnjFo5sbGo5iVpHSn0+33aooqJNNpz/VpJtFoD7R2b+nyW0v37np+hGldcPbiFd8Nx4tZTWuyNAsRFcaIYTnbsjmOs2xzZXVZkSQMA7l8Yaw40dMGDMvwHN3rdTc3N0zLfuzRR2u1ndCDgKBq5QpA4bGHzqgJxXfdXrtLUqTnuTTDYADfLZdlWZkYm+gPejiOcSy9tbUzNjbxne/8I0mTPMctLhw4c+aher12e2V1Yf7gnZs3HLtLMVS53MQJfNCs07GncBHHoyMHisVSnhST4BcXrkm8gKDXbpVz2SzCKYxgSZKOY5RKJizLuL9yv5DPJdRUXxsQODk5NdNo1HvdFsJBtVpLp9KlibF6Yzc3UpQTGdd2aJaEETCMwaUPLzE0M1Yqmpa5uLBYq1b3LSwghMqVSgSw0HOL+Xyz0UinUxzLxzEIo8j1/G63v3TvjmdbmyurGHRRPJieTGPQhy7aO0YenFImZmc9pYBHWKuy40IS/OMP3picGOU5tlatqEmZYdmh6dAUS9P0YKCNZNJhDLVuT00kksm053hhjBiaskw9mRmJMYKlmSgKvNC0dVNWEwk1QTOUoYWG2f3hj187+eDDyVRKlqTNtVXXNjmGKtfKJ049aFmYKkuppIpg6PqWYZnbO/UL59/vd7v12u5IlsunuJzMYk5PIINckt0zmR85dYaE9nC3imHKesd+5/3VWqPvOAT4/k9+ioN4JJMVeGVicnR7Z7PVGrIsBwBIJRKe66xtbn7yE881m82rVz86ceKkKCuNRi2hSBQvuB6UBYlmGSfQA9tSBGlt+f7rP/nhSy//+/HxhCBQ7dZQ77ucQLY75U6tNejqn37hZSUzQpERDFGr3S/vVM+9f+Fff/ia0dn95GOLoeeFnvn4sVJRiksH906eetAnebvVE6Pgxlr99V/cvPzhlulGnU6f4bhUcoyXaXDl1r12ZzCaz9a319wwQBRLUHS1UiFx4vDhwwAD9WYNx8lEIqlpWqk0yXGcZTuKkiQBMrQeBpAgyyTDm6bNMtHSrRWCohIp9c6t6yk5fezoqVt3b7ienssUp/ct3l26rpAMl8zcX1v5wY9+vLVyPUkwsgCURPKlp/aNKWhsvpjZM97c1vq6Q5NitzE8f2XjfqUqJlMfnFuiaLEwNotwEuIhTaFusx34Prhw7V6l1jgwv7e+s4GRuBchgiSbzUY2M8JzvJpIxAi22+3J0pRpmrZjoTjmRYVjZd8dBJ5WqZQnSnPXby5PTc6mUjRNiZYbFNPq1vaa54eikhwtjXleZA3d7d31C5c+bG2X29WdZIZm4vjRo9k9Y5nR/R
NUagr3BqGjkSTVauiXb+6eu7wbAko3HdMOWU7KpEcQdFiO263W/SjkCEBEmCSnYwInEY5NT070+10n8EMXjpemRFF0bCuKYKFQqNXqnV53Znr63t276XRSlsWrV6/IauqBEyf1nsNQLEmzJI1NT40C6N+8dnlh8QjLS+XaMoQ+DggYwXZr+M///C8rK7dB5NqdytPH9n5iYWri+FxhNCOQAXJCWuSrW+s73eDs+Ru9Puxpw75uC3Ihk80wvj45Jq2tbqwt3xUFkeYZiiRGczkYRp4dGJ7nhB5Zr5Z918MQdubhh7vdLkVTDMPkc8VXX/3+1tbW0aNHGZq1LNu2bUWV0umUbVmhH9imxjBUu9213AAnyOnJ8fd+9a7j6xgW6wO91mnumz/5o9ffuHThvX6jcWxv9j9+ajE9qYppScIDOc0ybmRZg/tbzt9+9/2tnT6NkyEQ82NFmpZ1e5gfye1ubmvNcnF0HANwem7PoRMP7K6sNLd2AI7fvXM/AiidS8cQThXyYHl9J0aoXK6LyQzA0HDQbbabjVrjzu07MIa/85WvqEqiVq/FcZROJkWRv7W0xHNUT9Nt2zxz5jTNc51Wyzbdxf1Hao3y3N7Du7vbnU7vD7/+jW5394HDIxNp+oF9o7N5Uiqmc5Pjth7d2AguXFneXW9rBmpa+Nz+vbdu3IjDCEOw37O+/BuveKGxUdnhaWZp6a5pGDTD+K6npsTjxw/LYrLbGcxMTJ395S+Gem9MkcCli+c932fk9LkPr48Vc/lM6vbde77nP/P0s/l8oVavcqLoeZ5lDE3DpGmaYZgIup7n9lotRU2wPB/H2O2lpdNnHgMU9ebPz35w4Z3y5r2H9s/82vOHSpPM5Nyc5yPbt7eWW+v3qh8u3fvgRhtX8gTB4DHkBE43jUwqIwl0GPgYTt25e4sgmcnpkirInUF/78Ki1aosFIo8oP7XB++FOJ5QhbQsfPr5xxcO7v/R938KLl66SFJ0iIiN7Uo6lchmMhhOaf2+LMkURXa7PTmhxDEq5Avra2sMRdA0azoWyxNJJUsTLMQ9EAdLt2436+5P33wdczqH5oRDh0bnZ6ZH0gSGLCyWlu7p71/ZuH5ty/LJjqurYiI1khr09PHcTK9XhQhzY98L3UK6gAGCk+jtjUqrVRsrZKeLeV83C6LMAHJc5gopUZ6fXu82MZrRtQqMY15QwXZDW1ldjsIQYBhJUhwvhFEoiRLD0K1mK4rggQOLjVYdxpFlWTFEhUKx3mxKgqwokuXoWIid/dX7Z9/+X7E7HE8Hn3n+8dkcNjKlSlK60tBf++mlq0stLyRrTTOVz/b6vVJh1DGs7UaTwRjHC3J7RvKkyKpSuV42K3WKk5JF5dknT6dTiR+8+vqj8wf3jacTDG3i7JCgL77zVixQBx48ksyMnvvF2fWNHZJhwNU7a77nwSikGSaKIoZheY678MGFVCo1PzffbLaSqRTNUBcuvPfAqQcuXLhy+tTjPMebg2FKpS8uXb547ua5X7z1xOnsl7/07NhckYGGu9u5u9Za3Rm8f239/raXGpmy/WHoRziGdQetbCopxkwoiT70fGvQ7HXS6khBkk6lE0/szy7bzrW1nd/86r/d3N2Y3HPi3kerr/3ou6f3lJ565lPs4okfvv7m0ttnUYI2XYOIhGRmpN3rgJ+98142m719+1YylUqlMzGECVW5dv06wzA/+9nPjh07vu/A/tLYOMcwK6v3E2qqVW9PLizGDPkP3/rrc2+/TcPex48Xvva7L9hujaSZKMRf/Z+3Xnv3ro8rpCQLDB/oOkFjnJRkcZLhmUqr5rlhYNuTe2cmsnmuW7M1bVfXv/7SExEePPDKc+Xdzosv/5ennzv8hS98YSw/87ff/od7H141BkbZNTJqYkocWdVNnGdIiFumxrAUWFre6vV7G+vrU9PT7549+8yzz8ZxXMjnEUKDgYZjgBVEa2hEYYAIBMPIt7xGp/s33/mu3ai98sLx5z69b2wiC2DcbA3Pnl1/9/Kte/d1VR5n+NjxfIqhPN8iAO5r+tjISK3fGd0z22116/UmTdHPnDzwfKlY5IndNPbOpSUyM3P0yPz28vXtTry+vuHoA9eFieyY6fqB67Ak4fsGTpCykKQJOoxCUWEJAgOXb96r12srK6vPPPusNtRd18dxMptJ4ziwHSuOwigMWEYMohBj0O7W7rm339tdvjk+yh0/Mv6pZ2YADSIo3Ly58/a7Gz89u5tMJ33fVyTatLqCkNQtK13Mu4Yxl0/ZhqN5cGjpjxyZ52VHVtK27ogQf/K5030Qei33m3/+bY8QB7p26IFD65tVU9MhSUY4YmSBJkjGA4AEJIcl4zjGY8RzPM/jBCJ933Mc59FHHoYQrq6u5XIFWZE4Uep32mHoiwLbGeoIemoyd+PW5t/91d9gfuMPvvbJQ7M8m0mwPP/Lt6+/+tNbOzXLdLFkRqEZjuFwDIBUdjQIPQFRRmdAYqhbq0ZmAGieCL1HTs4Z2vITn3my23f+6E++3XyLunfr2mSqKNNc3zAgh+6s3IugxCkKhL4oCQQgSYpGwOVFgcA8KcZN6Epp1fMsWVLARqV7+9b1bC7jejA9kteGBsB52xiaTp/AYKdemy1mR+dnvv6Hf3rx/NnDpfSvffHEs6fHvYEHpNwbb1759qvX6wZbmhwDftjp1gkcJdVEFLuOg1q95r6FA5Vy6+i+qVMZcY7E29b2w7/zpW/9zfcXj84VBeXa8vqWFr355gWOESgepwgap8jA93iWoxk2BjSEkcwTDIQ0QWISCHEoUqRKc5rlBgC4nosDHPzwp28yLJPPj8cYoQ/7BI3zfMawXddqlfKlj65d1mz9Z//6jt7e+q3Pn3z8+SOFcR6gaHvD+/0//Pu6RsWIbRndp55+dvn2ejabaNfKyYQ61DU/9CBGGcOAZSGDe195/AlRq08fL3lT+Z5uvfXdN7cbwx6gXYLGQzywPFYiKTICgI4hKYkApyMY05wgkMCWOdocujjL+lFE46Rvuj4Ephs6jmfbDhl5lsDzajKzU2n4QZhkhTByeIkhgYqI0HO81//pRzznfPHfHH30gaRVvdXRJjeN4O//xzsr1TiXSTV7zeJobntjy/eGvfYQAWx9c0tOZ0PHPHiohMHgc888MGL12tfW3tjYxc1h52eXxidK93bttotIjiRpDCMjjA54lsURiGEkJVgMi5WkZDsBTQPfJQwL9IYBbsWBD3038P0QAxQAGE4QoqiCG5fONzp9NT/OyKl2oxG4TjaXyeUy3Y75J3/+Z8vXrj3/2OznXtgzkWIiJqrvem+9u/3Gh+VUbqpWrbMK16yVc/JIVzOm9ox3m3V1JK/HtjQw0xgxMpHix7Nmu/rC3ET31notV/rlam1lq0qRlERRMQ1oHJAU4SOIAyRQhMwLMIYMywRRyLBkq20gRBrmEGGE5YQkiZMYHSOEMxSBkSSOwzgiKYbcWbvf7LQnCKiC2Nb7M5MTsiTWd4Z/9pd/Vblz/oufOvXZzz/CozalpNZurHzvR7cv3O6MFMYbWgMyCEZRUhzBc
ZpkiHqrHkAU1VssHo6RxMOzxfFDBfbInr/+q/v/zzvXHYwaNlb9AKNJICcYCmejyGJIHKdoLAaKyNE0LrKC6/kBhK2OZhu+H0YhRCTB4CTBsDwW4wAQFI4RDEkTVBhEcQQgxMj06FQA6AcffOT+Znlu3wKJwNu/OP8n3/x7lRr88e+fPnTmIRYf8Xfaa3cqv/3HP1Cy+1wIHcOv7dSKk4XKWu/UA8evLV9/9OGTuysrAYqPqJKqGVW9+z8vXv+/Xv7DD8+e25vKflDd2RwGDAsYBiHGDRiRiGFkR4DiRJbBSZLmWNfxtjq7A811XBRFLkXRAPAUxTAMQRJ0EOIIC/WhGfkWBhCOk3EcYgBhiADvXriSlOUYI9oDM4JQlYT//H/+8datG//pNx4/dpTSNE+mkvcrnR+dXV/ZxUYzquV1Q8TAME7yTNfTEUEA15EYDgw9EqKXzszMjyaKL338r//utUrD3tlq+qHPsZSP4QLJuRDHKahwDB7aJEFBAiNoyjR8O4gH7Z7nQUDgnMSQiIkCQLNUjEEEIcBiSRJJglFTiX3zs5LACaKcTCYOHTooigrpxxQnJaMgSiig1uj83n/5fVtv/tHvfewTTxzGoFWcVX726tW///EtkhfzquAj34K+LHFMRIUgmhkrmUMDz6u83vvcidnaIFr3DCIx9eFrbwE9aG5XdQxnOQHgBEYjgmXIMEjQnEzTOha4ERxopmE4nhvjOEGRrMhTYQzjCLgwIDBsYWEfz7OJpHr48OHpqdmEKo+NTuBEXGuUk2pGEKQg9AAAJIgjYzhUVJGIiP/2p//V7LYeXuReeP5Ia2d39NDBt7737rf+6ULMpdK0aNpDjmN5jkcxDTAAcF/Th81KC7HYn/3uC2h35d99/rHvvHv59Z+dN9w4cG2O45QopjkWR0CmcYUiRJpkCGJgWd2u4ziu54ckwfMcwHEijGAQhYLAliZLTz358cmp8VJpwvN9hmVtx5nds4cCeKfTdj2bJLG+ptUbbYqiHN8iJUmkKCZC+Df/27cqu9XPfmzmN3/7MYqmMqXR+7+683f/coPP5ViCGvZNNa3oQ21PaW63us3nErZpi7F7+KG9e0vFD9/+YECShnDn4vUNDeKeGzCs5BMEjsUiSQZByHM8g6Dt+A1zoDle7JMEwfA85/u+47kAYZIsfu6FFx84eeLBkyc5jo1if2V1pVptyIoykslUy9uWaXGsOND7NE3Oz+83hw6GIZKIwKUbW6Fe/tM//6u1ZjMJov/3Lz6TKxV8qx7a9itf+U5FH5U5IkaRkipabj20PcsEYlIkYcA7QZ5BX/zfP6sO+vW33v9XK7radimgiCrmBZGAMyRJRwimU1LoOU4ADcMc6FYUxqIgh3EMMMxxHEkSH3n49Ode/GxpfEzvaZKqDC1bFIQIwtXV1QdOPOBYVrm822039y8eSqUzu5XdGKAYYoqa5Gmq06ySMIr1gCrv7OBs6uQCxQlyMOyJycnXzv6qXiPpFM2LKQwzTceguQxNeKk0RCFgjejRUsoyezfPL3Ey3xZkmuNVveqGHopkIoa8xAAUiwxnWKZhWfrAigKcokSaI4IgcF1nds/MU089dfTY4cceeTSOoyuXzhM4yUbczZvXH3rodLVaRTFqNZu+55XGxx86+eD61tbW9tbU1NTG1qaaSFiOmS/M2K5Fum7vT//7/z0yXdrZWjvzzGdjgRIUZv1e5Z+/d8MnFDYIdLsnMjEeY4GF4RC5ri0R5Bgfn5mQE5hynaB+sl6rVjUHsyhWprGIJAga0DzHRnE0GFr9/tB1wzjCKIJCYRxg/vzCnsc/9uTDj5yenJxwbLvZbERRVJreK7LMTqXMsSyGUCKhMjRVKOY/+OAiRdO2t2O7zkSpNBwOJydKS0tLoigSCACSId9+4712U6cE/sTByQdP7q001xRp8urFWz0DyYriR46ayIT2gIwhRAZDqV0LnVpMM5X2udXy0VJxy3U7pm36ISeJCEKZJ1kBQIweGkano/khwgGPIILQi0J3ds/0Sy995qHTJ+fn5/p9XdP6PMfpw6E2GCQSqSiA6XT28ceLnU5HUZPT07MQwtOnzzAMI4hSt93VtB6OYwDDCvn8SCLtGRZLMeQ7779N4YrrWGMqi9P+zFQJueDnP75ESiVEBoQZu4HtR36aE2ISBC7khejZJw/NZQr/49U3/na3Ve0HrMLxkkwTFE0TvELQdLy13em0BmEQkxQj8CiGIYTh1373a89/8pOKyrfb9XfPvn344LFatSZIwsTEJEXRtu0MepokyzhBKmrKD4MwihNqIgrCTrfTandpktIH+tjE+K3bt08+cNK1HECSCkvjPCdkM2xkO6efeJwShEhr/eB77zhx2g9DHEUCm/KMgEEiYETdD+zA4H27U6sZYsilpLYT2phDYQKGkbgEAtqNcLCz22/UNYRoUUgihBzXnJoae+2Hr371q7/j+VZf63R7nZFUxnJsOaEapl3eKcMIybKUGRlJpjNnz/7K8yKKYjEMNep1gAMAgG1ZoijmC0WW4Y4cOea6rhmGPdeJcIyMwriqDUTSSyXw2kY1kVC2KoOBH6WyGQsOWYoO/BAnMMt0eIx+cH7k9MHFtVb/p29dvnR5haYkyolwgNMIk2jKDOzaak3TfZ5XcJxACE1NTf36r7944vix4mi+22nxPHfz5s0YwdLoHoBRq6v3Hnro1ObaBgwhL3EIgUtXLp84cZRlKVEWLNOsVCrb25uu5x04sB8hGAQBjpO27ei6xiqyzAtETJBkjDlRUKIcionHJwt6s7WxU4152faGVmj70JVEOYrdOIBJmppEjlV1ilML//CDt+2IwUAsSgLkfCWp6l29XR8GIaIlAfoBy1HPPP3USy9+fvHAYr1e6fV6FM24jvfUU88ahoUiYLtONluMUbywMK8PhjEWIwzbt2/vxMREjOIg8DAMLRxYuHP7VrFYEHh+ODTGx8YgRGwmI0mi7dpmp9d3ItIJkUozpw7lshm5t77skqrW9QM/EiSGgHRSTqcTQrvTC0nz4wulfIF+59INasiwfMZxXJ7nEYoyqrpdqbYaPQAJkuE82+YE9nMvfvq3vvxby8truqGn0mnUhY7rmpYVQ2SZViqZwUkcAHTzxvWF+X2yIrfarampKS9IBWEQR1Gz2aYYBifwvXPzAEMoRsViYfn+sihKqXQ6jCKGZqWCbJsu7sIAOsGxuX2eaSaKshsijGMTioBFcZpSGRrfbuxAJ2i2O7HCzT9ybGI8eX9pA8NIhkShr+eL2e31cmu7SQGa5QXXMjmJ+/Z3/vYTn3iWFYRsIX/v/u1arUISFAB4KpWmaCKfy65tLBMERuDozEOnEqoSBD7Ps1EMEcI6Xc2yPEVW8/mC5wXdXj+Owe1bd987915ClT3PbrWacRz7Xri1W/EiF2cZ0nS7vVBTMyop52p3li0zDCFybN+JPWM4oBAESWayVBx0q2fffB8RKVFSXAKI6YysJBvV5nal7kbQDyJd1888eubD8+8tLuxVZf7DC7+y9N7s1Lgi8b2BRpDkcDCs1xq1
WnVmZsb3vWwuByFmeSEEJEAEGcF2rVYpl0VJxACqVSqe56MIhoG/sDBvOW6zp5muT3Ps8vJtggyLuQSBEInjkcBFZ5476lpDFFFcZiyIcICALCsxhqPIZQgldKNSQXjpkUOvvnZhLQBAkiVAB76v943d3RpJkAROSJLwwude+M0vfcEcDi1L143h0cPH9KEBEB5jGEPTAMPefPONhx95ZP/CAk1TURCahgUIHEZQSaiVdnPQbQcwjiG0LWtzc4Pj+LFEgqUpkiQ0Tdu3/wAGMNd1MYRmZ2cwFLMcZ5kW6QWYxMtIZ03a8jFrZ71O0gIiKd9zEcAACmVFljAih4LtrSqQxUFdByRSSKzZ7jc7Q0mSLNNjWebrX/9Pxx84RtE4BRg1UcrCCEaQ50UcBzBGlmVZlvXyyy9vbW1tbm4WCkWCIMMwsF3HDwJBFHLZTL1aTSWTYxMlkiQ5lpVEMQp8kechhPliXjddSVZ0fdBqVCLfCSNfllVJVPAQI6AduRpKJzM+hmobZTsMwyhAMVDlBEXRlu1ILH10//yu4274jqgkKZLsDrW+NiBpzg9ijuNe/PyLjz72sGENoijoat12t68Nhrppur57b2VVGwyqtWqpVIpjOD0zvXfvXp7nWJbZ2dmiaTKfS2tad7dcjmNUb9Ru3LhuWmYmk+n1ugjGjuNEEMqyEvghhjCKomVJQTGKIWJZHiKEJyhSGIn2fnze7q6PLX48WxrjRYmgwOhoEcZxKp0xTGO5vEsnVGYk0TUxiMWDbmd9p4wBPAz8hCL/5Cev/x9//J8lUUqmMnfu3Tds86Pr1y5dvULQ1JVrV23HbjRbTz35tD7UM5nMaHHUtm3Hi2AMxkvTKAYcK2TTI91ef2pmTxhBmiZwAEqTpXQ6rajq2HhJVhO253c6bRxg44X8/N69H3v88SeeeNq2PZYXyL7tfuz4HtPWgcR5rVuDjh4hxFBEo96gOUY37JSS8gbdy1euxQkqKY84dq/T1fCYiSKMJrFv/NHvFwrpRqOKA3Lp+tKRI4cd2/jU88/XajXHNI8fOTwc2oVcXh/qPM83Go2JiVIikfCC0HNdxzFkSWw2K7ZtLx481Nf1PXv3ihJ39+4yBjCSIgfaAMIYxhjAiP379/M0Hfre/eV7pmWKonL48LHNnS2cxULKibrr7WS22HdhTw9DP/RdF6GI45k4hpgfz+Yzn3728XKlE8SRNbQdN6ABy3P81/7DV44dOVTeWjV7LcfUVpdXOJZvN9quYzebtdXV+67rTJYmtL62s73FsQzCkON6juc7jre8vp1M5b7//dea9ZakyARBWKbFccLG+qbj2BzHy7IMIUQxtrqy2qg1fNezbbvZbNiWlctmaQr3fFcQeJzh2aFrpEo5o9UbKxZsx1clSRVEWRZZhg7DWBSoKHbMyFdYwfaDnVYbA8hy9ec/9eQrr7wEHcsbdDyjL/HUr//aZzAMwhjs7G46trFvfi+O431Ns22LpZkoDAqFYgQhAuD60lVN65IU/fznXxybmAo8FMdxJp3yXPf6RzdnpvcMDSOGse97DEUdXlxMJZK9XjeMIoZlIhi5rt3vt9dW7oV+QLpuIIiiLCqDWtS8esf0ozAmWZryLWfYbNIsHdOEIKlnL1+JGckLhzhBR5H34uc/8/Kvv9CslX/+gx9YujZ/6MRWvaP3WzN754+cePj8uXdZivZtr93rTc/OZUayY4W86zqmYzM0vVvZVlVpJJEC0AlBlM3kfMutNHdFnp+Zmv7EJ5+jKLrRbP7/0QEBqpUyy7LpkXS9UVm6uRRF0TPPPC3J6h/8wR+defQxkqCF+/e2e9VlPEC1ptfqOrZNIhATcczzAsOSsR+Mj+eOHZ758fWVerkKIfHYo49/8y++ORwOm7WqE4Sv/NZXJvYc0A3r/Xd/nh7J6Obw2NHjtmk0G00Sp7WBVszl795Z2t7dOnz0BBGFhWxucmz8o4sX9k9MdNpDlyBDEKuK3Kg2ZVHFCcIPvVwuG0OEMMDzwkfXrkVR9BtfeGVysuR53r2795aWbi0uHvzv3/pLhuFIno6fe/LRoO9wowVCizq2ryayAIdkjCRV6vU7IAj6tSaaGQ+tOLQ9kmW/8pUvG+bAcv16d/CFr/6e7QZ3NrZYilJz48XS3qFld7qdTDJFEv3CaD6II9ez1KR4avRB140q5VomnYlouLdUfOfVbxsBOvbYk0gUcuns5Njk9s4ux7G24xSyCQBwhCFN6z///KcpilxeWWs2GocOHjQMY8+e2ezISLPZiYIAx0Jvfmqs3hrsrNfOvvMhRDSEURyjOEbVWh3DMIplMJJabXd2W5rr+//25X9z/Phh0xwEnqn1G+Vao6fpqYRqW8NkMl2tNqIgIAlqaJjZQtH1fVlREMLX1rcjCEiKyWZzrVY7V8jVyhuJBD+1d5oXFRQSpuUMzaGSkBJJVeDloTEMw6DVbuI47rqeYdiyKOMAd1133759nVb70uUP7929jWMIx7zg2oXzScrJjqrakMFIFkYRQzM8L8AIKqqKgngsnc6SZG13uzQ++r/9x9/t6wPHDjr1jipw2VQCD5x2ZdezdHPQM/Se1u9wPG/abiY7kkgmURyjmJibOwIjIopQp9OWZB4gH+LY6MJ+kEw1tA7C/ZjEDNeNIBaGURQGsizZtoHjKIw8DKBatd5qtebn97XbHRzDeV64cf2GNug7roMzSvzcK19s1+8TyL1yc3skm0UYiMLIdkwEcBRg/U63Ue3AiOrr7r//7a/euXPj6o1LptUWlMRoaRaPQgQ9huMmZhYcPwQgzmezA0PHSLBZ3tna3j539lwY+DTLaEPz3IWLNCvOzy+ahj1/7DEuvxdDrGO6qpKyB1oxqSg8+/ZbbxEkNrS0e/fuJaVUfiQHEGy1Ku1Ovd1p+L7j2A7D8F/60pcffuQRURbJvMiVZsDO4FB3x2tbvtzrKopiGUYc+6osEwQQFbof+zfrzVwx99CZ4wSO9XqbIGTbrW2GI7lM8t6du0ceOlNpVPfN7+NZZquyy7IsSdEURRWy+dL4+NTUTKvdXV1dfeJjj6cSiaVbt4vZhBuGerW2f26vYbqu4zEkee3q1VQ6e/LkqUa7UxofO3LkyEBrbW7d7Q/0kyePUwynaf25ubmBNuBZfnNzk6RobDDER4sSbfcXHnvsx7+8Rag55JmubWAY5AWep9kojDCWeuLJ061me2FhLnKsbqWZSY8EPojCYKD3tqvVffuOsBxP4KhRq5079x7JMgADxlDHcWAM9XQ6dePmjQhGtml4jnX37i3T0hEgC/mibZoffnBRkvilpSXHcQHARVEcnxjTh7bvx7o2KO9sMjTIF3OiKHtOyLJ8tV4TRMG2rNnZWQAwQZT+P8stWEVLvf2VAAAAAElFTkSuQmCC)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAfV0lEQVR4nD16x7JkSXKdi4i4IvWTVV2qewAYwAV/hDT+JxfcE2aEARiAxAxGDzCiu6tFqVdPpc68
KiLcnYt8wPKmpWXG9XA/fvwcx3/6w1sBJSUDMwRAABUkzjkjMiMZgZqORvVutxXJh2b3eHffNl0RSlAVVXYegC6unr18+UpsGIZYhJJ9MKAsg5mJqIgaGKiCGQCZiZkhgxmiEYAaghkQISIgoogAICIDgJmoiqqqKpiAApiaihqomiNiAEZQABAAA3VMYsbOmYKiEXHhw2a3TbE77Lbr1Tr2sSoqFTEz4FDWk2fX19P5ZEhHQ6qqmsDFmJVUJJMRARpA1mwACEhmAAgMCAYAgGCAiAinRwMEBDVVAcgiYqpmZmCmYqIGamCqaoZo6BCZzQDBQBhN7ekBiQysqmrRfDzsh6Fdr5bb1QoMvQumpgbs3eXl1cXFpXdBRH2oDFEFRKKB5RQNkIAQAdTAABEQAQABjJAAzNSQEEDx9FZmqmogkrJKVhMwMzNVEVU0UlMkBEZgYvCI6NBOv60EZGAAgEQIOKQ0mc66vu/7pm2P6+Vys1ojoAtORZBdUdRfvHo1no0kqyFU9RQA+tiKZgDLkgAMgBQNAIAR5BRfsFOszRjREBDMQNVOaaUiAqqSRVQBQVVMDRGRmJgYGQCNEQEQHSE61QRogGhIaIRGYKCmi8ViiKnvurZvH+/vN6uVd4HIgykw1fX4xes3IYwkq/NU15WqxhRTigCqakQECgqAjohYJYECGBg8XQKZIQIAqKqoqIomUVVTNTVVVVNj58iTIyQEAlMkIkJAQEJEAkVzWQWZTYEIGA0ZgqvKqj507Wa/7o6Hu9ubrmmD90RMjEbFZDp9/uwLdM4wF4UvvMuxH1JMksWMAMxMDMyQkFCBHUk0NCB2pzJQVVGVnFUtZ8mSRTOYmSiCGQIzO/JG5IgQENGIyDwDIuBTNgIQgToir6pIRiqCTpGfzef7/W5zWO6b3er2ru865xyRA0Bif3l1fX55ZWrsuAhBzWJKMUURBYBTRhCQAQCiqZJzhS9ijKAIAGAqaqZqIpKTiIie8l6f0gQZiIiQkIzInZKFDIkU8OnwhCqGyGbmDM0RgZkq+BAW87NP95/boWkPu/ubj6Lq2BExoEPm12++Go1GKaVQFoUPABjjIJoVTU8oiXSqSzNDIl8WYDYMAyMhB5GsqjlnE1HNeroIBTMjQmImRKJghExIBPQEo2ZggGBqCIRPJYGoYACOQA0IwFdVcX1xvt2smv64O2zvP300USZ27BTIhfDmq7+oRuM49KEqq6pKKeVhMDMDPIEcAKABISPaKcPFxLErQjhsOxGJOapZlqySQE1MkYidI2JiICIACs4r0QmXzJ6g9YS4iGCqRGhqBkAkbOoISRTqurqcLvrmcLe+Pex3j/cPKakjJkVlrkejl6/euLKQnMu6DCHEOMQYETBrMkNARCRAMwUzQyVmRIeiIDk1KeY8pJSziKhkzWjGxMF5IkLyRIiISMRESARgoIZGiGCgBgiABoqGiKCmZoqAogkQnCEX3j9fLLr2+PH+026/e7z7HJN6DgAmhLPJ5IuXL30IZraYzYXs2Bw0Z1FFMzFDAzAwBAMjJEJkZkDMmjWpSFZJSU61amaKSM4hoWMmIkR27Pgp1qdeYQZIBCRP8QcAAEUwExARAVMTiZJMk2NXvLl61vXH9483j8fdzafPklIIHtXAuaoe/cVXP8kZ1Gg0GdVFcbu+TaJgAGqCRohmwIhiisSMiEToOMYh52waU8qSsimmHJEceyJmBmJyjj0RCYERggGrGSigI2LNogTes2ZNWRABxSVok6Q0xBzj0Dexi5oVP656Vf3+5vvdcffpw4cUMxETESCNZ4vXr98YgAIuxrNQhOVupZIEANTQQBHAiIlM9dS5HTEgpJxSippzyoOJSlZAIkIiJkdPYXcOgEwNCAzxlCWOGcBSjk+kzHLMgwhkiX2TJA0px9T3ZIhAVKhndqby4fb94bi/vbkxUccMQERuMps/e/ESiE11Ph4T2Xa/MVMwOkGOmSEgABiYgnomUFAVeYL1JDlZ1iziXEBCImYi5xiZ/7M4n8gPmqkisoiYRpWsklPWGLu+67s+xdQGKrznqigqNhmG/X59eFh2Tes+PHzed+3t59shZQRySIY0nZ29fvNlVkWDV1fP2/646w5qBlnNAA0NDZFO4GAGRVEAQkpRc1TVGKOISBYzC6FkIkMkpuADIhiggZ7OjniKtIGZyGAmWeIwdG3bH5s2t13wrgzl+WIyNP393dvVah+7pt3tVY04g5HbH/bv3v0YYxLPLEDo/ah+/eVXIsJEL794EY/t5rAnR5azAqoCI6oiMaGBMRGhc3zcH1VVc8o5a5aUxDl2oUBCAAtFACImVlW1Jz6HCABmmhXURGLsu9h1x/12t8lJxvX49Yvr7rDdbW7/8Kt/u/9856gajceFx3FVpuQEY4rZfb79NAwdkfOKSOSr6i//6m9Ezfni2fm5xLRpd0iUogBAkuQwAAARGpB3DjyiQXdsRHKWnIfhRA5DEdg5IOe8KwpvACmLggGiIzIz1Sw5m6nmGFPsmuPhcIhxILJZVVPIXbP9v3/389XjY3BlcO752ZsMQz/kY2zUkiJJJgZyu/3eO2+Czrgcjb948zqpBuaX5xd97JfHbUwJ9QTNyoQApqduRRjKspMuNo2mlHNOKZ6asS8CEhM7F0JVFKAaJRMhM4EBqGZTBRDLkmJzOOx3m5xSVZWl9/vN+se3n5vdRnJE8vPxmeQ85H5oGzM0NAVh5xldCAhqzhFLFubCVfWrN1+y8wT44uwi5nS/eVQ0MwQwBFUFZgYEZAZE730fu35oNQ2SUsqCaoYYisIAiLkoS/YOTkT/RGPQACglNUl9PHRDt1nvUtufLyZhVrz/4fvbDx9i1xW+QEJw2Hdt6qOZKYNnRHDOI7FzLpzaBZo6ACuLUpFeffkaHYPaT169in33eXWviN6414xgWdSHkHIOrhBN4/G4aZq+a01zTjnmhGDMjopggMEX3gfHrutTGQIyEJDmZAPEPKTcd+1ht1we22Y8Hn/x4uyHt3/+9s9/Gvp+OpqOqjJ2/eF4aJtjqGpC9j6Iw1EoHDsDUQNTFMlogqYOiRTg1ZvXZVWA4OXZeRu7+93yxCV7zacJlZlUzXFwzl9Mz5arh75rU0oao4AikmP27IAohOI0JatpWRV1VXV9O6ReU8Qkbddsm81hvw0As9od1jd/97Ovu6api3o8O0+p32ya9tiwc5PpghwTkmPG4Fg1ptZMAJiI+GkQJce++OKLl+PJzBQuFmcGdrt8yGgqwkCniwIgA3DEZVlNx+PV6jH1ncSYYwQwA/Dee3ZIVJYlEokBMbrgTWTo2+a4z5JS1x92291+WwaaFfzw6f3XHz4OfSyKYjGd913/+HgLCsiuHI2KUDjniVBTBsmxjZAzO0IEpifmpeAJzZ1fXJ+dX+WUR/WIHT0sH8XMJCNhFiVDA3Dss+Tr59dVUfz47q1IartGFbIoExTViJCYqJ5Oc0qAdPonAhzSEIc+9e0wDI8PD+3x+Pxqvl/e/O63vx7aIYRqNp3F2N7f3yCSAZP3o/E4EIKYDHFAoZNUwRy
cJ3RIQMQKoCBkDKDu2bMv+q67Prsq6vJueYdMIEKIJkpACsbsVO1ice6J3373jWqMKapKN2TvixCCGhTOV2WtgIaAhMzsmZvDPqYu9t12s95ttyr55UX5x9/+7OMP74uicqEqR+H27lOKWoRAbKOq9mEEYH08gigTEwGjI3IuFKSARiduraZPiADgRHU6nc7H1efVoxhYygoZ4YSHyARAdDmbXkzGH25vMmgcYk5ZzepQknfsXPC+rsZZJWssqyJgMJPDYR37OAzddvtw3O9m5Wi7evzlz96ul+vpfE7ETdsuH9dodD6fMrMrXRxSjL1qZGTnSyQm7/kkXpgRo9np9Aksk5qhmKorQ/n67PJu+zBIxiyJ1AmqErAxkKC7mM4vRqP3tx8P/dG6TnJWTexq8kTg6lFd+XLImlVHZe1d0R2PbbdPaTjsV5vVtgw4K/23f/zl6v6eiM/PL7u23+12xFTXdVVVDCZZYtuLAhI7Xzr2zJ6IgByAmAoiZhHTQRVEkpmAkkJGBPfy4nJ33B/6FoAUgREzmYF59jHL5XhyNR7/8buvjaHrOyMzVfaTwoH54ryaVXWxbhtyblzUVSiW67Xkbc62fHzcrFZXZ6Oh2/z0p//baz2uF9W4+HhzMwzDeDwuyzIUIcVBckIzMAguoHPsvaMADpnx6biQ0pDikJgzMYNlUEUkNJOY8bub7fvtPbKzpEZgOWczQnQQFtPZ9WL+46cfdqlP3RCHmEGJXMHsQjmp5i+uLt4/3FdVVdcVMXz89E5lGNp+ubzvmq4urNs9/OqXP6/DPPggljbrbQghhFBVRQih6XowVcnBeceBnAdEdo5dMFOzmPq+7xtTMVUCRlIA0JwBVAAggZq6h/2aiGKMBA6yGkJAJyrkeTEZ3T7eroejJokxmmhB7HxJ3p/PL55Pxh83m/PzxUU1Xu43P3z6AJqH9rB+XMaun5R68+7rj+9vptV5KPx2u2q7OK5GVV2FIqQU22PD7ASorEZMzrlggMxARCopxS6lLsfBzJAAQMyyJMsq2g8AIERMICJuQNMkDCxigAAAAxi5UE2rH5efhqa3qDJEE0FmVxWM/sXZs6oqPuz2Ly7OR97dLh/utitTTYfm8fGOQEZl/O0vfpb2fT2Zt12z2W0c8nw0nU5mWfPQDwDg2CNREQI7j/jEt3OO8XgAFdAsKQKwqiCYSIqxlZzBuHYFIq2P7YdPq/V6507KjCNDVCAmLsYYXjy/+uaHbwUhxtj1LRoxY1nUHfn/+uLVYlS/vfn05tWbkcGmOz7sVlljf1x/+vBhzFAG/Ze//wfCMFlcbvabvovjejYajUajar9di2QmB+zU1LviJLAxk4qYDrHfoxIpKCg6kBQd0dC1otEHX5fzzbb9zXcfv39/u9sPADwqKyeS4T80XRSoy+LL62dtdxBJUUVyBkNkCMUYXfjLxcW4rG73my+ff+FFH5vN5+0yGey3q+3trbVrqfin/+9fJ9WsLMbLzTpmnc3nk/FYLT8u75xj9p4AmVmNRNKJmJtYTINDwaiMElMUhSHGQBxzGobc9Onz7c37T5vtrjMA511dTpGVkdxJqhMFIibFZ2fnJun+4Y4INEbJ6sj54IH99eLiej47DN31dM5Ey93mbnnX5rRZPx7Wj5XXfT787mffT0ZTH9xqswKgy6vzUV3t97sUM3Ng57z3kpL3LsZICGaKAjH1BtKnaDn3eYAnqU637fHz7ebdh8emSX0SQGZfIdpJsTbJ7IMzRUCPhAA4HteKerdeNTG2/aACBOzKgjksZrOLyWTfdbPRyJn2Md5vHw/DsTm224eHUQH/9pt/3iyPJdU5yma3Lcp6Op5VoVyv1qbGHJiLelZLTswu5QxEDKKSLZsMfcqDqipA1sG7cDjEf//j2093ezW108SPhpYB2RDMVFS984joiFjMEDAEl/Lw9sfvFSSnJ/ui8J68r3z91cV1l7rZdBZQhxhvHm+PXbvZbnfr1fWi/tk//u3m/jCui0G7/qiT2cVsMVfR1XIfXBEKT57q8Rg9D2CqllMiAIBkObbHhsgkiRoAQdvLb//w+3fv1qo0qcemmQBVlVTYO0COaUAiJMdEIupC6WJMznHf9yIJwJJIkmxgzMyFHxf16+trBJtUI0SLKd7vVqtmv91tj6vN2Odf/vPfx6arx3XXt73g2flVVY1iHOIQQ+Gc874oq/GIHJmJiUgaCCQNvapIjJozQM7Gu6Z//+Hh2+8/D9G8LwAspQwMSRMampixqgoSq2amQOxTHFzXNSoah0FVzUAkS4qmGZ0ryhEivrx6NinKkwDRD936sN8fm+bYHDYPJQ9/+s1v9ptDWRT7ZqtYjCaz2XzeNAfJiZ33ZV0WZVWO5rPZev3QtYecOrAkKUkaTgKbY942/bcfbt6/Ww69GVIICIYAqCRgiEhmSowEiIyAaMDec1EWBuZSzJKSARCxmKoIGhBiUdSO+OWz55OyQgQDa2N/+/jQS9odNrvl/aIq/vWff9o1cXE22262xPWoHp1dXG5Wj4zkXFEUdRhPptPJYjZdPz5uVvcKQ6CiaZscDxKFkSaz0Z++u/nZL75NCYmIGAxITc2AiByyqiIiOGcKRGSgxC5rqsqSiMzMpRSfdBlVAEMzIPShQnRX87OL8YTAzOw4dJ9Xj+3Qtd3+/v5Tifnff/Fr6YfC+eX6QYWranp9eXl3d+e9d64I1agejefn5yr9/e371eN9jA0THLqD5aiSDLEX+5d//M2375bEpScEUFEjBgRERgRzzEZkRGqGhAhG5ESNiNixmBmAO2lsJ4GN6PQtRHJEdDabMRgAHNvmZnXfDP2xPW5Xd0H1+2/+sF0/OAh9GiTR4vxsMplu1mvngg9lPZpW4/ri6mLou/XDzXZ5m2MEwKSiKZqmmPXdx+2f3t40TaqLkUhWADMgJDNAQgQMoThZfqrKxJY1gQZyKuKInHN9P5iZcz6oKv2n0Id4usu6HgE7BWyH9n67boauaY/3tzcV6Me3v1/dPSiEtmsAw8XlF2UVNtsVoivrST0anV+cTaejzW51++FDc1iRCpnlJDFGZGt7/ddffff5bkvB++BURQSQmMgICQmd95pzWZaH5gAG9KS9AujJUAZiZudybhDAEbMB4JNKhkQMqrPxLCkcY98x3N/f5xxjjqvHe0rx8eHj5vGBiXLWLsWzxeLifH7z+XMoavLlxcVlVZfPrs4/vPv+06f3fXvw3mXRHNPQtdWkvLuPf/t/fgns2HkSk2xGzigRZEeBHAJizhnUmsMRCYgoi5IjTZmImHmIqaxDnwZTZWZHxHRymREJOfhQhHC+OLt7fOxT2q02atoM3Wa9TN0xHvbvf/izqUPLm/Xj9Ozy6urq5vNHdoULo/nZYjIbF46/++brzepBYwzBpxhzliFGYP73P938+nffc1WaAehp+lQi8X6UkxDDSXRAU3b8JE+LAFEWIccoerJSvA85p5Mx66qySCkioHNFVYbCubaPd8ulK0LT7FMcyCwP/fGwoaF5+6dfoTIA7I/9/Pz5xfXF43pJHOp6PJpOf/
LVT/p0WN7e7nbLlDoDAU0yDA6hB/r173989/HBF+VJ1GXvkZ7MLgBlp4ak6MCAySURCh5VAAgNmByaAbtsTyJI38SqHvXN8eTBIiIq2P7YeKYkWo9GwfHm2ILmrmvX9w8jR3/45o+pG0JZLZf70Xjy4uWL1XIpCvVoPJpMnz+/RpT1/ef7z59y6mLqm+boFVzwUfnv/+m3m0NP5MDMCAHRh5M7SicvzJAQEAxM1ZCYSFMGRCJS0xNCsaOc1XuPCEOKhFBWlTvJVUkSATA7sezLIoSw36ws57Y7blePKM3Xf/w9SZ6MZ/fLx9n88vzybLtbo5Hzvh6NL66u+q65+fBd1+xUBlTrD10esgvY9PQ//9c/+Ko6nZOYfVmoinfOkHJOCECICKSmbCaIqkJIBJhydsyG6JgtCxKqpMloBoRE2HWtQ3KACGZEjEiMZAZlUXRNI6pDbA/7XezbdvO5O66llcOx5VDUozqnmFMu/Kiez+aLxdC3+81yaI8xZs15v1pJSp5506d/+Onvi/EoiRLwCeTM1Ht/8kZUlRFF5T9MSEEAJEA0FXXMCMBEknMoCrUMAN77FNPpc1M77bc8OdSoOh9P4xC7votD2zXNev2Y2u0P334jfY5DUoO6GnuHQ9cRuqoeF2XhPR526+awS8NgadhvVqaRy+LT5vDTn3+bDGNSsywiAIQIDEiAMQ45JXzaO0BGgBP7QgREQyDv4Mn/USI6sUtFc8F1XctIiObcaZxQRQRRvb64CMRD1+YcU0qrzSowfPrx69yqw6JtGu+886FttqbZkWfHYLZZPnbHvUlKKR63jyQZMdyu4u+/2xaj+Xh+XpSFqVdABc1Zc0qSs2NnZiftCsHoaaclE8KorsizmKABEwMAMYsIABA7QJScQNV7BgNHwQdh7fNoOvbO3TzeoyPLsu83NsRhe3fYLIuqfHhcjhaLcVWnGA0QC1/UhVnMMR3bxlKnfR66nYmAo21H73fh2av/UjnXmo6b7f27b/pDS4ZqOalzQFVZ910riMoOVECN0AliJEtdZwoGBF5zToCY0yBkBCE4SzkCFmCDZk4grg5lczyCp4vx+PPjvVk2k+bY7B/22q0+fP89Q9G0Q6hH1XiUNBkiGJVlGYqiaw6AiMMg2h+HfWx79rza4u1ne3ZxOb+azWb1fn38tA7d0PXxBxVFYxRQkaY5ngwbE2Hi4BkRUzRGVMjEmEXQEBlzFkIqXEgpTWfjkyUIeFpQYCJTMVtMZxZTn3vR3Pf9bruZBvf46cf2sE9Rmy6X1bgo65TFAMq6LooKARCh6/fJcn/s8nFfkMu70K3r6/PXl+fn588vrl+en11OJmUxP7u+fvkK2SEimqYYBcCQzIAMGUlVVTIzBnaeKXhiOlmBhqcVCmA1PVss+m4gRiZGJgSjw3abUjqvR9v+CGCac9ccZeiPu48PNw/z+ezQHEfj+mJx0e3bgKGsquBDzjmmYbddlYbtYRuHDoCOg99tR/Py7MXl1JfIpc+Mi+dn82kxCWE+v5osZhmEGE/DlAE6Qnfa/WFEPnU1NYQs2TtiIgAkIgBERGaan827vjfNgKiq7Bz98P0PP3n1WkU27UFy7rtu9XiHcf+7X/+Ld8Vmu0fnpotZe9w7IOfKqqyQLKd+6I7eYZfS0B4lRbAZN7O/ufiqKqj1kCuMqRkkZ82zq9l8UoQQLl+/KadTRTDUPPQIYKoG+bQRGBwTAJgG7xGR2CmCIYkomCJaVVWO4XxxFoIz0dM70LffvJ26sD7u0UxyPh4PhbOHj99T9kBy2Dfj8aQoiiwR2ajyjjkPA4L0XQNmTbvOaqCFbqspzicVuEk45DbFlPoU8wDOz88X07PRtCqqanb14ktlr6pMkOOAAABPOjQBBHZVCAjmfaECpzWQwJ6ZFfJ0OlmvVirChM4xEQEAPdx8ymbHoWXEIebYNd367v7m02i6aLp2PJ5MptPdZmOAvi6D4156Lih1HYnFtoO+I/aks79evJ6N+c/dspGofWsx9n2f2hxNpeB6Mb68mAa1yXR+/fxlUkDV1A8iggaMrNkcMzo1ywUjgHomEkDySRUIU7LxaPz4sB8Obc4CRKe9EXr/7sd3N59FdYjDcb9hG95+/UdHnhkA3Hg8k5wBkL0rqhLMDCwOIqo5DV3fGQdtsdvjJui20A2m0uBZqOOx6Y7H/fG43qyT5snZnEaeCyNLi8vL2dlFL4JkKWdml1WLsjDEuqxdKMQgMItJWZSQc0Akxj5H9m7f9RqcFX6QjB4lRnKl/+2vfi0Kfd+mZtPv7+LxGFy4vf0cQlXVo77rwWA0GpNCjhHEYn9UTVmzWOqHgvJ8Ol78ebv6uN9/UVSd9LfdNoGg87v22A+xbVMxGhdn0/H5qCx9Nnj+4hXWY0SWlGJO3nsx9d6lIRmgARbkRkUhaXCO1JQRy+BPLCODiaqKgEFwni6fX//iFz9PMW7XD96Gb37/u7qqY06+qCazRTPsASAUJbPLkoLn2LegWVKf+o7U2b48DmGvXY4RzAqzrj9uuuN2aFb7bde2fR+TWpd1uljU07qYVLPppJ7Pf/IXf0XBI0FKg/dcloHRiBnUCucQ1KlWBbtASuB9MS5HkpMHpCQeqSpLT6wINJpOv/3m69162ezX77/+U+77qh7vm0MIFTHG1Dnvq6oCAMm579ucGk2cht5M+2N9MX11tOE+NmWBvaTvtg/L3HWaujgMKH0fhyEemvbYDy6Ui6trP6mcIyV7dnV1fv0sqcQ4pKEjAAIacmTHBGBEzrE5dIyoRshnV+f7riF2TNTHoctxOp4OIhSqcrl8/Pz+B2c5t41mvX94AKTFYtG1R6bSFd6HMAw9gEnsIatKFoVjTwkW69gr27FNm/3hGNvDvtnH2PT988UFxCTZumO3X2+TCYxqqEbFaDKaj53Dx9XqxasvJ+eXAtAc27qedEl8qIAchFKRkmP2hRGVVYnOFuczQABGV4ZQFimmh4elAyLHvijddrX8+MN364eH6Xw29HEyPssipilwGYpSRBAh9l3sOxPL+RgFj0PZA+25J7OJK7pBNA2ZAAxmi8U+9W0cxHToO1NdLlcP6814fnZ1/VwcBu8unl3FnF+8/nI0nsUst5/vU4acUh9THwdDimKmkCRjVdbTceURFZBgkOTILcZTYtKciBHHs/KH735wAhBzn3MoR0VZdV1HHELBjgs0NYlpaEE0WSSGfZsaY2ETy6pKbIK5ie2QWnRgprvNlgQgKSL1bZ+bmNqcs/J0gnUtTG13DKWfTWevv/wKALf7rYp6coQ4dsj4BERm5l0xHp1ttrsuxdPeV+xj38eyqkJVkBkszq42691oPJktFpvVYTweE6GKelewK8FAJbfdMefEzKAas3xetX0Xt816uz8aUts3Occskgoe0HbNcUiZ2JFiCFXuUt8M+/VWAUfj6XQ8ef3q9X/7H/+9qHxKcTwev37zZRRph46DK5hHwTuE4JAAxkVp2oHkQy9IDIRgYFk05912X1Wj/w+QDR/cDuLyVwAAAABJRU5ErkJggg==)![](data:ima
ge/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAaiElEQVR4nF16WZNdWVbe96219zl3zDlTUqpUJdWgUnXR9AC4CXCHTQQv/CJezJtfeCEI29gO4/AAQdiGpsEdmAhswhhoummwm4ai6Ka6mpJKc2Yqx5t57z1n77X8sM+5EqSUyntT5+5hrW99a+TP/uZDdyfp7oQQJOFu7iYUOgEAXp4xlldGEC4CwmEE6eJwECABB1DWhDtANwAUGNyA7CAoENBXKwOAQ50ADe5w0kgte6P/ojtAEgbQQSDQyqYACIeje7qcw+EOOkmCAN0J0GndxwFAHOy2MUD71crF+4fo1i3tQiEoZWk4CHcAZQd/ZXO4uwCE9YvBWX5SURZAEMAh5aT9YxCHAyxbOrQ7vBV9GAmnAI4i5ZV8+PcEULbqFqYDBNn9Dwm6O4VAURPYfYribv0bt+6Z8v7lKxgAOgNAev9LAk64gSQ6RTqZAenkWAD2ciEHiwbKt7MDnXsHJu8wRXaLsNNsEY51EIW7s4i4/1MOSVmpo+xpAoJwc9CJUHZ3OHslukPY4xhkwSoA8hU0+kq0BVQQt7Lj6gl3EErSpTtjL2jQuyuwaMCLdLnCMJ39rbuzvALLImgHQA8vddKZVA9dl/5q3gMQJNmLtDNvuHVLlXu8xD17QXMlAV/tDwhfQq83aLAYC1G2cvbKdO8AQVIKQ4g7nEEcTni58QppDkAIe2XjXtMri+/R4DQ4CCmSKjIubwqaOtPBS7SA5bJFJ+jMCQSs3LuIA4C7dVuVG4J0d3d0tILgLLKUzixkJdtOEIXnBHC69zYqHYq6fzMKs1K8Q0F/T+8u+ioVdkLxclfxzrh9tTAdRXYdrKVoacXowg6/Xj7Qry39Q97/7VEC78EAp73kSe9xRHQGQJBC9pbqbkUp5IpeffXxlzu5vbyhd4TRw9H4cgUCQnarkRD2NsCO4Lxb86W02EOyXCeUOxPeWwnclczFmM1XnyRIKDs0OHuKKQAo1+h+WQgJLnTArDjB4iBXDN6LHySd7lbcXwBNIB04+BK53TGcoKCotEgGVjDoRIZLz6m96XQ20BlJsfnyiSKFTlikw15htV57hQPZ8QUETrq9oj2Sgh42dAQBO8ru0OMratP+0s4ealL8fDE/cWLFUNqJlCsGdncvTF6QTHiP3cKCsnK8vXAM0plksTd20QG7Z1cHIalwB3zlB9Bzb3nlK8fnxaOtxLnat/AISZeVVbj3DrfzZB3x937GrbckUroH4XCGwpTSExY7FigA8N5gALqIw8VKlIVgL12J09H5PZdCdAYnoC/jDKcLQIc5XDrPLb0jgJNWbtDROAk6M+Gv+EFCQBUBQDE3iigIc5DWOyMqFYQbpHgGB6wQrYGwXDgkvGKvL0OZ/kgUEOJFCUV8KydtL4/Uo3flujpzJcwdRnF3wFmMzdwhZrmhi0NBWm4zMgFCS/BU7iAkOlEJAClu2XPht+JPugvIS8qk9cAp3oP9q946vXCtuIBmPewKLXNFzK/4LivgdJSbuJEQt+Sw7BkEPDObUkyzdyEHCeRibOZGKbRR9s4OugtJIEjvDVZGwN51dvbkHet6Zw69ulY49ZcWUyzA2VkgC3RWl3IAZpJAJSuawVqQTklkdtOU/B8EHEBYxW6gunREpNEtu1so4HgJoyLcPmIssu1o/GWusgrDeq2ht+MVMXrnPsTpmYRDvEuVshsakaRQ1cgQQ11RlBRCKQRoJTwtesjJ3N2yW5NTm3Iyc+RLd4pogZB1D/cSlT7W6AOBzmv0kC++vqDH2EfUr3BFr5gubHFHStYYtK4m4/GuhtrQpma+WJ6fnz05PX58eXq8uDq/Wl6lnChSxSiqZibU0WR9UI/H4/XpZGM82azH24N6PEBM5stmGUgWxuuO+NKmux/uDlkJ3eGgO6gQgWc6teR4JFbJYa8Ah0PgIYhMa4lte3l8+ujJ0//z/Mn3Dz/53vHjTy+PntrFJXOHyVd9KLoIlyWNNbhH1+Gw3tgcrm9v33h7b//23v5bodg6HZ237mMyQ2bhQIfnYoMUUes4LmgVc04krE0qIkLPbi4GJ1UVJJy5befnh48f3v/us/sfPP3BX58+ue8XV1E1hjCMcXuwNty6Xg8GGiTGKIEQYdGeeVANqoLcNu1ysWybdjFfNG07f/joxSefPs95idSxkOGVMLnL6LpVCmhAulu2giW6tdaYUdVAkZwtJGRKrKpIz9bO5ueHB3/38JO/fPLdvzj5wUdXZ8cDYaV6fTRZ27sZBpF1FLKk+YAvUl7kFhnmls0MTlCl8zSxqsJ4FKZhI4RK1HJ2OFJeXM6C9eFPzyQl0vKeYnpNAhQxs0L3BjF38SQuqUmEZ9Gqrps0P3j6gwcff/vTD//v848+yLPTQVUNhuONazeGw0FVqQPZsXS3xVxFlAK4kgOnUB2ZoqZi7hRKsZ6cQ0piGWjaVpPA3AATCXE8Cr3r7niH6HOwYsPW26ODShICEWdHvSkvkQYMoYqHlwdPP/yDv/vwmw+/98Hi6FmErW2uV1s3SQ0a66iVBohk98XVzHNWkgoqNWiMARmEUUNx4qSoCNzNDF3u79myisKszW7OlLMFBsFL2u0Jtc83V6EaqEI6zCBdkEHPXqnUo+lhc/7db/+Pj/7gay8++VvMz9fG0/WdLa9reK6CrI+nYTCet+3V1dV89mIUQx0Z66quRqK6zHneLq+aZdu0KaWUzbJZyiUdh5nDtIqDuq6rqopxOAjVcDAQGWqEpZzbwFed0as81B2+pA+ezZygEjDPFhhHo7ULXn7z2//9W1/7lfOHn+LibHdvb7h7x20ZcjsNg+HGzmWbnp+f2fGLSRXWJ5Pt6fUF/Gx2ejy/vDg4yvNFatqck7t7NqE4pc+DHYCAUpwC3ekQARg0SmA9jpON6eb6eihlM/97Tgh9dMYu9XaICEiz3HoejoYe9Fsf/sE3fuuXX/zNX42bdOfGfr5xvclLb+c7uzvVdPL4xYvTTz6aZNvf3pvs758tFg8Oj05OjpvZ3LMHCVAEEQVFgqhkMboR1tXyQLOc3TOBAKWSDBR3C8EEwMyPjh89aT8N2aXjYHcaPRDiagI30aq1RpSU2t3avFSL440bn55+93d/7ecf/dHvr3v83Nt3dWft4PnzMF/cuXEjbEw/PXh29fEn+9ONz7z3xaPFxfcfPjj8+GOfLxRa1zWiMLhnQ0IjAF1cxIx0VY1Ud4dKY5kWolHALGY5C5hTBti45TaL0CWGofCf/ebj3su6qwskZEmeqZLJZDmGqkL01IZBPVf/xu//yp/+xr+L5xc3b74Wr21fXF7o7HJja4PD+mw2S7PF/vr2cDq5f/D08aPHVxcXIVQDBgozmVB8PEQgTBEiFKP3gYBIULp7Ngc0xJxyTskdohQtMUuXUWWFZHryl+E0CEUQIImJB0sZ0gQTusvAciUfffSd//WVXzz78M82p+ub9+45fHF8ujudyo3187OzfHYxGo/02u7Hz58dffcDX7SM1WCy5m3KZpEqw
aMgGoIIKG0WE9cQK0LhgCeYKdVJGgAqQBidVBIiXd5bV5VlK0VVBgmd7Ev8aAJEWANNRpghOKLq6eXZn//xV7/12/95aumNN97CtF7OF6NYTzc25qlZnl5Yk5x8dvTi5HsfWZPCoPJhraCnVlRECVhwCRIkCAEVUbXWcsqtg1VdVzGaZcCVMpysAa4huPvFxcwNIM2tJN2imkWzi4q7taFEZoX/SWS0ULYpO01FGKqPn374jd/9T4ff/sbecDzZvIFIb9IwVA5fpsViuTw6O7s4vcjLJd1jFTGIhEd3ijOSTrKiUgFRaWHiHswH5NZwNB2Ph+PReDJeW5+OYq0QrUI9rEMMKupm89msXc4dnM8XV4vF5dU8pbxcNpfzy3nTzI0BrxQLnDAxSQ5xyW0Yb3/w4de/+ev/8urpp7t7O0FEg6HS5WVT16PW0tHRyez4rF02CpEQ6U63KFRR5CxQoYiqMpq1TGkUq+3Njf2d7Z31zcFkbTCZxNEwqFYxVoNaVLIRQhf3nGFO4ab7IFA0uFtK+eL8vE1tu2zb+XI+O59dnIeunOmrYr20XA6simu7f/jNr/zZf/lXMjveu3EruU7Wh5fzE7/AaDQ+Pj49OTlbpiXcJapmNzdTwj1mp4mrWgie3JtmVNve1va1/f3rr782Hk+0irGOUQNEE3CZ04tmPp+dNk1zcnwym10gZaSc2jbTW7OqqjY21tcmk821tf1rezGn6VZVQZp2vlwugjIY8yBHFxg0MddktbP7td/5t3/9X39pSG689gaAvZ3tg8ODjdF6PR189P2/PT+bEWJiDB4oNFGRTM9mUG2JnFppFrd2dt6+fWf3xmvD9alFZPhCdblYzo6Pz85eXJ5fLOdX7dWcqa0pk+GgUt+qqvF4WA9q0TFEU86zq7OLZ4/n9cgna3/4e/9zc3eHIQ43d1+7sXNrZ5s/99WnGqqEZUieVIYWB5sbX/3Kz//5r/2Hva2NwbVNmy82966/eH64v3ttxvZ73/krNQR6zk6IijAkIQB1iqd2uZyvMbz52hvvvP/+aG8z05O1KdnZfPH85PjF4VOZzcfC9Um9t7n1+rX9za2NycZUh7VUFaQqWVhJTSHqzgwenBx9/U/+eHdr40ufeW/25Nnp0cHD46OnB2dHyzZ48oaLyi2bVZDq2u5XfvOX/uxX/82be3vyxuvN06eT/f2DJ89uv3br8emL+3/xwWRjzWBJrPTPkrt4pNDRYrnYHkzuvPP2a+/drTbX08IW0JPZ7PGTp88e3c/Hx29ubv7UO3duffnO2u5ONVq3IEnYpDQ3QybaJGgp2jnWLpd1b5sp9bO33oxVxfHazhdv7Ab9vNZIi8vnT/lzX31CE8NSQA7X/uRPf/tPfuUX9gbDtf0bx8eHW+t7V+cnm3s7Dw+enNx/HEYD5hSpBm9pNeiAaNXmNInyubvvvnHv7gJ5edmoxOeXpw+ePLh6/vzGcPzeO++89dZbG/vX21Atzb1NwbKIioqqKkuny1xMNZKCVSHXs6UESl0PzL1tk6XkZg6DiFYakuVaYuOoYvir7/zh//tv/37qHG1tzs5O9javzy7Pqsng/oO/uzo8jqOKKSlgnlRVqeICS+Lzd2+/+UOf/9G4Nj0+OVpcXT19cfDpo8fj1j7z5p23f+bHdl7b13qQnGcpVUiBQFSVqBS6Q5iRGxjM0UCkJRVeKpKlsCNwtJdzVYkxCilKodLh9ABKi6tK6x88+Juv/8a/kNnx6Pq1Zr4YrY9OZoejun5xdHB+dCx1kJwhzNmDhlJrEfru3u6779zduf36Fezxk0dPnzw9Pno6Ab/8zr2337073dlyoUGZGYNUg1qgcIOzmS+XMAYhXUAlAsmgpJQEXVULuWdDzjkEBdCmJEIzgCYiQTUEz8J4fHn4R7/+r9tH96f7N8x8OKzPTy4ma4Nnz59fHJ4gBGR3ZxViQnIJ8DwcVG/feuO1O29ic/L45OT544fnL15MNX7pvfdvvXVnvLGpohmIEqOoiGTPObshAxR40zbZ0tZku4pRSCVAWldKKp2grgAEqrtJiSZgXWXQkN3dLECCT9f+96/+86O//vO9Gzddq9Ggvpidb29tPTp4fn50IiLm2bIpQ5vyoK5Tu9iarr9x792tvWut6JMHnz6//0mEvfvmm3fuvDnZ2BANAg0aRWhubTMnRVRDiELGEFSlbZs8L9WAYKUTXkpqwpclNRcHKCXgd0PpvoqQph6dcA/b12/81td++eOv/9725qYNqqqu583y2t7ug8dPzg8P61i3OZeiCWGhrhbL5c3dnbfu3eNwdNEsHn764Ozpk/dff+POvXuTnZ0QK41RRUBv2obuMcQYKoioqgYtcjV3c5YCjZsJJbvRTVaZFSHseptY1f3NSwvMStZIODx8+/vf+tZ//MVJHbk2FldP7WQyeXx4NDt8rpTkRoKZFK/G1Xx2+db+7bc+/36b/Sqnjz/4Tn05/6mf+entnWtjxhxVY3BguVhQMBwOqxCDBkfpFTBnBBKhwzhERISi3ewAygQD+uhGuroUrOtlrTwE+qYKJfzOL/ysXlxM3n0jNUmGrEK8vJwd3n8Qx0M0yQUOhlonWl2dXLzzzt23v/DZxdV8uVw++Oijt67f+uxn3+O0RhSNQwhyTg7W9WBQVxSqUkSyE+6iEqhCU1VS6DCzAmdz0k24ErWbgTTKqlLF0uZ62SDzLj0Ipx//7bXbt32ZQ11l5ob67OFDHURvkgNVBkVU9HQ2e//eD937/A8/Pz6Yzy+f3X/wY5//wvV33hpoNaxiCt40qVmk8XA0rCulgK6iQnF/papHM3exDHHSBSZwwoLGvmBOCASyajGUdnDfeFmVEMvkBgiG/ZuvS4yL+dw1r2F8fPjMU+4pQRI0MrfLfPfO3d277zw+PV5czc4PD7/85S9vX78OOoMsM9qr5XBUjTZHgiikqpT8o0hKHZBSMXMRFDEKOw04kHN2WAnsu2aUezFoIWRVcHRaBoBi5+WWod5cn50eTQaT2dXFuTezs/OgtNy6VgBE3TzcuHl97+6b86uLs5MjLOb/+Es/Md3dyWBV1+7mtlybjlAPRFTgUkzTS+FX6K6lE9glrnSIdK13OiVTlNo13jwXD9aVRFwc7KqGfeFQyrV6kwiXp2fTtfHlrFGNh0dHQG6zwzW6000YNrd3bt5583J2evDkyTQMvvSTPzHd2gQ0QqxpWYe1yaZAWoEKLZsBQVSoztIicIcpVTRAmC1neM6W3Rx+dXlJEVLcslkmKSJCUqRkySToJiJ9r6QDUQxRRVUkVINq2aKK9enxMZKVPpRTTCRIno4nt26/nYTPH9yXbJ/76X8ap1vJIJ5EtK4HsaqySSpzO3BSVIXO7FZKYaLqkGSe22U2S7mFGSyZtaNBrY4gwqBVqGCgqPQIW42sAF1xH4SbFbPP5ik3LRAYKs02b67mlzMRmLlDIVBhkMEbb9+r18af3P9+jMMv/OSPjbe23JGIUayqWMegxXOqEJZLPROA0QmISIZfLebtcpHbFDVUg7pSxrpWGUKgUKEYnKJRyniT
rAaO+mYuusYwSFCjrsrw5cmQ2jyp6scvHiCIZxMqVYJKu2jufvZz67t7j588wuXy3o/8yMa1m9EkeRoNxzEEXXWtKe65jKc5IRRVbdLy5GLm2aqgw6qSwZCBVCGhlDJfUvAQNDhgVginKxB2R+w6JE6ups+kNFRLU4hgWB8OHj19YjlTo3gAswrRLHd3rl27ffv45MXFwbN7771/7eYttnD4dH1aLMnpVJHeQt0sxEDhbDmfnZ8puLW+ESaVqMBNsos7EhzQUFoHXTkNlnuOl25QpLP5MnjRNb4oLDcptWehFE2FZrmYzc6HEnKp+Ik627X17S/+oy89PT169OCT26/dunX3XXVtYYOtqSVTEZSOhnvKGZBAxCoumsXJyWmA7Gxv1sO6lGnVCREXW+bsZB2CiCK7aFe1F1EhzczNeq7qezUrsum/rB/xMrcSNYXHB89CUMskHcyWnSG894UvzZI9+/T+aDB854d/pGHUttnYWL+0FCVUDFnpTglCsgqVez45edEu59sbG5PpWmpTXuSqrqz0fj3DvA6Rwhae6RKkmzETcbdkTgpFYZn8B70mAtZbdum492MHFAKyXC4JbwRZkCWTduv1NzGcHJ4enR88/8y7n41xYo2FyahB3smRlKZiDQHczCg8Oz9/9uRJUL2xvz8ejrxNgUKVxpLRHQYlorSw1hMBMYcZRUB1h7mLBJDZUpm7yZZTzjlns9JfFzOawQ05w7uBJ7i7mYW6EuQcu85yHgzXNq/tXZ4dfP97f/mZt94frm2lZj4cVIEqIpe0AGH2Fi6ibU5nF8eS0vWdrVgPUs4ACsJdHELSpOs7k64UL8VQCo1QJxzZXcXhJu7mECmXdhGhlGlU6dlCHakMba7m+YK6NDRxUWWb0vb+vgR9/OT+hPXdz32xTcY6hmqgZXJK6EpzTwHL5RWuFhvDwXBjM9EaS2WQqtuwNLf7tji6eLNv/7uLm0My3AOXlmllOIbuLiLajy6YGWDmSO4557ZZtpZTTgLWdTUYDoJTxFRVk+XJ2sZ4bXO5mJ88e/bjP/5PwnCal4vhcESnK6EiMDFQeTmb0Wx9bRrrsLCkygDpWpndNBDoVkZSuimEPpg08+R5ia6DyJalm5QdDAJDWibLObWpbdvUNp5TahPBxXxuZstlY5bdsqiGKgQDK0aHG7ixvRer6sXzR9c3967ffS9dNWsb60QgPFCQMxQOPz89G8Q4Ho8RpXGL7MIG71q2TG50xqiF10VIR065bVMZRQXZtE0zX6xP1wZ1pSHM5rPz2cV8vkhXC0kWNVR1xRiGdT0YjUGK9iolSbTN0pZtu1gGmFPUU6pinK5v5NQsZ1c/+qM/ucyoqjCoB6SS2c2qIC44PjpaG08Go2EWqDM4lATc3Atk+xHGjkdSSik1dBGRELRScbqKqPGqncUYRfjo/v3nB8/b5XIYqs3Nza1rO4PhMBPVcLBIrcOiKh2zy9mwHlRV1bZpNFwbj8dt0wZaMfE8nW7Xg+GL44P13Z31/ZuzZVNvrZEKN5Km3iAfPXx6Y2+vGg2zmVIVApgL3V0dzOakBIoIc17O25STKEMIMVQqWiacVagakprG2Fj70YffPTs4mgxGe7vXd67tal0tLC0raZfN5fHx+nSNiLXWmbY21dLtjVE1ajaDMBgR3Ru39b0dS+2yWbx+93NLx7AOGqpsVMKFAhw+P9i5sSdVnVImxc1NnIAbSg9TRaDS5ry8XNDSaDAcj4ZlwKh0ScsYkmUHDA6YffTh3yzPZ5sbGzdvvTbe3DDCU746Pp+MJ9f2dk/OTluz5XJxfH5qlnNKJMw8asxkDKGqqkCXLKiGk82t67Ozw1G9tj7ZzRlVFaxtXEMt0XM6ODzY3d2rNTJ7iXYFdMustDUDEFWz+9XlBbOPRoPhcK31tk05QkXEnRSBUKRMzqhHPnz0eHZ6fOeNO7def30wGc0uZ7Oz86quXrv9Rsr58dPHqUluphRVHdV1HE9Gw6G5tU1LYZvatm3/PwisiKGKhlZsAAAAAElFTkSuQmCC)![](data:image/PNG;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAj50lEQVR4nDW6V4ylWZYWuszevzn+nPAuIzMi0vuqyvKmu+mpnu6he5phGAaBBE+AkBghQOI+3BdernhBAgTc+3B1BUhDD9DDMA662kx3edNlsip9ZmRGRkRGZLgTJ04c87u91+Ih+r7ut629vvW5jS8uTGeCDqhA9ohpXnSzTADL5TIbFPUg6gHBEBs2bIiIWY0RBGAKCAkRkAANA7OSRbReMhNByYbWAVuoVOIgiCSkPK88e+X5q+efv3TuimUyBphAQcHnBIEC5IgOHAijQ6MOKXDeedTMiXrv+gc3Pvv5W3/xs4uvfv/X3/xOnmQuy0ziyXnwiMpeVJx4RDXMiABAgACEpIqohoRJiYEIvCqhIUBCZQPMqoyCJACgSqohMbJymQl5kOcmCJt29uyLb7zx8qvjjbplFQ9ehIlIyQsBo/dKoCSoIs4VSBZRPXinvsj7e+uPlm99tvzoxu7B+vLDLz//ojUxPlertUwO6FUAFAFBUbwCEBGLiIA6ECI2QAbYAgIqs5IlAUQ0oAwIBEAESkAGyIqxPs88ibVgEcEYQ1GVS9NXr/3GN994qVqLvIJHsKzeCYgBAA8M6lSJgdSjB0HSQoG18CrOqUv6D2+++6M/+u8SlTmynY27P/rj3ede/LXnX3nNZAJKhECoJEBAHsQDoCqIiAcVFUBCJRBgUVJRr4CA5AiVmIgRCMCSrYZhOQxD0zvkYWdIIM4DB8xcO336tde/9mqjGgJ6g6BAHgDZgoiiCEj/YHd15UFsRqr1Vi9NSrVGo6kERkG8F69Bc+rk1NnLu91+uVK2QalcHqtW65A5k3pkQkIgAFF14J0KqjAwIjEisyUEAQEhVPQeVMEYQhAkZWYkVFABC8wmjG1kQwdJlkoAFBgX1EcbC19/6dVWmRXQggFQAQDAvHBkEVSKw/3rH71z78YDY1saehM14nDk8tX5anO03hrhoKAinlm8cKazaVbWFk5d3tp5Ojq6cPb8BXRicnEBgiEyiLlilrtCFElJFETIsmVVFe8FwKASARIZMEYwV0BV9F69cI6AKpVABTMpeqAefGihjDhx5eKL87NNImYAASFFREwO9m7euVetlEOLG49vvfvuT5v1c5n0P/rl29de+MbNW19sbkzPLS2dPrXUHB1jKg/2+wfbO1Ojk0tz82n36cH2dn8wDCNj1CERWiZiIq8qSke4BVQRVhVxqsDEhgDBExEZVARgo4TAxKCqiOoZnBU33O/ub7ZFkcKkbEqN0ZFnz50MrSEgQY+gB0/Xd9o7myv3by0/KqToH/YaU1MZBTcfvV+v1kwRPnn4oNvbRciE4N6dG0uXrpw+/Xy5MjZ+7AwHle3d4ejkyRrFRS46PDBIYAwxIRPmXhFRVZkwsCwCREhIokCAxMzk2QBbp+gBiTBENKxqQAMy0ped7DDL/NATURCKDbA+Whmf
HWuETKQC4MTz9Xffvr+yfH998/nX39zqLH/y6d2/du1bTpOv7n7Y8ceunH7OHQ6Gfjjo9NfylVR5aFbVj16aK330P/+43fWV2ngi3bmzFxa+Prezdt0wgzEYBqwKhEBIASMb8uJUlYABgAkIPKIgEyAgOGaDYBjIkBf0AqQYqIjkBSOxMV7JI6c+bY3ViVkAEEXF7G0/urW6vnjxjTvrf/D227/Io34lwIeffry5fQNqM5PnXv+t3/2b9SC/f/+Lz9/76O6jR7mPD/afbu0sL82dLE0srSZbrbmZvW0/yKuCsHWwY5DAMIaWVTEXLJxDpqIoEJGZAdCLEBASAaAqiQcVQc0IPaNRMWCZQgQsvBMSAHBWgaAIwhCDYmJuBA0LKAIlg8P3PvyzQf/el1+tcambdbFWG6k3J05fuNh590He3jlce/TH//n3jx+fn1s4+dr3zzx/uP7w1t2bX3x0+53rwd7rC5ONR8s399azer18ar68eu/H1z9+y6CCYWQUw3aYee/crxiKmYhUFRSAABmBVUAILIJBcGzUGCBkICbwqgmhB1N2EIYsmcsMqBZQCasWkUERsLPzdGujt3TyhT/40f+oLzzze3//HyzNz/7Rf/tBb2fNEtQDF7me6+7e+ItHH//5O1wPpsar1YhOzIygyKONdmNQU1cqB62i335y+/qnm+17G2yMMYGxpdBYMv3UAQCzIWJCgyqgSkRIKugQ0BprrWXDyMxG1YgaR+QEBAHRGDRioEikACwplULTYAgJAAAVgKPwcH9vb3cvqs1ePf3GhcX5opAXLlz41//m/yyCGoSt8NjJv/OP/o+WsZtb+/v9lTzt5xlUyq2/1BxHLN7+2dtdr5PTZ3c6a+MXF3/tb56689WHpsgL8DayQcgcMKmIqjonBpSJkRUAVAGAAIwxxlhPLEjoQA2roBcUZO+UvS+Rh8AOFUWIUvHeyn5v34kGAADaOzgMY5ck27kLDhLvFIY+3xm6+flXKq3wp5/8otaIO7t7E1OT06N2oj6by0E/XR30ru897fX79YkRTE5pZm4vHZs8fmxE4eD0ldNGRS3bKDABIaMCACIRESCoigowIQcQlALDRiH34hhDpBDV+twTF2FkCFkcARj0VOTept77QxtV+4P23sGOF0/gUblWCWLGFNAPoN3b7ucJQrBw4sTO2emdtv+dv/WPD3u3fvn2nwWv/sag++GND/9QoZ24x96VXTEVxXMbKyuffPQXXG1Uxq69/upwduTg7s2fmzhgSyaONKTCYGEDFGUAUsiF0VKADBCBqQpzDoVasFY9+kNlAKLAgFWgIiBlh6mAIWwkOIiwof2cgvLuzkHifB3YI0zMn22OPtNu356YX320/L/u3nrt6oXjw1Lwnd/63bR3+H//y9+7/vkHVJ3d3lz9ze//pazkSfcDrHC80KhfrQRw5+HbA62UdbKfYdcXF08cK+2VjAAK4DD1tmQROctSDiPDJTSgqIAgIEyhCcpE3vvce3DIhKGCYUXvKWVgmzOmRhVVRNpaZIr90FZLcWlz8+nm3sFoJSZiD/n0aK3Ya+4t7x9rTufG7w0PWqUo2d9cuf1TU/S0VPGNibWdYvkhLMz+49mG//LO76e9OO3btz946+l+eXT+FYltv+jduv+gWZ05e/F3sFGrLs5OH5+plEJc3ex+eGcVbKVUqrAVMshIYCVulmuthmEokhwKYXRoHQRkLIF6VrIcKlo1FhiyIsmTzLtuKajaaMxUl379jd/53W+/UYlCzEGpd+vxV04qN3754d37X+1uDP75P/m992/9kI0Qxf/hv/0gKs8P9tPBwWqlMf3bf+PvnTnZeu/tDx4tP3n3o3dqY8+cOfPGzAzd/OpP1tfWavVj333zN8lGtYNe1u7m/YSzgtgYAGBGIkZiDgxZViwUDhEPre0HdkA0BM1QxYmqmJCiAFF1kLrO0A09GCBErntXyQaFKzofffFOp5+pQmog4Xhp8eqx+sTnP/mzT37xVgLRF3vptTf+rtdjf/TDtyI4c+7sb1955Vt2agzH7K3lW1nqS7Ygl9SqpXLDJ/lWcjB0manUonLT9vIBpcWwlw22D7o73V7qJc0KQFRAURCRwhUISCZyGOViCwkEY6YyQVl9CDmicE4ypMSDUB5DP5Shz5I0SzRLbJ5EvYPhxsbKH/3JjwZZ5skRsslsszX2f/37/++7f/2vDjq3//A//T+/+F8/jtRv7axKHPZl6Iu2GwxcBrsHg1RqY9NnJ6fKVGzicGV/662bt/6gn3acD93Ak2bGeVOw7fQkTYZJlhJbAFEVQiZCawiICkAmFCRRb7wEpEgi5BgYlHxGoiEAeIXCORF0ufFeEQaEpJmJwvyDD95/5uqlFy+dNATAqCQmGP32N968/smfDjy89S799m/8jXMXv/XFzXcka5dLrtascW3hze/+TmNifDhIX/r6315v+3c+/KBSaSEwBHG5MfW1N//Kyy+fNU59oewzKXIvoiJsrRVQUhDRPHdEJhQmZURRFK/qhEgNQCAoQLlFUeWh51RyQWYKfA4gknnHNtc8z4Y+svEf/smPL545XoojNaCAYYaTdm5h9Mr1x1swOxZMXfwrv3vM/I/aO+9fp1CtMcFe9vmff3xXttu7y83WbCR06dy8k9g7188GVBwsf/LLtU/eQ1MZUVVLgSUGkERyCogQwzAKooAQxEDUKNdGSgCFzx0LGBBQAUREp+CJrWJcOCPeo4oUbjAYqhTWlDCKKQ4jjUphRasj//Qf/rNXLi4YMJ4Aiz4UdOvW3f/+w//4yZdfXrr6SmBMXPUYR63RpfrotEYmEGsx04CEHGSePLqCVJ2HMFdJkkyHe4ZUEMEaQXDOexBBMcZaBkEpMGBjODTEqAoOMCcgQQLUgHJEFbXqDSCCz714FC4cFal3gU3FBcM88qxRXkgRW3j0aPPF84sEHoHBBEJ46uKFv9P8u8fe/WDq2Akfeqd9yl2ys7v15XsH/d1kSP1Okia9zDvPZdGwTCWrubMJkIS2FMc1Q4hEiAigAIiKAIAARzgWFbRkkBwggBagBSADGiICYEWv6rz3CF5FfJH7AlEYKHG5BlwhcJ6ywnJA6H1hDACAF6dKxAgKoQ3GRkYnxmT95k92nmzvdbYGaZK7uFFvPX5ytyiVxyafbU2Vdx78vL3Xr48805hfgmx/uHfnyeM7oR0pV2YNgKqAd8LEoOB9gSTMoIreKaAjj+rZp977nEXYkqoDMSKscHT1wvnEe0Fh8qQFEUQlkgBiDAyVWY06cWUeu3z2jBKp2oDRey9oAGT9/u2f/fD/fZKMnLnwRqs8/vT+V83jl+dPnuqzPjno1udPj0+Wdg6uD7P9kZHy5MQY5aaK7c7+XlhdOH7iWYMIIqpeVPAooFAFAHDOG0uMaJgATZqpemsNgICAFylILSqpePECwCJWhAWYmAIEVLIUcoyKan21aqe+/rVvn5geJfVkGFUMYqYkCr1uf6hVKDWDykhJIO13B+3NNT/sbz3U3uDpxz/ul21xmERQ6h9srNwYhEVS9De13zns3Vo
5bBsAIEIAVBUABVAEQQCCo1ALjGEbhrnzRS7i1AXeBOgF09wbJAIFYCKjgMjGBAgizMwurlBstCiK8Ny1b77w8tdeunaaDRoHzJiht0yUezLUnBrNh21Av7788eho4/S1l6LGZCWyZy5MmkopwCgwxjA6tKk4DzlKBsN+PsDBUA76uQFEUFUVEUUkQiJCJmDDXrw69Sqojr2jwumRQnUIglpQQYCIzIjMahQIjgwQUY2ITBguzr/4/LWvnb9ydnp2pl5mEWccoAoyefFHVDNx7MQ3vvfX+741cewE8QAhzZI03c+kHwwPkqTXLdJhLoPcoXgkI4BiORAuQSWotQIMq61f6X1EUSjEk8EwDI2NFNXGpjleDUo87PWLfoHIxjKAFy+CRpCIOLAhGaOkZI0qEpCR+sTE7IuvvfLc5WfPLpysVC0hGBVVIFVRBWb1ogQenFVdXbn+i7feAmeHvTwd7osbFAguCINg3EqdmMAwoGFV8D4rBmnRy7ODvDhI874RVUQkZAAQL6IECoWIeGesRWOVTeEhTVVyImSviAreO7JIlgnJO1H1ZA16AxKWwpErz73wtVdfOH/h7EirGjODAICwogMAAlVQVYPkyTvnLQRWzNPNlUpjpjl/drZaqQZsY7ZlW45HjK0Za5gNkQUQ8c65Is96RX6YJsNuf4C20lIAxl/FV14ViExo2Fq2Jq5E1ZESsPYOun7gWQMEJAKRAg1iwF4FGcOoFIZl53h8+sw33vjW11567szCbBjwUeSKigwgAA6RQAG8F0JEQO8FDFC/s3P7wa366FRjdMoSyvAgOdjL+sO0QJ9jnhfOFV7ZizKJMUDMQTkIyhVbahxhALx4RFQUFYcaEKCqOJ8LMhkVdd4VhXMKSmrEGEVDCD4Xj2hMBCaioLKwcP47v/at1197eaxRsUSoQACC4EWJQBFRQREByBAqHK0PKdJePuiOMHcfPNy9tXIw6A+G3aTXz5Iic+Rz9d4XPvPivfMo4kHVcBCExsRsK0ZViFiPcAyAbIiIiAXBowcDgupy5zNRj8qojEAkjMq+cGJtGAWlMGieOv3sb7753TdeuFIthzl4ACVAAkIBBUhcToSGrKpHRBU37BwebG1vba10Dg+63WzQHfa62dCLM3CURYHG1lTBACqQCqDTIpfCOSeJF5c4n6YuPzTMrKoKoAqErIAAIKJkDQccliKPmGUexSAAIRsiADDAXtVYE4S2XCtfOP/Cb37rey89c7ZcDgGAEUkBQB1AmqVO3CD3kY3rZU0H3fbT7b2tje2Ndmc/Pezv5977aMLGU8F8uREFUVmDsidikMgEsf7q8VWAvFfxznkpnD86zdPMEJErnKoCkiqIeCZU9QAUhDaOAyXKUydOARgAGQFUUcCYgMIgrrcuPfPKb337+9fOnYrjAEAVkZTIa+ocBpj5PC8cQdjb7W09eLKztb69uXPYTfuFDaqjI4sLpUrIcdVaFxo04rN817s0HWiaHBbazbOOy/pF7ryIIDsNRWxkYxuUwqBsy9Z45wCAiEQUFFGPGAycuIgDUPSpl0zy3FnLACCgxOhRiUxcHb145eXf+st/9ZnzS9YiARoABQUHSdrd6Xabo+NxEGb9w73tR2t3N3d2twe5o7hUP3FhqjlRrZjIauG7Wf9x2h70hwedzuPOwXqv0x0Ms8PBsJDEFX3NC3Uo4ArUTALRMLTG2jgKS2EcGQBlZlFAUAUFVAWlwFCZbMwqknQHekS6pArgBcAqhtZU6gsXn/ned7937dzJKGACZUAEBPXd/YN2b9exH/S6Ozu7Gw9vP1p5ephErcljJ2bmomYQRmAwGXY3n6ys728/2tu6udNpJ1lWFFnquwWnEHpHLFHEaI1aEhIfEgBB4X1n6HPvCkkc9tgQAaEiIgKKKxQ8B0G5EVbHS2EpkhxcmkjuDVtQj4JIFlSDKJhfOvWX3/zWtXOnS4E1qgSkqAB5Nux3eu3B0DWq4cadL2/fuJOkQaV5auHUeLM5Wi6Bl4PdjRsbq5+tLV/f3lx1XjkuQVWwElmoUxbnPhtmhTjALOin3quq+CLxhIYNChrVwBgCUg6tYSJEVEBVYSJQMNbaOAxLZTRUDHLwRCoqAiiKyGCDMB5pznzjpa9/7fKlVikmEFRVwDQZ+Hw4HPbLjXpvsPLF+5/u7rTDxsz8wrlSPQzrYv3TlfufLt/54smTG+3hE1uLgskaaQ0lSgaZH7gkTX0BriBVC4hEtijUiQKREnjvJTciRgQdgoqABwNoABFBFbzoUcXoEUEU87QY9AdFUXhfFIUj5jBEE1G1OX718je/+cIb482G/mrwME+z9s5hkUGpVmmvP/nys+vdnplaeGl0aqxRNSRbj5c/v/HJW2tb9wuSuDlbqTdz73qHkPStgksKCMIQlLx6Y1iLQRTmtbqLIwdAntFLLoXPcm13tH+IcSU2RMNeblSPtiioIjKqOucEQawV0Vz80KsIs/rMgDUYgKXZ44vffvO1Y/NjgoogXtUXRXu3DTawVu7f+OD27Xu2dfz4s2frzUoj5u7qV59/8IOVx59kXKmMTgtxe6+TFSYTI+rCIGVbkkEBUJDmqs4rQKhapflz0cxUnOcOrSnyAXvIB7z8OLnxaH/6ZGO0Cf324FdaCOAIxcQmLkWkgM4rCoFj8apERCGRMRHH9fmXX/n182cW2NDR8Ljc77XbjjzT4c1ffnDni+Wxk8+MzB6fOzGh/a1Pf/Rfvrj54yyEaPK4FrTRHuQ5OiRiX42jeilsNLVRi27e3VndwDAa9XrIiAJeoR+aAAq39nBtcnZ6mPqdnc7Y2KiNEX0Scjo91oIaG4AjDwkKqookoKCCmhXiU5AMCABAhQyGGDUbz734nZevvVSvlhGAgHzuevsDLEzMwefvfXz33oORxRemZ5Zmp5o793/+/k/+4+OdVZoYcdxabw/2DzthYIl8OSphfmD4wfFzE5Nz035fak981QJBDghefeLUaX2QNXUvWV85HG3OBFDa2dwwxsW2ypnrd3aSPg8PUgMAgKgKqgAiiIwYKMTAZZf3i1StGhHxIFgu1adOP3/t2ROzLQJlAFQYDpPu4bBcDq5/9NGjh7v12auj80szk9V7X/yXn/z4Xw1CihdOdPbS3m7Pc3mkRlTZajaypZmGGx77/LNi+V6FXDzeMJ3uoK+WA0KJNXEK4FK/uZFD3s0lRiqhz/KseHB3px5RGNbDyIIx/awwAESASmiAneaqAzTWBGxRCkhVBwCh4RBNUK7Mnz336qVTxyohoiKCDvNkv9+rN+LP3v/ZreXV8tyludNLs3X//k///ac3/2vRchwf39uCrFceZp7NYaWUN6ut516dAdmXnjVu6b33P3V+MPPmxYUzI5sfrWDY9HmAXCKhIh1u3RsMknzyxLzDMhMkaby2BvU4q4/WgiBADYvMHO1QBRVCImTnKXPCyIGSZh6UgQIlG8TlyfETrz/33NKxaQRGlFxhe69fKpfu3vzo3q3bjZGF+ROnplv8oz/+l59+9p/K1SjQud3HWafrJMyXFrjC/W7Ku5uHn/3F3X
Perpendicular lines are helpful across many natural features like **tires**, **clocks**, and **logos**.\n\nA related hypothesis is that combing might allow curve detectors to be used for fur detection in some contexts. Another hypothesis is that a curve has higher “contrast” with perpendicular lines running toward it. Recall that in the dataset examples, the strongest negative pre-ReLU activations were curves at opposite orientations. If a curve detector wants to see a strong change in orientation between the curve and the space around it, it may consider perpendicular lines to provide more contrast than a solid color.\n\nFinally, we think it’s possible that combing is really just a convenient way to implement curve detectors — a side effect of a shortcut in circuit construction rather than an intrinsically useful feature. In conv2d1, edge detectors are inhibited by perpendicular lines in conv2d0. One of the things a line or curve detector needs to do is check that the image is not just a single repeating texture, but that it has a strong line surrounded by contrast. It seems to do this by weakly inhibiting parallel lines alongside the tangent. Being excited by a perpendicular line may be an easy way to implement an “inhibit an excitatory neuron” pattern which allows for capped inhibition, without creating dedicated neurons at the previous layer.\n\nCombing is not unique to curves. We also observe it in lines, and in basically any shape feature, like curves, that is built up from lines. A lot more work could be done exploring the combing phenomenon. Why does combing form? Does it persist in adversarially robust models? 
Is it an example of what Ilyas et al. call a “non-robust feature”? \n\n[Conclusion\n----------](#conclusion)Compared to fields like neuroscience, artificial neural networks make careful investigation easy. We can read and write to every weight in the neural network, use gradients to optimize stimuli, and analyze billions of realistic activations across a dataset. Composing these tools lets us run a wide range of experiments that show us different perspectives on a neuron. If every perspective shows the same story, it’s unlikely we’re missing something big. \n\nGiven this, it may seem odd to invest so much energy into just a handful of neurons. We agree. We first estimated it would take a week to understand the curve family. Instead, we spent months exploring the fractal of beauty and structure we found. \n\nMany paths led to new techniques for studying neurons in general, like synthetic stimuli or using circuit editing to ablate neuron behavior. Others are only relevant for some families, such as the equivariance motif or our hand-trained “artificial artificial neural network” that reimplements curve detectors. A couple were curve-specific, like exploring curve detectors as a type of curve analysis algorithm.\n\nIf our broader goal is to fully reverse-engineer neural networks, it may seem concerning that studying just one family took so much effort. However, from our experience studying neuron families at a variety of depths, we’ve found that it’s easy to understand the basics of a neuron family. [OpenAI Microscope](https://microscope.openai.com/models) shows you feature visualizations, dataset examples, and soon weights in just a few seconds. Since feature visualization shows strong evidence of causal behavior and dataset examples show what neurons respond to in practice, these are collectively strong evidence of what a neuron does. In fact, we understood the basics of curves at our first glance at them. 
\n\nWhile it’s usually possible to understand the main function of a neuron family at a glance, researchers engaging in closer inquiry into neuron families will be rewarded with deeper beauty.\nWhen we started, we were nervous that 10 neurons was too narrow a topic for a paper, but now we realize a complete investigation would take a book.\n\nThis article is part of the [Circuits thread](/2020/circuits/), an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks.\n\n[An Overview of Early Vision in InceptionV1](/2020/circuits/early-vision/)[Naturally Occurring Equivariance in Neural Networks](/2020/circuits/equivariance/)", "date_published": "2020-06-17T20:00:00Z", "authors": ["Nick Cammarata", "Gabriel Goh", "Shan Carter", "Ludwig Schubert", "Michael Petrov", "Chris Olah"], "summaries": ["Part one of a three part deep dive into the curve neuron family."], "doi": "10.23915/distill.00024.003", "journal_ref": "distill-pub", 
"bibliography": [{"link": "https://arxiv.org/pdf/1311.2901.pdf", "title": "Visualizing and understanding convolutional networks"}, {"link": "https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf", "title": "Distributed representations of words and phrases and their compositionality"}, {"link": "https://arxiv.org/pdf/1506.02078.pdf", "title": "Visualizing and understanding recurrent networks"}, {"link": "https://arxiv.org/pdf/1704.01444.pdf", "title": "Learning to generate reviews and discovering sentiment"}, {"link": "https://arxiv.org/pdf/1412.6856.pdf", "title": "Object detectors emerge in deep scene cnns"}, {"link": "https://distill.pub/2017/feature-visualization", "title": "Feature Visualization"}, {"link": "https://arxiv.org/pdf/1704.05796.pdf", "title": "Network Dissection: Quantifying Interpretability of Deep Visual Representations"}, {"link": "https://arxiv.org/pdf/1711.11561.pdf", "title": "Measuring the tendency of CNNs to Learn Surface Statistical Regularities"}, {"link": "https://arxiv.org/pdf/1811.12231.pdf", "title": "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness"}, {"link": "https://arxiv.org/pdf/1904.00760.pdf", "title": "Approximating cnns with bag-of-local-features models works surprisingly well on imagenet"}, {"link": "https://arxiv.org/pdf/1905.02175.pdf", "title": "Adversarial examples are not bugs, they are features"}, {"link": "https://arxiv.org/pdf/1803.06959.pdf", "title": "On the importance of single directions for generalization"}, {"link": "https://www.researchgate.net/profile/Aaron_Courville/publication/265022827_Visualizing_Higher-Layer_Features_of_a_Deep_Network/links/53ff82b00cf24c81027da530.pdf", "title": "Visualizing higher-layer features of a deep network"}, {"link": "https://arxiv.org/pdf/1312.6034.pdf", "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps"}, {"link": "https://arxiv.org/pdf/1412.1897.pdf", "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images"}, {"link": "https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html", "title": "Inceptionism: Going deeper into neural networks"}, {"link": "https://arxiv.org/pdf/1612.00005.pdf", "title": "Plug & play generative networks: Conditional iterative generation of images in latent space"}, {"link": "https://arxiv.org/pdf/1703.01365.pdf", "title": "Axiomatic attribution for deep networks"}, {"link": "https://arxiv.org/pdf/1610.02391.pdf", "title": "Grad-cam: Why did you say that? 
visual explanations from deep networks via gradient-based localization"}, {"link": "https://arxiv.org/pdf/1705.05598.pdf", "title": "PatternNet and PatternLRP--Improving the interpretability of neural networks"}, {"link": "https://arxiv.org/pdf/1711.00867.pdf", "title": "The (Un)reliability of saliency methods"}, {"link": "https://doi.org/10.23915/distill.00012", "title": "Differentiable Image Parameterizations"}, {"link": "https://www.biorxiv.org/content/early/2019/10/20/808907", "title": "Discrete neural clusters encode orientation, curvature and corners in macaque V4"}, {"link": "\\url{https://medium.com/@tom_25234/synthetic-abstractions-8f0e8f69f390 }", "title": "Synthetic Abstractions"}, {"link": "https://distill.pub/2018/building-blocks", "title": "The Building Blocks of Interpretability"}, {"link": "https://distill.pub/2017/aia/", "title": "Using Artificial Intelligence to Augment Human Intelligence"}]} {"id": "db677f11f115381340bcd9afbaa95af3", "title": "Exploring Bayesian Optimization", "url": "https://distill.pub/2020/bayesian-optimization", "source": "distill", "source_type": "blog", "text": "Many modern machine learning algorithms have a large number of hyperparameters. To effectively use these algorithms, we need to pick good hyperparameter values.\n\n In this article, we talk about Bayesian Optimization, a suite of techniques often used to tune hyperparameters. More generally, Bayesian Optimization can be used to optimize any black-box function.\n \n\n\n\n\n\n\n\nMining Gold!\n============\n\n\n\n Let us start with the example of gold mining. Our goal is to mine for gold in an unknown landInterestingly, our example is similar to one of the first use of Gaussian Processes (also called kriging), where Prof. Krige modeled the gold concentrations using a Gaussian Process..\n For now, we assume that the gold is distributed about a line. We want to find the location along this line with the maximum gold while only drilling a few times (as drilling is expensive).\n \n\n\n\n Let us suppose that the gold distribution f(x)f(x)f(x) looks something like the function below. It is bi-modal, with a maximum value around x=5x = 5x=5. For now, let us not worry about the X-axis or the Y-axis units.\n \n\n\n\n![](images/MAB_gifs/GT.svg)\n\n\n Initially, we have no idea about the gold distribution. We can learn the gold distribution by drilling at different locations. However, this drilling is costly. Thus, we want to minimize the number of drillings required while still finding the location of maximum gold quickly.\n \n\n\n\n We now discuss two common objectives for the gold mining problem.\n \n\n\n* **Problem 1: Best Estimate of Gold Distribution (Active Learning)** \n\n In this problem, we want to accurately estimate the gold distribution on the new land. We can not drill at every location due to the prohibitive cost. Instead, we should drill at locations providing **high information** about the gold distribution. This problem is akin to\n **Active Learning**.\n* **Problem 2: Location of Maximum Gold (Bayesian Optimization)** \n\n In this problem, we want to find the location of the maximum gold content. We, again, can not drill at every location. Instead, we should drill at locations showing **high promise** about the gold content. This problem is akin to\n **Bayesian Optimization**.\n\n\n\n We will soon see how these two problems are related, but not the same.\n \n\n\nActive Learning\n---------------\n\n\n\n For many machine learning problems, unlabeled data is readily available. 
However, labeling (or querying) is often expensive. As an example, for a speech-to-text task, the annotation requires expert(s) to label words and sentences manually. Similarly, in our gold mining problem, drilling (akin to labeling) is expensive. \n \n\n\n\n Active learning minimizes labeling costs while maximizing modeling accuracy. While there are various methods in active learning literature, we look at **uncertainty reduction**. This method proposes labeling the point whose model uncertainty is the highest. Often, the variance acts as a measure of uncertainty.\n \n\n\n\n Since we only know the true value of our function at a few points, we need a *surrogate model* for the values our function takes elsewhere. This surrogate should be flexible enough to model the true function. Using a Gaussian Process (GP) is a common choice, both because of its flexibility and its ability to give us uncertainty estimates\n \n Gaussian Process supports setting of priors by using specific kernels and mean functions. One might want to look at this excellent Distill article on Gaussian Processes to learn more. \n \n\n Please find [this](https://youtu.be/EnXxO3BAgYk) amazing video from Javier González on Gaussian Processes. \n .\n \n\n\n\n Our surrogate model starts with a prior of f(x)f(x)f(x) — in the case of gold, we pick a prior assuming that it’s smoothly distributed\n \n Specifics: We use a Matern 5/2 kernel due to its property of favoring doubly differentiable functions. See [Rasmussen and Williams 2004](http://www.gaussianprocess.org/gpml/) and [scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.gaussian_process.kernels.Matern.html), for details regarding the Matern kernel.\n .\n As we evaluate points (drilling), we get more data for our surrogate to learn from, updating it according to Bayes’ rule.\n \n\n\n\n![](images/MAB_gifs/prior2posterior.png)\n\n Each new data point updates our surrogate model, moving it closer to the ground truth. The black line and the grey shaded region indicate the mean (μ)(\\mu)(μ) and uncertainty (μ±σ)(\\mu \\pm \\sigma)(μ±σ) in our gold distribution estimate before and after drilling.\n \n\n\n In the above example, we started with uniform uncertainty. But after our first update, the posterior is certain near x=0.5x = 0.5x=0.5 and uncertain away from it. We could just keep adding more training points and obtain a more certain estimate of f(x)f(x)f(x).\n \n\n\n\n However, we want to minimize the number of evaluations. Thus, we should choose the next query point “smartly” using active learning. Although there are many ways to pick smart points, we will be picking the most uncertain one.\n \n\n\n\n This gives us the following procedure for Active Learning:\n \n\n\n\n1. Choose and add the point with the highest uncertainty to the training set (by querying/labeling that point)\n2. Train on the new training set\n3. Go to #1 till convergence or budget elapsed\n\n\n\n Let us now visualize this process and see how our posterior changes at every iteration (after each drilling).\n \n\n\n\n![](images/active-gp-img/0.png)\n\nThe visualization shows that one can estimate the true distribution in a few iterations. Furthermore, the most uncertain positions are often the farthest points from the current evaluation points. 
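To make the three-step loop above concrete, here is a minimal sketch of the uncertainty-reduction procedure using scikit-learn's Gaussian Process implementation with a Matern 5/2 kernel. The toy function `f`, the drilling budget, and all variable names are our own stand-ins for illustration, not the article's code or its actual gold distribution.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):
    # Stand-in for the unknown gold distribution (the article's ground truth differs).
    return np.exp(-(x - 2.0) ** 2) + 2.0 * np.exp(-((x - 5.0) ** 2) / 0.5)

domain = np.linspace(0, 6, 500).reshape(-1, 1)   # the 1-D box constraint 0 <= x <= 6
X = np.array([[0.5]])                            # initial drilling location
y = f(X).ravel()

# Matern 5/2 kernel, as discussed above.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(10):                              # drilling budget
    gp.fit(X, y)                                 # update the surrogate posterior
    mu, sigma = gp.predict(domain, return_std=True)
    x_next = domain[np.argmax(sigma)]            # step 1: most uncertain location
    X = np.vstack([X, [x_next]])                 # step 2: "drill" there, add to training set
    y = np.append(y, f(x_next))
```

Each pass through the loop corresponds to one drilling in the visualization above.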
At every iteration, active learning **explores** the domain to make the estimates better.\n \n\n\nBayesian Optimization\n---------------------\n\n\n\n In the previous section, we picked points in order to determine an accurate model of the gold content. But what if our goal is simply to find the location of maximum gold content? Of course, we could do active learning to estimate the true function accurately and then find its maximum. But that seems pretty wasteful — why should we use evaluations improving our estimates of regions where the function expects low gold content when we only care about the maximum?\n \n\n\n\n This is the core question in Bayesian Optimization: “Based on what we know so far, which point should we evaluate next?” Remember that evaluating each point is expensive, so we want to pick carefully! In the active learning case, we picked the most uncertain point, exploring the function. But in Bayesian Optimization, we need to balance exploring uncertain regions, which might unexpectedly have high gold content, against focusing on regions we already know have higher gold content (a kind of exploitation).\n \n\n\n\n We make this decision with something called an acquisition function. Acquisition functions are heuristics for how desirable it is to evaluate a point, based on our present modelMore details on acquisition functions can be accessed at on this [link](https://botorch.org/docs/acquisition).. We will spend much of this section going through different options for acquisition functions.\n \n\n\n\n This brings us to how Bayesian Optimization works. At every step, we determine what the best point to evaluate next is according to the acquisition function by optimizing it. We then update our model and repeat this process to determine the next point to evaluate.\n \n\n\n\n You may be wondering what’s “Bayesian” about Bayesian Optimization if we’re just optimizing these acquisition functions. Well, at every step we maintain a model describing our estimates and uncertainty at each point, which we update according to Bayes’ rule at each step. Our acquisition functions are based on this model, and nothing would be possible without them!\n \n\n\n\n### Formalizing Bayesian Optimization\n\n\n\n Let us now formally introduce Bayesian Optimization. Our goal is to find the location (x∈Rd{x \\in \\mathbb{R}^d}x∈Rd) corresponding to the global maximum (or minimum) of a function f:Rd↦Rf: \\mathbb{R}^d \\mapsto \\mathbb{R}f:Rd↦R.\n We present the general constraints in Bayesian Optimization and contrast them with the constraints in our gold mining exampleThe section below is based on the slides/talk from Peter Fraizer at Uber on Bayesian Optimization:\n * [Youtube talk](https://www.youtube.com/watch?v=c4KKvyWW_Xk),\n* [slide deck](https://people.orie.cornell.edu/pfrazier/Presentations/2018.11.INFORMS.tutorial.pdf)\n\n\n.\n\n\n| General Constraints | Constraints in Gold Mining example |\n| --- | --- |\n| fff’s feasible set AAA is simple,\n e.g., box constraints. | Our domain in the gold mining problem is a single-dimensional box constraint: 0≤x≤60 \\leq x \\leq 60≤x≤6. |\n| fff is continuous but lacks special structure,\n e.g., concavity, that would make it easy to optimize. | Our true function is neither a convex nor a concave function, resulting in local optimums. |\n| fff is derivative-free:\n evaluations do not give gradient information. | Our evaluation (by drilling) of the amount of gold content at a location did not give us any gradient information. 
|\n| fff is expensive to evaluate:\n the number of times we can evaluate it\n is severely limited. | Drilling is costly. |\n| fff may be noisy. If noise is present, we will assume it is independent and normally distributed, with common but unknown variance. | We assume noiseless measurements in our modeling (though, it is easy to incorporate normally distributed noise for GP regression). |\n\n\n\n To solve this problem, we will follow the following algorithm:\n \n\n\n\n\n\n1. We first choose a surrogate model for modeling the true function fff and define its **prior**.\n2. Given the set of **observations** (function evaluations), use Bayes rule to obtain the **posterior**.\n3. Use an acquisition function α(x)\\alpha(x)α(x), which is a function of the posterior, to decide the next sample point xt=argmaxxα(x)x\\_t = \\text{argmax}\\_x \\alpha(x)xt​=argmaxx​α(x).\n4. Add newly sampled data to the set of **observations** and goto step #2 till convergence or budget elapses.\n\n\n\n\n### Acquisition Functions\n\n\n\n Acquisition functions are crucial to Bayesian Optimization, and there are a wide variety of options\n Please find [these](https://www.cse.wustl.edu/~garnett/cse515t/spring_2015/files/lecture_notes/12.pdf) slides from Washington University in St. Louis to know more about acquisition functions.\n . In the following sections, we will go through a number of options, providing intuition and examples.\n \n\n\n\n\n\n#### Probability of Improvement (PI)\n\n\n\n This acquisition function chooses the next query point as the one which has the highest *probability of improvement* over the current max f(x+)f(x^+)f(x+). Mathematically, we write the selection of next point as follows, \n \n\n\n\nxt+1=argmax(αPI(x))=argmax(P(f(x)≥(f(x+)+ϵ)))\n x\\_{t+1} = argmax(\\alpha\\_{PI}(x)) = argmax(P(f(x) \\geq (f(x^+) +\\epsilon)))\n xt+1​=argmax(αPI​(x))=argmax(P(f(x)≥(f(x+)+ϵ)))\n\n\nxt+1=argmax(αPI(x))=argmax(P(f(x)≥(f(x+)+ϵ)))\n \\begin{aligned}\n x\\_{t+1} & = argmax(\\alpha\\_{PI}(x))\\\\\n & = argmax(P(f(x) \\geq (f(x^+) +\\epsilon)))\n \\end{aligned}\n xt+1​​=argmax(αPI​(x))=argmax(P(f(x)≥(f(x+)+ϵ)))​\n\n\n where, \n \n\n* P(⋅)P(\\cdot)P(⋅) indicates probability\n* ϵ\\epsilonϵ is a small positive number\n* And, x+=argmaxxi∈x1:tf(xi) x^+ = \\text{argmax}\\_{x\\_i \\in x\\_{1:t}}f(x\\_i)x+=argmaxxi​∈x1:t​​f(xi​) where xix\\_ixi​ is the location queried at ithi^{th}ith time step.\n\n\n\n \n Looking closely, we are just finding the upper-tail probability (or the CDF) of the surrogate posterior. Moreover, if we are using a GP as a surrogate the expression above converts to,\n \n\n\nxt+1=argmaxxΦ(μt(x)−f(x+)−ϵσt(x))x\\_{t+1} = argmax\\_x \\Phi\\left(\\frac{\\mu\\_t(x) - f(x^+) - \\epsilon}{\\sigma\\_t(x)}\\right)xt+1​=argmaxx​Φ(σt​(x)μt​(x)−f(x+)−ϵ​)\n\n where, \n \n\n* Φ(⋅)\\Phi(\\cdot)Φ(⋅) indicates the CDF\n\n\n\n\n The visualization below shows the calculation of αPI(x)\\alpha\\_{PI}(x)αPI​(x). The orange line represents the current max (plus an ϵ \\epsilonϵ) or f(x+)+ϵ f(x^+) + \\epsilonf(x+)+ϵ. The violet region shows the probability density at each point. The grey regions show the probability density below the current max. The “area” of the violet region at each point represents the “probability of improvement over current maximum”. The next point to evaluate via the PI criteria (shown in dashed blue line) is x=6x = 6x=6.\n \n\n\n\n![](images/MAB_gifs/density_pi.png)\n\n##### Intuition behind ϵ\\epsilonϵ in PI\n\n\n\n PI uses ϵ\\epsilonϵ to strike a balance between exploration and exploitation. 
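To make the closed-form expression above concrete, here is a rough sketch of the PI criterion for a GP surrogate. It reuses the `gp`, `domain`, and `y` names from the active-learning sketch earlier; these names and the helper below are our own illustration, not the article's implementation.

```python
import numpy as np
from scipy.stats import norm

def probability_of_improvement(gp, domain, y, epsilon=0.075):
    mu, sigma = gp.predict(domain, return_std=True)
    f_best = y.max()                     # f(x+), the best observation so far
    sigma = np.maximum(sigma, 1e-9)      # avoid dividing by zero at evaluated points
    z = (mu - f_best - epsilon) / sigma
    return norm.cdf(z)                   # Phi(z), the upper-tail probability

alpha_pi = probability_of_improvement(gp, domain, y, epsilon=0.075)
x_next = domain[np.argmax(alpha_pi)]     # next location to drill under PI
```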
\n Increasing ϵ\\epsilonϵ results in querying locations with a larger σ\\sigmaσ as their probability density is spread.\n \n\n\n\n Let us now see the PI acquisition function in action. We start with ϵ=0.075\\epsilon=0.075ϵ=0.075.\n \n\n\n\n![](images/MAB_pngs/PI/0.075/0.png)\n\n\n Looking at the graph above, we see that we reach the global maxima in a few iterationsTies are broken randomly..\n Our surrogate possesses a large uncertainty in x∈[2,4]x \\in [2, 4]x∈[2,4] in the first few iterationsThe proportion of uncertainty is identified by the grey translucent area..\n The acquisition function initially **exploits** regions with a high promisePoints in the vicinity of current maxima, which leads to high uncertainty in the region x∈[2,4]x \\in [2, 4]x∈[2,4]. This observation also shows that we do not need to construct an accurate estimate of the black-box function to find its maximum.\n \n\n\n\n![](images/MAB_pngs/PI/0.3/0.png)\n\n\n The visualization above shows that increasing ϵ\\epsilonϵ to 0.3, enables us to **explore** more. However, it seems that we are exploring more than required.\n \n\n\n\n What happens if we increase ϵ\\epsilonϵ a bit more?\n \n\n\n\n![](images/MAB_pngs/PI/3/0.png)\n\n\n We see that we made things worse! Our model now uses ϵ=3\\epsilon = 3ϵ=3, and we are unable to exploit when we land near the global maximum. Moreover, with high exploration, the setting becomes similar to active learning.\n \n\n\n Our quick experiments above help us conclude that ϵ\\epsilonϵ controls the degree of exploration in the PI acquisition function.\n\n \n\n#### Expected Improvement (EI)\n\n\n\n Probability of improvement only looked at *how likely* is an improvement, but, did not consider *how much* we can improve. The next criterion, called Expected Improvement (EI), does exactly thatA good introduction to the Expected Improvement acquisition function is by [this post](https://thuijskens.github.io/2016/12/29/bayesian-optimisation/) by Thomas Huijskens and [these slides](https://people.orie.cornell.edu/pfrazier/Presentations/2018.11.INFORMS.tutorial.pdf) by Peter Frazier!\n The idea is fairly simple — choose the next query point as the one which has the highest expected improvement over the current max f(x+)f(x^+)f(x+), where x+=argmaxxi∈x1:tf(xi) x^+ = \\text{argmax}\\_{x\\_i \\in x\\_{1:t}}f(x\\_i)x+=argmaxxi​∈x1:t​​f(xi​) and xix\\_ixi​ is the location queried at ithi^{th}ith time step.\n \n\n\n\n In this acquisition function, t+1tht + 1^{th}t+1th query point, xt+1x\\_{t+1}xt+1​, is selected according to the following equation.\n \n\n\nxt+1=argminxE(∣∣ht+1(x)−f(x⋆)∣∣ ∣ Dt)\n x\\_{t+1} = argmin\\_x \\mathbb{E} \\left( ||h\\_{t+1}(x) - f(x^\\star) || \\ | \\ \\mathcal{D}\\_t \\right)\n xt+1​=argminx​E(∣∣ht+1​(x)−f(x⋆)∣∣ ∣ Dt​)\n\n Where, fff is the actual ground truth function, ht+1h\\_{t+1}ht+1​ is the posterior mean of the surrogate at t+1tht+1^{th}t+1th timestep, Dt\\mathcal{D}\\_tDt​ is the training data {(xi,f(xi))} ∀x∈x1:t\\{(x\\_i,\n f(x\\_i))\\} \\ \\forall x \\in x\\_{1:t}{(xi​,f(xi​))} ∀x∈x1:t​ and x⋆x^\\starx⋆ is the actual position where fff takes the maximum value.\n \n\n\n\n In essence, we are trying to select the point that minimizes the distance to the objective evaluated at the maximum. Unfortunately, we do not know the ground truth function, fff. 
Mockus proposed\n the following acquisition function to overcome the issue.\n \n\n\n\nxt+1=argmaxxE(max{0, ht+1(x)−f(x+)} ∣ Dt)\n x\\_{t+1} = argmax\\_x \\mathbb{E} \\left( {max} \\{ 0, \\ h\\_{t+1}(x) - f(x^+) \\} \\ | \\ \\mathcal{D}\\_t \\right)\n xt+1​=argmaxx​E(max{0, ht+1​(x)−f(x+)} ∣ Dt​)\n\n\nxt+1= argmaxxE(max{0, ht+1(x)−f(x+)} ∣ Dt)\n \\begin{aligned}\n x\\_{t+1} = \\ & argmax\\_x \\mathbb{E} \\\\\n & \\left( {max} \\{ 0, \\ h\\_{t+1}(x) - f(x^+) \\} \\ | \\ \\mathcal{D}\\_t \\right)\n \\end{aligned}\n xt+1​= ​argmaxx​E(max{0, ht+1​(x)−f(x+)} ∣ Dt​)​\n\n\n where f(x+)f(x^+)f(x+) is the maximum value that has been encountered so far. This equation for GP surrogate is an analytical expression shown below.\n \n\n\n\nEI(x)={(μt(x)−f(x+)−ϵ)Φ(Z)+σt(x)ϕ(Z),if σt(x)>00,if σt(x)=0\n EI(x)=\n \\begin{cases}\n (\\mu\\_t(x) - f(x^+) - \\epsilon)\\Phi(Z) + \\sigma\\_t(x)\\phi(Z), & \\text{if}\\ \\sigma\\_t(x) > 0 \\\\\n 0, & \\text{if}\\ \\sigma\\_t(x) = 0\n \\end{cases}\n EI(x)={(μt​(x)−f(x+)−ϵ)Φ(Z)+σt​(x)ϕ(Z),0,​if σt​(x)>0if σt​(x)=0​\n\n\nEI(x)={[(μt(x)−f(x+)−ϵ) σt(x)>0∗Φ(Z)]+σt(x)ϕ(Z),0, σt(x)=0\n EI(x)= \\begin{cases}\n [(\\mu\\_t(x) - f(x^+) - \\epsilon) & \\ \\sigma\\_t(x) > 0 \\\\\n \\quad \\* \\Phi(Z)] + \\sigma\\_t(x)\\phi(Z),\\\\\n 0, & \\ \\sigma\\_t(x) = 0\n \\end{cases}\n EI(x)=⎩⎪⎨⎪⎧​[(μt​(x)−f(x+)−ϵ)∗Φ(Z)]+σt​(x)ϕ(Z),0,​ σt​(x)>0 σt​(x)=0​\n\nZ=μt(x)−f(x+)−ϵσt(x)Z= \\frac{\\mu\\_t(x) - f(x^+) - \\epsilon}{\\sigma\\_t(x)}Z=σt​(x)μt​(x)−f(x+)−ϵ​\n\n where Φ(⋅)\\Phi(\\cdot)Φ(⋅) indicates CDF and ϕ(⋅)\\phi(\\cdot)ϕ(⋅) indicates pdf.\n \n\n\nFrom the above expression, we can see that *Expected Improvement* will be high when: i) the expected value of μt(x)−f(x+)\\mu\\_t(x) - f(x^+)μt​(x)−f(x+) is high, or, ii) when the uncertainty σt(x)\\sigma\\_t(x)σt​(x) around a point is high.\n\n\n \n\n Like the PI acquisition function, we can moderate the amount of exploration of the EI acquisition function by modifying ϵ\\epsilonϵ.\n \n\n\n\n![](images/MAB_pngs/EI/0.01/0.png)\n\n\n For ϵ=0.01\\epsilon = 0.01ϵ=0.01 we come close to the global maxima in a few iterations. \n\n\n\n We now increase ϵ\\epsilonϵ to explore more.\n \n\n\n\n![](images/MAB_pngs/EI/0.3/0.png)\n\n\n As we expected, increasing the value to ϵ=0.3\\epsilon = 0.3ϵ=0.3 makes the acquisition function explore more. Compared to the earlier evaluations, we see less exploitation. We see that it evaluates only two points near the global maxima.\n \n\n\n\n Let us increase ϵ\\epsilonϵ even more.\n \n\n\n\n![](images/MAB_pngs/EI/3/0.png)\n\n\n Is this better than before? It turns out a yes and a no; we explored too much at ϵ=3\\epsilon = 3ϵ=3 and quickly reached near the global maxima. But unfortunately, we did not exploit to get more gains near the global maxima.\n \n\n\n#### PI vs Ei\n\n\n\n\n We have seen two closely related methods, The *Probability of Improvement* and the *Expected Improvement*.\n \n\n\n\n![](images/MAB_gifs/Ei_Pi_graph/0.svg)\n\n\n The scatter plot above shows the policies’ acquisition functions evaluated on different pointsEach dot is a point in the search space. Additionally, the training set used while making the plot only consists of a single observation (0.5,f(0.5))(0.5, f(0.5))(0.5,f(0.5)).\n We see that αEI\\alpha\\_{EI}αEI​ and αPI\\alpha\\_{PI}αPI​ reach a maximum of 0.3 and around 0.47, respectively. 
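As a rough sketch of how the two criteria can be compared numerically, the EI closed form above can be implemented next to the PI helper from the earlier sketch. This is our own illustrative code, not the article's, and it assumes the `gp`, `domain`, `y`, and `probability_of_improvement` names defined previously.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(gp, domain, y, epsilon=0.01):
    mu, sigma = gp.predict(domain, return_std=True)
    f_best = y.max()                           # f(x+)
    improvement = mu - f_best - epsilon
    z = improvement / np.maximum(sigma, 1e-9)
    ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, 0.0)        # EI is defined to be 0 where sigma = 0

alpha_ei = expected_improvement(gp, domain, y)
alpha_pi = probability_of_improvement(gp, domain, y)
# The two arrays can now be compared point by point, as in the scatter plot above.
```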
Choosing a point with low αPI\\alpha\\_{PI}αPI​ and high αEI\\alpha\\_{EI}αEI​ translates to high riskSince “Probability of Improvement” is low and high rewardSince “Expected Improvement” is high.\n In case of multiple points having the same αEI\\alpha\\_{EI}αEI​, we should prioritize the point with lesser risk (higher αPI\\alpha\\_{PI}αPI​). Similarly, when the risk is same (same αPI\\alpha\\_{PI}αPI​), we should choose the point with greater reward (higher αEI\\alpha\\_{EI}αEI​).\n \n\n\n\n### Thompson Sampling\n\n\n\n Another common acquisition function is Thompson Sampling . At every step, we sample a function from the surrogate’s posterior and optimize it. For example, in the case of gold mining, we would sample a plausible distribution of the gold given the evidence and evaluate (drill) wherever it peaks.\n \n\n\n\n Below we have an image showing three sampled functions from the learned surrogate posterior for our gold mining problem. The training data constituted the point x=0.5x = 0.5x=0.5 and the corresponding functional value.\n \n\n\n\n\n![](images/MAB_gifs/thompson.svg)\n\n\n\n We can understand the intuition behind Thompson sampling by two observations:\n \n\n* Locations with high uncertainty (σ(x) \\sigma(x) σ(x)) will show a large variance in the functional values sampled from the surrogate posterior. Thus, there is a non-trivial probability that a sample can take high value in a highly uncertain region. Optimizing such samples can aid **exploration**.\n \n\n\n\n As an example, the three samples (sample #1, #2, #3) show a high variance close to x=6x=6x=6. Optimizing sample 3 will aid in exploration by evaluating x=6x=6x=6.\n* The sampled functions must pass through the current max value, as there is no uncertainty at the evaluated locations. Thus, optimizing samples from the surrogate posterior will ensure **exploiting** behavior.\n \n\n\n\n As an example of this behavior, we see that all the sampled functions above pass through the current max at x=0.5x = 0.5x=0.5. If x=0.5x = 0.5x=0.5 were close to the global maxima, then we would be able to **exploit** and choose a better maximum.\n\n\n\n![](images/MAB_pngs/Thompson/0.png)\n\n\nThe visualization above uses Thompson sampling for optimization. Again, we can reach the global optimum in relatively few iterations.\n \n\n\n### Random\n\n\n\n We have been using intelligent acquisition functions until now.\n We can create a random acquisition function by sampling xxx\n randomly. \n\n\n\n![](images/MAB_pngs/Rand/0.png)\n\n The visualization above shows that the performance of the random acquisition function is not that bad! However, if our optimization was more complex (more dimensions), then the random acquisition might perform poorly.\n\n\n\n### Summary of Acquisition Functions\n\n \n Let us now summarize the core ideas associated with acquisition functions: i) they are heuristics for evaluating the utility of a point; ii) they are a function of the surrogate posterior; iii) they combine exploration and exploitation; and iv) they are inexpensive to evaluate.\n\n\n#### Other Acquisition Functions\n\n \n\nWe have seen various acquisition functions until now. One trivial way to come up with acquisition functions is to have a explore/exploit combination.\n \n\n\n\n\n### Upper Confidence Bound (UCB)\n\n\n\n One such trivial acquisition function that combines the exploration/exploitation tradeoff is a linear combination of the mean and uncertainty of our surrogate model. 
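In code, this linear combination is only a line or two on top of the GP posterior. The sketch below uses the same toy setup and names as the earlier sketches, with `lam` standing in for the trade-off weight λ discussed next; it is an illustration, not the article's implementation.

```python
import numpy as np

def upper_confidence_bound(gp, domain, lam=2.0):
    mu, sigma = gp.predict(domain, return_std=True)
    return mu + lam * sigma               # alpha(x) = mu(x) + lambda * sigma(x)

x_next = domain[np.argmax(upper_confidence_bound(gp, domain))]
```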
The model mean signifies exploitation (of our model’s knowledge) and model uncertainty signifies exploration (due to our model’s lack of observations).\n α(x)=μ(x)+λ×σ(x)\\alpha(x) = \\mu(x) + \\lambda \\times \\sigma(x)α(x)=μ(x)+λ×σ(x)\n\n\n\n\n The intuition behind the UCB acquisition function is weighing of the importance between the surrogate’s mean vs. the surrogate’s uncertainty. The λ\\lambdaλ above is the hyperparameter that can control the preference between exploitation or exploration.\n \n\n\n\n We can further form acquisition functions by combining the existing acquisition functions though the physical interpretability of such combinations might not be so straightforward. One reason we might want to combine two methods is to overcome the limitations of the individual methods.\n \n\n\n### Probability of Improvement + λ ×\\lambda \\ \\timesλ × Expected Improvement (EI-PI)\n\n\n\n One such combination can be a linear combination of PI and EI.\n\n We know PI focuses on the probability of improvement, whereas EI focuses on the expected improvement. Such a combination could help in having a tradeoff between the two based on the value of λ\\lambdaλ.\n \n\n\n### Gaussian Process Upper Confidence Bound (GP-UCB)\n\n\n\n Before talking about GP-UCB, let us quickly talk about **regret**. Imagine if the maximum gold was aaa units, and our optimization instead samples a location containing b have been found via running grid search at high granularity.\n \n\n\n\n![](images/MAB_pngs/PI3d/0.05/0.png)\n\nAbove we see a slider showing the work of the *Probability of Improvement* acquisition function in finding the best hyperparameters.\n\n\n\n![](images/MAB_pngs/EI3d/0.01/0.png)\n\nAbove we see a slider showing the work of the *Expected Improvement* acquisition function in finding the best hyperparameters.\n\n\n### Comparison\n\n\n\n Below is a plot that compares the different acquisition functions. We ran the *random* acquisition function several times to average out its results.\n \n\n\n\n![](images/MAB_gifs/comp3d.svg)\n\n\n All our acquisition beat the *random* acquisition function after seven iterations. We see the *random* method seemed to perform much better initially, but it could not reach the global optimum, whereas Bayesian Optimization was able to get fairly close. The initial subpar performance of Bayesian Optimization can be attributed to the initial exploration.\n \n\n\n#### Other Examples\n\n\n\n### Example 2 — Random Forest\n\n\n\n Using Bayesian Optimization in a Random Forest Classifier.\n\n\n\n\n We will continue now to train a Random Forest on the moons dataset we had used previously to learn the Support Vector Machine model. 
The primary hyperparameters of the Random Forest that we would like to optimize for accuracy are the **number** of Decision Trees and the **maximum depth** of each of those decision trees.\n\n The parameters of the Random Forest are the individual trained Decision Tree models.\n\n We will again be using Gaussian Processes with a Matern kernel to estimate and predict the accuracy function over the two hyperparameters.\n\n![](images/MAB_pngs/RFPI3d/0.05/0.png)\n\n Above is a typical Bayesian Optimization run with the *Probability of Improvement* acquisition function.\n\n![](images/MAB_pngs/RFEI3d/0.5/0.png)\n\nAbove we see a run showing the work of the *Expected Improvement* acquisition function in optimizing the hyperparameters.\n\n![](images/MAB_pngs/RFGP_UCB3d/1-2/0.png)\n\n Now using the *Gaussian Processes Upper Confidence Bound* acquisition function in optimizing the hyperparameters.\n\n![](images/MAB_pngs/RFRand3d/1-2/0.png)\n\nLet us now use the Random acquisition function.\n\n![](images/MAB_gifs/RFcomp3d.svg)\n\n The optimization strategies seemed to struggle in this example. This can be attributed to the non-smooth ground truth. This shows that the effectiveness of Bayesian Optimization depends on the surrogate’s ability to model the actual black-box function. It is interesting to notice that the Bayesian Optimization framework still beats the *random* strategy using various acquisition functions.\n\n### Example 3 — Neural Networks\n\n Let us take this example to get an idea of how to apply Bayesian Optimization to train neural networks. Here we will be using `scikit-optim`, which also provides support for optimizing functions over a search space of categorical, integer, and real variables. We will not be plotting the ground truth here, as it is extremely costly to do so. Below are some code snippets that show the ease of using Bayesian Optimization packages for hyperparameter tuning.\n\n The code initially declares a search space for the optimization problem. We limit the search space to be the following:\n\n* batch\\_size — This hyperparameter sets the number of training examples to combine to find the gradients for a single step in gradient descent. Our search space for the possible batch sizes consists of integer values such that batch\\_size = 2^i \\ \\forall \\ 2 \\leq i \\leq 7, \\ i \\in \\mathbb{Z}.\n* learning rate — This hyperparameter sets the step size with which we will perform gradient descent in the neural network. We will be searching over all the real numbers in the range [10^{-6}, \\ 1].\n* activation — We will have one categorical variable, i.e. the activation to apply to our neural network layers. This variable can take on values in the set \\{relu, \\ sigmoid\\}.\n\n    from skopt.space import Integer, Real, Categorical  # search-space primitives from scikit-optimize\n\n    log_batch_size = Integer(\n        low=2,\n        high=7,\n        name='log_batch_size'\n    )\n    lr = Real(\n        low=1e-6,\n        high=1e0,\n        prior='log-uniform',\n        name='lr'\n    )\n    activation = Categorical(\n        categories=['relu', 'sigmoid'],\n        name='activation'\n    )\n\n    dimensions = [\n        log_batch_size,\n        lr,\n        activation\n    ]\n\n Now import `gp_minimize` from `scikit-optim` to perform the optimization. **Note**: One will need to negate the accuracy values as we are using the minimizer function from `scikit-optim`. 
Below we show calling the optimizer using *Expected Improvement*, but of course we can select from a number of other acquisition functions.\n\n    # initial parameters (1st point)\n    default_parameters = [4, 1e-1, 'relu']\n\n    # bayesian optimization\n    search_result = gp_minimize(\n        func=train,\n        dimensions=dimensions,\n        acq_func='EI',  # Expected Improvement\n        n_calls=11,\n        x0=default_parameters\n    )\n\n![](images/MAB_gifs/conv.svg)\n\n In the graph above, the y-axis denotes the best accuracy found so far, f(x^+), and the x-axis denotes the evaluation number.\n\n Looking at the above example, we can see that incorporating Bayesian Optimization is not difficult and can save a lot of time. Optimizing to get an accuracy of nearly one in around seven iterations is impressive! The example above was inspired by [Hvass Laboratories’ Tutorial Notebook](https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/19_Hyper-Parameters.ipynb) showcasing hyperparameter optimization in TensorFlow using `scikit-optim`.\n\n Let us get the numbers into perspective. If we had run this optimization using a grid search, it would have taken around 5 × 2 × 7 = 70 iterations, whereas Bayesian Optimization took only seven. Each iteration took around fifteen minutes; this sets the time required for the grid search to complete at around seventeen hours!\n\nConclusion and Summary\n======================\n\n In this article, we looked at Bayesian Optimization for optimizing a black-box function. Bayesian Optimization is well suited when the function evaluations are expensive, making grid or exhaustive search impractical. We looked at the key components of Bayesian Optimization. First, we looked at the notion of using a surrogate function (with a prior over the space of objective functions) to model our black-box function. Next, we looked at the “Bayes” in Bayesian Optimization — the function evaluations are used as data to obtain the surrogate posterior. We then looked at acquisition functions, which are functions of the surrogate posterior and are optimized sequentially. This sequential optimization is inexpensive and thus of utility to us. We also looked at a few acquisition functions and showed how these different functions balance exploration and exploitation. Finally, we looked at some practical examples of Bayesian Optimization for optimizing hyperparameters for machine learning models.\n\n We hope you had a good time reading the article and hope you are ready to **exploit** the power of Bayesian Optimization. In case you wish to **explore** more, please read the [Further Reading](#FurtherReading) section below. 
We also provide our [repository](https://github.com/distillpub/post--bayesian-optimization) to reproduce the entire article.", "date_published": "2020-05-05T20:00:00Z", "authors": ["Apoorv Agnihotri", "Nipun Batra"], "summaries": ["How to tune hyperparameters for your machine learning model using Bayesian optimization."], "doi": "10.23915/distill.00026", "journal_ref": "distill-pub", "bibliography": [{"link": "https://journals.co.za/content/saimm/52/6/AJA0038223X_4792", "title": "A statistical approach to some basic mine valuation problems on the Witwatersrand "}, {"link": "http://burrsettles.com/pub/settles.activelearning.pdf", "title": "Active Learning Literature Survey"}, {"link": "http://www.robotics.stanford.edu/~stong/papers/tong_thesis.pdf", "title": "Active learning: theory and applications"}, {"link": "https://doi.org/10.1109/JPROC.2015.2494218", "title": "Taking the Human Out of the Loop: A Review of Bayesian Optimization"}, {"link": "https://distill.pub/2019/visual-exploration-gaussian-processes/", "title": "A Visual Exploration of Gaussian Processes"}, {"link": "http://www.gaussianprocess.org/gpml/chapters/RW.pdf", "title": "Gaussian Processes in Machine Learning"}, {"link": "https://doi.org/10.1007/BF00940509", "title": "Bayesian approach to global optimization and application to multiobjective and constrained problems"}, {"link": "https://doi.org/10.1093/biomet/25.3-4.285", "title": "On The Likelihood That One Unknown Probability Exceeds Another In View Of The Evidence Of Two Samples"}, {"link": "http://dl.acm.org/citation.cfm?id=2999325.2999464", "title": "Practical Bayesian Optimization of Machine Learning Algorithms"}, {"link": "http://dl.acm.org/citation.cfm?id=3042817.3042832", "title": "Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures"}, {"link": "http://papers.nips.cc/paper/7111-bayesian-optimization-with-gradients.pdf", "title": "Bayesian Optimization with Gradients"}, {"link": "http://www.jmlr.org/papers/volume18/16-558/16-558.pdf", "title": "Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization"}, {"link": "http://proceedings.mlr.press/v54/klein17a.html", "title": "Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets"}, {"link": "http://dl.acm.org/citation.cfm?id=3045118.3045225", "title": "Safe Exploration for Optimization with Gaussian Processes"}, {"link": "http://dl.acm.org/citation.cfm?id=3045118.3045349", "title": "Scalable Bayesian Optimization Using Deep Neural Networks"}, {"link": "http://dl.acm.org/citation.cfm?id=3020548.3020587", "title": "Portfolio Allocation for Bayesian Optimization"}, {"link": "http://doi.acm.org/10.1145/1791212.1791238", "title": "Bayesian Optimization for Sensor Set Selection"}, {"link": "https://doi.org/10.1214/18-BA1110", "title": "Constrained Bayesian Optimization with Noisy Experiments"}]} {"id": "957e47782c32e397de3f788737eefba8", "title": "An Overview of Early Vision in InceptionV1", "url": "https://distill.pub/2020/circuits/early-vision", "source": "distill", "source_type": "blog", "text": "![](images/multiple-pages.svg)\n\n This article is part of the [Circuits thread](/2020/circuits/), a collection of short articles and commentary by an open scientific collaboration delving into the inner workings of neural networks. 
[Zoom In: An Introduction to Circuits](/2020/circuits/zoom-in/)
[Curve Detectors](/2020/circuits/curve-detectors/)

The first few articles of the Circuits project will be focused on early vision in InceptionV1 — for our purposes, the five convolutional layers leading up to the third pooling layer.

*[Figure: the InceptionV1 architecture, from the input up to the softmax, with the early-vision layers [conv2d0](#conv2d0), [conv2d1](#conv2d1), [conv2d2](#conv2d2), [mixed3a](#mixed3a), and [mixed3b](#mixed3b) highlighted. For our purposes, we’ll consider early vision to be the first five layers. Click on a layer to jump to its section.]*

Over the course of these layers, we see the network go from raw pixels up to sophisticated [boundary detection](#group_mixed3b_boundary), basic shape detection (eg. [curves](#group_mixed3b_curves), [circles](#group_mixed3b_circles_loops), [spirals](#group_mixed3b_curve_shapes), [triangles](#group_mixed3a_angles)), [eye detectors](#group_mixed3b_eyes), and even crude detectors for [very small heads](#group_mixed3b_proto_head). Along the way, we see a variety of interesting intermediate features, including [Complex Gabor detectors](#conv2d1_discussion_complex_gabor) (similar to some classic “complex cells” of neuroscience), [black and white vs color detectors](#mixed3a_discussion_BW), and [small circle formation from curves](#mixed3a_discussion_small_circle).

Studying early vision has two major advantages as a starting point in our investigation. Firstly, it’s particularly easy to study: it’s close to the input, the circuits are only a few layers deep, there aren’t that many different neurons (it’s common for vision models to have on the order of 64 channels in their initial convolutional layers, which are applied at many spatial positions, so while there are many neurons, the number of unique neurons is orders of magnitude smaller),
and the features seem quite simple. Secondly, early vision seems most likely to be universal: to have the same features and circuits form across different architectures and tasks.

Before we dive into detailed explorations of different parts of early vision, we wanted to give a broader overview of how we presently understand it. This article sketches out our understanding, as an annotated collection of what we call “neuron groups.” We also provide illustrations of selected circuits at each layer.

By limiting ourselves to early vision, this article “only” considers the first 1,056 neurons of InceptionV1. (We will not discuss the “bottleneck” neurons in mixed3a/mixed3b, which we generally think of as low-rank connections to the previous layer.) But our experience is that a thousand neurons is more than enough to be disorienting when one begins studying a model. Our hope is that this article will help readers avoid this disorientation by providing some structure and handholds for thinking about them.

### Playing Cards with Neurons

Dmitri Mendeleev is often said to have discovered the Periodic Table by playing “chemical solitaire,” writing the details of each element on a card and patiently fiddling with different ways of classifying and organizing them. Some modern historians are skeptical about the cards, but Mendeleev’s story is a compelling demonstration that there can be a lot of value in simply organizing phenomena, even when you don’t have a theory or firm justification for that organization yet. Mendeleev is far from unique in this. For example, in biology, taxonomies of species preceded genetics and the theory of evolution, which gave them a theoretical foundation.

Our experience is that many neurons in vision models seem to fall into families of similar features. For example, it’s not unusual to see a dozen neurons detecting the same feature in different orientations or colors. Perhaps even more strikingly, the same “neuron families” seem to recur across models! Of course, it’s well known that Gabor filters and color contrast detectors reliably form in the first layer of convolutional neural networks, but we were quite surprised to see this generalize to later layers.

This article shares our working categorization of units in the first five layers of InceptionV1 into neuron families. These families are ad-hoc, human-defined collections of features that seem to be similar in some way. We’ve found these helpful for communicating among ourselves and breaking the problem of understanding InceptionV1 into smaller chunks. While there are some families we suspect are “real”, many others are categories of convenience, or categories we have low confidence about. The main goal of these families is to help researchers orient themselves.

In constructing this categorization, our understanding of individual neurons was developed by looking at feature visualizations, dataset examples, how a feature is built from the previous layer, how it is used by the next layer, and other analysis. It’s worth noting that the level of attention we’ve given to individual neurons varies greatly: we’ve dedicated entire forthcoming articles to detailed analysis of some of these units, while many others have only received a few minutes of cursory investigation.

In some ways, our categorization of units is similar to [Net Dissect](http://netdissect.csail.mit.edu/), which correlates neurons with a
pre-defined set of features and groups them into categories like color, texture, and object.\n This has the advantage of removing subjectivity and being much more scalable.\n At the same time, it also has downsides: correlation can be misleading and the pre-defined taxonomy may miss the true feature types.\n[Net Dissect](http://netdissect.csail.mit.edu/) was very elegant work which advanced our ability to systematically talk about network features.\n However, to understand the differences between correlating features with a pre-defined taxonomy and individually studying them, it may be illustrative to consider how it classifies some features.\n Net Dissect doesn’t include the canonical InceptionV1, but it does include a variant of it.\n Glancing through their version of layer [`mixed3b`](http://netdissect.csail.mit.edu/dissect/googlenet_imagenet/html/inception_3b-output.html) we see many units which appear from dataset examples likely to be familiar feature types like curve detectors, divot detectors, boundary detectors, eye detector, and so forth, but are classified as weakly correlated with another feature — often objects that it seems unlikely could be detected at such an early layer.\n Or in another fun case, there is a feature (372) which is most correlated with a cat detector, but appears to be detecting left-oriented whiskers!\n \n In particular, if we expect models to have novel, unanticipated features — for example, high-low frequency detectors — the fact that they are unanticipated makes them impossible to include in a set of pre-defined features.\n The only way to discover them is the laborious process of manually investigating each feature.\n In the future, you could imagine hybrid approaches, where a human investigator is saved time by having many features sorted into a (continually growing) set of known features, especially if the [universality hypothesis](https://drafts.distill.pub/circuits-zoom-in/#claim-3) holds.\n\n\n\n#### Caveats\n\n\n* This is a broad overview and our understanding of many of these units is low-confidence. We fully expect, in retrospect, to realize we misunderstood some units and categories.\n* Many neuron groups are catch-all categories or convenient organizational categories that we don’t think reflect fundamental structure.\n* Even for neuron groups we suspect do reflect a fundamental structure (eg. 
some can be recovered from factorizing the layer’s weight matrices), the boundaries of these groups can be blurry and the inclusion of some neurons involves judgement calls.

### Presentation of Neurons

In order to talk about neurons, we need to somehow represent them. While we could use neuron indices, it’s very hard to keep hundreds of numbers straight in one’s head. Instead, we use [feature visualizations](https://distill.pub/2017/feature-visualization/), optimized images which highly stimulate a neuron. Our feature visualization is done with the [lucid library](https://github.com/tensorflow/lucid). We use small amounts of transformation robustness when visualizing the first few layers, because it has a larger proportional effect on their small receptive fields, and increase it as we move to higher layers. For low layers, we use L2 regularization to push pixels towards gray. For the first layer, we follow the convention of other papers and just show the weights, which for the special case of the first layer are equivalent to feature visualization with the right L2 penalty.

When we represent a neuron with a feature visualization, we don’t intend to claim that the feature visualization captures the entirety of the neuron’s behavior. Rather, the role of a feature visualization is like a variable name in understanding a program. It replaces an arbitrary number with a more meaningful symbol.

### Presentation of Circuits

Although this article is focused on giving an overview of the features which exist in early vision, we’re also interested in understanding how they’re computed from earlier features. To do this, we present [circuits](https://distill.pub/2020/circuits/zoom-in/#claim-2) consisting of a neuron, the units it has the strongest (L2 norm) weights to in the previous layer, and the weights between them. Some neurons in `mixed3a` and `mixed3b` are in branches consisting of a “bottleneck” 1x1 conv that greatly reduces the number of channels, followed by a 5x5 conv. Although there is a ReLU between them, we generally think of them as a low-rank factorization of a single weight matrix and visualize the product of the two weights.
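As a concrete (if simplified) illustration of that product-of-weights step, here is a minimal NumPy sketch. It assumes hypothetical weight arrays in TensorFlow’s HWIO layout (`w_1x1` for the bottleneck 1x1 conv and `w_5x5` for the following 5x5 conv); it is not code from the Circuits codebase, just the linear-algebra operation described above with the intermediate ReLU ignored.

```python
import numpy as np

# Hypothetical shapes: a 1x1 "bottleneck" conv from 256 channels down to 32,
# followed by a 5x5 conv from 32 channels up to 64 (HWIO layout).
w_1x1 = np.random.randn(1, 1, 256, 32)
w_5x5 = np.random.randn(5, 5, 32, 64)

# Ignoring the ReLU between the two convolutions, their composition acts like
# a single 5x5 conv whose weights are the product of the two weight matrices:
#   w_eff[h, w, i, o] = sum_m  w_1x1[0, 0, i, m] * w_5x5[h, w, m, o]
w_eff = np.einsum('im,hwmo->hwio', w_1x1[0, 0], w_5x5)

print(w_eff.shape)  # (5, 5, 256, 64): weights from input channels to branch outputs
```

These expanded weights are the kind of object one would visualize when asking how a unit in a 5x5 branch connects back to the previous layer’s channels.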
Additionally, some neurons in these layers are in a branch consisting of maxpooling followed by a 1x1 conv; we present these units as their weights replicated over the region of their maxpooling. In some cases, we’ve also included a few neurons that have weaker connections if they seem to have particular pedagogical value; in these cases, we’ve mentioned doing so in the caption. Neurons are visually displayed by their feature visualizations, as discussed above. Weights are represented using a color map with red as positive and blue as negative.

For example, here is a circuit of a circle-detecting unit in `mixed3a` being assembled from earlier curves and a primitive circle detector. We’ll discuss this example [in more depth](#mixed3a_discussion_small_circle) later.

*[Circuit diagram: a `mixed3a` circle detector assembled from earlier curve detectors and a primitive circle detector. Red edges are positive (excitation) weights; blue edges are negative (inhibition) weights. Click on the feature visualization of any neuron to see more weights!]*

At any point, you can click on a neuron’s feature visualization to see its weights to the 50 neurons in the previous layer it is most connected to (that is, how it is assembled from the previous layer), and also the 50 neurons in the next layer it is most connected to (that is, how it is used going forward). This allows further investigation, and gives you an unbiased view of the weights if you’re concerned about cherry-picking.


---


`conv2d0`
---------

The first conv layer of every vision model we’ve looked at is mostly composed of two kinds of features: color-contrast detectors and Gabor filters. InceptionV1’s `conv2d0` is no exception to this rule, and most of its units fall into these categories.

In contrast to other models, however, the features aren’t perfect color contrast detectors and Gabor filters. For lack of a better word, they’re messy. We have no way of knowing, but it seems likely this is a result of the gradient not reaching the early layers very well during training. Note that InceptionV1 predated the adoption of modern techniques like batch norm and Adam, which make it much easier to train deep models well. If we compare to the TF-Slim rewrite of InceptionV1, which does use BatchNorm, we see crisper features.

The weights for the units in the first layer of the TF-Slim version of InceptionV1, which adds BatchNorm. (Units are sorted by the first principal component of the adjacency matrix between the first and second layers.) These features are typical of a well-trained conv net. Note how, unlike the canonical InceptionV1, these units have a crisp division between black and white Gabors, color Gabors, color-contrast units and color center-surround units.
\n \n\n![](images/slim_weights.png)\n\n\n\n\n\n One subtlety that’s worth noting here is that Gabor filters almost always come in pairs of weights which are negative versions of each other, both in InceptionV1 and other vision models.\n A single Gabor filter can only detect edges at some offsets, but the negative version fills in holes, allowing for the formation of complex Gabor filters in the next layer.\n\n\n\n\n\n\n\n### [**Gabor Filters** 44%](#group_conv2d0_gabor_filters)\n\n\n\n[![](images/neuron/conv2d0_9.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_9.html)9[![](images/neuron/conv2d0_53.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_53.html)53[![](images/neuron/conv2d0_54.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_54.html)54[![](images/neuron/conv2d0_15.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_15.html)15[![](images/neuron/conv2d0_22.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_22.html)22[![](images/neuron/conv2d0_39.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_39.html)39[![](images/neuron/conv2d0_30.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_30.html)30[![](images/neuron/conv2d0_1.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_1.html)1[![](images/neuron/conv2d0_3.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_3.html)3[![](images/neuron/conv2d0_49.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_49.html)49[![](images/neuron/conv2d0_14.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_14.html)14[![](images/neuron/conv2d0_17.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_17.html)17[![](images/neuron/conv2d0_62.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_62.html)62[![](images/neuron/conv2d0_20.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_20.html)20[![](images/neuron/conv2d0_27.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_27.html)27[![](images/neuron/conv2d0_0.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_0.html)0[![](images/neuron/conv2d0_10.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_10.html)10[![](images/neuron/conv2d0_28.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_28.html)28[![](images/neuron/conv2d0_21.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_21.html)21[![](images/neuron/conv2d0_63.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_63.html)63[![](images/neuron/conv2d0_45.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_45.html)45[![](images/neuron/conv2d0_18.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_18.html)18[![](images/neuron/conv2d0_6.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_6.html)6[![](images/neuron/conv2d0_57.png)](https://storage.googleapis.com/distill-circuits/incept
ionv1-weight-explorer/conv2d0_57.html)57[![](images/neuron/conv2d0_41.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_41.html)41[![](images/neuron/conv2d0_43.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_43.html)43[![](images/neuron/conv2d0_46.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_46.html)46[![](images/neuron/conv2d0_8.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_8.html)8\n\n\nShow all 28 neurons.\nCollapse neurons.\nGabor filters are a simple edge detector, highly sensitive to the alignment of the edge. They’re almost universally found in the fist layer of vision models. Note that Gabor filters almost always come in pairs of negative reciprocals.\n\n\n\n### [**Color Contrast** 42%](#group_conv2d0_color_contrast)\n\n\n\n[![](images/neuron/conv2d0_59.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_59.html)59[![](images/neuron/conv2d0_23.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_23.html)23[![](images/neuron/conv2d0_5.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_5.html)5[![](images/neuron/conv2d0_7.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_7.html)7[![](images/neuron/conv2d0_48.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_48.html)48[![](images/neuron/conv2d0_29.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_29.html)29[![](images/neuron/conv2d0_12.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_12.html)12[![](images/neuron/conv2d0_24.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_24.html)24[![](images/neuron/conv2d0_55.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_55.html)55[![](images/neuron/conv2d0_13.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_13.html)13[![](images/neuron/conv2d0_32.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_32.html)32[![](images/neuron/conv2d0_50.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_50.html)50[![](images/neuron/conv2d0_36.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_36.html)36[![](images/neuron/conv2d0_37.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_37.html)37[![](images/neuron/conv2d0_16.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_16.html)16[![](images/neuron/conv2d0_11.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_11.html)11[![](images/neuron/conv2d0_33.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_33.html)33[![](images/neuron/conv2d0_34.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_34.html)34[![](images/neuron/conv2d0_47.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_47.html)47[![](images/neuron/conv2d0_38.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_38.html)38[![](images/neuron/conv2d0_58.png)](https://stor
age.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_58.html)58[![](images/neuron/conv2d0_2.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_2.html)2[![](images/neuron/conv2d0_60.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_60.html)60[![](images/neuron/conv2d0_61.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_61.html)61[![](images/neuron/conv2d0_4.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_4.html)4[![](images/neuron/conv2d0_51.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_51.html)51[![](images/neuron/conv2d0_25.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_25.html)25\n\n\nShow all 27 neurons.\nCollapse neurons.\nThese units detect a color one side of their receptive field, and the opposite color on the other side. Compare to later color contrast ([`conv2d1`](#group_conv2d1_color_contrast), [`conv2d2`](#group_conv2d2_color_contrast), [`mixed3a`](#group_mixed3a_color_contrast), [`mixed3b`](#group_mixed3b_color_contrast_gradient)).\n\n\n\n### [**Other Units** 14%](#group_conv2d0_other_units)\n\n\n\n[![](images/neuron/conv2d0_19.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_19.html)19[![](images/neuron/conv2d0_26.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_26.html)26[![](images/neuron/conv2d0_31.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_31.html)31[![](images/neuron/conv2d0_35.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_35.html)35[![](images/neuron/conv2d0_40.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_40.html)40[![](images/neuron/conv2d0_42.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_42.html)42[![](images/neuron/conv2d0_44.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_44.html)44[![](images/neuron/conv2d0_52.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_52.html)52[![](images/neuron/conv2d0_56.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d0_56.html)56\n\n\n \nUnits that don’t fit in another category.\n\n \n\n\n\n---\n\n\n \n \n\n`conv2d1`\n---------\n\n\n\n In `conv2d1`, we begin to see some of the classic [complex cell](https://en.wikipedia.org/wiki/Complex_cell) features of visual neuroscience.\n These neurons respond to similar patterns to units in `conv2d0`, but are invariant to some changes in position and orientation.\n\n\n\n\n**Complex Gabors:**\n A nice example of this is the “Complex Gabor” feature family.\n Like simple Gabor filters, complex Gabors detect edges.\n But unlike simple Gabors, they are relatively invariant to the exact position of the edge or which side is dark or light.\n This is achieved by being excited by multiple Gabor filters in similar orientations — and most critically, by being excited by “reciprocal Gabor filters” that detect the same pattern with dark and light switched.\n This can be seen as an early example of the “union over cases” 
motif.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAll neurons in the previous layer with at least 30% of the max weight magnitude are shown,\n both positive (excitation) and negative (inhibition).\n Click on a neuron to see its forwards and backwards weights.\n\n\n Note that `conv2d1` is a 1x1 convolution, so there’s only a single weight — a single line, in this diagram — between each channel in the previous and this one.\n There is a pooling layer between them, so the features it connects to are pooled versions of the previous layer rather than original features.\n This plays an important role in determining the features we observe: in models with larger convolutions in their second layer, we often see a jump to crude versions of the larger more complex features we’ll see in the following layers.\n\n\n\n\n In addition to Complex Gabors, we see a variety of other features, including\n more invariant color contrast detectors, Gabor-like features that are less selective for a single orientation, and lower-frequency features.\n\n\n\n\n\n\n\n### [**Low Frequency** 27%](#group_conv2d1_low_frequency)\n\n\n\n[![](images/neuron/conv2d1_1.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_1.html)1[![](images/neuron/conv2d1_13.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_13.html)13[![](images/neuron/conv2d1_27.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_27.html)27[![](images/neuron/conv2d1_47.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_47.html)47[![](images/neuron/conv2d1_56.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_56.html)56[![](images/neuron/conv2d1_60.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_60.html)60[![](images/neuron/conv2d1_23.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_23.html)23[![](images/neuron/conv2d1_49.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_49.html)49[![](images/neuron/conv2d1_0.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_0.html)0[![](images/neuron/conv2d1_43.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_43.html)43[![](images/neuron/conv2d1_28.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_28.html)28[![](images/neuron/conv2d1_29.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_29.html)29[![](images/neuron/conv2d1_34.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_34.html)34[![](images/neuron/conv2d1_8.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_8.html)8[![](images/neuron/conv2d1_37.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_37.html)37[![](images/neuron/conv2d1_19.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_19.html)19[![](images/neuron/conv2d1_15.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_15.html)15\n\n\nShow all 17 
neurons.\nCollapse neurons.\nThese units seem to respond to lower-frequency edge patterns, but we haven’t studied them very carefully.\n\n\n\n### [**Gabor Like** 17%](#group_conv2d1_gabor_like)\n\n\n\n[![](images/neuron/conv2d1_4.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_4.html)4[![](images/neuron/conv2d1_6.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_6.html)6[![](images/neuron/conv2d1_32.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_32.html)32[![](images/neuron/conv2d1_38.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_38.html)38[![](images/neuron/conv2d1_41.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_41.html)41[![](images/neuron/conv2d1_63.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_63.html)63[![](images/neuron/conv2d1_7.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_7.html)7[![](images/neuron/conv2d1_11.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_11.html)11[![](images/neuron/conv2d1_18.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_18.html)18[![](images/neuron/conv2d1_24.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_24.html)24[![](images/neuron/conv2d1_39.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_39.html)39\n\n\nShow all 11 neurons.\nCollapse neurons.\nThese units respond to edges stimuli, but seem to respond to a wider range of orientations, and also respond to color contrasts that align with the edge. We haven’t studied them very carefully.\n\n\n\n### [**Color Contrast** 16%](#group_conv2d1_color_contrast)\n\n\n\n[![](images/neuron/conv2d1_5.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_5.html)5[![](images/neuron/conv2d1_9.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_9.html)9[![](images/neuron/conv2d1_10.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_10.html)10[![](images/neuron/conv2d1_12.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_12.html)12[![](images/neuron/conv2d1_20.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_20.html)20[![](images/neuron/conv2d1_21.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_21.html)21[![](images/neuron/conv2d1_42.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_42.html)42[![](images/neuron/conv2d1_45.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_45.html)45[![](images/neuron/conv2d1_50.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_50.html)50[![](images/neuron/conv2d1_53.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_53.html)53\n\n\n \nThese units detect a color on one side of the receptive field, and a different color on the opposite side. Composed of lower-level color contrast detectors, they often respond to color transitions in a range of translation and orientation variations. 
Compare to earlier [color contrast (`conv2d0`)](#group_conv2d0_color_contrast) and later color contrast ([`conv2d2`](#group_conv2d2_color_contrast), [`mixed3a`](#group_mixed3a_color_contrast), [`mixed3b`](#group_mixed3b_color_contrast_gradient)).\n\n\n\n### [**Multicolor** 14%](#group_conv2d1_multicolor)\n\n\n\n[![](images/neuron/conv2d1_3.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_3.html)3[![](images/neuron/conv2d1_33.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_33.html)33[![](images/neuron/conv2d1_35.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_35.html)35[![](images/neuron/conv2d1_40.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_40.html)40[![](images/neuron/conv2d1_17.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_17.html)17[![](images/neuron/conv2d1_16.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_16.html)16[![](images/neuron/conv2d1_57.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_57.html)57[![](images/neuron/conv2d1_26.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_26.html)26[![](images/neuron/conv2d1_31.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_31.html)31\n\n\n \nThese units respond to mixtures of colors without an obvious strong spatial structure preference.\n\n\n\n### [**Complex Gabor** 14%](#group_conv2d1_complex_gabor)\n\n\n\n[![](images/neuron/conv2d1_51.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_51.html)51[![](images/neuron/conv2d1_58.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_58.html)58[![](images/neuron/conv2d1_30.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_30.html)30[![](images/neuron/conv2d1_25.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_25.html)25[![](images/neuron/conv2d1_52.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_52.html)52[![](images/neuron/conv2d1_54.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_54.html)54[![](images/neuron/conv2d1_22.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_22.html)22[![](images/neuron/conv2d1_61.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_61.html)61[![](images/neuron/conv2d1_55.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_55.html)55\n\n\n \nLike Gabor Filters, but fairly invariant to the exact position, formed by adding together multiple Gabor detectors in the same orientation but different phases. 
We call these ‘Complex’ after [complex cells](https://en.wikipedia.org/wiki/Complex_cell) in neuroscience.\n\n\n\n### [**Color** 6%](#group_conv2d1_color)\n\n\n\n[![](images/neuron/conv2d1_2.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_2.html)2[![](images/neuron/conv2d1_48.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_48.html)48[![](images/neuron/conv2d1_36.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_36.html)36[![](images/neuron/conv2d1_46.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_46.html)46\n\n\n \nTwo of these units seem to track brightness (bright vs dark), while the other two units seem to mostly track hue, dividing the space of hues between them. One responds to red/orange/yellow, while the other responds to purple/blue/turqoise. Unfortunately, their circuits seem to heavily rely on the existence of a Local Response Normalization layer after `conv2d0`, which makes it hard to reason about.\n\n\n\n### [**Other Units** 5%](#group_conv2d1_other_units)\n\n\n\n[![](images/neuron/conv2d1_44.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_44.html)44[![](images/neuron/conv2d1_62.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_62.html)62[![](images/neuron/conv2d1_59.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_59.html)59\n\n\n \nUnits that don’t fit in another category.\n\n\n\n### [**hatch** 2%](#group_conv2d1_hatch)\n\n\n\n[![](images/neuron/conv2d1_14.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d1_14.html)14\n\n\n \nThis unit detects Gabor patterns in two orthogonal directions, selecting for a “hatch” pattern.\n\n \n\n \n\n\n\n---\n\n\n \n \n\n`conv2d2`\n---------\n\n\n\nIn `conv2d2` we see the emergence of very simple shape predecessors.\nThis layer sees the first units that might be described as “line detectors”, preferring a single longer line to a Gabor pattern and accounting for about 25% of units.\nWe also see tiny curve detectors, corner detectors, divergence detectors, and a single very tiny circle detector.\nOne fun aspect of these features is that you can see that they are assembled from Gabor detectors in the feature visualizations, with curves being built from small piecewise Gabor segments.\nAll of these units still moderately fire in response to incomplete versions of their feature, such as a small curve running tangent to the edge detector.\n\n\n\n\nSince `conv2d2` is a 3x3 convolution, our understanding of these shape precursor features (and some texture features) maps to particular ways Gabor and lower-frequency edges are being spatially assembled into new features.\nAt a high-level, we see a few primary patterns:\n\n\n\n\n\n\nMany line-like features are weakly excited by perpendicular lines beside the primary line, a phenomenon we call “combing”. 
\nLine\nCurve\nShifted Line\nGabor Texture\nCorner / Lisp\nHatch Texture\nDivergence\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n We also begin to see various kinds of texture and color detectors start to become a major constituent of the layer, including color-contrast and color center surround features, as well as Gabor-like, hatch, low-frequency and high-frequency textures.\n A handful of units look for different textures on different sides of their receptive field.\n\n\n\n\n\n\n\n### [**Color Contrast** 21%](#group_conv2d2_color_contrast)\n\n\n\n[![](images/neuron/conv2d2_10.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_10.html)10[![](images/neuron/conv2d2_36.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_36.html)36[![](images/neuron/conv2d2_172.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_172.html)172[![](images/neuron/conv2d2_56.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_56.html)56[![](images/neuron/conv2d2_131.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_131.html)131[![](images/neuron/conv2d2_176.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_176.html)176[![](images/neuron/conv2d2_35.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_35.html)35[![](images/neuron/conv2d2_41.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_41.html)41[![](images/neuron/conv2d2_45.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_45.html)45[![](images/neuron/conv2d2_126.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_126.html)126[![](images/neuron/conv2d2_77.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_77.html)77[![](images/neuron/conv2d2_120.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_120.html)120[![](images/neuron/conv2d2_101.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_101.html)101[![](images/neuron/conv2d2_0.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_0.html)0[![](images/neuron/conv2d2_43.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_43.html)43[![](images/neuron/conv2d2_62.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_62.html)62[![](images/neuron/conv2d2_106.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_106.html)106[![](images/neuron/conv2d2_169.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_169.html)169[![](images/neuron/conv2d2_127.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_127.html)127[![](images/neuron/conv2d2_6.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_6.html)6[![](images/neuron/conv2d2_68.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2
_68.html)68[![](images/neuron/conv2d2_60.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_60.html)60[![](images/neuron/conv2d2_134.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_134.html)134[![](images/neuron/conv2d2_51.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_51.html)51[![](images/neuron/conv2d2_74.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_74.html)74[![](images/neuron/conv2d2_85.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_85.html)85[![](images/neuron/conv2d2_115.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_115.html)115[![](images/neuron/conv2d2_24.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_24.html)24[![](images/neuron/conv2d2_14.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_14.html)14[![](images/neuron/conv2d2_16.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_16.html)16[![](images/neuron/conv2d2_151.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_151.html)151[![](images/neuron/conv2d2_168.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_168.html)168[![](images/neuron/conv2d2_19.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_19.html)19[![](images/neuron/conv2d2_7.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_7.html)7[![](images/neuron/conv2d2_88.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_88.html)88[![](images/neuron/conv2d2_177.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_177.html)177[![](images/neuron/conv2d2_183.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_183.html)183[![](images/neuron/conv2d2_76.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_76.html)76[![](images/neuron/conv2d2_70.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_70.html)70[![](images/neuron/conv2d2_122.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_122.html)122\n\n\nShow all 40 neurons.\nCollapse neurons.\nThese units detect a color on one side of the receptive field, and a different color on the opposite side. Composed of lower-level color contrast detectors, they often respond to color transitions in a range of translation and orientation variations. 
Compare to earlier color contrast ([`conv2d0`](#group_conv2d0_color_contrast), [`conv2d1`](#group_conv2d1_color_contrast)) and later color contrast ([`mixed3a`](#group_mixed3a_color_contrast), [`mixed3b`](#group_mixed3b_color_contrast_gradient)).\n\n\n\n### [**Line** 17%](#group_conv2d2_line)\n\n\n\n[![](images/neuron/conv2d2_107.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_107.html)107[![](images/neuron/conv2d2_31.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_31.html)31[![](images/neuron/conv2d2_9.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_9.html)9[![](images/neuron/conv2d2_112.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_112.html)112[![](images/neuron/conv2d2_133.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_133.html)133[![](images/neuron/conv2d2_103.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_103.html)103[![](images/neuron/conv2d2_97.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_97.html)97[![](images/neuron/conv2d2_125.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_125.html)125[![](images/neuron/conv2d2_20.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_20.html)20[![](images/neuron/conv2d2_33.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_33.html)33[![](images/neuron/conv2d2_113.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_113.html)113[![](images/neuron/conv2d2_185.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_185.html)185[![](images/neuron/conv2d2_150.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_150.html)150[![](images/neuron/conv2d2_166.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_166.html)166[![](images/neuron/conv2d2_157.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_157.html)157[![](images/neuron/conv2d2_57.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_57.html)57[![](images/neuron/conv2d2_145.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_145.html)145[![](images/neuron/conv2d2_48.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_48.html)48[![](images/neuron/conv2d2_55.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_55.html)55[![](images/neuron/conv2d2_15.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_15.html)15[![](images/neuron/conv2d2_152.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_152.html)152[![](images/neuron/conv2d2_11.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_11.html)11[![](images/neuron/conv2d2_141.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_141.html)141[![](images/neuron/conv2d2_174.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_174.html)174[![](images/neuron/conv2d2_44.png)](https://storage.googleapis.com/distill-circuits/inceptionv1
-weight-explorer/conv2d2_44.html)44[![](images/neuron/conv2d2_170.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_170.html)170[![](images/neuron/conv2d2_27.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_27.html)27[![](images/neuron/conv2d2_100.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_100.html)100[![](images/neuron/conv2d2_30.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_30.html)30[![](images/neuron/conv2d2_82.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_82.html)82[![](images/neuron/conv2d2_65.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_65.html)65[![](images/neuron/conv2d2_3.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_3.html)3[![](images/neuron/conv2d2_108.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_108.html)108\n\n\nShow all 33 neurons.\nCollapse neurons.\nThese units are beginning to look for a single primary line. Some look for different colors on each side. Many exhibit “combing” (small perpendicular lines along the main one), a very common but not presently understood phenomenon in line-like features across vision models. Compare to [shifted lines](#group_conv2d2_shifted_line) and later [lines (`mixed3a`)](#group_mixed3a_lines).\n\n\n\n### [**Shifted Line** 8%](#group_conv2d2_shifted_line)\n\n\n\n[![](images/neuron/conv2d2_116.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_116.html)116[![](images/neuron/conv2d2_61.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_61.html)61[![](images/neuron/conv2d2_8.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_8.html)8[![](images/neuron/conv2d2_22.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_22.html)22[![](images/neuron/conv2d2_69.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_69.html)69[![](images/neuron/conv2d2_96.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_96.html)96[![](images/neuron/conv2d2_78.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_78.html)78[![](images/neuron/conv2d2_18.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_18.html)18[![](images/neuron/conv2d2_17.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_17.html)17[![](images/neuron/conv2d2_132.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_132.html)132[![](images/neuron/conv2d2_190.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_190.html)190[![](images/neuron/conv2d2_64.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_64.html)64[![](images/neuron/conv2d2_81.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_81.html)81[![](images/neuron/conv2d2_154.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_154.html)154[![](images/neuron/conv2d2_179.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_179.html)179[![](images/
neuron/conv2d2_136.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_136.html)136\n\n\nShow all 16 neurons.\nCollapse neurons.\nThese units look for edges “shifted” to the side of the receptive field instead of the middle. This may be linked to the many 1x1 convs in the next layer. Compare to [lines](#group_conv2d2_line) (non-shifted) and later [lines (`mixed3a`)](#group_mixed3a_lines).\n\n\n\n### [**Textures** 8%](#group_conv2d2_textures)\n\n\n\n[![](images/neuron/conv2d2_21.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_21.html)21[![](images/neuron/conv2d2_52.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_52.html)52[![](images/neuron/conv2d2_59.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_59.html)59[![](images/neuron/conv2d2_119.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_119.html)119[![](images/neuron/conv2d2_148.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_148.html)148[![](images/neuron/conv2d2_161.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_161.html)161[![](images/neuron/conv2d2_162.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_162.html)162[![](images/neuron/conv2d2_186.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_186.html)186[![](images/neuron/conv2d2_189.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_189.html)189[![](images/neuron/conv2d2_191.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_191.html)191[![](images/neuron/conv2d2_72.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_72.html)72[![](images/neuron/conv2d2_46.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_46.html)46[![](images/neuron/conv2d2_67.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_67.html)67[![](images/neuron/conv2d2_40.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_40.html)40[![](images/neuron/conv2d2_38.png)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/conv2d2_38.html)38\n\n\nShow all 15 neurons.\nCollapse neurons.\nA broad category of units detecting repeating local structure.\n\n\n\n### [**Other Units** 
7%](#group_conv2d2_other_units)

*[Feature visualizations: 14 units, each linked to its page in the InceptionV1 weight explorer.]*

Catch-all category for all other units.


### [**Color Center-Surround** 7%](#group_conv2d2_color_center_surround)

*[Feature visualizations: 13 units, each linked to its page in the InceptionV1 weight explorer.]*

These units look for one color in the middle and another (typically opposite) color on the boundary. They are generally more sensitive to the center than to the boundary. Compare to later [Color Center-Surround (`mixed3a`)](#group_mixed3a_color_center_surround) and [Color Center-Surround (`mixed3b`)](#group_mixed3b_color_center_surround).


### [**Tiny Curves** 6%](#group_conv2d2_tiny_curves)

*[Feature visualizations: 12 units, each linked to its page in the InceptionV1 weight explorer.]*

Very small curve (and one circle) detectors. Many of these units respond to a range of curvatures, all the way from a flat line to a curve. Compare to later [curves (`mixed3a`)](#group_mixed3a_curves) and [curves (`mixed3b`)](#group_mixed3b_curves). See also the [circuit example and discussion](#mixed3a_discussion_small_circle) of their use in forming [small circles/eyes (`mixed3a`)](#group_mixed3a_eyes_small_circles).


### [**Early Brightness Gradient** 6%](#group_conv2d2_early_brightness_gradient)

*[Feature visualizations: 12 units, each linked to its page in the InceptionV1 weight explorer.]*

These units detect oriented gradients in brightness. They support a variety of similar units in the next layer. Compare to later [brightness gradients (`mixed3a`)](#group_mixed3a_brightness_gradient) and [brightness gradients (`mixed3b`)](#group_mixed3b_brightness_gradients).


### [**Gabor Textures** 6%](#group_conv2d2_gabor_textures)

*[Feature visualizations: 12 units, each linked to its page in the InceptionV1 weight explorer.]*

Like complex Gabor units from the previous layer, but larger. They're probably starting to be better described as a texture.


### [**Texture Contrast** 4%](#group_conv2d2_texture_contrast)

*[Feature visualizations: 8 units, each linked to its page in the InceptionV1 weight explorer.]*

These units look for different textures on opposite sides of their receptive field. One side is typically a Gabor pattern.


### [**Hatch Textures** 3%](#group_conv2d2_hatch_textures)

*[Feature visualizations: 6 units, each linked to its page in the InceptionV1 weight explorer.]*

These units detect Gabor patterns in two orthogonal directions, selecting for a "hatch" pattern.


### [**Color/Multicolor** 3%](#group_conv2d2_color_multicolor)

*[Feature visualizations: 5 units, each linked to its page in the InceptionV1 weight explorer.]*

Several units look for mixtures of colors but seem indifferent to their organization.


### [**Corners** 2%](#group_conv2d2_corners)

*[Feature visualizations: 4 units, each linked to its page in the InceptionV1 weight explorer.]*

These units detect two Gabor patterns which meet at approximately 90 degrees, causing them to respond to corners.


### [**Line Divergence** 1%](#group_conv2d2_line_divergence)

*[Feature visualizations: 2 units, each linked to its page in the InceptionV1 weight explorer.]*

These units detect lines diverging from a point.


---


`mixed3a`
---------

`mixed3a` has a significant increase in the diversity of features we observe. Some of them — [curve detectors](#group_mixed3a_curves) and [high-low frequency detectors](#group_mixed3a_high_low_frequency) — were discussed in [Zoom In](https://distill.pub/2020/circuits/zoom-in/) and will be discussed again in later articles in great detail. But there are some really interesting circuits in `mixed3a` which we haven't discussed before, and we'll go through a couple of selected ones to give a flavor of what happens at this layer.

**Black & White Detectors:** One interesting property of `mixed3a` is the emergence of ["black and white" detectors](#group_mixed3a_bw_vs_color), which detect the absence of color. Prior to `mixed3a`, color contrast detectors look for transitions of a color to near-complementary colors (e.g. blue vs yellow). From this layer on, however, we'll often see color detectors which compare a color to the absence of color. Additionally, black and white detectors can allow the detection of greyscale images, which may be correlated with ImageNet categories (see [4a:479](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed4a_479.html), which detects black and white portraits).

The circuit for our black and white detector is quite simple: almost all of its large weights are negative, detecting the absence of colors. Roughly, it computes `NOT(color_feature_1 OR color_feature_2 OR ...)`.

*[Circuit diagram: black and white detectors are created by inhibiting against a wide variety of color detectors. The sixteen strongest-magnitude weights to the previous layer are shown. For simplicity, only one spatial weight for positive and negative has been shown, but they all have almost identical structure. Click on a neuron to see its forwards and backwards weights.]*
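To make the `NOT(... OR ...)` reading concrete, here is a minimal NumPy sketch of the idea with invented weights and activations (not the actual InceptionV1 parameters): a ReLU unit whose large incoming weights are all negative, plus a positive bias, fires exactly when none of its non-negative color inputs do.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical non-negative activations of a few color features (post-ReLU).
color_features = {
    "no color present":   np.array([0.0, 0.0, 0.0]),
    "one color active":   np.array([0.0, 2.5, 0.0]),
    "many colors active": np.array([1.5, 2.0, 3.0]),
}

w = np.array([-1.0, -0.8, -1.2])  # all large weights are negative (inhibition)
b = 1.0                           # positive bias, so the unit fires by default

for name, x in color_features.items():
    activation = relu(w @ x + b)
    print(f"{name:>20s}: {activation:.2f}")

# The unit is active only when none of the color features fire,
# i.e. roughly NOT(color_feature_1 OR color_feature_2 OR ...).
```

The positive bias plays the role of the "default on" state; any sufficiently strong color activation pushes the pre-activation below zero and the ReLU silences the unit.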
**Small Circle Detector:** We also see somewhat more complex shapes in `mixed3a`. Of course, curves (which we discussed in [Zoom In](https://distill.pub/2020/circuits/zoom-in/)) are a prominent example of this. But there are lots of other interesting examples. For instance, we see a variety of [small circle and eye detectors](#group_mixed3a_eyes_small_circles) form by piecing together [early curve and circle detectors (`conv2d2`)](#group_conv2d2_tiny_curves):

*[Circuit diagram: small circle and eye detectors assembled from early curve and circle detectors, with both excitatory and inhibitory spatial weights.]*
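As a toy illustration of this kind of piecing together (hand-written weights, not the real circuit), the sketch below places positive weights for four curve orientations at the spatial positions where those curves would sit on a small circle, and checks that a circle-like arrangement of curve activations excites the unit while a straight line does not:

```python
import numpy as np

# Hypothetical curve channels, oriented so the curve "opens" toward the
# right, top, left, and bottom respectively.
RIGHT, TOP, LEFT, BOTTOM = range(4)

# Weight template for a small-circle unit over a 3x3 neighbourhood:
# each side of the circle should contain a curve opening toward the center.
W = np.zeros((4, 3, 3))
W[LEFT,   1, 2] = 1.0   # right side of the circle: curve opens left
W[RIGHT,  1, 0] = 1.0   # left side: curve opens right
W[BOTTOM, 0, 1] = 1.0   # top: curve opens downward
W[TOP,    2, 1] = 1.0   # bottom: curve opens upward

def unit_response(x):
    """x: activations of the 4 curve channels over a 3x3 patch."""
    return float(np.maximum((W * x).sum(), 0.0))

# A circle-like input: each curve appears exactly where the template expects it.
circle = np.zeros((4, 3, 3))
circle[LEFT, 1, 2] = circle[RIGHT, 1, 0] = 1.0
circle[BOTTOM, 0, 1] = circle[TOP, 2, 1] = 1.0

# A straight horizontal line: one curve/edge orientation active along a row.
line = np.zeros((4, 3, 3))
line[TOP, 1, :] = 1.0

print("circle:", unit_response(circle))  # 4.0 -> strongly excited
print("line:  ", unit_response(line))    # 0.0 -> not excited
```

The real circuit also has inhibitory weights (the diagram above shows both excitation and inhibition); this sketch keeps only the excitatory part.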
**Triangle Detectors:** While on the matter of basic shapes, we also see [triangle detectors](#group_mixed3a_angles) form from earlier [line (`conv2d2`)](#group_conv2d2_line) and [shifted line (`conv2d2`)](#group_conv2d2_shifted_line) detectors.

*[Circuit diagram: the circuit constructing a triangle detector. The choice of which neurons in the previous layer to show is slightly cherry-picked for pedagogy: the six neurons with the highest-magnitude weights to the triangle are shown, plus one other neuron with slightly weaker weights. (Left-leaning edges have slightly higher weights than right ones, but it seemed more illustrative to show two of both.) Click on neurons to see the full weights.]*

However, in practice, these triangle detectors (and other angle units) often seem to just be used as multi-edge detectors downstream, or in conjunction with many other units to detect convex boundaries.

The selected circuits discussed above only scratch the surface of the intricate structure in `mixed3a`. Below, we provide a taxonomized overview of all of them:


### [**Texture** 25%](#group_mixed3a_texture)

*[Feature visualizations: 65 units, each linked to its page in the InceptionV1 weight explorer.]*

This is a broad, not very well defined category for units that seem to look for simple local structures over a wide receptive field, including mixtures of colors. Many live in a branch consisting of a maxpool followed by a 1x1 conv, which structurally encourages this: maxpool branches (i.e. maxpool 5x5 stride 1 -> conv 1x1) have large receptive fields, but can't control where in their receptive field each feature they detect is, nor the relative position of these features. In early vision, this unstructured feature detection makes them a good fit for textures.
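To make that structural point concrete, here is a minimal NumPy sketch of such a branch (a stride-1 5x5 maxpool followed by a 1x1 convolution, with random weights standing in for the learned ones): the branch sees a wide window, but moving a feature around inside that window does not change its response, which is why this branch shape favors texture-like features over precisely arranged shapes.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)

def maxpool_s1(x, k=5):
    """Stride-1 k x k max pooling over a (C, H, W) activation tensor."""
    windows = sliding_window_view(x, (k, k), axis=(1, 2))  # (C, H-k+1, W-k+1, k, k)
    return windows.max(axis=(-1, -2))

def conv1x1(x, w):
    """1x1 convolution: a per-position linear map across channels."""
    return np.einsum("oc,chw->ohw", w, x)

x = np.zeros((8, 12, 12))          # 8 hypothetical input channels
x[3, 4, 7] = 1.0                   # a single feature activation somewhere

w = rng.normal(size=(4, 8))        # random 1x1 weights mapping 8 -> 4 channels
y1 = conv1x1(maxpool_s1(x), w)

# Shift the feature by a couple of pixels *within* the pooling window:
x2 = np.zeros_like(x)
x2[3, 5, 6] = 1.0
y2 = conv1x1(maxpool_s1(x2), w)

# Output positions whose 5x5 window contains both locations respond identically,
# so the branch "sees" a large area but not where the feature sits inside it.
print(np.allclose(y1[:, 3, 4], y2[:, 3, 4]))  # True
```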
### [**Color Center-Surround** 12%](#group_mixed3a_color_center_surround)

*[Feature visualizations: 30 units, each linked to its page in the InceptionV1 weight explorer.]*

These units look for one color in the center, and another (usually opposite) color surrounding it. They are typically much more sensitive to the center color than the surrounding one. In visual neuroscience, center-surround units are classically an extremely low-level feature, but we see them in the later parts of early vision. Compare to earlier [Color Center-Surround (`conv2d2`)](#group_conv2d2_color_center_surround) and later [Color Center-Surround (`mixed3b`)](#group_mixed3b_color_center_surround).


### [**High-Low Frequency** 6%](#group_mixed3a_high_low_frequency)

*[Feature visualizations: 15 units, each linked to its page in the InceptionV1 weight explorer.]*

These units look for transitions from high-frequency texture to low-frequency texture. They are primarily used by [boundary detectors (`mixed3b`)](#group_mixed3b_boundary) as an additional cue for a boundary between objects. (Larger-scale high-low frequency detectors can be found in `mixed4a` (245, 93, 392, 301), but are not discussed in this article.) A detailed article on these is forthcoming.
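As a rough sketch of what "high-low frequency" means operationally (a hand-written stand-in, not the learned circuit), a unit of this kind can be thought of as comparing a crude measure of local high-frequency energy on the two halves of its receptive field:

```python
import numpy as np

def high_freq_energy(patch):
    """Crude proxy for local high-frequency content: mean squared
    difference between horizontally adjacent pixels."""
    return float(np.mean(np.diff(patch, axis=1) ** 2))

def high_low_response(patch):
    """Left half high-frequency, right half low-frequency -> positive response."""
    h, w = patch.shape
    left, right = patch[:, : w // 2], patch[:, w // 2 :]
    return max(high_freq_energy(left) - high_freq_energy(right), 0.0)

rng = np.random.default_rng(0)

# Left half: fine noisy texture (high frequency); right half: smooth ramp.
textured = np.concatenate(
    [rng.normal(0, 1, size=(16, 8)), np.tile(np.linspace(0, 1, 8), (16, 1))],
    axis=1,
)
smooth = np.tile(np.linspace(0, 1, 16), (16, 1))  # uniformly smooth image

print(high_low_response(textured))  # clearly positive
print(high_low_response(smooth))    # ~0
```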
### [**Brightness Gradient** 6%](#group_mixed3a_brightness_gradient)

*[Feature visualizations: 15 units, each linked to its page in the InceptionV1 weight explorer.]*

These units detect brightness gradients. Among other things, they will help detect specularity (shininess), curved surfaces, and the boundary of objects. Compare to earlier [brightness gradients (`conv2d2`)](#group_conv2d2_early_brightness_gradient) and later [brightness gradients (`mixed3b`)](#group_mixed3b_brightness_gradients).


### [**Color Contrast** 5%](#group_mixed3a_color_contrast)

*[Feature visualizations: 14 units, each linked to its page in the InceptionV1 weight explorer.]*

These units look for one color on one side of their receptive field, and another (usually opposite) color on the opposing side. They typically don't care about the exact position or orientation of the transition. Compare to earlier color contrast ([`conv2d0`](#group_conv2d0_color_contrast), [`conv2d1`](#group_conv2d1_color_contrast), [`conv2d2`](#group_conv2d2_color_contrast)) and later color contrast ([`mixed3b`](#group_mixed3b_color_contrast_gradient)).


### [**Complex Center-Surround** 5%](#group_mixed3a_complex_center_surround)

*[Feature visualizations: 14 units, each linked to its page in the InceptionV1 weight explorer.]*

This is a broad, not very well defined category for center-surround units that detect a pattern or complex texture in their center.


### [**Line Misc.** 5%](#group_mixed3a_line_misc.)

*[Feature visualizations: 14 units, each linked to its page in the InceptionV1 weight explorer.]*

Broad, low-confidence organizational category.


### [**Lines** 5%](#group_mixed3a_lines)

*[Feature visualizations: 14 units, each linked to its page in the InceptionV1 weight explorer.]*

Units used to detect extended lines, often further excited by different colors on each side. A few are highly combed line detectors that aren't obviously such at first glance. Whether to include a unit was often decided by whether it seems to be used by downstream client units as a line detector.
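As an aside on methodology, "used by downstream client units" is something one can read directly off the weights. A small NumPy sketch of the idea, using a random weight tensor in place of the real next-layer convolution weights (shapes and numbers here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the next layer's 5x5 convolution weights:
# shape (output_channels, input_channels, kernel_h, kernel_w).
W_next = rng.normal(size=(480, 256, 5, 5))

def top_clients(unit, weights, k=5):
    """Return the k downstream units with the largest total weight
    magnitude reading from `unit`, along with those magnitudes."""
    strength = np.abs(weights[:, unit]).sum(axis=(1, 2))  # sum over the 5x5 kernel
    order = np.argsort(strength)[::-1][:k]
    return list(zip(order.tolist(), strength[order].round(2).tolist()))

# Which hypothetical downstream units read most strongly from unit 227?
print(top_clients(227, W_next))
```

With the real weights, this is essentially the kind of information the per-neuron weight-explorer pages linked throughout this article expose (a unit's forwards and backwards weights).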
### [**Other Units** 5%](#group_mixed3a_other_units)

*[Feature visualizations: 14 units, each linked to its page in the InceptionV1 weight explorer.]*

Catch-all category for all other units.


### [**Repeating patterns** 5%](#group_mixed3a_repeating_patterns)

*[Feature visualizations: 12 units, each linked to its page in the InceptionV1 weight explorer.]*

This is a broad, catch-all category for units that seem to look for repeating local patterns more complex than textures.


### [**Curves** 4%](#group_mixed3a_curves)

*[Feature visualizations: 11 units, each linked to its page in the InceptionV1 weight explorer.]*

These curve detectors detect significantly larger-radius curves than their predecessors. They will be refined into more specific, larger curve detectors in the next layer. Compare to earlier [curves (`conv2d2`)](#group_conv2d2_tiny_curves) and later [curves (`mixed3b`)](#group_mixed3b_curves). See the [full paper on curve detectors](https://distill.pub/2020/circuits/curve-detectors/).


### [**BW vs Color** 4%](#group_mixed3a_bw_vs_color)

*[Feature visualizations: 9 units, each linked to its page in the InceptionV1 weight explorer.]*

These "black and white" detectors respond to absences of color. Prior to this, color detectors contrast with the opposite hue, but from this point on we'll see many compare to the absence of color. See also the [BW circuit example and discussion](#mixed3a_discussion_BW).


### [**Angles** 3%](#group_mixed3a_angles)

*[Feature visualizations: 8 units, each linked to its page in the InceptionV1 weight explorer.]*

Units that detect multiple lines, forming angles, triangles and squares. They generally respond to any of the individual lines, and more strongly to them together.


### [**Fur Precursors** 3%](#group_mixed3a_fur_precursors)

*[Feature visualizations: 7 units, each linked to its page in the InceptionV1 weight explorer.]*

These units are not yet highly selective for fur (they also fire for other high-frequency patterns), but their primary use in the next layer is supporting [fur detection](#group_mixed3b_generic_oriented_fur). At the 224x224 image resolution, individual fur hairs are generally not detectable, but tufts of fur are. These units use Gabor textures to detect those tufts in different orientations. They also detect lower-frequency edges or changes in lighting perpendicular to the tufts.


### [**Eyes / Small Circles** 2%](#group_mixed3a_eyes_small_circles)

*[Feature visualizations: 5 units, each linked to its page in the InceptionV1 weight explorer.]*

We think of eyes as high-level features, but small eye detectors actually form very early. Compare to later [eye detectors (`mixed3b`)](#group_mixed3b_eyes). See also the [circuit example and discussion](#mixed3a_discussion_small_circle).


### [**Crosses / Diverging Lines** 2%](#group_mixed3a_crosses_diverging_lines)

*[Feature visualizations: 4 units, each linked to its page in the InceptionV1 weight explorer.]*

These units seem to respond to lines crossing or to lines diverging from a central point.


### [**Thick Lines** 1%](#group_mixed3a_thick_lines)

*[Feature visualizations: 3 units, each linked to its page in the InceptionV1 weight explorer.]*

Low-confidence organizational category.


### [**Line Ends** 1%](#group_mixed3a_line_ends)

*[Feature visualizations: 2 units, each linked to its page in the InceptionV1 weight explorer.]*

These units detect a line ending or sharply turning. Often used in boundary detection and more complex shape detectors.


---


`mixed3b`
---------

`mixed3b` straddles two levels of abstraction. On the one hand, it has some quite sophisticated features that don't really seem like they should be characterized as "early" or "low-level": object boundary detectors, early head detectors, and more sophisticated part-of-shape detectors. On the other hand, it also has many units that still feel quite low-level, such as color center-surround units.

**Boundary detectors:** One of the most striking transitions in `mixed3b` is the formation of boundary detectors. When you first look at the feature visualizations and dataset examples, you might think these are just another iteration of edge or curve detectors. But they are in fact combining a variety of cues to detect boundaries and transitions between objects. Perhaps the most important are the high-low frequency detectors we saw develop at the previous layer. Notice that such a unit largely doesn't care which direction the change in color or frequency goes, just that there is a change.

*[Circuit diagram: `mixed3b` creates boundary detectors that rely on many cues, including changes in frequency, changes in color, and actual edges: high-low frequency detectors (these vary in orientation, preferring concave vs. convex boundaries, and type of foreground), edges, ends of lines, and color contrasts.]*
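A minimal sketch of that directional indifference, with toy cue channels and hand-picked weights rather than the actual `mixed3b` weights: if a boundary unit takes positive weights from *both* directions of each cue, it responds to a change no matter which way the change goes.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical cue channels, in pairs covering both directions of each cue:
#   0: high->low frequency   1: low->high frequency
#   2: color A -> color B    3: color B -> color A
#   4: edge                  5: end of line
w_boundary = np.array([1.0, 1.0, 0.7, 0.7, 0.9, 0.4])  # positive for both directions

def boundary_response(cues):
    return float(relu(w_boundary @ cues))

# The same boundary seen from one side or the other flips the direction of
# each cue, but the unit responds identically either way.
one_way   = np.array([1.0, 0.0, 1.0, 0.0, 0.8, 0.0])
other_way = np.array([0.0, 1.0, 0.0, 1.0, 0.8, 0.0])
no_change = np.zeros(6)

print(boundary_response(one_way))    # 2.42
print(boundary_response(other_way))  # 2.42
print(boundary_response(no_change))  # 0.0
```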
We sometimes find it useful to think about the "goal" of early vision. Gradient descent will only create features if they are useful for features in later layers. Which later features incentivized the creation of the features we see in early vision? These boundary detectors seem to be the "goal" of the [high-low frequency detectors (`mixed3a`)](#group_mixed3a_high_low_frequency) we saw in the previous layer.

**Curve-based Features:** Another major theme in this layer is the emergence of more complex and specific shape detectors based on curves. These include more sophisticated curves, [circles](#group_mixed3b_circles_loops), [S-shapes](#group_mixed3b_curve_shapes), [spirals](#group_mixed3b_curve_shapes), [divots](#group_mixed3b_divots), and ["evolutes"](#group_mixed3b_evolute) (a term we've repurposed to describe units detecting curves facing away from the middle). We'll discuss these in detail in a forthcoming article on curve circuits, but they warrant mention here.

Conceptually, you can think of the weights as piecing together curve detectors as something like this:

*[Conceptual diagram: curve detector weights pieced together into curve, circle, spiral, and evolute detectors.]*
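In the same toy-weights style as the small-circle sketch earlier (invented numbers, not the real circuit), the difference between a circle-like unit and an evolute-like unit can be expressed purely by how the same curve orientations are arranged spatially: facing toward the center versus facing away from it.

```python
import numpy as np

# Hypothetical curve channels: the curve "opens" toward the right, top, left, bottom.
RIGHT, TOP, LEFT, BOTTOM = range(4)

# The four spatial positions around the center of a 3x3 template.
positions = {"right": (1, 2), "top": (0, 1), "left": (1, 0), "bottom": (2, 1)}

# On each side, a circle's arc opens toward the center; an evolute's opens away.
toward_center = {"right": LEFT, "top": BOTTOM, "left": RIGHT, "bottom": TOP}
away_from_center = {"right": RIGHT, "top": TOP, "left": LEFT, "bottom": BOTTOM}

def template(orientation_at_side):
    W = np.zeros((4, 3, 3))
    for side, (r, c) in positions.items():
        W[orientation_at_side[side], r, c] = 1.0
    return W

W_circle, W_evolute = template(toward_center), template(away_from_center)

circle_input, evolute_input = W_circle.copy(), W_evolute.copy()  # idealised inputs

for name, x in [("circle input", circle_input), ("evolute input", evolute_input)]:
    print(name,
          "-> circle unit:", (W_circle * x).sum(),
          " evolute unit:", (W_evolute * x).sum())
# The circle-arranged input drives the circle unit (4.0) but not the evolute
# unit (0.0), and vice versa: same parts, different spatial arrangement.
```

The real units are of course learned, spatially much smoother, and also use inhibition; this only illustrates the arrangement idea.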
These two are primairly used to create head detectors in the next layer.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Again, these circuits only scratch the surface of `mixed3b`.\n Since it’s a larger layer with lots of families, we’ll go through a couple particularly interesting and well understood families first:\n\n\n\n\n\n\n\n### [**Boundary** 8%](#group_mixed3b_boundary)\n\n\n\n[![](images/neuron/mixed3b_220.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_220.html)220[![](images/neuron/mixed3b_402.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_402.html)402[![](images/neuron/mixed3b_364.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_364.html)364[![](images/neuron/mixed3b_293.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_293.html)293[![](images/neuron/mixed3b_356.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_356.html)356[![](images/neuron/mixed3b_151.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_151.html)151[![](images/neuron/mixed3b_203.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_203.html)203[![](images/neuron/mixed3b_394.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_394.html)394[![](images/neuron/mixed3b_376.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_376.html)376[![](images/neuron/mixed3b_400.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_400.html)400[![](images/neuron/mixed3b_328.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_328.html)328[![](images/neuron/mixed3b_219.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_219.html)219[![](images/neuron/mixed3b_320.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_320.html)320[![](images/neuron/mixed3b_313.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_313.html)313[![](images/neuron/mixed3b_329.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_329.html)329[![](images/neuron/mixed3b_321.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_321.html)321[![](images/neuron/mixed3b_251.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_251.html)251[![](images/neuron/mixed3b_298.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_298.html)298[![](images/neuron/mixed3b_257.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_257.html)257[![](images/neuron/mixed3b_143.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_143.html)143[![](images/neuron/mixed3b_366.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_366.html)366[![](images/neuron/mixed3b_345.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_345.html)345[![](images/neuron/mixed3b_405.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_405.h
tml)405[![](images/neuron/mixed3b_414.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_414.html)414[![](images/neuron/mixed3b_301.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_301.html)301[![](images/neuron/mixed3b_368.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_368.html)368[![](images/neuron/mixed3b_398.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_398.html)398[![](images/neuron/mixed3b_383.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_383.html)383[![](images/neuron/mixed3b_396.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_396.html)396[![](images/neuron/mixed3b_261.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_261.html)261[![](images/neuron/mixed3b_184.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_184.html)184[![](images/neuron/mixed3b_144.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_144.html)144[![](images/neuron/mixed3b_360.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_360.html)360[![](images/neuron/mixed3b_183.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_183.html)183[![](images/neuron/mixed3b_239.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_239.html)239[![](images/neuron/mixed3b_386.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_386.html)386\n\n\nShow all 36 neurons.\nCollapse neurons.\nThese units use multiple cues to detect the boundaries of objects. They vary in orientation, detecting convex/concave/straight boundaries, and detecting artificial vs fur foregrounds. 
Cues they rely on include line detectors, high-low frequency detectors, and color contrast.\n\n\n\n### [**Proto-Head** 3%](#group_mixed3b_proto_head)\n\n\n\n[![](images/neuron/mixed3b_362.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_362.html)362[![](images/neuron/mixed3b_413.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_413.html)413[![](images/neuron/mixed3b_334.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_334.html)334[![](images/neuron/mixed3b_331.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_331.html)331[![](images/neuron/mixed3b_174.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_174.html)174[![](images/neuron/mixed3b_225.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_225.html)225[![](images/neuron/mixed3b_393.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_393.html)393[![](images/neuron/mixed3b_185.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_185.html)185[![](images/neuron/mixed3b_435.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_435.html)435[![](images/neuron/mixed3b_180.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_180.html)180[![](images/neuron/mixed3b_441.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_441.html)441[![](images/neuron/mixed3b_163.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_163.html)163\n\n\nShow all 12 neurons.\nCollapse neurons.\nThe tiny eye detectors, along with texture detectors for fur, hair and skin developed at the previous layer enable these early head detectors, which will continue to be refined in the next layer.\n\n\n\n### [**Generic, Oriented Fur** 2%](#group_mixed3b_generic_oriented_fur)\n\n\n\n[![](images/neuron/mixed3b_57.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_57.html)57[![](images/neuron/mixed3b_387.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_387.html)387[![](images/neuron/mixed3b_404.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_404.html)404[![](images/neuron/mixed3b_333.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_333.html)333[![](images/neuron/mixed3b_375.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_375.html)375[![](images/neuron/mixed3b_381.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_381.html)381[![](images/neuron/mixed3b_335.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_335.html)335[![](images/neuron/mixed3b_378.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_378.html)378[![](images/neuron/mixed3b_62.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_62.html)62[![](images/neuron/mixed3b_52.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_52.html)52\n\n\n \nWe don’t typically think of fur as an oriented feature, but it is. 
These units detect fur parting in various ways, much like how hair on your head parts.\n\n\n\n### [**Curves** 2%](#group_mixed3b_curves)\n\n\n\n[![](images/neuron/mixed3b_379.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_379.html)379[![](images/neuron/mixed3b_406.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_406.html)406[![](images/neuron/mixed3b_385.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_385.html)385[![](images/neuron/mixed3b_343.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_343.html)343[![](images/neuron/mixed3b_342.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_342.html)342[![](images/neuron/mixed3b_388.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_388.html)388[![](images/neuron/mixed3b_340.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_340.html)340[![](images/neuron/mixed3b_330.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_330.html)330[![](images/neuron/mixed3b_349.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_349.html)349[![](images/neuron/mixed3b_324.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_324.html)324\n\n\n \nThe third iteration of curve detectors. They detect larger radii curves than their predecessors, and are the first to not slightly fire for curves rotated 180 degrees. Compare to the earlier [curves (conv2d2)](#group_conv2d2_tiny_curves) and [curves (mixed3a)](#group_mixed3a_curves). \n \nSee the [full paper on curve detectors](https://distill.pub/2020/circuits/curve-detectors/).\n\n\n\n### [**Divots** 2%](#group_mixed3b_divots)\n\n\n\n[![](images/neuron/mixed3b_395.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_395.html)395[![](images/neuron/mixed3b_159.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_159.html)159[![](images/neuron/mixed3b_237.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_237.html)237[![](images/neuron/mixed3b_409.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_409.html)409[![](images/neuron/mixed3b_357.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_357.html)357[![](images/neuron/mixed3b_190.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_190.html)190[![](images/neuron/mixed3b_212.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_212.html)212[![](images/neuron/mixed3b_211.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_211.html)211[![](images/neuron/mixed3b_198.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_198.html)198[![](images/neuron/mixed3b_218.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_218.html)218\n\n\n \nCurve-like detectors for sharp corners or bumps.\n\n\n\n### [**Square / Grid** 
2%](#group_mixed3b_square_grid)\n\n\n\n[![](images/neuron/mixed3b_392.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_392.html)392[![](images/neuron/mixed3b_361.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_361.html)361[![](images/neuron/mixed3b_401.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_401.html)401[![](images/neuron/mixed3b_68.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_68.html)68[![](images/neuron/mixed3b_341.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_341.html)341[![](images/neuron/mixed3b_382.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_382.html)382[![](images/neuron/mixed3b_397.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_397.html)397[![](images/neuron/mixed3b_66.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_66.html)66[![](images/neuron/mixed3b_125.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_125.html)125\n\n\n \nUnits detecting grid patterns.\n\n\n\n### [**Brightness Gradients** 1%](#group_mixed3b_brightness_gradients)\n\n\n\n[![](images/neuron/mixed3b_0.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_0.html)0[![](images/neuron/mixed3b_317.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_317.html)317[![](images/neuron/mixed3b_136.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_136.html)136[![](images/neuron/mixed3b_455.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_455.html)455[![](images/neuron/mixed3b_417.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_417.html)417[![](images/neuron/mixed3b_469.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_469.html)469\n\n\n \nThese units detect brightness gradients. This is their third iteration; compare to earlier [brightness gradients (`conv2d2`)](#group_conv2d2_early_brightness_gradient) and [brightness gradients (`mixed3a`)](#group_mixed3a_brightness_gradient).\n\n\n\n### [**Eyes** 1%](#group_mixed3b_eyes)\n\n\n\n[![](images/neuron/mixed3b_370.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_370.html)370[![](images/neuron/mixed3b_352.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_352.html)352[![](images/neuron/mixed3b_363.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_363.html)363[![](images/neuron/mixed3b_322.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_322.html)322[![](images/neuron/mixed3b_83.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_83.html)83[![](images/neuron/mixed3b_199.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_199.html)199\n\n\n \nAgain, we continue to see eye detectors quite early in vision. Note that several of these detect larger eyes than the earlier [eye detectors (mixed3a)](#group_mixed3a_eyes_small_circles). 
In the next layer, we see much larger scale eye detectors again.

### [**Shallow Curves** 1%](#group_mixed3b_shallow_curves)

[![](images/neuron/mixed3b_403.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_403.html)403[![](images/neuron/mixed3b_353.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_353.html)353[![](images/neuron/mixed3b_355.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_355.html)355[![](images/neuron/mixed3b_336.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_336.html)336

Detectors for curves with wider radii than [regular curve detectors](#group_mixed3b_curves).

### [**Curve Shapes** 1%](#group_mixed3b_curve_shapes)

[![](images/neuron/mixed3b_325.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_325.html)325[![](images/neuron/mixed3b_338.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_338.html)338[![](images/neuron/mixed3b_327.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_327.html)327[![](images/neuron/mixed3b_347.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_347.html)347

Simple shapes created by composing curves, such as spirals and S-curves.

### [**Circles / Loops** 1%](#group_mixed3b_circles_loops)

[![](images/neuron/mixed3b_389.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_389.html)389[![](images/neuron/mixed3b_384.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_384.html)384[![](images/neuron/mixed3b_346.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_346.html)346[![](images/neuron/mixed3b_323.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_323.html)323

These units piece together curves into a circle or partial circle. Opposite of [evolute](#group_mixed3b_evolute).

### [**Circle Cluster** 1%](#group_mixed3b_circle_cluster)

[![](images/neuron/mixed3b_446.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_446.html)446[![](images/neuron/mixed3b_462.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_462.html)462[![](images/neuron/mixed3b_82.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_82.html)82

Units detecting circles and curves without necessarily requiring spatial coherence.

### [**Double Curves** 1%](#group_mixed3b_double_curves)

[![](images/neuron/mixed3b_359.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_359.html)359[![](images/neuron/mixed3b_337.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_337.html)337[![](images/neuron/mixed3b_380.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_380.html)380

Weights appear to be two curve detectors added together. 
Likely best thought of as a polysemantic neuron.\n\n\n\n### [**Evolute** 0.2%](#group_mixed3b_evolute)\n\n\n\n[![](images/neuron/mixed3b_373.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_373.html)373\n\n\n \nDetects curves facing away from the middle. Opposite of [circles](#group_mixed3b_circles_loops). Term repurposed from [mathematical evolutes](https://en.wikipedia.org/wiki/Evolute) which can sometimes be visually similar.\n\n \n\n\n In addition to the above features, are also a lot of other features which don’t fall into such a neat categorization.\n One frustrating issue is that `mixed3b` has many units that don’t have a simple low-level articulation, but also are not yet very specific to a high-level feature.\n For example, there are units which seem to be developing towards detecting certain animal body parts, but still respond to many other stimuli as well and so are difficult to describe.\n\n\n\n\n\n\n\n### [**Color Center-Surround** 16%](#group_mixed3b_color_center_surround)\n\n\n\n[![](images/neuron/mixed3b_285.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_285.html)285[![](images/neuron/mixed3b_451.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_451.html)451[![](images/neuron/mixed3b_208.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_208.html)208[![](images/neuron/mixed3b_122.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_122.html)122[![](images/neuron/mixed3b_93.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_93.html)93[![](images/neuron/mixed3b_75.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_75.html)75[![](images/neuron/mixed3b_46.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_46.html)46[![](images/neuron/mixed3b_294.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_294.html)294[![](images/neuron/mixed3b_44.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_44.html)44[![](images/neuron/mixed3b_247.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_247.html)247[![](images/neuron/mixed3b_91.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_91.html)91[![](images/neuron/mixed3b_14.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_14.html)14[![](images/neuron/mixed3b_10.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_10.html)10[![](images/neuron/mixed3b_271.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_271.html)271[![](images/neuron/mixed3b_60.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_60.html)60[![](images/neuron/mixed3b_80.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_80.html)80[![](images/neuron/mixed3b_84.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_84.html)84[![](images/neuron/mixed3b_70.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_70.html)70[![](images/neuron/mixed3b_202.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_202.html)202[![](image
s/neuron/mixed3b_422.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_422.html)422[![](images/neuron/mixed3b_48.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_48.html)48[![](images/neuron/mixed3b_436.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_436.html)436[![](images/neuron/mixed3b_65.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_65.html)65[![](images/neuron/mixed3b_300.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_300.html)300[![](images/neuron/mixed3b_105.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_105.html)105[![](images/neuron/mixed3b_34.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_34.html)34[![](images/neuron/mixed3b_121.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_121.html)121[![](images/neuron/mixed3b_424.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_424.html)424[![](images/neuron/mixed3b_457.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_457.html)457[![](images/neuron/mixed3b_186.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_186.html)186[![](images/neuron/mixed3b_23.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_23.html)23[![](images/neuron/mixed3b_479.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_479.html)479[![](images/neuron/mixed3b_89.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_89.html)89[![](images/neuron/mixed3b_283.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_283.html)283[![](images/neuron/mixed3b_22.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_22.html)22[![](images/neuron/mixed3b_124.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_124.html)124[![](images/neuron/mixed3b_6.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_6.html)6[![](images/neuron/mixed3b_9.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_9.html)9[![](images/neuron/mixed3b_50.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_50.html)50[![](images/neuron/mixed3b_5.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_5.html)5[![](images/neuron/mixed3b_71.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_71.html)71[![](images/neuron/mixed3b_59.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_59.html)59[![](images/neuron/mixed3b_182.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_182.html)182[![](images/neuron/mixed3b_87.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_87.html)87[![](images/neuron/mixed3b_308.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_308.html)308[![](images/neuron/mixed3b_428.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_428.html)428[![](images/neuron/mix
ed3b_109.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_109.html)109[![](images/neuron/mixed3b_141.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_141.html)141[![](images/neuron/mixed3b_12.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_12.html)12[![](images/neuron/mixed3b_474.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_474.html)474[![](images/neuron/mixed3b_112.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_112.html)112[![](images/neuron/mixed3b_192.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_192.html)192[![](images/neuron/mixed3b_2.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_2.html)2[![](images/neuron/mixed3b_177.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_177.html)177[![](images/neuron/mixed3b_249.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_249.html)249[![](images/neuron/mixed3b_281.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_281.html)281[![](images/neuron/mixed3b_284.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_284.html)284[![](images/neuron/mixed3b_30.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_30.html)30[![](images/neuron/mixed3b_27.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_27.html)27[![](images/neuron/mixed3b_255.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_255.html)255[![](images/neuron/mixed3b_53.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_53.html)53[![](images/neuron/mixed3b_432.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_432.html)432[![](images/neuron/mixed3b_475.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_475.html)475[![](images/neuron/mixed3b_79.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_79.html)79[![](images/neuron/mixed3b_67.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_67.html)67[![](images/neuron/mixed3b_25.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_25.html)25[![](images/neuron/mixed3b_351.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_351.html)351[![](images/neuron/mixed3b_420.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_420.html)420[![](images/neuron/mixed3b_152.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_152.html)152[![](images/neuron/mixed3b_26.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_26.html)26[![](images/neuron/mixed3b_193.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_193.html)193[![](images/neuron/mixed3b_448.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_448.html)448[![](images/neuron/mixed3b_153.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_153.html)153[![](images/neur
on/mixed3b_164.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_164.html)164[![](images/neuron/mixed3b_113.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_113.html)113[![](images/neuron/mixed3b_216.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_216.html)216[![](images/neuron/mixed3b_259.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_259.html)259\n\n\nShow all 77 neurons.\nCollapse neurons.\nThese units look for one color in the center, and another color surrounding it. These units likely have many subtleties about the range of hues, texture preferences, and interactions that similar neurons in earlier layers may not have. Note how many units detect the absence (or generic presence) of color, building off of the [black and white detectors](#group_mixed3a_bw_vs_color) in `mixed3a`. Compare to earlier [Color Center-Surround (`conv2d2`)](#group_conv2d2_color_center_surround) and [(Color Center-Surround `mixed3a`)](#group_mixed3a_color_center_surround).\n\n\n\n### [**Complex Center-Surround** 15%](#group_mixed3b_complex_center_surround)\n\n\n\n[![](images/neuron/mixed3b_299.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_299.html)299[![](images/neuron/mixed3b_139.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_139.html)139[![](images/neuron/mixed3b_7.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_7.html)7[![](images/neuron/mixed3b_170.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_170.html)170[![](images/neuron/mixed3b_16.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_16.html)16[![](images/neuron/mixed3b_28.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_28.html)28[![](images/neuron/mixed3b_291.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_291.html)291[![](images/neuron/mixed3b_439.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_439.html)439[![](images/neuron/mixed3b_443.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_443.html)443[![](images/neuron/mixed3b_69.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_69.html)69[![](images/neuron/mixed3b_11.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_11.html)11[![](images/neuron/mixed3b_13.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_13.html)13[![](images/neuron/mixed3b_56.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_56.html)56[![](images/neuron/mixed3b_116.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_116.html)116[![](images/neuron/mixed3b_117.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_117.html)117[![](images/neuron/mixed3b_72.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_72.html)72[![](images/neuron/mixed3b_36.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_36.html)36[![](images/neuron/mixed3b_35.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weig
ht-explorer/mixed3b_35.html)35[![](images/neuron/mixed3b_41.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_41.html)41[![](images/neuron/mixed3b_51.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_51.html)51[![](images/neuron/mixed3b_55.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_55.html)55[![](images/neuron/mixed3b_88.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_88.html)88[![](images/neuron/mixed3b_101.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_101.html)101[![](images/neuron/mixed3b_110.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_110.html)110[![](images/neuron/mixed3b_114.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_114.html)114[![](images/neuron/mixed3b_158.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_158.html)158[![](images/neuron/mixed3b_161.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_161.html)161[![](images/neuron/mixed3b_169.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_169.html)169[![](images/neuron/mixed3b_176.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_176.html)176[![](images/neuron/mixed3b_215.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_215.html)215[![](images/neuron/mixed3b_228.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_228.html)228[![](images/neuron/mixed3b_230.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_230.html)230[![](images/neuron/mixed3b_232.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_232.html)232[![](images/neuron/mixed3b_233.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_233.html)233[![](images/neuron/mixed3b_234.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_234.html)234[![](images/neuron/mixed3b_238.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_238.html)238[![](images/neuron/mixed3b_242.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_242.html)242[![](images/neuron/mixed3b_244.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_244.html)244[![](images/neuron/mixed3b_245.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_245.html)245[![](images/neuron/mixed3b_252.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_252.html)252[![](images/neuron/mixed3b_256.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_256.html)256[![](images/neuron/mixed3b_275.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_275.html)275[![](images/neuron/mixed3b_280.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_280.html)280[![](images/neuron/mixed3b_290.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_290.html)290[![](images/neuron/mixed3b_296.jpg)](https://storage.googleapis.com/distill-cir
cuits/inceptionv1-weight-explorer/mixed3b_296.html)296[![](images/neuron/mixed3b_297.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_297.html)297[![](images/neuron/mixed3b_302.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_302.html)302[![](images/neuron/mixed3b_310.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_310.html)310[![](images/neuron/mixed3b_410.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_410.html)410[![](images/neuron/mixed3b_442.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_442.html)442[![](images/neuron/mixed3b_17.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_17.html)17[![](images/neuron/mixed3b_8.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_8.html)8[![](images/neuron/mixed3b_15.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_15.html)15[![](images/neuron/mixed3b_18.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_18.html)18[![](images/neuron/mixed3b_20.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_20.html)20[![](images/neuron/mixed3b_24.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_24.html)24[![](images/neuron/mixed3b_31.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_31.html)31[![](images/neuron/mixed3b_37.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_37.html)37[![](images/neuron/mixed3b_42.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_42.html)42[![](images/neuron/mixed3b_61.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_61.html)61[![](images/neuron/mixed3b_73.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_73.html)73[![](images/neuron/mixed3b_92.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_92.html)92[![](images/neuron/mixed3b_315.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_315.html)315[![](images/neuron/mixed3b_103.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_103.html)103[![](images/neuron/mixed3b_104.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_104.html)104[![](images/neuron/mixed3b_118.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_118.html)118[![](images/neuron/mixed3b_119.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_119.html)119[![](images/neuron/mixed3b_131.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_131.html)131[![](images/neuron/mixed3b_274.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_274.html)274[![](images/neuron/mixed3b_278.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_278.html)278[![](images/neuron/mixed3b_289.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_289.html)289[![](images/neuron/mixed3b_29.jpg)](https://storage.googleapis.com/distill-circuit
s/inceptionv1-weight-explorer/mixed3b_29.html)29[![](images/neuron/mixed3b_147.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_147.html)147\n\n\nShow all 73 neurons.\nCollapse neurons.\nThis is a broad, not very well defined category for center-surround units that detect a pattern or complex texture in their center.\n\n\n\n### [**Texture** 9%](#group_mixed3b_texture)\n\n\n\n[![](images/neuron/mixed3b_3.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_3.html)3[![](images/neuron/mixed3b_40.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_40.html)40[![](images/neuron/mixed3b_32.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_32.html)32[![](images/neuron/mixed3b_54.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_54.html)54[![](images/neuron/mixed3b_74.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_74.html)74[![](images/neuron/mixed3b_309.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_309.html)309[![](images/neuron/mixed3b_267.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_267.html)267[![](images/neuron/mixed3b_438.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_438.html)438[![](images/neuron/mixed3b_416.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_416.html)416[![](images/neuron/mixed3b_440.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_440.html)440[![](images/neuron/mixed3b_460.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_460.html)460[![](images/neuron/mixed3b_276.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_276.html)276[![](images/neuron/mixed3b_458.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_458.html)458[![](images/neuron/mixed3b_132.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_132.html)132[![](images/neuron/mixed3b_133.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_133.html)133[![](images/neuron/mixed3b_106.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_106.html)106[![](images/neuron/mixed3b_120.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_120.html)120[![](images/neuron/mixed3b_123.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_123.html)123[![](images/neuron/mixed3b_426.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_426.html)426[![](images/neuron/mixed3b_434.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_434.html)434[![](images/neuron/mixed3b_429.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_429.html)429[![](images/neuron/mixed3b_445.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_445.html)445[![](images/neuron/mixed3b_452.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_452.html)452[![](images/neuron/mixed3b_456.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv
1-weight-explorer/mixed3b_456.html)456[![](images/neuron/mixed3b_459.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_459.html)459[![](images/neuron/mixed3b_464.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_464.html)464[![](images/neuron/mixed3b_465.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_465.html)465[![](images/neuron/mixed3b_421.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_421.html)421[![](images/neuron/mixed3b_437.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_437.html)437[![](images/neuron/mixed3b_418.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_418.html)418[![](images/neuron/mixed3b_425.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_425.html)425[![](images/neuron/mixed3b_221.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_221.html)221[![](images/neuron/mixed3b_195.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_195.html)195[![](images/neuron/mixed3b_204.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_204.html)204[![](images/neuron/mixed3b_39.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_39.html)39[![](images/neuron/mixed3b_76.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_76.html)76[![](images/neuron/mixed3b_77.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_77.html)77[![](images/neuron/mixed3b_468.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_468.html)468[![](images/neuron/mixed3b_471.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_471.html)471[![](images/neuron/mixed3b_227.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_227.html)227[![](images/neuron/mixed3b_415.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_415.html)415[![](images/neuron/mixed3b_126.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_126.html)126[![](images/neuron/mixed3b_128.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_128.html)128[![](images/neuron/mixed3b_172.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_172.html)172\n\n\nShow all 44 neurons.\nCollapse neurons.\nThis is a broad, not very well defined category for units that seem to look for simple local structures over a wide receptive field, including mixtures of colors. 
\n\n\n\n### [**Other Units** 9%](#group_mixed3b_other_units)\n\n\n\n[![](images/neuron/mixed3b_43.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_43.html)43[![](images/neuron/mixed3b_45.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_45.html)45[![](images/neuron/mixed3b_58.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_58.html)58[![](images/neuron/mixed3b_78.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_78.html)78[![](images/neuron/mixed3b_85.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_85.html)85[![](images/neuron/mixed3b_86.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_86.html)86[![](images/neuron/mixed3b_96.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_96.html)96[![](images/neuron/mixed3b_100.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_100.html)100[![](images/neuron/mixed3b_127.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_127.html)127[![](images/neuron/mixed3b_134.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_134.html)134[![](images/neuron/mixed3b_137.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_137.html)137[![](images/neuron/mixed3b_142.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_142.html)142[![](images/neuron/mixed3b_145.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_145.html)145[![](images/neuron/mixed3b_146.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_146.html)146[![](images/neuron/mixed3b_148.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_148.html)148[![](images/neuron/mixed3b_150.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_150.html)150[![](images/neuron/mixed3b_154.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_154.html)154[![](images/neuron/mixed3b_179.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_179.html)179[![](images/neuron/mixed3b_181.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_181.html)181[![](images/neuron/mixed3b_187.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_187.html)187[![](images/neuron/mixed3b_188.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_188.html)188[![](images/neuron/mixed3b_207.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_207.html)207[![](images/neuron/mixed3b_213.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_213.html)213[![](images/neuron/mixed3b_214.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_214.html)214[![](images/neuron/mixed3b_222.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_222.html)222[![](images/neuron/mixed3b_231.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_231.html)231[![](images/neuron/mixed3b_235.jpg)](https://stora
ge.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_235.html)235[![](images/neuron/mixed3b_240.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_240.html)240[![](images/neuron/mixed3b_253.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_253.html)253[![](images/neuron/mixed3b_265.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_265.html)265[![](images/neuron/mixed3b_266.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_266.html)266[![](images/neuron/mixed3b_268.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_268.html)268[![](images/neuron/mixed3b_273.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_273.html)273[![](images/neuron/mixed3b_306.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_306.html)306[![](images/neuron/mixed3b_350.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_350.html)350[![](images/neuron/mixed3b_354.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_354.html)354[![](images/neuron/mixed3b_358.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_358.html)358[![](images/neuron/mixed3b_371.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_371.html)371[![](images/neuron/mixed3b_391.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_391.html)391[![](images/neuron/mixed3b_399.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_399.html)399[![](images/neuron/mixed3b_411.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_411.html)411[![](images/neuron/mixed3b_433.jpg)](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3b_433.html)433\n\n\nShow all 42 neurons.\nCollapse neurons.\nUnits that don’t fall in any other category.\n\n\n\n### [**Color Contrast/Gradient** 
5%](#group_mixed3b_color_contrast_gradient)

Units 4, 217, 450, 191, 287, 49, 196, 473, 430, 305, 447, 277, 165, 279, 226, 303, 224, 269, 264, 189, 156, 463, 270, 272.

Units which respond to different colors on each side. These units look for one color in the center, and another color surrounding it. These units likely have many subtleties about the range of hues, texture preferences, and interactions that similar neurons in earlier layers may not have. Compare to earlier color contrast ([`conv2d0`](#group_conv2d0_color_contrast), [`conv2d1`](#group_conv2d1_color_contrast), [`conv2d2`](#group_conv2d2_color_contrast), [`mixed3a`](#group_mixed3a_color_contrast)).

### [**Texture Contrast** 3%](#group_mixed3b_texture_contrast)

Units 319, 155, 201, 171, 178, 197, 260, 412, 248, 250, 241, 390.

Units that detect one texture on one side and a different texture on the other.

### [**Other Fur** 2%](#group_mixed3b_other_fur)

Units 472, 476, 477, 453, 454, 427, 449, 467, 64, 129.

Units which seem to detect fur but, unlike the oriented fur detectors, don't seem to detect it parting in a particular way. Many of these seem to prefer a particular fur pattern.

### [**Lines** 2%](#group_mixed3b_lines)

Units 377, 326, 95, 38, 307, 1, 19, 209, 210.

Units which seem, to a significant extent, to detect a line. Many seem to have additional, more complex behavior.

### [**Cross / Corner Divergence** 2%](#group_mixed3b_cross_corner_divergence)

Units 108, 47, 339, 344, 374, 99, 369, 236, 408.

Units detecting lines crossing or diverging from a center point. Some are early predecessors for 3D corner detection.

### [**Pattern** 2%](#group_mixed3b_pattern)

Units 157, 431, 444, 311, 470, 33, 115, 316, 372.

Low confidence category.

### [**Bumps** 2%](#group_mixed3b_bumps)

Units 167, 206, 312, 292, 194, 140, 223, 254.

Low confidence category.

### [**Double Boundary** 1%](#group_mixed3b_double_boundary)

Units 318, 332, 286, 258, 229, 138, 314.

Units that detect boundary transitions on two sides, with a 'foreground' texture in the middle.

### [**Bar / Line-Like** 1%](#group_mixed3b_bar_line_like)

Units 81, 97, 98, 107, 282, 288, 295.

Low confidence category.

### [**Boundary Misc** 1%](#group_mixed3b_boundary_misc)

Units 149, 130, 168, 243, 246, 160, 162.

Boundary-related units we didn't know what else to do with.

### [**Star** 1%](#group_mixed3b_star)

Units 263, 262, 205, 135, 94, 304.

Low confidence category.

### [**Line Grad** 1%](#group_mixed3b_line_grad)

Units 21, 63, 102, 423, 175.

Low confidence category.

### [**Scales** 1%](#group_mixed3b_scales)

Units 461, 466, 90, 478, 419.

We don't really understand these units.

### [**Curves misc.** 1%](#group_mixed3b_curves_misc.)

Units 348, 407, 365, 367.

Low confidence organizational category.

### [**Shiny** 0.4%](#group_mixed3b_shiny)

Units 173, 200.

Units that seem to detect shiny, specular surfaces.

### [**Pointy** 0.4%](#group_mixed3b_pointy)

Units 166, 111.

Low confidence category.

---

Conclusion
----------

The goal of this essay was to give a high-level overview of our present understanding of early vision in InceptionV1. Every single feature discussed in this article is a potential topic of deep investigation. For example, are curve detectors really curve detectors? What types of curves do they fire for? How do they behave on various edge cases? How are they built? Over the coming articles, we'll do deep dives rigorously investigating these questions for a few features, starting with curves.

Our investigation into early vision has also left us with many broader open questions. To what extent do these feature families reflect fundamental clusters in features, versus a taxonomy that might be helpful to humans but is ultimately somewhat arbitrary? Is there a better taxonomy, or another way to understand the space of features? Why do features often seem to form in families? To what extent do the same feature families form across models? Is there a "periodic table of low-level visual features", in some sense? To what extent do later features admit a similar taxonomy? We think these could be interesting questions for future work.

![](images/multiple-pages.svg)

This article is part of the [Circuits thread](/2020/circuits/), a collection of short articles and commentary by an open scientific collaboration delving into the inner workings of neural networks.
\n\n\n\n\n\n\n[Zoom In: An Introduction to Circuits](/2020/circuits/zoom-in/)\n[Curve Detectors](/2020/circuits/curve-detectors/)", "date_published": "2020-04-01T20:00:00Z", "authors": ["Chris Olah", "Nick Cammarata", "Ludwig Schubert", "Gabriel Goh", "Michael Petrov", "Shan Carter"], "summaries": ["An overview of all the neurons in the first five layers of InceptionV1, organized into a taxonomy of 'neuron groups.'"], "doi": "10.23915/distill.00024.002", "journal_ref": "distill-pub", "bibliography": [{"link": "https://arxiv.org/pdf/1409.4842.pdf", "title": "Going deeper with convolutions"}, {"link": "https://arxiv.org/pdf/1704.05796.pdf", "title": "Network dissection: Quantifying interpretability of deep visual representations"}, {"link": "https://www.researchgate.net/profile/Aaron_Courville/publication/265022827_Visualizing_Higher-Layer_Features_of_a_Deep_Network/links/53ff82b00cf24c81027da530.pdf", "title": "Visualizing higher-layer features of a deep network"}, {"link": "https://distill.pub/2017/feature-visualization", "title": "Feature Visualization"}, {"link": "https://arxiv.org/pdf/1312.6034.pdf", "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps"}, {"link": "https://arxiv.org/pdf/1412.1897.pdf", "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images"}, {"link": "https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html", "title": "Inceptionism: Going deeper into neural networks"}, {"link": "https://arxiv.org/pdf/1612.00005.pdf", "title": "Plug & play generative networks: Conditional iterative generation of images in latent space"}, {"link": "https://distill.pub/2018/building-blocks", "title": "The Building Blocks of Interpretability"}, {"link": "https://github.com/google-research/tf-slim", "title": "TensorFlow-Slim: A lightweight library for defining, training and evaluating complex models in TensorFlow"}]} {"id": "16bba51135fb54b50506160f8e521ab3", "title": "Visualizing Neural Networks with the Grand Tour", "url": "https://distill.pub/2020/grand-tour", "source": "distill", "source_type": "blog", "text": "The Grand Tour is a classic visualization technique for high-dimensional point clouds that *projects* a high-dimensional dataset into two dimensions.\n\n Over time, the Grand Tour smoothly animates its projection so that every possible view of the dataset is (eventually) presented to the viewer.\n\n Unlike modern nonlinear projection methods such as t-SNE and UMAP, the Grand Tour is fundamentally a *linear* method.\n\n In this article, we show how to leverage the linearity of the Grand Tour to enable a number of capabilities that are uniquely useful to visualize the behavior of neural networks.\n \n Concretely, we present three use cases of interest: visualizing the training process as the network weights change, visualizing the layer-to-layer behavior as the data goes through the network and visualizing both how adversarial examples are crafted and how they fool a neural network.\n\n\n\nIntroduction\n------------\n\n\n\n Deep neural networks often achieve best-in-class performance in supervised learning contests such as the ImageNet Large Scale Visual Recognition Challenge (ILSVRC).\n \n Unfortunately, their decision process is notoriously hard to interpret, and their training process is often hard to debug.\n \n In this article, we present a method to visualize the responses of a neural network which leverages properties of deep neural networks and 
properties of the *Grand Tour*.
Notably, our method enables us to more directly reason about the relationship between *changes in the data* and *changes in the resulting visualization*.
As we will show, this data-visual correspondence is central to the method we present, especially when compared to other non-linear projection methods like UMAP and t-SNE.

To understand a neural network, we often try to observe its action on input examples (both real and synthesized).
These kinds of visualizations are useful to elucidate the activation patterns of a neural network for a single example, but they might offer less insight about the relationship between different examples, different states of the network as it is being trained, or how the data in the example flows through the different layers of a single network.
Therefore, we instead aim to enable visualizations of the *context around* our objects of interest: what is the difference between the present training epoch and the next one? How does the classification of a network converge (or diverge) as the image is fed through the network?
Linear methods are attractive because they are particularly easy to reason about.
The Grand Tour works by generating a random, smoothly changing rotation of the dataset, and then projecting the data to the two-dimensional screen: both are linear processes.
Although deep neural networks are clearly not linear processes, they often confine their nonlinearity to a small set of operations, enabling us to still reason about their behavior.
Our proposed method better preserves context by providing more consistency: it should be possible to know *how the visualization would change, if the data had been different in a particular way*.

Working Examples
----------------

To illustrate the technique we will present, we trained deep neural network models (DNNs) on 3 common image classification datasets: MNIST (grayscale images of 10 handwritten digits), Fashion-MNIST (grayscale images of 10 types of fashion items) and CIFAR-10 (RGB images of 10 classes of objects).
While our architecture is simpler and smaller than current DNNs, it is still indicative of modern networks, and is complex enough to demonstrate both our proposed techniques and the shortcomings of typical approaches.

The following figure presents a simple functional diagram of the neural network we will use throughout the article. The neural network is a sequence of linear layers (both convolutional and fully-connected), max-pooling layers, and ReLU layers, culminating in a softmax layer. A convolution calculates weighted sums of regions in the input; in neural networks, the learnable weights in convolutional layers are referred to as the kernel (see also [Convolution arithmetic](https://github.com/vdumoulin/conv_arithmetic)). A fully-connected layer computes output neurons as weighted sums of input neurons; in matrix form, it is a matrix that linearly transforms the input vector into the output vector. First introduced by Nair and Hinton, ReLU calculates $f(x) = \max(0, x)$ for each entry in a vector input; graphically, it is a hinge at the origin. The softmax function calculates $S(y_i) = \frac{e^{y_i}}{\sum_{j=1}^{N} e^{y_j}}$ for each entry $y_i$ in a vector input $y$.

Neural network opened. The colored blocks are building-block functions (i.e. neural network layers), the gray-scale heatmaps are either the input image or intermediate activation vectors after some layers.

Even though neural networks are capable of incredible feats of classification, deep down, they really are just pipelines of relatively simple functions.
For images, the input is a 2D array of scalar values for grayscale images or RGB triples for colored images.
When needed, one can always flatten the 2D array into an equivalent $(w \cdot h \cdot c)$-dimensional vector.
Similarly, the intermediate values after any one of the functions in the composition, or activations of neurons after a layer, can also be seen as vectors in $\mathbb{R}^n$, where $n$ is the number of neurons in the layer.
The softmax, for example, can be seen as a 10-vector whose values are positive real numbers that sum up to 1.
This vector view of the data in a neural network not only allows us to represent complex data in a mathematically compact form, but also hints at how to visualize it in a better way.

Most of the simple functions fall into two categories: they are either linear transformations of their inputs (like fully-connected layers or convolutional layers), or relatively simple non-linear functions that work component-wise (like sigmoid or ReLU activations). Sigmoid calculates $S(x) = \frac{e^{x}}{e^{x}+1}$ for each entry $x$ in a vector input; graphically, it is an S-shaped curve. Some operations, notably max-pooling (which calculates the maximum of a region in the input) and softmax, do not fall into either category. We will come back to this later.

The above figure helps us look at a single image at a time; however, it does not provide much context to understand the relationship between layers, between different examples, or between different class labels. For that, researchers often turn to more sophisticated visualizations.
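Before turning to those visualizations, the vector view above is straightforward to make concrete. The sketch below is a minimal illustration, not the article's own code: the architecture, layer sizes, and the `layer_activations` helper are made up for this example. It runs a batch of images through a small sequential PyTorch model and records the flattened activations after every layer, so that each layer's output becomes a set of points in $\mathbb{R}^n$ that a projection method can consume.

```python
# A minimal sketch (not the article's code) of the "vector view": record the
# activations after every layer and flatten each one into a vector in R^n.
import torch
import torch.nn as nn

model = nn.Sequential(                     # hypothetical architecture, for illustration only
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10), nn.Softmax(dim=1),
)

def layer_activations(model, x):
    """Return a list of flattened activation arrays, one per layer."""
    vectors = []
    h = x
    with torch.no_grad():
        for layer in model:
            h = layer(h)
            vectors.append(h.reshape(h.shape[0], -1).numpy())  # shape: (batch, n)
    return vectors

batch = torch.randn(500, 1, 28, 28)        # stand-in for 500 MNIST-sized images
acts = layer_activations(model, batch)
print([a.shape[1] for a in acts])          # the dimension n of each layer's R^n
```

The final softmax layer produces 10-dimensional probability vectors, which is the layer most of the projections below operate on.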
Using Visualization to Understand DNNs
--------------------------------------

Let's start by considering the problem of visualizing the training process of a DNN.
When training neural networks, we optimize the parameters of the model to minimize a scalar-valued loss function, typically through some form of gradient descent.
We want the loss to keep decreasing, so we monitor the whole history of training and testing losses over rounds of training (or "epochs"), to make sure that the loss decreases over time.
The following figure shows a line plot of the training loss for the MNIST classifier.

Although its general trend meets our expectation as the loss steadily decreases, we see something strange around epochs 14 and 21: the curve goes almost flat before starting to drop again. What happened? What caused that?

If we separate input examples by their true labels/classes and plot the *per-class* loss like above, we see that the two drops were caused by the classes 1 and 7; the model learns different classes at very different times in the training process.
Although the network learns to recognize digits 0, 2, 3, 4, 5, 6, 8 and 9 early on, it is not until epoch 14 that it starts successfully recognizing digit 1, or until epoch 21 that it recognizes digit 7.
If we knew ahead of time to be looking for class-specific error rates, then this chart works well. But what if we didn't really know what to look for?

In that case, we could consider visualizations of neuron activations (e.g. in the last softmax layer) for *all* examples at once, looking to find patterns like class-specific behavior, and other patterns besides.
Should there be only two neurons in that layer, a simple two-dimensional scatter plot would work.
However, the points in the softmax layer for our example datasets are 10-dimensional (and in larger-scale classification problems this number can be much larger).
We need to either show two dimensions at a time (which does not scale well, as the number of possible charts grows quadratically), or we can use *dimensionality reduction* to map the data into a two-dimensional space and show them in a single plot.

### The State-of-the-art Dimensionality Reduction is Non-linear

Modern dimensionality reduction techniques such as t-SNE and UMAP are capable of impressive feats of summarization, providing two-dimensional images where similar points tend to be clustered together very effectively.
However, these methods are not particularly good for understanding the behavior of neuron activations at a fine scale.
Consider the aforementioned intriguing feature about the different learning rates that the MNIST classifier has on digits 1 and 7: the network did not learn to recognize digit 1 until epoch 14, and digit 7 until epoch 21.
We compute t-SNE, Dynamic t-SNE, and UMAP projections of the epochs where the phenomenon we described happens.
Consider now the task of identifying this class-specific behavior during training. As a reminder, in this case, the strange behavior happens with digits 1 and 7, around epochs 14 and 21 respectively.
While the behavior is not particularly subtle (a digit goes from misclassified to correctly classified), it is quite hard to notice in any of the plots below.
Only on careful inspection can we notice that (for example) in the UMAP plot, the digit 1 which clustered in the bottom in epoch 13 becomes a new tentacle-like feature in epoch 14.

Softmax activations of the MNIST classifier with non-linear dimensionality reduction.

One reason that non-linear embeddings fail in elucidating this phenomenon is that, for this particular change in the data, they fail the principle of *data-visual correspondence*.
More concretely, the principle states that specific visualization tasks should be modeled as functions that change the data; the visualization sends this change from data to visuals, and we can study the extent to which the visualization changes are easily perceptible.
Ideally, we want the changes in data and visualization to *match in magnitude*: a barely noticeable change in visualization should be due to the smallest possible change in data, and a salient change in visualization should reflect a significant one in data.
Here, a significant change happened in only a *subset* of the data (e.g. all points of digit 1 from epoch 13 to 14), but *all* points in the visualization move dramatically.
For both UMAP and t-SNE, the position of each single point depends non-trivially on the whole data distribution.
This property is not ideal for visualization because it fails the data-visual correspondence, making it hard to *infer* the underlying change in data from the change in the visualization.

Non-linear embeddings that have non-convex objectives also tend to be sensitive to initial conditions.
For example, in MNIST, although the neural network starts to stabilize on epoch 30, t-SNE and UMAP still generate quite different projections between epochs 30, 31 and 32 (in fact, all the way to 99).
Temporal regularization techniques (such as Dynamic t-SNE) mitigate these consistency issues, but still suffer from other interpretability issues.

Now, let's consider another task, that of identifying classes which the neural network tends to confuse.
For this example, we will use the Fashion-MNIST dataset and classifier, and consider the confusion among sandals, sneakers and ankle boots.
If we know ahead of time that these three classes are likely to confuse the classifier, then we can directly design an appropriate linear projection, as can be seen in the last row of the following figure (we found this particular projection using both the Grand Tour and the direct manipulation technique we later describe). The pattern in this case is quite salient, forming a triangle.
t-SNE, in contrast, incorrectly separates the class clusters (possibly because of an inappropriately-chosen hyperparameter).
UMAP successfully isolates the three classes, but even in this case it's not possible to distinguish between three-way confusion for the classifier in epochs 5 and 10 (portrayed in a linear method by the presence of points near the center of the triangle), and multiple two-way confusions in later epochs (evidenced by an "empty" center).

Three-way confusion in fashion-MNIST. Notice that in contrast to non-linear methods, a carefully-constructed linear projection can offer a better visualization of the classifier behavior.
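For reference, per-epoch views like the ones compared above can be produced along the following lines. This is a hedged sketch rather than the article's pipeline: `softmax_by_epoch` is a hypothetical dictionary of recorded softmax activations, and the UMAP call assumes the `umap-learn` package is installed. Note that each epoch is embedded independently, which is one reason the layouts are so hard to relate across epochs.

```python
# A minimal sketch (not the article's code): project the 10-dimensional softmax
# activations of several epochs with t-SNE and UMAP, one embedding per epoch.
from sklearn.manifold import TSNE
import umap  # assumes the umap-learn package is installed

def nonlinear_views(softmax_by_epoch, epochs=(13, 14, 15, 20, 21, 22)):
    """softmax_by_epoch: hypothetical {epoch: (n_examples, 10) array}."""
    views = {}
    for e in epochs:
        acts = softmax_by_epoch[e]
        views[e] = {
            "t-SNE": TSNE(n_components=2, random_state=0).fit_transform(acts),
            "UMAP": umap.UMAP(n_components=2, random_state=0).fit_transform(acts),
        }
    return views
```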
Linear Methods to the Rescue
----------------------------

When given the chance, then, we should prefer methods for which changes in the data produce predictable, visually salient changes in the result, and linear dimensionality reductions often have this property.
Here, we revisit the linear projections described above in an interface where the user can easily navigate between different training epochs.
In addition, we introduce another useful capability which is only available to linear methods, that of direct manipulation.
Each linear projection from $n$ dimensions to $2$ dimensions can be represented by $n$ 2-dimensional vectors which have an intuitive interpretation: they are the vectors that the $n$ canonical basis vectors in the $n$-dimensional space will be projected to.
In the context of projecting the final classification layer, this is especially simple to interpret: they are the destinations of an input that is classified with 100% confidence to any one particular class.
If we provide the user with the ability to change these vectors by dragging around user-interface handles, then users can intuitively set up new linear projections.

This setup provides additional nice properties that explain the salient patterns in the previous illustrations.
For example, because projections are linear and the coefficients of vectors in the classification layer sum to one, classification outputs that are halfway confident between two classes are projected to vectors that are halfway between the class handles.

From this linear projection, we can easily identify the learning of digit 1 on epoch 14 and digit 7 on epoch 21.

This particular property is illustrated clearly in the Fashion-MNIST example below.
The model confuses sandals, sneakers and ankle boots, as data points form a triangular shape in the softmax layer.

This linear projection clearly shows the model's confusion among sandals, sneakers, and ankle boots. Similarly, this projection shows the true three-way confusion about pullovers, coats, and shirts. (The shirts also get confused with t-shirts/tops.) Both projections were found by direct manipulation.

Examples falling between classes indicate that the model has trouble distinguishing the two, such as the sandals vs. sneakers and sneakers vs. ankle boots classes.
Note, however, that this does not happen as much for sandals vs. ankle boots: not many examples fall between these two classes.
Moreover, most data points are projected close to the edge of the triangle.
This tells us that most confusions happen between two out of the three classes; they are really two-way confusions.
Within the same dataset, we can also see pullovers, coats and shirts filling a triangular *plane*.
This is different from the sandal-sneaker-ankle-boot case, as examples not only fall on the boundary of a triangle, but also in its interior: a true three-way confusion.
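The class-handle picture described above is just a matrix product. The sketch below is a minimal illustration under assumed inputs: `softmax_acts` stands in for an (n_examples, 10) array of recorded softmax activations, and the handle positions are chosen by hand. Because each row of the activations sums to one, a fully confident example lands exactly on its class handle and mixtures land between the handles involved. (Unlike the Grand Tour projections discussed next, this particular projection matrix is not required to come from a rotation.)

```python
# A minimal sketch of a linear projection defined by 2D "class handles":
# a probability vector p is drawn at sum_i p_i * h_i.
import numpy as np

def class_handle_projection(softmax_acts, handles):
    """softmax_acts: (n_examples, 10); handles: (10, 2) positions, one per class."""
    return softmax_acts @ handles            # (n_examples, 2) screen coordinates

# Example: put sandal/sneaker/ankle boot (Fashion-MNIST classes 5, 7, 9) on a
# triangle and collapse every other class handle onto the origin.
handles = np.zeros((10, 2))
handles[5] = [0.0, 1.0]
handles[7] = [-0.87, -0.5]
handles[9] = [0.87, -0.5]

softmax_acts = np.random.dirichlet(np.ones(10), size=100)   # stand-in data
xy = class_handle_projection(softmax_acts, handles)
```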
Similarly, in the CIFAR-10 dataset we can see confusion between dogs and cats, and between airplanes and ships.
The mixing pattern in CIFAR-10 is not as clear as in fashion-MNIST, because many more examples are misclassified.

This linear projection clearly shows the model's confusion between cats and dogs. Similarly, this projection shows the confusion about airplanes and ships. Both projections were found by direct manipulation.

The Grand Tour
--------------

In the previous section, we took advantage of the fact that we knew which classes to visualize.
That meant it was easy to design linear projections for the particular tasks at hand.
But what if we don't know ahead of time which projection to choose, because we don't quite know what to look for?
Principal Component Analysis (PCA) is the quintessential linear dimensionality reduction method, choosing to project the data so as to preserve the most variance possible.
However, the distribution of data in softmax layers often has similar variance along many axis directions, because each axis concentrates a similar number of examples around the class vector. (We are assuming a class-balanced training dataset; if the training dataset is not balanced, PCA will prefer dimensions with more examples, which might not help much either.)
As a result, even though PCA projections are interpretable and consistent through training epochs, the first two principal components of softmax activations are not substantially better than the third.
So which of them should we choose?
Instead of PCA, we propose to visualize this data by smoothly animating random projections, using a technique called the Grand Tour.

Starting with a random velocity, it smoothly rotates data points around the origin in high-dimensional space, and then projects them down to 2D for display.
Here are some examples of how the Grand Tour acts on some (low-dimensional) objects:

* On a square, the Grand Tour rotates it with a constant angular velocity.
* On a cube, the Grand Tour rotates it in 3D, and its 2D projection lets us see every facet of the cube.
* On a 4D cube (a *tesseract*), the rotation happens in 4D and the 2D view shows every possible projection.

Grand tours of a square, a cube and a tesseract.

### The Grand Tour of the Softmax Layer

We first look at the Grand Tour of the softmax layer.
The softmax layer is relatively easy to understand because its axes have strong semantics. As we described earlier, the $i$-th axis corresponds to the network's *confidence* about predicting that the given input belongs to the $i$-th class.

The Grand Tour of the softmax layer in the last (99th) epoch, with the MNIST, fashion-MNIST or CIFAR-10 dataset.

The Grand Tour of the softmax layer lets us qualitatively assess the performance of our model.
In the particular case of this article, since we used comparable architectures for the three datasets, this also allows us to gauge the relative difficulty of classifying each dataset.
We can see that data points are most confidently classified for the MNIST dataset, where the digits are close to one of the ten corners of the softmax space. For Fashion-MNIST or CIFAR-10, the separation is not as clean, and more points appear *inside* the volume.
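One simple way to realize the rotate-then-project scheme described above is sketched below. It is an illustration, not the authors' implementation: the classic Grand Tour interpolates between randomly chosen planes, while this sketch just integrates a fixed random angular velocity. A random skew-symmetric matrix plays the role of the velocity; exponentiating it gives a rotation, so the running product stays orthogonal, and the first two coordinates give the 2D view.

```python
# A minimal Grand Tour sketch: advance a high-dimensional rotation a little on
# every frame and keep the first two coordinates for display.
import numpy as np
from scipy.linalg import expm   # matrix exponential

def make_grand_tour(dim, seed=0, step=0.01):
    rng = np.random.default_rng(seed)
    B = rng.normal(size=(dim, dim))
    velocity = (B - B.T) / 2           # a random skew-symmetric "angular velocity"
    GT = np.eye(dim)                   # current rotation (orthogonal matrix)
    def frame(points):
        nonlocal GT
        GT = GT @ expm(step * velocity)    # advance the rotation slightly
        return (points @ GT)[:, :2]        # project to the 2D screen
    return frame

frame = make_grand_tour(dim=10)
points = np.random.dirichlet(np.ones(10), size=500)   # stand-in softmax activations
xy = frame(points)                                     # call once per animation frame
```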
### The Grand Tour of Training Dynamics

Linear projection methods naturally give a formulation that is independent of the input points, allowing us to keep the projection fixed while the data changes.
To recap our working example, we trained each of the neural networks for 99 epochs and recorded the entire history of neuron activations on a subset of training and testing examples. We can use the Grand Tour, then, to visualize the actual training process of these networks.

In the beginning, when the neural networks are randomly initialized, all examples are placed around the center of the softmax space, with equal weights to each class.
Through training, examples move toward the class vectors in the softmax space. The Grand Tour also lets us compare visualizations of the training and testing data, giving us a qualitative assessment of over-fitting.
In the MNIST dataset, the trajectory of testing images through training is consistent with the training set.
Data points move directly toward the corner of their true class, and all classes stabilize after about 50 epochs.
On the other hand, in CIFAR-10 there is an *inconsistency* between the training and testing sets. Images from the testing set keep oscillating while most images from the training set converge to the corresponding class corner.
In epoch 99, we can clearly see a difference in distribution between these two sets.
This signals that the model overfits the training set and thus does not generalize well to the testing set.

With this view of CIFAR-10, the colors of points are more mixed in the testing (right) than the training (left) set, showing over-fitting in the training process. Compare CIFAR-10 with MNIST or fashion-MNIST, where there is less difference between training and testing sets.

### The Grand Tour of Layer Dynamics

Given the presented techniques of the Grand Tour and direct manipulations on the axes, we can in theory visualize and manipulate any intermediate layer of a neural network by itself. Nevertheless, this is not a very satisfying approach, for two reasons:

* In the same way that we've kept the projection fixed as the training data changed, we would like to "keep the projection fixed" as the data moves through the layers in the neural network. However, this is not straightforward. For example, different layers in a neural network have different dimensions. How do we connect rotations of one layer to rotations of the other?
* The class "axis handles" in the softmax layer are convenient, but that's only practical when the dimensionality of the layer is relatively small. With hundreds of dimensions, for example, there would be too many axis handles to naturally interact with. In addition, hidden layers do not have as clear semantics as the softmax layer, so manipulating them would not be as intuitive.

To address the first problem, we will need to pay closer attention to the way in which layers transform the data that they are given.
To see how a linear transformation can be visualized in a particularly ineffective way, consider the following (very simple) weights (represented by a matrix $A$) which take a 2-dimensional hidden layer $k$ and produce activations in another 2-dimensional layer $k+1$. The weights simply negate the two activations in 2D:

$$A = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}$$

Imagine that we wish to visualize the behavior of the network as the data moves from layer to layer. One way to interpolate between the source $x_0$ and the destination $x_1 = A(x_0) = -x_0$ of this action $A$ is by a simple linear interpolation

$$x_t = (1-t) \cdot x_0 + t \cdot x_1 = (1-2t) \cdot x_0 \quad \text{for } t \in [0,1].$$

Effectively, this strategy reuses the linear projection coefficients from one layer to the next. This is a natural thought, since they have the same dimension.
However, notice the following: the transformation given by $A$ is a simple rotation of the data. Every linear transformation of the layer $k+1$ could be encoded simply as a linear transformation of the layer $k$, if only that transformation operated on the negative values of the entries.
In addition, since the Grand Tour has a rotation itself built in, for every configuration that gives a certain picture of the layer $k$, there exists a *different* configuration that would yield the same picture for layer $k+1$, by taking the action of $A$ into account.
In effect, the naive interpolation fails the principle of data-visual correspondence: a simple change in data (a negation in 2D, i.e. a 180-degree rotation) results in a drastic change in visualization (all points cross the origin).

This observation points to a more general strategy: when designing a visualization, we should be as explicit as possible about which parts of the input (or process) we seek to capture in our visualizations.
We should seek to explicitly articulate what are purely representational artifacts that we should discard, and what are the real features that a visualization should *distill* from the representation.
Here, we claim that rotational factors in linear transformations of neural networks are significantly less important than other factors such as scalings and nonlinearities.
As we will show, the Grand Tour is particularly attractive in this case because it can be made invariant to rotations in the data.
As a result, the rotational components in the linear transformations of a neural network will be explicitly made invisible.

Concretely, we achieve this by taking advantage of a central theorem of linear algebra.
The *Singular Value Decomposition* (SVD) theorem shows that *any* linear transformation can be decomposed into a sequence of very simple operations: a rotation, a scaling, and another rotation.
Applying a matrix $A$ to a vector $x$ is then equivalent to applying those simple operations: $xA = xU\Sigma V^T$.
But remember that the Grand Tour works by rotating the dataset and then projecting it to 2D.
Combined, these two facts mean that as far as the Grand Tour is concerned, visualizing a vector $x$ is the same as visualizing $xU$, and visualizing a vector $xU\Sigma V^T$ is the same as visualizing $xU\Sigma$.
This means that any linear transformation seen by the Grand Tour is equivalent to the transition between $xU$ and $xU\Sigma$: a simple (coordinate-wise) scaling.
This is explicitly saying that any linear operation (whose matrix is represented in standard bases) is a scaling operation with appropriately chosen orthonormal bases on both sides.
So the Grand Tour provides a natural, elegant and computationally efficient way to *align* visualizations of activations separated by fully-connected (linear) layers. (Convolutional layers are also linear; one can see this by forming the linear transformations between flattened feature maps, or by taking the circulant structure of convolutional layers directly into account.)
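Concretely, the alignment can be sketched as follows. The assumptions are ours: `acts_k` stands for an (n, d_k) matrix of layer-k activations and `W` for the weight matrix of the fully-connected layer that follows; biases and nonlinearities are ignored here. Writing $xW = xU\Sigma V^T$, the transition that the Grand Tour needs to animate is only the coordinate-wise scaling from $xU$ to $xU\Sigma$.

```python
# A minimal sketch of SVD-based layer alignment: animate only the scaling part,
# absorbing the rotations U and V^T into the tour itself.
import numpy as np

def aligned_transition(acts_k, W, t):
    """Interpolate the layer-k view toward the layer-(k+1) view, t in [0, 1]."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    start = acts_k @ U                   # layer k, expressed in the aligned basis
    end = start * S                      # layer k+1 up to a rotation (x U Sigma)
    return (1 - t) * start + t * end     # a coordinate-wise scaling over time

# Sanity check: at t=1 the result equals acts_k @ W up to the rotation V^T.
acts_k = np.random.randn(100, 16)
W = np.random.randn(16, 10)
U, S, Vt = np.linalg.svd(W, full_matrices=False)
assert np.allclose(aligned_transition(acts_k, W, 1.0) @ Vt, acts_k @ W)
```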
(For the following portion, we reduce the number of data points to 500 and epochs to 50, in order to reduce the amount of data transmitted in a web-based demonstration.)
With the linear algebra structure at hand, we are now able to trace behaviors and patterns from the softmax back to previous layers.
In fashion-MNIST, for example, we observe a separation of shoes (sandals, sneakers and ankle boots as a group) from all other classes in the softmax layer.
Tracing it back to earlier layers, we can see that this separation happened as early as layer 5:

With layers aligned, it is easy to see the early separation of shoes from this view.

### The Grand Tour of Adversarial Dynamics

As a final application scenario, we show how the Grand Tour can also elucidate the behavior of adversarial examples as they are processed by a neural network.
For this illustration, we use the MNIST dataset, and we adversarially add perturbations to 89 digit 8s to fool the network into thinking they are 0s.
Previously, we either animated the training dynamics or the layer dynamics.
Here, we fix a well-trained neural network and visualize the training process of the adversarial examples, since they are often themselves generated by an optimization process; we used the Fast Gradient Sign method.
Again, because the Grand Tour is a linear method, the change in the positions of the adversarial examples over time can be faithfully attributed to changes in how the neural network perceives the images, rather than potential artifacts of the visualization.
Let us examine how adversarial examples evolved to fool the network:

From this view of the softmax layer, we can see how adversarial examples evolved from 8s into 0s. In the corresponding pre-softmax view, however, these adversarial examples stop around the decision boundary of the two classes.

Through this adversarial training, the network eventually claims, with high confidence, that the inputs given are all 0s.
If we stay in the softmax layer and slide through the adversarial training steps in the plot, we can see adversarial examples move from a high score for class 8 to a high score for class 0.
Although all adversarial examples are eventually classified as the target class (digit 0), some of them detoured somewhere close to the centroid of the space (around the 25th epoch) and then moved towards the target.
Comparing the actual images of the two groups, we see that those "detouring" images tend to be noisier.

More interesting, however, is what happens in the intermediate layers.
In pre-softmax, for example, we see that these fake 0s behave differently from the genuine 0s: they live closer to the decision boundary of the two classes and form a plane by themselves.
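For concreteness, the adversarial optimization described above can be sketched as a targeted, iterated variant of the Fast Gradient Sign idea. This is an illustration rather than the authors' code: `model` is assumed to be a PyTorch classifier that returns logits, `images` a batch of digit-8 inputs scaled to [0, 1], and the step size and number of steps are made up.

```python
# A minimal targeted FGSM-style sketch: nudge each pixel in the direction that
# increases the network's confidence in the target class 0, one step per "epoch".
import torch
import torch.nn.functional as F

def fgsm_targeted(model, images, target_class=0, step=0.01, n_steps=30):
    x = images.clone()
    snapshots = []                                  # one entry per adversarial step
    for _ in range(n_steps):
        x = x.detach().requires_grad_(True)
        target = torch.full((x.shape[0],), target_class)
        loss = F.cross_entropy(model(x), target)    # loss toward the target class
        loss.backward()
        x = (x - step * x.grad.sign()).clamp(0, 1)  # signed step per pixel
        snapshots.append(x.detach().clone())
    return snapshots          # feed these snapshots through the Grand Tour views
```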
Discussion
----------

### Limitations of the Grand Tour

Early on, we compared several state-of-the-art dimensionality reduction techniques with the Grand Tour, showing that non-linear methods do not have as many desirable properties as the Grand Tour for understanding the behavior of neural networks.
However, the state-of-the-art non-linear methods come with their own strengths.
Whenever geometry is concerned, as in the case of understanding multi-way confusions in the softmax layer, linear methods are more interpretable because they preserve certain geometrical structures of the data in the projection.
When topology is the main focus, such as when we want to cluster the data or we need dimensionality reduction for downstream models that are less sensitive to geometry, we might choose non-linear methods such as UMAP or t-SNE, because they have more freedom in projecting the data and will generally make better use of the few dimensions available.

### The Power of Animation and Direct Manipulation

When comparing linear projections with non-linear dimensionality reductions, we used small multiples to contrast training epochs and dimensionality reduction methods.
The Grand Tour, on the other hand, uses a single animated view.
When comparing small multiples and animations, there is no general consensus in the literature on which one is better than the other, aside from specific settings such as dynamic graph drawing, or concerns about incomparable contents between small multiples and animated plots.
Regardless of these concerns, in our scenarios, the use of animation comes naturally from the direct manipulation and the existence of a continuum of rotations for the Grand Tour to operate in.

### Non-sequential Models

In our work we have used models that are purely "sequential", in the sense that the layers can be put in numerical ordering, and that the activations for the $(n+1)$-th layer are a function exclusively of the activations at the $n$-th layer.
In recent DNN architectures, however, it is common to have non-sequential parts such as highway branches or dedicated branches for different tasks.
With our technique, one can visualize neuron activations on each such branch, but additional research is required to incorporate multiple branches directly.

### Scaling to Larger Models

Modern architectures are also wide. Especially where convolutional layers are concerned, one could run into issues with scalability if we see such layers as a large sparse matrix acting on flattened multi-channel images.
For the sake of simplicity, in this article we brute-forced the computation of the alignment of such convolutional layers by writing out their explicit matrix representation.
However, the singular value decomposition of multi-channel 2D convolutions can be computed efficiently, which can then be directly used for alignment, as we described above.
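The brute-force route mentioned above is easy to state in code: because a convolution (without bias) is linear, its explicit matrix can be recovered by pushing each one-hot input through the layer. The sketch below is an illustration under assumed shapes, and it is exactly the approach that stops scaling for wide layers; the efficient SVD of multi-channel convolutions referenced above avoids materializing this matrix.

```python
# A minimal sketch of the "explicit matrix representation" of a small Conv2d layer:
# column i of the matrix is the layer applied to the i-th one-hot input image.
import torch
import torch.nn as nn

def conv_as_matrix(conv, in_shape):
    """Return the (out_dim, in_dim) matrix of a bias-free Conv2d for a given input shape."""
    in_dim = int(torch.tensor(in_shape).prod())
    columns = []
    with torch.no_grad():
        for i in range(in_dim):
            basis = torch.zeros(in_dim)
            basis[i] = 1.0
            out = conv(basis.reshape(1, *in_shape))   # apply conv to one basis "image"
            columns.append(out.flatten())
    return torch.stack(columns, dim=1)                # column i is conv(e_i)

conv = nn.Conv2d(1, 4, kernel_size=3, padding=1, bias=False)
M = conv_as_matrix(conv, in_shape=(1, 8, 8))          # shape: (4*8*8, 1*8*8)
U, S, Vt = torch.linalg.svd(M, full_matrices=False)   # SVD then used for alignment
```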
Technical Details
------------------

This section presents the technical details necessary to implement the direct manipulation of axis handles and data points, as well as how to implement the projection consistency technique for layer transitions.

### Notation

In this section, our notational convention is that data points are represented as row vectors.
An entire dataset is laid out as a matrix, where each row is a data point, and each column represents a different feature/dimension.
As a result, when a linear transformation is applied to the data, the row vectors (and the data matrix overall) are multiplied on the right by the transformation matrix.
This has the side benefit that when applying matrix multiplications in a chain, the formula reads from left to right and aligns with a commutative diagram.
For example, when a data matrix $X$ is multiplied by a matrix $M$ to generate $Y$, we write $XM = Y$, and the letters appear in the same order in the diagram:

$$X \overset{M}{\mapsto} Y$$

Furthermore, if the SVD of $M$ is $M = U \Sigma V^{T}$, we have $X U \Sigma V^{T} = Y$, and the diagram

$$X \overset{U}{\mapsto} \overset{\Sigma}{\mapsto} \overset{V^T}{\mapsto} Y$$

nicely aligns with the formula.

### Direct Manipulation

The direct manipulations we presented earlier provide explicit control over the possible projections for the data points.
We provide two modes: directly manipulating class axes (the "axis mode"), or directly manipulating a group of data points through their centroid (the "data point mode").
Based on the dimensionality and axis semantics, as discussed in [Layer Dynamics](#layer-dynamics), we may prefer one mode over the other.
We will see that the axis mode is a special case of the data point mode, because we can view an axis handle as a particular "fictitious" point in the dataset.
Because of its simplicity, we will first introduce the axis mode.

#### The Axis Mode

The implied semantics of direct manipulation is that when a user drags a UI element (in this case, an axis handle), they are signaling to the system that they wish the corresponding data point had been projected to the location where the UI element was dropped, rather than where it was dragged from.
In our case the overall projection is a rotation (originally determined by the Grand Tour), and an arbitrary user manipulation might not necessarily generate a new projection that is also a rotation.
Our goal, then, is to find a new rotation that satisfies the user request while staying close to the previous state of the Grand Tour projection.
In a nutshell, when the user drags the $i^{th}$ axis handle by $(dx, dy)$, we add these amounts to the first two entries of the $i^{th}$ row of the Grand Tour matrix, and then perform [Gram-Schmidt orthonormalization](https://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process) on the rows of the new matrix.
Rows have to be reordered such that the $i^{th}$ row is considered first in the Gram-Schmidt procedure.

Before we see in detail why this works well, let us formalize the process of the Grand Tour on a standard basis vector $e_i$.
As shown in the diagram below, $e_i$ goes through an orthogonal Grand Tour matrix $GT$ to produce a rotated version of itself, $\tilde{e_i}$.
Then, $\pi_2$ is a function that keeps only the first two entries of $\tilde{e_i}$ and gives the 2D coordinate of the handle to be shown in the plot, $(x_i, y_i)$.

$$e_i \overset{GT}{\mapsto} \tilde{e_i} \overset{\pi_2}{\mapsto} (x_i, y_i)$$

When the user drags an axis handle on the screen canvas, they induce a delta change $\Delta = (dx, dy)$ on the $xy$-plane.
The coordinate of the handle becomes $(x_i^{(new)}, y_i^{(new)}) := (x_i + dx, y_i + dy)$.
Note that $x_i$ and $y_i$ are the first two coordinates of the axis handle in high dimensions after the Grand Tour rotation, so a delta change on $(x_i, y_i)$ induces a delta change $\tilde{\Delta} := (dx, dy, 0, 0, \cdots)$ on $\tilde{e_i}$:

$$\tilde{e_i} \overset{\tilde{\Delta}}{\mapsto} \tilde{e_i} + \tilde{\Delta}$$

To find a nearby Grand Tour rotation that respects this change, first note that $\tilde{e_i}$ is exactly the $i^{th}$ row of the orthogonal Grand Tour matrix $GT$. (Recall that the convention is that vectors are in row form and linear transformations are matrices that are multiplied on the right, so $e_i$ is a row vector whose $i$-th entry is $1$, with $0$s elsewhere, and $\tilde{e_i} := e_i \cdot GT$ is the $i$-th row of $GT$.)
Naturally, we want the new matrix to be the original $GT$ with its $i^{th}$ row replaced by $\tilde{e_i} + \tilde{\Delta}$, i.e. we should add $dx$ and $dy$ to the $(i,1)$-th and $(i,2)$-th entries of $GT$ respectively:

$$\widetilde{GT} \leftarrow GT, \qquad \widetilde{GT}_{i,1} \leftarrow GT_{i,1} + dx, \qquad \widetilde{GT}_{i,2} \leftarrow GT_{i,2} + dy$$

However, $\widetilde{GT}$ is not orthogonal for arbitrary $(dx, dy)$.
In order to find an approximation to $\widetilde{GT}$ that is orthogonal, we apply [Gram-Schmidt orthonormalization](https://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process) on the rows of $\widetilde{GT}$, with the $i^{th}$ row considered first in the Gram-Schmidt process:

$$GT^{(new)} := \textsf{GramSchmidt}(\widetilde{GT})$$

Note that the $i^{th}$ row is normalized to a unit vector during the Gram-Schmidt, so the resulting position of the handle is

$$\tilde{e_i}^{(new)} = \textsf{normalize}(\tilde{e_i} + \tilde{\Delta})$$

which may not be exactly the same as $\tilde{e_i} + \tilde{\Delta}$, as the following figure shows. However, for any $\tilde{\Delta}$, the norm of the difference is bounded above by $||\tilde{\Delta}||$, as the following figure proves.

![](figs/direct-manipulation-rotation-2d-proof.png)

![](figs/direct-manipulation-rotation-2d.png)
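In code, the axis-mode update described above is a few lines. The sketch below is a minimal illustration, not the article's implementation: `GT` is assumed to be the current orthogonal Grand Tour matrix with rows as in the convention above, and the Gram-Schmidt pass visits the dragged row first so that the handle ends up at the normalized target position.

```python
# A minimal axis-mode sketch: nudge row i of the Grand Tour matrix by the drag
# (dx, dy), then re-orthonormalize the rows with Gram-Schmidt, row i first.
import numpy as np

def drag_axis_handle(GT, i, dx, dy):
    G = GT.copy()
    G[i, 0] += dx
    G[i, 1] += dy
    order = [i] + [r for r in range(G.shape[0]) if r != i]   # dragged row first
    basis = []
    for r in order:
        v = G[r].copy()
        for b in basis:                      # subtract projections onto earlier rows
            v -= (v @ b) * b
        basis.append(v / np.linalg.norm(v))
    new_GT = np.empty_like(G)
    for pos, r in enumerate(order):
        new_GT[r] = basis[pos]               # put rows back into their original slots
    return new_GT

GT = np.linalg.qr(np.random.randn(10, 10))[0]    # a random orthogonal starting tour
GT2 = drag_axis_handle(GT, i=3, dx=0.1, dy=-0.05)
assert np.allclose(GT2 @ GT2.T, np.eye(10))      # rows remain orthonormal
```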
Before we see in detail why this works well, let us formalize the process of the Grand Tour on a standard basis vector $e_i$.
As shown in the diagram below, $e_i$ goes through the orthogonal Grand Tour matrix $GT$ to produce a rotated version of itself, $\tilde{e_i}$.
Then $\pi_2$, a function that keeps only the first two entries of $\tilde{e_i}$, gives the 2D coordinate of the handle shown in the plot, $(x_i, y_i)$:

$$e_i \overset{GT}{\mapsto} \tilde{e_i} \overset{\pi_2}{\mapsto} (x_i, y_i)$$

When the user drags an axis handle on the screen canvas, they induce a delta change $\Delta = (dx, dy)$ on the $xy$-plane.
The coordinate of the handle becomes $(x_i^{(new)}, y_i^{(new)}) := (x_i + dx, y_i + dy)$.
Note that $x_i$ and $y_i$ are the first two coordinates of the axis handle in high dimensions after the Grand Tour rotation, so a delta change on $(x_i, y_i)$ induces a delta change $\tilde{\Delta} := (dx, dy, 0, 0, \cdots)$ on $\tilde{e_i}$:

$$\tilde{e_i} \overset{\tilde{\Delta}}{\mapsto} \tilde{e_i} + \tilde{\Delta}$$

To find a nearby Grand Tour rotation that respects this change, first note that $\tilde{e_i}$ is exactly the $i^{th}$ row of the orthogonal Grand Tour matrix $GT$.
(Recall that our convention is that vectors are in row form and linear transformations are multiplied on the right, so $e_i$ is a row vector whose $i$-th entry is $1$, with $0$s elsewhere, and $\tilde{e_i} := e_i \cdot GT$ is the $i$-th row of $GT$.)
Naturally, we want the new matrix to be the original $GT$ with its $i^{th}$ row replaced by $\tilde{e_i} + \tilde{\Delta}$, i.e. we should add $dx$ and $dy$ to the $(i,1)$ and $(i,2)$ entries of $GT$ respectively:

$$\widetilde{GT} \leftarrow GT, \qquad
\widetilde{GT}_{i,1} \leftarrow GT_{i,1} + dx, \qquad
\widetilde{GT}_{i,2} \leftarrow GT_{i,2} + dy$$

However, $\widetilde{GT}$ is not orthogonal for arbitrary $(dx, dy)$.
To find an approximation of $\widetilde{GT}$ that is orthogonal, we apply [Gram-Schmidt orthonormalization](https://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process) to the rows of $\widetilde{GT}$, with the $i^{th}$ row considered first in the Gram-Schmidt process:

$$GT^{(new)} := \textsf{GramSchmidt}(\widetilde{GT})$$

Note that the $i^{th}$ row is normalized to a unit vector during Gram-Schmidt, so the resulting position of the handle is

$$\tilde{e_i}^{(new)} = \textsf{normalize}(\tilde{e_i} + \tilde{\Delta}),$$

which may not be exactly the same as $\tilde{e_i} + \tilde{\Delta}$.
However, for any $\tilde{\Delta}$, the norm of the difference is bounded above by $||\tilde{\Delta}||$, as the following figure proves.

![](figs/direct-manipulation-rotation-2d-proof.png)

![](figs/direct-manipulation-rotation-2d.png)

#### The Data Point Mode

We now explain how we directly manipulate data points.
Technically speaking, this method only considers one point at a time; for a group of points, we compute their centroid and directly manipulate that single point.
Thinking more carefully about the process in axis mode gives us a way to drag any single point.
Recall that in axis mode, we added the user's manipulation $\tilde{\Delta} := (dx, dy, 0, 0, \cdots)$ to the position of the $i^{th}$ axis handle $\tilde{e_i}$.
This induces a delta change in the $i^{th}$ row of the Grand Tour matrix $GT$.
Next, as the first step in Gram-Schmidt, we normalized this row:

$$GT_i^{(new)} := \textsf{normalize}(\widetilde{GT}_i) = \textsf{normalize}(\tilde{e_i} + \tilde{\Delta})$$

These two steps make the axis handle move from $\tilde{e_i}$ to $\tilde{e_i}^{(new)} := \textsf{normalize}(\tilde{e_i} + \tilde{\Delta})$.

Looking at the geometry of this movement, the "add-delta-then-normalize" operation on $\tilde{e_i}$ is equivalent to a *rotation* from $\tilde{e_i}$ towards $\tilde{e_i}^{(new)}$, illustrated in the figure below.
This geometric interpretation can be directly generalized to an arbitrary data point.

![](figs/direct-manipulation-rotation-3d.png)

The figure shows the case in 3D, but in a higher dimensional space it is essentially the same, since the two vectors $\tilde{e_i}$ and $\tilde{e_i} + \tilde{\Delta}$ only span a 2-subspace.
Now we have a nice geometric intuition about direct manipulation: dragging a point induces a *simple rotation* (a [simple rotation](https://en.wikipedia.org/wiki/Rotations_in_4-dimensional_Euclidean_space#Simple_rotations) is a rotation with only one [plane of rotation](https://en.wikipedia.org/wiki/Plane_of_rotation#Simple_rotations)) in high dimensional space.
This intuition is precisely how we implemented direct manipulation on arbitrary data points, which we specify below.

Generalizing this observation from an axis handle to an arbitrary data point, we want to find the rotation that moves the centroid of a selected subset of data points, $\tilde{c}$, to

$$\tilde{c}^{(new)} := (\tilde{c} + \tilde{\Delta}) \cdot ||\tilde{c}|| \, / \, ||\tilde{c} + \tilde{\Delta}||$$

![](figs/direct-manipulation-rotation-2d-perp.png)

First, the angle of rotation can be found from the cosine similarity of the two vectors:

$$\theta = \arccos \left( \frac{\langle \tilde{c}, \tilde{c}^{(new)} \rangle}{||\tilde{c}|| \cdot ||\tilde{c}^{(new)}||} \right)$$

Next, to find the matrix form of the rotation, we need a convenient basis.
Let $Q$ be a change-of-basis matrix with orthonormal rows, whose first two rows span the 2-subspace $\textrm{span}(\tilde{c}, \tilde{c}^{(new)})$.
For example, we can let its first row be $\textsf{normalize}(\tilde{c})$, its second row be $\textsf{normalize}(\tilde{c}^{(new)}_{\perp})$, the normalized component of $\tilde{c}^{(new)}$ orthogonal to $\tilde{c}$ within $\textrm{span}(\tilde{c}, \tilde{c}^{(new)})$,

$$\tilde{c}^{(new)}_{\perp} := \tilde{c}^{(new)} - ||\tilde{c}^{(new)}|| \cdot \cos\theta \, \frac{\tilde{c}}{||\tilde{c}||},$$

and let the remaining rows complete the whole space:

$$Q := \begin{bmatrix}
\cdots \ \textsf{normalize}(\tilde{c}) \ \cdots \\
\cdots \ \textsf{normalize}(\tilde{c}^{(new)}_{\perp}) \ \cdots \\
P
\end{bmatrix}$$

where $P$ completes the remaining space.

Making use of $Q$, we can write down the matrix that rotates the plane $\textrm{span}(\tilde{c}, \tilde{c}^{(new)})$ by the angle $\theta$:

$$\rho = Q^T
\begin{bmatrix}
\cos\theta & \sin\theta & 0 & 0 & \cdots \\
-\sin\theta & \cos\theta & 0 & 0 & \cdots \\
0 & 0 & & & \\
\vdots & \vdots & & I &
\end{bmatrix}
Q =: Q^T R_{1,2}(\theta) \, Q$$

The new Grand Tour matrix is the matrix product of the original $GT$ and $\rho$:

$$GT^{(new)} := GT \cdot \rho$$

Now we can see the connection between the axis mode and the data point mode.
In data point mode, finding $Q$ can be done by Gram-Schmidt: let the first basis vector be $\tilde{c}$; find the orthogonal component of $\tilde{c}^{(new)}$ within $\textrm{span}(\tilde{c}, \tilde{c}^{(new)})$; then repeatedly take a random vector, find its component orthogonal to the span of the current basis vectors, and add it to the basis set.
In axis mode, the $i^{th}$-row-first Gram-Schmidt does the rotation and the change of basis in one step.
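Putting the pieces together, a compact numpy sketch of the data point mode (under the row-vector convention above, with names of our own choosing and no handling of degenerate drags) might look like this:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def drag_centroid(GT, c_tilde, dx, dy):
    """Rotate the Grand Tour so that the rotated centroid c_tilde moves toward the drag target."""
    n = GT.shape[0]
    delta = np.zeros(n)
    delta[0], delta[1] = dx, dy
    # Target position: same norm as c_tilde, pushed toward the drag direction.
    c_new = (c_tilde + delta) * np.linalg.norm(c_tilde) / np.linalg.norm(c_tilde + delta)

    # Angle of the simple rotation, from cosine similarity.
    theta = np.arccos(np.clip(np.dot(normalize(c_tilde), normalize(c_new)), -1.0, 1.0))

    # Orthonormal basis Q whose first two rows span span(c_tilde, c_new).
    q1 = normalize(c_tilde)
    q2 = normalize(c_new - np.dot(c_new, q1) * q1)
    Q = np.eye(n)
    Q[0], Q[1] = q1, q2
    for j in range(2, n):                      # complete the basis by Gram-Schmidt
        v = Q[j].copy()
        for b in Q[:j]:
            v -= np.dot(v, b) * b
        Q[j] = normalize(v)

    # Rotation by theta in the plane of the first two Q-basis directions.
    R = np.eye(n)
    R[:2, :2] = [[np.cos(theta), np.sin(theta)],
                 [-np.sin(theta), np.cos(theta)]]
    rho = Q.T @ R @ Q
    return GT @ rho                            # GT^(new) := GT . rho
```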
### Layer Transitions

#### ReLU Layers

When the $l^{th}$ layer is a ReLU, the output activation is $X^{l} = \mathrm{ReLU}(X^{l-1})$.
Since ReLU does not change the dimensionality and is applied coordinate-wise, we can animate the transition by a simple linear interpolation: for a time parameter $t \in [0,1]$,

$$X^{(l-1) \to l}(t) := (1-t) \, X^{l-1} + t \, X^{l}$$

#### Linear Layers

Transitions between linear layers can seem complicated, but as we will show, this comes from choosing mismatched bases on either side of the transition.
If $X^{l} = X^{l-1} M$, where $M \in \mathbb{R}^{m \times n}$ is the matrix of a linear transformation, then $M$ has a singular value decomposition (SVD)

$$M = U \Sigma V^T,$$

where $U \in \mathbb{R}^{m \times m}$ and $V^T \in \mathbb{R}^{n \times n}$ are orthogonal and $\Sigma \in \mathbb{R}^{m \times n}$ is diagonal.
For arbitrary $U$ and $V^T$, the transformation applied to $X^{l-1}$ is a composition of a rotation ($U$), a scaling ($\Sigma$) and another rotation ($V^T$), which can look complicated.
However, consider the problem of relating the Grand Tour view of layer $X^{l-1}$ to that of layer $X^{l}$.
The Grand Tour has a single parameter that represents the current rotation of the dataset.
Since our goal is to keep the transition consistent, we notice that $U$ and $V^T$ have essentially no significance: they are just rotations of the view that can be exactly "canceled" by changing the rotation parameter of the Grand Tour in either layer.
Hence, instead of showing all of $M$, we want the transition to animate only the effect of $\Sigma$.
$\Sigma$ is a coordinate-wise scaling, so we can animate it just like the ReLU case, after the proper change of basis.
Given $X^{l} = X^{l-1} U \Sigma V^T$, we have

$$(X^{l} V) = (X^{l-1} U) \, \Sigma$$

For a time parameter $t \in [0,1]$,

$$X^{(l-1) \to l}(t) := (1-t) \, (X^{l-1} U) + t \, (X^{l} V) = (1-t) \, (X^{l-1} U) + t \, (X^{l-1} U \Sigma)$$
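As a rough sketch of both transitions (in numpy, with illustrative names; it assumes a square $M$, whereas rectangular layers would additionally require padding both sides to a common number of dimensions):

```python
import numpy as np

def relu_transition(X_prev, t):
    """Animate X^{l-1} -> ReLU(X^{l-1}) by straight linear interpolation, t in [0, 1]."""
    X_next = np.maximum(X_prev, 0.0)
    return (1 - t) * X_prev + t * X_next

def linear_transition(X_prev, M, t):
    """Animate X^{l-1} -> X^{l-1} @ M, showing only the effect of the singular values."""
    U, S, Vt = np.linalg.svd(M)        # M = U Σ V^T, assumed square here
    X_next = X_prev @ M
    # (X^l V) = (X^{l-1} U) Σ, so interpolating between the two aligned views
    # animates a pure coordinate-wise scaling.
    return (1 - t) * (X_prev @ U) + t * (X_next @ Vt.T)
```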
#### Convolutional Layers

Convolutional layers can be represented as special linear layers.
With this change of representation, we can animate a convolutional layer in the same way as the previous section.
For 2D convolutions, the change of representation involves flattening the input and output, and repeating the kernel pattern in a sparse matrix $M \in \mathbb{R}^{m \times n}$, where $m$ and $n$ are the dimensionalities of the input and output respectively.
This change of representation is only practical for small dimensionalities (e.g. up to 1000), since we need to solve the SVD of the resulting linear layer.
However, the singular value decomposition of multi-channel 2D convolutions can be computed efficiently, and can then be directly used for alignment.

#### Max-pooling Layers

Animating max-pooling layers is nontrivial because max-pooling is neither linear (a max-pooling layer is only piece-wise linear) nor coordinate-wise.
We replace it by average-pooling followed by a scaling by the ratio of the max to the average.
We compute the matrix form of average-pooling and use its SVD to align the view before and after this layer.
Functionally, our operations produce results equivalent to max-pooling, but this introduces unexpected artifacts.
For example, the max-pooling of the vector $[0.9, 0.9, 0.9, 1.0]$ should "give no credit" to the $0.9$ entries; our implementation, however, will attribute about 25% of the result in the downstream layer to each of those coordinates.


Conclusion
----------

As powerful as t-SNE and UMAP are, they often fail to offer the correspondences we need, and such correspondences can come, surprisingly, from relatively simple methods like the Grand Tour.
The Grand Tour method we presented is particularly useful when direct manipulation from the user is available or desirable.
We believe that it might be possible to design methods that highlight the best of both worlds, using non-linear dimensionality reduction to create intermediate, relatively low-dimensional representations of the activation layers, and using the Grand Tour and direct manipulation to compute the final projection.
", "date_published": "2020-03-16T20:00:00Z", "authors": ["Mingwei Li", "Zhenge Zhao", "Carlos Scheidegger"], "summaries": ["By focusing on linear dimensionality reduction, we show how to visualize many dynamic phenomena in neural networks."], "doi": "10.23915/distill.00025", "journal_ref": "distill-pub", "bibliography": [{"link": "https://epubs.siam.org/doi/pdf/10.1137/0906011", "title": "The grand tour: a tool for viewing multidimensional data"}, {"link": "http://www.jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf", "title": "Visualizing data using t-SNE"}, {"link": "https://arxiv.org/pdf/1802.03426.pdf", "title": "Umap: Uniform manifold approximation and projection for dimension reduction"}, {"link": "https://doi.org/10.1007/s11263-015-0816-y", "title": "ImageNet Large Scale Visual Recognition Challenge"}, {"link": "https://distill.pub/2017/feature-visualization/", "title": "Feature visualization"}, {"link": "http://yann.lecun.com/exdb/mnist/", "title": "MNIST handwritten digit database"}, {"link": "https://github.com/zalandoresearch/fashion-mnist", "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"}, {"link": "https://www.cs.toronto.edu/~kriz/cifar.html", "title": "Learning multiple layers of features from tiny images"}, {"link": "https://www.cs.toronto.edu/~hinton/absps/reluICML.pdf", "title": "Rectified linear units improve restricted boltzmann machines"}, {"link": "http://www.cs.rug.nl/~alext/PAPERS/EuroVis16/paper.pdf", "title": "Visualizing time-dependent data using dynamic t-SNE"}, {"link": "https://distill.pub/2016/misread-tsne/", "title": "How to use t-sne effectively"}, {"link": "http://www.ams.org/publicoutreach/feature-column/fcarc-svd", "title": "We Recommend a Singular Value Decomposition"}, {"link": "https://arxiv.org/pdf/1805.10408.pdf", "title": "The singular values of convolutional layers"},
{"link": "https://arxiv.org/pdf/1505.00387.pdf", "title": "Highway networks"}, {"link": "https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf", "title": "Going deeper with convolutions"}]} {"id": "5d14cefefdea1bbc999ab25065fae8fc", "title": "Thread: Circuits", "url": "https://distill.pub/2020/circuits", "source": "distill", "source_type": "blog", "text": "In the original narrative of deep learning, each neuron builds\n progressively more abstract, meaningful features by composing features in\n the preceding layer. In recent years, there’s been some skepticism of this\n view, but what happens if you take it really seriously?\n \n\n\n\n InceptionV1 is a classic vision model with around 10,000 unique neurons — a large number, but still on a scale that a group effort could attack.\n What if you simply go through the model, neuron by neuron, trying to\n understand each one and the connections between them? The circuits\n collaboration aims to find out.\n \n\n\n\nArticles & Comments\n-------------------\n\n\n\n The natural unit of publication for investigating circuits seems to be\n short papers on individual circuits or small families of features.\n Compared to normal machine learning papers, this is a small and unusual\n topic for a paper.\n \n\n\n\n To facilitate exploration of this direction, Distill is inviting a\n “thread” of short articles on circuits, interspersed with critical\n commentary by experts in adjacent fields. The thread will be a living\n document, with new articles added over time, organized through an open\n slack channel (#circuits in the\n [Distill slack](http://slack.distill.pub)). Content in this\n thread should be seen as early stage exploratory research.\n \n\n\nArticles and comments are presented below in chronological order:\n\n\n\n\n\n\n### \n[Zoom In: An Introduction to Circuits](zoom-in/)\n\n\n\n### Authors\n\n\n### Affiliations\n\n\n\n[Chris Olah](https://colah.github.io/),\n [Nick Cammarata](http://nickcammarata.com/),\n [Ludwig Schubert](https://schubert.io/),\n [Gabriel Goh](http://gabgoh.github.io/),\n [Michael Petrov](https://twitter.com/mpetrov),\n [Shan Carter](http://shancarter.com/)\n\n\n\n\n[OpenAI](https://openai.com/)\n\n\n\n\n\n\n\n Does it make sense to treat individual neurons and the connections\n between them as a serious object of study? 
This essay proposes three\n claims which, if true, might justify serious inquiry into them: the\n existence of meaningful features, the existence of meaningful circuits\n between features, and the universality of those features and circuits.\n \n \n\n It also discuses historical successes of science “zooming in,” whether\n we should be concerned about this research being qualitative, and\n approaches to rigorous investigation.\n \n \n\n[Read Full Article](zoom-in/)\n\n\n\n\n\n\n\n### \n[An Overview of Early Vision in InceptionV1](early-vision/)\n\n\n\n### Authors\n\n\n### Affiliations\n\n\n\n[Chris Olah](https://colah.github.io/),\n [Nick Cammarata](http://nickcammarata.com/),\n [Ludwig Schubert](https://schubert.io/),\n [Gabriel Goh](http://gabgoh.github.io/),\n [Michael Petrov](https://twitter.com/mpetrov),\n [Shan Carter](http://shancarter.com/)\n\n\n\n\n[OpenAI](https://openai.com/)\n\n\n\n\n\n\n An overview of all the neurons in the first five layers of\n InceptionV1, organized into a taxonomy of “neuron groups.” This\n article sets the stage for future deep dives into particular aspects\n of early vision.\n \n \n\n [Read Full Article](early-vision/) \n\n\n\n\n\n\n\n### \n[Curve Detectors](curve-detectors/)\n\n\n\n### Authors\n\n\n### Affiliations\n\n\n\n[Nick Cammarata](http://nickcammarata.com/),\n [Gabriel Goh](http://gabgoh.github.io/),\n [Shan Carter](http://shancarter.com/),\n [Ludwig Schubert](https://schubert.io/),\n [Michael Petrov](https://twitter.com/mpetrov),\n [Chris Olah](https://colah.github.io/)\n\n\n\n\n[OpenAI](https://openai.com/)\n\n\n\n\n\n\n\n\n Every vision model we’ve explored in detail contains neurons which\n detect curves. Curve detectors is the first in a series of three\n articles exploring this neuron family in detail.\n \n \n\n[Read Full Article](curve-detectors/)\n\n\n\n\n\n\n\n### \n[Naturally Occurring Equivariance in Neural Networks](equivariance/)\n\n\n\n### Authors\n\n\n### Affiliations\n\n\n\n[Chris Olah](https://colah.github.io/),\n [Nick Cammarata](http://nickcammarata.com/), [Chelsea Voss](https://csvoss.com/),\n [Ludwig Schubert](https://schubert.io/),\n [Gabriel Goh](http://gabgoh.github.io/)\n\n\n\n\n[OpenAI](https://openai.com/)\n\n\n\n\n\n\n\n\n Neural networks naturally learn many transformed copies of the same\n feature, connected by symmetric weights.\n \n \n\n[Read Full Article](equivariance/)\n\n\n\n\n\n\n\n### \n[High-Low Frequency Detectors](frequency-edges/)\n\n\n\n### Authors\n\n\n### Affiliations\n\n\n\n[Ludwig Schubert](https://schubert.io/),\n [Chelsea Voss](https://csvoss.com/),\n [Nick Cammarata](http://nickcammarata.com/),\n [Gabriel Goh](http://gabgoh.github.io/),\n [Chris Olah](https://colah.github.io/)\n\n\n\n\n[OpenAI](https://openai.com/)\n\n\n\n\n\n\n A family of early-vision neurons reacting to directional transitions\n from high to low spatial frequency.\n \n \n\n[Read Full Article](frequency-edges/)\n\n\n\n\n\n\n\n### \n[Curve Circuits](curve-circuits/)\n\n\n\n### Authors\n\n\n### Affiliations\n\n\n\n[Nick Cammarata](http://nickcammarata.com/),\n [Gabriel Goh](http://gabgoh.github.io/),\n [Shan Carter](http://shancarter.com/),\n [Chelsea Voss](https://csvoss.com/),\n [Ludwig Schubert](https://schubert.io/),\n [Chris Olah](https://colah.github.io/)\n\n\n\n\n[OpenAI](https://openai.com/)\n\n\n\n\n\n\n We reverse engineer a non-trivial learned algorithm from the weights\n of a neural network and use its core ideas to craft an artificial\n artificial neural network from scratch that reimplements it.\n \n \n\n[Read Full 
Article](curve-circuits/)\n\n\n\n\n\n\n\n### \n[Visualizing Weights](visualizing-weights/)\n\n\n\n### Authors\n\n\n### Affiliations\n\n\n\n[Chelsea Voss](https://csvoss.com),\n [Nick Cammarata](http://nickcammarata.com),\n [Gabriel Goh](https://gabgoh.github.io),\n [Michael Petrov](https://twitter.com/mpetrov),\n [Ludwig Schubert](https://schubert.io/),\n Ben Egan,\n [Swee Kiat Lim](https://greentfrapp.github.io/),\n [Chris Olah](https://colah.github.io/)\n\n\n\n\n[OpenAI](https://openai.com/),\n [Mount Royal University](https://mtroyal.ca),\n [Stanford University](https://stanford.edu)\n\n\n\n\n\n\n We present techniques for visualizing, contextualizing, and\n understanding neural network weights.\n \n \n\n[Read Full Article](visualizing-weights/)\n\n\n\n\n\n\n\n### \n[Branch Specialization](branch-specialization/)\n\n\n\n### Authors\n\n\n### Affiliations\n\n\n\n[Chelsea Voss](https://csvoss.com),\n [Gabriel Goh](https://gabgoh.github.io),\n [Nick Cammarata](http://nickcammarata.com),\n [Michael Petrov](https://twitter.com/mpetrov),\n [Ludwig Schubert](https://schubert.io/),\n [Chris Olah](https://colah.github.io/)\n\n\n\n\n[OpenAI](https://openai.com/)\n\n\n\n\n\n\n When a neural network layer is divided into multiple branches, neurons\n self-organize into coherent groupings.\n \n \n\n[Read Full Article](branch-specialization/)\n\n\n\n\n\n\n\n### \n[Weight Banding](weight-banding/)\n\n\n\n### Authors\n\n\n### Affiliations\n\n\n\n[Michael Petrov](https://twitter.com/mpetrov),\n [Chelsea Voss](https://csvoss.com),\n [Ludwig Schubert](https://schubert.io/),\n [Nick Cammarata](http://nickcammarata.com),\n [Gabriel Goh](https://gabgoh.github.io),\n [Chris Olah](https://colah.github.io/)\n\n\n\n\n[OpenAI](https://openai.com/)\n\n\n\n\n\n\n Weights in the final layer of common visual models appear as horizontal bands. We investigate how and why.\n \n \n\n[Read Full Article](weight-banding/)\n\n\n\n\n\n\n#### This is a living document\n\n\n\n Expect more articles on this topic, along with critical comments from\n experts.\n \n\n\n\nGet Involved\n------------\n\n\n\n The Circuits thread is open to articles exploring individual features,\n circuits, and their organization within neural networks. Critical\n commentary and discussion of existing articles is also welcome. The thread\n is organized through the open `#circuits` channel on the\n [Distill slack](http://slack.distill.pub). Articles can be\n suggested there, and will be included at the discretion of previous\n authors in the thread, or in the case of disagreement by an uninvolved\n editor.\n \n\n\n\n If you would like get involved but don’t know where to start, small\n projects may be available if you ask in the channel.\n \n\n\nAbout the Thread Format\n-----------------------\n\n\n\n Part of Distill’s mandate is to experiment with new forms of scientific\n publishing. We believe that that reconciling faster and more continuous\n approaches to publication with review and discussion is an important open\n problem in scientific publishing.\n \n\n\n\n Threads are collections of short articles, experiments, and critical\n commentary around a narrow or unusual research topic, along with a slack\n channel for real time discussion and collaboration. They are intended to\n be earlier stage than a full Distill paper, and allow for more fluid\n publishing, feedback and discussion. We also hope they’ll allow for wider\n participation. 
Think of a cross between a Twitter thread, an academic\n workshop, and a book of collected essays.\n \n\n\n\n Threads are very much an experiment. We think it’s possible they’re a\n great format, and also possible they’re terrible. We plan to trial two\n such threads and then re-evaluate our thought on the format.", "date_published": "2020-03-10T20:00:00Z", "authors": ["Nick Cammarata", "Shan Carter", "Gabriel Goh", "Chris Olah", "Michael Petrov", "Ludwig Schubert", "Chelsea Voss", "Swee Kiat Lim", "Chris Olah", "Nick Cammarata", "Ludwig Schubert", "Gabriel Goh", "Michael Petrov", "Shan Carter", "Chris Olah", "Nick Cammarata", "Ludwig Schubert", "Gabriel Goh", "Michael Petrov", "Shan Carter", "Nick Cammarata", "Gabriel Goh", "Shan Carter", "Ludwig Schubert", "Michael Petrov", "Chris Olah", "Chris Olah", "Nick Cammarata", "Chelsea Voss", "Ludwig Schubert", "Gabriel Goh", "Ludwig Schubert", "Chelsea Voss", "Nick Cammarata", "Gabriel Goh", "Chris Olah", "Nick Cammarata", "Gabriel Goh", "Shan Carter", "Chelsea Voss", "Ludwig Schubert", "Chris Olah", "Chelsea Voss", "Nick Cammarata", "Gabriel Goh", "Michael Petrov", "Ludwig Schubert", "Swee Kiat Lim", "Chris Olah", "Chelsea Voss", "Gabriel Goh", "Nick Cammarata", "Michael Petrov", "Ludwig Schubert", "Chris Olah", "Michael Petrov", "Chelsea Voss", "Ludwig Schubert", "Nick Cammarata", "Gabriel Goh", "Chris Olah"], "summaries": ["What can we learn if we invest heavily in reverse engineering a single neural network?"], "doi": "10.23915/distill.00024", "journal_ref": "distill-pub", "bibliography": []} {"id": "ed5ab808068ad61f2b5d9b519a94db3d", "title": "Zoom In: An Introduction to Circuits", "url": "https://distill.pub/2020/circuits/zoom-in", "source": "distill", "source_type": "blog", "text": "![](images/multiple-pages.svg)\n\n This article is part of the [Circuits thread](/2020/circuits/), an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks.\n \n\n\n\n[Circuits Thread](/2020/circuits/)\n[An Overview of Early Vision in InceptionV1](/2020/circuits/early-vision/)\n\n\n\n### Contents\n\n\n\n[Three Speculative Claims](#three-speculative-claims)\n[Claim 1: Features](#claim-1)\n* [Example 1: Curve Detectors](#claim-1-curves)\n* [Example 2: High-Low Frequency Detectors](#claim-1-hilo)\n* [Example 3: Pose-Invariant Dog Head Detector](#claim-1-dog)\n* [Polysemantic Neurons](#claim-1-polysemantic)\n\n\n[Claim 2: Circuits](#claim-2)\n* [Circuit 1: Curve Detectors](#claim-2-curves)\n* [Circuit 2: Oriented Dog Head Detection](#claim-2-dog)\n* [Circuit 3: Cars in Superposition](#claim-2-superposition)\n* [Circuit Motifs](#claim-2-motifs)\n\n\n[Claim 3: Universality](#claim-3)\n[Interpretability as a Natural Science](#natural-science)\n[Closing Thoughts](#closing)\n\n\n\n\n\n\n Many important transition points in the history of science have been moments when science “zoomed in.”\n At these points, we develop a visualization or tool that allows us to see the world in a new level of detail, and a new field of science develops to study the world through this lens.\n \n\n\n\n For example, microscopes let us see cells, leading to cellular biology. Science zoomed in. Several techniques including x-ray crystallography let us see DNA, leading to the molecular revolution. Science zoomed in. Atomic theory. Subatomic particles. Neuroscience. 
Science zoomed in.\n \n\n\n\n These transitions weren’t just a change in precision: they were qualitative changes in what the objects of scientific inquiry are.\n For example, cellular biology isn’t just more careful zoology.\n It’s a new kind of inquiry that dramatically shifts what we can understand.\n \n\n\n\n The famous examples of this phenomenon happened at a very large scale,\n but it can also be the more modest shift of a small research community realizing they can now study their topic in a finer grained level of detail.\n \n\n\n\n\n![](./images/micrographia2.jpg)\n\n Hooke’s Micrographia revealed a rich microscopic world as seen\n through a microscope, including the initial discovery of cells.\n \nImages from the National Library of Wales.\n \n\nJust as the early microscope hinted at a new world of cells and microorganisms, visualizations of artificial neural networks have revealed tantalizing hints and glimpses of a rich inner world within our models (e.g. ).\n This has led us to wonder: Is it possible that deep learning is at a similar, albeit more modest, transition point?\n \n\n\n\n Most work on interpretability aims to give simple explanations of an entire neural network’s behavior.\n But what if we instead take an approach inspired by neuroscience or cellular biology — an approach of zooming in?\n What if we treated individual neurons, even individual weights, as being worthy of serious investigation?\n What if we were willing to spend thousands of hours tracing through every neuron and its connections?\n What kind of picture of neural networks would emerge?\n \n\n\n\n In contrast to the typical picture of neural networks as a black box, we’ve been surprised how approachable the network is on this scale.\n Not only do neurons seem understandable (even ones that initially seemed inscrutable), but the “circuits” of connections between them seem to be meaningful algorithms corresponding to facts about the world.\n You can watch a circle detector be assembled from curves.\n You can see a dog head be assembled from eyes, snout, fur and tongue.\n You can observe how a car is composed from wheels and windows.\n You can even find circuits implementing simple logic: cases where the network implements AND, OR or XOR over high-level visual features.\n \n\n\n\n\n![](./images/deepdream.jpg)\n\n Over the last few years, we’ve seen many incredible visualizations and analyses hinting at a rich world of internal features in modern\n neural networks. 
Above, we see a [DeepDream](https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html) image, which sparked a great deal of excitement in this space.\n \n\n\n This introductory essay offers a high-level overview of our thinking and some of the working principles that we’ve found useful in this line of research.\n In future articles, we and our collaborators will publish detailed explorations of this inner world.\n\n\n \n\n\n But the truth is that we’ve only scratched the surface of understanding a single vision model.\n If these questions resonate with you, you are welcome to join us and our collaborators in the Circuits project, an open scientific collaboration hosted on the [Distill slack](http://slack.distill.pub/).\n\n\n\n \n\n\n\n---\n\n\nThree Speculative Claims\n------------------------\n\n\nOne of the earliest articulations of something approaching modern cell theory was three claims by Theodor Schwann — who you may know for Schwann cells — in 1839:\n\n\n\n\n![](./images/schwann-book.jpg)\n\n#### Schwann’s Claims about Cells\n\n\n\nClaim 1\n The cell is the unit of structure, physiology, and organization in living things.\n \n\nClaim 2\n The cell retains a dual existence as a distinct entity and a building block in the construction of organisms.\n \n\nClaim 3\n Cells form by free-cell formation, similar to the formation of crystals.\n \n\n This translation/summarization of Schwann’s claims can be found in many biology texts; we were unable to determine what the original source of the translation is. The image of Schwann's book is from the [Deutsches Textarchiv](http://www.deutschestextarchiv.de/book/show/schwann_mikroskopische_1839).\n\n\n\n\nThe first two of these claims are likely familiar, persisting in modern cellular theory. The third is likely not familiar, since it turned out to be horribly wrong.\n\n\nWe believe there’s a lot of value in articulating a strong version of something one may believe to be true, even if it might be false like Schwann’s third claim. In this spirit, we offer three claims about neural networks. They are intended both as empirical claims about the nature of neural networks, and also as normative claims about how it’s useful to understand them.\n\n\n\n\n![](./images/atlas-book-crop.png)\n\n#### Three Speculative Claims about Neural Networks\n\n\n\nClaim 1: Features\n Features are the fundamental unit of neural networks. 
\n\n They correspond to directions.\n By “direction” we mean a linear combination of neurons in a layer.\n You can think of this as a direction vector in the vector space of activations of neurons in a given layer.\n Often, we find it most helpful to talk about individual neurons,\n but we’ll see that there are some cases where other combinations are a more useful way to analyze networks — especially when neurons are “polysemantic.”\n (See the [glossary](#glossary-direction) for a detailed definition.)\n \n These features can be rigorously studied and understood.\n \n\nClaim 2: Circuits\n Features are connected by weights, forming circuits.\n A “circuit” is a computational subgraph of a neural network.\n It consists of a set of features, and the weighted edges that go between them in the original\n network.\n Often, we study quite small circuits — say with less than a dozen features — but they can also be much larger.\n (See the [glossary](#glossary-circuit) for a detailed definition.)\n \n\n These circuits can also be rigorously studied and understood.\n \n\nClaim 3: Universality\n Analogous features and circuits form across models and tasks.\n \n\n Left: An [activation atlas](https://distill.pub/2019/activation-atlas/) visualizing part of the space neural network features can represent.\n \n\n\n\nThese claims are deliberately speculative.\n They also aren’t totally novel: claims along the lines of (1) and (3) have been suggested before, as we’ll discuss in more depth below.\n \n\n\n\n But we believe these claims are important to consider because, if true, they could form the basis of a new “zoomed in” field of\n interpretability. In the following sections, we’ll discuss each one individually and present some of the evidence that has led us to believe they might be true.\n\n\n\n\n---\n\n\nClaim 1: Features\n-----------------\n\n\n\n .claim-quote {\n border-left: 1px solid #CCC;\n padding-left: 20px;\n color: #555;\n margin-bottom: 30px;\n }\n \n\n Features are the fundamental unit of neural networks.\n They correspond to directions. They can be rigorously studied and understood.\n \n\n\n\n We believe that neural networks consist of meaningful, understandable features. Early layers contain features like edge or curve detectors, while later layers have features like floppy ear detectors or wheel detectors.\n The community is divided on whether this is true.\n While many researchers treat the existence of meaningful neurons as an almost trivial fact — there’s even a small literature studying them  — many others are deeply skeptical and believe that past cases of neurons that seemed to track meaningful latent variables were mistaken .\n The community disagreement on meaningful features is hard to pin down, and only partially expressed in the literature. Foundational descriptions of deep learning often describe neural networks as detecting a hierarchy of meaningful features , and a number of papers have been written demonstrating seemingly meaningful features in different domains domains. At the same time, a more skeptical parallel literature has developed suggesting that neural networks primarily or only focus on texture, local structure, or imperceptible patterns , that meaningful features, when they exist, are less important than uninterpretable ones and that seemingly interpretable neurons may be misunderstood . Although many of these papers express a highly nuanced view, that isn’t always how they’ve been understood. 
A number of media articles have been written embracing strong versions of these views, and we anecdotally find that the belief that neural networks don’t understand anything more than texture is quite common. Finally, people often have trouble articulating their exact views, because they don’t have clear language for articulating nuances between “a texture detector highly correlated with an object” and “an object detector.”\n Nevertheless, thousands of hours of studying individual neurons have led us to believe the typical case is that neurons (or in some cases, other directions in the vector space of neuron activations) are understandable.\n\n \n\n\n Of course, being understandable doesn’t mean being simple or easily understandable.\n Many neurons are initially mysterious and don’t follow our a priori guesses of what features might exist!\n However, our experience is that there’s usually a simple explanation behind these neurons, and that they’re actually doing something quite natural.\n For example, we were initially confused by high-low frequency detectors (discussed below) but in retrospect, they are simple and elegant.\n\n\n\n\n\n This introductory essay will only give an overview of a couple examples we think are illustrative, but it will be followed both by deep dives carefully characterizing individual features, and broad overviews sketching out all the features we understand to exist.\n We will take our examples from InceptionV1 for now, but believe these claims hold generally and will discuss other models in the final section on universality.\n\n \n\n\n Regardless of whether we’re correct or mistaken about meaningful features,\n we believe this is an important question for the community to resolve.\n We hope that introducing several specific carefully explored examples of seemingly understandable features will help advance the dialogue.\n\n\n\n\n### Example 1: Curve Detectors\n\n\n\n Curve detecting neurons can be found in every non-trivial vision model we’ve carefully examined.\n These units are interesting because they straddle the boundary between features the community broadly agrees exist (e.g. edge detectors) and features for which there’s significant skepticism (e.g. high-level features such as ears, automotives, and faces).\n \n\n\nWe’ll focus on curve detectors in layer [`mixed3b`](/2020/circuits/early-vision/#mixed3b), an early layer of InceptionV1. These units responded to curved lines and boundaries with a radius of around 60 pixels. They are also slightly additionally excited by perpendicular lines along the boundary of the curve, and prefer the two sides of the curve to be different colors.\n\n \n\nCurve detectors are found in families of units, with each member of the family detecting the same curve feature in a different orientation. Together, they jointly span the full range of orientations.\n\n \n\n\n It’s important to distinguish curve detectors from other units which may seem superficially similar.\n In particular, there are many units which use curves to detect a curved sub-component (e.g. 
circles, spirals, S-curves, hourglass shape, 3d curvature, …).\n There are also units which respond to curve related shapes like lines or sharp corners.\n We do not consider these units to be curve detectors.\n\n \n\n\n![](./images/curves.png)\n\n\n\n\n But are these “curve detectors” really detecting curves?\n We will be dedicating an entire later [article](/2020/circuits/curve-detectors/) to exploring this in depth,\n but the summary is that we think the evidence is quite strong.\n \n\n\n\n We offer seven arguments, outlined below.\n It’s worth noting that none of these arguments are curve specific: they’re a useful, general toolkit for testing our understanding of other features as well.\n Several of these arguments — dataset examples, synthetic examples, and tuning curves — are classic methods from visual neuroscience (e.g. ).\n The last three arguments are based on circuits, which we’ll discuss in the next section.\n \n\n\n\n\n![](images/arg-fv.png)\n\n#### Argument 1: Feature Visualization\n\n\nOptimizing the input to cause curve detectors to fire reliably produces curves. This establishes a causal link, since everything in the resulting image was added to cause the neuron to fire more.\n \n\nYou can learn more about feature visualization [here](https://distill.pub/2017/feature-visualization/).\n\n\n\n\n\n![](images/arg-data.png)\n\n#### Argument 2: Dataset Examples\n\n\nThe ImageNet images that cause these neurons to strongly fire are reliably curves in the expected orientation. The images that cause them to fire moderately are generally less perfect curves or curves off orientation.\n\n\n\n\n\n![](images/arg-synthetic.png)\n\n#### Argument 3: Synthetic Examples\n\n\nCurve detectors respond as expected to a range of synthetic curves images created with varying orientations, curvatures, and backgrounds. They fire only near the expected orientation, and do not fire strongly for straight lines or sharp corners.\n\n\n\n\n\n![](images/arg-tune.png)\n\n#### Argument 4: Joint Tuning\n\n\nIf we take dataset examples that cause a neuron to fire and rotate them, they gradually stop firing and the curve detectors in the next orientation begins firing. This shows that they detect rotated versions of the same thing. Together, they tile the full 360 degrees of potential orientations.\n\n\n\n\n\n![](images/arg-weights.png)\n\n#### Argument 5: Feature implementation\n (circuit-based argument)\n\n\nBy looking at the circuit constructing the curve detectors, we can read a curve detection algorithm off of the weights. We also don’t see anything suggestive of a second alternative cause of firing, although there are many smaller weights we don’t understand the role of.\n\n\n\n\n\n![](images/arg-use.png)\n\n#### Argument 6: Feature use\n (circuit-based argument)\n\n\nThe downstream clients of curve detectors are features that naturally involve curves (e.g. circles, 3d curvature, spirals…). The curve detectors are used by these clients in the expected manner.\n\n\n\n\n\n![](images/arg-hand.png)\n\n#### Argument 7: Handwritten Circuits\n (circuit-based argument)\n\n\nBased on our understanding of how curve detectors are implemented,\n we can do a cleanroom reimplementation,\n hand setting all weights to reimplement curve detection.\n These weights are an understandable curve detection algorithm, and significantly mimic the original curve detectors.\n\n\n\n\n\nThe above arguments don’t fully exclude the possibility of some rare secondary case where curve detectors fire for a different kind of stimulus. 
But they do seem to establish\n that (1) curves cause these neurons to fire,\n (2) each unit responds to curves at different angular orientations,\n and (3) if there are other stimuli that cause them to fire those stimuli are rare or cause weaker activations.\n More generally, these arguments seem to meet the evidentiary standards we understand to be used in neuroscience, which has established traditions and institutional knowledge of how to evaluate such claims.\n\n \n\n\n All of these arguments will be explored in detail in the later articles on [curve detectors](/2020/circuits/curve-detectors/) and curve detection circuits.\n\n \n\n### Example 2: High-Low Frequency Detectors\n\n\n\n Curve detectors are an intuitive type of feature — the kind of feature one might guess exists in neural networks a priori.\n Given that they’re present, it’s not surprising we can understand them.\n But what about features that aren’t intuitive? Can we also understand those?\n We believe so.\n\n\n \n\n\n High-low frequency detectors are an example of a less intuitive type of feature. We find them in [early vision](/2020/circuits/early-vision/), and once you understand what they’re doing, they’re quite simple. They look for low-frequency patterns on one side of their receptive field, and high-frequency patterns on the other side. Like curve detectors, high-low frequency detectors are found in families of features that look for the same thing in different orientations.\n\n \n\n\n![](./images/high-low.png)\n\n\nWhy are high-low frequency detectors useful to the network? They seem to be one of several heuristics for detecting the boundaries of objects, especially when the background is out of focus. In a later article, we’ll explore how they’re used in the construction of [sophisticated boundary detectors](/2020/circuits/early-vision/#mixed3b_discussion_boundary).\n\n \n\n\n (One hope some researchers have for interpretability is that understanding models will be able to teach us better abstractions for thinking about the world . High-low frequency detectors are, perhaps, an example of a small success in this: a natural, useful visual feature that we didn’t anticipate in advance.)\n\n \n\nAll seven of the techniques we used to interrogate curve neurons can also be used to study high-low frequency neurons with some tweaking — for instance, rendering synthetic high-low frequency examples. Again we believe these arguments collectively provide strong support for the idea that these really are a family of high-low frequency contrast detectors.\n\n\n\n \n\n### Example 3: Pose-Invariant Dog Head Detector\n\n\nBoth curve detectors and high-low frequency detectors are low-level visual features, found in the early layers of InceptionV1. What about more complex, high-level features?\n\n \n\nLet’s consider this unit which we believe to be a pose-invariant dog detector. As with any neuron, we can create a feature visualization and collect dataset examples. 
If you look at the feature visualization, the geometry is… not possible, but very informative about what it’s looking for and the dataset examples validate it.\n\n \n\n\n![](./images/dog-pose.png)\n\n\n\n It’s worth noting that the combination of feature visualization and dataset examples alone are already quite a strong argument.\n Feature visualization establishes a causal link, while dataset examples test the neuron’s use in practice and whether there are a second type of stimuli that it reacts to.\n But we can bring all our other approaches to analyzing a neuron to bear again.\n For example, we can use a 3D model to generate synthetic dog head images from different angles.\n\n \n\n\n At the same time, some of the approaches we’ve emphasized so far become a lot of effort for these higher-level, more abstract features.\n Thankfully, our circuit-based arguments — which we’ll discuss more soon — will continue to be easy to apply, and give us really powerful tools for understanding and testing high-level features that don’t require a lot of effort.\n\n\n \n\n### Polysemantic Neurons\n\n\nThis essay may be giving you an overly rosy picture: perhaps every neuron yields a nice, human-understandable concept if one seriously investigates it?\n\n \n\n\n Alas, this is not the case.\n Neural networks often contain “polysemantic neurons” that respond to multiple unrelated inputs.\n For example, InceptionV1 contains one neuron that responds to cat faces, fronts of cars, and cat legs.\n\n \n\n\n![](./images/polysemantic.png)\n4e:55 is a polysemantic neuron which responds to cat faces, fronts of cars, and cat legs. It was discussed in more depth in [Feature Visualization](https://distill.pub/2017/feature-visualization/) .\n\nTo be clear, this neuron isn’t responding to some commonality of cars and cat faces. Feature visualization shows us that it’s looking for the eyes and whiskers of a cat, for furry legs, and for shiny fronts of cars — not some subtle shared feature.\n\n \n\n\n We can still study such features, characterizing each different case they fire, and reason about their circuits to some extent.\n Despite this, polysemantic neurons are a major challenge for the circuits agenda, significantly limiting our ability to reason about neural networks.\n Why are polysemantic neurons so challenging? If one neuron with five different meanings connects to another neuron with five different meanings, that’s effectively 25 connections that can’t be considered individually.\n Our hope is that it may be possible to resolve polysemantic neurons,\n perhaps by “unfolding” a network to turn polysemantic neurons into pure features, or training networks to not exhibit polysemanticity in the first place.\n This is essentially the problem studied in the literature of disentangling representations, although at present that literature tends to focus on known features in the latent spaces of generative models.\n\n \n\n\n One natural question to ask is why do polysemantic neurons form?\n In the next section, we’ll see that they seem to result from a phenomenon we call “superposition” where a circuit spreads a feature across many neurons, presumably to pack more features into the limited number of neurons it has available.\n\n\n \n\n\n\n---\n\n\nClaim 2: Circuits\n-----------------\n\n\n\n Features are connected by weights, forming circuits. 
\n These circuits can also be rigorously studied and understood.\n \n\n\n\n All neurons in our network are formed from linear combinations of neurons in the previous layer, followed by ReLU.\n If we can understand the features in both layers, shouldn’t we also be able to understand the connections between them?\n To explore this, we find it helpful to study circuits:\n sub-graphs of the network, consisting a set of tightly linked features and the weights between them.\n\n \n\nThe remarkable thing is how tractable and meaningful these circuits seem to be as objects of study. When we began looking, we expected to find something quite messy. Instead, we’ve found beautiful rich structures, often with [symmetry](/2020/circuits/equivariance/) to them. Once you understand what features they’re connecting together, the individual floating point number weights in your neural network become meaningful! *You can literally read meaningful algorithms off of the weights.*\n\n\nLet’s consider some examples.\n\n \n\n### Circuit 1: Curve Detectors\n\n\nIn the previous section, we discussed curve detectors, a family of units detecting curves in different angular orientations. In this section, we’ll explore how curve detectors are implemented from earlier features and connect to the rest of the model.\n\n \n\n\n Curve detectors are primarily implemented from earlier, less sophisticated curve detectors and line detectors. These curve detectors are used in the next layer to create 3D geometry and complex shape detectors. Of course, there’s a long tail of smaller connections to other features, but this seems to be the primary story.\n\n \n\nFor this introduction, we’ll focus on the interaction of the early curve detectors and our full curve detectors.\n\n \n\n\n![](./images/curve-circuit.png)\n\n\nLet’s focus even more and look at how a single early curve detector connects to a more sophisticated curve detector in the same orientation.\n\n \n\nIn this case, our model is implementing a 5x5 convolution, so the weights linking these two neurons are a 5x5 set of weights, which can be positive or negative.\n Many of the neurons discussed in this article, including curve detectors, live in branches of InceptionV1 that are structured as a 1x1 convolution that reduce the number of channels to a small bottleneck followed by a 3x3 or 5x5 convolution. The weights we present in this essay are the multiplied out version of the 1x1 and larger conv weights. We think it’s often useful to view this as a single low-rank weight matrix, but this technically does ignore one ReLU non-linearity.\n \n\n A positive weight means that if the earlier neuron fires in that position, it excites the late neuron. Conversely a negative weight would mean that it inhibits it.\n\n \n\nWhat we see are strong positive weights, arranged in the shape of the curve detector. We can think of this as meaning that, at each point along the curve, our curve detector is looking for a “tangent curve” using the earlier curve detector.\n\n\n \n\n\n\n![](./images/curve-weights-a.png)\nThe raw weights between the early curve detector and late curve detector in the same orientation are a curve of positive weights surrounded by small negative or zero weights.\n \n\n![](./images/curve-weights-b.png)\nThis can be interpreted as looking for “tangent curves”\nat each point along the curve.\n\n\n\nThis is true for every pair of early and full curve detectors in similar orientations. At every point along the curve, it detects the curve in a similar orientation. 
Similarly, curves in the opposite orientation are inhibitory at every point along the curve.

Curve detectors are excited by earlier detectors in **similar orientations**… and inhibited by earlier detectors in **opposing orientations**.

It's worth reflecting here that we're looking at neural network weights and they're meaningful.

And the structure gets richer the closer you look. For example, if you look at an early curve detector and a full curve detector in similar, but not exactly the same, orientations, you can often see stronger positive weights on the side of the curve it is more aligned with.

It's also worth noting how the weights rotate with the orientation of the curve detector. The symmetry of the problem is reflected as a symmetry in the weights. We call circuits exhibiting this phenomenon "equivariant circuits", and will discuss them in depth in a [later article](/2020/circuits/equivariance/).


### Circuit 2: Oriented Dog Head Detection

The curve detector circuit is a low-level circuit and only spans two layers. In this section, we'll discuss a higher-level circuit spanning four layers. This circuit will also teach us about how neural networks implement sophisticated invariances.

Remember that a huge part of what an ImageNet model has to do is tell apart different animals. In particular, it has to distinguish between a hundred different species of dogs! And so, unsurprisingly, it develops a large number of neurons dedicated to recognizing dog-related features, including heads.

Within this "dog recognition" system, one circuit strikes us as particularly interesting: a collection of neurons that handle dog heads facing to the left and dog heads facing to the right. Over three layers, the network maintains two mirrored pathways, detecting analogous units facing to the left and to the right. At each step, these pathways try to inhibit each other, sharpening the contrast. Finally, it creates invariant neurons which respond to both pathways.

We call this pattern "unioning over cases". The network separately detects two cases (left and right) and then takes a union over them to create invariant "multifaceted" units. Note that, because the two pathways inhibit each other, this circuit actually has some XOR-like properties.
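To make the wiring pattern concrete, here is a deliberately tiny numpy caricature of unioning-over-cases with mutual inhibition; the scalar "evidence" inputs and the inhibition strength are made up for illustration and have nothing to do with InceptionV1's actual weights:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pose_invariant_head(left_evidence, right_evidence, inhibition=1.0):
    """Toy 'union over cases': two mutually inhibiting pathways feeding one invariant unit."""
    left  = relu(left_evidence  - inhibition * right_evidence)  # left-facing pathway
    right = relu(right_evidence - inhibition * left_evidence)   # right-facing pathway
    return relu(left + right)                                   # invariant "multifaceted" unit

print(pose_invariant_head(1.0, 0.0))  # left-facing head only  -> 1.0
print(pose_invariant_head(0.0, 1.0))  # right-facing head only -> 1.0
print(pose_invariant_head(1.0, 1.0))  # both at once: mutual inhibition cancels -> 0.0 (XOR-like)
```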
This circuit is striking because the network could have easily done something much less sophisticated. It could easily create invariant neurons by not caring very much about where the eyes, fur and snout went, and just looking for a jumble of them together. But instead, the network has learned to carve apart the left and right cases and handle them separately. We're somewhat surprised that gradient descent could learn to do this! (To be clear, there are also more direct pathways by which various constituents of heads influence these later head detectors, without going through the left and right pathways.)

But this summary of the circuit is only scratching the surface of what is going on. Every connection between neurons is a convolution, so we can also look at where an input neuron excites the next one. And the model tends to be doing what you might have optimistically hoped. For example, consider these "head with neck" units: the head is only detected on the correct side.

The union step is also interesting to look at in detail. The network doesn't indiscriminately respond to heads in the two orientations: the regions of excitation extend from the center in different directions depending on orientation, allowing snouts to converge to the same point.

There's a lot more to say about this circuit, so we plan to return to it in a future article and analyze it in depth, including testing our theory of the circuit by editing the weights.


### Circuit 3: Cars in Superposition

In `mixed4c`, a mid-late layer of InceptionV1, there is a car-detecting neuron. Using features from the previous layers, it looks for wheels at the bottom of its convolutional window, and windows at the top.

But then the model does something surprising. Rather than create another pure car detector at the next layer, it spreads its car feature over a number of neurons that seem to primarily be doing something else, in particular dog detectors.

This circuit suggests that polysemantic neurons are, in some sense, deliberate. That is, you could imagine a world where the process of detecting cars and dogs was deeply intertwined in the model for some reason, and as a result polysemantic neurons were difficult to avoid. But what we're seeing here is that the model had a "pure neuron" and then mixed it up with other features.

We call this phenomenon superposition.

Why would it do such a thing? We believe superposition allows the model to use fewer neurons, conserving them for more important tasks. As long as cars and dogs don't co-occur, the model can accurately retrieve the dog feature in a later layer, allowing it to store the feature without dedicating a neuron. (Fundamentally, this is a property of the geometry of high-dimensional spaces, which only allow for $n$ orthogonal vectors, but exponentially many almost-orthogonal vectors.)
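That last geometric claim is easy to check numerically. Here is a small, illustrative numpy experiment (the dimensions, vector counts, and seed are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)

for dim in [10, 100, 1000]:
    # Sample 500 random unit vectors and measure how far from orthogonal they are.
    vecs = rng.standard_normal((500, dim))
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    cosines = vecs @ vecs.T
    off_diagonal = np.abs(cosines[~np.eye(500, dtype=bool)])
    # As the dimension grows, even the worst pair is close to orthogonal.
    print(dim, off_diagonal.max())
```

In 10 dimensions some of the 500 random directions end up strongly correlated, while in 1000 dimensions every pair is nearly orthogonal, which is what makes it possible to pack far more roughly independent feature directions than there are neurons.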
### Circuit Motifs

As we've studied circuits throughout InceptionV1 and other models, we've seen the same abstract patterns over and over. [Equivariance](/2020/circuits/equivariance/), as we saw with the curve detectors. Unioning over cases, as we saw with the pose-invariant dog head detector. Superposition, as we saw with the car detector.

In biology, a circuit motif is a recurring pattern in complex graphs like transcription networks or biological neural networks. Motifs are helpful because understanding one motif can give researchers leverage on all graphs where it occurs.

We think it's quite likely that studying motifs will be important in understanding the circuits of artificial neural networks. In the long run, it may be more important than the study of individual circuits. At the same time, we expect investigations of motifs to be well served by first building up a solid foundation of well-understood circuits.

---

Claim 3: Universality
---------------------

Analogous features and circuits form across models and tasks.

It's a widely accepted fact that the first layer of vision models trained on natural images will learn Gabor filters. Once you accept that there are meaningful features in later layers, would it really be surprising for the same features to also form in layers beyond the first one? And once you believe there are analogous features in multiple layers, wouldn't it be natural for them to connect in the same ways?

Universality (or "convergent learning") of features has been suggested before. Prior work has shown that different neural networks can develop highly correlated neurons, and that they learn similar representations at hidden layers. This work is highly suggestive, but there are alternative explanations for analogous features forming. For example, one could imagine two features, such as a fur texture detector and a sophisticated dog body detector, being highly correlated despite being importantly different features. From the meaningful-feature-skeptic perspective, the existing evidence doesn't seem definitive.

Ideally, one would like to characterize several features and then rigorously demonstrate that those features, and not just correlated ones, are forming across many models. Then, to further establish that analogous circuits form, one would want to find analogous features over several layers of multiple models and show that the same weight structure forms between them in each model.

Unfortunately, the only evidence we can offer today is anecdotal: we simply have not yet invested enough in the comparative study of features and circuits to give confident answers. With that said, we have observed that a couple of low-level features seem to form across a variety of vision model architectures (including AlexNet, InceptionV1, InceptionV3, and residual networks) and in models trained on Places365 instead of ImageNet.
We've also observed them repeatedly form in vanilla conv nets trained from scratch on ImageNet.

### Curve detectors

Example curve detector units found in AlexNet (Krizhevsky et al.), InceptionV1 (Szegedy et al.), VGG19 (Simonyan et al.), and ResNetV2-50 (He et al.). The InceptionV1 units can be inspected in OpenAI Microscope, for example [mixed3b 379](https://microscope.openai.com/models/inceptionv1/mixed3b_0/379), [385](https://microscope.openai.com/models/inceptionv1/mixed3b_0/385), [342](https://microscope.openai.com/models/inceptionv1/mixed3b_0/342), and [340](https://microscope.openai.com/models/inceptionv1/mixed3b_0/340).

### High-Low Frequency detectors

Example high-low frequency detector units found in the same four architectures; in InceptionV1, see for example [mixed3a 136](https://microscope.openai.com/models/inceptionv1/mixed3a_0/136).

These results have led us to suspect that the universality hypothesis is likely true, but further work will be needed to understand whether the apparent universality of some low-level vision features is the exception or the rule.

If it turns out that the universality hypothesis is broadly true of neural networks, it will be tempting to speculate: might biological neural networks also learn similar features? Researchers working at the intersection of neuroscience and deep learning have already shown that the units in artificial vision models can be useful for modeling biological neurons. And some of the features we've discovered in artificial neural networks, such as curve detectors, are also believed to exist in biological neural networks. This seems like significant cause for optimism.

One particularly exciting possibility would be if artificial neural networks could predict features which were previously unknown but could then be found in biology. (Some neuroscientists we have spoken to have suggested that high-low frequency detectors might be a candidate for this.) If such a prediction could be made, it would be extremely strong evidence for the universality hypothesis.

Focusing on the study of circuits, is universality really necessary? Unlike the first two claims, it wouldn't be completely fatal to circuits research if this claim turned out to be false. But it does greatly inform what kind of research makes sense. We introduced circuits as a kind of "cellular biology of deep learning." But imagine a world where every species had cells with a completely different set of organelles and proteins. Would it still make sense to study cells in general, or would we limit ourselves to the narrow study of a few kinds of particularly important species of cells?
Similarly, imagine the study of anatomy in a world where every species of animal had a completely unrelated anatomy: would we seriously study anything other than humans and a couple of domestic animals?

In the same way, the universality hypothesis determines what form of circuits research makes sense. If it were true in the strongest sense, one could imagine a kind of "periodic table of visual features" which we observe and catalogue across models. On the other hand, if it were mostly false, we would need to focus on a handful of models of particular societal importance and hope they stop changing every year. There might also be in-between worlds, where some lessons transfer between models but others need to be learned from scratch.

---

Interpretability as a Natural Science
-------------------------------------

*The Structure of Scientific Revolutions* by Thomas Kuhn is a classic text on the history and sociology of science. In it, Kuhn distinguishes between "normal science", in which a scientific community has a paradigm, and "extraordinary science", in which a community lacks a paradigm, either because it never had one or because it was weakened by crisis. It's worth noting that "extraordinary science" is not a desirable state: it's a period where researchers struggle to be productive.

Kuhn's description of pre-paradigmatic fields feels eerily reminiscent of interpretability today. (We were introduced to Kuhn's work and this connection by conversations with Tom McGrath at DeepMind.) There isn't consensus on what the objects of study are, what methods we should use to answer them, or how to evaluate research results. To quote a recent interview with Ian Goodfellow: "For interpretability, I don't think we even have the right definitions."

One particularly challenging aspect of being in a pre-paradigmatic field is that there isn't a shared sense of how to evaluate work in interpretability. There are two common proposals for dealing with this, drawing on the standards of adjacent fields. Some researchers, especially those with a deep learning background, want an "interpretability benchmark" which can evaluate how effective an interpretability method is. Other researchers with an HCI background may wish to evaluate interpretability methods through user studies.

But interpretability could also borrow from a third paradigm: natural science. In this view, neural networks are an object of empirical investigation, perhaps similar to an organism in biology. Such work would try to make empirical claims about a given network, which could be held to the standard of falsifiability.

Why don't we see more of this kind of evaluation of work in interpretability and visualization? To be clear, we do see researchers who take more of this natural science approach, especially in earlier interpretability research.
It just seems less common right now.\n Especially given that there’s so much adjacent ML work which does adopt this frame!\n One reason might be that it’s very difficult to make robustly true statements about the behavior of a neural network as a whole.\n They’re incredibly complicated objects.\n It’s also hard to formalize what the interesting empirical statements about them would, exactly, be.\n And so we often get standards of evaluations more targeted at whether an interpretability method is useful rather than whether we’re learning true statements.\n\n \n\n\n Circuits sidestep these challenges by focusing on tiny subgraphs of a neural network for which rigorous empirical investigation is tractable.\n They’re very much falsifiable: for example, if you understand a circuit, you should be able to predict what will change if you edit the weights.\n In fact, for small enough circuits, statements about their behavior become questions of mathematical reasoning.\n Of course, the cost of this rigor is that statements about circuits are much smaller in scope than overall model behavior.\n But it seems like, with sufficient effort, statements about model behavior could be broken down into statements about circuits.\n If so, perhaps circuits could act as a kind of epistemic foundation for interpretability.\n \n\n\n\n\n---\n\n\nClosing Thoughts\n----------------\n\n\nWe take it for granted that the microscope is an important scientific instrument. It’s practically a symbol of science. But this wasn’t always the case, and microscopes didn’t initially take off as a scientific tool. In fact, they seem to have languished for around fifty years. The turning point was when Robert Hooke published Micrographia, a collection of drawings of things he’d seen using a microscope, including the first picture of a cell.\n \n\n\n\n Our impression is that there is some anxiety in the interpretability community that we aren’t taken very seriously.\n That this research is too qualitative.\n That it isn’t scientific.\n But the lesson of the microscope and cellular biology is that perhaps this is expected.\n The discovery of cells was a qualitative research result.\n That didn’t stop it from changing the world.\n \n\n\n \n\n\n![](images/multiple-pages.svg)\n\n This article is part of the Circuits thread, a collection of short articles and commentary by an open scientific collaboration delving into the inner workings of neural networks. \n\n\n\n\n\n\n[Circuits Thread](/2020/circuits/)\n[An Overview of Early Vision in InceptionV1](/2020/circuits/early-vision/)", "date_published": "2020-03-10T20:00:00Z", "authors": ["Chris Olah", "Nick Cammarata", "Ludwig Schubert", "Gabriel Goh", "Michael Petrov", "Shan Carter"], "summaries": ["By studying the connections between neurons, we can find meaningful algorithms in the weights of neural networks."], "doi": "10.23915/distill.00024.001", "journal_ref": "distill-pub", "bibliography": [{"link": "https://archive.org/details/mobot31753000817897/page/n11/mode/thumb", "title": "Micrographia: or Some Physiological Descriptions of Minute Bodies Made by Magnifying Glasses. 
With Observations and Inquiries Thereupon"}, {"link": "https://arxiv.org/pdf/1506.02078.pdf", "title": "Visualizing and understanding recurrent networks"}, {"link": "https://www.researchgate.net/profile/Aaron_Courville/publication/265022827_Visualizing_Higher-Layer_Features_of_a_Deep_Network/links/53ff82b00cf24c81027da530.pdf", "title": "Visualizing higher-layer features of a deep network"}, {"link": "https://distill.pub/2017/feature-visualization", "title": "Feature Visualization"}, {"link": "https://arxiv.org/pdf/1312.6034.pdf", "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps"}, {"link": "https://arxiv.org/pdf/1412.1897.pdf", "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images"}, {"link": "https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html", "title": "Inceptionism: Going deeper into neural networks"}, {"link": "https://arxiv.org/pdf/1612.00005.pdf", "title": "Plug & play generative networks: Conditional iterative generation of images in latent space"}, {"link": "https://arxiv.org/pdf/1311.2901.pdf", "title": "Visualizing and understanding convolutional networks"}, {"link": "https://arxiv.org/pdf/1704.03296.pdf", "title": "Interpretable Explanations of Black Boxes by Meaningful Perturbation"}, {"link": "https://arxiv.org/pdf/1705.05598.pdf", "title": "PatternNet and PatternLRP--Improving the interpretability of neural networks"}, {"link": "https://arxiv.org/pdf/1906.02715.pdf", "title": "Visualizing and Measuring the Geometry of BERT"}, {"link": "https://distill.pub/2019/activation-atlas", "title": "Activation atlas"}, {"link": "https://arxiv.org/pdf/1904.02323.pdf", "title": "Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations"}, {"link": "https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf", "title": "Distributed representations of words and phrases and their compositionality"}, {"link": "https://arxiv.org/pdf/1704.01444.pdf", "title": "Learning to generate reviews and discovering sentiment"}, {"link": "https://arxiv.org/pdf/1412.6856.pdf", "title": "Object detectors emerge in deep scene cnns"}, {"link": "https://arxiv.org/pdf/1704.05796.pdf", "title": "Network Dissection: Quantifying Interpretability of Deep Visual Representations"}, {"link": "https://arxiv.org/pdf/1711.11561.pdf", "title": "Measuring the tendency of CNNs to Learn Surface Statistical Regularities"}, {"link": "https://arxiv.org/pdf/1811.12231.pdf", "title": "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness"}, {"link": "https://arxiv.org/pdf/1904.00760.pdf", "title": "Approximating cnns with bag-of-local-features models works surprisingly well on imagenet"}, {"link": "https://arxiv.org/pdf/1905.02175.pdf", "title": "Adversarial examples are not bugs, they are features"}, {"link": "https://arxiv.org/pdf/1803.06959.pdf", "title": "On the importance of single directions for generalization"}, {"link": "https://s3.us-east-2.amazonaws.com/hkg-website-assets/static/pages/files/DeepLearning.pdf", "title": "Deep learning"}, {"link": "https://arxiv.org/pdf/1409.4842.pdf", "title": "Going deeper with convolutions"}, {"link": "https://distill.pub/2017/aia/", "title": "Using Artificial Intelligence to Augment Human Intelligence"}, {"link": "https://arxiv.org/pdf/1602.03616.pdf", "title": "Multifaceted feature visualization: 
Uncovering the different types of features learned by each neuron in deep neural networks"}, {"link": "https://doi.org/10.1201/9781420011432", "title": "An introduction to systems biology: design principles of biological circuits"}, {"link": "https://arxiv.org/pdf/1511.07543.pdf", "title": "Convergent learning: Do different neural networks learn the same representations?"}, {"link": "http://papers.nips.cc/paper/7188-svcca-singular-vector-canonical-correlation-analysis-for-deep-learning-dynamics-and-interpretability.pdf", "title": "SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability"}, {"link": "https://arxiv.org/pdf/1905.00414.pdf", "title": "Similarity of neural network representations revisited"}, {"link": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf", "title": "ImageNet Classification with Deep Convolutional Neural Networks"}, {"link": "http://arxiv.org/pdf/1409.1556.pdf", "title": "Very Deep Convolutional Networks for Large-Scale Image Recognition"}, {"link": "http://arxiv.org/pdf/1512.03385.pdf", "title": "Deep Residual Learning for Image Recognition"}, {"link": "https://www.biorxiv.org/content/early/2019/10/20/808907", "title": "Discrete neural clusters encode orientation, curvature and corners in macaque V4"}, {"link": "https://doi.org/10.7208/chicago/9780226458106.001.0001", "title": "The structure of scientific revolutions"}, {"link": "https://www.youtube.com/watch?v=Z6rxFNMGdn0", "title": "Ian Goodfellow: Generative Adversarial Networks"}, {"link": "https://distill.pub/2018/building-blocks", "title": "The Building Blocks of Interpretability"}]} {"id": "fe09c2226ea403295828fbc9d50304ed", "title": "Growing Neural Cellular Automata", "url": "https://distill.pub/2020/growing-ca", "source": "distill", "source_type": "blog", "text": "### Contents\n\n\n[Model](#model)\n[Experiments](#experiment-1)\n* [Learning to Grow](#experiment-1)\n* [What persists, exists](#experiment-2)\n* [Learning to regenerate](#experiment-3)\n* [Rotating the perceptive field](#experiment-4)\n\n\n[Related Work](#related-work)\n[Discussion](#discussion)\n\n\n\n![](images/multiple-pages.svg)\n\n This article is part of the\n [Differentiable Self-organizing Systems Thread](/2020/selforg/),\n an experimental format collecting invited short articles delving into\n differentiable self-organizing systems, interspersed with critical\n commentary from several experts in adjacent fields.\n \n\n\n[Differentiable Self-organizing Systems Thread](/2020/selforg/)\n[Self-classifying MNIST Digits](/2020/selforg/mnist/)\n\n\n Most multicellular organisms begin their life as a single egg cell - a\n single cell whose progeny reliably self-assemble into highly complex\n anatomies with many organs and tissues in precisely the same arrangement\n each time. The ability to build their own bodies is probably the most\n fundamental skill every living creature possesses. Morphogenesis (the\n process of an organism’s shape development) is one of the most striking\n examples of a phenomenon called *self-organisation*. Cells, the tiny\n building blocks of bodies, communicate with their neighbors to decide the\n shape of organs and body plans, where to grow each organ, how to\n interconnect them, and when to eventually stop. 
Understanding the interplay\n of the emergence of complex outcomes from simple rules and\n homeostatic\n Self-regulatory feedback loops trying maintain the body in a stable state\n or preserve its correct overall morphology under external\n perturbations\n feedback loops is an active area of research\n . What is clear\n is that evolution has learned to exploit the laws of physics and computation\n to implement the highly robust morphogenetic software that runs on\n genome-encoded cellular hardware.\n \n\n\n\n This process is extremely robust to perturbations. Even when the organism is\n fully developed, some species still have the capability to repair damage - a\n process known as regeneration. Some creatures, such as salamanders, can\n fully regenerate vital organs, limbs, eyes, or even parts of the brain!\n Morphogenesis is a surprisingly adaptive process. Sometimes even a very\n atypical development process can result in a viable organism - for example,\n when an early mammalian embryo is cut in two, each half will form a complete\n individual - monozygotic twins!\n \n\n\n\n The biggest puzzle in this field is the question of how the cell collective\n knows what to build and when to stop. The sciences of genomics and stem cell\n biology are only part of the puzzle, as they explain the distribution of\n specific components in each cell, and the establishment of different types\n of cells. While we know of many genes that are *required* for the\n process of regeneration, we still do not know the algorithm that is\n *sufficient* for cells to know how to build or remodel complex organs\n to a very specific anatomical end-goal. Thus, one major lynch-pin of future\n work in biomedicine is the discovery of the process by which large-scale\n anatomy is specified within cell collectives, and how we can rewrite this\n information to have rational control of growth and form. It is also becoming\n clear that the software of life possesses numerous modules or subroutines,\n such as “build an eye here”, which can be activated with simple signal\n triggers. Discovery of such subroutines and a\n mapping out of the developmental logic is a new field at the intersection of\n developmental biology and computer science. An important next step is to try\n to formulate computational models of this process, both to enrich the\n conceptual toolkit of biologists and to help translate the discoveries of\n biology into better robotics and computational technology.\n \n\n\n\n Imagine if we could design systems of the same plasticity and robustness as\n biological life: structures and machines that could grow and repair\n themselves. Such technology would transform the current efforts in\n regenerative medicine, where scientists and clinicians seek to discover the\n inputs or stimuli that could cause cells in the body to build structures on\n demand as needed. To help crack the puzzle of the morphogenetic code, and\n also exploit the insights of biology to create self-repairing systems in\n real life, we try to replicate some of the desired properties in an\n *in silico* experiment.\n \n\n\nModel\n-----\n\n\n\n Those in engineering disciplines and researchers often use many kinds of\n simulations incorporating local interaction, including systems of partial\n derivative equation (PDEs), particle systems, and various kinds of Cellular\n Automata (CA). 
We will focus on Cellular Automata models as a roadmap for\n the effort of identifying cell-level rules which give rise to complex,\n regenerative behavior of the collective. CAs typically consist of a grid of\n cells being iteratively updated, with the same set of rules being applied to\n each cell at every step. The new state of a cell depends only on the states\n of the few cells in its immediate neighborhood. Despite their apparent\n simplicity, CAs often demonstrate rich, interesting behaviours, and have a\n long history of being applied to modeling biological phenomena.\n \n\n\n\n Let’s try to develop a cellular automata update rule that, starting from a\n single cell, will produce a predefined multicellular pattern on a 2D grid.\n This is our analogous toy model of organism development. To design the CA,\n we must specify the possible cell states, and their update function. Typical\n CA models represent cell states with a set of discrete values, although\n variants using vectors of continuous values exist. The use of continuous\n values has the virtue of allowing the update rule to be a differentiable\n function of the cell’s neighbourhood’s states. The rules that guide\n individual cell behavior based on the local environment are analogous to the\n low-level hardware specification encoded by the genome of an organism.\n Running our model for a set amount of steps from a starting configuration\n will reveal the patterning behavior that is enabled by such hardware.\n \n\n\n\n So - what is so special about differentiable update rules? They will allow\n us to use the powerful language of loss functions to express our wishes, and\n the extensive existing machinery around gradient-based numerical\n optimization to fulfill them. The art of stacking together differentiable\n functions, and optimizing their parameters to perform various tasks has a\n long history. In recent years it has flourished under various names, such as\n (Deep) Neural Networks, Deep Learning or Differentiable Programming.\n \n\n\n\n\n\n\n\nA single update step of the model.\n\n\n### Cell State\n\n\n\n We will represent each cell state as a vector of 16 real values (see the\n figure above). The first three channels represent the cell color visible to\n us (RGB). The target pattern has color channel values in range [0.0,1.0][0.0, 1.0][0.0,1.0]\n and an α\\alphaα equal to 1.0 for foreground pixels, and 0.0 for background.\n \n\n\n\n The alpha channel (α\\alphaα) has a special meaning: it demarcates living\n cells, those belonging to the pattern being grown. In particular, cells\n having α>0.1\\alpha > 0.1α>0.1 and their neighbors are considered “living”. Other\n cells are “dead” or empty and have their state vector values explicitly set\n to 0.0 at each time step. Thus cells with α>0.1\\alpha > 0.1α>0.1 can be thought of\n as “mature”, while their neighbors with α≤0.1\\alpha \\leq 0.1α≤0.1 are “growing”, and\n can become mature if their alpha passes the 0.1 threshold.\n \n\n\n\n![](figures/alive2.svg)\n\nstate⃗→0.00\\vec{state} \\rightarrow 0.00state⃗→0.00 when no neighbour with α>0.10\\alpha > 0.10α>0.10\n\n\n\n Hidden channels don’t have a predefined meaning, and it’s up to the update\n rule to decide what to use them for. They can be interpreted as\n concentrations of some chemicals, electric potentials or some other\n signaling mechanism that are used by cells to orchestrate the growth. 
In\n terms of our biological analogy - all our cells share the same genome\n (update rule) and are only differentiated by the information encoded the\n chemical signalling they receive, emit, and store internally (their state\n vectors).\n \n\n\n### Cellular Automaton rule\n\n\n\n Now it’s time to define the update rule. Our CA runs on a regular 2D grid of\n 16-dimensional vectors, essentially a 3D array of shape [height, width, 16].\n We want to apply the same operation to each cell, and the result of this\n operation can only depend on the small (3x3) neighborhood of the cell. This\n is heavily reminiscent of the convolution operation, one of the cornerstones\n of signal processing and differential programming. Convolution is a linear\n operation, but it can be combined with other per-cell operations to produce\n a complex update rule, capable of learning the desired behaviour. Our cell\n update rule can be split into the following phases, applied in order:\n \n\n\n\n**Perception.** This step defines what each cell perceives of\n the environment surrounding it. We implement this via a 3x3 convolution with\n a fixed kernel. One may argue that defining this kernel is superfluous -\n after all we could simply have the cell learn the requisite perception\n kernel coefficients. Our choice of fixed operations are motivated by the\n fact that real life cells often rely only on chemical gradients to guide the\n organism development. Thus, we are using classical Sobel filters to estimate\n the partial derivatives of cell state channels in the x⃗\\vec{x}x⃗ and\n y⃗\\vec{y}y⃗​ directions, forming a 2D gradient vector in each direction, for\n each state channel. We concatenate those gradients with the cells own\n states, forming a 16∗2+16=4816\\*2+16=4816∗2+16=48 dimensional *perception vector*, or\n rather *percepted vector,* for each cell.\n \n\n\n\ndef perceive(state\\_grid):\n\n\nsobel\\_x = [[-1, 0, +1],\n\n\n[-2, 0, +2],\n\n\n[-1, 0, +1]]\n\n\nsobel\\_y = transpose(sobel\\_x)\n\n\n# Convolve sobel filters with states\n\n\n# in x, y and channel dimension.\n\n\ngrad\\_x = conv2d(sobel\\_x, state\\_grid)\n\n\ngrad\\_y = conv2d(sobel\\_y, state\\_grid)\n\n\n# Concatenate the cell’s state channels,\n\n\n# the gradients of channels in x and\n\n\n# the gradient of channels in y.\n\n\nperception\\_grid = concat(\n\n\nstate\\_grid, grad\\_x, grad\\_y, axis=2)\n\n\nreturn perception\\_grid\n\n\n\n\n**Update rule.** Each cell now applies a series of operations\n to the perception vector, consisting of typical differentiable programming\n building blocks, such as 1x1-convolutions and ReLU nonlinearities, which we\n call the cell’s “update rule”. Recall that the update rule is learned, but\n every cell runs the same update rule. The network parametrizing this update\n rule consists of approximately 8,000 parameters. Inspired by residual neural\n networks, the update rule outputs an incremental update to the cell’s state,\n which applied to the cell before the next time step. The update rule is\n designed to exhibit “do-nothing” initial behaviour - implemented by\n initializing the weights of the final convolutional layer in the update rule\n with zero. 
We also forego applying a ReLU to the output of the last layer of\n the update rule as the incremental updates to the cell state must\n necessarily be able to both add or subtract from the state.\n \n\n\n\ndef update(perception\\_vector):\n\n\n# The following pseudocode operates on\n\n\n# a single cell’s perception vector.\n\n\n# Our reference implementation uses 1D\n\n\n# convolutions for performance reasons.\n\n\nx = dense(perception\\_vector, output\\_len=128)\n\n\nx = relu(x)\n\n\nds = dense(x, output\\_len=16, weights\\_init=0.0)\n\n\nreturn ds\n\n\n\n\n**Stochastic cell update.** Typical cellular automata update\n all cells simultaneously. This implies the existence of a global clock,\n synchronizing all cells. Relying on global synchronisation is not something\n one expects from a self-organising system. We relax this requirement by\n assuming that each cell performs an update independently, waiting for a\n random time interval between updates. To model this behaviour we apply a\n random per-cell mask to update vectors, setting all update values to zero\n with some predefined probability (we use 0.5 during training). This\n operation can be also seen as an application of per-cell dropout to update\n vectors.\n \n\n\n\ndef stochastic\\_update(state\\_grid, ds\\_grid):\n\n\n# Zero out a random fraction of the updates.\n\n\nrand\\_mask = cast(random(64, 64) < 0.5, float32)\n\n\nds\\_grid = ds\\_grid \\* rand\\_mask\n\n\nreturn state\\_grid + ds\\_grid\n\n\n\n\n**Living cell masking.** We want to model the growth process\n that starts with a single cell, and don’t want empty cells to participate in\n computations or carry any hidden state. We enforce this by explicitly\n setting all channels of empty cells to zeros. A cell is considered empty if\n there is no “mature” (alpha>0.1) cell in its 3x3 neightborhood.\n \n\n\n\ndef alive\\_masking(state\\_grid):\n\n\n# Take the alpha channel as the measure of “life”.\n\n\nalive = max\\_pool(state\\_grid[:, :, 3], (3,3)) > 0.1\n\n\nstate\\_grid = state\\_grid \\* cast(alive, float32)\n\n\nreturn state\\_grid\n\n\n\nExperiment 1: Learning to Grow\n------------------------------\n\n\n\n\n\n\n\nTraining regime for learning a target pattern.\n\n\n\n In our first experiment, we simply train the CA to achieve a target image\n after a random number of updates. This approach is quite naive and will run\n into issues. But the challenges it surfaces will help us refine future\n attempts.\n \n\n\n\n We initialize the grid with zeros, except a single seed cell in the center,\n which will have all channels except RGB\n We set RGB channels of the seed to zero because we want it to be visible\n on the white background.\n set to one. Once the grid is initialized, we iteratively apply the update\n rule. We sample a random number of CA steps from the [64, 96]\n This should be a sufficient number of steps to grow the pattern of the\n size we work with (40x40), even considering the stochastic nature of our\n update rule.\n range for each training step, as we want the pattern to be stable across a\n number of iterations. At the last step we apply pixel-wise L2 loss between\n RGBA channels in the grid and the target pattern. This loss can be\n differentiably optimized\n We observed training instabilities, that were manifesting themselves as\n sudden jumps of the loss value in the later stages of the training. We\n managed to mitigate them by applying per-variable L2 normalization to\n parameter gradients. 
This may have the effect similar to the weight\n normalization . Other training\n parameters are available in the accompanying source code.\n with respect to the update rule parameters by backpropagation-through-time,\n the standard method of training recurrent neural networks.\n \n\n\n\n Once the optimisation converges, we can run simulations to see how our\n learned CAs grow patterns starting from the seed cell. Let’s see what\n happens when we run it for longer than the number of steps used during\n training. The animation below shows the behaviour of a few different models,\n trained to generate different emoji patterns.\n \n\n\n\n\n\n\n\n\n Your browser does not support the video tag.\n \n\n Many of the patterns exhibit instability for longer time periods.\n \n \n\n[Reproduce in a Notebook](https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/notebooks/growing_ca.ipynb#scrollTo=4O4tzfe-GRJ7)\n\n\n\n\n We can see that different training runs can lead to models with drastically\n different long term behaviours. Some tend to die out, some don’t seem to\n know how to stop growing, but some happen to be almost stable! How can we\n steer the training towards producing persistent patterns all the time?\n \n\n\nExperiment 2: What persists, exists\n-----------------------------------\n\n\n\n One way of understanding why the previous experiment was unstable is to draw\n a parallel to dynamical systems. We can consider every cell to be a\n dynamical system, with each cell sharing the same dynamics, and all cells\n being locally coupled amongst themselves. When we train our cell update\n model we are adjusting these dynamics. Our goal is to find dynamics that\n satisfy a number of properties. Initially, we wanted the system to evolve\n from the seed pattern to the target pattern - a trajectory which we achieved\n in Experiment 1. Now, we want to avoid the instability we observed - which\n in our dynamical system metaphor consists of making the target pattern an\n attractor.\n \n\n\n\n One strategy to achieve this is letting the CA iterate for much longer time\n and periodically applying the loss against the target, training the system\n by backpropagation through these longer time intervals. Intuitively we claim\n that with longer time intervals and several applications of loss, the model\n is more likely to create an attractor for the target shape, as we\n iteratively mold the dynamics to return to the target pattern from wherever\n the system has decided to venture. However, longer time periods\n substantially increase the training time and more importantly, the memory\n requirements, given that the entire episode’s intermediate activations must\n be stored in memory for a backwards-pass to occur.\n \n\n\n\n Instead, we propose a “sample pool” based strategy to a similar effect. We\n define a pool of seed states to start the iterations from, initially filled\n with the single black pixel seed state. We then sample a batch from this\n pool which we use in our training step. To prevent the equivalent of\n “catastrophic forgetting” we replace one sample in this batch with the\n original, single-pixel seed state. After concluding the training step , we\n replace samples in the pool that were sampled for the batch with the output\n states from the training step over this batch. 
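The bookkeeping for this pool can be very simple. Below is a minimal illustrative sketch (our own, not the authors' reference code; the class and method names are invented for this example) of a fixed-size pool that supports the sampling and write-back just described:

```python
import numpy as np

class SamplePool:
    """Fixed-size pool of CA states used to seed training batches."""

    def __init__(self, states):
        # states: array of shape [pool_size, height, width, channels]
        self.states = np.asarray(states)

    def sample(self, n, rng=np.random):
        # Draw n states (and their pool indices) uniformly at random.
        idxs = rng.choice(len(self.states), n, replace=False)
        return idxs, self.states[idxs].copy()

    def commit(self, idxs, new_states):
        # Write the post-training states back into the pool, so future
        # batches start from patterns the CA has already partially grown.
        self.states[idxs] = new_states
```

In the training loop sketched below, `pool.sample(32)` and writing `outputs` back into `pool[idxs]` play the roles of `sample` and `commit`.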
The animation below shows a\n random sample of the entries in the pool every 20 training steps.\n \n\n\n\ndef pool\\_training():\n\n\n# Set alpha and hidden channels to (1.0).\n\n\nseed = zeros(64, 64, 16)\n\n\nseed[64//2, 64//2, 3:] = 1.0\n\n\ntarget = targets[‘lizard’]\n\n\npool = [seed] \\* 1024\n\n\nfor i in range(training\\_iterations):\n\n\nidxs, batch = pool.sample(32)\n\n\n# Sort by loss, descending.\n\n\nbatch = sort\\_desc(batch, loss(batch))\n\n\n# Replace the highest-loss sample with the seed.\n\n\nbatch[0] = seed\n\n\n# Perform training.\n\n\noutputs, loss = train(batch, target)\n\n\n# Place outputs back in the pool.\n\n\npool[idxs] = outputs\n\n\n\n\n\n\n\n\n\n Your browser does not support the video tag.\n \n\n A random sample of the patterns in the pool during training, sampled\n every 20 training steps. \n \n\n[Reproduce in a Notebook](https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/notebooks/growing_ca.ipynb#scrollTo=B4JAbAJf6Alw)\n\n\n\n\n Early on in the training process, the random dynamics in the system allow\n the model to end up in various incomplete and incorrect states. As these\n states are sampled from the pool, we refine the dynamics to be able to\n recover from such states. Finally, as the model becomes more robust at going\n from a seed state to the target state, the samples in the pool reflect this\n and are more likely to be very close to the target pattern, allowing the\n training to refine these almost completed patterns further.\n \n\n\n\n Essentially, we use the previous final states as new starting points to\n force our CA to learn how to persist or even improve an already formed\n pattern, in addition to being able to grow it from a seed. This makes it\n possible to add a periodical loss for significantly longer time intervals\n than otherwise possible, encouraging the generation of an attractor as the\n target shape in our coupled system. We also noticed that reseeding the\n highest loss sample in the batch, instead of a random one, makes training\n more stable at the initial stages, as it helps to clean up the low quality\n states from the pool.\n \n\n\n\n Here is what a typical training progress of a CA rule looks like. The cell\n rule learns to stabilize the pattern in parallel to refining its features.\n \n\n\n\n\n\n\n\n\n Your browser does not support the video tag.\n \n\n CA behaviour at training steps 100, 500, 1000, 4000. \n \n\n[Reproduce in a Notebook](https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/notebooks/growing_ca.ipynb#scrollTo=nqvkfl9W4ODI)\n\n\n\nExperiment 3: Learning to regenerate\n------------------------------------\n\n\n\n In addition to being able to grow their own bodies, living creatures are\n great at maintaining them. Not only does worn out skin get replaced with new\n skin, but very heavy damage to complex vital organs can be regenerated in\n some species. Is there a chance that some of the models we trained above\n have regenerative capabilities?\n \n\n\n\n\n\n\n\n\n Your browser does not support the video tag.\n \n\n Patterns exhibit some regenerative properties upon being damaged, but\n not full re-growth. \n \n\n[Reproduce in a Notebook](https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/notebooks/growing_ca.ipynb#scrollTo=S5JRLGxX1dnX)\n\n\n\n\n The animation above shows three different models trained using the same\n settings. 
We let each of the models develop a pattern over 100 steps, then\n damage the final state in five different ways: by removing different halves\n of the formed pattern, and by cutting out a square from the center. Once\n again, we see that these models show quite different out-of-training mode\n behaviour. For example “the lizard” develops quite strong regenerative\n capabilities, without being explicitly trained for it!\n \n\n\n\n Since we trained our coupled system of cells to generate an attractor\n towards a target shape from a single cell, it was likely that these systems,\n once damaged, would generalize towards non-self-destructive reactions.\n That’s because the systems were trained to grow, stabilize, and never\n entirely self-destruct. Some of these systems might naturally gravitate\n towards regenerative capabilities, but nothing stops them from developing\n different behaviors such as explosive mitoses (uncontrolled growth),\n unresponsiveness to damage (overstabilization), or even self destruction,\n especially for the more severe types of damage.\n \n\n\n\n If we want our model to show more consistent and accurate regenerative\n capabilities, we can try to increase the basin of attraction for our target\n pattern - increase the space of cell configurations that naturally gravitate\n towards our target shape. We will do this by damaging a few pool-sampled\n states before each training step. The system now has to be capable of\n regenerating from states damaged by randomly placed erasing circles. Our\n hope is that this will generalize to regenerational capabilities from\n various types of damage.\n \n\n\n\n\n\n\n\n\n Your browser does not support the video tag.\n \n\n Damaging samples in the pool encourages the learning of robust\n regenerative qualities. Row 1 are samples from the pool, Row 2 are their\n respective states after iterating the model. \n \n\n[Reproduce in a Notebook](https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/notebooks/growing_ca.ipynb#scrollTo=QeXZKb5v2gxj)\n\n\n\n\n The animation above shows training progress, which includes sample damage.\n We sample 8 states from the pool. Then we replace the highest-loss sample\n (top-left-most in the above) with the seed state, and damage the three\n lowest-loss (top-right-most) states by setting a random circular region\n within the pattern to zeros. The bottom row shows states after iteration\n from the respective top-most starting state. As in Experiment 2, the\n resulting states get injected back into the pool.\n \n\n\n\n\n\n\n\n\n Your browser does not support the video tag.\n \n\n Patterns exposed to damage during training exhibit astounding\n regenerative capabilities. \n \n\n[Reproduce in a Notebook](https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/notebooks/growing_ca.ipynb#scrollTo=TDzJM69u4_8p)\n\n\n\n\n As we can see from the animation above, models that were exposed to damage\n during training are much more robust, including to types of damage not\n experienced in the training process (for instance rectangular damage as\n above).\n \n\n\nExperiment 4: Rotating the perceptive field\n-------------------------------------------\n\n\n\n As previously described, we model the cell’s perception of its neighbouring\n cells by estimating the gradients of state channels in x⃗\\vec{x}x⃗ and\n y⃗\\vec{y}y⃗​ using Sobel filters. 
A convenient analogy is that each agent has\n two sensors (chemosensory receptors, for instance) pointing in orthogonal\n directions that can sense the gradients in the concentration of certain\n chemicals along the axis of the sensor. What happens if we rotate those\n sensors? We can do this by rotating the Sobel kernels.\n \n\n\n\n[KxKy]=[cosθ−sinθsinθcosθ]∗[SobelxSobely] \\begin{bmatrix} K\\_x \\\\ K\\_y \\end{bmatrix} = \\begin{bmatrix} \\cos \\theta &\n -\\sin \\theta \\\\ \\sin \\theta & \\cos \\theta \\end{bmatrix} \\* \\begin{bmatrix}\n Sobel\\_x \\\\ Sobel\\_y \\end{bmatrix} [Kx​Ky​​]=[cosθsinθ​−sinθcosθ​]∗[Sobelx​Sobely​​]\n\n\n This simple modification of the perceptive field produces rotated versions\n of the pattern for an angle of choosing without retraining as seen below.\n \n\n\n\n\n\n\n![](figures/rotation.png)\n\n Rotating the axis along which the perception step computes gradients\n brings about rotated versions of the pattern. \n \n\n[Reproduce in a Notebook](https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/notebooks/growing_ca.ipynb#scrollTo=1CVR9MeYnjuY)\n\n\n\n\n In a perfect world, not quantized by individual cells in a pixel-lattice,\n this would not be too surprising, as, after all, one would expect the\n perceived gradients in x⃗\\vec{x}x⃗ and y⃗\\vec{y}y⃗​ to be invariant to the chosen\n angle - a simple change of frame of reference. However, it is important to\n note that things are not as simple in a pixel based model. Rotating pixel\n based graphics involves computing a mapping that’s not necessarily bijective\n and classically involves interpolating between pixels to achieve the desired\n result. This is because a single pixel, when rotated, will now likely\n overlap several pixels. The successful growth of patterns as above suggests\n a certain robustness to the underlying conditions outside of those\n experienced during training.\n \n\n\nRelated Work\n------------\n\n\n### CA and PDEs\n\n\n\n There exists an extensive body of literature that describes the various\n flavours of cellular automata and PDE systems, and their applications to\n modelling physical, biological or even social systems. Although it would be\n impossible to present a just overview of this field in a few lines, we will\n describe some prominent examples that inspired this work. Alan Turing\n introduced his famous Turing patterns back in 1952\n , suggesting how\n reaction-diffusion systems can be a valid model for chemical behaviors\n during morphogenesis. A particularly inspiring reaction-diffusion model that\n stood the test of time is the Gray-Scott model\n , which shows an extreme variety of\n behaviors controlled by just a few variables.\n \n\n\n\n Ever since von Neumann introduced CAs\n as models for self-replication they\n have captivated researchers’ minds, who observed extremely complex\n behaviours emerging from very simple rules. Likewise, the a broader audience\n outside of academia were seduced by CA’s life-like behaviours thanks to\n Conway’s Game of Life . Perhaps\n motivated in part by the proof that something as simple as the Rule 110 is\n Turing complete, Wolfram’s “*A New Kind of Science”*\n asks for a paradigm shift centered\n around the extensive usage of elementary computer programs such as CA as\n tools for understanding the world.\n \n\n\n\n More recently, several researchers generalized Conway’s Game of life to work\n on more continuous domains. 
We were particularly inspired by Rafler’s\n SmoothLife and Chan’s Lenia\n , the latter of\n which also discovers and classifies entire species of “lifeforms”.\n \n\n\n\n A number of researchers have used evolutionary algorithms to find CA rules\n that reproduce predefined simple patterns\n .\n For example, J. Miller proposed an\n experiment similar to ours, using evolutionary algorithms to design a CA\n rule that could build and regenerate the French flag, starting from a seed\n cell.\n \n\n\n### Neural Networks and Self-Organisation\n\n\n\n The close relation between Convolutional Neural Networks and Cellular\n Automata has already been observed by a number of researchers\n . The\n connection is so strong it allowed us to build Neural CA models using\n components readily available in popular ML frameworks. Thus, using a\n different jargon, our Neural CA could potentially be named “Recurrent\n Residual Convolutional Networks with ‘per-pixel’ Dropout”.\n \n\n\n\n The Neural GPU\n offers\n a computational architecture very similar to ours, but applied in the\n context of learning multiplication and a sorting algorithm.\n \n\n\n\n Looking more broadly, we think that the concept of self-organisation is\n finding its way into mainstream machine learning with popularisation of\n Graph Neural Network models.\n Typically, GNNs run a repeated computation across vertices of a (possibly\n dynamic) graph. Vertices communicate locally through graph edges, and\n aggregate global information required to perform the task over multiple\n rounds of message exchanges, just as atoms can be thought of as\n communicating with each other to produce the emergent properties of a\n molecule , or even points of a point\n cloud talk to their neighbors to figure out their global shape\n .\n \n\n\n\n Self-organization also appeared in fascinating contemporary work using more\n traditional dynamic graph networks, where the authors evolved\n Self-Assembling Agents to solve a variety of virtual tasks\n .\n \n\n\n### Swarm Robotics\n\n\n\n One of the most remarkable demonstrations of the power of self-organisation\n is when it is applied to swarm modeling. Back in 1987, Reynolds’ Boids\n simulated the flocking behaviour of birds with\n just a tiny set of handcrafted rules. Nowadays, we can embed tiny robots\n with programs and test their collective behavior on physical agents, as\n demonstrated by work such as Mergeable Nervous Systems\n and Kilobots\n . To the best of our knowledge, programs\n embedded into swarm robots are currently designed by humans. We hope our\n work can serve as an inspiration for the field and encourage the design of\n collective behaviors through differentiable modeling.\n \n\n\nDiscussion\n----------\n\n\n### Embryogenetic Modeling\n\n\n\n\n\n\n\n\n Your browser does not support the video tag.\n \n\n Regeneration-capable 2-headed planarian, the creature that inspired this\n work \n \n \n\n[Reproduce in a Notebook](https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/notebooks/growing_ca.ipynb#scrollTo=fQ1u2MqFy7Ni)\n\n\n\n\n This article describes a toy embryogenesis and regeneration model. This is a\n major direction for future work, with many applications in biology and\n beyond. In addition to the implications for understanding the evolution and\n control of regeneration, and harnessing this understanding for biomedical\n repair, there is the field of bioengineering. 
As the field transitions from\n synthetic biology of single cell collectives to a true synthetic morphology\n of novel living machines , it\n will be essential to develop strategies for programming system-level\n capabilities, such as anatomical homeostasis (regenerative repair). It has\n long been known that regenerative organisms can restore a specific\n anatomical pattern; however, more recently it’s been found that the target\n morphology is not hard coded by the DNA, but is maintained by a\n physiological circuit that stores a setpoint for this anatomical homeostasis\n . Techniques are\n now available for re-writing this setpoint, resulting for example\n in 2-headed flatworms\n that, when cut into pieces in plain water (with no more manipulations)\n result in subsequent generations of 2-headed regenerated worms (as shown\n above). It is essential to begin to develop models of the computational\n processes that store the system-level target state for swarm behavior\n , so that efficient strategies can be developed for rationally editing this\n information structure, resulting in desired large-scale outcomes (thus\n defeating the inverse problem that holds back regenerative medicine and many\n other advances).\n \n\n\n### Engineering and machine learning\n\n\n\n The models described in this article run on the powerful GPU of a modern\n computer or a smartphone. Yet, let’s speculate about what a “more physical”\n implementation of such a system could look like. We can imagine it as a grid\n of tiny independent computers, simulating individual cells. Each of those\n computers would require approximately 10Kb of ROM to store the “cell\n genome”: neural network weights and the control code, and about 256 bytes of\n RAM for the cell state and intermediate activations. The cells must be able\n to communicate their 16-value state vectors to neighbors. Each cell would\n also require an RGB-diode to display the color of the pixel it represents. A\n single cell update would require about 10k multiply-add operations and does\n not have to be synchronised across the grid. We propose that cells might\n wait for random time intervals between updates. The system described above\n is uniform and decentralised. Yet, our method provides a way to program it\n to reach the predefined global state, and recover this state in case of\n multi-element failures and restarts. We therefore conjecture this kind of\n modeling may be used for designing reliable, self-organising agents. On the\n more theoretical machine learning front, we show an instance of a\n decentralized model able to accomplish remarkably complex tasks. 
We believe\n this direction to be opposite to the more traditional global modeling used\n in the majority of contemporary work in the deep learning field, and we hope\n this work to be an inspiration to explore more decentralized learning\n modeling.\n \n\n\n\n![](images/multiple-pages.svg)\n\n This article is part of the\n [Differentiable Self-organizing Systems Thread](/2020/selforg/),\n an experimental format collecting invited short articles delving into\n differentiable self-organizing systems, interspersed with critical\n commentary from several experts in adjacent fields.\n \n\n\n[Differentiable Self-organizing Systems Thread](/2020/selforg/)\n[Self-classifying MNIST Digits](/2020/selforg/mnist/)", "date_published": "2020-02-11T20:00:00Z", "authors": ["Alexander Mordvintsev", "Eyvind Niklasson", "Michael Levin"], "summaries": ["Training an end-to-end differentiable, self-organising cellular automata model of morphogenesis, able to both grow and regenerate specific patterns."], "doi": "10.23915/distill.00023", "journal_ref": "distill-pub", "bibliography": [{"link": "https://royalsocietypublishing.org/doi/abs/10.1098/rsif.2016.0555", "title": "Top-down models in biology: explanation and control of complex living systems above the molecular level"}, {"link": "http://dx.doi.org/10.1039/C5IB00221D", "title": "Re-membering the body: applications of computational neuroscience to the top-down control of regeneration of limbs and other complex organs"}, {"link": "https://dev.biologists.org/content/139/2/313", "title": "Transmembrane voltage potential controls embryonic eye patterning in Xenopus laevis"}, {"link": "http://arxiv.org/pdf/1602.07868.pdf", "title": "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks"}, {"link": "https://doi.org/10.1007/BF02459572", "title": "The chemical basis of morphogenesis"}, {"link": "https://science.sciencemag.org/content/261/5118/189", "title": "Complex Patterns in a Simple System"}, {"link": "http://www.jstor.org/stable/24927642", "title": "MATHEMATICAL GAMES"}, {"link": "https://www.wolframscience.com", "title": "A New Kind of Science"}, {"link": "http://dx.doi.org/10.25088/complexsystems.28.3.251", "title": "Lenia: Biology of Artificial Life"}, {"link": "https://doi.org/10.1007/978-3-540-24854-5_12", "title": "Evolving a Self-Repairing, Self-Regulating, French Flag Organism"}, {"link": "https://papers.nips.cc/paper/703-learning-cellular-automaton-dynamics-with-neural-networks.pdf", "title": "Learning Cellular Automaton Dynamics with Neural Networks"}, {"link": "http://arxiv.org/pdf/1809.02942.pdf", "title": "Cellular automata as convolutional neural networks"}, {"link": "http://papers.nips.cc/paper/5954-convolutional-networks-on-graphs-for-learning-molecular-fingerprints.pdf", "title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints"}, {"link": "http://dx.doi.org/10.1145/3326362", "title": "Dynamic Graph CNN for Learning on Point Clouds"}, {"link": "https://pathak22.github.io/modular-assemblies/", "title": "Learning to Control Self- Assembling Morphologies: A Study of Generalization via Modularity"}, {"link": "https://doi.org/10.1145/37402.37406", "title": "Flocks, Herds and Schools: A Distributed Behavioral Model"}, {"link": "https://doi.org/10.1038/s41467-017-00109-2", "title": "Mergeable nervous systems for robots"}, {"link": "https://doi.org/10.1109/ICRA.2012.6224638", "title": "Kilobot: A low cost scalable robot system for collective behaviors"}, {"link": 
"http://www.youtube.com/watch?v=RjD1aLm4Thg", "title": "What Bodies Think About: Bioelectric Computation Outside the Nervous System"}, {"link": "https://www.pnas.org/content/117/4/1853", "title": "A scalable pipeline for designing reconfigurable organisms"}, {"link": "https://doi.org/10.1063/1.5038337", "title": "Perspective: The promise of multi-cellular engineered living systems"}, {"link": "https://doi.org/10.1080/19420889.2016.1192733", "title": "Physiological inputs regulate species-specific anatomy during embryogenesis and regeneration"}, {"link": "http://www.sciencedirect.com/science/article/pii/S001216060901402X", "title": "Long-range neural and gap junction protein-mediated cues control polarity during planarian regeneration"}, {"link": "http://www.sciencedirect.com/science/article/pii/S0006349517304277", "title": "Long-Term, Stochastic Editing of Regenerative Anatomy via Targeting Endogenous Bioelectric Gradients"}, {"link": "https://www.mitpressjournals.org/doi/abs/10.1162/isal_a_00043", "title": "Pattern Regeneration in Coupled Networks"}, {"link": "http://www.sciencedirect.com/science/article/pii/S0079610718300415", "title": "Bioelectrical control of positional information in development and regeneration: A review of conceptual and computational advances"}, {"link": "https://www.mitpressjournals.org/doi/abs/10.1162/isal_a_00041", "title": "Modeling Cell Migration in a Simulated Bioelectrical Signaling Network for Anatomical Regeneration"}, {"link": "https://www.mitpressjournals.org/doi/abs/10.1162/isal_a_029", "title": "Investigating the effects of noise on a cell-to-cell communication mechanism for structure regeneration"}, {"link": "https://slideslive.com/38922302", "title": "Social Intelligence"}, {"link": "https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html", "title": "Inceptionism: Going deeper into neural networks"}]} {"id": "67e0be00bc03466d6c890df4511e3a4d", "title": "Visualizing the Impact of Feature Attribution Baselines", "url": "https://distill.pub/2020/attribution-baselines", "source": "distill", "source_type": "blog", "text": "Path attribution methods are a gradient-based way\n of explaining deep models. These methods require choosing a\n hyperparameter known as the *baseline input*.\n What does this hyperparameter mean, and how important is it? In this article,\n we investigate these questions using image classification networks\n as a case study. We discuss several different ways to choose a baseline\n input and the assumptions that are implicit in each baseline.\n Although we focus here on path attribution methods, our discussion of baselines\n is closely connected with the concept of missingness in the feature space -\n a concept that is critical to interpretability research.\n \n\n\nIntroduction\n------------\n\n\n\n If you are in the business of training neural networks,\n you might have heard of the integrated gradients method, which\n was introduced at \n [ICML](https://en.wikipedia.org/wiki/International_Conference_on_Machine_Learning) two years ago \n .\n The method computes which features are important \n to a neural network when making a prediction on a \n particular data point. 
This helps users\n understand which features their network relies on.\n Since its introduction,\n integrated gradients has been used to interpret \n networks trained on a variety of data types, \n including retinal fundus images \n and electrocardiogram recordings .\n \n\n\n\n If you’ve ever used integrated gradients,\n you know that you need to define a baseline input \\(x’\\) before\n using the method. Although the original paper discusses the need for a baseline\n and even proposes several different baselines for image data - including \n the constant black image and an image of random noise - there is\n little existing research about the impact of this baseline. \n Is integrated gradients sensitive to the \n hyperparameter choice? Why is the constant black image \n a “natural baseline” for image data? Are there any alternative choices?\n \n\n\n\n In this article, we will delve into how this hyperparameter choice arises,\n and why understanding it is important when you are doing model interpretation.\n As a case-study, we will focus on image classification models in order \n to visualize the effects of the baseline input. We will explore several \n notions of missingness, including both constant baselines and baselines\n defined by distributions. Finally, we will discuss different ways to compare\n baseline choices and talk about why quantitative evaluation\n remains a difficult problem.\n \n\n\nImage Classification\n--------------------\n\n\n\n We focus on image classification as a task, as it will allow us to visually\n plot integrated gradients attributions, and compare them with our intuition\n about which pixels we think should be important. We use the Inception V4 architecture \n , a convolutional \n neural network designed for the ImageNet dataset ,\n in which the task is to determine which class an image belongs to out of 1000 classes.\n On the ImageNet validation set, Inception V4 has a top-1 accuracy of over 80%.\n We download weights from TensorFlow-Slim ,\n and visualize the predictions of the network on four different images from the \n validation set.\n \n \n\n\n\n\n\n\n Right: The predicted logits of the network on the original image. The\n network correctly classifies all images with high confidence.\n Left: Pixel-wise attributions of the Inception V4 network using integrated gradients.\n You might notice that some attributions highlight pixels that do not seem important\n relative to the true class label.\n \n\n\n\n Although state of the art models perform well on unseen data,\n users may still be left wondering: *how* did the model figure\n out which object was in the image? There are a myriad of methods to\n interpret machine learning models, including methods to\n visualize and understand how the network represents inputs internally , \n feature attribution methods that assign an importance score to each feature \n for a specific input ,\n and saliency methods that aim to highlight which regions of an image\n the model was looking at when making a decision\n .\n These categories are not mutually exclusive: for example, an attribution method can be\n visualized as a saliency method, and a saliency method can assign importance\n scores to each individual pixel. 
In this article, we will focus\n on the feature attribution method integrated gradients.\n \n\n\n\n Formally, given a target input \\(x\\) and a network function \\(f\\), \n feature attribution methods assign an importance score \\(\\phi\\_i(f, x)\\)\n to the \\(i\\)th feature value representing how much that feature\n adds or subtracts from the network output. A large positive or negative \\(\\phi\\_i(f, x)\\)\n indicates that feature strongly increases or decreases the network output \n \\(f(x)\\) respectively, while an importance score close to zero indicates that\n the feature in question did not influence \\(f(x)\\).\n \n\n\n\n In the same figure above, we visualize which pixels were most important to the network’s correct\n prediction using integrated gradients. \n The pixels in white indicate more important pixels. In order to plot\n attributions, we follow the same design choices as .\n That is, we plot the absolute value of the sum of feature attributions\n across the channel dimension, and cap feature attributions at the 99th percentile to avoid\n high-magnitude attributions dominating the color scheme.\n \n\n\nA Better Understanding of Integrated Gradients\n----------------------------------------------\n\n\n\n As you look through the attribution maps, you might find some of them\n unintuitive. Why does the attribution for “goldfinch” highlight the green background?\n Why doesn’t the attribution for “killer whale” highlight the black parts of the killer whale?\n To better understand this behavior, we need to explore how\n we generated feature attributions. Formally, integrated gradients\n defines the importance value for the \\(i\\)th feature value as follows:\n $$\\phi\\_i^{IG}(f, x, x’) = \\overbrace{(x\\_i - x’\\_i)}^{\\text{Difference from baseline}} \n \\times \\underbrace{\\int\\_{\\alpha = 0}^ 1}\\_{\\text{From baseline to input…}}\n \\overbrace{\\frac{\\delta f(x’ + \\alpha (x - x’))}{\\delta x\\_i} d \\alpha}^{\\text{…accumulate local gradients}}\n $$\n where \\(x\\) is the current input,\n \\(f\\) is the model function and \\(x’\\) is some baseline input that is meant to represent \n “absence” of feature input. The subscript \\(i\\) is used\n to denote indexing into the \\(i\\)th feature.\n \n\n\n\n As the formula above states, integrated gradients gets importance scores\n by accumulating gradients on images interpolated between the baseline value and the current input.\n But why would doing this make sense? Recall that the gradient of\n a function represents the direction of maximum increase. The gradient\n is telling us which pixels have the steepest local slope with respect\n to the output. For this reason, the gradient of a network at the input\n was one of the earliest saliency methods.\n \n\n\n\n Unfortunately, there are many problems with using gradients to interpret\n deep neural networks . \n One specific issue is that neural networks are prone to a problem\n known as saturation: the gradients of input features may have small magnitudes around a \n sample even if the network depends heavily on those features. This can happen\n if the network function flattens after those features reach a certain magnitude.\n Intuitively, shifting the pixels in an image by a small amount typically\n doesn’t change what the network sees in the image. We can illustrate\n saturation by plotting the network output at all\n images between the baseline \\(x’\\) and the current image. 
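As a concrete sketch of that saturation curve, suppose we have a callable `model_fn` that maps a batch of images to per-class outputs (a placeholder of ours standing in for the Inception V4 network, not code from this article). We can evaluate it along the straight line from the baseline to the input:

```python
import numpy as np

def outputs_along_path(model_fn, x, baseline, target_class, num_steps=50):
    """Evaluate the network output for `target_class` at images interpolated
    between `baseline` (alpha = 0) and the input `x` (alpha = 1).

    `model_fn` is assumed to map a batch of images with shape (N, H, W, C)
    to an (N, num_classes) array; this signature is an assumption, not the
    article's code.
    """
    alphas = np.linspace(0.0, 1.0, num_steps)
    interpolated = np.stack([baseline + a * (x - baseline) for a in alphas])
    outputs = model_fn(interpolated)             # shape: (num_steps, num_classes)
    return alphas, outputs[:, target_class]      # curve that typically flattens early
```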
The figure\n below displays that the network\n output for the correct class increases initially, but then quickly flattens.\n \n\n\n\n\n\n\n A plot of network outputs at \\(x’ + \\alpha (x - x’)\\).\n Notice that the network output saturates the correct class\n at small values of \\(\\alpha\\). By the time \\(\\alpha = 1\\),\n the network output barely changes.\n \n\n\n\n What we really want to know is how our network got from \n predicting essentially nothing at \\(x’\\) to being \n completely saturated towards the correct output class at \\(x\\).\n Which pixels, when scaled along this path, most\n increased the network output for the correct class? This is\n exactly what the formula for integrated gradients gives us.\n \n\n\n\n By integrating over a path, \n integrated gradients avoids problems with local gradients being\n saturated. We can break the original equation\n down and visualize it in three separate parts: the interpolated image between\n the baseline image and the target image, the gradients at the interpolated\n image, and accumulating many such gradients over \\(\\alpha\\).\n\n $$\n \\int\\_{\\alpha’ = 0}^{\\alpha} \\underbrace{(x\\_i - x’\\_i) \\times \n \\frac{\\delta f(\\text{ }\\overbrace{x’ + \\alpha’ (x - x’)}^{\\text{(1): Interpolated Image}}\\text{ })}\n {\\delta x\\_i} d \\alpha’}\\_{\\text{(2): Gradients at Interpolation}} \n = \\overbrace{\\phi\\_i^{IG}(f, x, x’; \\alpha)}^{\\text{(3): Cumulative Gradients up to }\\alpha}\n $$\n \n\n We visualize these three pieces of the formula below.Note that in practice, we use a discrete sum\n approximation of the integral with 500 linearly-spaced points between 0 and 1.\n\n\n\n\n\n\n\n Integrated gradients, visualized. In the line chart, the red line refers to\n equation (4) and the blue line refers to \\(f(x) - f(x’)\\). Notice how high magnitude gradients\n accumulate at small values of \\(\\alpha\\).\n \n\n\n\n We have casually omitted one part of the formula: the fact\n that we multiply by a difference from a baseline. Although\n we won’t go into detail here, this term falls out because we\n care about the derivative of the network\n function \\(f\\) with respect to the path we are integrating over.\n That is, if we integrate over the\n straight-line between \\(x’\\) and \\(x\\), which\n we can represent as \\(\\gamma(\\alpha) =\n x’ + \\alpha(x - x’)\\), then:\n $$\n \\frac{\\delta f(\\gamma(\\alpha))}{\\delta \\alpha} =\n \\frac{\\delta f(\\gamma(\\alpha))}{\\delta \\gamma(\\alpha)} \\times \n \\frac{\\delta \\gamma(\\alpha)}{\\delta \\alpha} = \n \\frac{\\delta f(x’ + \\alpha’ (x - x’))}{\\delta x\\_i} \\times (x\\_i - x’\\_i) \n $$\n The difference from baseline term is the derivative of the \n path function \\(\\gamma\\) with respect to \\(\\alpha\\).\n The theory behind integrated gradients is discussed\n in more detail in the original paper. 
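Putting the three pieces together, a minimal implementation of the discrete approximation might look like the sketch below. The `model_fn` and `grad_fn` callables are assumptions of ours (a forward pass, and a function returning gradients of the target-class output with respect to an input batch), not the article's code; the returned gap anticipates the completeness-based sanity check discussed next.

```python
import numpy as np

def integrated_gradients(model_fn, grad_fn, x, baseline, target_class, k=500):
    """Discrete approximation of integrated gradients with k linearly-spaced
    interpolation points. `grad_fn(batch, target_class)` is assumed to return
    d f_target / d input for each image in the batch (placeholder signature).
    """
    alphas = np.linspace(0.0, 1.0, k)
    interpolated = np.stack([baseline + a * (x - baseline) for a in alphas])
    grads = grad_fn(interpolated, target_class)    # (k, H, W, C)
    avg_grads = grads.mean(axis=0)                 # approximate the path integral
    attributions = (x - baseline) * avg_grads      # difference-from-baseline term

    # Completeness gap: should shrink toward zero as k grows.
    fx = model_fn(x[None])[0, target_class]
    fb = model_fn(baseline[None])[0, target_class]
    gap = abs(attributions.sum() - (fx - fb))
    return attributions, gap
```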
In particular, the authors\n show that integrated gradients satisfies several desirable\n properties, including the completeness axiom:\n $$\n \\textrm{Axiom 1: Completeness}\\\\\n \\sum\\_i \\phi\\_i^{IG}(f, x, x’) = f(x) - f(x’)\n $$\n Note that this theorem holds for any baseline \\(x’\\).\n Completeness is a desirable property because it states that the \n importance scores for each feature break down the output of the network:\n each importance score represents that feature’s individual contribution to\n the network output, and when added together, we recover the output value itself.\n Although it’s not essential to our discussion here, we can prove \n that integrated gradients satisfies this axiom using the\n [fundamental\n theorem of calculus for path integrals](https://en.wikipedia.org/wiki/Gradient_theorem). We leave a\n full discussion of all of the properties that integrated \n gradients satisfies to the original paper, since they hold\n independent of the choice of baseline. The completeness \n axiom also provides a way to measure convergence.\n \n\n\n\n In practice, we can’t compute the exact value of the integral. Instead,\n we use a discrete sum approximation with \\(k\\) linearly-spaced points between\n 0 and 1 for some value of \\(k\\). If we only choose 1 point to \n approximate the integral, that feels like too few. Is 10 enough? 100?\n Intuitively 1,000 may seem like enough, but can we be certain?\n As proposed in the original paper, we can use the completeness axiom\n as a sanity check on convergence: run integrated gradients with \\(k\\)\n points, measure \\(|\\sum\\_i \\phi\\_i^{IG}(f, x, x’) - (f(x) - f(x’))|\\),\n and if the difference is large, re-run with a larger \\(k\\) \n Of course, this brings up a new question: what is “large” in this context?\n One heuristic is to compare the difference with the magnitude of the\n output itself.\n .\n \n\n\n\n The line chart above plots the following equation in red:\n $$\n \\underbrace{\\sum\\_i \\phi\\_i^{IG}(f, x, x’; \\alpha)}\\_{\\text{(4): Sum of Cumulative Gradients up to }\\alpha}\n $$\n That is, it sums all of the pixel attributions in the saliency map.\n This lets us compare to the blue line, which plots \\(f(x) - f(x’)\\).\n We can see that with 500 samples, we seem (at least intuitively) to\n have converged. But this article isn’t about how \n to get good convergence - it’s about baselines! In order\n to advance our understanding of the baseline, we will need a brief excursion\n into the world of game theory.\n \n\n\nGame Theory and Missingness\n---------------------------\n\n\n\n Integrated gradients is inspired by work\n from cooperative game theory, specifically the Aumann-Shapley value\n . In cooperative game theory,\n a non-atomic game is a construction used to model large-scale economic systems\n where there are enough participants that it is desirable to model them continuously.\n Aumann-Shapley values provide a theoretically grounded way to\n determine how much different groups of participants contribute to the system.\n \n\n\n\n In game theory, a notion of missingness is well-defined. Games are defined\n on coalitions - sets of participants - and for any specific coalition,\n a participant of the system can be in or out of that coalition. The fact\n that games can be evaluated on coalitions is the foundation of\n the Aumann-Shapley value.
Intuitively, it computes how\n much value a group of participants adds to the game \n by computing how much the value of the game would increase\n if we added more of that group to any given coalition.\n \n\n\n\n Unfortunately, missingness is a more difficult notion when\n we are speaking about machine learning models. In order\n to evaluate how important the \\(i\\)th feature is, we\n want to be able to compute how much the output of\n the network would increase if we successively increased\n the “presence” of the \\(i\\)th feature. But what does this mean, exactly?\n In order to increase the presence of a feature, we would need to start\n with the feature being “missing” and have a way of interpolating \n between that missingness and its current, known value.\n \n\n\n\n Hopefully, this is sounding awfully familiar. Integrated gradients\n has a baseline input \\(x’\\) for exactly this reason: to model a\n feature being absent. But how should you choose\n \\(x’\\) in order to best represent this? It seems to be common practice\n to choose a baseline input \\(x’\\) to be the vector of\n all zeros. But consider the following scenario: you’ve learned a model\n on a healthcare dataset, and one of the features is blood sugar level.\n The model has correctly learned that excessively low levels of blood sugar,\n which correspond to hypoglycemia, is dangerous. Does\n a blood sugar level of \\(0\\) seem like a good choice to represent missingness?\n \n\n\n\n The point here is that fixed feature values may have unintended meaning.\n The problem compounds further when you consider the difference from\n baseline term \\(x\\_i - x’\\_i\\).\n For the sake of a thought experiment, suppose a patient had a blood sugar level of \\(0\\). \n To understand why our machine learning model thinks this patient\n is at high risk, you run integrated gradients on this data point with a\n baseline of the all-zeros vector. The blood sugar level of the patient would have \\(0\\) feature importance,\n because \\(x\\_i - x’\\_i = 0\\). This is despite the fact that \n a blood sugar level of \\(0\\) would be fatal!\n \n\n\n\n We find similar problems when we move to the image domain.\n If you use a constant black image as a baseline, integrated gradients will\n not highlight black pixels as important even if black pixels make up\n the object of interest. More generally, the method is blind to the color you use as a baseline, which\n we illustrate with the figure below. Note that this was acknowledged by the original\n authors in , and is in fact\n central to the definition of a baseline: we wouldn’t want integrated gradients\n to highlight missing features as important! But then how do we avoid\n giving zero importance to the baseline color?\n \n\n\n\n\n\n\n Mouse over the segmented image to choose a different color\n as a baseline input \\(x’\\). Notice that pixels\n of the baseline color are not highlighted as important, \n even if they make up part of the main object in the image.\n \n\n\nAlternative Baseline Choices\n----------------------------\n\n\n\n It’s clear that any constant color baseline will have this problem.\n Are there any alternatives? 
In this section, we\n compare four alternative choices for a baseline in the image domain.\n Before proceeding, it’s important to note that this article isn’t\n the first to point out the difficulty of choosing a baseline.\n Several articles, including the original paper, discuss and compare\n several notions of “missingness”, both in the\n context of integrated gradients and more generally \n .\n Nonetheless, choosing the right baseline remains a challenge. Here we will\n present several choices for baselines: some based on existing literature,\n others inspired by the problems discussed above. The figure at the end \n of the section visualizes the four baselines presented here.\n \n\n\n### The Maximum Distance Baseline\n\n\n\n If we are worried about constant baselines that are blind to the baseline\n color, can we explicitly construct a baseline that doesn’t suffer from this\n problem? One obvious way to construct such a baseline is to take the \n farthest image in L1 distance from the current image such that the\n baseline is still in the valid pixel range. This baseline, which\n we will refer to as the maximum distance baseline (denoted\n *max dist.* in the figure below),\n avoids the difference from baseline issue directly. \n \n\n\n### The Blurred Baseline\n\n\n\n The issue with the maximum distance baseline is that it doesn’t \n really represent *missingness*. It actually contains a lot of\n information about the original image, which means we are no longer\n explaining our prediction relative to a lack of information. To better\n preserve the notion of missingness, we take inspiration from \n . In their paper,\n Fong and Vedaldi use a blurred version of the image as a \n domain-specific way to represent missing information. This baseline\n is attractive because it captures the notion of missingness in images\n in a very human intuitive way. In the figure below, this baseline is\n denoted *blur*. The figure lets you play with the smoothing constant\n used to define the baseline.\n \n\n\n### The Uniform Baseline\n\n\n\n One potential drawback with the blurred baseline is that it is biased\n to highlight high-frequency information. Pixels that are very similar\n to their neighbors may get less importance than pixels that are very \n different from their neighbors, because the baseline is defined as a weighted\n average of a pixel and its neighbors. To overcome this, we can again take inspiration\n from both and the original integrated\n gradients paper. Another way to define missingness is to simply sample a random\n uniform image in the valid pixel range and call that the baseline. \n We refer to this baseline as the *uniform* baseline in the figure below.\n \n\n\n### The Gaussian Baseline\n\n\n\n Of course, the uniform distribution is not the only distribution we can\n draw random noise from. In their paper discussing SmoothGrad (which we will \n touch on in the next section), Smilkov et al. \n make frequent use of a gaussian distribution centered on the current image with \n standard deviation \\(\\sigma\\). We can use the same distribution as a baseline for \n integrated gradients! In the figure below, this baseline is called the *gaussian*\n baseline. You can vary the standard deviation of the distribution \\(\\sigma\\) using the slider.
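Each of these four baselines is only a few lines of code. The sketch below assumes images are float arrays in [0, 1] with shape (height, width, channels) and uses SciPy's gaussian filter for the blur; the function names and default parameters are ours, not the article's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def max_distance_baseline(x):
    """Farthest valid image in L1 distance: push each pixel to the far end of [0, 1]."""
    return np.where(x < 0.5, 1.0, 0.0)

def blurred_baseline(x, sigma=20.0):
    """Blur over the spatial dimensions only, leaving the channel axis untouched."""
    return gaussian_filter(x, sigma=(sigma, sigma, 0))

def uniform_baseline(x, rng):
    """A random image drawn uniformly from the valid pixel range."""
    return rng.uniform(0.0, 1.0, size=x.shape)

def gaussian_baseline(x, rng, sigma=1.0):
    """Gaussian noise centered on the input, clipped back into the valid pixel range."""
    return np.clip(x + rng.normal(0.0, sigma, size=x.shape), 0.0, 1.0)
```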
\n One thing to note here is that we truncate the gaussian baseline in the valid pixel\n range, which means that as \\(\\sigma\\) approaches \\(\\infty\\), the gaussian\n baseline approaches the uniform baseline.\n \n\n\n\n\n\n\n\n Comparing alternative baseline choices. For the blur and gaussian\n baselines, you can vary the parameter \\(\\sigma\\), which refers\n to the width of the smoothing kernel and the standard deviation of\n noise respectively.\n \n\n\nAveraging Over Multiple Baselines\n---------------------------------\n\n\n\n You may have nagging doubts about those last two baselines, and you\n would be right to have them. A randomly generated baseline\n can suffer from the same blindness problem that a constant image can. If \n we draw a uniform random image as a baseline, there is a small chance\n that a baseline pixel will be very close to its corresponding input pixel\n in value. Those pixels will not be highlighted as important. The resulting\n saliency map may have artifacts due to the randomly drawn baseline. Is there\n any way we can fix this problem?\n \n\n\n\n Perhaps the most natural way to do so is to average over multiple\n different baselines, as discussed in \n .\n Although doing so may not be particularly natural for constant color images\n (which colors do you choose to average over and why?), it is a\n very natural notion for baselines drawn from distributions. Simply\n draw more samples from the same distribution and average the\n importance scores from each sample.\n \n\n\n### Assuming a Distribution\n\n\n\n At this point, it’s worth connecting the idea of averaging over multiple\n baselines back to the original definition of integrated gradients. When\n we average over multiple baselines from the same distribution \\(D\\),\n we are attempting to use the distribution itself as our baseline. \n We use the distribution to define the notion of missingness: \n if we don’t know a pixel value, we don’t assume its value to be 0 - instead\n we assume that it has some underlying distribution \\(D\\). Formally, given\n a baseline distribution \\(D\\), we integrate over all possible baselines\n \\(x’ \\in D\\) weighted by the density function \\(p\\_D\\):\n $$ \\phi\\_i(f, x) = \\underbrace{\\int\\_{x’}}\\_{\\text{Integrate over baselines…}} \\bigg( \\overbrace{\\phi\\_i^{IG}(f, x, x’\n )}^{\\text{integrated gradients \n with baseline } x’\n } \\times \\underbrace{p\\_D(x’) dx’}\\_{\\text{…and weight by the density}} \\bigg)\n $$\n \n\n\n\n In terms of missingness, assuming a distribution might intuitively feel \n like a more reasonable assumption to make than assuming a constant value.\n But this doesn’t quite solve the issue: instead of having to choose a baseline\n \\(x’\\), now we have to choose a baseline distribution \\(D\\). Have we simply\n postponed the problem? We will discuss one theoretically motivated\n way to choose \\(D\\) in an upcoming section, but before we do, we’ll take\n a brief aside to talk about how we compute the formula above in practice,\n and a connection to an existing method that arises as a result.\n \n\n\n### Expectations, and Connections to SmoothGrad\n\n\n\n Now that we’ve introduced a second integral into our formula,\n we need to do a second discrete sum to approximate it, which\n requires an additional hyperparameter: the number of baselines to sample. \n In , Erion et al. 
make the \n observation that both integrals can be thought of as expectations:\n the first integral as an expectation over \\(D\\), and the second integral \n as an expectation over the path between \\(x’\\) and \\(x\\). This formulation,\n called *expected gradients*, is defined formally as:\n $$ \\phi\\_i^{EG}(f, x; D) = \\underbrace{\\mathop{\\mathbb{E}}\\_{x’ \\sim D, \\alpha \\sim U(0, 1)}}\\_\n {\\text{Expectation over \\(D\\) and the path…}} \n \\bigg[ \\overbrace{(x\\_i - x’\\_i) \\times \n \\frac{\\delta f(x’ + \\alpha (x - x’))}{\\delta x\\_i}}^{\\text{…of the \n importance of the } i\\text{th pixel}} \\bigg]\n $$\n \n\n\n\n Expected gradients and integrated gradients belong to a family of methods\n known as “path attribution methods” because they integrate gradients\n over one or more paths between two valid inputs. \n Both expected gradients and integrated gradients use straight-line paths,\n but one can integrate over paths that are not straight as well. This is discussed\n in more detail in the original paper. To compute expected gradients in\n practice, we use the following formula:\n $$\n \\hat{\\phi}\\_i^{EG}(f, x; D) = \\frac{1}{k} \\sum\\_{j=1}^k (x\\_i - x’^j\\_i) \\times \n \\frac{\\delta f(x’^j + \\alpha^{j} (x - x’^j))}{\\delta x\\_i}\n $$\n where \\(x’^j\\) is the \\(j\\)th sample from \\(D\\) and \n \\(\\alpha^j\\) is the \\(j\\)th sample from the uniform distribution between\n 0 and 1. Now suppose that we use the gaussian baseline with variance\n \\(\\sigma^2\\). Then we can re-write the formula for expected gradients as follows:\n \n $$\n \\hat{\\phi}\\_i^{EG}(f, x; N(x, \\sigma^2 I)) \n = \\frac{1}{k} \\sum\\_{j=1}^k \n \\epsilon\\_{\\sigma}^{j} \\times \n \\frac{\\delta f(x + (1 - \\alpha^j)\\epsilon\\_{\\sigma}^{j})}{\\delta x\\_i}\n $$\n \n where \\(\\epsilon\\_{\\sigma}\\ \\sim N(\\bar{0}, \\sigma^2 I)\\)\n \n To see how we arrived\n at the above formula, first observe that \n $$ \n \\begin{aligned}\n x’ \\sim N(x, \\sigma^2 I) &= x + \\epsilon\\_{\\sigma} \\\\\n x’- x &= \\epsilon\\_{\\sigma} \\\\\n \\end{aligned}\n $$\n by definition of the gaussian baseline. Now we have: \n $$\n \\begin{aligned}\n x’ + \\alpha(x - x’) &= \\\\\n x + \\epsilon\\_{\\sigma} + \\alpha(x - (x + \\epsilon\\_{\\sigma})) &= \\\\\n x + (1 - \\alpha)\\epsilon\\_{\\sigma}\n \\end{aligned}\n $$\n The above formula simply substitutes the last line\n of each equation block back into the formula.\n . \n \n This looks awfully similar to an existing method called SmoothGrad\n . If we use the (gradients \\(\\times\\) input image)\n variant of SmoothGrad SmoothGrad was a method designed to sharpen saliency maps and was meant to be run\n on top of an existing saliency method. The idea is simple:\n instead of running a saliency method once on an image, first\n add some gaussian noise to an image, then run the saliency method.\n Do this several times with different draws of gaussian noise, then\n average the results.
Multiplying the gradients by the input and using that as a saliency map\n is discussed in more detail in the original SmoothGrad paper., \n then we have the following formula:\n $$\n \\phi\\_i^{SG}(f, x; N(\\bar{0}, \\sigma^2 I)) \n = \\frac{1}{k} \\sum\\_{j=1}^k \n (x + \\epsilon\\_{\\sigma}^{j}) \\times \n \\frac{\\delta f(x + \\epsilon\\_{\\sigma}^{j})}{\\delta x\\_i}\n $$\n \n\n\n\n We can see that SmoothGrad and expected gradients with a\n gaussian baseline are quite similar, with two key differences:\n SmoothGrad multiplies the gradient by \\(x + \\epsilon\\_{\\sigma}\\) while expected\n gradients multiplies by just \\(\\epsilon\\_{\\sigma}\\), and while expected\n gradients samples uniformly along the path, SmoothGrad always\n samples the endpoint \\(\\alpha = 0\\).\n \n\n\n\n\n Can this connection help us understand why SmoothGrad creates\n smooth-looking saliency maps? When we assume the above gaussian distribution as our baseline, we are\n assuming that each of our pixel values is drawn from a\n gaussian *independently* of the other pixel values. But we know\n this is far from true: in images, there is a rich correlation structure\n between nearby pixels. Once your network knows the value of a pixel, \n it doesn’t really need to use its immediate neighbors because\n it’s likely that those immediate neighbors have very similar intensities.\n \n\n\n\n \n Assuming each pixel is drawn from an independent gaussian\n breaks this correlation structure. It means that expected gradients\n tabulates the importance of each pixel independently of\n the other pixel values. The generated saliency maps\n will be less noisy and better highlight the object of interest\n because we are no longer allowing the network to rely \n on only one pixel in a group of correlated pixels. This may be\n why SmoothGrad is smooth: because it is implicitly assuming\n independence among pixels. In the figure below, you can compare\n integrated gradients with a single randomly drawn baseline\n to expected gradients sampled over a distribution. For\n the gaussian baseline, you can also toggle the SmoothGrad\n option to use the SmoothGrad formula above. For all figures,\n \\(k=500\\).\n \n\n\n\n\n\n\n\n The difference between a single baseline and multiple\n baselines from the same distribution. Use the \n “Multi-Reference” button to toggle between the two. For the gaussian\n baseline, you can also use the “Smooth Grad” button\n to toggle between expected gradients and SmoothGrad\n with gradients \\* inputs.\n \n\n\n### Using the Training Distribution\n\n\n\n Is it really reasonable to assume independence among\n pixels while generating saliency maps? In supervised learning, \n we make the assumption that the data is drawn\n from some distribution \\(D\\_{\\text{data}}\\). This assumption that the training and testing data \n share a common, underlying distribution is what allows us to \n do supervised learning and make claims about generalizability. Given\n this assumption, we don’t need to\n model missingness using a gaussian or a uniform distribution:\n we can use \\(D\\_{\\text{data}}\\) to model missingness directly.\n \n\n\n\n The only problem is that we do not have access to the underlying distribution.\n But because this is a supervised learning task, we do have access to many \n independent draws from the underlying distribution: the training data!\n We can simply use samples from the training data as random draws\n from \\(D\\_{\\text{data}}\\). 
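In code, this is a small change to the integrated gradients sketch from earlier: for every sample, draw a random reference image and a random interpolation point, then average. The `train_images` array and the `grad_fn` callable are placeholders of ours, not the article's code.

```python
import numpy as np

def expected_gradients(grad_fn, x, train_images, target_class, k=500, seed=0):
    """Monte Carlo estimate of expected gradients: for each of k samples, draw a
    baseline x' from the training images and alpha ~ U(0, 1), then average
    (x - x') * grad f(x' + alpha * (x - x')). Placeholder signatures, as above.
    """
    rng = np.random.default_rng(seed)
    attributions = np.zeros_like(x)
    for _ in range(k):
        x_prime = train_images[rng.integers(len(train_images))]
        alpha = rng.uniform()
        point = x_prime + alpha * (x - x_prime)
        attributions += (x - x_prime) * grad_fn(point[None], target_class)[0]
    return attributions / k
```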
This brings us to the variant\n of expected gradients used in ,\n which we again visualize in three parts:\n $$\n \\frac{1}{k} \\sum\\_{j=1}^k \n \\underbrace{(x\\_i - x’^j\\_i) \\times \n \\frac{\\delta f(\\text{ } \n \\overbrace{x’^j + \\alpha^{j} (x - x’^j)}^{\\text{(1): Interpolated Image}}\n \\text{ })}{\\delta x\\_i}}\\_{\\text{ (2): Gradients at Interpolation}}\n = \\overbrace{\\hat{\\phi\\_i}^{EG}(f, x, k; D\\_{\\text{data}})}\n ^{\\text{(3): Cumulative Gradients up to }\\alpha}\n $$\n \n \n\n\n\n\n\n\n A visual representation of expected gradients. Instead of taking contributions\n from a single path, expected gradients averages contributions from \n all paths defined by the underlying data distribution. Note that\n this figure only displays every 10th sample to avoid loading many images.\n \n\n\n\n In (4) we again plot the sum of the importance scores over pixels. As mentioned\n in the original integrated gradients paper, all path methods, including expected\n gradients, satisfy the completeness axiom. We can definitely see that\n completeness is harder to satisfy when we integrate over both a path\n and a distribution: that is, with the same number\n of samples, expected gradients doesn’t converge as quickly as \n integrated gradients does. Whether or not this is an acceptable price to\n pay to avoid color-blindness in attributions seems subjective.\n \n\n\nComparing Saliency Methods\n--------------------------\n\n\n\n So we now have many different choices for a baseline. How do we choose\n which one we should use? The different choices of distributions and constant\n baselines have different theoretical motivations and practical concerns.\n Do we have any way of comparing the different baselines? In this section,\n we will touch on several different ideas about how to compare\n interpretability methods. This section is not meant to be a comprehensive overview\n of all of the existing evaluation metrics, but is instead meant to \n emphasize that evaluating interpretability methods remains a difficult problem.\n \n\n\n### The Dangers of Qualitative Assessment\n\n\n\n One naive way to evaluate our baselines is to look at the saliency maps \n they produce and see which ones best highlight the object in the image. \n From our earlier figures, it does seem like using \\(D\\_{\\text{data}}\\) produces\n reasonable results, as does using a gaussian baseline or the blurred baseline.\n But is visual inspection really a good way to judge our baselines? For one thing,\n we’ve only presented four images from the test set here. We would need to\n conduct user studies on a much larger scale with more images from the test\n set to be confident in our results. But even with large-scale user studies,\n qualitative assessment of saliency maps has other drawbacks.\n \n\n\n\n When we rely on qualitative assessment, we are assuming that humans\n know what an “accurate” saliency map is. When we look at saliency maps\n on data like ImageNet, we often check whether or not the saliency map\n highlights the object that we see as representing the true class in the image.\n We make an assumption about the relationship between the data and the label, and then further assume\n that a good saliency map should reflect that assumption. But doing so\n has no real justification.
Consider the figure below, which compares \n two saliency methods on a network that gets above 99% accuracy\n on (an altered version of) MNIST.\n The first saliency method is just an edge detector plus gaussian smoothing,\n while the second saliency method is expected gradients using the training\n data as a distribution. Edge detection better reflects what we humans\n think is the relationship between the image and the label.\n \n\n\n\n\n\n\n Qualitative assessment can be dangerous because we rely\n on our human knowledge of the relationship between\n the data and the labels, and then we assume\n that an accurate model has learned that very relationship.\n \n\n\n\n Unfortunately, the edge detection method here does not highlight \n what the network has learned. This dataset is a variant of \n decoy MNIST, in which the top left corner of the image has\n been altered to directly encode the image’s class\n . That is, the intensity\n of the top left corner of each image has been altered to \n be \\(255 \\times \\frac{y}{9} \\) where \\(y\\) is the class\n the image belongs to. We can verify by removing this\n patch in the test set that the network heavily relies on it to make\n predictions, which is what the expected gradients saliency maps show.\n \n\n\n\n This is obviously a contrived example. Nonetheless, the fact that\n visual assessment is not necessarily a useful way to evaluate \n saliency maps and attribution methods has been extensively\n discussed in recent literature, with many proposed qualitative\n tests as replacements \n .\n At the heart of the issue is that we don’t have ground truth explanations:\n we are trying to evaluate which methods best explain our network without\n actually knowing what our networks are doing.\n \n\n\n### Top K Ablation Tests\n\n\n\n One simple way to evaluate the importance scores that \n expected/integrated gradients produces is to see whether \n ablating the top k features as ranked by their importance\n decreases the predicted output logit. In the figure below, we\n ablate either by mean-imputation or by replacing each pixel\n by its gaussian-blurred counter-part (*Mean Top K* and *Blur Top K* in the plot). We generate pixel importances\n for 1000 different correctly classified test-set images using each\n of the baselines proposed above \n For the blur baseline and the blur\n ablation test, we use \\(\\sigma = 20\\).\n For the gaussian baseline, we use \\(\\sigma = 1\\). These choices\n are somewhat arbitrary - a more comprehensive evaluation\n would compare across many values of \\(\\sigma\\).\n . As a\n control, we also include ranking features randomly\n (*Random Noise* in the plot). \n \n\n\n\n We plot, as a fraction of the original logit, the output logit\n of the network at the true class. That is, suppose the original\n image is a goldfinch and the network predicts the goldfinch class correctly\n with 95% confidence. If the confidence of class goldfinch drops\n to 60% after ablating the top 10% of pixels as ranked by \n feature importance, then we plot a curve that goes through\n the points \\((0.0, 0.95)\\) and \\((0.1, 0.6)\\). 
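For reference, here is a sketch of how one point on such a curve could be computed with mean-imputation, reusing the placeholder `model_fn` from before; the helper name and details are ours, not the article's evaluation code.

```python
import numpy as np

def top_k_ablation_score(model_fn, x, attributions, target_class, fraction=0.1):
    """Ablate the top `fraction` of pixels ranked by attribution magnitude
    (mean-imputation) and return the ablated output as a fraction of the original.
    """
    pixel_scores = np.abs(attributions).sum(axis=-1)      # rank pixels, not channels
    k = int(fraction * pixel_scores.size)
    order = np.argsort(pixel_scores.ravel())[::-1]        # most important first
    mask = np.zeros(pixel_scores.size, dtype=bool)
    mask[order[:k]] = True
    mask = mask.reshape(pixel_scores.shape)

    ablated = x.copy()
    ablated[mask] = x.mean(axis=(0, 1))                   # per-channel mean imputation
    original = model_fn(x[None])[0, target_class]
    after = model_fn(ablated[None])[0, target_class]
    return after / original
```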
The baseline choice\n that best highlights which pixels the network relies on\n should exhibit the fastest drop in logit magnitude, because\n it highlights the pixels that most increase the confidence of the network.\n That is, the lower the curve, the better the baseline.\n \n\n\n### Mass Center Ablation Tests\n\n\n\n One problem with ablating the top k features in an image\n is related to an issue we already brought up: feature correlation.\n No matter how we ablate a pixel, that pixel’s neighbors \n provide a lot of information about the pixel’s original value.\n With this in mind, one could argue that progressively ablating \n pixels one by one is a rather meaningless thing to do. Can\n we instead perform ablations with feature correlation in mind?\n \n\n\n\n One straightforward way to do this is to simply compute the \n center of mass \n of the saliency map, and ablate a boxed region centered on\n the center of mass. This tests whether or not the saliency map\n is generally highlighting an important region in the image. We plot\n the results of replacing the boxed region around the center of mass using mean-imputation\n and blurring below as well (*Mean Center* and *Blur Center*, respectively).\n As a control, we compare against a saliency map generated from random gaussian\n noise (*Random Noise* in the plot).\n \n\n\n\n\n\n\n\n\n\n\n A variety of ablation tests on a variety of baselines.\n Using the training distribution and using the uniform distribution\n outperform most other methods on the top k ablation tests. The\n blur baseline inspired by \n does equally well on the blur top-k test. All methods\n perform similarly on the mass center ablation tests. Mouse\n over the legend to highlight a single curve.\n \n\n\n\n The ablation tests seem to indicate some interesting trends. \n All methods do similarly on the mass center ablation tests, and\n only slightly better than random noise. This may be because the \n object of interest generally lies in the center of the image - it\n isn’t hard for random noise to be centered on the image. In contrast,\n using the training data or a uniform distribution seems to do quite well\n on the top-k ablation tests. Interestingly, the blur baseline\n inspired by also\n does quite well on the top-k ablation tests, especially when\n we ablate pixels by blurring them! Would the uniform\n baseline do better if you ablate the image with uniform random noise?\n Perhaps the training distribution baseline would do even better if you ablate an image\n by progressively replacing it with a different image. We leave\n these experiments as future work, as there is a more pressing question\n we need to discuss.\n \n\n\n### The Pitfalls of Ablation Tests\n\n\n\n Can we really trust the ablation tests presented above? We ran each method with 500 samples. \n Constant baselines tend to not need as many samples\n to converge as baselines over distributions. How do we fairly compare between baselines that have\n different computational costs?
Valuable but computationally-intensive future work would be\n comparing not only across baselines but also across the number of samples drawn, \n and for the blur and gaussian baselines, the parameter \\(\\sigma\\).\n As mentioned above, we have defined many notions of missingness other than \n mean-imputation or blurring: more extensive comparisons would also compare\n all of our baselines across all of the corresponding notions of missing data.\n \n\n\n\n But even with all of these added comparisons, do ablation\n tests really provide a well-founded metric to judge attribution methods? \n The authors of argue\n against ablation tests. They point out that once we artificially ablate\n pixels in an image, we have created inputs that do not come from\n the original data distribution. Our trained model has never seen such \n inputs. Why should we expect to extract any reasonable information\n from evaluating our model on them?\n \n\n\n\n On the other hand, integrated gradients and expected gradients\n rely on presenting interpolated images to your model, and unless\n you make some strange convexity assumption, those interpolated images \n don’t belong to the original training distribution either. \n In general, whether or not users should present\n their models with inputs that don’t belong to the original training distribution\n is a subject of ongoing debate\n . Nonetheless, \n the point raised in is still an\n important one: “it is unclear whether the degradation in model \n performance comes from the distribution shift or because the \n features that were removed are truly informative.”\n \n\n\n### Alternative Evaluation Metrics\n\n\n\n So what about other evaluation metrics proposed in recent literature? In\n , Hooker et al. propose a variant of\n an ablation test where we first ablate pixels in the training and\n test sets. Then, we re-train a model on the ablated data and measure\n by how much the test-set performance degrades. This approach has the advantage\n of better capturing whether or not the saliency method\n highlights the pixels that are most important for predicting the output class.\n Unfortunately, it has the drawback of needing to re-train the model several\n times. This metric may also get confused by feature correlation.\n \n\n\n\n Consider the following scenario: our dataset has two features \n that are highly correlated. We train a model which learns to only\n use the first feature, and completely ignore the second feature.\n A feature attribution method might accurately reveal what the model is doing:\n it’s only using the first feature. We could ablate that feature in the dataset, \n re-train the model and get similar performance because similar information \n is stored in the second feature. We might conclude that our feature\n attribution method is lousy - is it? This problem fits into a larger discussion\n about whether or not your attribution method\n should be “true to the model” or “true to the data”,\n which has been discussed in several recent articles\n .\n \n\n\n\n In , the authors propose several\n sanity checks that saliency methods should pass. One is the “Model Parameter\n Randomization Test”. Essentially, it states that a feature attribution\n method should produce different attributions when evaluated on a trained\n model (presumably a trained model that performs well) and a randomly initialized\n model.
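A sketch of this sanity check, assuming `attr_fn(model, x, target_class)` is any of the attribution functions sketched earlier and that both models share the same architecture (the names are placeholders of ours, not the test's reference implementation):

```python
import numpy as np
from scipy.stats import spearmanr

def randomization_sanity_check(attr_fn, trained_model, random_model, x, target_class):
    """Compare attributions from a trained model against those from a randomly
    initialized one. A rank correlation near 1 suggests the attributions barely
    depend on what the model has learned.
    """
    a_trained = attr_fn(trained_model, x, target_class).ravel()
    a_random = attr_fn(random_model, x, target_class).ravel()
    correlation, _ = spearmanr(a_trained, a_random)
    return correlation
```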
This metric is intuitive: if a feature attribution method produces\n similar attributions for random and trained models, is the feature\n attribution really using information from the model? It might just\n be relying entirely on information from the input image.\n \n\n\n\n But consider the following figure, which is another (modified) version\n of MNIST. We’ve generated expected gradients attributions using the training\n distribution as a baseline for two different networks. One of the networks\n is a trained model that gets over 99% accuracy on the test set. The other\n network is a randomly initialized model that doesn’t do better than random guessing.\n Should we now conclude that expected gradients is an unreliable method?\n \n\n\n\n\n\n\n A comparison of two networks’ saliency maps using expected gradients. One\n network has randomly initialized weights, the other gets >99% accuracy\n on the test set.\n \n\n\n\n Of course, we modified MNIST in this example specifically so that expected gradients\n attributions of an accurate model would look exactly like those of a randomly initialized model.\n The way we did this is similar to the decoy MNIST dataset, except instead of the top left\n corner encoding the class label, we randomly scattered noise throughout each training and\n test image, where the intensity of the noise encodes the true class label. Generally,\n you would run these kinds of saliency method sanity checks on unmodified data.\n \n\n\n\n But the truth is, even for natural images, we don’t actually\n know what an accurate model’s saliency maps should look like. \n Different architectures trained on ImageNet can all get good performance\n and have very different saliency maps. Can we really say that \n trained models should have saliency maps that don’t look like \n saliency maps generated on randomly initialized models? That isn’t\n to say that the model randomization test doesn’t have merit: it\n does reveal interesting things about what saliency methods are doing.\n It just doesn’t tell the whole story.\n \n\n\n\n As we mentioned above, there’s a variety of metrics that have been proposed to evaluate \n interpretability methods. There are many metrics we do not explicitly discuss here\n .\n Each proposed metric comes with its own pros and cons. \n In general, evaluating supervised models is somewhat straightforward: we set aside a\n test set and use it to evaluate how well our model performs on unseen data. Evaluating explanations is hard:\n we don’t know what our model is doing and have no ground truth to compare\n against.\n \n\n\nConclusion\n----------\n\n\n\n So what should be done? We have many baselines and \n no conclusion about which one is the “best.” Although\n we don’t provide extensive quantitative results\n comparing each baseline, we do provide a foundation\n for understanding them further. At the heart of\n each baseline is an assumption about missingness \n in our model and the distribution of our data. In this article,\n we shed light on some of those assumptions, and their impact\n on the corresponding path attribution.
We lay\n groundwork for future discussion about baselines in the\n context of path attributions, and more generally about\n the relationship between representations of missingness \n and how we explain machine learning models.\n \n \n\n\n\n\n\n\n A side-by-side comparison of integrated gradients\n using a black baseline \n and expected gradients using the training data\n as a baseline.\n \n\n\nRelated Methods\n---------------\n\n\n\n This work focuses on a specific interpretability method: integrated gradients\n and its extension, expected gradients. We refer to these\n methods as path attribution methods because they integrate \n importances over a path. However, path attribution methods\n represent only a tiny fraction of existing interpretability methods. We focus\n on them here both because they are amenable to interesting visualizations,\n and because they provide a springboard for talking about missingness.\n We briefly cited several other methods at the beginning of this article.\n Many of those methods use some notion of baseline and have contributed to\n the discussion surrounding baseline choices.\n \n\n\n\n In , Fong and Vedaldi propose\n a model-agnostic method to explain neural networks that is based\n on learning the minimal deletion to an image that changes the model\n prediction. In section 4, their work contains an extended discussion on \n how to represent deletions: that is, how to represent missing pixels. They\n argue that one natural way to delete pixels in an image is to blur them.\n This discussion inspired the blurred baseline that we presented in our article.\n They also discuss how noise can be used to represent missingness, which\n was part of the inspiration for our uniform and gaussian noise baselines.\n \n\n\n\n In , Shrikumar et al. \n propose a feature attribution method called deepLIFT. It assigns\n importance scores to features by propagating scores from the output\n of the model back to the input. Similar to integrated gradients,\n deepLIFT also defines importance scores relative to a baseline, which\n they call the “reference”. Their paper has an extended discussion on\n why explaining relative to a baseline is meaningful. They also discuss\n a few different baselines, including “using a blurred version of the original\n image”. \n \n\n\n\n The list of other related methods that we didn’t discuss\n in this article goes on: SHAP and DeepSHAP\n ,\n layer-wise relevance propagation ,\n LIME ,\n RISE and \n Grad-CAM \n among others. Many methods for explaining machine learning models\n define some notion of baseline or missingness, \n because missingness and explanations are closely related. When we explain\n a model, we often want to know which features, when missing, would most\n change model output. But in order to do so, we need to define \n what missing means because most machine learning models cannot\n handle arbitrary patterns of missing inputs. 
This article\n does not discuss all of the nuances presented alongside\n each existing method, but it is important to note that these methods were\n points of inspiration for a larger discussion about missingness.", "date_published": "2020-01-10T20:00:00Z", "authors": ["Pascal Sturmfels", "Scott Lundberg", "Su-In Lee"], "summaries": ["Exploring the baseline input hyperparameter, and how it impacts interpretations of neural network behavior."], "doi": "10.23915/distill.00022", "journal_ref": "distill-pub", "bibliography": [{"link": "https://arxiv.org/pdf/1703.01365", "title": "Axiomatic attribution for deep networks"}, {"link": "https://www.sciencedirect.com/science/article/pii/S0161642018315756", "title": "Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy"}, {"link": "https://iopscience.iop.org/article/10.1088/1361-6579/aad386/meta", "title": "Ensembling convolutional and long short-term memory networks for electrocardiogram arrhythmia detection"}, {"link": "https://www.aaai.org/ocs/index.php/AAAI/AAAI17/paper/viewFile/14806/14311", "title": "Inception-v4, inception-resnet and the impact of residual connections on learning"}, {"link": "https://www.researchgate.net/profile/Li_Jia_Li/publication/221361415_ImageNet_a_Large-Scale_Hierarchical_Image_Database/links/00b495388120dbc339000000/ImageNet-a-Large-Scale-Hierarchical-Image-Database.pdf", "title": "Imagenet: A large-scale hierarchical image database"}, {"link": "https://github.com/tensorflow/models/tree/master/research/slim", "title": "Tensorflow-slim image classification model library"}, {"link": "https://doi.org/10.23915/distill.00010", "title": "The Building Blocks of Interpretability"}, {"link": "https://doi.org/10.23915/distill.00007", "title": "Feature Visualization"}, {"link": "http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf", "title": "A unified approach to interpreting model predictions"}, {"link": "https://arxiv.org/pdf/1604.00825.pdf", "title": "Layer-wise relevance propagation for neural networks with local renormalization layers"}, {"link": "https://arxiv.org/pdf/1704.02685", "title": "Learning important features through propagating activation differences"}, {"link": "https://arxiv.org/pdf/1706.03825.pdf", "title": "Smoothgrad: removing noise by adding noise"}, {"link": "http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf", "title": "Understanding the difficulty of training deep feedforward neural networks"}, {"link": "https://arxiv.org/pdf/1611.02639.pdf", "title": "Gradients of counterfactuals"}, {"link": "https://www.jstor.org/stable/j.ctt13x149m", "title": "Values of non-atomic games"}, {"link": "https://arxiv.org/pdf/1906.10670.pdf", "title": "Learning Explainable Models Using Attribution Priors"}]} {"id": "8ef6f034b4508a82d3da36f052345ec6", "title": "Computing Receptive Fields of Convolutional Neural Networks", "url": "https://distill.pub/2019/computing-receptive-fields", "source": "distill", "source_type": "blog", "text": "While deep neural networks have overwhelmingly established state-of-the-art\n results in many artificial intelligence problems, they can still be\n difficult to develop and debug.\n Recent research on deep learning understanding has focused on\n feature visualization ,\n theoretical guarantees ,\n model interpretability ,\n and generalization .\n \n\n\n\n In this work, we analyze deep neural networks from a complementary\n perspective, focusing on convolutional models.\n We are interested in 
understanding the extent to\n which input signals may affect output features, and mapping\n features at any part of the network to the region in the input that\n produces them. The key parameter to associate an output feature to an input\n region is the *receptive field* of the convolutional network, which is\n defined as the size of the region in the input that produces the feature.\n \n\n\n\n As our first contribution, we\n present a mathematical derivation and an efficient algorithm to compute\n receptive fields of modern convolutional neural networks.\n Previous work \n discussed receptive field\n computation for simple convolutional\n networks where there is a single path from the input to the output,\n providing recurrence equations that apply to this case.\n In this work, we revisit these derivations to obtain a closed-form\n expression for receptive field computation in the single-path case.\n Furthermore, we extend receptive field computation to modern convolutional\n networks where there may be multiple paths from the input to the output.\n To the best of our knowledge, this is the first exposition of receptive\n field computation for such recent convolutional architectures.\n \n\n\n\n Today, receptive field computations are needed in a variety of applications. For example,\n for the computer vision task of object detection, it is important\n to represent objects at multiple scales in order to recognize small and large instances;\n understanding a convolutional feature’s span is often required for that goal\n (e.g., if the receptive field of the network is small, it may not be able to recognize large objects).\n However, these computations are often done by hand, which is both tedious and error-prone.\n This is because there are no libraries to compute these parameters automatically.\n As our second contribution, we fill the void by introducing an\n [open-source library](https://github.com/google-research/receptive_field)\n which handily performs the computations described here. The library is integrated into the Tensorflow codebase and\n can be easily employed to analyze a variety of models,\n as presented in this article.\n \n\n\n\n\n We expect these derivations and open-source code to improve the understanding of complex deep learning models,\n leading to more productive machine learning research.\n \n\n\nOverview of the article\n-----------------------\n\n\n\n We consider fully-convolutional neural networks, and derive their receptive\n field size and receptive field locations for output features with respect to the\n input signal.\n While the derivations presented here are general enough for any type of signal used at the input of convolutional\n neural networks, we use images as a running example, referring to modern computer vision architectures when\n appropriate.\n \n\n\n\n First, we derive closed-form expressions when the network has a\n single path from input to output (as in\n AlexNet \n or\n VGG ). Then, we discuss the\n more general case of arbitrary computation graphs with multiple paths from the\n input to the output (as in\n ResNet \n or\n Inception ). 
We consider\n potential alignment issues that arise in this context, and explain\n an algorithm to compute the receptive field size and locations.\n \n\n\n\n Finally, we analyze the receptive fields of modern convolutional neural networks, showcasing results obtained\n using our open-source library.\n \n\n\nProblem setup\n-------------\n\n\n\n Consider a fully-convolutional network (FCN) with \\(L\\) layers, \\(l = 1,2,\\ldots\n  ,L\\). Define feature map \\(f\\_l \\in R^{h\\_l\\times w\\_l\\times d\\_l}\\) to denote the\n output of the \\(l\\)-th layer, with height \\(h\\_l\\), width \\(w\\_l\\) and depth\n \\(d\\_l\\). We denote the input image by \\(f\\_0\\). The final output feature map\n corresponds to \\(f\\_{L}\\).\n \n\n\n\n To simplify the presentation, the derivations presented in this document consider\n \\(1\\)-dimensional input signals and feature maps. For higher-dimensional signals\n (e.g., \\(2\\)D images), the\n derivations can be applied to each dimension independently. Similarly, the figures\n depict \\(1\\)-dimensional depth, since this does not affect the receptive field computation.\n \n\n\nEach layer \\(l\\)’s spatial configuration is parameterized by 4 variables, as illustrated in the following figure:\n \n\n\n* \\(k\\_l\\): kernel size (positive integer)\n* \\(s\\_l\\): stride (positive integer)\n* \\(p\\_l\\): padding applied to the left side of the input feature map\n (non-negative integer)\n \n A more general definition of padding may also be considered: negative\n padding, interpreted as cropping, can be used in our derivations\n without any changes. In order to make the article more concise, our\n presentation focuses solely on non-negative padding.\n* \\(q\\_l\\): padding applied to the right side of the input feature map\n (non-negative integer)\n\n\n\n\n Sorry, your browser does not support inline SVG.\n \n\nKernel Size (kl): 2\nLeft Padding (pl): 1\nRight Padding (ql): 1\nStride (sl): 3\n\nfl\nfl-1\nkl\nkl\nkl\nsl\npl\nql\n\n\n\n\n We consider layers whose output features depend locally on input features:\n e.g., convolution, pooling, or elementwise operations such as non-linearities,\n addition and filter concatenation. These are commonly used in state-of-the-art\n networks. We define elementwise operations to\n have a “kernel size” of \\(1\\), since each output feature depends on a single\n location of the input feature maps.\n \n\n\n\n Our notation is further illustrated with the simple\n network below. In this case, \\(L=4\\) and the model consists of a\n convolution, followed by ReLU, a second convolution and max-pooling.\n \n We adopt the convention where the first output feature for each layer is\n computed by placing the kernel on the left-most position of the input,\n including padding. This convention is adopted by all major deep learning\n libraries.\n \n\n\n\n\n![](https://distill.pub/2018/feature-wise-transformations/images/pointer.svg)\n\n Sorry, your browser does not support inline SVG.\n \nf0\nf1\nf2\nf3\nf4\n\n**Convolution** \n\n Kernel Size (k1): 1 \n\n Padding (p1, q1): 0 \n\n Stride (s1): 1 \n\nInvalid setup! Decrease kernel \nsize or increase padding.\n\n\n**ReLU** \n\n Kernel Size (k2): 1 \n\n Padding (p2, q2): 0 \n\n Stride (s2): 1 \n\n\n\n**Convolution** \n\n Kernel Size (k3): 1 \n\n Padding (p3, q3): 0 \n\n Stride (s3): 1 \n\nInvalid setup! Decrease kernel \nsize or increase padding.\n\n\n**Max Pooling** \n\n Kernel Size (k4): 1 \n\n Padding (p4, q4): 0 \n\n Stride (s4): 1 \n\nInvalid setup! 
Decrease kernel \nsize or increase padding.\n\n\n\n\nSingle-path networks\n--------------------\n\n\n\n In this section, we compute recurrence and closed-form expressions for\n fully-convolutional networks with a single path from input to output\n (e.g.,\n AlexNet \n or\n VGG ).\n \n\n\n### Computing receptive field size\n\n\n\n Define \\(r\\_l\\) as the receptive field size of\n the final output feature map \\(f\\_{L}\\), with respect to feature map \\(f\\_l\\). In\n other words, \\(r\\_l\\) corresponds to the number of features in feature map\n \\(f\\_l\\) which contribute to generate one feature in \\(f\\_{L}\\). Note\n that \\(r\\_{L}=1\\).\n \n\n\n\n As a simple example, consider layer \\(L\\), which takes features \\(f\\_{L-1}\\) as\n input, and generates \\(f\\_{L}\\) as output. Here is an illustration:\n \n\n\n\n\n Sorry, your browser does not support inline SVG.\n \n\nKernel Size (kL): 2\nPadding (pL, qL): 0\nStride (sL): 3\n\nkL\nkL\nfL-1\nfL\n\n\n\n\n It is easy to see that \\(k\\_{L}\\)\n features from \\(f\\_{L-1}\\) can influence one feature from \\(f\\_{L}\\), since each\n feature from \\(f\\_{L}\\) is directly connected to \\(k\\_{L}\\) features from\n \\(f\\_{L-1}\\). So, \\(r\\_{L-1} = k\\_{L}\\).\n \n\n\n\n Now, consider the more general case where we know \\(r\\_{l}\\) and want to compute\n \\(r\\_{l-1}\\). Each feature \\(f\\_{l}\\) is connected to \\(k\\_{l}\\) features from\n \\(f\\_{l-1}\\).\n \n\n\n\n First, consider the situation where \\(k\\_l=1\\): in this case, the \\(r\\_{l}\\)\n features in \\(f\\_{l}\\) will cover \\(r\\_{l-1}=s\\_l\\cdot r\\_{l} - (s\\_l - 1)\\) features\n in in \\(f\\_{l-1}\\). This is illustrated in the figure below, where \\(r\\_{l}=2\\)\n (highlighted in red). The first term \\(s\\_l \\cdot r\\_{l}\\) (green) covers the\n entire region where the\n features come from, but it will cover \\(s\\_l - 1\\) too many features (purple),\n which is why it needs to be deducted.\n \n As in the illustration below, note that, in some cases, the receptive\n field region may contain “holes”, i.e., some of the input features may be\n unused for a given layer.\n \n\n\n\n\n\n Sorry, your browser does not support inline SVG.\n \n\nKernel Size (kl): 1\nPadding (pl, ql): 0\nStride (sl): 3\n\nsl⋅rl\nsl-1\nfl-1\nfl\nrl=2\n\n\n\n\n For the case where \\(k\\_l > 1\\), we just need to add \\(k\\_l-1\\) features, which\n will cover those from the left and the right of the region. For example, if we\n use a kernel size of \\(5\\) (\\(k\\_l=5\\)), there would be \\(2\\) extra features used\n on each side, adding \\(4\\) in total. If \\(k\\_l\\) is even, this works as well,\n since the left and right padding will add to \\(k\\_l-1\\).\n \n Due to border effects, note that the size of the region in the original\n image which is used to compute each output feature may be different. This\n happens if padding is used, in which case the receptive field for\n border features includes the padded region. 
Later in the article, we\n discuss how to compute the receptive field region for each feature,\n which can be used to determine exactly which image pixels are used for\n each output feature.\n \n\n\n\n\n\n Sorry, your browser does not support inline SVG.\n \n\nKernel Size (kl): 5\nPadding (pl, ql): 0\nStride (sl): 3\n\nsl⋅rl - (sl-1)\n(kl-1)/2\n(kl-1)/2\nfl-1\nfl\nrl=2\n\n\n\n\n So, we obtain the general recurrence equation (which is\n [first-order,\n non-homogeneous, with variable\n coefficients](https://en.wikipedia.org/wiki/Recurrence_relation#Solving_first-order_non-homogeneous_recurrence_relations_with_variable_coefficients) \n ):\n \n\n\n\n \\(\\begin{align}\n r\\_{l-1} = s\\_l \\cdot r\\_{l} + (k\\_l - s\\_l)\n \\label{eq:rf\\_recurrence}\\\n \\end{align}\\)\n \n\n\n\n This equation can be used in a recursive algorithm to compute the receptive\n field size of the network, \\(r\\_0\\). However, we can do even better: we can [solve\n the recurrence equation](#solving-receptive-field-size) and obtain a solution in terms of the \\(k\\_l\\)’s and\n \\(s\\_l\\)’s:\n \n\n\n\n \\begin{equation}\n r\\_0 = \\sum\\_{l=1}^{L} \\left((k\\_l-1)\\prod\\_{i=1}^{l-1}\n s\\_i\\right) + 1 \\label{eq:rf\\_recurrence\\_final} \\end{equation}\n \n\n\n\n This expression makes intuitive sense, which can be seen by considering some\n special cases. For example, if all kernels are of size 1, naturally the\n receptive field is also of size 1. If all strides are 1, then the receptive\n field will simply be the sum of \\((k\\_l-1)\\) over all layers, plus 1, which is\n simple to see. If the stride is greater than 1 for a particular layer, the region\n increases proportionally for all layers below that one. Finally, note that\n padding does not need to be taken into account for this derivation.\n \n\n\n### Computing receptive field region in input image\n\n\n\n While it is important to know the size of the region which generates one feature\n in the output feature map, in many cases it is also critical to precisely\n localize the region which generated a feature. For example, given feature\n \\(f\\_{L}(i, j)\\), what is the region in the input image which generated it? This\n is addressed in this section.\n \n\n\n\n Let’s denote \\(u\\_l\\) and \\(v\\_l\\) the left-most\n and right-most coordinates (in \\(f\\_l\\)) of the region which is used to compute the\n desired feature in \\(f\\_{L}\\). In these derivations, the coordinates are zero-indexed (i.e., the first feature in\n each map is at coordinate \\(0\\)).\n\n Note that \\(u\\_{L} = v\\_{L}\\) corresponds to the\n location of the desired feature in \\(f\\_{L}\\). The figure below illustrates a\n simple 2-layer network, where we highlight the region in \\(f\\_0\\) which is used\n to compute the first feature from \\(f\\_2\\). Note that in this case the region\n includes some padding. In this example, \\(u\\_2=v\\_2=0\\), \\(u\\_1=0,v\\_1=1\\), and\n \\(u\\_0=-1, v\\_0=4\\).\n \n\n\n\n\n Sorry, your browser does not support inline SVG.\n \n\nKernel Size (k1): 3\nPadding (p1, q1): 1\nStride (s1): 3\n\n\nKernel Size (k2): 2\nPadding (p2, q2): 0\nStride (s2): 1\n\nu0 = -1\nv0 = 4\nf0\nf1\nf2\n\n\n\n\n We’ll start by asking the following question: given \\(u\\_{l}, v\\_{l}\\), can we\n compute \\(u\\_{l-1},v\\_{l-1}\\)?\n \n\n\n\n Start with a simple case: let’s say \\(u\\_{l}=0\\) (this corresponds to the first\n position in \\(f\\_{l}\\)). 
In this case, the left-most feature \\(u\\_{l-1}\\) will\n clearly be located at \\(-p\\_l\\), since the first feature will be generated by\n placing the left end of the kernel over that position. If \\(u\\_{l}=1\\), we’re\n interested in the second feature, whose left-most position \\(u\\_{l-1}\\) is \\(-p\\_l\n + s\\_l\\); for \\(u\\_{l}=2\\), \\(u\\_{l-1}=-p\\_l + 2\\cdot s\\_l\\); and so on. In general:\n \n\n\n\n \\(\\begin{align}\n u\\_{l-1}&= -p\\_l + u\\_{l}\\cdot s\\_l \\label{eq:rf\\_loc\\_recurrence\\_u} \\\\\n v\\_{l-1}&= -p\\_l + v\\_{l}\\cdot s\\_l + k\\_l -1\n \\label{eq:rf\\_loc\\_recurrence\\_v}\n \\end{align}\\)\n \n\n\n\n where the computation of \\(v\\_l\\) differs only by adding \\(k\\_l-1\\), which is\n needed since in this case we want to find the right-most position.\n \n\n\n\n Note that these expressions are very similar to the recursion derived for the\n receptive field size \\eqref{eq:rf\\_recurrence}. Again, we could implement a\n recursion over the network to obtain \\(u\\_l,v\\_l\\) for each layer; but we can also\n [solve for \\(u\\_0,v\\_0\\)](#solving-receptive-field-region) and obtain closed-form expressions in terms of the\n network parameters:\n \n\n\n\n \\(\\begin{align}\n u\\_0&= u\\_{L}\\prod\\_{i=1}^{L}s\\_i - \\sum\\_{l=1}^{L}\n p\\_l\\prod\\_{i=1}^{l-1} s\\_i\n \\label{eq:rf\\_loc\\_recurrence\\_final\\_left}\n \\end{align}\\)\n \n\n\n\n This gives us the left-most feature position in the input image as a function of\n the padding (\\(p\\_l\\)) and stride (\\(s\\_l\\)) applied in each layer of the network,\n and of the feature location in the output feature map (\\(u\\_{L}\\)).\n \n\n\n\n And for the right-most feature location \\(v\\_0\\):\n \n\n\n\n \\(\\begin{align}\n v\\_0&= v\\_{L}\\prod\\_{i=1}^{L}s\\_i -\\sum\\_{l=1}^{L}(1 + p\\_l -\n k\\_l)\\prod\\_{i=1}^{l-1} s\\_i\n \\label{eq:rf\\_loc\\_recurrence\\_final\\_right}\n \\end{align}\\)\n \n\n\n\n Note that, different from \\eqref{eq:rf\\_loc\\_recurrence\\_final\\_left}, this\n expression also depends on the kernel sizes (\\(k\\_l\\)) of each layer.\n \n\n\n\n**Relation between receptive field size and region.**\n You may be wondering that\n the receptive field size \\(r\\_0\\) must be directly related to \\(u\\_0\\) and\n \\(v\\_0\\). Indeed, this is the case; it is easy to show that \\(r\\_0 = v\\_0 - u\\_0 +\n 1\\), which we leave as a follow-up exercise for the curious reader. 
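These closed-form expressions are easy to turn into code. The sketch below is only an illustration (it is not the interface of the open-source library mentioned above): it computes \(r\_0\), \(u\_0\) and \(v\_0\) for a single-path network described by per-layer kernel sizes, strides and left paddings.

```python
def receptive_field_single_path(layers, u_L=0):
    """Closed-form receptive field size and region for a single-path FCN.

    `layers` lists (kernel_size, stride, left_padding) for l = 1, ..., L,
    ordered from input to output. Returns (r_0, u_0, v_0) for the output
    feature at zero-indexed location u_L.
    """
    r0 = 1           # receptive field size
    pad_sum = 0      # sum over l of p_l * prod_{i<l} s_i
    stride_prod = 1  # running product of strides, prod_{i<l} s_i
    for k, s, p in layers:
        r0 += (k - 1) * stride_prod
        pad_sum += p * stride_prod
        stride_prod *= s
    u0 = u_L * stride_prod - pad_sum  # left-most input coordinate
    v0 = u0 + r0 - 1                  # right-most input coordinate
    return r0, u0, v0

# The two-layer example above (k=(3,2), s=(3,1), p=(1,0), u_L=0)
# gives r_0=6, u_0=-1, v_0=4, matching the highlighted region.
print(receptive_field_single_path([(3, 3, 1), (2, 1, 0)]))  # (6, -1, 4)
```

Note that, as in the derivation, padding only affects the region \(u\_0, v\_0\) and not the size \(r\_0\).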
To\n emphasize, this means that we can rewrite\n \\eqref{eq:rf\\_loc\\_recurrence\\_final\\_right} as:\n \n\n\n\n \\(\\begin{align}\n v\\_0&= u\\_0 + r\\_0 - 1\n \\label{eq:rf\\_loc\\_recurrence\\_final\\_right\\_rewrite}\n \\end{align}\\)\n \n\n\n\n**Effective stride and effective padding.**\n To compute \\(u\\_0\\) and \\(v\\_0\\) in practice, it\n is convenient to define two other variables, which depend only on the paddings\n and strides of the different layers:\n \n\n\n* *effective stride*\n \\(S\\_l = \\prod\\_{i=l+1}^{L}s\\_i\\): the stride between a\n given feature map \\(f\\_l\\) and the output feature map \\(f\\_{L}\\)\n* *effective padding*\n \\(P\\_l = \\sum\\_{m=l+1}^{L}p\\_m\\prod\\_{i=l+1}^{m-1} s\\_i\\):\n the padding between a given feature map \\(f\\_l\\) and the output feature map\n \\(f\\_{L}\\)\n\n\n\n With these definitions, we can rewrite \\eqref{eq:rf\\_loc\\_recurrence\\_final\\_left}\n as:\n \n\n\n\n \\(\\begin{align}\n u\\_0&= -P\\_0 + u\\_{L}\\cdot S\\_0\n \\label{eq:rf\\_loc\\_recurrence\\_final\\_left\\_effective}\n \\end{align}\\)\n \n\n\n\n Note the resemblance between \\eqref{eq:rf\\_loc\\_recurrence\\_final\\_left\\_effective}\n and \\eqref{eq:rf\\_loc\\_recurrence\\_u}. By using \\(S\\_l\\) and \\(P\\_l\\), one can\n compute the locations \\(u\\_l,v\\_l\\) for feature map \\(f\\_l\\) given the location at\n the output feature map \\(u\\_{L}\\). When one is interested in computing feature\n locations for a given network, it is handy to pre-compute three variables:\n \\(P\\_0,S\\_0,r\\_0\\). Using these three, one can obtain \\(u\\_0\\) using\n \\eqref{eq:rf\\_loc\\_recurrence\\_final\\_left\\_effective} and \\(v\\_0\\) using\n \\eqref{eq:rf\\_loc\\_recurrence\\_final\\_right\\_rewrite}. This allows us to obtain the\n mapping from any output feature location to the input region which influences\n it.\n \n\n\n\n It is also possible to derive recurrence equations for the effective stride and\n effective padding. It is straightforward to show that:\n \n\n\n\n \\(\\begin{align}\n S\\_{l-1}&= s\\_l \\cdot S\\_l \\label{eq:effective\\_stride\\_recurrence} \\\\\n P\\_{l-1}&= s\\_l \\cdot P\\_l + p\\_l \\label{eq:effective\\_padding\\_recurrence}\n \\end{align}\\)\n \n\n\n\n These expressions will be handy when deriving an algorithm to solve the case\n for arbitrary computation graphs, presented in the next section.\n \n\n\n\n**Center of receptive field region.**\n It is also interesting to derive an\n expression for the center of the receptive field region which influences a\n particular output feature. This can be used as the location of the feature in\n the input image (as done for recent\n deep learning-based local features , for\n example).\n \n\n\n\n We define the center of the receptive field region for each layer \\(l\\) as\n \\(c\\_l = \\frac{u\\_l + v\\_l}{2}\\). 
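The effective stride and effective padding recurrences above translate directly into code as well. The following sketch (again an illustration only, with names of our own choosing) walks the layers from the output back to the input, then locates \(u\_0\) via \eqref{eq:rf\_loc\_recurrence\_final\_left\_effective}:

```python
def effective_stride_and_padding(layers):
    """Effective stride S_0 and effective padding P_0 of a single-path FCN.

    Uses the recurrences S_{l-1} = s_l * S_l and P_{l-1} = s_l * P_l + p_l,
    initialized with S_L = 1 and P_L = 0 at the output feature map.
    `layers` lists (kernel_size, stride, left_padding) from input to output.
    """
    S, P = 1, 0
    for _, s, p in reversed(layers):  # from the output back to the input
        S = s * S
        P = s * P + p
    return S, P

# Same two-layer example as before: S_0 = 3, P_0 = 1, so the first output
# feature (u_L = 0) maps to u_0 = -P_0 + u_L * S_0 = -1.
S0, P0 = effective_stride_and_padding([(3, 3, 1), (2, 1, 0)])
u0 = -P0 + 0 * S0  # -1
```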
Given the above expressions for \\(u\\_0,v\\_0,r\\_0\\),\n it is straightforward to derive \\(c\\_0\\) (remember that \\(u\\_{L}=v\\_{L}\\)):\n \n\n\n\n \\(\\begin{align}\n c\\_0&= u\\_{L}\\prod\\_{i=1}^{L}s\\_i\n - \\sum\\_{l=1}^{L}\n \\left(p\\_l - \\frac{k\\_l - 1}{2}\\right)\\prod\\_{i=1}^{l-1} s\\_i \\nonumber \\\\&= u\\_{L}\\cdot S\\_0\n - \\sum\\_{l=1}^{L}\n \\left(p\\_l - \\frac{k\\_l - 1}{2}\\right)\\prod\\_{i=1}^{l-1} s\\_i\n \\nonumber \\\\&= -P\\_0 + u\\_{L}\\cdot S\\_0 + \\left(\\frac{r\\_0 - 1}{2}\\right)\n \\label{eq:rf\\_loc\\_recurrence\\_final\\_center\\_effective}\n \\end{align}\\)\n \n\n\n\n This expression can be compared to\n \\eqref{eq:rf\\_loc\\_recurrence\\_final\\_left\\_effective} to observe that the center is\n shifted from the left-most pixel by \\(\\frac{r\\_0 - 1}{2}\\), which makes sense.\n Note that the receptive field centers for the different output features are\n spaced by the effective stride \\(S\\_0\\), as expected. Also, it is interesting to\n note that if \\(p\\_l = \\frac{k\\_l - 1}{2}\\) for all \\(l\\), the centers of the\n receptive field regions for the output features will be aligned to the first\n image pixel and located at \\({0, S\\_0, 2S\\_0, 3S\\_0, \\ldots}\\) (note that in this\n case all \\(k\\_l\\)’s must be odd).\n \n\n\n\n**Other network operations.**\n The derivations provided in this section cover most basic operations at the\n core of convolutional neural networks. A curious reader may be wondering\n about other commonly-used operations, such as dilation, upsampling, etc. You\n can find a discussion on these [in the appendix](#other-network-operations).\n \n\n\nArbitrary computation graphs\n----------------------------\n\n\n\n Most state-of-the-art convolutional neural networks today (e.g.,\n ResNet or\n Inception ) rely on models\n where each layer may have more than one input, which\n means that there might be several different paths from the input image to the\n final output feature map. These architectures are usually represented using\n directed acyclic computation graphs, where the set of nodes \\(\\mathcal{L}\\)\n represents the layers and the set of edges \\(\\mathcal{E}\\) encodes the\n connections between them (the feature maps flow through the edges).\n \n\n\n\n The computation presented in the previous section can be used for each of the\n possible paths from input to output independently. The situation becomes\n trickier when one wants to take into account all different paths to find the\n receptive field size of the network and the receptive field regions which\n correspond to each of the output features.\n \n\n\n\n**Alignment issues.**\n The first potential issue is that one output feature may\n be computed using misaligned regions of the input image, depending on the\n path from input to output. Also, the relative position between the image regions\n used for the computation of each output feature may vary. As a consequence,\n **the receptive field size may not be shift-invariant**\n  . 
This is illustrated in the\n figure below with a toy example, in which case the centers of the regions used\n in the input image are different for the two paths from input to output.\n \n\n\n\n![](https://distill.pub/2018/feature-wise-transformations/images/pointer.svg)\n\n Sorry, your browser does not support inline SVG.\n \n\nKernel Size (k1): 5\nLeft Pad (p1): 2\nRight Pad (q1): 1\nStride (s1): 2\n\n\nKernel Size (k2): 3\nLeft Pad (p2): 0\nRight Pad (q2): 0\nStride (s2): 1\n\n\nKernel Size (k3): 3\nLeft Pad (p3): 0\nRight Pad (q3): 0\nStride (s3): 1\n\n\nAdd\n\n\n\n\n\n In this example, padding is used only for the left branch. The first three layers\n are convolutional, while the last layer performs a simple addition.\n The relative position between the receptive field regions of the left and\n right paths is inconsistent for different output features, which leads to a\n lack of alignment (this can be seen by hovering over the different output features).\n Also, note that the receptive field size for each output\n feature may be different. For the second feature from the left, \\(6\\) input\n samples are used, while only \\(5\\) are used for the third feature. This means\n that the receptive field size may not be shift-invariant when the network is not\n aligned.\n \n\n\n\n For many computer vision tasks, it is highly desirable that output features be aligned:\n “image-to-image translation” tasks (e.g., semantic segmentation, edge detection,\n surface normal estimation, colorization, etc), local feature matching and\n retrieval, among others.\n \n\n\n\n When the network is aligned, all different paths lead to output features being\n centered consistently in the same locations. All different paths must have the\n same effective stride. It is easy to see that the receptive field size will be\n the largest receptive field among all possible paths. Also, the effective\n padding of the network corresponds to the effective padding for the path with\n largest receptive field size, such that one can apply\n \\eqref{eq:rf\\_loc\\_recurrence\\_final\\_left\\_effective},\n \\eqref{eq:rf\\_loc\\_recurrence\\_final\\_center\\_effective} to localize the region which\n generated an output feature.\n \n\n\n\n The figure below gives one simple example of an aligned network. In this case,\n the two different paths lead to the features being centered at the same\n locations. The receptive field size is \\(3\\), the effective stride is \\(4\\) and\n the effective padding is \\(1\\).\n \n\n\n\n![](https://distill.pub/2018/feature-wise-transformations/images/pointer.svg)\n\n Sorry, your browser does not support inline SVG.\n \n\nKernel Size (k1): 1\nLeft Pad (p1): 0\nRight Pad (q1): 0\nStride (s1): 4\n\n\nKernel Size (k2): 3\nLeft Pad (p2): 1\nRight Pad (q2): 0\nStride (s2): 2\n\n\nKernel Size (k3): 1\nLeft Pad (p3): 0\nRight Pad (q3): 0\nStride (s3): 2\n\n\nAdd\n\n\n\n\n\n**Alignment criteria**\n  . More precisely, for a network to be aligned at every\n layer, we need every possible pair of paths \\(i\\) and \\(j\\) to have\n \\(c\\_l^{(i)} = c\\_l^{(j)}\\) for any layer \\(l\\) and output feature \\(u\\_{L}\\). 
For\n this to happen, we can see from\n \\eqref{eq:rf\\_loc\\_recurrence\\_final\\_center\\_effective} that two conditions must be\n satisfied:\n \n\n\n\n \\(\\begin{align}\n S\\_l^{(i)}&= S\\_l^{(j)} \\label{eq:align\\_crit\\_1} \\\\\n -P\\_l^{(i)} + \\left(\\frac{r\\_l^{(i)} - 1}{2}\\right)&= -P\\_l^{(j)} + \\left(\\frac{r\\_l^{(j)} - 1}{2}\\right)\n \\label{eq:align\\_crit\\_2}\n \\end{align}\\)\n \n\n\nfor all \\(i,j,l\\).\n\n\n\n**Algorithm for computing receptive field parameters: sketch.**\n It is straightforward to develop an efficient algorithm that computes the receptive\n field size and associated parameters for such computation graphs.\n Naturally, a brute-force approach is to use the expressions presented above to\n compute the receptive field parameters for each route from the input to output independently,\n coupled with some bookkeeping in order to compute the parameters for the entire network.\n This method has a worst-case complexity of\n \\(\\mathcal{O}\\left(\\left|\\mathcal{E}\\right| \\times \\left|\\mathcal{L}\\right|\\right)\\).\n \n\n\n\n\n\n But we can do better. Start by topologically sorting the computation graph.\n The sorted representation arranges the layers in order of dependence: each\n layer’s output only depends on layers that appear before it.\n By visiting layers in reverse topological order, we ensure that all paths\n from a given layer \\(l\\) to the output layer \\(L\\) have been taken into account\n when \\(l\\) is visited. Once the input layer \\(l=0\\) is reached, all paths\n have been considered and the receptive field parameters of the entire model\n are obtained. The complexity of this algorithm is\n \\(\\mathcal{O}\\left(\\left|\\mathcal{E}\\right| + \\left|\\mathcal{L}\\right|\\right)\\),\n which is much better than the brute-force alternative.\n \n\n\n\n As each layer is visited, some bookkeeping must be done in order to keep\n track of the network’s receptive field parameters. In particular, note that\n there might be several different paths from layer \\(l\\) to the output layer\n \\(L\\). 
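Concretely, the traversal can be sketched as follows. This is an illustration of the idea rather than the library's implementation, and the graph encoding is of our own choosing: `consumers[l]` lists the layers that read feature map \(f\_l\), and `params[m]` holds \((k\_m, s\_m, p\_m)\) for layer \(m\).

```python
def reverse_topological_order(consumers):
    """Order layers so that every consumer of a layer appears before the
    layer itself (the output layer comes first, the input last).
    `consumers[l]` lists the layers that read feature map f_l."""
    order, seen = [], set()

    def visit(l):
        if l in seen:
            return
        seen.add(l)
        for m in consumers.get(l, []):
            visit(m)
        order.append(l)

    for l in consumers:
        visit(l)
    return order


def receptive_field_of_graph(consumers, params, output):
    """Sketch: receptive field parameters (r_l, S_l, P_l) of every feature
    map f_l with respect to the output feature map, for a DAG of layers.
    `params[m]` holds (kernel_size, stride, left_padding) of layer m."""
    rf = {output: (1, 1, 0)}  # the output w.r.t. itself: r=1, S=1, P=0
    for l in reverse_topological_order(consumers):
        if l == output:
            continue
        candidates = []
        for m in consumers[l]:  # every layer that consumes f_l
            k, s, p = params[m]
            r_m, S_m, P_m = rf[m]
            candidates.append((s * r_m + (k - s),  # receptive field recurrence
                               s * S_m,            # effective stride recurrence
                               s * P_m + p))       # effective padding recurrence
        # Alignment checks: all paths must share the same effective stride
        # and the same center offset -P + (r - 1) / 2.
        assert len({S for _, S, _ in candidates}) == 1, "not aligned (stride)"
        assert len({-P + (r - 1) / 2 for r, _, P in candidates}) == 1, "not aligned"
        # Keep the parameters of the path with the largest receptive field.
        rf[l] = max(candidates)
    return rf


# The aligned two-branch example above: a single stride-4 convolution on one
# branch, a (k=3, p=1, s=2) convolution followed by a (k=1, s=2) convolution
# on the other, both feeding an elementwise Add (kernel size 1).
consumers = {0: [1, 2], 1: [4], 2: [3], 3: [4], 4: []}
params = {1: (1, 4, 0), 2: (3, 2, 1), 3: (1, 2, 0), 4: (1, 1, 0)}
print(receptive_field_of_graph(consumers, params, output=4)[0])  # (3, 4, 1)
```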
In order to handle this situation, we keep track of the parameters\n for \\(l\\) and update them if a new path with larger receptive field is found,\n using expressions \\eqref{eq:rf\\_recurrence}, \\eqref{eq:effective\\_stride\\_recurrence}\n and \\eqref{eq:effective\\_padding\\_recurrence}.\n Similarly, as the graph is traversed, it is important to check that the network is aligned.\n This can be done by making sure that the receptive field parameters of different paths satisfy\n \\eqref{eq:align\\_crit\\_1} and \\eqref{eq:align\\_crit\\_2}.\n \n\n\nDiscussion: receptive fields of modern networks\n-----------------------------------------------\n\n\n\n In this section, we present the receptive field parameters of modern\n convolutional networks\n \n The models used for receptive field computations, as well as the accuracy reported on ImageNet experiments,\n are drawn from the [TF-Slim\n image classification model library](https://github.com/tensorflow/models/tree/master/research/slim).\n \n  , which were computed using the new open-source\n library (script\n [here](https://github.com/google-research/receptive_field/blob/master/receptive_field/python/util/examples/rf_benchmark.py)).\n The pre-computed parameters for\n AlexNet ,\n VGG ,\n ResNet ,\n Inception \n and\n MobileNet \n are presented in the table below.\n For a more\n comprehensive list, including intermediate network end-points, see\n [this\n table](https://github.com/google-research/receptive_field/blob/master/receptive_field/RECEPTIVE_FIELD_TABLE.md).\n \n\n\n\n\n\n\n\n| ConvNet Model | Receptive Field (r) | Effective Stride (S) | Effective Padding (P) | Model Year |\n| --- | --- | --- | --- | --- |\n| alexnet\\_v2 | 195 | 32 | 64 | [2014](https://arxiv.org/abs/1404.5997v2) |\n| vgg\\_16 | 212 | 32 | 90 | [2014](https://arxiv.org/abs/1409.1556) |\n| mobilenet\\_v1 | 315 | 32 | 126 | [2017](https://arxiv.org/abs/1704.04861) |\n| mobilenet\\_v1\\_075 | 315 | 32 | 126 | [2017](https://arxiv.org/abs/1704.04861) |\n| resnet\\_v1\\_50 | 483 | 32 | 239 | [2015](https://arxiv.org/abs/1512.03385) |\n| inception\\_v2 | 699 | 32 | 318 | [2015](https://arxiv.org/abs/1502.03167) |\n| resnet\\_v1\\_101 | 1027 | 32 | 511 | [2015](https://arxiv.org/abs/1512.03385) |\n| inception\\_v3 | 1311 | 32 | 618 | [2015](https://arxiv.org/abs/1512.00567) |\n| resnet\\_v1\\_152 | 1507 | 32 | 751 | [2015](https://arxiv.org/abs/1512.03385) |\n| resnet\\_v1\\_200 | 1763 | 32 | 879 | [2015](https://arxiv.org/abs/1512.03385) |\n| inception\\_v4 | 2071 | 32 | 998 | [2016](https://arxiv.org/abs/1602.07261) |\n| inception\\_resnet\\_v2 | 3039 | 32 | 1482 | [2016](https://arxiv.org/abs/1602.07261) |\n\n\n\n\n As models evolved, from\n AlexNet, to VGG, to ResNet and Inception, the receptive fields increased\n (which is a natural consequence of the increased number of layers).\n In the most recent networks, the receptive field usually covers the entire input image:\n this means that the context used by each feature in the final output feature map\n includes all of the input pixels.\n \n\n\n\n We can also relate the growth in receptive fields to increased\n classification accuracy. The figure below plots ImageNet\n top-1 accuracy as a function of the network’s receptive field size, for\n the same networks listed above. 
The circle size for each data point is\n proportional to the number of floating-point operations (FLOPs) for each\n architecture.\n \n\n\n\n\n\n google.charts.load('current', { 'packages': ['corechart'] });\n google.charts.setOnLoadCallback(drawSeriesChart);\n\n function drawSeriesChart() {\n\n var data = google.visualization.arrayToDataTable([\n ['ID', 'Receptive field size (pixels)', 'ImageNet top-1 accuracy', 'Family', 'FLOPS (Billion)'],\n ['alexnet\\_v2', 195, 0.5720, 'alexnet', 1.38],\n ['vgg\\_16', 212, 0.7150, 'vgg\\_16', 30.71],\n ['inception\\_v2', 699, 0.7390, 'inception', 3.88],\n ['inception\\_v3', 1311, 0.7800, 'inception', 5.69],\n ['inception\\_v4', 2071, 0.8020, 'inception', 12.27],\n ['inception\\_resnet\\_v2', 3039, 0.8040, 'inception\\_resnet', 12.96],\n ['resnet\\_v1\\_50', 483, 0.7520, 'resnet', 6.97],\n ['resnet\\_v1\\_101', 1027, 0.7640, 'resnet', 14.40],\n ['resnet\\_v1\\_152', 1507, 0.7680, 'resnet', 21.82],\n ['mobilenet\\_v1', 315, 0.7090, 'mobilenet', 1.14]\n ]);\n\n var options = {\n title: '',\n hAxis: { title: 'Receptive field size (pixels)', gridlines: { count: 10 } },\n vAxis: { title: 'ImageNet top-1 accuracy', format: 'percent' },\n sizeAxis: { maxSize: 10 },\n legend: { position: 'bottom', textStyle: { fontSize: 12 } },\n bubble: { textStyle: { fontSize: 11 } }\n };\n\n var chart = new google.visualization.BubbleChart(document.getElementById('series\\_chart\\_div'));\n chart.draw(data, options);\n }\n \n\n\n\n\n\n\n\n We observe a logarithmic relationship between\n classification accuracy and receptive field size, which suggests\n that large receptive fields are necessary for high-level\n recognition tasks, but with diminishing rewards.\n For example, note how MobileNets achieve high recognition performance even\n if using a very compact architecture: with depth-wise convolutions,\n the receptive field is increased with a small compute footprint.\n In comparison, VGG-16 requires 27X more FLOPs than MobileNets, but produces\n a smaller receptive field size; even if much more complex, VGG’s accuracy\n is only slightly better than MobileNet’s.\n This suggests that networks which can efficiently generate large receptive\n fields may enjoy enhanced recognition performance.\n \n\n\n\n Let us emphasize, though, that the receptive field size is not the only factor contributing\n to the improved performance mentioned above. Other factors play a very important\n role: network depth (i.e., number of layers) and width (i.e., number of filters per layer),\n residual connections, batch normalization, to name only a few.\n In other words, while we conjecture that a large receptive field is necessary,\n by no means it is sufficient.\n \n Additional experimentation is needed to confirm this hypothesis: for\n example, researchers may experimentally investigate how classification\n accuracy changes as kernel sizes and strides vary for different\n architectures. 
This may indicate if, at least for those architectures, a\n large receptive field is necessary.\n \n\n\n\n\n Finally, note that a given feature is not equally impacted by all input pixels within\n its receptive field region: the input pixels near the center of the receptive field have more “paths” to influence\n the feature, and consequently carry more weight.\n The relative importance of each input pixel defines the\n *effective receptive field* of the feature.\n Recent work \n provides a mathematical formulation and a procedure to measure effective\n receptive fields, experimentally observing a Gaussian shape,\n with the peak at the receptive field center. Better understanding the\n relative importance of input pixels in convolutional neural networks is\n an active research topic.", "date_published": "2019-11-04T20:00:00Z", "authors": ["André Araujo", "Wade Norris"], "summaries": ["Detailed derivations and open-source code to analyze the receptive fields of convnets."], "doi": "10.23915/distill.00021", "journal_ref": "distill-pub", "bibliography": [{"link": "https://www.researchgate.net/profile/Aaron_Courville/publication/265022827_Visualizing_Higher-Layer_Features_of_a_Deep_Network/links/53ff82b00cf24c81027da530.pdf", "title": "Visualizing higher-layer features of a deep network"}, {"link": "http://arxiv.org/pdf/1311.2901.pdf", "title": "Visualizing and Understanding Convolutional Networks"}, {"link": "http://openaccess.thecvf.com/content_cvpr_2017/papers/Haeffele_Global_Optimality_in_CVPR_2017_paper.pdf", "title": "Global Optimality in Neural Network Training"}, {"link": "http://papers.nips.cc/paper/7567-on-the-global-convergence-of-gradient-descent-for-over-parameterized-models-using-optimal-transport.pdf", "title": "On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport"}, {"link": "http://arxiv.org/pdf/1412.6856.pdf", "title": "Object Detectors Emerge in Deep Scene CNNs"}, {"link": "http://arxiv.org/pdf/1711.05611.pdf", "title": "Interpreting Deep Visual Representations via Network Dissection"}, {"link": "http://arxiv.org/pdf/1412.6614.pdf", "title": "In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning"}, {"link": "http://arxiv.org/pdf/1611.03530.pdf", "title": "Understanding Deep Learning Requires Rethinking Generalization"}, {"link": "https://medium.com/mlreview/a-guide-to-receptive-field-arithmetic-for-convolutional-neural-networks-e0f514068807", "title": "A Guide to Receptive Field Arithmetic for Convolutional Neural Networks"}, {"link": "http://arxiv.org/pdf/1705.07049.pdf", "title": "What are the Receptive, Effective Receptive, and Projective Fields of Neurons in Convolutional Neural Networks?"}, {"link": "https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf", "title": "ImageNet Classification with Deep Convolutional Neural Networks"}, {"link": "http://arxiv.org/pdf/1409.1556.pdf", "title": "Very Deep Convolutional Networks for Large-Scale Image Recognition"}, {"link": "http://arxiv.org/pdf/1512.03385.pdf", "title": "Deep Residual Learning for Image Recognition"}, {"link": "http://arxiv.org/pdf/1602.07261.pdf", "title": "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning"}, {"link": "http://arxiv.org/pdf/1612.06321.pdf", "title": "Large-Scale Image Retrieval with Attentive Deep Local Features"}, {"link": "http://arxiv.org/pdf/1704.04861.pdf", "title": "MobileNets: Efficient Convolutional Neural Networks for 
Mobile Vision Applications"}, {"link": "http://arxiv.org/pdf/1701.04128.pdf", "title": "Understanding the Effective Receptive Field in Deep Convolutional Neural Networks"}]} {"id": "a2dde614cbb01653fc9593b4a1482825", "title": "The Paths Perspective on Value Learning", "url": "https://distill.pub/2019/paths-perspective-on-value-learning", "source": "distill", "source_type": "blog", "text": "Introduction\n------------\n\n\n\n In the last few years, reinforcement learning (RL) has made remarkable progress, including [beating world-champion Go players](https://deepmind.com/research/alphago/), [controlling robotic hands](https://blog.openai.com/learning-dexterity/), and even [painting pictures](https://deepmind.com/blog/learning-to-generate-images/).\n \n\n\n\n One of the key sub-problems of RL is value estimation – learning the long-term consequences of being in a state.\n\n This can be tricky because future returns are generally noisy, affected by many things other than the present state. The further we look into the future, the more this becomes true.\n\n But while difficult, estimating value is also essential to many approaches to RL.For many approaches (policy-value iteration), estimating value essentially is the whole problem, while in other approaches (actor-critic models), value estimation is essential for reducing noise.\n\n\n\n\n The natural way to estimate the value of a state is as the average return you observe from that state. We call this Monte Carlo value estimation.\n \n\n\n\n\n**Cliff World**\n is a classic RL example, where the agent learns to\n walk along a cliff to reach a goal.\n \n\n\n![](figures/cliffworld-path1.svg)\n\n Sometimes the agent reaches its goal.\n \n\n\n![](figures/cliffworld-path2.svg)\n\n Other times it falls off the cliff.\n \n\n\n![](figures/cliffworld-mc.svg)\n\n Monte Carlo averages over trajectories where they intersect.\n \n\n\n\n\n If a state is visited by only one episode, Monte Carlo says its value is the return of that episode. If multiple episodes visit a state, Monte Carlo estimates its value as the average over them.\n \n\n\n\n Let’s write Monte Carlo a bit more formally.\n In RL, we often describe algorithms with update rules, which tell us how estimates change with one more episode.\n We’ll use an “updates toward” (↩\\hookleftarrow↩) operator to keep equations simple.\n\n In tabular settings such as the Cliff World example, this “update towards” operator computes a running average. More specifically, the nthn^{th}nth Monte Carlo update is V(st)=V(st−1)+1n[Rn−V(st)] V(s\\_t) = V(s\\_{t-1}) + \\frac{1}{n} \\bigr[ R\\_{n} - V(s\\_t) \\bigl] V(st​)=V(st−1​)+n1​[Rn​−V(st​)] and we could just as easily use the “+=” notation. But when using parameteric function approximators such as neural networks, our “update towards” operator may represent a gradient step, which cannot be written in “+=” notation. In order to keep our notation clean and general, we chose to use the ↩\\hookleftarrow↩ operator throughout.\n\n\n\n\nV(st)  V(s\\_t)~~V(st​)  \n↩  \\hookleftarrow~~↩  \nRtR\\_tRt​\n State value \n Return \n\n\n The term on the right is called the return and we use it to measure the amount of long-term reward an agent earns. The return is just a weighted sum of future rewards rt+γrt+1+γ2rt+2+...r\\_{t} + \\gamma r\\_{t+1} + \\gamma^2 r\\_{t+2} + …rt​+γrt+1​+γ2rt+2​+... where γ\\gammaγ is a discount factor which controls how much short term rewards are worth relative to long-term rewards. Estimating value by updating towards return makes a lot of sense. 
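As a concrete illustration, here is the tabular Monte Carlo update in code. This is a minimal sketch, with the discount factor as an assumed hyperparameter:

```python
from collections import defaultdict

V = defaultdict(float)       # tabular value estimates, one per state
n_visits = defaultdict(int)  # visit counts, for the running average

def monte_carlo_update(episode, gamma=0.9):
    """Update every visited state toward the return observed from it.
    `episode` is a list of (state, reward) pairs; gamma is an assumed value."""
    G = 0.0
    for state, reward in reversed(episode):
        G = reward + gamma * G                        # return from this state on
        n_visits[state] += 1
        V[state] += (G - V[state]) / n_visits[state]  # running average
```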
After all, the *definition* of value is expected return. It might be surprising that we can do better.\n \n\n\nBeating Monte Carlo\n-------------------\n\n\n\n But we can do better! The trick is to use a method called *Temporal Difference (TD) learning*, which bootstraps off of nearby states to make value updates.\n \n\n\n\nV(st)  V(s\\_t)~~V(st​)  \n↩  \\hookleftarrow~~↩  \nrtr\\_{t} rt​\n+++\nγV(st+1)\\gamma V(s\\_{t+1})γV(st+1​)\n State value \nReward\nNext state value\n\n\n Intersections between two trajectories are handled differently under this update. Unlike Monte Carlo, TD updates merge intersections so that the return flows backwards to all preceding states.\n \n\n\n\n\n\n\n\n\n![](figures/cliffworld-path1.svg)\n\n Sometimes the agent reaches its goal.\n \n\n\n![](figures/cliffworld-path2.svg)\n\n Other times it falls off the cliff.\n \n\n\n![](figures/cliffworld-td.svg)\n\n TD learning merges paths where they intersect.\n \n\n\n\n\n What does it mean to “merge trajectories” in a more formal sense? Why might it be a good idea? One thing to notice is that V(st+1)V(s\\_{t+1})V(st+1​) can be written as the expectation over all of its TD updates:\n \n\n\n\nV(st+1)  V(s\\_{t+1})~~V(st+1​)  \n≃  \\simeq~~≃  \nE[rt+1′ + γV(st+2′)]\\mathop{\\mathbb{E}} \\bigr[ r’\\_{t+1} ~+~ \\gamma V(s’\\_{t+2}) \\bigl] E[rt+1′​ + γV(st+2′​)]\n≃  \\simeq~~≃  \nE[rt+1′]  +  γE[V(st+2′)]\\mathop{\\mathbb{E}} \\bigr[ r’\\_{t+1} \\bigl] ~~+~~ \\gamma \\mathop{\\mathbb{E}} \\bigr[ V(s’\\_{t+2}) \\bigl] E[rt+1′​]  +  γE[V(st+2′​)]\n\n\n Now we can use this equation to expand the TD update rule recursively:\n \n\n\n\nV(st) V(s\\_t)~V(st​) \n↩ \\hookleftarrow~↩ \nrtr\\_{t} rt​\n+++\nγV(st+1)\\gamma V(s\\_{t+1})γV(st+1​)\n\n\n↩ \\hookleftarrow~↩ \nrtr\\_{t} rt​\n+++\nγE[rt+1′]\\gamma \\mathop{\\mathbb{E}} \\bigr[ r’\\_{t+1} \\bigl]γE[rt+1′​]\n+++\nγ2E[V(st+2′′)]\\gamma^2 \\mathop{\\mathbb{E}} \\bigr[ V(s’’\\_{t+2}) \\bigl]γ2E[V(st+2′′​)]\n\n\n↩ \\hookleftarrow~↩ \nrtr\\_{t} rt​\n+++\nγE [rt+1′]\\gamma \\mathop{\\mathbb{E}} ~ \\bigr[ r’\\_{t+1} \\bigl]γE [rt+1′​]\n+++\nγ2EE [rt+2′′]\\gamma^2 \\mathop{\\mathbb{EE}} ~ \\bigr[ r’’\\_{t+2} \\bigl]γ2EE [rt+2′′​]\n+ +~+ \n...  …~~...  \n\n\n This gives us a strange-looking sum of nested expectation values. At first glance, it’s not clear how to compare them with the more simple-looking Monte Carlo update. More importantly, it’s not clear that we *should* compare the two; the updates are so different that it feels a bit like comparing apples to oranges. Indeed, it’s easy to think of Monte Carlo and TD learning as two entirely different approaches.\n \n\n\n\n But they are not so different after all. Let’s rewrite the Monte Carlo update in terms of reward and place it beside the expanded TD update.\n \n\n\n\n**MC update**\nV(st) V(s\\_t)~V(st​) \n ↩  ~\\hookleftarrow~~ ↩  \nrtr\\_{t}rt​\n+ +~+ \nγ rt+1\\gamma ~ r\\_{t+1}γ rt+1​\n+ +~+ \nγ2 rt+2\\gamma^2 ~ r\\_{t+2}γ2 rt+2​\n+ +~+ \n...…...\nReward from present path.\nReward from present path.\nReward from present path…\n**TD update**\nV(st) V(s\\_t)~V(st​) \n ↩  ~\\hookleftarrow~~ ↩  \nrtr\\_{t}rt​\n+ +~+ \nγE [rt+1′]\\gamma \\mathop{\\mathbb{E}} ~ \\bigr[ r’\\_{t+1} \\bigl]γE [rt+1′​]\n+ +~+ \nγ2EE [rt+2′′]\\gamma^2 \\mathop{\\mathbb{EE}} ~ \\bigr[ r’’\\_{t+2} \\bigl]γ2EE [rt+2′′​]\n+ +~+ \n...…...\nReward from present path.\nExpectation over paths intersecting present path.\nExpectation over paths intersecting *paths intersecting* present path…\n\n\n A pleasant correspondence has emerged. 
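For comparison, the tabular TD update behind this correspondence is just as compact. Again, this is a minimal sketch, with the step size and discount as assumed hyperparameters:

```python
def td_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    """Tabular TD(0): move V(state) toward reward + gamma * V(next_state).
    alpha (step size) and gamma (discount) are assumed hyperparameters."""
    target = reward + gamma * V[next_state]  # bootstrap off the next state
    V[state] += alpha * (target - V[state])
```

Only the target differs from the Monte Carlo sketch above: the full observed return is replaced by a one-step bootstrap off the next state's value.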
The difference between Monte Carlo and TD learning comes down to the nested expectation operators. It turns out that there is a nice visual interpretation for what they are doing. We call it the *paths perspective* on value learning.\n \n\n\nThe Paths Perspective\n---------------------\n\n\n\n We often think about an agent’s experience as a series of trajectories. The grouping is logical and easy to visualize.\n \n\n\n\n\n![](figures/cliffworld-path-1of4.svg)\n\n Trajectory 1\n \n\n\n![](figures/cliffworld-path-2of4.svg)\n\n Trajectory 2\n \n\n\n\n But this way of organizing experience de-emphasizes relationships *between* trajectories. Wherever two trajectories intersect, both outcomes are valid futures for the agent. So even if the agent has followed Trajectory 1 to the intersection, it could *in theory* follow Trajectory 2 from that point onward. We can dramatically expand the agent’s experience using these simulated trajectories or “paths.”\n \n\n\n\n #cliffworld-paths {\n display: grid;\n grid-template-columns: repeat(auto-fit, minmax(130px, 1fr));\n grid-gap: 20px;\n }\n \n\n\n![](figures/cliffworld-path-1of4.svg)\n\n Path 1\n \n\n\n![](figures/cliffworld-path-2of4.svg)\n\n Path 2\n \n\n\n![](figures/cliffworld-path-3of4.svg)\n\n Path 3\n \n\n\n![](figures/cliffworld-path-4of4.svg)\n\n Path 4\n \n\n\n\n**Estimating value.** It turns out that Monte Carlo is averaging over real trajectories whereas TD learning is averaging over all possible paths. The nested expectation values we saw earlier correspond to the agent averaging across *all possible future paths*.\n \n\n\n\n #compare-mctd {\n display: grid;\n grid-gap: 40px;\n grid-template-columns: repeat(auto-fit, minmax(320px, 1fr));\n }\n #compare-mctd .subfigure {\n display: grid;\n grid-gap: 20px;\n grid-template-columns: repeat(2, minmax(160px, 1fr));\n /\\* grid-auto-rows: min-content; \\*/\n /\\* grid-template-rows: min-content auto; \\*/\n }\n #compare-mctd .subfigure .column-heading {\n grid-column: 1 / -1;\n }\n\n #compare-mctd .figcaption {\n border-top: 1px solid rgba(0, 0, 0, 0.1);\n padding-top: 5px;\n margin-top: 5px;\n /\\* min-height: 179px; \\*/\n /\\* min-width: 179px; \\*/\n /\\* flex: 1; \\*/\n }\n\n\n \n\n\n#### Monte Carlo Estimation\n\n\n\n![](figures/traj-thumbnails.svg)\n\n Averages over **real trajectories**\n\n\n\n![](figures/cliffworld-mc.svg)\n\n Resulting MC estimate\n \n\n\n\n#### Temporal Difference Estimation\n\n\n\n![](figures/path-thumbnails.svg)\n\n Averages over **possible paths**\n\n\n\n![](figures/cliffworld-td.svg)\n\n Resulting TD estimate\n \n\n\n\n\n**Comparing the two.** Generally speaking, the best value estimate is the one with the lowest variance. Since tabular TD and Monte Carlo are empirical averages, the method that gives the better estimate is the one that averages over more items. This raises a natural question: Which estimator averages over more items?\n \n\n\n\nVar[V(s)]  Var[V(s)]~~Var[V(s)]  \n∝  \\propto~~∝  \n1N\\frac{1}{N} N1​\n Variance of estimate \nInverse of the number of items in the average\n\n\n First off, TD learning never averages over fewer trajectories than Monte Carlo because there are never fewer simulated trajectories than real trajectories. 
On the other hand, when there are *more* simulated trajectories, TD learning has the chance to average over more of the agent’s experience.\n\n This line of reasoning suggests that TD learning is the better estimator and helps explain why TD tends to outperform Monte Carlo in tabular environments.\n \n\n\n\nIntroducing Q-functions\n-----------------------\n\n\n\n An alternative to the value function is the Q-function. Instead of estimating the value of a state, it estimates the value of a state and an action. The most obvious reason to use Q-functions is that they allow us to compare different actions.\n \n\n\n\n #qlearning-intro {\n display: grid;\n grid-template-columns: repeat(auto-fit, minmax(130px, 1fr));\n grid-column-gap: 30px;\n }\n \n\n\n![](figures/policy.svg)\n\n Many times we’d like to compare the value of actions under a policy.\n \n\n\n![](figures/value.svg)\n\n It’s hard to do this with a value function.\n \n\n\n![](figures/qvalue.svg)\n\n It’s easier to use Q-functions, which estimate joint state-action values.\n \n\n\n\n There are some other nice properties of Q-functions. In order to see them, let’s write out the Monte Carlo and TD update rules.\n\n \n\n\n\n\n**Updating Q-functions.** The Monte Carlo update rule looks nearly identical to the one we wrote down for V(s)V(s)V(s):\n \n\n\n\nQ(st,at)  Q(s\\_t, a\\_t)~~Q(st​,at​)  \n↩  \\hookleftarrow~~↩  \nRtR\\_tRt​\n State-action value \n Return \n\n\n We still update towards the return. Instead of updating towards the return of being in some state, though, we update towards the return of being in some state *and* selecting some action.\n \n\n\n\n Now let’s try doing the same thing with the TD update:\n \n\n\n\nQ(st,at)  Q(s\\_t, a\\_t)~~Q(st​,at​)  \n↩  \\hookleftarrow~~↩  \nrtr\\_{t} rt​\n+++\nγQ(st+1,at+1)\\gamma Q(s\\_{t+1}, a\\_{t+1})γQ(st+1​,at+1​)\n State-action value \nReward\nNext state value\n\n\n\n This version of the TD update rule requires a tuple of the form (st,at,rt,st+1,at+1)(s\\_t, a\\_t, r\\_{t}, s\\_{t+1}, a\\_{t+1})(st​,at​,rt​,st+1​,at+1​), so we call it the *Sarsa* algorithm.\n Sarsa may be the simplest way to write this TD update, but it’s not the most efficient.\n The problem with Sarsa is that it uses Q(st+1,at+1)Q(s\\_{t+1},a\\_{t+1})Q(st+1​,at+1​) for the next state value when it really should be using V(st+1)V(s\\_{t+1})V(st+1​).\n\n\n What we need is a better estimate of V(st+1)V(s\\_{t+1})V(st+1​).\n \n\n\n\nQ(st,at)  Q(s\\_t, a\\_t)~~Q(st​,at​)  \n↩  \\hookleftarrow~~↩  \nrtr\\_{t} rt​\n+++\nγV(st+1)\\gamma V(s\\_{t+1})γV(st+1​)\n State-action value \nReward\nNext state value\nV(st+1)  V(s\\_{t+1})~~V(st+1​)  \n=  =~~=  \n?? ?\n\n\n There are many ways to recover V(st+1)V(s\\_{t+1})V(st+1​) from Q-functions. In the next section, we’ll take a close look at four of them.\n \n\n\nLearning Q-functions with reweighted paths\n------------------------------------------\n\n\n\n**Expected Sarsa.**\n A better way of estimating the next state’s value is with a weighted sumAlso written as an expectation value, hence “Expected Sarsa”. over its Q-values. 
We call this approach Expected Sarsa:\n \n\n\n\n\n\n**Sarsa** uses the Q-value associated with at+1a\\_{t+1}at+1​ to estimate the next state’s value.\n \n![](figures/sarsa.svg)\n\n**Expected Sarsa** uses an expectation over Q-values to estimate the next state’s value.\n \n![](figures/expected-sarsa.svg)\n\n\n\n Here’s a surprising fact about Expected Sarsa: the value estimate it gives is often *better* than a value estimate computed straight from the experience. This is because the expectation value weights the Q-values by the true policy distribution rather than the empirical policy distribution. In doing this, Expected Sarsa *corrects for the difference between the empirical policy distribution and the true policy distribution.*\n\n\n\n\n**Off-policy value learning.** We can push this idea even further. Instead of weighting Q-values by the true policy distribution, we can weight them by an arbitrary policy, πoff\\pi^{off}πoff:\n \n\n\n\n\n\n**Off-policy value learning** weights Q-values by an arbitrary policy.\n \n![](figures/off-policy.svg)\n\n\n\n This slight modification lets us estimate value under any policy we like. It’s interesting to think about Expected Sarsa as a special case of off-policy learning that’s used for on-policy estimation.\n \n\n\n\n**Re-weighting path intersections.** What does the paths perspective say about off-policy learning? To answer this question, let’s consider some state where multiple paths of experience intersect.\n \n\n\n\n #reweighting {\n display: grid;\n grid-template-columns: repeat(auto-fit, minmax(160px, 1fr));\n grid-gap: 30px;\n }\n /\\* #reweighting .wrapper {\n min-width: 160px;\n } \\*/\n #reweighting-full .cls-1 {\n fill: #2d307b;\n }\n\n #reweighting-full .cls-2 {\n fill: #e7ebe8;\n }\n\n #reweighting-full .cls-3 {\n fill: #cac9cc;\n }\n\n #reweighting-full .cls-4 {\n fill: #bd5f35;\n }\n\n #reweighting-full .cls-5, .cls-6, .cls-7 {\n fill: none;\n stroke-width: 10px;\n }\n\n #reweighting-full .cls-5 {\n stroke: #bd5f35;\n }\n\n #reweighting-full .cls-5, .cls-7 {\n stroke-miterlimit: 10;\n }\n\n #reweighting-full .cls-6 {\n stroke: #2d307b;\n stroke-linecap: round;\n stroke-linejoin: round;\n }\n\n #reweighting-full .cls-7 {\n stroke: #8191c9;\n }\n\n #reweighting-full .cls-8 {\n fill: #8191c9;\n }\n\n #reweighting-full .cls-9 {\n font-size: 24.25341px;\n fill: #d1d3d4;\n font-family: Arial-BoldMT, Arial;\n font-weight: 700;\n }\n \n\n\n![](figures/reweighting-1.svg)\n\n Multiple paths of experience intersect at this state.\n \n\n\n![](figures/reweighting-2.svg)\n\n Paths that exit the state through different actions get associated with different Q-values.\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n+2\n-1\n\n\n\nWeight of upward path: \n\n Whenever we re-weight Q-values, we also re-weight the paths that pass through them.\n \n\n\n\n\n Wherever intersecting paths are re-weighted, the paths that are most representative of the off-policy distribution end up making larger contributions to the value estimate. Meanwhile, paths that have low probability make smaller contributions.\n \n\n\n\n**Q-learning.** There are many cases where an agent needs to collect experience under a sub-optimal policy (e.g. to improve exploration) while estimating value under an optimal one. 
In these cases, we use a version of off-policy learning called Q-learning.\n \n\n\n\n\n\n\n\n**Q-learning** estimates value under the optimal policy by choosing the max Q-value.\n \n![](figures/q-learning.svg)\n\n\n\n Q-learning prunes away all but the highest-valued paths. The paths that remain are the paths that the agent will follow at test time; they are the only ones it needs to pay attention to. This sort of value learning often leads to faster convergence than on-policy methodsTry using the Playground at the end of this article to compare between approaches..\n \n\n\n\n**Double Q-Learning.** The problem with Q-learning is that it gives biased value estimates. More specifically, it is over-optimistic in the presence of noisy rewards. Here’s an example where Q-learning fails:\n \n\n\n\n*You go to a casino and play a hundred slot machines. It’s your lucky day: you hit the jackpot on machine 43. Now, if you use Q-learning to estimate the value of being in the casino, you will choose the best outcome over the actions of playing slot machines. You’ll end up thinking that the value of the casino is the value of the jackpot…and decide that the casino is a great place to be!*\n\n\n\n\n Sometimes the largest Q-value of a state is large *just by chance*; choosing it over others makes the value estimate biased.\n One way to reduce this bias is to have a friend visit the casino and play the same set of slot machines. Then, ask them what their winnings were at machine 43 and use their response as your value estimate. It’s not likely that you both won the jackpot on the same machine, so this time you won’t end up with an over-optimistic estimate. We call this approach *Double Q-learning*.\n\n\n\n\n**Putting it together.** It’s easy to think of Sarsa, Expected Sarsa, Q-learning, and Double Q-learning as different algorithms. But as we’ve seen, they are simply different ways of estimating V(st+1)V(s\\_{t+1})V(st+1​) in a TD update.\n \n\n\n\n\n#### On-policy methods\n\n\n\n\n**Sarsa** uses the Q-value associated with at+1a\\_{t+1}at+1​ to estimate the next state’s value.\n \n![](figures/sarsa.svg)\n\n**Expected Sarsa** uses an expectation over Q-values to estimate the next state’s value.\n \n![](figures/expected-sarsa.svg)\n#### Off-policy methods\n\n\n\n\n**Off-policy value learning** weights Q-values by an arbitrary policy.\n \n![](figures/off-policy.svg)\n\n**Q-learning** estimates value under the optimal policy by choosing the max Q-value.\n \n![](figures/q-learning.svg)\n\n**Double Q-learning** selects the best action with QAQ\\_AQA​ and then estimates the value of that action with\n QBQ\\_BQB​.\n \n![](figures/double-q-learning.svg)\n\n\n\n The intuition behind all of these approaches is that they re-weight path intersections.\n \n\n\n\n**Re-weighting paths with Monte Carlo.** At this point, a natural question is: Could we accomplish the same re-weighting effect with Monte Carlo? We could, but it would be messier and involve re-weighting all of the agent’s experience. By working at intersections, TD learning re-weights individual transitions instead of episodes as a whole. This makes TD methods much more convenient for off-policy learning.\n \n\n\nMerging Paths with Function Approximators\n-----------------------------------------\n\n\n\n Up until now, we’ve learned one parameter — the value estimate — for every state or every state-action pair. This works well for the Cliff World example because it has a small number of states. 
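Each of the estimators above is just a different way of filling in the next state's value inside the same TD update. The sketch below is illustrative only; the policy table `pi`, the Q-tables and the function name are our own choices:

```python
import numpy as np

def next_state_value(Q_A, s_next, pi, method, Q_B=None, rng=np.random):
    """Different estimates of V(s_{t+1}) used inside the same TD update.
    Q_A[s] is an array of Q-values for state s; pi[s] is a distribution
    over actions. Names and signature are illustrative only."""
    if method == "sarsa":
        # The Q-value of the action taken next (sampled here for illustration).
        a = rng.choice(len(Q_A[s_next]), p=pi[s_next])
        return Q_A[s_next][a]
    if method == "expected_sarsa":
        # Expectation of the Q-values under the policy.
        return np.dot(pi[s_next], Q_A[s_next])
    if method == "q_learning":
        # Value under the optimal policy: the largest Q-value.
        return np.max(Q_A[s_next])
    if method == "double_q":
        # Select the action with Q_A, evaluate it with Q_B.
        return Q_B[s_next][np.argmax(Q_A[s_next])]
    raise ValueError(method)
```

Off-policy value learning corresponds to the `expected_sarsa` branch with an arbitrary weighting distribution in place of the agent's own policy.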
But most interesting RL problems have a large or infinite number of states. This makes it hard to store value estimates for each state.\n \n\n\n\n\n #figure-fnapprox-intro {\n display: grid;\n grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));\n grid-column-gap: 20px;\n grid-row-gap: 10px;\n /\\* grid-auto-flow: column; \\*/\n }\n \n\n\n![](figures/large-cliffworld-states.svg)\n\n**Large or infinite state spaces** are a characteristic of many interesting RL problems. Value estimation\n in these spaces often requires function approximation.\n \n\n\n![](figures/large-cliffworld-path.svg)\n\n**Tabular value functions** keep value estimates for each individual state. They consume a great deal of\n memory and don’t generalize.\n \n\n\n![](figures/large-cliffworld-approx.svg)\n\n**Euclidean averagers** – which are a type of function approximator – save memory and let agents\n generalize to states they haven’t visited yet.\n \n\n\n\n Instead, we must force our value estimator to have fewer parameters than there are states. We can do this with machine learning methods such as linear regression, decision trees, or neural networks. All of these methods fall under the umbrella of function approximation.\n \n\n\n\n\n**Merging nearby paths.** From the paths perspective, we can interpret function approximation as a way of merging nearby paths. But what do we mean by “nearby”? In the figure above, we made an implicit decision to measure “nearby” with Euclidean distance. This was a good idea because the Euclidean distance between two states is highly correlated with the probability that the agent will transition between them.\n \n\n\n\n However, it’s easy to imagine cases where this implicit assumption breaks down. By adding a single long barrier, we can construct a case where the Euclidean distance metric leads to bad generalization. The problem is that we have merged the wrong paths.\n \n\n\n\n #fnapprox-barrier .wrapper {\n margin-left: auto;\n margin-right: auto;\n display: grid;\n grid-template-columns: 1fr 1.464fr;\n grid-template-rows: auto auto;\n grid-column-gap: 20px;\n grid-row-gap: 10px;\n grid-auto-flow: column;\n }\n \n\n\n![](figures/large-cliffworld-barrier-intro.svg)\n\n Imagine changing the Cliff World setup by adding a long barrier.\n \n![](figures/large-cliffworld-barrier.svg)\n\n Now, using the Euclidean averager leads to bad value updates.\n \n\n\n\n**Merging the wrong paths.** The diagram below shows the effects of merging the wrong paths a bit more explicitly. Since the Euclidean averager is to blame for poor generalization, both Monte Carlo and TD make bad value updates. However, TD learning amplifies these errors dramatically whereas Monte Carlo does not.\n \n\n\n\n![](figures/compare-function-approx.svg)\n\n\n We’ve seen that TD learning makes more efficient value updates. The price we pay is that these updates end up being much more sensitive to bad generalization.\n \n\n\nImplications for deep reinforcement learning\n--------------------------------------------\n\n\n\n**Neural networks.** Deep neural networks are perhaps the most popular function approximators for reinforcement learning. These models are exciting for many reasons, but one particularly nice property is that they don’t make implicit assumptions about which states are “nearby.”\n \n\n\n\n Early in training, neural networks, like averagers, tend to merge the wrong paths of experience. 
In the Cliff Walking example, an untrained neural network might make the same bad value updates as the Euclidean averager.\n \n\n\n\n But as training progresses, neural networks can actually learn to overcome these errors. They learn which states are “nearby” from experience. In the Cliff World example, we might expect a fully-trained neural network to have learned that value updates to states *above* the barrier should never affect the values of states *below* the barrier. This isn’t something that most other function approximators can do. It’s one of the reasons deep RL is so interesting!\n \n\n\n\n![](figures/latent-distance.png)\n\n A distance metric learned by a neural network . **Lighter blue →\\rightarrow→ more distant**. The agent, which was trained to grasp objects using the robotic arm, takes into account obstacles and arm length when it measures the distance between two states.\n \n\n\n**TD or not TD?** So far, we’ve seen how TD learning can outperform Monte Carlo by merging paths of experience where they intersect. We’ve also seen that merging paths is a double-edged sword: when function approximation causes bad value updates, TD can end up doing worse.\n \n\n\n\n\n Over the last few decades, most work in RL has preferred TD learning to Monte Carlo. Indeed, many approaches to RL use TD-style value updates. With that being said, there are many other ways to use Monte Carlo for reinforcement learning. Our discussion centers around Monte Carlo for value estimation in this article, but it can also be used for policy selection as in Silver et al.\n\n\n\n\n\n Since Monte Carlo and TD learning both have desirable properties, why not try building a value estimator that is a mixture of the two? That’s the reasoning behind TD(λ\\lambdaλ) learning. It’s a technique that simply interpolates (using the coefficient λ\\lambdaλ) between Monte Carlo and TD updatesIn the limit λ=0\\lambda=0λ=0, we recover the TD update rule. Meanwhile, when λ=1\\lambda=1λ=1, we recover Monte Carlo.. Often, TD(λ\\lambdaλ) works better than either Monte Carlo or TD learning aloneResearchers often keep the λ\\lambdaλ coefficient constant as they train a deep RL model. However, if we think Monte Carlo learning is best early in training (before the agent has learned a good state representation) and TD learning is best later on (when it’s easier to benefit from merging paths), maybe the best approach is to anneal λ\\lambdaλ over the course of training..\n \n\n\nConclusion\n----------\n\n\n\n In this article we introduced a new way to think about TD learning. 
It helps us see why TD learning can be beneficial, why it can be effective for off-policy learning, and why there can be challenges in combining TD learning with function approximators.\n \n\n\n\n We encourage you to use the playground below to build on these intuitions, or to try an experiment of your own.\n \n\n\n#### Gridworld playground", "date_published": "2019-09-30T20:00:00Z", "authors": ["Sam Greydanus", "Chris Olah"], "summaries": ["A closer look at how Temporal Difference Learning merges paths of experience for greater statistical efficiency"], "doi": "10.23915/distill.00020", "journal_ref": "distill-pub", "bibliography": [{"link": "http://incompleteideas.net/book/the-book-2nd.html", "title": "Reinforcement Learning: An Introduction"}, {"link": "https://papers.nips.cc/paper/3964-double-q-learning.pdf", "title": "Double Q-learning"}, {"link": "http://arxiv.org/pdf/1804.00645.pdf", "title": "Universal Planning Networks"}]} {"id": "4dab2fb3c6b0867cd11d51b1de9f1ecf", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features'", "url": "https://distill.pub/2019/advex-bugs-discussion", "source": "distill", "source_type": "blog", "text": "On May 6th, Andrew Ilyas and colleagues [published a paper](http://gradientscience.org/adv/)\n outlining two sets of experiments.\n Firstly, they showed that models trained on adversarial examples can transfer to real data,\n and secondly that models trained on a dataset derived from the representations of robust neural networks\n seem to inherit non-trivial robustness.\n They proposed an intriguing interpretation for their results:\n adversarial examples are due to “non-robust features” which are highly predictive but imperceptible to\n humans.\n \n\n\n\n The paper was received with intense interest and discussion\n on social media, mailing lists, and reading groups around the world.\n How should we interpret these experiments?\n Would they replicate?\n Adversarial example research is particularly vulnerable to a certain kind of non-replication among\n disciplines of machine learning,\n because it requires researchers to play both attack and defense.\n It’s easy for even very rigorous researchers to accidentally use a weak attack.\n However, as we’ll see, Ilyas et al’s results have held up to initial scrutiny.\n \n And if non-robust features exist… what are they?\n \n\n\n\n To explore these questions, Distill decided to run an experimental “discussion article.”\n Running a discussion article is something Distill has wanted to try for several years.\n It was originally suggested to us by Ferenc Huszár, who writes many lovely discussions of papers on [his blog](https://www.inference.vc/).\n \n \n\n Why not just have everyone write private blog posts like Ferenc?\n Distill hopes that providing a more organized forum for many people to participate\n can give more researchers license to invest energy in discussing other’s work\n and make sure there’s an opportunity for all parties to comment and respond before the final version is\n published.\n \n We invited a number of researchers\n to write comments on the paper and organized discussion and responses from the original authors.\n \n\n\n\n The Machine Learning community\n [sometimes](https://www.machinelearningdebates.com/program)\n[worries](https://medium.com/syncedreview/cvpr-paper-controversy-ml-community-reviews-peer-review-79bf49eb0547)\n that peer review isn’t thorough enough.\n In contrast to this, we were struck by how deeply respondents engaged.\n Some respondents literally 
invested weeks in replicating results, running new experiments, and thinking\n deeply about the original paper.\n We also saw respondents update their views on non-robust features as they ran experiments — sometimes back\n and forth!\n The original authors similarly deeply engaged in discussing their results, clarifying misunderstandings, and\n even running new experiments in response to comments.\n \n\n\n\n We think this deep engagement and discussion is really exciting, and hope to experiment with more such\n discussion articles in the future.\n \n\n\nDiscussion Themes\n-----------------\n\n\n\n**Clarifications**:\n Discussion between the respondents and original authors was able\n to surface several misunderstandings or opportunities to sharpen claims.\n The original authors summarize this in their rebuttal.\n \n\n\n\n**Successful Replication**:\n Respondents successfully reproduced many of the experiments in Ilyas et al and had no unsuccessful replication attempts.\n This was significantly facilitated by the release of code, models, and datasets by the original authors.\n Gabriel Goh and Preetum Nakkiran both independently reimplemented and replicated\n the non-robust dataset experiments.\n Preetum reproduced the D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ non-robust dataset experiment as described in the\n paper, for L∞L\\_\\inftyL∞​ and L2L\\_2L2​ attacks.\n \n\n Gabriel repproduced both D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ and D^rand\\widehat{\\mathcal{D}}\\_{rand}D\nrand​ for L2L\\_2L2​\n attacks.\n \n Preetum also replicated part of the robust dataset experiment by\n training models on the provided robust dataset and finding that they seemed non-trivially robust.\n It seems epistemically notable that both Preetum and Gabriel were initially skeptical.\n Preetum emphasizes that he found it easy to make the phenomenon work and that it was robust to many variants\n and hyperparameters he tried.\n \n\n\n\n**Exploring the Boundaries of Non-Robust Transfer**:\n Three of the comments focused on variants of the “non-robust dataset” experiment,\n where training on adversarial examples transfers to real data.\n When, how, and why does it happen?\n Gabriel Goh explores an alternative mechanism for the results,\n Preetum Nakkiran shows a special construction where it doesn’t happen,\n and Eric Wallace shows that transfer can happen for other kinds of incorrectly labeled data.\n \n\n\n\n**Properties of Robust and Non-Robust Features**:\n The other three comments focused on the properties of robust and non-robust models.\n Gabriel Goh explores what non-robust features might look like in the case of linear models,\n while Dan Hendrycks and Justin Gilmer discuss how the results relate to the broader problem of robustness to\n distribution shift,\n and Reiichiro Nakano explores the qualitative differences of robust models in the context of style transfer.\n \n\n\nComments\n--------\n\n\n\n Distill collected six comments on the original paper.\n They are presented in alphabetical order by the author’s last name,\n with brief summaries of each comment and the corresponding response from the original authors.\n \n\n\n\n\n\n\n### \n[Adversarial Example Researchers Need to Expand What is Meant by\n “Robustness”](response-1/)\n\n\n\n### Authors\n\n\n### Affiliations\n\n\n\n[Justin Gilmer](https://www.linkedin.com/in/jmgilmer)\n\n\n\n\n[Google Brain Team](https://g.co/brain)\n\n\n\n\n[Dan Hendrycks](https://people.eecs.berkeley.edu/~hendrycks/)\n\n\n\n\n[UC 
Berkeley](https://www.berkeley.edu/)\n\n\n\n\n\n\n Justin and Dan discuss “non-robust features” as a special case\n of models being non-robust because they latch on to superficial correlations,\n a view often found in the distributional robustness literature.\n As an example, they discuss recent analysis of how neural networks behave in frequency space.\n They emphasize we should think about a broader notion of robustness.\n [Read Full Article](response-1/) \n\n\n\n\n#### Comment from original authors:\n\n\n\n The demonstration of models that learn from only high-frequency components of the data is\n an interesting finding that provides us with another way our models can learn from data that\n appears “meaningless” to humans.\n The authors fully agree that studying a wider notion of robustness will become increasingly\n important in ML, and will help us get a better grasp of features we actually want our models\n to rely on.\n \n\n\n\n\n\n\n\n### \n[Robust Feature Leakage](response-2/)\n\n\n\n### Authors\n\n\n### Affiliations\n\n\n\n[Gabriel Goh](https://gabgoh.github.io)\n\n\n\n\n[OpenAI](https://openai.com)\n\n\n\n\n\n\n Gabriel explores an alternative mechanism that could contribute to the non-robust transfer\n results.\n He establishes a lower-bound showing that this mechanism contributes a little bit to the\n D^rand\\widehat{\\mathcal{D}}\\_{rand}D\nrand​ experiment,\n but finds no evidence for it effecting the D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ experiment.\n [Read Full Article](response-2/) \n\n\n\n\n#### Comment from original authors:\n\n\n\n This is a nice in-depth investigation that highlights (and neatly visualizes) one of the\n motivations for designing the D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ dataset.\n \n\n\n\n\n\n\n\n### \n[Two Examples of Useful, Non-Robust Features](response-3/)\n\n\n\n### Authors\n\n\n### Affiliations\n\n\n\n[Gabriel Goh](https://gabgoh.github.io)\n\n\n\n\n[OpenAI](https://openai.com)\n\n\n\n\n\n\n Gabriel explores what non-robust useful features might look like in the linear case.\n He provides two constructions:\n “contaminated” features which are only non-robust due to a non-useful feature being mixed in,\n and “ensembles” that could be candidates for true useful non-robust features.\n [Read Full Article](response-3/) \n\n\n\n\n#### Comment from original authors:\n\n\n\n These experiments with linear models are a great first step towards visualizing non-robust\n features for real datasets (and thus a neat corroboration of their existence).\n Furthermore, the theoretical construction of “contaminated” non-robust features opens an\n interesting direction of developing a more fine-grained definition of features.\n \n\n\n\n\n\n\n\n### \n[Adversarially Robust Neural Style Transfer](response-4/)\n\n\n\n### Authors\n\n\n### \n\n\n\n[Reiichiro Nakano](https://reiinakano.com/)\n\n\n\n\n\n\n Reiichiro shows that adversarial robustness makes neural style transfer\n work by default on a non-VGG architecture.\n He finds that matching robust features makes style transfer’s outputs look perceptually better\n to humans.\n [Read Full Article](response-4/) \n\n\n\n\n#### Comment from original authors:\n\n\n\n Very interesting results that highlight the potential role of non-robust features and the\n utility of robust models for downstream tasks. 
We’re excited to see what kind of impact robustly\n trained models will have in neural network art!\n Inspired by these findings, we also take a deeper dive into (non-robust) VGG, and find some\n interesting links between robustness and style transfer.\n \n\n\n\n\n\n\n\n### \n[Adversarial Examples are Just Bugs, Too](response-5/)\n\n\n\n### Authors\n\n\n### Affiliations\n\n\n\n[Preetum Nakkiran](https://preetum.nakkiran.org/)\n\n\n\n\n[OpenAI](https://openai.com) &\n [Harvard University](https://www.harvard.edu/)\n\n\n\n\n\n\n Preetum constructs a family of adversarial examples with no transfer to real data,\n suggesting that some adversarial examples are “bugs” in the original paper’s framing.\n Preetum also demonstrates that adversarial examples can arise even if the underlying distribution\n has no “non-robust features”.\n [Read Full Article](response-5/) \n\n\n\n\n#### Comment from original authors:\n\n\n\n A fine-grained look at adversarial examples that neatly our thesis (i.e. that non-robust\n features exist and adversarial examples arise from them, see Takeaway #1) while providing an\n example of adversarial examples that arise from “bugs”.\n The fact that the constructed “bugs”-based adversarial examples don’t transfer constitutes\n another evidence for the link between transferability and (non-robust) features.\n \n\n\n\n\n\n\n\n### \n[Learning from Incorrectly Labeled Data](response-6/)\n\n\n\n### Authors\n\n\n### Affiliations\n\n\n\n[Eric Wallace](https://www.ericswallace.com/)\n\n\n\n\n[Allen Institute for AI](https://allenai.org/)\n\n\n\n\n\n\n Eric shows that training on a model’s training errors,\n or on how it predicts examples form an unrelated dataset,\n can both transfer to the true test set.\n These experiments are analogous to the original paper’s non-robust transfer results — all three results are examples of a kind of “learning from incorrectly labeled data.”\n [Read Full Article](response-6/) \n\n\n\n\n#### Comment from original authors:\n\n\n\n These experiments are a creative demonstration of the fact that the underlying phenomenon of\n learning features from “human-meaningless” data can actually arise in a broad range of\n settings.\n \n\n\n\n\n\nOriginal Author Discussion and Responses\n----------------------------------------\n\n\n\n\n\n\n### \n[Discussion and Author Responses](original-authors/)\n\n\n\n### Authors\n\n\n### Affiliations\n\n\n\n[Logan Engstrom](http://loganengstrom.com/),\n [Andrew Ilyas](http://andrewilyas.com/),\n [Aleksander Madry](https://people.csail.mit.edu/madry/),\n [Shibani Santurkar](http://people.csail.mit.edu/shibani/),\n Brandon Tran,\n [Dimitris Tsipras](http://people.csail.mit.edu/tsipras/)\n\n\n\n\nMIT\n\n\n\n\n\n\n The original authors describe their takeaways and some clarifcations that resulted from the\n conversation.\n This article also contains their responses to each comment.\n [Read Full Article](original-authors/)", "date_published": "2019-08-06T20:00:00Z", "authors": ["Logan Engstrom", "Justin Gilmer", "Gabriel Goh", "Dan Hendrycks", "Andrew Ilyas", "Aleksander Madry", "Reiichiro Nakano", "Shibani Santurkar", "Dimitris Tsipras", "Eric Wallace", "Justin Gilmer", "Dan Hendrycks", "Gabriel Goh", "Gabriel Goh", "Reiichiro Nakano", "Preetum Nakkiran", "Eric Wallace", "Logan Engstrom", "Andrew Ilyas", "Aleksander Madry", "Shibani Santurkar", "Dimitris Tsipras"], "summaries": ["Six comments from the community and responses from the original authors"], "doi": "10.23915/distill.00019", "journal_ref": "distill-pub", 
"bibliography": [{"link": "https://arxiv.org/pdf/1905.02175.pdf", "title": "Adversarial examples are not bugs, they are features"}]} {"id": "9af84a92d724245f276f38cfe61d7b12", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarial Example Researchers Need to Expand What is Meant by 'Robustness'", "url": "https://distill.pub/2019/advex-bugs-discussion/response-1", "source": "distill", "source_type": "blog", "text": "#rebuttal,\n .comment-info {\n background-color: hsl(54, 78%, 96%);\n border-left: solid hsl(54, 33%, 67%) 1px;\n padding: 1em;\n color: hsla(0, 0%, 0%, 0.67);\n }\n\n #header-info {\n margin-top: 0;\n margin-bottom: 1.5rem;\n display: grid;\n grid-template-columns: 65px max-content 1fr;\n grid-template-areas:\n \"icon explanation explanation\"\n \"icon back comment\";\n grid-column-gap: 1.5em;\n }\n\n #header-info .icon-multiple-pages {\n grid-area: icon;\n padding: 0.5em;\n content: url(images/multiple-pages.svg);\n }\n\n #header-info .explanation {\n grid-area: explanation;\n font-size: 85%;\n }\n\n #header-info .back {\n grid-area: back;\n }\n\n #header-info .back::before {\n\n content: \"←\";\n margin-right: 0.5em;\n }\n\n #header-info .comment {\n grid-area: comment;\n scroll-behavior: smooth;\n }\n\n #header-info .comment::before {\n content: \"↓\";\n margin-right: 0.5em;\n }\n\n #header-info a.back,\n #header-info a.comment {\n font-size: 80%;\n font-weight: 600;\n border-bottom: none;\n text-transform: uppercase;\n color: #2e6db7;\n display: block;\n margin-top: 0.25em;\n letter-spacing: 0.25px;\n }\n\n\n\n\n This article is part of a discussion of the Ilyas et al. paper\n *“Adversarial examples are not bugs, they are features”.*\n You can learn more in the\n [main discussion article](/2019/advex-bugs-discussion/) .\n \n\n\n[Other Comments](/2019/advex-bugs-discussion/#commentaries)\n[Comment by Ilyas et al.](#rebuttal)\n\n\n The hypothesis in Ilyas et. al. is a special case of a more general principle that is well accepted in the\n distributional robustness literature — models lack robustness to distribution shift because they latch onto\n superficial correlations in the data. Naturally, the same principle also explains adversarial examples\n because they arise from a worst-case analysis of distribution shift. To obtain a more complete understanding\n of robustness, adversarial example researchers should connect their work to the more general problem of\n distributional robustness rather than remaining solely fixated on small gradient perturbations.\n \n\n\nDetailed Response\n-----------------\n\n\n\n The main hypothesis in Ilyas et al. (2019) happens to be a special case of a more general principle that is\n commonly accepted in the robustness to distributional shift literature \n: a model’s lack of\n robustness is largely because the model latches onto superficial statistics in the data. In the image\n domain, these statistics may be unused by — and unintuitive to — humans, yet they may be useful for\n generalization in i.i.d. settings. Separate experiments eschewing gradient perturbations and studying\n robustness beyond adversarial perturbations show similar results. For example, a recent work \n demonstrates that models can generalize to the test examples by learning from high-frequency information\n that is both naturally occurring and also inconspicuous. Concretely, models were trained and tested with an\n extreme high-pass filter applied to the data. 
The resulting high-frequency features appear completely\n grayscale to humans, yet models are able to achieve 50% top-1 accuracy on ImageNet-1K solely from these\n natural features that usually are “invisible.” These hard-to-notice features can be made conspicuous by\n normalizing the filtered image to have unit variance pixel statistics in the figure below.\n \n\n\n\n![](images/figure-1-cropped.png)\n\n[1](#figure-1)\n Models can achieve high accuracy using information from the input that would be unrecognizable\n to humans. Shown above are models trained and tested with aggressive high and low pass filtering applied\n to the inputs. With aggressive low-pass filtering, the model is still above 30% on ImageNet when the\n images appear to be simple globs of color. In the case of high-pass (HP) filtering, models can achieve\n above 50% accuracy using features in the input that are nearly invisible to humans. As shown on the\n right hand side, the high pass filtered images needed be normalized in order to properly visualize the\n high frequency features.\n \n\n\n Given the plethora of useful correlations that exist in natural data, we should expect that our models will\n learn to exploit them. However, models relying on superficial statistics can poorly generalize should these\n same statistics become corrupted after deployment. To obtain a more complete understanding of model\n robustness, measured test error after perturbing every image in the test set by a\n Fourier basis vector,\n as shown in Figure 2. The naturally trained model is robust to low-frequency perturbations, but,\n interestingly, lacks robustness in the mid to high frequencies. In contrast, adversarial training improves\n robustness to mid- and high-frequency perturbations, while sacrificing performance on low frequency\n perturbations. For instance adversarial training degrades performance on the low-frequency fog corruption\n from 85.7% to 55.3%. Adversarial training similarly degrades robustness to\n contrast and low-pass\n filtered noise. By taking a broader view of robustness beyond tiny ℓp\\ell\\_pℓp​ norm perturbations, we discover\n that adversarially trained models are actually not “robust.” They are instead biased towards different kinds\n of superficial statistics. As a result, adversarial training can sacrifice robustness in real-world\n settings.\n\n \n\n\n\n![](images/figure-2-cropped.png)\n\n[2](#figure-2)\n Model sensitivity to additive noise aligned with different Fourier basis vectors on CIFAR-10.\n We fix the additive noise to have ℓ2\\ell\\_2ℓ2​ norm 4 and evaluate three models: a naturally trained model,\n an\n adversarially trained model, and a model trained with Gaussian data augmentation. Error rates are\n averaged over 1000 randomly sampled images from the test set. In the bottom row we show images perturbed\n with noise along the corresponding Fourier basis vector. The naturally trained model is highly sensitive\n to additive noise in all but the lowest frequencies. Both adversarial training and Gaussian data\n augmentation dramatically improve robustness in the higher frequencies while sacrificing the robustness\n of the naturally trained model in the lowest frequencies (i.e. in both models, blue area in the middle\n is smaller compared to that of the naturally trained model).\n \n\n\n How, then, can the research community create models that robustly generalize in the real world, given that\n adversarial training can harm robustness to distributional shift? 
To do so, the research community must take\n a broader view of robustness and accept that ℓp\\ell\\_pℓp​ adversarial robustness is highly limited and mostly\n detached from security and real-world robustness . While often thought an\n idiosyncratic quirk of deep\n neural network classifiers, adversarial examples are not a counterintuitive mystery plaguing otherwise\n superhuman classifiers. Instead, adversarial examples are in fact expected of models which lack robustness\n to noise . They should not be surprising given the brittleness observed in\n numerous synthetic — and even\n natural  — conditions. Models reliably exhibit poor performance when they are\n evaluated on distributions\n slightly different from the training distribution. For all that, current benchmarks do not expose these\n failure modes. The upshot is that we need to design harder and more diverse test sets, and we should not\n continue to be singularly fixated on studying specific gradient perturbations. As we move forward in\n robustness research, we should focus on the various ways in which models are fragile, and design more\n comprehensive benchmarks accordingly . As long as models lack\n robustness to\n distributional shift, there will always be errors to find adversarially.\n\n \n\n\n\n To cite Ilyas et al.’s response, please cite their\n [collection of responses](/2019/advex-bugs-discussion/original-authors/#citation).\n\n\n**Response Summary**: The demonstration of models that learn from\n high-frequency components of the data is interesting and nicely aligns with our\n findings. Now, even though susceptibility to noise could indeed arise from\n non-robust useful features, this kind of brittleness (akin to adversarial examples)\n of ML models has been so far predominantly viewed as a consequence of model\n “bugs” that will be eliminated by “better” models. Finally, we agree that our\n models need to be robust to a much broader set of perturbations — expanding the\n set of relevant perturbations will help identify even more non-robust features\n and further distill the useful features we actually want our models to rely on.\n \n\n\n**Response**: The fact that models can learn to classify correctly based\n purely on the high-frequency component of the training set is neat! This nicely\n complements one of our [takeaways](/2019/advex-bugs-responses/rebuttal/#takeaway1): models\n will rely on useful features even if these features appear incomprehensible to humans.\n\n\n Also, while non-robustness to noise can be an indicator of models using\n non-robust useful features, this is not how the phenomenon was predominantly viewed.\n More often than not, the brittleness of ML models to noise was instead regarded\n as an innate shortcoming of the models, e.g., due to poor margins. (This view is\n even more prevalent in the adversarial robustness community.) Thus, it was often\n expected that progress towards “better”/”bug-free” models will lead to them\n being more robust to noise and adversarial examples.\n\n\n Finally, we fully agree that the set of LpL\\_pLp​-bounded perturbations is a very\n small subset of the perturbations we want our models to be robust to. Note,\n however, that the focus of our work is human-alignment — to that end, we\n demonstrate that models rely on features sensitive to patterns that are\n imperceptible to humans. 
Thus, the existence of other families of\n incomprehensible but useful features would provide even more support for our\n thesis — identifying and characterizing such features is an interesting area for\n future research.\n\n\n\n\n You can find more responses in the [main discussion article](/2019/advex-bugs-discussion/).", "date_published": "2019-08-06T20:00:00Z", "authors": ["Justin Gilmer", "Dan Hendrycks"], "summaries": ["The main hypothesis in Ilyas et al. (2019) happens to be a special case of a more general principle that is commonly accepted in the robustness to distributional shift literature"], "doi": "10.23915/distill.00019.1", "journal_ref": "distill-pub", "bibliography": [{"link": "http://arxiv.org/pdf/1903.12261.pdf", "title": "Benchmarking Neural Network Robustness to Common Corruptions and Perturbations"}, {"link": "http://arxiv.org/pdf/1711.11561.pdf", "title": "Measuring the tendency of CNNs to Learn Surface Statistical Regularities"}, {"link": "http://doi.acm.org/10.1145/1143844.1143889", "title": "Nightmare at Test Time: Robust Learning by Feature Deletion"}, {"link": "https://doi.org/10.1162/153244303321897726", "title": "A Robust Minimax Approach to Classification"}, {"link": "http://arxiv.org/pdf/1808.08750.pdf", "title": "Generalisation in humans and deep neural networks"}, {"link": "http://arxiv.org/pdf/1906.08988.pdf", "title": "A Fourier Perspective on Model Robustness in Computer Vision"}, {"link": "http://arxiv.org/pdf/1807.06732.pdf", "title": "Motivating the Rules of the Game for Adversarial Example Research"}, {"link": "http://arxiv.org/pdf/1901.10513.pdf", "title": "Adversarial Examples Are a Natural Consequence of Test Error in Noise"}, {"link": "http://papers.nips.cc/paper/6331-robustness-of-classifiers-from-adversarial-to-random-noise.pdf", "title": "Robustness of classifiers: from adversarial to random noise"}, {"link": "http://arxiv.org/pdf/1907.07174.pdf", "title": "Natural Adversarial Examples"}, {"link": "http://arxiv.org/pdf/1906.02337.pdf", "title": "{MNIST-C:} {A} Robustness Benchmark for Computer Vision"}, {"link": "http://arxiv.org/pdf/1906.02899.pdf", "title": "{NICO:} {A} Dataset Towards Non-I.I.D. 
Image Classification"}, {"link": "http://arxiv.org/pdf/1902.10811.pdf", "title": "Do ImageNet Classifiers Generalize to ImageNet?"}, {"link": "http://arxiv.org/pdf/1808.03305.pdf", "title": "The Elephant in the Room"}, {"link": "http://arxiv.org/pdf/1904.10076.pdf", "title": "Using Videos to Evaluate Image Model Robustness"}]} {"id": "109a9ad019ae9b2c07d878ac8650f88e", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Robust Feature Leakage", "url": "https://distill.pub/2019/advex-bugs-discussion/response-2", "source": "distill", "source_type": "blog", "text": "#rebuttal,\n .comment-info {\n background-color: hsl(54, 78%, 96%);\n border-left: solid hsl(54, 33%, 67%) 1px;\n padding: 1em;\n color: hsla(0, 0%, 0%, 0.67);\n }\n\n #header-info {\n margin-top: 0;\n margin-bottom: 1.5rem;\n display: grid;\n grid-template-columns: 65px max-content 1fr;\n grid-template-areas:\n \"icon explanation explanation\"\n \"icon back comment\";\n grid-column-gap: 1.5em;\n }\n\n #header-info .icon-multiple-pages {\n grid-area: icon;\n padding: 0.5em;\n content: url(images/multiple-pages.svg);\n }\n\n #header-info .explanation {\n grid-area: explanation;\n font-size: 85%;\n }\n\n #header-info .back {\n grid-area: back;\n }\n\n #header-info .back::before {\n\n content: \"←\";\n margin-right: 0.5em;\n }\n\n #header-info .comment {\n grid-area: comment;\n scroll-behavior: smooth;\n }\n\n #header-info .comment::before {\n content: \"↓\";\n margin-right: 0.5em;\n }\n\n #header-info a.back,\n #header-info a.comment {\n font-size: 80%;\n font-weight: 600;\n border-bottom: none;\n text-transform: uppercase;\n color: #2e6db7;\n display: block;\n margin-top: 0.25em;\n letter-spacing: 0.25px;\n }\n\n\n\n\n This article is part of a discussion of the Ilyas et al. paper\n *“Adversarial examples are not bugs, they are features”.*\n You can learn more in the\n [main discussion article](/2019/advex-bugs-discussion/) .\n \n\n\n[Other Comments](/2019/advex-bugs-discussion/#commentaries)\n[Comment by Ilyas et al.](#rebuttal)\n\n\n Ilyas et al. report a surprising result: a model trained on\n adversarial examples is effective on clean data. They suggest this transfer is driven by adverserial\n examples containing geuinely useful non-robust cues. But an alternate mechanism for the transfer could be a\n kind of “robust feature leakage” where the model picks up on faint robust cues in the attacks.\n \n\n\n\n\n We show that at least 23.5% (out of 88%) of the accuracy can be explained by robust features in\n DrandD\\_\\text{rand}Drand​. This is a weak lower bound, established by a linear model, and does not perclude the\n possibility of further leakage. On the other hand, we find no evidence of leakage in DdetD\\_\\text{det}Ddet​.\n \n\n\n### Lower Bounding Leakage\n\n\n\n Our technique for quantifying leakage consisting of two steps:\n \n\n\n\n1. First, we construct features fi(x)=wiTxf\\_i(x) = w\\_i^Txfi​(x)=wiT​x that are provably robust, in a sense we will soon\n specify.\n2. Next, we train a linear classifier as per ,\n Equation 3 on the datasets D^det\\hat{\\mathcal{D}}\\_{\\text{det}}D^det​ and\n D^rand\\hat{\\mathcal{D}}\\_{\\text{rand}}D^rand​ (Defined , Table 1) on\n these robust features *only*.\n\n\n\n\n Since Ilyas et al. 
only specify robustness in the two class\n case, we propose two possible specifications for what constitutes a *robust feature* in the multiclass\n setting:\n\n \n\n\n **Specification 1** \nFor at least one of the\n classes, the feature is γ\\gammaγ-robustly useful with\n γ=0\\gamma = 0γ=0, and the set of valid perturbations equal to an L2L\\_2L2​ norm ball with radius 0.25.\n \n\n **Specification 2** \n\n The feature comes from a robust model for which at least 80% of points in the test set have predictions\n that remain static in a neighborhood of radius 0.25 on the L2L\\_2L2​ norm ball.\n \n\n\n We find features that satisfy *both* specifications by using the 10 linear features of a robust linear\n model trained on CIFAR-10. Because the features are linear, the above two conditions can be certified\n analytically. We leave the reader to inspect the weights corresponding to the features manually:\n \n\n\n\n\n\n 10 Features, FCF\\_CFC​, of robust linear classifier CCC. Each feature is γi\\gamma\\_iγi​-robustly-useful with\n respect to label iii. Visualized are the weights wiw\\_iwi​ of features fi(x)=wiTxf\\_i(x) = w\\_i^Txfi​(x)=wiT​x.\n \n\n\n Training a linear model on the above robust features on D^rand\\hat{\\mathcal{D}}\\_{\\text{rand}}D^rand​ and testing on the\n CIFAR test set incurs an accuracy of **23.5%** (out of 88%). Doing the same on\n D^det\\hat{\\mathcal{D}}\\_{\\text{det}}D^det​ incurs an accuracy of **6.81%** (out of 44%).\n \n\n\n\n The contrasting results suggest that the the two experiements should be interpreted differently. The\n transfer results of D^rand\\hat{\\mathcal{D}}\\_{\\text{rand}}D^rand​ in Table 1 of \n should approached with caution: A non-trivial portion of the accuracy can be attributed to robust\n features. Note that this bound is weak: this bound could be possibly be improved if we used nonlinear\n features, e.g. from a robust deep neural network.\n \n\n\n\n The results of D^det\\hat{\\mathcal{D}}\\_{\\text{det}}D^det​ in Table 1 of \n however, are on stronger footing. We find no evidence of feature leakage (in fact, we find negative leakage — an influx!). We thus conclude that it is plausible the majority of the accuracy is driven by\n non-robust features, exactly the thesis of .\n \n\n\n\n\n To cite Ilyas et al.’s response, please cite their\n [collection of responses](/2019/advex-bugs-discussion/original-authors/#citation).\n\n\n**Response Summary**: This\n is a valid concern that was actually one of our motivations for creating the\n D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ dataset (which, as the comment notes, actually\n has *misleading* robust features). The provided experiment further\n improves our understanding of the underlying phenomenon. \n**Response**: This comment raises a valid concern which was in fact one of\n the primary reasons for designing the D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ dataset.\n In particular, recall the construction of the D^rand\\widehat{\\mathcal{D}}\\_{rand}D\nrand​\n dataset: assign each input a random target label and do PGD towards that label.\n Note that unlike the D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ dataset (in which the\n target class is deterministically chosen), the D^rand\\widehat{\\mathcal{D}}\\_{rand}D\nrand​\n dataset allows for robust features to actually have a (small) positive\n correlation with the label. \n\n\nTo see how this can happen, consider the following simple setting: we have a\n single feature f(x)f(x)f(x) that is 111 for cats and −1-1−1 for dogs. 
If ϵ=0.1\\epsilon = 0.1ϵ=0.1\n then f(x)f(x)f(x) is certainly a robust feature. However, randomly assigning labels\n (as in the dataset D^rand\\widehat{\\mathcal{D}}\\_{rand}D\nrand​) would make this feature\n uncorrelated with the assigned label, i.e., we would have that E[f(x)⋅y]=0E[f(x)\\cdot y] = 0E[f(x)⋅y]=0. Performing a\n targeted attack might in this case induce some correlation with the\n assigned label, as we could have E[f(x+η⋅∇f(x))⋅y]>E[f(x)⋅y]=0\\mathbb{E}[f(x+\\eta\\cdot\\nabla\n f(x))\\cdot y] > \\mathbb{E}[f(x)\\cdot y] = 0E[f(x+η⋅∇f(x))⋅y]>E[f(x)⋅y]=0, allowing a model to learn\n to correctly classify new inputs. \n\n\nIn other words, starting from a dataset with no features, one can encode\n robust features within small perturbations. In contrast, in the\n D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ dataset, the robust features are *correlated\n with the original label* (since the labels are permuted) and since they are\n robust, they cannot be flipped to correlate with the newly assigned (wrong)\n label. Still, the D^rand\\widehat{\\mathcal{D}}\\_{rand}D\nrand​ dataset enables us to show\n that (a) PGD-based adversarial examples actually alter features in the data and\n (b) models can learn from human-meaningless/mislabeled training data. The\n D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ dataset, on the other hand, illustrates that the\n non-robust features are actually sufficient for generalization and can be\n preferred over robust ones in natural settings.\n\n\nThe experiment put forth in the comment is a clever way of showing that such\n leakage is indeed possible. However, we want to stress (as the comment itself\n does) that robust feature leakage does *not* have an impact on our main\n thesis — the D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ dataset explicitly controls\n for robust\n feature leakage (and in fact, allows us to quantify the models’ preference for\n robust features vs non-robust features — see Appendix D.6 in the\n [paper](https://arxiv.org/abs/1905.02175)).\n\n\n\n\n You can find more responses in the [main discussion article](/2019/advex-bugs-discussion/).", "date_published": "2019-08-06T20:00:00Z", "authors": ["Gabriel Goh"], "summaries": ["An example project using webpack and svelte-loader and ejs to inline SVGs"], "doi": "10.23915/distill.00019.2", "journal_ref": "distill-pub", "bibliography": []} {"id": "90edb4745db80192530bcfcdb853ebb3", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Two Examples of Useful, Non-Robust Features", "url": "https://distill.pub/2019/advex-bugs-discussion/response-3", "source": "distill", "source_type": "blog", "text": "#rebuttal,\n .comment-info {\n background-color: hsl(54, 78%, 96%);\n border-left: solid hsl(54, 33%, 67%) 1px;\n padding: 1em;\n color: hsla(0, 0%, 0%, 0.67);\n }\n\n #header-info {\n margin-top: 0;\n margin-bottom: 1.5rem;\n display: grid;\n grid-template-columns: 65px max-content 1fr;\n grid-template-areas:\n \"icon explanation explanation\"\n \"icon back comment\";\n grid-column-gap: 1.5em;\n }\n\n #header-info .icon-multiple-pages {\n grid-area: icon;\n padding: 0.5em;\n content: url(images/multiple-pages.svg);\n }\n\n #header-info .explanation {\n grid-area: explanation;\n font-size: 85%;\n }\n\n #header-info .back {\n grid-area: back;\n }\n\n #header-info .back::before {\n\n content: \"←\";\n margin-right: 0.5em;\n }\n\n #header-info .comment {\n grid-area: comment;\n scroll-behavior: smooth;\n }\n\n #header-info .comment::before {\n content: \"↓\";\n 
margin-right: 0.5em;\n }\n\n #header-info a.back,\n #header-info a.comment {\n font-size: 80%;\n font-weight: 600;\n border-bottom: none;\n text-transform: uppercase;\n color: #2e6db7;\n display: block;\n margin-top: 0.25em;\n letter-spacing: 0.25px;\n }\n\n\n\n\n This article is part of a discussion of the Ilyas et al. paper\n *“Adversarial examples are not bugs, they are features”.*\n You can learn more in the\n [main discussion article](/2019/advex-bugs-discussion/) .\n \n\n\n[Other Comments](/2019/advex-bugs-discussion/#commentaries)\n[Comment by Ilyas et al.](#rebuttal)\n\n\n Ilyas et al. define a *feature* as a function fff that\n takes xxx from the *data distribution* (x,y)∼D(x,y) \\sim \\mathcal{D}(x,y)∼D into a real number, restricted to have\n mean zero and unit variance. A feature is said to be *useful* if it has high correlation with the\n label. But in the presence of an adversary Ilyas et al. argues\n the metric that truly matters is a feature’s *robust usefulness*,\n \n\n\n\nE[inf∥δ∥≤ϵyf(x+δ)],\n \\mathbf{E}\\left[\\inf\\_{\\|\\delta\\|\\leq\\epsilon}yf(x+\\delta)\\right],\n E[∥δ∥≤ϵinf​yf(x+δ)],\n\n\n its correlation with the label while under attack. Ilyas et al. \n suggests that in addition to the pedestrian, robust features we know and love (such as the color of the\n sky), our models may also be taking advantage of useful, non-robust features, some of which may even lie\n beyond the threshold of human intuition. This begs the question: what might such non-robust features look\n like?\n \n\n\n### Non-Robust Features in Linear Models\n\n\n\n\n Our search is simplified when we realize the following: non-robust features are not unique to the complex,\n nonlinear models encountered in deep learning. As Ilyas et al \n observe, they arise even in the humblest of models — the linear one. 
Thus, we restrict our attention\n to linear features of the form:\n\n \n\n\n\n\nf(x)=aTx∥a∥ΣwhereΣ=E[xxT]andE[x]=0.f(x) = \\frac{a^Tx}{\\|a\\|\\_\\Sigma}\\qquad \\text{where} \\qquad \\Sigma = \\mathbf{E}[xx^T] \\quad\n \\text{and} \\quad \\mathbf{E}[x] = 0.\n f(x)=∥a∥Σ​aTx​whereΣ=E[xxT]andE[x]=0.\n\n\n The robust usefulness of a linear feature admits an elegant decomposition\n This\n E[inf∥δ∥≤ϵyf(x+δ)]=E[yf(x)+inf∥δ∥≤ϵyf(δ)]=E[yf(x)+inf∥δ∥≤ϵyaTδ∥a∥Σ]=E[yf(x)+inf∥δ∥≤ϵaTδ∥a∥Σ]=E[yf(x)]−ϵ∥a∥∗∥a∥Σ\n \\begin{aligned}\n \\mathbf{E}\\left[\\inf\\_{\\|\\delta\\|\\leq\\epsilon}yf(x+\\delta)\\right] &\n =\\mathbf{E}\\left[yf(x)+\\inf\\_{\\|\\delta\\|\\leq\\epsilon}yf(\\delta)\\right]\\\\\n & =\\mathbf{E}\\left[yf(x)+\\inf\\_{\\|\\delta\\|\\leq\\epsilon}y\\frac{a^{T}\\delta}{\\|a\\|\\_{\\Sigma}}\\right]\\\\\n &\n =\\mathbf{E}\\left[yf(x)+\\frac{\\inf\\_{\\|\\delta\\|\\leq\\epsilon}a^{T}\\delta}{\\|a\\|\\_{\\Sigma}}\\right]=\\mathop{\\mathbf{E}[yf(x)]}-\\epsilon\\frac{\\|a\\|\\_{\\*}}{\\|a\\|\\_{\\Sigma}}\n \\end{aligned}\n E[∥δ∥≤ϵinf​yf(x+δ)]​=E[yf(x)+∥δ∥≤ϵinf​yf(δ)]=E[yf(x)+∥δ∥≤ϵinf​y∥a∥Σ​aTδ​]=E[yf(x)+∥a∥Σ​inf∥δ∥≤ϵ​aTδ​]=E[yf(x)]−ϵ∥a∥Σ​∥a∥∗​​​\n into two terms:\n\n \n\n\n\n .undomargin {\n position: relative;\n left: -1em;\n top: 0.2em;\n }\n \n\n\n\n\n\nE[inf∥δ∥≤ϵyf(x+δ)]\n \\mathbf{E}\\left[\\inf\\_{\\|\\delta\\|\\leq\\epsilon}yf(x+\\delta)\\right]\n E[∥δ∥≤ϵinf​yf(x+δ)]\n\n\n\n\n===\n\n\n\n\nE[yf(x)]\\mathop{\\mathbf{E}[yf(x)]}E[yf(x)]\n\n\n\n\n−-−\n\n\n\n\nϵ∥a∥∗∥a∥Σ\\epsilon\\frac{\\|a\\|\\_{\\*}}{\\|a\\|\\_{\\Sigma}}ϵ∥a∥Σ​∥a∥∗​​\n\n\n\n\n\n The robust usefulness of a feature\n \n\n\n the correlation of the feature with the label\n \n\n\n the feature’s non-robustness\n \n\n\n\n In the above equation ∥⋅∥∗\\|\\cdot\\|\\_\\*∥⋅∥∗​ deontes the dual norm of ∥⋅∥\\|\\cdot\\|∥⋅∥.\n This decomposition gives us an instrument for visualizing any set of linear features aia\\_iai​ in a two\n dimensional plot.\n \n\n\n\n Plotted below is the binary classification task of separating *truck* and *frog* in CIFAR-10 on\n the set of features aia\\_iai​ corresponding to the ithi^{th}ith singular vector of the data.\n \n\n\n\n\n\n The elusive non-robust useful features, however, seem conspicuously absent in the above plot.\n Fortunately, we can construct such features by strategically combining elements of this basis.\n \n\n\n\n We demonstrate two constructions:\n \n\n\n\n\n\n\n\n\n\n\n It is surprising, thus, that the experiments of Madry et al. \n (with deterministic perturbations) *do* distinguish between the non-robust useful\n features generated from ensembles and containments. A succinct definition of a robust feature that peels\n these two worlds apart is yet to exist, and remains an open problem for the machine learning community.\n \n\n\n\n To cite Ilyas et al.’s response, please cite their\n [collection of responses](/2019/advex-bugs-discussion/original-authors/#citation).\n\n\n**Response Summary**: The construction of explicit non-robust features is\n very interesting and makes progress towards the challenge of visualizing some of\n the useful non-robust features detected by our experiments. 
We also agree that\n non-robust features arising as “distractors” is indeed not precluded by our\n theoretical framework, even if it is precluded by our experiments.\n This simple theoretical framework sufficed for reasoning about and\n predicting the outcomes of our experiments\n We also presented a theoretical setting where we can\n analyze things fully rigorously in Section 4 of our paper..\n However, this comment rightly identifies finding a more comprehensive\n definition of feature as an important future research direction.\n \n\n\n**Response**: These experiments (visualizing the robustness and\n usefulness of different linear features) are very interesting! They both further\n corroborate the existence of useful, non-robust features and make progress\n towards visualizing what these non-robust features actually look like. \n\n\nWe also appreciate the point made by the provided construction of non-robust\n features (as defined in our theoretical framework) that are combinations of\n useful+robust and useless+non-robust features. Our theoretical framework indeed\n enables such a scenario, even if — as the commenter already notes — our\n experimental results do not. (In this sense, the experimental results and our [main takeaway](/2019/advex-bugs-discussion/rebuttal/#takeaway1) are actually stronger than our theoretical\n framework technically captures.) Specifically, in such a scenario, during the\n construction of the D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ dataset, only the non-robust\n and useless term of the feature would be flipped. Thus, a classifier trained on\n such a dataset would associate the predictive robust feature with the\n *wrong* label and would thus not generalize on the test set. In contrast,\n our experiments show that classifiers trained on D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​\n do generalize.\n\n\nOverall, our focus while developing our theoretical framework was on\n enabling us to formally describe and predict the outcomes of our experiments. As\n the comment points out, putting forth a theoretical framework that captures\n non-robust features in a very precise way is an important future research\n direction in itself. 
\n\n\n\n\n You can find more responses in the [main discussion article](/2019/advex-bugs-discussion/).", "date_published": "2019-08-06T20:00:00Z", "authors": ["Gabriel Goh"], "summaries": ["An example project using webpack and svelte-loader and ejs to inline SVGs"], "doi": "10.23915/distill.00019.3", "journal_ref": "distill-pub", "bibliography": []} {"id": "929bb52cf50e0b8d42c3547eb0bb34e0", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarially Robust Neural Style Transfer", "url": "https://distill.pub/2019/advex-bugs-discussion/response-4", "source": "distill", "source_type": "blog", "text": "#rebuttal,\n .comment-info {\n background-color: hsl(54, 78%, 96%);\n border-left: solid hsl(54, 33%, 67%) 1px;\n padding: 1em;\n color: hsla(0, 0%, 0%, 0.67);\n }\n\n #header-info {\n margin-top: 0;\n margin-bottom: 1.5rem;\n display: grid;\n grid-template-columns: 65px max-content 1fr;\n grid-template-areas:\n \"icon explanation explanation\"\n \"icon back comment\";\n grid-column-gap: 1.5em;\n }\n\n #header-info .icon-multiple-pages {\n grid-area: icon;\n padding: 0.5em;\n content: url(images/multiple-pages.svg);\n }\n\n #header-info .explanation {\n grid-area: explanation;\n font-size: 85%;\n }\n\n #header-info .back {\n grid-area: back;\n }\n\n #header-info .back::before {\n\n content: \"←\";\n margin-right: 0.5em;\n }\n\n #header-info .comment {\n grid-area: comment;\n scroll-behavior: smooth;\n }\n\n #header-info .comment::before {\n content: \"↓\";\n margin-right: 0.5em;\n }\n\n #header-info a.back,\n #header-info a.comment {\n font-size: 80%;\n font-weight: 600;\n border-bottom: none;\n text-transform: uppercase;\n color: #2e6db7;\n display: block;\n margin-top: 0.25em;\n letter-spacing: 0.25px;\n }\n\n\n\n\n This article is part of a discussion of the Ilyas et al. paper\n *“Adversarial examples are not bugs, they are features”.*\n You can learn more in the\n [main discussion article](/2019/advex-bugs-discussion/) .\n \n\n\n[Other Comments](/2019/advex-bugs-discussion/#commentaries)\n[Comment by Ilyas et al.](#rebuttal)\n\n\n A figure in Ilyas, et. al. that struck me as particularly\n interesting\n was the following graph showing a correlation between adversarial transferability between architectures and\n their\n tendency to learn similar non-robust features.\n \n\n\n\n![](images/transferability.png)\n\n Adversarial transferability vs test accuracy of different architectures trained on ResNet-50′s\n non-robust features.\n \n\n\n One way to interpret this graph is that it shows how well a particular architecture is able to capture\n non-robust features in an image.\n Since the non-robust features are defined by the non-robust features ResNet-50 captures,\n NRFresnetNRF\\_{resnet}NRFresnet​, what this graph really shows is how well an architecture captures NRFresnetNRF\\_{resnet}NRFresnet​.\n \n\n\n\n\n Notice how far back VGG is compared to the other models.\n \n\n\n\n In the unrelated field of neural style transfer, VGG-based neural networks are also quite special since non-VGG architectures are\n known to not work very well This phenomenon is discussed at length in [this\n Reddit thread](https://www.reddit.com/r/MachineLearning/comments/7rrrk3/d_eat_your_vggtables_or_why_does_neural_style/). 
without some sort of parameterization trick .\n The above interpretation of the graph provides an alternative explanation for this phenomenon.\n **Since VGG is unable to capture non-robust features as well as other architectures, the outputs for style\n transfer actually look more correct to humans!**\nTo follow this argument, note that the perceptual losses used in neural style transfer are\n dependent on matching features learned by a separately trained image classifier. If these learned\n features don’t make sense to humans (non-robust features), the outputs for neural style transfer won’t\n make sense either.\n\n\n\n\n Before proceeding, let’s quickly discuss the results obtained by Mordvintsev, et. al. in [Differentiable Image\n Parameterizations](https://distill.pub/2018/differentiable-parameterizations/), where they show that non-VGG architectures can work for style transfer by using a\n simple technique previously established in feature visualization.\n In their experiment, instead of optimizing the output image in RGB space, they optimize it in Fourier space,\n and run the image through a series of transformations (e.g jitter, rotation, scaling) before passing it\n through the neural network.\n \n\n\n\n Can we reconcile this result with our hypothesis linking neural style transfer and non-robust features?\n \n\n\n\n One possible theory is that all of these image transformations *weaken* or even *destroy*\n non-robust features.\n Since the optimization can no longer reliably manipulate non-robust features to bring down the loss, it is\n forced to use robust features instead, which are presumably more resistant to the applied image\n transformations (a rotated and jittered flappy ear still looks like a flappy ear).\n \n\n\nA quick experiment\n------------------\n\n\n\n Testing our hypothesis is fairly straightforward:\n Use an adversarially robust classifier for neural style transfer and see\n what happens.\n \n\n\n\n I evaluated a regularly trained (non-robust) ResNet-50 with a robustly trained ResNet-50 from Engstrom, et.\n al. on their performance on neural style transfer.\n For comparison, I performed the same algorithm with a regular VGG-19\n  .\n \n\n\n\n To ensure a fair comparison despite the different networks having different optimal hyperparameters, I\n performed a small grid search for each image and manually picked the best output per network.\n Further details can be read in a footnote\n \n L-BFGS was used for optimization as it showed faster convergence\n over Adam.\n For ResNet-50, the style layers used were the ReLu outputs after each of the 4 residual blocks,\n [relu2\\_x,relu3\\_x,relu4\\_x,relu5\\_x][relu2\\\\_x, relu3\\\\_x, relu4\\\\_x, relu5\\\\_x][relu2\\_x,relu3\\_x,relu4\\_x,relu5\\_x] while the content layer used was relu4\\_xrelu4\\\\_xrelu4\\_x.\n For VGG-19, style layers [relu1\\_1,relu2\\_1,relu3\\_1,relu4\\_1,relu5\\_1][relu1\\\\_1,relu2\\\\_1,relu3\\\\_1,relu4\\\\_1,relu5\\\\_1][relu1\\_1,relu2\\_1,relu3\\_1,relu4\\_1,relu5\\_1] were used with a content layer\n relu4\\_2relu4\\\\_2relu4\\_2.\n In VGG-19, max pooling layers were replaced with avg pooling layers, as stated in Gatys, et. 
al.\n \n or observed in the accompanying Colaboratory notebook.\n \n\n\n\n The results of this experiment can be explored in the diagram below.\n \n\n\n\n #style-transfer-slider.juxtapose {\n max-height: 512px;\n max-width: 512px;\n }\n \n\n\n**Content image**\n\n\n\n\n\n\n**Style image**\n\n\n\n\n\n\n  Compare VGG or Robust\n ResNet\n\n\n\n\n'use strict';\n\n// I don't know how to write JavaScript without a bundler. Please someone save me.\n\n(function () {\n\n // Initialize slider\n var currentContent = 'ben';\n var currentStyle = 'scream';\n var currentLeft = 'nonrobust';\n\n var compareVGGCheck = document.getElementById(\"check-compare-vgg\");\n var styleTransferSliderDiv = document.getElementById(\"style-transfer-slider\");\n\n function refreshSlider() {\n while (styleTransferSliderDiv.firstChild) {\n styleTransferSliderDiv.removeChild(styleTransferSliderDiv.firstChild);\n }\n var imgPath1 = 'images/style-transfer/' + currentContent + '\\_' + currentStyle + '\\_' + currentLeft + '.jpg';\n var imgPath2 = 'images/style-transfer/' + currentContent + '\\_' + currentStyle + '\\_robust.jpg';\n new juxtapose.JXSlider('#style-transfer-slider', [{\n src: imgPath1, // TODO: Might need to use absolute\\_url?\n label: currentLeft === 'nonrobust' ? 'Non-robust ResNet50' : 'VGG'\n }, {\n src: imgPath2,\n label: 'Robust ResNet50'\n }], {\n animate: true,\n showLabels: true,\n showCredits: false,\n startingPosition: \"50%\",\n makeResponsive: true\n });\n }\n\n refreshSlider();\n\n compareVGGCheck.onclick = function (evt) {\n currentLeft = evt.target.checked ? 'vgg' : 'nonrobust';\n refreshSlider();\n };\n\n // Initialize selector\n $(\"#content-select\").imagepicker({\n changed: function changed(oldVal, newVal, event) {\n currentContent = newVal;\n refreshSlider();\n }\n });\n $(\"#style-select\").imagepicker({\n changed: function changed(oldVal, newVal, event) {\n currentStyle = newVal;\n refreshSlider();\n }\n });\n})();\n\n Success!\n The robust ResNet shows drastic improvement over the regular ResNet.\n Remember, all we did was switch the ResNet’s weights, the rest of the code for performing style transfer is\n exactly the same!\n \n\n\n\n A more interesting comparison can be done between VGG-19 and the robust ResNet.\n At first glance, the robust ResNet’s outputs seem on par with VGG-19.\n Looking closer, however, the ResNet’s outputs seem slightly noisier and exhibit some artifacts\n This is more obvious when the output image is initialized not with the content image, but with\n Gaussian noise..\n \n\n\n\n\n\n\n![](images/zoom/vgg_texture.jpg)\n\n\n\n![](images/zoom/vgg_texture.jpg)\n\n\n Texture synthesized with VGG. \n\n*Mild artifacts.*\n\n\n\n\n![](images/zoom/resnet_texture.jpg)\n\n\n\n![](images/zoom/resnet_texture.jpg)\n\n\n Texture synthesized with robust ResNet. \n\n*Severe artifacts.*\n\n\n\n\n\n A comparison of artifacts between textures synthesized by VGG and ResNet.\n Interact by hovering around the images.\n This diagram was repurposed from\n [Deconvolution and Checkerboard Artifacts](https://distill.pub/2016/deconv-checkerboard/) \n by Odena, et. 
al.\n \n\n It is currently unclear exactly what causes these artifacts.\n One theory is that they are checkerboard artifacts\n caused by\n non-divisible kernel size and stride in the convolution layers.\n They could also be artifacts caused by the presence of max pooling layers\n in ResNet.\n An interesting implication is that these artifacts, while problematic, seem orthogonal to the\n problem that\n adversarial robustness solves in neural style transfer.\n \n\n\nVGG remains a mystery\n---------------------\n\n\n\n Although this experiment started because of an observation about a special characteristic of VGG\n nets, it\n did not provide an explanation for this phenomenon.\n Indeed, if we are to accept the theory that adversarial robustness is the reason VGG works out of\n the box\n with neural style transfer, surely we’d find some indication in existing literature that VGG is\n naturally\n more robust than other architectures.\n \n\n\n\n A few papers\n indeed show\n that VGG architectures are slightly more robust than ResNet.\n However, they also show that AlexNet, not known to work well\n for\n neural style transferAs shown by Dávid Komorowicz\n in\n this [blog post](https://dawars.me/neural-style-transfer-deep-learning/).\n , is\n *above* VGG in terms of this “natural robustness”.\n \n\n\n\n Perhaps adversarial robustness just happens to incidentally fix or cover up the true reason non-VGG\n architectures fail at style transfer (or other similar algorithms\n \n In fact, neural style transfer is not the only pretrained classifier-based iterative image\n optimization\n technique that magically works better with adversarial robustness. In Engstrom, et. al., they show that feature visualization via activation\n maximization works on robust classifiers *without*\n enforcing\n any priors or regularization (e.g. image transformations and decorrelated parameterization) used\n by\n previous work. In a recent chat with Chris\n Olah, he\n pointed out that the aforementioned feature visualization techniques actually work well on VGG\n *without* these priors, just like style transfer!\n \n ) i.e. adversarial robustness is a sufficient but unnecessary condition for good style transfer.\n Whatever the reason, I believe that further examination of VGG is a very interesting direction for\n future\n work.\n \n\n\n\n To cite Ilyas et al.’s response, please cite their\n [collection of responses](/2019/advex-bugs-discussion/original-authors/#citation).\n\n\n**Response Summary**: Very interesting\n results, highlighting the effect of non-robust features and the utility of\n robust models for downstream tasks. We’re excited to see what kind of impact\n robustly trained models will have in neural network art! We were also really\n intrigued by the mysteriousness of VGG in the context of style transfer\n. As such, we took a\n deeper dive which found some interesting links between robustness and style\n transfer that suggest that perhaps robustness does indeed play a role here. \n\n\n**Response**: These experiments are really cool! It is interesting that\n preventing the reliance of a model on non-robust features improves performance\n on style transfer, even without an explicit task-related objective (i.e. we\n didn’t train the networks to be better for style transfer). \n\n\n We also found the discussion of VGG as a “mysterious network” really\n interesting — it would be valuable to understand what factors drive style transfer\n performance more generally. 
Though not a complete answer, we made a couple of observations while investigating further:

*Style transfer does work with AlexNet:* One wrinkle in the idea that robustness is the “secret ingredient” to style transfer could be that VGG is not the most naturally robust network — AlexNet is. However, based on our own testing, style transfer does seem to work with AlexNet out-of-the-box, as long as we use a few early layers in the network (in a similar manner to VGG):

![](images/alexnetworks.png)

Style transfer using AlexNet, using conv_1 through conv_4.

Observe that even though style transfer still works, there are checkerboard patterns emerging — this seems to be a similar phenomenon to the one noticed in the comment in the context of robust models. This might be another indication that these two phenomena (checkerboard patterns and style transfer working) are not as intertwined as previously thought.

*From prediction robustness to layer robustness:* Another potential wrinkle here is that both AlexNet and VGG are not that much more robust than ResNets (for which style transfer completely fails), and yet seem to have dramatically better performance. To try to explain this, recall that style transfer is implemented as a minimization of a combined objective consisting of a style loss and a content loss. We found, however, that the network we use to compute the style loss is far more important than the one for the content loss. The following demo illustrates this — we can actually use a non-robust ResNet for the content loss and everything works just fine:

![](images/stylematters.png)

Style transfer seems to be rather invariant to the choice of content network used, and very sensitive to the style network used.

Therefore, from now on, we use a fixed ResNet-50 for the content loss as a control, and only worry about the style loss.

Now, note that the way that style loss works is by using the first few layers of the relevant network. Thus, perhaps it is not about the robustness of VGG’s predictions, but instead about the robustness of the layers that we actually use for style transfer?

To test this hypothesis, we measure the robustness of a layer $f$ as:

$$R(f) = \frac{\mathbb{E}_{x_1 \sim D}\left[\max_{x'} \|f(x') - f(x_1)\|_2\right]}{\mathbb{E}_{x_1, x_2 \sim D}\left[\|f(x_1) - f(x_2)\|_2\right]}$$

Essentially, this quantity tells us how much we can change the output of that layer $f(x)$ within a small ball, normalized by how far apart representations are between images in general.
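To make this metric concrete, here is a minimal PyTorch sketch of how $R(f)$ could be estimated on a batch of images. It is not the exact code behind the measurements that follow: the `layer_fn` callable, the use of an $\ell_2$ ball, and the radius and step schedule are illustrative assumptions, and the inner maximization is only approximated by projected gradient ascent.

```python
import torch

def estimate_layer_robustness(layer_fn, images, eps=0.25, steps=20, step_size=0.05):
    """Rough estimate of R(f) on a batch of NCHW images.

    `layer_fn` maps a batch of images to that layer's activations; the l2 ball,
    its radius `eps`, and the step schedule are illustrative choices, and the
    network is assumed to be in eval mode (pixel-range clamping is omitted).
    """
    with torch.no_grad():
        base = layer_fn(images).flatten(1)           # f(x1) for each image in the batch

    # Numerator: projected gradient ascent on ||f(x') - f(x1)||_2 inside the eps-ball.
    x_adv = images.clone() + 1e-3 * torch.randn_like(images)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        dist = (layer_fn(x_adv).flatten(1) - base).norm(dim=1).sum()
        grad, = torch.autograd.grad(dist, x_adv)
        with torch.no_grad():
            g = grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12
            x_adv = x_adv + step_size * grad / g     # normalized gradient step
            delta = x_adv - images                   # project back onto the l2 ball
            factor = eps / delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1).clamp(min=eps)
            x_adv = images + delta * factor
    with torch.no_grad():
        numerator = (layer_fn(x_adv).flatten(1) - base).norm(dim=1).mean()
        # Denominator: typical distance between representations of *different* images
        # (each image is paired with its neighbour in the batch).
        shuffled = layer_fn(images.roll(1, dims=0)).flatten(1)
        denominator = (base - shuffled).norm(dim=1).mean()
    return (numerator / denominator).item()
```

In words: perturb each image to push its layer activations as far as possible within the ball, then divide by the typical activation distance between unrelated images.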
We’ve plotted this value for the first few layers in a couple of different networks below:

![](images/robustnesses.png)

The robustness $R(f)$ of the first four layers of VGG16, AlexNet, and robust/standard ResNet-50 trained on ImageNet.

Here, it becomes clear that the first few layers of VGG and AlexNet are actually almost as robust as the first few layers of the robust ResNet! This is perhaps a more convincing indication that robustness might have something to do with VGG’s success in style transfer after all.

Finally, suppose we restrict style transfer to only use a single layer of the network when computing the style loss (usually style transfer uses several layers in the loss function to get the most visually appealing results — here we’re only interested in whether or not style transfer works, i.e. actually confers some style onto the image). Again, the more robust layers seem to indeed work better for style transfer! Since all of the layers in the robust ResNet are robust, style transfer yields non-trivial results even using the last layer alone. Conversely, VGG and AlexNet seem to excel in the earlier layers (where they are non-trivially robust) but fail when using exclusively later (non-robust) layers:

![](images/styletransfer.png)

Style transfer using a single layer. The names of the layers and their robustness $R(f)$ are printed below each style transfer result. We find that for both networks, the robust layers seem to work (for the robust ResNet, every layer is robust).

Of course, there is much more work to be done here, but we are excited to see further work into understanding the role of both robustness and the VGG in network-based image manipulation.
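For readers who want a concrete reference point for the single-layer experiment above, here is a minimal Gatys-style sketch in which the style loss is computed from a single chosen layer. The `style_layer_fn` and `content_layer_fn` callables, the loss weights, and the optimization schedule are placeholder assumptions, not the exact implementation behind the figure.

```python
import torch

def gram_matrix(feats):
    # feats: (1, C, H, W) activations of the chosen layer
    _, c, h, w = feats.shape
    f = feats.view(c, h * w)
    return f @ f.t() / (c * h * w)

def single_layer_style_transfer(content_img, style_img, style_layer_fn, content_layer_fn,
                                steps=500, style_weight=1e6, content_weight=1.0, lr=0.02):
    """Gatys-style optimization where the style loss uses *one* layer only."""
    with torch.no_grad():
        target_gram = gram_matrix(style_layer_fn(style_img))
        target_content = content_layer_fn(content_img)

    x = content_img.clone().requires_grad_(True)     # initialize from the content image
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        style_loss = (gram_matrix(style_layer_fn(x)) - target_gram).pow(2).sum()
        content_loss = (content_layer_fn(x) - target_content).pow(2).mean()
        (style_weight * style_loss + content_weight * content_loss).backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0, 1)                           # assumes pixel values in [0, 1]
    return x.detach()
```

Swapping `style_layer_fn` between an early (robust) and a late (non-robust) layer of the same network should be enough to probe the qualitative effect discussed above.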
\n\n\n\n\n You can find more responses in the [main discussion article](/2019/advex-bugs-discussion/).", "date_published": "2019-08-06T20:00:00Z", "authors": ["Reiichiro Nakano"], "summaries": ["An experiment showing adversarial robustness makes neural style transfer work on a non-VGG architecture"], "doi": "10.23915/distill.00019.4", "journal_ref": "distill-pub", "bibliography": [{"link": "http://arxiv.org/pdf/1508.06576.pdf", "title": "A Neural Algorithm of Artistic Style"}, {"link": "https://doi.org/10.23915/distill.00012", "title": "Differentiable Image Parameterizations"}, {"link": "https://distill.pub/2017/feature-visualization", "title": "Feature Visualization"}, {"link": "http://distill.pub/2016/deconv-checkerboard/", "title": "Deconvolution and checkerboard artifacts"}, {"link": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf", "title": "ImageNet Classification with Deep Convolutional Neural Networks"}, {"link": "https://dawars.me/neural-style-transfer-deep-learning/", "title": "Neural Style transfer with Deep Learning"}, {"link": "https://doi.org/10.23915/distill.00010", "title": "The Building Blocks of Interpretability"}]} {"id": "6e90076e03f74714d63d7b09c7ad8e59", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarial Examples are Just Bugs, Too", "url": "https://distill.pub/2019/advex-bugs-discussion/response-5", "source": "distill", "source_type": "blog", "text": ".comment-info {\n background-color: hsl(54, 78%, 96%);\n border-left: solid hsl(54, 33%, 67%) 1px;\n padding: 1em;\n color: hsla(0, 0%, 0%, 0.67);\n }\n\n #header-info {\n margin-top: 0;\n margin-bottom: 1.5rem;\n display: grid;\n grid-template-columns: 65px max-content 1fr;\n grid-template-areas:\n \"icon explanation explanation\"\n \"icon back comment\";\n grid-column-gap: 1.5em;\n }\n\n #header-info .icon-multiple-pages {\n grid-area: icon;\n padding: 0.5em;\n content: url(images/multiple-pages.svg);\n }\n\n #header-info .explanation {\n grid-area: explanation;\n font-size: 85%;\n color: hsl(0, 0%, 0.33);\n }\n\n #header-info .back {\n grid-area: back;\n }\n\n #header-info .back::before {\n\n content: \"←\";\n margin-right: 0.5em;\n }\n\n #header-info .comment {\n grid-area: comment;\n scroll-behavior: smooth;\n }\n\n #header-info .comment::before {\n content: \"↓\";\n margin-right: 0.5em;\n }\n\n #header-info a.back,\n #header-info a.comment {\n font-size: 80%;\n font-weight: 600;\n border-bottom: none;\n text-transform: uppercase;\n color: #2e6db7;\n display: block;\n margin-top: 0.25em;\n letter-spacing: 0.25px;\n }\n \n\n\n\n This article is part of a discussion of the Ilyas et al. paper\n *“Adversarial examples are not bugs, they are features”.*\n You can learn more in the\n [main discussion article](/2019/advex-bugs-discussion/) .\n \n\n\n[All Responses](/2019/advex-bugs-discussion/#articles)\n[Comment by Ilyas et al.](#rebuttal)\n\n\n We demonstrate that there exist adversarial examples which are just “bugs”:\n aberrations in the classifier that are not intrinsic properties of the data distribution.\n In particular, we give a new method for constructing adversarial examples which:\n \n\n\n1. Do not transfer between models, and\n2. 
Do not leak “non-robust features” which allow for learning, in the\n sense of Ilyas-Santurkar-Tsipras-Engstrom-Tran-Madry\n .\n\n\n\n We replicate the Ilyas et al.\n experiment of training on mislabeled adversarially-perturbed images\n (Section 3.2 of ),\n and show that it fails for our construction of adversarial perturbations.\n \n\n\n\n The message is, whether adversarial examples are features or bugs depends\n on how you find them — standard PGD finds features, but bugs are abundant as well.\n \n\n\n\n We also give a toy example of a data distribution which has no “non-robust features”\n (under any reasonable definition of feature), but for which standard training yields a highly non-robust\n classifier.\n This demonstrates, again, that adversarial examples can occur even if the data distribution does not\n intrinsically have any vulnerable directions.\n \n\n\n### Background\n\n\n\n Many have understood Ilyas et al. \n to claim that adversarial examples are not “bugs”, but are “features”.\n Specifically, Ilyas et al. postulate the following two worlds:\n As communicated to us by the original authors.\n\n\n\n* **World 1: Adversarial examples exploit directions irrelevant for classification (“bugs”).** \n In this world, adversarial examples occur because classifiers behave\n poorly off-distribution,\n when they are evaluated on inputs that are not natural images.\n Here, adversarial examples would occur in arbitrary directions,\n having nothing to do with the true data distribution.\n* **World 2: Adversarial examples exploit useful directions for classification (“features”).**\n In this world, adversarial examples occur in directions that are still “on-distribution”,\n and which contain features of the target class.\n For example, consider the perturbation that\n makes an image of a dog to be classified as a cat.\n In World 2, this perturbation is not purely random, but has something to do with cats.\n Moreover, we expect that this perturbation transfers to other classifiers trained to distinguish cats\n vs. dogs.\n\n\n\n Our main contribution is demonstrating that these worlds are not mutually exclusive — and in fact, we\n are in both.\n Ilyas et al. 
\n show that there exist adversarial examples in World 2, and we show there exist\n examples in World 1.\n \n\n\n\n Constructing Non-transferrable Targeted Adversarial Examples\n--------------------------------------------------------------\n\n\n\n\n We propose a method to construct targeted adversarial examples for a given classifier\n $f$,\n which do not transfer to other classifiers trained for the same problem.\n \n\n\n\n Recall that for a classifier $f$, an input example $(x, y)$, and target class $y\\_{targ}$,\n a *targeted adversarial example* is an $x’$ such that $||x - x’||\\leq \\eps$ and\n $f(x’) = y\\_{targ}$.\n \n\n\n\n The standard method of constructing adversarial examples is via Projected Gradient Descent (PGD)\n \n PGD is described in the appendix.\n \n which starts at input $x$, and iteratively takes steps $\\{x\\_t\\}$\n to minimize the loss $L(f, x\\_t, y\\_{targ})$.\n That is, we take steps in the direction\n $$-\\nabla\\_x L(f, x\\_t, y\\_{targ})$$\n where $L(f, x, y)$ is the loss of $f$ on input $x$, label $y$.\n \n\n\n\n Note that since PGD steps in the gradient direction towards the target class,\n we may expect these adversarial examples have *feature leakage* from the target class.\n For example, suppose we are perturbing an image of a dog into a plane (which usually appears against a blue\n background).\n It is plausible that the gradient direction tends to make the dog image more blue,\n since the “blue” direction is correlated with the plane class.\n In our construction below, we attempt to eliminate such feature leakage.\n\n \n\n\n![](./manifold.svg)\n\n\n An illustration of the image-manifold for adversarially perturbing a dog to a plane.\n The gradient of the loss can be thought of as having an on-manifold “feature component”\n and an off-manifold “random component”.\n PGD steps along both components, hence causing feature-leakage in adversarial examples.\n Our construction below attempts to step only in the off-manifold direction.\n \n\n### Our Construction\n\n\n\n Let $\\{f\\_i : \\R^n \\to \\cY\\}\\_i$ be an ensemble of classifiers\n for the same classification problem as $f$.\n For example, we can let $\\{f\\_i\\}$ be a collection of ResNet18s trained from\n different random initializations.\n \n\n\n\n For input example $(x, y)$ and target class $y\\_{targ}$,\n we perform iterative updates to find adversarial attacks — as in PGD.\n However, instead of stepping directly in the gradient direction, we\n step in the direction\n \n Formally, we replace the iterative step with\n $$x\\_{t+1} \\gets \\Pi\\_\\eps\\left( x\\_t\n - \\alpha( \\nabla\\_x L(f, x\\_t, y\\_{targ}) + \\E\\_i[ \\nabla\\_x L(f\\_i, x\\_t, y)]) \\right)$$\n where $\\Pi\\_\\eps$ is the projection onto the $\\eps$-ball around $x$.\n \n $$-\\left( \\nabla\\_x L(f, x\\_t, y\\_{targ}) + \\E\\_i[ \\nabla\\_x L(f\\_i, x\\_t, y)] \\right)$$\n\n That is, instead of taking gradient steps to minimize $L(f, x, y\\_{targ})$,\n we minimize the “disentangled loss”\n \n We could also consider explicitly using the ensemble to decorrelate,\n by stepping in direction\n $\\nabla\\_x L(f, x, y\\_{targ}) - \\E\\_i[ \\nabla\\_x L(f\\_i, x, y\\_{targ})]$.\n This works well for small $\\epsilon$,\n but the given loss has better optimization properties for larger $\\epsilon$.\n \n $$L(f, x, y\\_{targ}) + \\E\\_i[L(f\\_i, x, y)]$$\n This loss encourages finding an $x\\_t$ which is adversarial for $f$,\n but not for the ensemble $\\{f\\_i\\}$.\n \n\n\n\n These adversarial examples will not be adversarial for the 
ensemble $\\{f\\_i\\}$. But perhaps surprisingly,\n these examples are also not adversarial for\n *new* classifiers trained for the same problem.\n \n\n\n### Experiments\n\n\n\n We train a ResNet18 on CIFAR10 as our target classifier $f$.\n For our ensemble, we train 10 ResNet18s on CIFAR10, from fresh random initializations.\n We then test the probability that\n a targeted attack for $f$\n transfers to a new (freshly-trained) ResNet18, with the same targeted class.\n Our construction yields adversarial examples which do not transfer well to new models.\n \n\n\n\n For $L\\_{\\infty}$ attacks:\n \n\n\n\n\n| | | |\n| --- | --- | --- |\n| | Attack Success | Transfer Success |\n| PGD | 99.6% | 52.1% |\n| Ours | 98.6% | 0.8% |\n\n\n\n For $L\\_2$ attacks:\n \n\n\n\n\n| | | |\n| --- | --- | --- |\n| | Attack Success | Transfer Success |\n| PGD | 99.9% | 82.5% |\n| Ours | 99.3% | 1.7% |\n\n\nAdversarial Examples With No Features\n-------------------------------------\n\n\n\n Using the above, we can construct adversarial examples\n which *do not suffice* for learning.\n Here, we replicate the Ilyas et al. experiment\n that “Non-robust features suffice for standard classification”\n (Section 3.2 of ),\n but show that it fails for our construction of adversarial examples.\n \n\n\n\n To review, the Ilyas et al. non-robust experiment was:\n \n\n1. Train a standard classifier $f$ for CIFAR.\n2. From the CIFAR10 training set $S = \\{(X\\_i, Y\\_i)\\}$,\n construct an alternate train set $S’ = \\{(X\\_i^{Y\\_i \\to (Y\\_i + 1)}, Y\\_i + 1)\\}$,\n where $X\\_i^{Y\\_i \\to (Y\\_i +1)}$ denotes an adversarial example for\n $f$, perturbing $X\\_i$ from its true class $Y\\_i$ towards target class $Y\\_i+1 (\\text{mod }10)$.\n Note that $S’$ appears to humans as “mislabeled examples”.\n3. Train a new classifier $f’$ on train set $S’$.\n Observe that this classifier has non-trivial accuracy on the original CIFAR distribution.\n\n\n\n\n Ilyas et al. use Step (3) to argue that\n adversarial examples have a meaningful “feature” component.\n\n However, for adversarial examples constructed using our method, Step (3) fails.\n In fact, $f’$ has good accuracy with respect to the “label-shifted” distribution\n $(X, Y+1)$, which is intuitively what we trained on.\n \n\n\nFor $L\\_{\\infty}$ attacks:\n\n\n\n\n\n| | | |\n| --- | --- | --- |\n| | Test Acc on CIFAR: $(X, Y)$ | Test Acc on Shifted-CIFAR: $(X, Y+1)$ |\n| PGD | 23.7% | 40.4% |\n| Ours | 2.5% | 75.9% |\n\n\n\n\n Table: Test Accuracies of $f’$\n \n\n\nFor $L\\_2$ attacks:\n\n\n\n\n\n| | | |\n| --- | --- | --- |\n| | Test Acc on CIFAR: $(X, Y)$ | Test Acc on Shifted-CIFAR: $(X, Y+1)$ |\n| PGD | 33.2% | 27.3% |\n| Ours | 2.8% | 70.8% |\n\n\n\n\n Table: Test Accuracies of $f’$\n \n\n\nAdversarial Squares: Adversarial Examples from Robust Features\n--------------------------------------------------------------\n\n\n\n To further illustrate that adversarial examples can be “just bugs”,\n we show that they can arise even when the true data distribution has no “non-robust features” — that is, no intrinsically vulnerable directions.\n \n We are unaware of a satisfactory definition of “non-robust feature”, but we claim that for any\n reasonable\n *intrinsic* definition, this problem has no non-robust features.\n Intrinsic here meaning, a definition which depends only on geometric properties of the data\n distribution, and not on the family of classifiers, or the finite-sample training set.\n\n \n\n We do not use the Ilyas et al. 
definition of “non-robust features,” because we believe it is vacuous.\n In particular, by the Ilyas et al. definition, **every** distribution\n has “non-robust features” — so the definition does not discern structural properties of the\n distribution.\n Moreover, for every “robust feature” $f$, there exists a corresponding “non-robust feature” $f’$, such\n that $f$ and $f’$ agree on the data distribution — so the definition depends strongly on the\n family of classifiers being considered.\n \n In the following toy problem, adversarial vulnerability arises as a consequence of finite-sample\n overfitting, and\n label noise.\n \n\n\n\n The problem is to distinguish between CIFAR-sized images that are either all-black or all-white,\n with a small amount of random pixel noise and label noise.\n \n\n\n\n\n![](./twosquares.png)\n\n\n\n A sample of images from the distribution.\n \n\n\n\n\n![](./data.png)\n\n Formally, let the distribution be as follows.\n Pick label $Y \\in \\{\\pm 1\\}$ uniformly,\n and let $$X :=\n \\begin{cases}\n (+\\vec{\\mathbb{1}} + \\vec\\eta\\_\\eps) \\cdot \\eta & \\text{if $Y=1$}\\\\\n (-\\vec{\\mathbb{1}} + \\vec\\eta\\_\\eps) \\cdot \\eta & \\text{if $Y=-1$}\\\\\n \\end{cases}$$\n \n where $\\vec\\eta\\_\\eps \\sim [-0.1, +0.1]^d$ is uniform $L\\_\\infty$ pixel noise,\n and\n $\\eta \\in \\{\\pm 1\\} \\sim Bernoulli(0.1)$ is the 10% label noise.\n\n \n\n \n\n A plot of samples from a 2D-version of this distribution is shown to the right.\n\n \n\n\n\n Notice that there exists a robust linear classifier for this problem which achieves perfect robust\n classification, with up to $\\eps = 0.9$ magnitude $L\\_\\infty$ attacks.\n However, if we sample 10000 training images from this distribution, and train\n a ResNet18 to 99.9% train accuracy,\n \n We optimize using Adam with learning-rate $0.00001$ and batch size $128$ for 20 epochs.\n \n the resulting classifier is highly non-robust:\n an $\\eps=0.01$ perturbation suffices to flip the class of almost all test examples.\n \n\n\n\n The input-noise and label noise are both essential for this construction.\n One intuition for what is happening is: in the initial stage of training\n the optimization learns the “correct” decision boundary (indeed, stopping after 1 epoch results in a robust\n classifier).\n However, optimizing for close to 0 train-error requires a network with high Lipshitz constant\n to fit the label-noise, which hurts robustness.\n\n \n\n\n\n![](./data.png)\n![](./step10.png)\n![](./step10000.png)\n\n\n\n Left: The training set (labels color-coded). Middle: The classifier after 10 SGD steps.\n Right: The classifier at the end of training. Note that it is overfit, and not robust.\n \n Figure adapted from .\n \n\n\n\n\n\n\nAddendum: Data Poisoning via Adversarial Examples\n-------------------------------------------------\n\n\n\n As an addendum, we observe that the “non-robust features”\n experiment of (Section 3.2)\n directly implies data-poisoning attacks:\n An adversary that is allowed to imperceptibly change every image in the training set can destroy the\n accuracy of the learnt classifier — and can moreover apply an arbitrary permutation\n to the classifier output labels (e.g. swapping cats and dogs).\n \n\n\n\n To see this, recall that the original “non-robust features” experiment shows:Using our previous\n notation, and also using vanilla PGD to find adversarial examples.\n\n\n\n\n 1. 
If we train on distribution $(X^{Y \\to (Y+1)}, Y+ 1)$ the classifier learns to predict well\n on distribution $(X, Y)$.\n \n\n\n\n By permutation-symmetry of the labels, this implies that:\n \n\n\n\n 2. If we train on distribution $(X^{Y \\to (Y+1)}, Y)$ the classifier learns to predict well\n on distribution $(X, Y-1)$.\n \n\n\n\n Note that in case (2), we are training with correct labels, just perturbing the inputs imperceptibly,\n but the classifier learns to predict the cyclically-shifted labels.\n Concretely, using the original numbers of\n Table 1 in , this reduction implies that\n **an adversary can perturb the CIFAR10 train set by $\\eps=0.5$ in $L\\_2$,\n and cause the learnt classifier to output shifted-labels\n 43.7% of the time\n (cats classified as birds, dogs as deers, etc).** \n\n\n\n\n This should extend to attacks that force arbitrary desired permutations of the labels.\n \n\n\n\n To cite Ilyas et al.’s response, please cite their\n [collection of responses](/2019/advex-bugs-discussion/original-authors/#citation).\n \n\n**Response Summary**: We note that as discussed in\n more detail in [Takeaway #1](/2019/advex-bugs-discussion/original-authors/#takeaway1), the\n mere existence of adversarial\n examples\n that are “features” is sufficient to corroborate our main thesis. This comment\n illustrates, however, that we can indeed craft adversarial examples that are\n based on “bugs” in realistic settings. Interestingly, such examples don’t\n transfer, which provides further support for the link between transferability\n and non-robust features.\n\n \n\n\n**Response**: As mentioned [above](/2019/advex-bugs-discussion/original-authors/#nonclaim1),\n we did not intend to claim\n that adversarial examples arise *exclusively* from (useful) features but rather\n that useful non-robust features exist and are thus (at least\n partially) responsible for adversarial vulnerability. In fact,\n prior work already shows how in theory adversarial examples can arise from\n insufficient samples or finite-sample overfitting\n , and the experiments\n presented here (particularly, the adversarial squares) constitute a neat\n real-world demonstration of these facts. \n\n\n Our main thesis that “adversarial examples will not just go away as we fix\n bugs in our models” is not contradicted by the existence of adversarial examples\n stemming from “bugs.” As long as adversarial examples can stem from non-robust\n features (which the commenter seems to agree with), fixing these bugs will not\n solve the problem of adversarial examples. \n\n\nMoreover, with regards to feature “leakage” from PGD, recall that in\n or D\\_det dataset, the non-robust features are associated with the\n correct label whereas the robust features are associated with the wrong\n one. We wanted to emphasize that, as\n [shown in Appendix D.6](https://arxiv.org/abs/1905.02175) ,\n models trained on our $D\\_{det}$ dataset actually generalize *better* to\n the non-robust feature-label association that to the robust\n feature-label association. In contrast, if PGD introduced a small\n “leakage” of non-robust features, then we would expect the trained model\n would still predominantly use the robust feature-label association. \n\n\n That said, the experiments cleverly zoom in on some more fine-grained\n nuances in our understanding of adversarial examples. 
One particular thing that\n stood out to us is that by creating a set of adversarial examples that are\n *explicitly* non-transferable, one also prevents new classifiers from learning\n features from that dataset. This finding thus makes the connection between\n transferability of adversarial examples and their containing generalizing\n features even stronger! Indeed, we can add the constructed dataset into our\n “$\\widehat{\\mathcal{D}}\\_{det}$ learnability vs transferability” plot\n (Figure 3 in the paper) — the point\n corresponding to this dataset fits neatly onto the trendline! \n\n\n\n![](transfer.png)\n\n Relationship between models reliance on non-robust features and their susceptibility to transfer\n attacks\n \n\n\n\n You can find more responses in the [main discussion article](/2019/advex-bugs-discussion/).", "date_published": "2019-08-06T20:00:00Z", "authors": ["Preetum Nakkiran"], "summaries": ["Refining the source of adversarial examples"], "doi": "10.23915/distill.00019.5", "journal_ref": "distill-pub", "bibliography": [{"link": "https://arxiv.org/pdf/1905.02175v3.pdf", "title": "Adversarial examples are not bugs, they are features"}, {"link": "https://arxiv.org/pdf/1905.11604.pdf", "title": "SGD on Neural Networks Learns Functions of Increasing Complexity"}]} {"id": "da53fa168db07e9ac4b6a68c2a0b63c9", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Learning from Incorrectly Labeled Data", "url": "https://distill.pub/2019/advex-bugs-discussion/response-6", "source": "distill", "source_type": "blog", "text": "#rebuttal,\n .comment-info {\n background-color: hsl(54, 78%, 96%);\n border-left: solid hsl(54, 33%, 67%) 1px;\n padding: 1em;\n color: hsla(0, 0%, 0%, 0.67);\n }\n\n #header-info {\n margin-top: 0;\n margin-bottom: 1.5rem;\n display: grid;\n grid-template-columns: 65px max-content 1fr;\n grid-template-areas:\n \"icon explanation explanation\"\n \"icon back comment\";\n grid-column-gap: 1.5em;\n }\n\n #header-info .icon-multiple-pages {\n grid-area: icon;\n padding: 0.5em;\n content: url(images/multiple-pages.svg);\n }\n\n #header-info .explanation {\n grid-area: explanation;\n font-size: 85%;\n }\n\n #header-info .back {\n grid-area: back;\n }\n\n #header-info .back::before {\n\n content: \"←\";\n margin-right: 0.5em;\n }\n\n #header-info .comment {\n grid-area: comment;\n scroll-behavior: smooth;\n }\n\n #header-info .comment::before {\n content: \"↓\";\n margin-right: 0.5em;\n }\n\n #header-info a.back,\n #header-info a.comment {\n font-size: 80%;\n font-weight: 600;\n border-bottom: none;\n text-transform: uppercase;\n color: #2e6db7;\n display: block;\n margin-top: 0.25em;\n letter-spacing: 0.25px;\n }\n\n\n\n\n This article is part of a discussion of the Ilyas et al. paper\n *“Adversarial examples are not bugs, they are features”.*\n You can learn more in the\n [main discussion article](/2019/advex-bugs-discussion/) .\n \n\n\n[Other Comments](/2019/advex-bugs-discussion/#commentaries)\n[Comment by Ilyas et al.](#rebuttal)\n\n\n Section 3.2 of Ilyas et al. (2019) shows that training a model on only adversarial errors leads to\n non-trivial generalization on the original test set. We show that these experiments are a specific case of\n learning from errors. We start with a counterintuitive result — we take a completely mislabeled training set\n (without modifying the inputs) and use it to train a model that generalizes to the original test set. We\n then show that this result, and the results of Ilyas et al. 
(2019), are a special case of model\n distillation. In particular, since the incorrect labels are generated using a trained model, information\n about the trained model is being “leaked” into the dataset.\n We begin with the following question: what if we took the images in the training set (without any\n adversarial perturbations) and mislabeled them? Since the inputs are unmodified and mislabeled, intuition\n says that a model trained on this dataset should not generalize to the correctly-labeled test set.\n Nevertheless, we show that this intuition fails — a model *can* generalize.\n We first train a ResNet-18 on the CIFAR-10 training set for two epochs. The model reaches a training\n accuracy of 62.5% and a test accuracy of 63.1%. Next, we run the model on all of the 50,000 training data\n points and relabel them according to the model’s predictions. Then, we filter out *all the correct\n predictions*. We are now left with an incorrectly labeled training set of size 18,768. We show four\n examples on the left of the Figure below:\n \n\n\n\n![](images/image1.png)\n\n[1](#figure-1)\n\n\n\n We then randomly initialize a new ResNet-18 and train it only on this mislabeled dataset. We train for 50\n epochs and reach an accuracy of 49.7% on the *original* test set. The new model has only ever seen\n incorrectly labeled, unperturbed images but can still non-trivially generalize.\n \n\n\nThis is Model Distillation Using Incorrect Predictions\n------------------------------------------------------\n\n\n\n How can this model and the models in Ilyas et al. (2019) generalize without seeing any correctly labeled\n data? Here, we show that since the incorrect labels are generated using a trained model, information is\n being “leaked” about that trained model into the mislabeled examples. In particular, this an indirect form\n of model distillation — training on this dataset allows a new\n model to somewhat recover the features of the original model.\n \n\n\n\n We first illustrate this distillation phenomenon using a two-dimensional problem. Then, we explore other\n peculiar forms of distillation for neural networks — -we transfer knowledge despite the inputs being from\n another task.\n \n\n\n### Two-dimensional Illustration of Model Distillation\n\n\n\n We construct a dataset of adversarial examples using a two-dimensional binary classification problem. We\n generate 32 random two-dimensional data points in [0,1]2[0,1]^2[0,1]2 and assign each point a random binary label. We\n then train a small feed-forward neural network on these examples, predicting 32/32 of the examples correctly\n (panel (a) in the Figure below).\n \n\n\n\n![](images/image2.png)\n\n[2](#figure-2)\n\n\n\n Next, we create adversarial examples for the original model using an l∞l\\_{\\infty}l∞​ ball of radius\n ϵ=0.12\\epsilon=0.12ϵ=0.12. In panel (a) of the Figure above, we display the ϵ\\epsilonϵ-ball around each training\n point. In panel (b), we show the adversarial examples which cause the model to change its prediction (from\n correct to incorrect). We train a new feed-forward neural network on this dataset, resulting in the model in\n panel (c).\n \n\n\n\n Although this new model has never seen a correctly labeled example, it is able to perform non-trivially on\n the original dataset, predicting 23/3223/3223/32 of the inputs correctly (panel (d) in the Figure). 
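A minimal sketch of this two-dimensional setup is given below, assuming PyTorch. The network width, optimizer settings, and PGD schedule are arbitrary illustrative choices rather than the configuration used to produce the panels above, so the exact 32/32 and 23/32 numbers are not guaranteed to reproduce.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.rand(32, 2)                                # 32 random points in [0, 1]^2
y = torch.randint(0, 2, (32,))                       # random binary labels

def make_mlp():
    return nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))

def fit(model, inputs, labels, steps=2000, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(inputs), labels).backward()
        opt.step()
    return model

f = fit(make_mlp(), X, y)                            # original model; should fit all 32 points

def linf_pgd(model, inputs, labels, eps=0.12, steps=40, step_size=0.02):
    """Untargeted L_inf PGD: increase the loss on the true label within the eps-box."""
    x_adv = inputs.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), labels)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()
            x_adv = torch.min(torch.max(x_adv, inputs - eps), inputs + eps)
    return x_adv.detach()

X_adv = linf_pgd(f, X, y)
with torch.no_grad():
    preds = f(X_adv).argmax(1)
flipped = preds != y                                 # keep only points whose prediction flipped
X_err, y_err = X_adv[flipped], preds[flipped]        # labels are the model's *incorrect* predictions

f_new = fit(make_mlp(), X_err, y_err)                # trained only on mislabeled adversarial points
with torch.no_grad():
    acc = (f_new(X).argmax(1) == y).float().mean()
print(f"accuracy of the new model on the original 32 points: {acc:.2f}")
```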
The new model’s\n decision boundary loosely matches the original model’s decision boundary, i.e., the original model has been\n somewhat distilled after training on its adversarial examples. This two-dimensional problem presents an\n illustrative version of the intriguing result that distillation can be performed using incorrect\n predictions.\n\n \n\n\n### \n Other Peculiar Forms of Distillation\n\n\n\n Our experiments show that we can distill models using mislabeled examples. In what other peculiar ways can\n we learn about the original model? Can we use only *out-of-domain* data?\n \n\n\n\n We train a simple CNN model on MNIST, reaching 99.1% accuracy. We then run this model on the FashionMNIST\n training set and save its argmax predictions. The resulting dataset is nonsensical to humans — a “dress” is\n labeled as an “8″.\n \n\n\n\n![](images/image3.png)\n\n[3](#figure-3)\n\n\n\n We then initialize a new CNN model and train it on this mislabeled FashionMNIST data. The resulting model\n reaches 91.04% accuracy on the MNIST test set. Furthermore, if we normalize the FashionMNIST images using\n the mean and variance statistics for MNIST, the model reaches 94.5% accuracy on the MNIST test set. This is\n another instance of recovering a functionally similar model to the original despite the new model only\n training on erroneous predictions.\n \n\n\n### \n Summary\n\n\n\n These results show that training a model using mislabeled adversarial examples is a special case of learning\n from prediction errors. In other words, the perturbations added to adversarial examples in Section 3.2 of\n Ilyas et al. (2019) are not necessary to enable learning.\n \n\n\n\n To cite Ilyas et al.’s response, please cite their\n [collection of responses](/2019/advex-bugs-discussion/original-authors/#citation).\n\n\n**Response\n Summary**: Note that since our experiments work across different architectures,\n “distillation” in weight space does not occur. The only distillation that can\n arise is “feature space” distillation, which is actually exactly our hypothesis.\n In particular, feature-space distillation would not work in [World 1](/2019/advex-bugs-discussion/original-authors/#world1) — if the\n adversarial examples we generated did not exploit useful features, we should not\n have been able to “distill” a useful model from them. (In fact, one might think\n of normal model training as just “feature distillation” of the humans that\n labeled the dataset.) Furthermore, the hypothesis that all we need is enough\n model-consistent points in order to recover a model, seems to be disproven by\n Preetum’s [“bugs-only dataset”](/2019/advex-bugs-discussion/response-5)\n and other (e.g. ) settings. \n**Response**: Since our experiments work across different architectures,\n “distillation” in weight space cannot arise. Thus, from what we understand, the\n “distillation” hypothesis suggested here is referring to “feature distillation”\n (i.e. getting models which use the same features as the original), which is\n actually precisely our hypothesis too. Notably, this feature distillation would\n not be possible if adversarial examples did not rely on “flipping” features that\n are good for classification (see [World\n 1](/2019/advex-bugs-discussion/original-authors/#world1) and\n [World 2](/2019/advex-bugs-discussion/original-authors/#world2)) — in that case, the distilled\n model would only use features that generalize poorly, and would thus generalize\n poorly itself. 
\n\n\n Moreover, we would argue that in the experiments presented (learning from\n mislabeled data), the same kind of distillation is happening. For instance, a\n moderately accurate model might associate “green background” with “frog” thus\n labeling “green” images as “frogs” (e.g., the horse in the comment’s figure).\n Training a new model on this dataset will thus associate “green” with “frog”\n achieving non-trivial accuracy on the test set (similarly for the “learning MNIST\n from Fashion-MNIST” experiment in the comment). This corresponds exactly to\n learning features from labels, akin to how deep networks “distill” a good\n decision boundary from human annotators. In fact, we find these experiments\n a very interesting illustration of feature distillation that complements\n our findings. \n\n\n We also note that an analogy to logistic regression here is only possible\n due to the low VC-dimension of linear classifiers (namely, these classifiers\n have dimension ddd). In particular, given any classifier with VC-dimension\n kkk, we need at least kkk points to fully specify the classifier. Conversely, neural\n networks have been shown to have extremely large VC-dimension (in particular,\n bigger than the size of the training set ). So even though\n labelling d+1d+1d+1 random\n points model-consistently is sufficient to recover a linear model, it is not\n necessarily sufficient to recover a deep neural network. For instance, Milli et\n al. are not able to reconstruct a ResNet-18\n using only its predictions on random Gaussian inputs. (Note that we are using a\n ResNet-50 in our experiments.) \n\n\n Finally, it seems that the only potentially problematic explanation for\n our experiments (namely, that enough model-consistent points can recover a\n classifier) is [disproved by Preetum’s experiment](/2019/advex-bugs-discussion/response-5).\n In particular, Preetum is able to design a\n dataset where training on mislabeled inputs *that are model-consistent*\n does not at all recover the decision boundary of the original model. More\n generally, the “model distillation” perspective raised here is unable to\n distinguish between the dataset created by Preetum below, and those created\n with standard PGD (as in our D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ and\n D^rand\\widehat{\\mathcal{D}}\\_{rand}D\nrand​ datasets).\n \n\n\n\n\n You can find more responses in the [main discussion article](/2019/advex-bugs-discussion/).", "date_published": "2019-08-06T20:00:00Z", "authors": ["Eric Wallace"], "summaries": ["Section 3.2 of Ilyas et al. (2019) shows that training a model on only adversarial errors leads to non-trivial generalization on the original test set. 
We show that these experiments are a specific case of learning from errors."], "doi": "10.23915/distill.00019.6", "journal_ref": "distill-pub", "bibliography": [{"link": "http://arxiv.org/pdf/1503.02531.pdf", "title": "Distilling the Knowledge in a Neural Network"}]} {"id": "bda18cad07b62a0dff8e37b69a396da0", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Discussion and Author Responses", "url": "https://distill.pub/2019/advex-bugs-discussion/original-authors", "source": "distill", "source_type": "blog", "text": "#rebuttal,\n .comment-info {\n background-color: hsl(54, 78%, 96%);\n border-left: solid hsl(54, 33%, 67%) 1px;\n padding: 1em;\n color: hsla(0, 0%, 0%, 0.67);\n }\n\n #header-info {\n margin-top: 0;\n margin-bottom: 1.5rem;\n display: grid;\n grid-template-columns: 65px max-content 1fr;\n grid-template-areas:\n \"icon explanation explanation\"\n \"icon back comment\";\n grid-column-gap: 1.5em;\n }\n\n #header-info .icon-multiple-pages {\n grid-area: icon;\n padding: 0.5em;\n content: url(images/multiple-pages.svg);\n }\n\n #header-info .explanation {\n grid-area: explanation;\n font-size: 85%;\n }\n\n #header-info .back {\n grid-area: back;\n }\n\n #header-info .back::before {\n\n content: \"←\";\n margin-right: 0.5em;\n }\n\n #header-info .comment {\n grid-area: comment;\n scroll-behavior: smooth;\n }\n\n #header-info .comment::before {\n content: \"↓\";\n margin-right: 0.5em;\n }\n\n #header-info a.back,\n #header-info a.comment {\n font-size: 80%;\n font-weight: 600;\n border-bottom: none;\n text-transform: uppercase;\n color: #2e6db7;\n display: block;\n margin-top: 0.25em;\n letter-spacing: 0.25px;\n }\n\n\n\n\n This article is part of a discussion of the Ilyas et al. paper\n *“Adversarial examples are not bugs, they are features”.*\n You can learn more in the\n [main discussion article](/2019/advex-bugs-discussion/) .\n \n\n\n[Other Comments](/2019/advex-bugs-discussion/#commentaries)\n[Comment by Ilyas et al.](#rebuttal)\n\n We want to thank all the commenters for the discussion and for spending time\n designing experiments analyzing, replicating, and expanding upon our results.\n These comments helped us further refine our understanding of adversarial\n examples (e.g., by visualizing useful non-robust features or illustrating how\n robust models are successful at downstream tasks), but also highlighted aspects\n of our exposition that could be made more clear and explicit. \n\n\n Our response is organized as follows: we first recap the key takeaways from\n our paper, followed by some clarifications that this discussion brought to\n light. We then address each comment individually, prefacing each longer response\n with a quick summary. 
\n\n\n\n We also recall some terminology from\n [our paper](https://arxiv.org/abs/1905.02175) that features in our responses:\n \n\n\n *Datasets*: Our experiments involve the following variants of the given\n dataset DDD (consists of sample-label pairs (xxx, yyy)) The\n exact details for construction of the datasets can be found in our\n [paper](https://arxiv.org/abs/1905.02175), and\n the datasets themselves can be downloaded at [http://git.io/adv-datasets](http://git.io/adv-\ndatasets) :\n\n \n\n* D^R\\widehat{\\mathcal{D}}\\_{R}D\nR​: Restrict each sample xxx to features that are used by a *robust*\n model.\n* D^NR\\widehat{\\mathcal{D}}\\_{NR}D\nNR​: Restrict each sample xxx to features that are used by a *standard*\n model.\n* D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​: Adversarially perturb each sample xxx using a standard model in a\n *consistent manner* towards class y+1modCy + 1\\mod Cy+1modC.\n* D^rand\\widehat{\\mathcal{D}}\\_{rand}D\nrand​: Adversarially perturb each sample xxx using a standard model\n towards a\n *uniformly random* class.\n\n\n\nMain points\n-----------\n\n\n### *Takeaway #1:* Adversarial examples as innate\n brittleness vs. useful features (sensitivity vs reliance)\n\n\nThe goal of our experiments with non-robust features is to understand\n how adversarial examples fit into the following two worlds:\n \n\n* **World 1: Adversarial examples exploit directions irrelevant for\n classification.** In this world, adversarial examples arise from\n sensitivity to a signal that is unimportant for classification. For\n instance, suppose there is a feature f(x)f(x)f(x) that is not generalizing\n on the dataNote that f(x)f(x)f(x) could be correlated with the label\n in the training set but not in expectation/on the test\n set., but the model for some reason puts a lot of weight on\n it, *i.e., this sensitivity is an aberration “hallucinated” by the\n model*. Adversarial examples correspond to perturbing the input\n to change this feature by a small amount. This perturbation, however,\n would be orthogonal to how the model actually typically makes\n predictions (on natural data). (Note that this is just a single\n illustrative example — the key characteristic of this world is that\n features “flipped” when making adversarial examples are separate from the\n ones actually used to classify inputs.)\n* **World 2: Adversarial examples exploit features that are useful for\n classification.** In this world, adversarial perturbations\n can correspond to changes in the input that manipulate features relevant to\n classification. Thus, models base their (mostly correct) predictions on\n features that can be altered via small perturbations.\n\n\n\n\n Recent works provide some theoretical evidence that adversarial examples\n can arise from finite-sample overfitting\n or\n other concentration of\n measure-based phenomena, thus\n supporting\n the “World 1” viewpoint on\n adversarial examples. The question is: is “World 1” the right way to\n think about adversarial examples? If so, this would be good news — under\n this mindset, adversarial robustness might just be a matter of getting\n better, “bug-free” models (for example, by reducing overfitting).\n \n\n\n\n Our findings show, however, that the “World 1” mindset alone does not\n fully capture adversarial vulnerability; “World 2“ must be taken into\n account. Adversarial examples can — and do, if generated via standard\n methods — rely on “flipping” features that are actually useful for\n classification. 
Specifically, we show that by relying *only* on\n perturbations corresponding to standard first-order adversarial attacks\n one can learn models that generalize to the test set. This means that\n these perturbations truly correspond to directions that are relevant for\n classifying new, unmodified inputs from the dataset. In summary, our\n message is: \n\n\n**Adversarial vulnerability can arise from\n flipping features in the data that are useful for\n classification of *correct* inputs.**\n\n\nIn particular, note that our experiments (training on the\n D^rand\\widehat{\\mathcal{D}}\\_{rand}D\nrand​ and D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​\n datasets) would not have the same result in World 1. Concretely, in\n the “cartoon example” of World 1 presented above, the classifier puts large weight www on a feature\n coordinate f(x)f(x)f(x) that is not generalizing for “natural images.” Then,\n adversarial examples towards either class can be made by simply making\n f(x)f(x)f(x) slightly positive or slightly negative. However, a classifier\n learned from these adversarial examples would *not* generalize to\n the true dataset (since it would learn to depend on a feature that is not\n useful on natural images).\n\n\n### *Takeaway #2*: Learning from “meaningless” data\n\n\nAnother implication of our experiments is that models may not even\n *need* any information which we as humans view as “meaningful” in order\n to do well (in the generalization sense) on standard image datasets. (Our\n D^NR\\widehat{\\mathcal{D}}\\_{NR}D\nNR​ dataset is a perfect example of this.)\n\n\n### *Takeaway #3*: Cannot fully attribute adversarial\n examples to X\n\n\nWe also show that we cannot\n conclusively fully attribute adversarial examples to any specific aspect of the\n standard training framework (BatchNorm, ResNets, SGD, etc.). In particular, our\n “robust dataset” D^R\\widehat{\\mathcal{D}}\\_{R}D\nR​ is a counterexample to any claim of the form “given any\n dataset, training with BatchNorm/SGD/ResNets/overparameterization/etc. leads to\n adversarial vulnerability” (as classifiers with all of these components,\n when trained on D^R\\widehat{\\mathcal{D}}\\_{R}D\nR​, generalize robustly to\n CIFAR-10). In that sense, the dataset clearly plays a role in\n the emergence of adversarial examples. (Also, further corroborating this is\n Preetum’s “adversarial squares” dataset [here](#PreetumResponse),\n where standard networks do not become adversarially vulnerable as long as there is no\n label noise or overfitting.) \n\n\nA Few Clarifications\n--------------------\n\n\nIn addition to further refining our understanding of adversarial examples,\n the comments were also very useful in pointing out which aspects of our\n claims could benefit from further clarification. To this end, we make these\n clarifications below in the form of a couple “non-claims” — claims that we did\n *not* intend to make. We’ll also update our paper in order to make\n these clarifications explicit.\n\n\n### Non-Claim #1: “Adversarial examples *cannot* be bugs”\n\n\n\n Our goal is to say that since adversarial examples can arise from\n well-generalizing features, simply patching up the “bugs” in ML models will\n not get rid of adversarial vulnerability — we also need to make sure our\n models learn the right features. This, however, does not mean that\n adversarial vulnerability *cannot* arise from “bugs”. In fact, note\n that several papers \n\n have proven that adversarial vulnerability can\n arise from what we refer to as “bugs,” e.g. 
finite-sample overfitting,\n concentration of measure, high dimensionality, etc. Furthermore,\n We would like to thank Preetum for pointing out that this issue may be a\n natural misunderstanding, and for exploring this point in even more depth\n in his response below.\n \n\n\n### Non-Claim #2: “Adversarial examples are purely a result of the dataset”\n\n\n Even though we [demonstrated](#cannotpin) that datasets do\n play a role in the emergence of adversarial examples, we do not intend to\n claim that this role is exclusive. In particular, just because the data\n *admits* non-robust functions that are well-generalizing (useful\n non-robust features), doesn’t mean that *any* model will learn to\n pick up these features. For example, it could be that the well-generalizing\n features that cause adversarial examples are only learnable by certain\n architectures. However, we do show that there is a way, via only\n altering the dataset, to induce robust models — thus, our results indicate\n that adversarial vulnerability indeed cannot be completely disentangled\n from the dataset (more on this in [Takeaway #3](#cannotpin)).\n\n\n\n The following responses may also be viewed in context with the comment they’re addressing. [Return to the discussion article](/2019/advex-bugs-discussion/#commentaries) for a list of\n summaries.\n \nResponses to comments\n---------------------\n\n\n### Adversarial Example Researchers Need to Expand What is Meant by\n “Robustness” (Dan Hendrycks, Justin Gilmer)\n\n\n**Response Summary**:\n The demonstration of models that learn from only high-frequency components of the data is\n an interesting finding that provides us with another way our models can learn from data that\n appears “meaningless” to humans.\n The authors fully agree that studying a wider notion of robustness will become increasingly\n important in ML, and will help us get a better grasp of features we actually want our models\n to rely on.\n \n\n\n**Response**: The fact that models can learn to classify correctly based\n purely on the high-frequency component of the training set is neat! This nicely\n complements one of our [takeaways](#takeaway1): models will rely on\n useful features even if these features appear incomprehensible to humans. \n\n\n Also, while non-robustness to noise can be an indicator of models using\n non-robust useful features, this is not how the phenomenon was predominantly viewed.\n More often than not, the brittleness of ML models to noise was instead regarded\n as an innate shortcoming of the models, e.g., due to poor margins. (This view is\n even more prevalent in the adversarial robustness community.) Thus, it was often\n expected that progress towards “better”/”bug-free” models will lead to them\n being more robust to noise and adversarial examples. \n\n\n Finally, we fully agree that the set of LpL\\_pLp​-bounded perturbations is a very\n small subset of the perturbations we want our models to be robust to. Note,\n however, that the focus of our work is human-alignment — to that end, we\n demonstrate that models rely on features sensitive to patterns that are\n imperceptible to humans. 
Thus, the existence of other families of\n incomprehensible but useful features would provide even more support for our\n thesis — identifying and characterizing such features is an interesting area for\n future research.\n\n \n\n\n### Robust Feature Leakage (Gabriel Goh)\n\n\n**Response Summary**:\n This is a nice in-depth investigation that highlights (and neatly visualizes) one of\n the motivations for designing the D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ dataset.\n \n\n\n**Response**: This comment raises a valid concern which was in fact one of\n the primary reasons for designing the D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ dataset.\n In particular, recall the construction of the D^rand\\widehat{\\mathcal{D}}\\_{rand}D\nrand​\n dataset: assign each input a random target label and do PGD towards that label.\n Note that unlike the D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ dataset (in which the\n target class is deterministically chosen), the D^rand\\widehat{\\mathcal{D}}\\_{rand}D\nrand​\n dataset allows for robust features to actually have a (small) positive\n correlation with the label. \n\n\nTo see how this can happen, consider the following simple setting: we have a\n single feature f(x)f(x)f(x) that is 111 for cats and −1-1−1 for dogs. If ϵ=0.1\\epsilon = 0.1ϵ=0.1\n then f(x)f(x)f(x) is certainly a robust feature. However, randomly assigning labels\n (as in the dataset D^rand\\widehat{\\mathcal{D}}\\_{rand}D\nrand​) would make this feature\n uncorrelated with the assigned label, i.e., we would have that E[f(x)⋅y]=0E[f(x)\\cdot y] = 0E[f(x)⋅y]=0. Performing a\n targeted attack might in this case induce some correlation with the\n assigned label, as we could have E[f(x+η⋅∇f(x))⋅y]>E[f(x)⋅y]=0\\mathbb{E}[f(x+\\eta\\cdot\\nabla\n f(x))\\cdot y] > \\mathbb{E}[f(x)\\cdot y] = 0E[f(x+η⋅∇f(x))⋅y]>E[f(x)⋅y]=0, allowing a model to learn\n to correctly classify new inputs. \n\n\nIn other words, starting from a dataset with no features, one can encode\n robust features within small perturbations. In contrast, in the\n D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ dataset, the robust features are *correlated\n with the original label* (since the labels are permuted) and since they are\n robust, they cannot be flipped to correlate with the newly assigned (wrong)\n label. Still, the D^rand\\widehat{\\mathcal{D}}\\_{rand}D\nrand​ dataset enables us to show\n that (a) PGD-based adversarial examples actually alter features in the data and\n (b) models can learn from human-meaningless/mislabeled training data. The\n D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ dataset, on the other hand, illustrates that the\n non-robust features are actually sufficient for generalization and can be\n preferred over robust ones in natural settings.\n\n\nThe experiment put forth in the comment is a clever way of showing that such\n leakage is indeed possible. 
However, we want to stress (as the comment itself\n does) that robust feature leakage does *not* have an impact on our main\n thesis — the D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ dataset explicitly controls\n for robust\n feature leakage (and in fact, allows us to quantify the models’ preference for\n robust features vs non-robust features — see Appendix D.6 in the\n [paper](https://arxiv.org/abs/1905.02175)).\n\n\n### Two Examples of Useful, Non-Robust Features (Gabriel Goh)\n\n\n**Response Summary**: These experiments with linear models are a great first step towards visualizing\n non-robust features for real datasets (and thus a neat corroboration of their existence).\n Furthermore, the theoretical construction of “contaminated” non-robust features opens an\n interesting direction of developing a more fine-grained definition of features.\n \n\n\n**Response**: These experiments (visualizing the robustness and\n usefulness of different linear features) are very interesting! They both further\n corroborate the existence of useful, non-robust features and make progress\n towards visualizing what these non-robust features actually look like. \n\n\nWe also appreciate the point made by the provided construction of non-robust\n features (as defined in our theoretical framework) that are combinations of\n useful+robust and useless+non-robust features. Our theoretical framework indeed\n enables such a scenario, even if — as the commenter already notes — our\n experimental results do not. (In this sense, the experimental results and our [main\n takeaway](#takeaway1) are actually stronger than our theoretical\n framework technically captures.) Specifically, in such a scenario, during the\n construction of the D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ dataset, only the non-robust\n and useless term of the feature would be flipped. Thus, a classifier trained on\n such a dataset would associate the predictive robust feature with the\n *wrong* label and would thus not generalize on the test set. In contrast,\n our experiments show that classifiers trained on D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​\n do generalize.\n\n\nOverall, our focus while developing our theoretical framework was on\n enabling us to formally describe and predict the outcomes of our experiments. As\n the comment points out, putting forth a theoretical framework that captures\n non-robust features in a very precise way is an important future research\n direction in itself. \n\n\n### Adversarially Robust Neural Style Transfer\n (Reiichiro Nakano)\n\n\n**Response Summary**:\n Very interesting results that highlight the potential role of non-robust features and the\n utility of robust models for downstream tasks. We’re excited to see what kind of impact robustly\n trained models will have in neural network art!\n Inspired by these findings, we also take a deeper dive into (non-robust) VGG, and find some\n interesting links between robustness and style transfer. \n\n\n**Response**: These experiments are really cool! It is interesting that\n preventing the reliance of a model on non-robust features improves performance\n on style transfer, even without an explicit task-related objective (i.e. we\n didn’t train the networks to be better for style transfer). \n\n\n We also found the discussion of VGG as a “mysterious network” really\n interesting — it would be valuable to understand what factors drive style transfer\n performance more generally. 
Though not a complete answer, we made a couple of\n observations while investigating further: \n\n\n*Style transfer does work with AlexNet:* One wrinkle in the idea that\n robustness is the “secret ingredient” to style transfer could be that VGG is not\n the most naturally robust network — AlexNet is. However, based on our own\n testing, style transfer does seem to work with AlexNet out-of-the-box, as\n long as we use a few early layers in the network (in a similar manner to\n VGG): \n\n\n\n\nStyle transfer using AlexNet, using\n conv\\_1 through conv\\_4.\n\n\n![](images/alexnetworks.png)\n\n\n\n Observe that even though style transfer still works, there are checkerboard\n patterns emerging — this seems to be a similar phenomenon to the one noticed\n in the comment in the context of robust models.\n This might be another indication that these two phenomena (checkerboard\n patterns and style transfer working) are not as intertwined as previously\n thought.\n \n\n\n*From prediction robustness to layer robustness:* Another\n potential wrinkle here is that both AlexNet and VGG are not that\n much more robust than ResNets (for which style transfer completely fails),\n and yet seem to have dramatically better performance. To try to\n explain this, recall that style transfer is implemented as a minimization of a\n combined objective consisting of a style loss and a content loss. We found,\n however, that the network we use to compute the\n style loss is far more important\n than the one for the content loss. The following demo illustrates this — we can\n actually use a non-robust ResNet for the content loss and everything works just\n fine:\n\n\n\n\nStyle transfer seems to be rather\n invariant to the choice of content network used, and very sensitive\n to the style network used.\n\n\n![](images/stylematters.png)\n\n\nTherefore, from now on, we use a fixed ResNet-50 for the content loss as a\n control, and only worry about the style loss. \n\n\nNow, note that the way that style loss works is by using the first few\n layers of the relevant network. Thus, perhaps it is not about the robustness of\n VGG’s predictions, but instead about the robustness of the layers that we actually use\n for style transfer? \n\n\n To test this hypothesis, we measure the robustness of a layer fff as:\n \n\n\nR(f)=Ex1∼D[maxx′∥f(x′)−f(x1)∥2]Ex1,x2∼D[∥f(x1)−f(x2)∥2]\n R(f) = \\frac{\\mathbb{E}\\_{x\\_1\\sim D}\\left[\\max\\_{x’} \\|f(x’) - f(x\\_1)\\|\\_2 \\right]}\n {\\mathbb{E}\\_{x\\_1, x\\_2 \\sim D}\\left[\\|f(x\\_1) - f(x\\_2)\\|\\_2\\right]}\n R(f)=Ex1​,x2​∼D​[∥f(x1​)−f(x2​)∥2​]Ex1​∼D​[maxx′​∥f(x′)−f(x1​)∥2​]​\n Essentially, this quantity tells us how much we can change the\n output of that layer f(x)f(x)f(x) within a small ball, normalized by how far apart\n representations are between images in general. 
We've plotted this value for the first few layers in a couple of different networks below:

The robustness R(f) of the first four layers of VGG16, AlexNet, and robust/standard ResNet-50 trained on ImageNet.

![](images/robustnesses.png)

Here, it becomes clear that the first few layers of VGG and AlexNet are actually almost as robust as the first few layers of the robust ResNet! This is perhaps a more convincing indication that robustness might have something to do with VGG's success in style transfer after all.

Finally, suppose we restrict style transfer to only use a single layer of the network when computing the style loss. (Usually style transfer uses several layers in the loss function to get the most visually appealing results — here we're only interested in whether or not style transfer works, i.e. whether it actually confers some style onto the image.) Again, the more robust layers seem to indeed work better for style transfer! Since all of the layers in the robust ResNet are robust, style transfer yields non-trivial results even using the last layer alone. Conversely, VGG and AlexNet seem to excel in the earlier layers (where they are non-trivially robust) but fail when using exclusively later (non-robust) layers:

Style transfer using a single layer. The names of the layers and their robustness R(f) are printed below each style transfer result. We find that for both networks, the robust layers seem to work (for the robust ResNet, every layer is robust).

![](images/styletransfer.png)

Of course, there is much more work to be done here, but we are excited to see further work into understanding the role of both robustness and VGG in network-based image manipulation.

### Adversarial Examples are Just Bugs, Too (Preetum Nakkiran)

**Response Summary**: A fine-grained look at adversarial examples that neatly complements our thesis (i.e. that non-robust features exist and adversarial examples arise from them, see [Takeaway #1](#takeaway1)) while providing an example of adversarial examples that arise from "bugs". The fact that the constructed "bugs"-based adversarial examples don't transfer constitutes further evidence for the link between transferability and (non-robust) features.

**Response**: As mentioned [above](#nonclaim1), we did not intend to claim that adversarial examples arise *exclusively* from (useful) features but rather that useful non-robust features exist and are thus (at least partially) responsible for adversarial vulnerability. In fact, prior work already shows how in theory adversarial examples can arise from insufficient samples or finite-sample overfitting, and the experiments presented here (particularly, the adversarial squares) constitute a neat real-world demonstration of these facts.

Our main thesis that "adversarial examples will not just go away as we fix bugs in our models" is not contradicted by the existence of adversarial examples stemming from "bugs." As long as adversarial examples can stem from non-robust features (which the commenter seems to agree with), fixing these bugs will not solve the problem of adversarial examples.

Moreover, with regards to feature "leakage" from PGD, recall that in our \widehat{\mathcal{D}}_{det} dataset, the non-robust features are associated with the correct label whereas the robust features are associated with the wrong one.
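For concreteness, here is a schematic sketch of the kind of construction behind a \widehat{\mathcal{D}}_{det}-style dataset: each image is perturbed toward a deterministically chosen target class and then relabeled as that target. This is our simplified illustration, not the paper's exact protocol; the target rule (e.g. t = (y + 1) mod C), the attack parameters, and the helper names `model`, `pgd_towards`, and `make_det_dataset` are assumptions, so please consult the paper for the real details.

```python
import torch
import torch.nn.functional as F

def pgd_towards(model, x, target, eps=0.5, steps=100, step_size=0.1):
    """Perturb x within an l2 ball so that `model` assigns it to `target`
    (illustrative PGD; the paper specifies the actual attack setup)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), target)
        loss.backward()
        with torch.no_grad():
            g = delta.grad
            delta -= step_size * g / (g.norm() + 1e-12)                       # make the target class more likely
            delta *= torch.clamp(eps / (delta.norm() + 1e-12), max=1.0)       # project back into the l2 ball
        delta.grad.zero_()
    return (x + delta).detach()

def make_det_dataset(model, dataset, num_classes):
    """Build a D_det-style dataset: pick a deterministic target t for each label y,
    perturb x toward t, and relabel the perturbed image as t."""
    new_examples = []
    for x, y in dataset:
        t = torch.tensor([(int(y) + 1) % num_classes])
        x_adv = pgd_towards(model, x.unsqueeze(0), t)
        new_examples.append((x_adv.squeeze(0), int(t)))
    return new_examples
```

The key property is visible in the construction itself: the new label t agrees with the (adversarially introduced) non-robust features but disagrees with the original image's robust features.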
We wanted to emphasize that, as shown in [Appendix D.6](LINK), models trained on our \widehat{\mathcal{D}}_{det} dataset actually generalize *better* to the non-robust feature-label association than to the robust feature-label association. In contrast, if PGD introduced a small "leakage" of non-robust features, then we would expect the trained model to still predominantly use the robust feature-label association.

That said, the experiments cleverly zoom in on some more fine-grained nuances in our understanding of adversarial examples. One particular thing that stood out to us is that by creating a set of adversarial examples that are *explicitly* non-transferable, one also prevents new classifiers from learning features from that dataset. This finding thus makes the connection between transferability of adversarial examples and their containing generalizing features even stronger! Indeed, we can add the constructed dataset into our "\widehat{\mathcal{D}}_{det} learnability vs transferability" plot (Figure 3 in the paper) — the point corresponding to this dataset fits neatly onto the trendline!

Relationship between models' reliance on non-robust features and their susceptibility to transfer attacks

![](diagrams/transfer.png)

### Learning from Incorrectly Labeled Data (Eric Wallace)

**Response Summary**: These experiments are a creative demonstration of the fact that the underlying phenomenon of learning features from "human-meaningless" data can actually arise in a broad range of settings.

**Response**: Since our experiments work across different architectures, "distillation" in weight space cannot arise. Thus, from what we understand, the "distillation" hypothesis suggested here is referring to "feature distillation" (i.e. getting models which use the same features as the original), which is actually precisely our hypothesis too. Notably, this feature distillation would not be possible if adversarial examples did not rely on "flipping" features that are good for classification (see [World 1](#world1) and [World 2](#world2)) — in that case, the distilled model would only use features that generalize poorly, and would thus generalize poorly itself.

Moreover, we would argue that in the experiments presented (learning from mislabeled data), the same kind of distillation is happening. For instance, a moderately accurate model might associate "green background" with "frog", thus labeling "green" images as "frogs" (e.g., the horse in the comment's figure). Training a new model on this dataset will thus associate "green" with "frog", achieving non-trivial accuracy on the test set (similarly for the "learning MNIST from Fashion-MNIST" experiment in the comment). This corresponds exactly to learning features from labels, akin to how deep networks "distill" a good decision boundary from human annotators. In fact, we find these experiments a very interesting illustration of feature distillation that complements our findings.

We also note that an analogy to logistic regression here is only possible due to the low VC-dimension of linear classifiers (namely, these classifiers have dimension d). In particular, given any classifier with VC-dimension k, we need at least k points to fully specify the classifier. Conversely, neural networks have been shown to have extremely large VC-dimension (in particular, bigger than the size of the training set).
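To make the linear part of this comparison concrete, here is a small self-contained illustration of our own (not an experiment from the comment): labelling enough random points with a linear "teacher" model's own predictions and refitting tends to recover essentially the same decision boundary. The teacher, the number of points, and the regularization setting are arbitrary choices for the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 50

# "Teacher": a fixed linear classifier with a random weight vector.
w_teacher = rng.standard_normal(d)

# Label random points model-consistently, i.e. with the teacher's own predictions.
X = rng.standard_normal((5 * d, d))
y = (X @ w_teacher > 0).astype(int)

# "Student": refit a (weakly regularized) linear model on those labels alone.
student = LogisticRegression(C=1e6, max_iter=10_000).fit(X, y)
w_student = student.coef_.ravel()

# The two decision boundaries tend to align closely, even though the points
# themselves are pure noise and carry no information beyond the teacher's labels.
cos = w_student @ w_teacher / (np.linalg.norm(w_student) * np.linalg.norm(w_teacher))
print(f"teacher/student weight alignment: {cos:.3f}")
```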
So even though\n labelling\n d+1d+1d+1 random\n points model-consistently is sufficient to recover a linear model, it is not\n necessarily sufficient to recover a deep neural network. For instance, Milli et\n al. are not able to reconstruct a ResNet-18\n using only its predictions on random Gaussian inputs. (Note that we are using a\n ResNet-50 in our experiments.) \n\n\n Finally, it seems that the only potentially problematic explanation for\n our experiments (namely, that enough model-consistent points can recover a\n classifier) is disproved by the experiment done by Preetum (see [LINK](#PreetumResponse)). In\n particular, Preetum is able to design a\n dataset where training on mislabeled inputs *that are model-consistent*\n does not at all recover the decision boundary of the original model. More\n generally, the “model distillation” perspective raised here is unable to\n distinguish between the dataset created by Preetum below, and those created\n with standard PGD (as in our D^det\\widehat{\\mathcal{D}}\\_{det}D\ndet​ and\n D^rand\\widehat{\\mathcal{D}}\\_{rand}D\nrand​ datasets).", "date_published": "2019-08-06T20:00:00Z", "authors": ["Logan Engstrom", "Andrew Ilyas", "Aleksander Madry", "Shibani Santurkar", "Dimitris Tsipras"], "summaries": [], "doi": "10.23915/distill.00019.7", "journal_ref": "distill-pub", "bibliography": []} {"id": "a9e837be3cded3d2c70b5d5864747eb6", "title": "Open Questions about Generative Adversarial Networks", "url": "https://distill.pub/2019/gan-open-problems", "source": "distill", "source_type": "blog", "text": "By some metrics, research on Generative Adversarial Networks (GANs) has progressed substantially in the past 2 years.\n Practical improvements to image synthesis models are being made almost too quickly to keep up with:\n \n\n\n\n![](images/gan-progress.png)\nOdena et al., 2016\nMiyato et al., 2017\nZhang et al., 2018\nBrock et al., 2018\n\n\n However, by other metrics, less has happened. 
For instance, there is still widespread disagreement about how GANs should be evaluated.\n Given that current image synthesis benchmarks seem somewhat saturated, we think now is a good time to reflect on research goals for this sub-field.\n \n\n\n\n Lists of open problems have helped other fields with this.\n This article suggests open research problems that we’d be excited for other researchers to work on.\n We also believe that writing this article has clarified our thinking about\n GANs, and we would encourage other researchers to write similar articles about their own sub-fields.\n We assume a fair amount of background (or willingness to look things up)\n because we reference too many results to explain all those results in detail.\n \n\n\nWhat are the Trade-Offs Between GANs and other Generative Models?\n-----------------------------------------------------------------\n\n\n\n In addition to GANs, two other types of generative model are currently popular: Flow Models and Autoregressive Models\n \n This statement shouldn’t be taken too literally.\n Those are useful terms for describing fuzzy clusters in ‘model-space’, but there are models\n that aren’t easy to describe as belonging to just one of those clusters.\n I’ve also left out VAEs entirely;\n they’re arguably no longer considered state-of-the-art at any tasks of record.\n .\n Roughly speaking, Flow Models apply a\n stack of invertible transformations to a sample from a prior\n so that exact log-likelihoods of observations can be computed.\n On the other hand, Autoregressive Models factorize the\n distribution over observations into conditional distributions\n and process one component of the observation at a time (for images, they may process one pixel\n at a time.)\n Recent research suggests that these models have different\n performance characteristics and trade-offs.\n We think that accurately characterizing these trade-offs and deciding whether they are intrinsic\n to the model families is an interesting open question.\n \n\n\n\n For concreteness, let’s temporarily focus on the difference in computational cost between GANs and Flow Models.\n At first glance, Flow Models seem like they might make GANs unnecessary.\n Flow Models allow for exact log-likelihood computation and exact inference,\n so if training Flow Models and GANs had the same computational cost, GANs might not be useful.\n A lot of effort is spent on training GANs,\n so it seems like we should care about whether Flow Models make GANs obsolete\n \n Even in this case, there might still be other reasons to use adversarial training in contexts like\n image-to-image translation.\n It also might still make sense to combine adversarial training with maximum-likelihood training.\n .\n \n\n\n\n However, there seems to be a substantial gap between the computational cost of training GANs and Flow Models.\n To estimate the magnitude of this gap, we can consider two models trained on datasets of human faces.\n The GLOW model is trained to generate 256x256 celebrity faces using\n 40 GPUs for 2 weeks and about 200 million parameters.\n In contrast, progressive GANs are trained on a similar face dataset\n with 8 GPUs for 4 days, using about 46 million parameters, to generate 1024x1024 images.\n Roughly speaking, the Flow Model took 17 times more GPU days and 4 times more parameters\n to generate images with 16 times fewer pixels.\n This comparison isn’t perfect,\n \n For instance, it’s possible that the progressive growing\n technique could be applied to Flow Models as well.\n 
\n but it gives you a sense of things.\n \n\n\n\n Why are the Flow Models less efficient?\n We see two possible reasons:\n First, maximum likelihood training might be computationally harder to do than adversarial training.\n In particular, if any element of your training set is assigned zero probability by your generative model,\n you will be penalized infinitely harshly!\n A GAN generator, on the other hand, is only penalized indirectly for assigning zero probability to training set elements,\n and this penalty is less harsh.\n\n Second, normalizing flows might be an inefficient way to represent certain functions.\n Section 6.1 of does some small experiments on expressivity, but at\n present we’re not aware of any in-depth analysis of this question.\n \n\n\n\n We’ve discussed the trade-off between GANs and Flow Models, but what about Autoregressive Models?\n It turns out that Autoregressive Models can be expressed as Flow Models\n (because they are both reversible) that are not parallelizable.\n \n Parallelizable is somewhat imprecise in this context.\n We mean that sampling from Flow Models must in general be done sequentially, one observation at a time.\n There may be ways around this limitation though.\n \n It also turns out that Autoregressive Models are more time and parameter efficient than Flow Models.\n Thus, GANs are parallel and efficient but not reversible,\n Flow Models are reversible and parallel but not efficient, and\n Autoregressive models are reversible and efficient, but not parallel.\n \n\n\n\n\n| | Parallel | Efficient | Reversible |\n| --- | --- | --- | --- |\n| GANs | Yes | Yes | No |\n| Flow Models | Yes | No | Yes |\n| Autoregressive Models | No | Yes | Yes |\n\n\n\n This brings us to our first open problem:\n \n\n\n\nProblem 1\n\nWhat are the fundamental trade-offs between GANs and other generative models?\n\n\nIn particular, can we make some sort of CAP Theorem type statement about reversibility, parallelism, and parameter/time efficiency?\n\n\n\n\n\n One way to approach this problem could be to study more models that are a hybrid of multiple model families.\n This has been considered for hybrid GAN/Flow Models, but we think that\n this approach is still underexplored.\n \n\n\n\n We’re also not sure about whether maximum likelihood training is necessarily harder than GAN training.\n It’s true that placing zero mass on a training data point is not explicitly prohibited under the\n GAN training loss, but it’s also true that a sufficiently powerful discriminator will be able\n to do better than chance if the generator does this.\n It does seem like GANs are learning distributions of low support in practice though.\n \n\n\n\n Ultimately, we suspect that Flow Models are fundamentally less expressive per-parameter than\n arbitrary decoder functions, and we suspect that this is provable under certain assumptions.\n \n\n\nWhat Sorts of Distributions Can GANs Model?\n-------------------------------------------\n\n\n\n Most GAN research focuses on image synthesis.\n In particular, people train GANs on a handful of standard (in the Deep Learning community) image datasets:\n MNIST,\n CIFAR-10,\n STL-10,\n CelebA,\n and Imagenet.\n\n There is some folklore about which of these datasets is ‘easiest’ to model.\n In particular, MNIST and CelebA are considered easier than Imagenet, CIFAR-10, or STL-10 due to being\n ‘extremely regular’.\n Others have noted that ‘a high number of classes is what makes\n ImageNet synthesis difficult for GANs’.\n These observations are supported by 
the empirical fact that the state-of-the-art image\n synthesis model on CelebA generates images that seem\n substantially more convincing than the state-of-the-art image synthesis model on\n Imagenet.\n \n\n\n\n However, we’ve had to come to these conclusions through the laborious and noisy process of\n trying to train GANs on ever larger and more complicated datasets.\n In particular, we’ve mostly studied how GANs perform on the datasets that\n happened to be laying around for object recognition.\n \n\n\n\n As with any science, we would like to have a simple theory that explains our experimental observations.\n Ideally, we could look at a dataset, perform some computations without ever actually\n training a generative model, and then say something like ‘this dataset will be\n easy for a GAN to model, but not a VAE’.\n There has been some progress on this topic,\n but we feel that more can be done.\n We can now state the problem:\n \n\n\n\nProblem 2\n\nGiven a distribution, what can we say about how hard it will be for a GAN to model that distribution?\n\n\n\n\n\n We might ask the following related questions as well:\n What do we mean by ‘model the distribution’? Are we satisfied with a low-support representation, or do we want a true density model?\n Are there distributions that a GAN can never learn to model?\n Are there distributions that are learnable for a GAN in principle, but are not\n efficiently learnable, for some reasonable model of resource-consumption?\n Are the answers to these questions actually any different for GANs than they are for other\n generative models?\n \n\n\n\n We propose two strategies for answering these questions:\n \n\n\n* **Synthetic Datasets** - We can study synthetic datasets to probe what traits affect learnability.\n For example, in the authors create a dataset of synthetic triangles.\n We feel that this\n angle is under-explored.\n Synthetic datasets can even be parameterized by quantities of interest, such as connectedness or smoothness,\n allowing for systematic study.\n Such a dataset could also be useful for studying other types of generative models.\n* **Modify Existing Theoretical Results** - We can take existing theoretical results and try to\n modify the assumptions to account\n for different properties of the dataset.\n For instance, we could take results about GANs that apply given unimodal data distributions and see\n what happens to them when the data distribution becomes multi-modal.\n\n\nHow Can we Scale GANs Beyond Image Synthesis?\n---------------------------------------------\n\n\n\n Aside from applications like image-to-image\n translation\n and domain-adaptation\n most GAN successes have been in image synthesis.\n Attempts to use GANs beyond images have focused on three domains:\n \n\n\n* **Text** -\n The discrete nature of text makes it difficult to apply GANs.\n This is because GANs rely on backpropagating a signal from the discriminator through the generated content into the generator.\n There are two approaches to addressing this difficulty.\n The first is to have the GAN act only on continuous\n representations of the discrete data, as in.\n The second is use an actual discrete model and attempt to train the GAN using\n gradient estimation as in.\n Other, more sophisticated treatments exist,\n but as far as we can tell, none of them produce results that are competitive (in terms of perplexity)\n with likelihood-based language models.\n* **Structured Data** -\n What about other non-euclidean structured data, like graphs?\n The 
study of this type of data is called geometric deep\n learning.\n GANs have had limited success here, but so have other deep learning techniques,\n so it’s hard to tell how much the GAN aspect matters.\n\n We’re aware of one attempt to use GANs in this space,\n which has the generator produce (and the discriminator ‘critique’) random walks\n that are meant to resemble those sampled from a source graph.\n* **Audio** -\n Audio is the domain in which GANs are closest to achieving the success\n they’ve enjoyed with images.\n The first serious attempt at applying GANs to unsupervised audio synthesis\n is, in which the authors\n make a variety of special allowances for the fact that they are operating on audio.\n More recent work suggests GANs can even\n outperform autoregressive models on some perceptual metrics.\n\n\n\n Despite these attempts, images are clearly the easiest domain for GANs.\n This leads us to the statement of the problem:\n \n\n\n\nProblem 3\n\nHow can GANs be made to perform well on non-image data?\n\n\nDoes scaling GANs to other domains require new training techniques,\n or does it simply require better implicit priors for each domain?\n\n\n\n\n\n We expect GANs to eventually achieve image-synthesis-level success on other continuous data,\n but that it will require better implicit priors.\n Finding these priors will require thinking hard about what makes sense and is computationally feasible\n in a given domain.\n \n\n\n\n For structured data or data that is not continuous, we’re less sure.\n One approach might be to make both the generator and discriminator\n be agents trained with reinforcement learning. Making this approach work could require\n large-scale computational resources.\n Finally, this problem may just require fundamental research progress.\n \n\n\nWhat can we Say About the Global Convergence of GAN Training?\n-------------------------------------------------------------\n\n\n\n Training GANs is different from training other neural networks because we simultaneously optimize\n the generator and discriminator for opposing objectives.\n Under certain assumptions\n \n These assumptions are very strict.\n The referenced paper assumes (roughly speaking) that\n the equilibrium we are looking for exists and that\n we are already very close to it.\n ,\n this simultaneous optimization\n is locally asymptotically stable.\n \n\n\n\n Unfortunately, it’s hard to prove interesting things about the fully general case.\n This is because the discriminator/generator’s loss is a non-convex function of its parameters.\n But all neural networks have this problem!\n We’d like some way to focus on just the problems created by simultaneous optimization.\n This brings us to our question:\n \n\n\n\nProblem 4\n\nWhen can we prove that GANs are globally convergent?\n\n\nWhich neural network convergence results can be applied to GANs?\n\n\n\n\n\n There has been nontrivial progress on this question.\n Broadly speaking, there are 3 existing techniques, all of which have generated\n promising results but none of which have been studied to completion:\n \n\n\n* **Simplifying Assumptions** -\n The first strategy is to make simplifying assumptions about the generator and discriminator.\n For example, the simplified LGQ GAN — linear generator, Gaussian data, and quadratic discriminator — can be shown to be globally convergent, if optimized with a special technique\n and some additional assumptions.\n Among other things, it’s assumed that we can first learn the means of the Gaussian and then learn 
the variances.\n \n As another example, show under different simplifying assumptions that\n certain types of GANs perform a mixture of moment matching and maximum likelihood estimation.\n\n It seems promising to gradually relax those assumptions to see what happens.\n For example, we could move away from unimodal distributions.\n This is a natural relaxation to study because ‘mode collapse’ is a standard GAN pathology.\n* **Use Techniques from Normal Neural Networks** -\n The second strategy is to apply techniques for analyzing normal neural networks (which are also non-convex)\n to answer questions about convergence of GANs.\n For instance, it’s argued in that the non-convexity\n of deep neural networks isn’t a problem,\n \n A fact that practitioners already kind of suspected.\n \n because low-quality local minima of the loss function\n become exponentially rare as the network gets larger.\n Can this analysis be ‘lifted into GAN space’?\n In fact, it seems like a generally useful heuristic to take analyses of deep neural networks used as classifiers and see if they apply to GANs.\n* **Game Theory** -\n The final strategy is to model GAN training using notions from game theory.\n These techniques yield training procedures that provably converge to some kind of approximate Nash equilibrium,\n but do so using unreasonably large resource constraints.\n The ‘obvious’ next step in this case is to try and reduce those resource constraints.\n\n\nHow Should we Evaluate GANs and When Should we Use Them?\n--------------------------------------------------------\n\n\n\n When it comes to evaluating GANs, there are many proposals but little consensus.\n Suggestions include:\n \n\n\n* **Inception Score and FID** -\n Both these scores\n use a pre-trained image classifier and both have\n known issues .\n A common criticism is that these scores measure\n ‘sample quality’ and don’t really capture ‘sample diversity’.\n* **MS-SSIM** -\n propose using MS-SSIM to\n separately evaluate diversity, but this technique has some issues and hasn’t really caught on.\n* **AIS** -\n propose putting a Gaussian observation model on the outputs\n of a GAN and using annealed importance sampling to estimate\n the log likelihood under this model, but show that\n estimates computed this way are inaccurate in the case where the GAN generator is also a flow model\n \n The generator being a flow model allows for computation of exact log-likelihoods in this case.\n .\n* **Geometry Score** -\n suggest computing geometric properties of the generated data manifold\n and comparing those properties to the real data.\n* **Precision and Recall** -\n attempt to measure both the ‘precision’ and ‘recall’ of GANs.\n* **Skill Rating** -\n have shown that trained GAN discriminators can contain useful information\n with which evaluation can be performed.\n\n\n\n Those are just a small fraction of the proposed GAN evaluation schemes.\n Although the Inception Score and FID are relatively popular, GAN evaluation is clearly not a settled issue.\n Ultimately, we think that confusion about *how to evaluate* GANs stems from confusion about\n *when to use GANs*.\n Thus, we have bundled those two questions into one:\n \n\n\n\nProblem 5\n\nWhen should we use GANs instead of other generative models?\n\n\nHow should we evaluate performance in those contexts?\n\n\n\n\n\n What should we use GANs for?\n If you want an actual density model, GANs probably aren’t the best choice.\n There is now good experimental evidence that GANs learn a ‘low support’ 
representation of the target dataset\n , which means there may be substantial parts of the test\n set to which a GAN (implicitly) assigns zero likelihood.\n \n\n\n\n Rather than worrying too much about this,\n Though trying to fix this issue is a valid research agenda as well.\n \n\n we think it makes sense to focus GAN research on tasks where this is fine or even helpful.\n GANs are likely to be well-suited to tasks with a perceptual flavor.\n Graphics applications like image synthesis, image translation, image infilling, and attribute manipulation\n all fall under this umbrella.\n \n\n\n\n How should we evaluate GANs on these perceptual tasks?\n Ideally, we would just use a human judge, but this is expensive.\n A cheap proxy is to see if a classifier can distinguish between real and fake examples.\n This is called a classifier two-sample test (C2STs)\n .\n The main issue with C2STs is that if the Generator has even a minor defect that’s systematic across samples\n (e.g., ) this will dominate the evaluation.\n \n\n\n\n Ideally, we’d have a holistic evaluation that isn’t dominated by a single factor.\n One approach might be to make a critic that is blind to the dominant defect.\n But once we do this, some other defect may dominate, requiring a new critic, and so on.\n If we do this iteratively, we could get a kind of ‘Gram-Schmidt procedure for critics’,\n creating an ordered list of the most important defects and\n critics that ignore them.\n Perhaps this can be done by performing PCA on the critic activations and progressively throwing out\n more and more of the higher variance components.\n \n\n\n\n Finally, we could evaluate on humans despite the expense.\n This would allow us to measure the thing that we actually care about.\n This kind of approach can be made less expensive by predicting human answers and only interacting with a real human when\n the prediction is uncertain.\n \n\n\nHow does GAN Training Scale with Batch Size?\n--------------------------------------------\n\n\n\n Large minibatches have helped to scale up image classification — can they also help us scale up GANs?\n Large minibatches may be especially important for effectively using highly parallel hardware accelerators.\n \n\n\n\n At first glance, it seems like the answer should be yes — after all, the discriminator in most GANs is just an image classifier.\n Larger batches can accelerate training if it is bottlenecked on gradient noise.\n However, GANs have a separate bottleneck that classifiers don’t: the training procedure can diverge.\n Thus, we can state our problem:\n \n\n\n\nProblem 6\n\nHow does GAN training scale with batch size?\n\n\nHow big a role does gradient noise play in GAN training?\n\n\nCan GAN training be modified so that it scales better with batch size?\n\n\n\n\n\n There’s some evidence that increasing minibatch size improves quantitative results and reduces training time.\n If this phenomenon is robust, it would suggest that gradient noise is a dominating factor.\n However, this hasn’t been systematically studied, so we believe this question remains open.\n \n\n\n\n Can alternate training procedures make better use of large batches?\n Optimal Transport GANs theoretically have better convergence properties than normal GANs,\n but need a large batch size because they try to align batches of samples and training data.\n As a result, they seem like a promising candidate for scaling to very large batch sizes.\n \n\n\n\n\n\n Finally, asynchronous SGD could be a good alternative for making use of new 
hardware.\n In this setting, the limiting factor tends to be that gradient updates are computed on ‘stale’ copies of the parameters.\n But GANs seem to actually benefit from training on past parameter snapshots, so we might ask if\n asynchronous SGD interacts in a special way with GAN training.\n \n\n\nWhat is the Relationship Between GANs and Adversarial Examples?\n---------------------------------------------------------------\n\n\n\n It’s well known that image classifiers suffer from adversarial examples:\n human-imperceptible perturbations that cause classifiers to give the wrong output when added to images.\n It’s also now known that there are classification problems which can normally be efficiently learned,\n but are exponentially hard to learn robustly.\n \n\n\n\n Since the GAN discriminator is an image classifier, one might worry about it suffering from adversarial examples.\n Despite the large bodies of literature on GANs and adversarial examples,\n there doesn’t seem to be much work on how they relate.\n \n There is work on using GANs to generate adversarial examples, but this is not quite the same thing.\n \n Thus, we can ask the question:\n \n\n\n\nProblem 7\n\nHow does the adversarial robustness of the discriminator affect GAN training?\n\n\n\n\n\n How can we begin to think about this problem?\n Consider a fixed discriminator **D**.\n An adversarial example for **D** would exist if\n there were a generator sample **G(z)** correctly classified as fake and\n a small perturbation **p** such that **G(z) + p** is classified as real.\n With a GAN, the concern would be that the gradient update for the generator would yield\n a new generator **G’** where **G’(z) = G(z) + p**.\n \n\n\n\n Is this concern realistic?\n shows that deliberate attacks on generative models can work,\n but we are more worried about something you might call an ‘accidental attack’.\n There are reasons to believe that these accidental attacks are less likely.\n First, the generator is only allowed to make one gradient update before\n the discriminator is updated again.\n In contrast, current adversarial attacks are typically run for tens of iterations.\n Second, the generator is optimized given a batch of samples from the prior, and this batch is different\n for every gradient step.\n Finally, the optimization takes place in the space of parameters of the generator rather than in pixel space.\n However, none of these arguments decisively rules out the generator creating adversarial examples.\n We think this is a fruitful topic for further exploration.", "date_published": "2019-04-09T20:00:00Z", "authors": ["Augustus Odena"], "summaries": ["What we'd like to find out about GANs that we don't know yet."], "doi": "10.23915/distill.00018", "journal_ref": "distill-pub", "bibliography": [{"link": "http://arxiv.org/pdf/1610.09585.pdf", "title": "Conditional Image Synthesis With Auxiliary Classifier GANs"}, {"link": "http://arxiv.org/pdf/1805.08318.pdf", "title": "Self-Attention Generative Adversarial Networks"}, {"link": "http://arxiv.org/pdf/1802.05957.pdf", "title": "Spectral Normalization for Generative Adversarial Networks"}, {"link": "http://arxiv.org/pdf/1809.11096.pdf", "title": "Large Scale GAN Training for High Fidelity Natural Image Synthesis"}, {"link": "http://arxiv.org/pdf/1812.04948.pdf", "title": "A style-based generator architecture for generative adversarial networks"}, {"link": "http://math.ucr.edu/home/baez/physics/General/open_questions.html", "title": "Open Questions in Physics"}, {"link": 
"https://mathoverflow.net/questions/100265/not-especially-famous-long-open-problems-which-anyone-can-understand", "title": "Not especially famous, long-open problems which anyone can understand"}, {"link": "https://en.wikipedia.org/wiki/Hilbert%27s_problems", "title": "Hilbert's Problems"}, {"link": "https://en.wikipedia.org/wiki/Smale%27s_problems", "title": "Smale's Problems"}, {"link": "http://arxiv.org/pdf/1312.6114.pdf", "title": "Auto-Encoding Variational Bayes"}, {"link": "http://arxiv.org/pdf/1410.8516.pdf", "title": "NICE: Non-linear Independent Components Estimation"}, {"link": "http://arxiv.org/pdf/1605.08803.pdf", "title": "Density estimation using Real NVP"}, {"link": "http://arxiv.org/pdf/1807.03039.pdf", "title": "Glow: Generative Flow with Invertible 1x1 Convolutions"}, {"link": "https://blog.evjang.com/2018/01/nf1.html", "title": "Normalizing Flows Tutorial"}, {"link": "http://arxiv.org/pdf/1601.06759.pdf", "title": "Pixel Recurrent Neural Networks"}, {"link": "http://arxiv.org/pdf/1606.05328.pdf", "title": "Conditional Image Generation with PixelCNN Decoders"}, {"link": "http://arxiv.org/pdf/1701.05517.pdf", "title": "PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications"}, {"link": "http://arxiv.org/pdf/1609.03499.pdf", "title": "WaveNet: A Generative Model for Raw Audio"}, {"link": "http://arxiv.org/pdf/1710.10196.pdf", "title": "Progressive Growing of GANs for Improved Quality, Stability, and Variation"}, {"link": "http://arxiv.org/pdf/1505.05770.pdf", "title": "Variational Inference with Normalizing Flows"}, {"link": "http://arxiv.org/pdf/1711.10433.pdf", "title": "Parallel WaveNet: Fast High-Fidelity Speech Synthesis"}, {"link": "http://arxiv.org/pdf/1705.08868.pdf", "title": "Flow-GAN: Bridging implicit and prescribed learning in generative models"}, {"link": "http://arxiv.org/pdf/1705.05263.pdf", "title": "Comparison of maximum likelihood and gan-based training of real nvps"}, {"link": "http://arxiv.org/pdf/1802.08768.pdf", "title": "Is Generator Conditioning Causally Related to GAN Performance?"}, {"link": "http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf", "title": "Gradient-Based Learning Applied to Document Recognition"}, {"link": "https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf", "title": "Learning Multiple Layers of Features from Tiny Images"}, {"link": "http://proceedings.mlr.press/v15/coates11a/coates11a.pdf", "title": "An analysis of single-layer networks in unsupervised feature learning"}, {"link": "http://arxiv.org/pdf/1411.7766.pdf", "title": "Deep Learning Face Attributes in the Wild"}, {"link": "http://arxiv.org/pdf/1409.0575.pdf", "title": "ImageNet Large Scale Visual Recognition Challenge"}, {"link": "https://twitter.com/colinraffel/status/1030129455409164289", "title": "PSA"}, {"link": "http://arxiv.org/pdf/1711.10337.pdf", "title": "Are GANs Created Equal? 
A Large-Scale Study"}, {"link": "http://arxiv.org/pdf/1806.00880.pdf", "title": "Disconnected Manifold Learning for Generative Adversarial Networks"}, {"link": "http://arxiv.org/pdf/1703.10593.pdf", "title": "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks"}, {"link": "http://arxiv.org/pdf/1612.05424.pdf", "title": "Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks"}, {"link": "http://arxiv.org/pdf/1704.00028.pdf", "title": "Improved training of wasserstein gans"}, {"link": "http://arxiv.org/pdf/1609.05473.pdf", "title": "SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient"}, {"link": "http://arxiv.org/pdf/1801.07736.pdf", "title": "MaskGAN: Better Text Generation via Filling in the ______"}, {"link": "http://arxiv.org/pdf/1611.08097.pdf", "title": "Geometric deep learning: going beyond Euclidean data"}, {"link": "http://arxiv.org/pdf/1803.00816.pdf", "title": "NetGAN: Generating Graphs via Random Walks"}, {"link": "http://arxiv.org/pdf/1802.04208.pdf", "title": "Synthesizing Audio with Generative Adversarial Networks"}, {"link": "https://openreview.net/forum?id=H1xQVn09FX", "title": "GANSynth: Adversarial Neural Audio Synthesis"}, {"link": "https://blog.openai.com/openai-five/", "title": "OpenAI Five"}, {"link": "http://arxiv.org/pdf/1706.04156.pdf", "title": "Gradient descent GAN optimization is locally stable"}, {"link": "http://proceedings.mlr.press/v80/mescheder18a.html", "title": "Which Training Methods for GANs do actually Converge?"}, {"link": "https://openreview.net/forum?id=r1CE9GWR-", "title": "Understanding GANs: the LQG Setting"}, {"link": "http://arxiv.org/pdf/1808.01531.pdf", "title": "Global Convergence to the Equilibrium of GANs using Variational Inequalities"}, {"link": "http://arxiv.org/pdf/1802.05642.pdf", "title": "The Mechanics of n-Player Differentiable Games"}, {"link": "http://arxiv.org/pdf/1705.08991.pdf", "title": "Approximation and Convergence Properties of Generative Adversarial Learning"}, {"link": "http://arxiv.org/pdf/1809.04542.pdf", "title": "The Inductive Bias of Restricted f-GANs"}, {"link": "http://proceedings.mlr.press/v80/li18d/li18d.pdf", "title": "On the Limitations of First-Order Approximation in GAN Dynamics"}, {"link": "http://arxiv.org/pdf/1412.0233.pdf", "title": "The Loss Surface of Multilayer Networks"}, {"link": "http://arxiv.org/pdf/1712.00679.pdf", "title": "GANGs: Generative Adversarial Network Games"}, {"link": "http://arxiv.org/pdf/1806.07268.pdf", "title": "Beyond Local Nash Equilibria for Adversarial Networks"}, {"link": "http://arxiv.org/pdf/1706.03269.pdf", "title": "An Online Learning Approach to Generative Adversarial Networks"}, {"link": "http://arxiv.org/pdf/1606.03498.pdf", "title": "Improved Techniques for Training GANs"}, {"link": "http://arxiv.org/pdf/1706.08500.pdf", "title": "GANs Trained by a Two Time-Scale Update Rule Converge to a Nash Equilibrium"}, {"link": "http://arxiv.org/pdf/1801.01973.pdf", "title": "A Note on the Inception Score"}, {"link": "https://openreview.net/forum?id=HkxKH2AcFm", "title": "Towards GAN Benchmarks Which Require Generalization"}, {"link": "http://arxiv.org/pdf/1611.04273.pdf", "title": "On the Quantitative Analysis of Decoder-Based Generative Models"}, {"link": "http://arxiv.org/pdf/1802.02664.pdf", "title": "Geometry Score: A Method For Comparing Generative Adversarial Networks"}, {"link": "http://arxiv.org/pdf/1806.00035.pdf", "title": "Assessing Generative Models via Precision and Recall"}, {"link": 
"http://arxiv.org/pdf/1808.04888.pdf", "title": "Skill Rating for Generative Models"}, {"link": "http://arxiv.org/pdf/1810.06758.pdf", "title": "Discriminator Rejection Sampling"}, {"link": "http://arxiv.org/pdf/1610.06545.pdf", "title": "Revisiting Classifier Two-Sample Tests"}, {"link": "http://arxiv.org/pdf/1708.02511.pdf", "title": "Parametric Adversarial Divergences are Good Task Losses for Generative Modeling"}, {"link": "http://distill.pub/2016/deconv-checkerboard", "title": "Deconvolution and Checkerboard Artifacts"}, {"link": "http://arxiv.org/pdf/1706.02677.pdf", "title": "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour"}, {"link": "https://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-156.pdf", "title": "Scaling sgd batch size to 32k for imagenet training"}, {"link": "http://papers.nips.cc/paper/6770-train-longer-generalize-better-closing-the-generalization-gap-in-large-batch-training-of-neural-networks.pdf", "title": "Train longer, generalize better: closing the generalization gap in large batch training of neural networks"}, {"link": "http://arxiv.org/pdf/1711.00489.pdf", "title": "Don't Decay the Learning Rate, Increase the Batch Size"}, {"link": "https://www.nature.com/articles/s41928-017-0005-9", "title": "Science and research policy at the end of Moore’s law"}, {"link": "http://arxiv.org/pdf/1704.04760.pdf", "title": "In-datacenter performance analysis of a tensor processing unit"}, {"link": "http://arxiv.org/pdf/1803.05573.pdf", "title": "Improving GANs using optimal transport"}, {"link": "http://papers.nips.cc/paper/4687-large-scale-distributed-deep-networks.pdf", "title": "Large Scale Distributed Deep Networks"}, {"link": "http://arxiv.org/pdf/1412.6651.pdf", "title": "Deep learning with Elastic Averaging SGD"}, {"link": "http://arxiv.org/pdf/1511.05950.pdf", "title": "Staleness-aware Async-SGD for Distributed Deep Learning"}, {"link": "http://arxiv.org/pdf/1601.04033.pdf", "title": "Faster Asynchronous SGD"}, {"link": "http://arxiv.org/pdf/1806.04498.pdf", "title": "The Unusual Effectiveness of Averaging in GAN Training"}, {"link": "http://arxiv.org/pdf/1312.6199.pdf", "title": "Intriguing properties of neural networks"}, {"link": "http://arxiv.org/pdf/1805.10204.pdf", "title": "Adversarial examples from computational constraints"}, {"link": "http://arxiv.org/pdf/1702.06832.pdf", "title": "Adversarial examples for generative models"}, {"link": "http://arxiv.org/pdf/1706.06083.pdf", "title": "Towards deep learning models resistant to adversarial attacks"}]} {"id": "3192f2fd5f178f1ddca372897cecea98", "title": "A Visual Exploration of Gaussian Processes", "url": "https://distill.pub/2019/visual-exploration-gaussian-processes", "source": "distill", "source_type": "blog", "text": "Even if you have spent some time reading about machine learning, chances are that you have never heard of Gaussian processes.\n And if you have, rehearsing the basics is always a good way to refresh your memory.\n With this blog post we want to give an introduction to Gaussian processes and make the mathematical intuition behind them more approachable.\n \n\n\n\n Gaussian processes are a powerful tool in the machine learning toolbox.\n They allow us to make predictions about our data by incorporating prior knowledge.\n Their most obvious area of application is *fitting* a function to the data.\n This is called regression and is used, for example, in robotics or time series forecasting.\n But Gaussian processes are not limited to regression — they can also be extended to 
classification and clustering tasks.

For a given set of training points, there are potentially infinitely many functions that fit the data. Gaussian processes offer an elegant solution to this problem by assigning a probability to each of these functions. The mean of this probability distribution then represents the most probable characterization of the data. Furthermore, using a probabilistic approach allows us to incorporate the confidence of the prediction into the regression result.

We will first explore the mathematical foundation that Gaussian processes are built on — we invite you to follow along using the interactive figures and hands-on examples. They help to explain the impact of individual components, and show the flexibility of Gaussian processes. After following this article we hope that you will have a visual intuition on how Gaussian processes work and how you can configure them for different types of data.

Multivariate Gaussian distributions
-----------------------------------

Before we can explore Gaussian processes, we need to understand the mathematical concepts they are based on. As the name suggests, the Gaussian distribution (which is often also referred to as *normal* distribution) is the basic building block of Gaussian processes. In particular, we are interested in the multivariate case of this distribution, where each random variable is distributed normally and their joint distribution is also Gaussian. The multivariate Gaussian distribution is defined by a mean vector \mu and a covariance matrix \Sigma. You can see an interactive example of such distributions in [the figure below](#Multivariate).

The mean vector \mu describes the expected value of the distribution. Each of its components describes the mean of the corresponding dimension. \Sigma models the variance along each dimension and determines how the different random variables are correlated. The covariance matrix is always symmetric and positive semi-definite. The diagonal of \Sigma consists of the variance \sigma_i^2 of the i-th random variable. And the off-diagonal elements \sigma_{ij} describe the correlation between the i-th and j-th random variable.

X = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix} \sim \mathcal{N}(\mu, \Sigma)

We say X follows a normal distribution. The covariance matrix \Sigma describes the shape of the distribution. It is defined in terms of the expected value E:

\Sigma = \text{Cov}(X_i, X_j) = E\left[ (X_i - \mu_i)(X_j - \mu_j)^T \right]

Visually, the distribution is centered around the mean and the covariance matrix defines its shape. The [following figure](#Multivariate) shows the influence of these parameters on a two-dimensional Gaussian distribution. The variances for each random variable are on the diagonal of the covariance matrix, while the other values show the covariance between them.
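As a small sketch of our own to mirror what the interactive figure shows, the snippet below samples from a two-dimensional Gaussian with NumPy; the particular values of \mu and \Sigma are arbitrary examples.

```python
import numpy as np

# An example mean vector and covariance matrix for a 2D Gaussian.
mu = np.array([0.0, 1.0])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 2.0]])   # symmetric and positive semi-definite

# Draw samples; the spread along each axis reflects the diagonal entries,
# and the tilt of the point cloud reflects the off-diagonal covariance.
samples = np.random.default_rng(0).multivariate_normal(mu, Sigma, size=1000)

print(samples.mean(axis=0))           # close to mu
print(np.cov(samples, rowvar=False))  # close to Sigma
```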
Gaussian distributions are widely used to model the real world. For example, we can employ them to describe errors of measurements or phenomena under the assumptions of the *central limit theorem*. One of the implications of this theorem is that a collection of independent, identically distributed random variables with finite variance have a mean that is distributed normally; a good introduction to the central limit theorem is given by [this video](https://www.khanacademy.org/math/ap-statistics/sampling-distribution-ap/sampling-distribution-mean/v/central-limit-theorem) from [Khan Academy](https://www.khanacademy.org). In the next section we will take a closer look at how to manipulate Gaussian distributions and extract useful information from them.

### Marginalization and Conditioning

Gaussian distributions have the nice algebraic property of being closed under conditioning and marginalization. Being closed under conditioning and marginalization means that the resulting distributions from these operations are also Gaussian, which makes many problems in statistics and machine learning tractable. In the following we will take a closer look at both of these operations, as they are the foundation for Gaussian processes.

*Marginalization* and *conditioning* both work on subsets of the original distribution and we will use the following notation:

P_{X,Y} = \begin{bmatrix} X \\ Y \end{bmatrix} \sim \mathcal{N}(\mu, \Sigma) = \mathcal{N}\left( \begin{bmatrix} \mu_X \\ \mu_Y \end{bmatrix}, \begin{bmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY} \end{bmatrix} \right)

with X and Y representing subsets of the original random variables.

Through *marginalization* we can extract partial information from multivariate probability distributions.
In particular, given a normal probability distribution P(X,Y) over vectors of random variables X and Y, we can determine their marginalized probability distributions in the following way:

X \sim \mathcal{N}(\mu_X, \Sigma_{XX})
Y \sim \mathcal{N}(\mu_Y, \Sigma_{YY})

The interpretation of this equation is that each partition X and Y only depends on its corresponding entries in \mu and \Sigma. To marginalize out a random variable from a Gaussian distribution we can simply drop the variables from \mu and \Sigma.

p_X(x) = \int_y p_{X,Y}(x, y)\, dy = \int_y p_{X|Y}(x|y)\, p_Y(y)\, dy

The way to interpret this equation is that if we are interested in the probability density of X = x, we need to consider all possible outcomes of Y that can jointly lead to the result. The corresponding [Wikipedia article](https://en.wikipedia.org/wiki/Marginal_distribution) has a good description of the marginal distribution, including several examples.

Another important operation for Gaussian processes is *conditioning*. It is used to determine the probability of one variable depending on another variable. Similar to marginalization, this operation is also closed and yields a modified Gaussian distribution. This operation is the cornerstone of Gaussian processes since it allows Bayesian inference, which we will talk about in the [next section](#GaussianProcesses). Conditioning is defined by:

X|Y \sim \mathcal{N}(\, \mu_X + \Sigma_{XY}\Sigma_{YY}^{-1}(Y - \mu_Y),\; \Sigma_{XX} - \Sigma_{XY}\Sigma_{YY}^{-1}\Sigma_{YX} \,)
Y|X \sim \mathcal{N}(\, \mu_Y + \Sigma_{YX}\Sigma_{XX}^{-1}(X - \mu_X),\; \Sigma_{YY} - \Sigma_{YX}\Sigma_{XX}^{-1}\Sigma_{XY} \,)

Note that the new mean only depends on the conditioned variable, while the covariance matrix is independent from this variable.

Now that we have worked through the necessary equations, we will think about how we can understand the two operations visually. While marginalization and conditioning can be applied to multivariate distributions of many dimensions, it makes sense to consider the two-dimensional case as shown in the [following figure](#MarginalizationConditioning). Marginalization can be seen as integrating along one of the dimensions of the Gaussian distribution, which is in line with the general definition of the marginal distribution. Conditioning also has a nice geometric interpretation — we can imagine it as making a cut through the multivariate distribution, yielding a new Gaussian distribution with fewer dimensions.

A bivariate normal distribution in the center. On the left you can see the result of marginalizing this distribution for Y, akin to integrating along the X axis. On the right you can see the distribution conditioned on a given X, which is similar to a cut through the original distribution.
The Gaussian distribution and the conditioned variable can be changed by dragging the handles.\n\nGaussian Processes\n------------------\n\n\n\n Now that we have recalled some of the basic properties of multivariate Gaussian distributions, we will combine them together to define Gaussian processes, and show how they can be used to tackle regression problems.\n \n\n\n\n First, we will move from the continuous view to the discrete representation of a function:\n rather than finding an implicit function, we are interested in predicting the function values at concrete points, which we call *test points* XXX.\n So how do we derive this functional view from the multivariate normal distributions that we have considered so far?\n Stochastic processes, such as Gaussian processes, are essentially a set of random variables.\n In addition, each of these random variables has a corresponding index iii.\n We will use this index to refer to the iii-th dimension of our nnn-dimensional multivariate distributions. \n The [following figure](#DimensionSwap) shows an example of this for two dimensions:\n \n\n\n\n\n\n Here, we have a two-dimensional normal distribution.\n Each dimension xix\\_ixi​ is assigned an index i∈{1,2}i \\in \\{1,2\\}i∈{1,2}.\n You can drag the handles to see how a particular sample (left) corresponds to functional values (right).\n This representation also allows us to understand the connection between the covariance and the resulting values:\n the underlying Gaussian distribution has a positive covariance between x1x\\_1x1​ and x2x\\_2x2​ — this means that x2x\\_2x2​ will increases as x1x\\_1x1​ gets larger and vice versa.\n You can also drag the handles in the figure to the right and observe the probability of such a configuration in the figure to the left.\n \n\n\n Now, the goal of Gaussian processes is to learn this underlying distribution from *training data*.\n Respective to the test data XXX, we will denote the training data as YYY.\n As we have mentioned before, the key idea of Gaussian processes is to model the underlying distribution of XXX together with YYY as a multivariate normal distribution.\n That means that the joint probability distribution PX,YP\\_{X,Y}PX,Y​ spans the space of possible function values for the function that we want to predict.\n Please note that this joint distribution of test and training data has ∣X∣+∣Y∣|X| + |Y|∣X∣+∣Y∣ dimensions.\n \n\n\n\n In order to perform regression on the training data, we will treat this problem as *Bayesian inference*.\n The essential idea of Bayesian inference is to update the current hypothesis as new information becomes available. \n In the case of Gaussian processes, this information is the training data.\n Thus, we are interested in the conditional probability PX∣YP\\_{X|Y}PX∣Y​. 
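As a small NumPy sketch of our own, the conditioning formula from the previous section can be written directly as a function; this is exactly the computation a Gaussian process performs once the joint distribution of test and training points is set up. The partition sizes and example numbers below are arbitrary.

```python
import numpy as np

def condition_gaussian(mu_x, mu_y, S_xx, S_xy, S_yx, S_yy, y_obs):
    """Mean and covariance of X | Y = y_obs for a jointly Gaussian (X, Y), using
    X|Y ~ N(mu_X + S_XY S_YY^{-1}(y - mu_Y), S_XX - S_XY S_YY^{-1} S_YX)."""
    solve = np.linalg.solve  # avoids forming the explicit inverse
    mu_cond = mu_x + S_xy @ solve(S_yy, y_obs - mu_y)
    S_cond = S_xx - S_xy @ solve(S_yy, S_yx)
    return mu_cond, S_cond

# Example with a 2D joint distribution: condition the first variable on observing the second.
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 1.0]])
mu_c, S_c = condition_gaussian(mu[:1], mu[1:], Sigma[:1, :1], Sigma[:1, 1:],
                               Sigma[1:, :1], Sigma[1:, 1:], y_obs=np.array([1.5]))
print(mu_c, S_c)  # the mean shifts toward the observation and the variance shrinks
```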
\n Finally, we recall that Gaussian distributions are closed under conditioning — so PX∣YP\\_{X|Y}PX∣Y​ is also distributed normally.\n \n\n\n\n Now that we have the basic framework of Gaussian processes together, there is only one thing missing:\n how do we set up this distribution and define the mean μ\\muμ and the covariance matrix Σ\\SigmaΣ?\n The covariance matrix Σ\\SigmaΣ is determined by its *covariance function* kkk, which is often also called the *kernel* of the Gaussian process.\n We will talk about this in detail in the next section.\n But before we come to this, let us reflect on how we can use multivariate Gaussian distributions to estimate function values.\n The [following figure](#PriorFigure) shows an example of this using ten test points at which we want to predict our function:\n \n\n\n\n\n\n\n In Gaussian processes we treat each test point as a random variable. \n A multivariate Gaussian distribution has the same number of dimensions as the number of random variables.\n Since we want to predict the function values at\n ∣X∣=N|X|=N∣X∣=N\n test points, the corresponding multivariate Gaussian distribution is also\n NNN\n -dimensional.\n Making a prediction using a Gaussian process ultimately boils down to drawing samples from this distribution. \n We then interpret the iii-th component of the resulting vector as the function value corresponding to the iii-th test point.\n \n\n\n### Kernels\n\n\n\n Recall that in order to set up our distribution, we need to define μ\\muμ and Σ\\SigmaΣ.\n In Gaussian processes it is often assumed that μ=0\\mu = 0μ=0, which simplifies the necessary equations for conditioning.\n We can always assume such a distribution, even if μ≠0\\mu \\neq 0μ≠0, and add μ\\muμ back to the resulting function values after the prediction step.\n This process is also called *centering* of the data.\n So configuring μ\\muμ is straight forward — it gets more interesting when we look at the other parameter of the distribution.\n \n\n\n\n The clever step of Gaussian processes is how we set up the covariance matrix Σ\\SigmaΣ.\n The covariance matrix will not only describe the shape of our distribution, but ultimately determines the characteristics of the function that we want to predict.\n We generate the covariance matrix by evaluating the kernel kkk, which is often also called *covariance function*, pairwise on all the points.\n The kernel receives two points t,t′∈Rnt,t’ \\in \\mathbb{R}^nt,t′∈Rn as an input and returns a similarity measure between those points in the form of a scalar:\n \n\n\nk:Rn×Rn→R,Σ=Cov(X,X′)=k(t,t′)\n k: \\mathbb{R}^n \\times \\mathbb{R}^n \\rightarrow \\mathbb{R},\\quad \n \\Sigma = \\text{Cov}(X,X’) = k(t,t’)\n k:Rn×Rn→R,Σ=Cov(X,X′)=k(t,t′)\n\n We evaluate this function for each pairwise combination of the test points to retrieve the covariance matrix.\n This step is also depicted in the [figure above](#PriorFigure).\n In order to get a better intuition for the role of the kernel, let’s think about what the entries in the covariance matrix describe.\n The entry Σij\\Sigma\\_{ij}Σij​ describes how much influence the iii-th and jjj-th point have on each other.\n This follows from the definition of the multivariate Gaussian distribution, which states that Σij\\Sigma\\_{ij}Σij​ defines the correlation between the iii-th and the jjj-th random variable.\n Since the kernel describes the similarity between the values of our function, it controls the possible shape that a fitted function can adopt.\n Note that when we choose a kernel, we need to 
make sure that the resulting matrix adheres to the properties of a covariance matrix.\n \n\n\n\n Kernels are widely used in machine learning, for example in *support vector machines*.\n The reason for this is that they allow similarity measures that go far beyond the standard Euclidean distance (L2 distance).\n Many of these kernels conceptually embed the input points into a higher-dimensional space in which they then measure the similarity (if the kernel follows Mercer’s theorem, it can be used to define a Hilbert space; more information on this can be found on [Wikipedia](https://en.wikipedia.org/wiki/Kernel_method)).\n The [following figure](#MultipleKernels) shows examples of some common kernels for Gaussian processes.\n For each kernel, the covariance matrix has been created from N = 25 linearly-spaced values ranging over [-5, 5]. Each entry in the matrix shows the covariance between points, with values in the range [0, 1].\n \n\n\n\n\n\nThis figure shows various kernels that can be used with Gaussian processes. Each kernel has different\n parameters, which can be changed by adjusting the corresponding sliders. When grabbing a slider,\n information on how the current parameter influences the kernel will be shown on the right.\n \n\n Kernels can be separated into *stationary* and *non-stationary* kernels. *Stationary* kernels, such\n as the RBF kernel or the periodic kernel, are functions invariant to translations, and the covariance of two points is only\n dependent on their relative position. *Non-stationary* kernels, such as the linear kernel, do not have this\n constraint and depend on an absolute location. The stationary nature of the RBF kernel can be observed in the\n banding around the diagonal of its covariance matrix (as shown in [this figure](#MultipleKernels)). Increasing the length parameter increases the banding, as\n points further away from each other become more correlated.
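To make this pairwise construction concrete, here is a small NumPy sketch (our own code, not the article's implementation) that builds a covariance matrix from an RBF kernel on the same N = 25 grid used in the figure:

```python
import numpy as np

def rbf_kernel(t, t_prime, variance=1.0, lengthscale=1.0):
    """RBF kernel: variance * exp(-(t - t')^2 / (2 * lengthscale^2))."""
    sq_dist = (t[:, None] - t_prime[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dist / lengthscale ** 2)

# 25 linearly spaced points in [-5, 5], as in the kernel figure.
t = np.linspace(-5, 5, 25)

# Pairwise evaluation of the kernel gives the 25 x 25 covariance matrix Sigma.
Sigma = rbf_kernel(t, t)

print(Sigma.shape)                              # (25, 25)
print(Sigma[0, 0], Sigma[0, 1], Sigma[0, -1])   # large near the diagonal, ~0 far away
```

Increasing `lengthscale` widens the band around the diagonal, which is exactly the behaviour described above for the length parameter.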
For the periodic kernel, we have an additional parameter\n PPP that determines the periodicity, which controls the distance between each repetition of the function.\n In contrast, the parameter CCC of the linear kernel allows us to change the point on which all functions hinge.\n \n\n\n\n There are many more kernels that can describe different classes of functions, which can be used to model the desired shape of the function.\n A good overview of different kernels is given by Duvenaud.\n It is also possible to combine several kernels — but we will get to this later.\n \n\n\n### Prior Distribution\n\n\n\n We will now shift our focus back to the original task of regression.\n As we have mentioned earlier, Gaussian processes define a probability distribution over possible functions.\n In [this figure above](#DimensionSwap), we show this connection:\n each sample of our multivariate normal distribution represents one realization of our function values.\n Because this distribution is a multivariate Gaussian distribution, the distribution of functions is normal.\n Recall that we usually assume μ=0\\mu=0μ=0.\n For now, let’s consider the case where we have not yet observed any training data.\n In the context of Bayesian inference, this is called the *prior* distribution PXP\\_XPX​.\n \n\n\n\n If we have not yet observed any training examples, this distribution revolves around μ=0\\mu=0μ=0, according to our original assumption.\n The prior distribution will have the same dimensionality as the number of test points N=∣X∣N = |X|N=∣X∣.\n We will use the kernel to set up the covariance matrix, which has the dimensions N×NN \\times NN×N.\n \n\n\n\n In the previous section we have looked at examples of different kernels.\n The kernel is used to define the entries of the covariance matrix.\n Consequently, the covariance matrix determines which type of functions from the space of all possible functions are more probable.\n As the prior distribution does not yet contain any additional information, it is perfect to visualize the influence of the kernel on the distribution of functions.\n The [following figure](#Prior) shows samples of potential functions from prior distributions that were created using different kernels:\n \n\n\n\n\n\n Clicking on the graph results in continuous samples drawn from a\n Gaussian process using the selected\n kernel. After each draw, the previous sample fades into the background. Over time, it is possible to see that\n functions are distributed normally around the mean µ .\n \n\n\n Adjusting the parameters allows you to control the shape of the resulting functions. 
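Computationally, drawing one of these functions is nothing more than sampling from the prior N(0, Σ); a rough, self-contained sketch (ours, reusing the RBF kernel from the earlier snippet):

```python
import numpy as np

def rbf_kernel(t, t_prime, variance=1.0, lengthscale=1.0):
    sq_dist = (t[:, None] - t_prime[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dist / lengthscale ** 2)

# N test points and the prior N(0, Sigma) over their function values.
t = np.linspace(-5, 5, 50)
Sigma = rbf_kernel(t, t) + 1e-8 * np.eye(len(t))  # small jitter for numerical stability

rng = np.random.default_rng(seed=0)
samples = rng.multivariate_normal(np.zeros(len(t)), Sigma, size=3)

# Each row is one realization of the function, evaluated at the 50 test points.
print(samples.shape)  # (3, 50)
```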
\n This also varies the confidence of the prediction.\n When decreasing the variance σ\\sigmaσ, a common parameter for all kernels, sampled functions are more concentrated around the mean μ\\muμ.\n For the *Linear* kernel, setting the variance σb=0\\sigma\\_b=0σb​=0 results in a set of functions constrained to perfectly intersect the offset point ccc.\n If we set σb=0.2\\sigma\\_b=0.2σb​=0.2 we can model uncertainty, resulting in functions that pass close to ccc.\n \n\n\n### Posterior Distribution\n\n\n\n So what happens if we observe training data?\n Let’s get back to the model of Bayesian inference, which states that we can incorporate this additional information into our model, yielding the *posterior* distribution PX∣YP\\_{X|Y}PX∣Y​.\n We will now take a closer look at how to do this for Gaussian processes.\n \n\n\n\n First, we form the joint distribution PX,YP\\_{X,Y}PX,Y​ between the test points XXX and the training points YYY.\n The result is a multivariate Gaussian distribution with dimensions ∣Y∣+∣X∣|Y| + |X|∣Y∣+∣X∣.\n As you can see in the [figure below](#PosteriorFigure), we concatenate the training and the test points to compute the corresponding covariance matrix.\n \n\n\n\n For the next step we need one operation on Gaussian distributions that we have defined earlier.\n Using *conditioning* we can find PX∣YP\\_{X|Y}PX∣Y​ from PX,YP\\_{X,Y}PX,Y​.\n The dimensions of this new distribution matches the number of test points NNN and the distribution is also normal.\n It is important to note that conditioning leads to derived versions of the mean and the standard deviation: X∣Y∼N(μ′,Σ′)X|Y \\sim \\mathcal{N}(\\mu’, \\Sigma’)X∣Y∼N(μ′,Σ′). \n More details can be found in the [related section](#MargCond) on conditioning multivariate Gaussian distributions.\n The intuition behind this step is that the training points constrain the set of functions to those that pass through the training points.\n \n\n\n\n\n\n\n As mentioned before, the conditional distribution PX∣YP\\_{X|Y}PX∣Y​ forces the set of functions to precisely pass through each training point.\n In many cases this can lead to fitted functions that are unnecessarily complex.\n Also, up until now, we have considered the training points YYY to be perfect measurements.\n But in real-world scenarios this is an unrealistic assumption, since most of our data is afflicted with measurement errors or uncertainty.\n Gaussian processes offer a simple solution to this problem by modeling the error of the measurements.\n For this, we need to add an error term ϵ∼N(0,ψ2)\\epsilon \\sim \\mathcal{N}(0, \\psi^2)ϵ∼N(0,ψ2) to each of our training points:\n \n\n\nY=f(X)+ϵ\n Y = f(X) + \\epsilon\n Y=f(X)+ϵ\n\n We do this by slightly modifying the setup of the joint distribution PX,YP\\_{X,Y}PX,Y​:\n \n\n\nPX,Y=[XY]∼N(0,Σ)=N([00],[ΣXXΣXYΣYXΣYY+ψ2I])P\\_{X,Y} = \\begin{bmatrix} X \\\\ Y \\end{bmatrix} \\sim \\mathcal{N}(0, \\Sigma) = \\mathcal{N} \\left( \\begin{bmatrix} 0 \\\\ 0 \\end{bmatrix}, \\begin{bmatrix} \\Sigma\\_{XX} & \\Sigma\\_{XY} \\\\ \\Sigma\\_{YX} & \\Sigma\\_{YY}+\\psi^2I \\end{bmatrix} \\right)PX,Y​=[XY​]∼N(0,Σ)=N([00​],[ΣXX​ΣYX​​ΣXY​ΣYY​+ψ2I​])\n\n Again, we can use conditioning to derive the predictive distribution PX∣YP\\_{X|Y}PX∣Y​.\n In this formulation, ψ\\psiψ is an additional parameter of our model.\n \n\n\n\n Analogous to the prior distribution, we could obtain a prediction for our function values by sampling from this distribution.\n But, since sampling involves randomness, the resulting fit to the data would not be 
deterministic and our prediction could end up being an outlier.\n In order to make a more meaningful prediction we can use the other basic operation of Gaussian distributions.\n \n\n\n\n Through the *marginalization* of each random variable, we can extract the respective mean function value μi′\\mu’\\_iμi′​ and standard deviation σi′=Σii′\\sigma’\\_i = \\Sigma’\\_{ii}σi′​=Σii′​ for the iii-th test point.\n In contrast to the prior distribution, where we set the mean to μ=0\\mu=0μ=0, the result of conditioning the joint distribution of test and training data will most likely have a non-zero mean μ′≠0\\mu’ \\neq 0μ′≠0.\n Extracting μ′\\mu’μ′ and σ′\\sigma’σ′ does not only lead to a more meaningful prediction, it also allows us to make a statement about the confidence of the prediction.\n \n\n\n\n The [following figure](#Posterior) shows an example of the conditional distribution.\n At first, no training points have been observed.\n Accordingly, the mean prediction remains at 000 and the standard deviation is the same for each test point.\n By hovering over the covariance matrix you can see the influence of each point on the current test point.\n As long as no training points have been observed, the influence of neighboring points is limited locally.\n \n\n\n\n The training points can be activated by clicking on them, which leads to a constrained distribution.\n This change is reflected in the entries of the covariance matrix, and leads to an adjustment of the mean and the standard deviation of the predicted function.\n As we would expect, the uncertainty of the prediction is small in regions close to the training data and grows as we move further away from those points.\n \n\n\n\n\n\n\n In the constrained covariance matrix, we can see that the correlation of neighbouring points is affected by the\n training data.\n If a predicted point lies on the training data, there is no correlation with other points.\n Therefore, the function must pass directly through it.\n Predicted values further away are also affected by the training data — proportional to their distance.\n \n\n\n### Combining different kernels\n\n\n\n As described earlier, the power of Gaussian processes lies in the choice of the kernel function.\n This property allows experts to introduce domain knowledge into the process and lends Gaussian processes their flexibility to capture trends in the training data.\n For example, by choosing a suitable bandwidth for the RBF kernel, we can control how smooth the resulting function will be.\n \n\n\n\n A big benefit that kernels provide is that they can be combined together, resulting in a more specialized kernel.\n The decision which kernel to use is highly dependent on prior knowledge about the data, e.g. 
if certain characteristics are expected.\n Examples for this would be stationary nature, or global trends and patterns.\n As introduced in the [section on kernels](#Kernels), stationary means that a kernel is translation invariant and therefore not dependent on the index iii.\n This also means that we cannot model global trends using a strictly stationary kernel.\n Remember that the covariance matrix of Gaussian processes has to be positive semi-definite.\n When choosing the optimal kernel combinations, all methods that preserve this property are allowed.\n The most common kernel combinations would be addition and multiplication.\n \n\n\n\n Let’s consider two kernels, a linear kernel klink\\_{\\text{lin}}klin​ and a periodic kernel kperk\\_{\\text{per}}kper​, for example.\n This is how we would multiply the two:\n \n\n\nk∗(t,t′)=klin(t,t′)⋅kper(t,t′)\n k^{\\ast}(t,t’) = k\\_{\\text{lin}}(t,t’) \\cdot k\\_{\\text{per}}(t,t’)\n k∗(t,t′)=klin​(t,t′)⋅kper​(t,t′)\n\n However, combinations are not limited to the above example, and there are more possibilities such as concatenation or composition with a function.\n To show the impact of a kernel combination and how it might retain qualitative features of the individual kernels, take a look at the [figure below](#KernelCombinationsStatic).\n If we add a periodic and a linear kernel, the global trend of the linear kernel is incorporated into the combined kernel.\n The result is a periodic function that follows a linear trend.\n When combining the same kernels through multiplication instead, the result is a periodic function with a linearly growing amplitude away from linear kernel parameter ccc.\n \n\n\n\n\n\n\n If we draw samples from a combined linear and periodic kernel, we can observe the different retained characteristics in the new sample.\n Addition results in a periodic function with a global trend, while the multiplication increases the periodic amplitude outwards.\n \n\n\n Knowing more about how kernel combinations influence the shape of the resulting distribution, we can move on to a more complex example.\n In the [figure below](#KernelCombinations), the observed training data has an ascending trend with a periodic deviation.\n Using only a linear kernel, we can mimic a normal linear regression of the points.\n At first glance, the RBF kernel accurately approximates the points.\n But since the RBF kernel is stationary it will always return to μ=0\\mu=0μ=0 in regions further away from observed training data.\n This decreases the accuracy for predictions that reach further into the past or the future.\n An improved model can be created by combining the individual kernels through addition, which maintains both the periodic nature and the global ascending trend of the data.\n This procedure can be used, for example, in the analysis of weather data.\n \n\n\n\n\n\n Using the checkboxes, different kernels can be combined to form a new Gaussian process. 
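In code, such a combination is just an element-wise sum or product of the individual covariance matrices; a minimal sketch with illustrative parameter values (ours, not the article's implementation):

```python
import numpy as np

def linear_kernel(t, t_prime, sigma_b=0.5, c=0.0):
    """Linear kernel: sigma_b^2 + (t - c)(t' - c)."""
    return sigma_b ** 2 + np.outer(t - c, t_prime - c)

def periodic_kernel(t, t_prime, variance=1.0, lengthscale=1.0, period=2.0):
    """Periodic kernel with period P."""
    d = np.abs(t[:, None] - t_prime[None, :])
    return variance * np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / lengthscale ** 2)

t = np.linspace(-5, 5, 50)

# Sums and products of valid kernels are again valid (positive semi-definite) kernels.
Sigma_add = linear_kernel(t, t) + periodic_kernel(t, t)  # periodic function with a global trend
Sigma_mul = linear_kernel(t, t) * periodic_kernel(t, t)  # periodic, amplitude grows away from c
```

Sampling from these combined covariance matrices produces the two behaviours described above.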
Only by using a\n combination of kernels, it is possible to capture the characteristics of more complex training data.\n \n\n\n As discussed in the [section about GPs](#GaussianProcesses), a Gaussian process can model uncertain observations.\n This can be seen when only selecting the linear kernel, as it allows us to perform linear regression even if more than two points have been observed, and not all functions have to pass directly through the observed training data.\n \n\n\nConclusion\n----------\n\n\n\n With this article, you should have obtained an overview of Gaussian processes, and developed a deeper\n understanding on how they work.\n As we have seen, Gaussian processes offer a flexible framework for regression and several extensions exist that\n make them even more versatile.\n \n\n\n\n For instance, sometimes it might not be possible to describe the kernel in simple terms.\n To overcome this challenge, learning specialized kernel functions from the underlying data, for example by using deep learning, is an area of ongoing research.\n Furthermore, links between Bayesian inference, Gaussian processes and deep learning have been described in several papers.\n Even though we mostly talk about Gaussian processes in the context of regression, they can be adapted for\n different purposes, e.g. *model-peeling* and hypothesis testing.\n By comparing different kernels on the dataset, domain experts can introduce additional knowledge through\n appropriate combination and parameterization of the kernel.\n \n\n\n\n If we have sparked your interest, we have compiled a list of further [blog posts](#FurtherReading) on the topic of Gaussian processes.\n In addition, we have linked two [Python notebooks](#FurtherReading) that will give you some hands-on experience and help you to get started right away.", "date_published": "2019-04-02T20:00:00Z", "authors": ["Jochen Görtler", "Rebecca Kehlbeck", "Oliver Deussen"], "summaries": ["How to turn a collection of small building blocks into a versatile tool for solving regression problems."], "doi": "10.23915/distill.00017", "journal_ref": "distill-pub", "bibliography": [{"link": "http://www.gaussianprocess.org/gpml/chapters/RW.pdf", "title": "Gaussian Processes in Machine Learning"}, {"link": "https://doi.org/10.1007/s11263-009-0268-3", "title": "Gaussian Processes for Object Categorization"}, {"link": "http://staging.csml.ucl.ac.uk/archive/talks/41e4e55330421a6e230d0ea2b89440ea/Paper_16.pdf", "title": "Clustering Based on Gaussian Processes"}, {"link": "https://doi.org/10.1007/978-1-4757-3264-1", "title": "The Nature of Statistical Learning Theory"}, {"link": "https://www.cs.toronto.edu/~duvenaud/cookbook/", "title": "Automatic model construction with Gaussian processes"}, {"link": "http://papers.nips.cc/paper/3211-using-deep-belief-nets-to-learn-covariance-kernels-for-gaussian-processes.pdf", "title": "Using Deep Belief Nets to Learn Covariance Kernels for Gaussian Processes"}, {"link": "http://proceedings.mlr.press/v51/wilson16.pdf", "title": "Deep Kernel Learning"}, {"link": "https://arxiv.org/pdf/1711.00165v3.pdf", "title": "Deep Neural Networks as Gaussian Processes"}, {"link": "http://proceedings.mlr.press/v31/damianou13a.pdf", "title": "Deep Gaussian Processes"}]} {"id": "fab7daaf2a9e63a7020830649ac33a1b", "title": "Visualizing memorization in RNNs", "url": "https://distill.pub/2019/memorization-in-rnns", "source": "distill", "source_type": "blog", "text": "Memorization in Recurrent Neural Networks (RNNs) continues to pose a challenge\n in many 
applications. We’d like RNNs to be able to store information over many\n timesteps and retrieve it when it becomes relevant — but vanilla RNNs often struggle to do this.\n \n\n\n\n Several network architectures have been proposed to tackle aspects of this problem, such\n as Long Short-Term Memory (LSTM)\n units and Gated Recurrent Units (GRU).\n However, the practical problem of memorization still poses a challenge.\n As such, developing new recurrent units that are better at memorization\n continues to be an active field of research.\n \n\n\n\n To compare a recurrent unit against its alternatives, both past and recent\n papers, such as the Nested LSTM paper by Moniz et al.\n , heavily rely on quantitative\n comparisons. These comparisons often measure accuracy or\n cross entropy loss on standard problems such as Penn Treebank, Chinese\n Poetry Generation, or text8, where the task is to predict the\n next character given existing input.\n \n\n\n\n While quantitative comparisons are useful, they only provide partial\n insight into how a recurrent unit memorizes. A model can, for example,\n achieve high accuracy and low cross entropy loss by just providing highly accurate\n predictions in cases that only require short-term memorization, while\n being inaccurate at predictions that require long-term\n memorization.\n For example, when autocompleting words in a sentence, a model with only short-term understanding could still exhibit high accuracy completing the ends of words once most of the characters are present.\n However, without longer term contextual understanding it won’t be able to predict words when only a few characters are known.\n \n\n\n\n This article presents a qualitative visualization method for comparing\n recurrent units with regards to memorization and contextual understanding.\n The method is applied to the three recurrent units mentioned above: Nested LSTMs, LSTMs, and GRUs.\n \n\n\nRecurrent Units\n---------------\n\n\n\n The networks that will be analyzed all use a simple RNN structure:\n \n\n\n h_ℓ^t = Unit(h_ℓ^{t-1}, h_{ℓ-1}^t)\n \n\n where h_ℓ^t is the output for layer ℓ at time t, and Unit is the recurrent unit of choice.\n \n\n\n In theory, the time dependency allows the unit in each iteration to know\n about every part of the sequence that came before. However, this time\n dependency typically causes a vanishing gradient problem that results in\n long-term dependencies being ignored during training\n .\n \n\n\n\n\n**Vanishing Gradient:** the contribution from the\n earlier steps becomes insignificant in the gradient for the vanilla RNN\n unit.\n \n\n\n Several solutions to the vanishing gradient problem have been proposed over\n the years. The most popular are the aforementioned LSTM and GRU units, but this\n is still an area of active research. Both LSTM and GRU are well known\n and [thoroughly explained in literature](http://colah.github.io/posts/2015-08-Understanding-LSTMs/).
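To make the stacked structure above concrete, here is a schematic NumPy sketch (ours; a vanilla RNN cell stands in for the recurrent unit, and LSTM, GRU, or Nested LSTM cells would plug into the same interface):

```python
import numpy as np

def vanilla_unit(h_prev, x, W, U, b):
    """Stand-in recurrent unit; gated cells expose the same (previous state, input) interface."""
    return np.tanh(W @ h_prev + U @ x + b)

def stacked_rnn(xs, params):
    """xs: sequence of input vectors; params: one (W, U, b) tuple per layer."""
    h = [np.zeros(W.shape[0]) for (W, U, b) in params]   # h[l] holds h_l^{t-1}
    for x in xs:                                          # loop over time
        below = x
        for l, (W, U, b) in enumerate(params):
            h[l] = vanilla_unit(h[l], below, W, U, b)     # h_l^t = Unit(h_l^{t-1}, h_{l-1}^t)
            below = h[l]
    return h[-1]                                          # top-layer output at the final timestep

# Tiny example: 2 layers of 8 units on a sequence of 5 four-dimensional inputs.
rng = np.random.default_rng(0)
params = [(rng.normal(size=(8, 8)), rng.normal(size=(8, 4)), np.zeros(8)),
          (rng.normal(size=(8, 8)), rng.normal(size=(8, 8)), np.zeros(8))]
print(stacked_rnn([rng.normal(size=4) for _ in range(5)], params).shape)  # (8,)
```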
Recently, Nested LSTMs have also been proposed\n  — an explanation of Nested LSTMs\n can be found [in the appendix](#appendix-nestedlstm).\n \n\n\n\n![Nested LSTM Diagram](graphics/nlstm-web.svg)\n\n**Recurrent Unit, Nested LSTM:** makes the cell update depend on another\n LSTM unit, supposedly this allows more long-term memory compared to\n stacking LSTM layers.\n \n![Long Short Term Memory Diagram](graphics/lstm-web.svg)\n\n**Recurrent Unit, LSTM:** allows for long-term\n memorization by gateing its update, thereby solving the vanishing gradient\n problem.\n \n![Gated Recurrent Unit Diagram](graphics/gru-web.svg)\n\n**Recurrent Unit, GRU:** solves the vanishing gradient\n problem without depending on an internal memory state.\n \n\n\n It is not entirely clear why one recurrent unit performs better than another\n in some applications, while in other applications it is another type of\n recurrent unit that performs better. Theoretically they all solve the vanishing\n gradient problem, but in practice their performance is highly application\n dependent.\n \n\n\n\n Understanding why these differences occur is likely an opaque and\n challenging problem. The purpose of this article is to demonstrate a\n visualization technique that can better highlight what these differences\n are. Hopefully, such an understanding can lead to a deeper understanding.\n \n\n\nComparing Recurrent Units\n-------------------------\n\n\n\n Comparing different Recurrent Units is often more involved than simply comparing the accuracy or cross entropy\n loss. Differences in these high-level quantitative measures\n can have many explanations and may only be because of some small improvement\n in predictions that only requires short-term contextual understanding,\n while it is often the long-term contextual understanding that is of interest.\n \n\n\n### A problem for qualitative analysis\n\n\n\n Therefore a good problem for qualitatively analyzing contextual\n understanding should have a human-interpretable output and depend both on\n long-term and short-term contextual understanding. The typical problems\n that are often used, such as Penn Treebank, Chinese Poetry Generation, or\n text8 generation do not have outputs that are easy to reason about, as they\n require an extensive understanding of either grammar, Chinese poetry, or\n only output a single letter.\n \n\n\n\n To this end, this article studies the autocomplete problem. Each character is mapped\n to a target that represents the entire word. The space leading up to the word should also map to that target.\n This prediction based on the space character is in particular useful for showing contextual understanding.\n \n\n\n\n The autocomplete problem is quite similar to the text8 generation\n problem: the only difference is that instead of predicting the next letter,\n the model predicts an entire word. This makes the output much more\n interpretable. Finally, because of its close relation to text8 generation,\n existing literature on text8 generation is relevant and comparable,\n in the sense that models that work well on text8 generation should work\n well on the autocomplete problem.\n \n\n\n\nUser types input sequence.\nRecurrent neural network processes the sequence.\nThe output for the last character is used.\nThe most likely suggestions are extracted.\n\n\n\n\n\n\n\n\n**Autocomplete:** An application that has a humanly\n interpretable output, while depending on both short and long-term\n contextual understanding. 
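As a small sketch of how such character-to-word targets can be constructed from raw text (our own illustration, not the article's data pipeline):

```python
def autocomplete_targets(text):
    """Map every character of a word, and the space leading up to it, to that word."""
    pairs, word_start = [], 0
    for i, ch in enumerate(text + " "):
        if ch == " ":
            word = text[word_start:i]
            for j in range(max(word_start - 1, 0), i):  # include the preceding space
                pairs.append((text[j], word))
            word_start = i + 1
    return pairs

print(autocomplete_targets("the cat sat"))
# [('t', 'the'), ('h', 'the'), ('e', 'the'), (' ', 'cat'), ('c', 'cat'), ...]
```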
In this case, the network uses past information\n and understands the next word should be a country.\n \n\n\n\n The output in this figure was produced by the GRU model;\n all model setups are [described in the appendix](#appendix-autocomplete).\n\n \n Try [removing the last letters](javascript:arDemoShort();) to see\n that the network continues to give meaningful suggestions.\n \n\n ([reset](javascript:arDemoReset();)).\n \n\n\n\n\n\n The autocomplete dataset is constructed from the full\n [text8](http://mattmahoney.net/dc/textdata.html) dataset. The\n recurrent neural networks used to solve the problem have two layers, each\n with 600 units. There are three models, using GRU, LSTM, and Nested LSTM.\n See [the appendix](#appendix-autocomplete) for more details.\n \n\n\n### Connectivity in the Autocomplete Problem\n\n\n\n In the recently published Nested LSTM paper\n , they qualitatively compared their\n Nested LSTM unit to other recurrent units, to show how it memorizes in\n comparison, by visualizing individual cell activations.\n \n\n\n\n This visualization was inspired by Karpathy et al.\n where they identify cells\n that capture a specific feature. To identify a specific\n feature, this visualization approach works well. However, it is not a useful\n argument for memorization in general as the output is entirely dependent\n on what feature the specific cell captures.\n \n\n\n\n Instead, to get a better idea of how well each model memorizes and uses\n memory for contextual understanding, the connectivity between the desired\n output and the input is analyzed. This is calculated as:\n \n\n\n\n\n\n\n\n Input time index.\n \n\n\n\n Output time index.\n \n\n\n\n Magnitude of the gradient, between the logits for the desired output and the input\n .\n \n\n\n\n Exploring the connectivity gives a surprising amount of insight into the\n different models’ ability for long-term contextual understanding. Try and\n interact with the figure below yourself to see what information the\n different models use for their predictions.\n \n\n\n\n\n\n\n\n**Connectivity:** the connection strength between\n the target for the selected character and the input characters is highlighted in green\n ([reset](javascript:connectivitySetIndex(null);)).\n *Hover over or tap the text to change the selected character.*\n\n\n\n Let’s highlight three specific situations:\n \n\n\n\n\n\n1\n\n\n Observe how the models predict the word “learning” with [only the first two\n characters as input](javascript:connectivitySetIndex(106);). The Nested LSTM model barely uses past\n information and thus only suggests common words starting with the letter “l”.\n \n\n\n\n In contrast, the LSTM and GRU models both suggests the word “learning”.\n The GRU model shows stronger connectivity with the word “advanced”,\n and we see in the suggestions that it predicts a higher probability for “learning” than the LSTM model.\n \n\n\n\n\n\n2\n\n\n Examine how the models predict the word “grammar”.\n This word appears twice; when it appears for the first time the models have very little context.\n Thus, no model suggests “grammar” until it has\n [seen at least 4 characters](javascript:connectivitySetIndex(32);).\n \n\n\n\n When “grammar” appears for the second time, the models have more context.\n The GRU model is able to predict the word “grammar” with only [1 character from the word itself](javascript:connectivitySetIndex(159);). 
The LSTM and Nested LSTM again\n need [at least 4 characters](javascript:connectivitySetIndex(162);).\n \n\n\n\n\n\n3\n\n\n Finally, let’s look at predicting the word “schools”\n [given only the first character](javascript:connectivitySetIndex(141);). As in the other cases,\n the GRU model seems better at using past information for\n contextual understanding.\n \n\n\n\n What makes this case noteworthy is how the LSTM model appears to\n use words from almost the entire sentence as context. However,\n its suggestions are far from correct and have little to do\n with the previous words it seems to use in its prediction.\n This suggests that the LSTM model in this setup is capable of\n long-term memorization, but not long-term contextual understanding.\n \n\n\n\n\n\n1\n2\n3\n\n\n\n*The colored number links above change the connectivity figure’s displayed timestep and explanation.*\n\n\n\n These observations show that the connectivity visualization is a powerful tool\n for comparing models in terms of which previous inputs they use for contextual understanding.\n However, it is only possible to compare models on the same dataset, and\n on a specific example. As such, while these observations may show that\n Nested LSTM is not very capable of long-term contextual understanding in this example;\n these observations may not generalize to other datasets or hyperparameters.\n \n\n\n### Future work; quantitative metric\n\n\n\n From the above observations it appears that short-term contextual understanding\n often involves the word that is being predicted itself. That is, the models switch to\n using previously seen letters from the word itself, as more letters become\n available. In contrast, at the beginning of predicting a word, models — especially the\n GRU network — use previously seen words as context for the prediction.\n \n\n\n\n This observation suggests a quantitative metric: measure the accuracy given\n how many letters from the word being predicted are already known.\n It is not clear that this is best quantitative metric: it is highly problem dependent,\n and it also doesn’t summarize the model to a single number, which one may wish for a more direct comparison.\n \n\n\n\n\n\n**Accuracy Graph**: shows the accuracy\n given a fixed number of characters in a word that the RNN has seen.\n 0 characters mean that the RNN has only seen the space leading up\n to the word, including all the previous text which should provide context.\n The different line styles, indicates if the correct word should appear\n among the top 1, 2, or 3 suggestions.\n \n\n\n These results suggest that the GRU model is better at long-term contextual\n understanding, while the LSTM model is better at short-term contextual\n understanding. These observations are valuable, as it justifies why the\n [overall accuracy of the GRU and LSTM models](#ar-overall-accuracy) are almost identical, while the connectivity visualization shows that\n the GRU model is far better at long-term contextual understanding.\n \n\n\n\n While more detailed quantitative metrics like this provides new insight,\n qualitative analysis like the connectivity figure presented\n in this article still has great value. As the connectivity visualization gives an\n intuitive understanding of how the model works, which a quantitative metric\n cannot. 
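For readers who want to compute something like the connectivity measure themselves, here is a minimal sketch (ours, using PyTorch autograd on a small stand-in model rather than the article's trained networks):

```python
import torch
import torch.nn as nn

# Tiny stand-in: embedding -> GRU -> per-timestep vocabulary logits.
vocab_size, emb_size, hidden_size = 50, 16, 32
embed = nn.Embedding(vocab_size, emb_size)
rnn = nn.GRU(emb_size, hidden_size, num_layers=2, batch_first=True)
head = nn.Linear(hidden_size, vocab_size)

chars = torch.randint(0, vocab_size, (1, 20))   # one input sequence of 20 characters
x = embed(chars)
x.retain_grad()                                  # keep d logit / d input embedding
out, _ = rnn(x)
logits = head(out)                               # shape (1, 20, vocab_size)

t_out, target = 15, 7                            # chosen output timestep and desired target
logits[0, t_out, target].backward()

# Connectivity of every input timestep to that output: gradient magnitude per timestep.
connectivity = x.grad.norm(dim=-1).squeeze(0)    # shape (20,)
print(connectivity)
```

Plotted over the input characters (and suitably normalized), this gradient magnitude is essentially what the connectivity figure highlights in green.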
It also shows that a wrong prediction can still be considered a\n useful prediction, such as a synonym or a contextually reasonable\n prediction.\n \n\n\nConclusion\n----------\n\n\n\n Looking at overall accuracy and cross entropy loss in itself is not that\n interesting. Different models may prioritize either long-term or\n short-term contextual understanding, while both models can have similar\n accuracy and cross entropy.\n \n\n\n\n A qualitative analysis, where one looks at how previous input is used in\n the prediction is therefore also important when judging models. In this\n case, the connectivity visualization together with the autocomplete\n predictions, reveals that the GRU model is much more capable of long-term\n contextual understanding, compared to LSTM and Nested LSTM. In the case of\n LSTM, the difference is much higher than one would guess from just looking\n at the overall accuracy and cross entropy loss alone. This observation is\n not that interesting in itself as it is likely very dependent on the\n hyperparameters, and the specific application.\n \n\n\n\n Much more valuable is that this visualization method makes it possible\n to intuitively understand how the models are different, to a much higher\n degree than just looking at accuracy and cross entropy. For this application,\n it is clear that the GRU model uses repeating words and semantic meaning\n of past words to make its prediction, to a much higher degree than the LSTM\n and Nested LSTM models. This is both a valuable insight when choosing the\n final model, but also essential knowledge when developing better models\n in the future.", "date_published": "2019-03-25T20:00:00Z", "authors": ["Andreas Madsen"], "summaries": ["Inspecting gradient magnitudes in context can be a powerful tool to see when recurrent units use short-term or long-term contextual understanding."], "doi": "10.23915/distill.00016", "journal_ref": "distill-pub", "bibliography": [{"link": "https://arxiv.org/pdf/1406.1078.pdf", "title": "Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation"}, {"link": "https://arxiv.org/pdf/1801.10308.pdf", "title": "Nested LSTMs"}, {"link": "https://doi.org/10.3115/1075812.1075835", "title": "The Penn Treebank: Annotating Predicate Argument Structure"}, {"link": "http://mattmahoney.net/dc/textdata", "title": "text8 Dataset"}, {"link": "https://arxiv.org/pdf/1211.5063.pdf", "title": "On the difficulty of training recurrent neural networks"}, {"link": "https://arxiv.org/pdf/1506.02078.pdf", "title": "Visualizing and Understanding Recurrent Networks"}]} {"id": "3a645443102ffa1eb6428f582ff73b62", "title": "Activation Atlas", "url": "https://distill.pub/2019/activation-atlas", "source": "distill", "source_type": "blog", "text": "Introduction\n------------\n\n\nNeural networks can learn to classify images more accurately than any system humans directly design. This raises a natural question: What have these networks learned that allows them to classify images so well?\n\n \n\nFeature visualization is a thread of research that tries to answer this question by letting us “see through the eyes” of the network . It began with research into [visualizing individual neurons](https://distill.pub/2017/feature-visualization/) and trying to determine what they respond to. Because neurons don’t work in isolation, this led to applying feature visualization to [simple combinations of neurons](https://distill.pub/2017/feature-visualization/#interaction). 
But there was still a problem — what combinations of neurons should we be studying? A natural answer (foreshadowed by work on model inversion ) is to [visualize activations](https://distill.pub/2018/building-blocks/#ActivationGridSingle) , the combination of neurons firing in response to a particular input.\n\n \n\nThese approaches are exciting because they can make the hidden layers of networks comprehensible. These layers are the heart of how neural networks outperform more traditional approaches to machine learning and historically, we’ve had little understanding of what happens in themWith the exception of the first hidden layer.. Feature visualization addresses this by connecting hidden layers back to the input, making them meaningful.\n\n \n\nUnfortunately, visualizing activations has a major weakness — it is limited to seeing only how the network sees a single input. Because of this, it doesn’t give us a big picture view of the network. When what we want is a map of an entire forest, inspecting one tree at a time will not suffice.\n\n \n\n\n There are techniques which give a more global view, but they tend to have other downsides.\n For example, Karpathy’s [CNN codes visualization](https://cs.stanford.edu/people/karpathy/cnnembed/)\n gives a global view of a dataset by taking each image and organizing them by their activation values from a neural network.\n Showing which images the model sees as similar does help us infer some ideas about what features the network is responding to,\n but feature visualization makes those connections much more explicit.\n Nguyen, et al use t-SNE to make more diverse neuron visualizations,\n generating diverse starting points for the optimization process by clustering images in the t-SNE map.\n This reveals a broader picture of what the neuron detects but is still focused on individual neurons.\n \n\n\nIn this article we introduce *activation atlases* to this quiver of techniques. (An example is shown at the top of this article.) Broadly speaking, we use a technique similar to the one in CNN codes, but instead of showing input data, we show feature visualizations of averaged activations. By combining these two techniques, we can get the advantages of each in one view — a global map seen through the eyes of the network.\n\n \n\nIn theory, showing the feature visualizations of the [basis neurons](https://distill.pub/2017/feature-visualization/appendix/) would give us the global view of a network that we are seeking. In practice, however, neurons are rarely used by the network in isolation, and it may be difficult to understand them that way. As an analogy, while the 26 letters in the alphabet provide a basis for English, seeing how letters are commonly combined to make words gives far more insight into the concepts that can be expressed than the letters alone. Similarly, activation atlases give us a bigger picture view by showing common combinations of neurons.\n\n \n\n\n\n These atlases not only reveal visual abstractions within a model, but later in the article we will show that they can reveal high-level misunderstandings in a model that can be exploited. For example, by looking at an activation atlas we will be able to see why a picture of a baseball can switch the classification of an image from “grey whale” to “great white shark”.\n \n\n\n\n\nOf course, activation atlases do have limitations. 
In particular, they’re dependent on the distribution of the data we choose to sample activations from (in our examples, we use one million images chosen at random from the ImageNet dataset training data). As a result, they will only show the activations that exist within the distribution of the sample data. However, while it’s important to be aware of these limitations — we’ll discuss them in much more depth later! — Activation Atlases still give us a new kind of overview of what neural networks can represent.\n\n\n\n\n\nLooking at a Single Image\n-------------------------\n\n\nBefore we dive into Activation Atlases, let’s briefly review how we use feature visualization to make activation vectors meaningful (“see through the network’s eyes”). This technique was introduced in [Building Blocks](https://distill.pub/2018/building-blocks/) , and will be the foundation of Activation Atlases.\n\n \n\n\n Throughout this article, we’ll be focussing on a particular neural network: InceptionV1 \n (also known as “GoogLeNet”).\n When it came out, it was notable for [winning](http://www.image-net.org/challenges/LSVRC/2014/results#clsin) the classification task in the 2014 ImageNet Large Scale Visual Recognition Challenge .\n \n\n\n\n InceptionV1 consists of a number of layers, which we refer to as “mixed3a”, “mixed3b”, “mixed4a”, etc., and sometimes shortened to just “3a”. Each layer successively builds on the previous layers. \n \n\n\n\n\nInceptionV1 builds up its understanding of images over several layers (see [overview](https://distill.pub/2017/feature-visualization/appendix/) from ). It was trained on ImageNet ILSVRC . Each layer actually has several component parts, but for this article we’ll focus on these larger groups.\n\nTo visualize how InceptionV1 sees an image, the first step is to feed the image into the network and run it through to the layer of interest. Then we collect the activations — the numerical values of how much each neuron fired. If a neuron is excited by what it is shown, its activation value will be positive.\n\n \n\nUnfortunately these vectors of activation values are just vectors of unitless numbers and not particularly interpretable by people. This is where [feature visualization](https://distill.pub/2017/feature-visualization/) comes in.\n\n Roughly speaking, we can think of feature visualization as creating an idealized image of what the network thinks would produce a particular activation vector. Whereas we normally use a network to transform an image into an activation vector, in feature visualization we go in the opposite direction. Starting with an activation vector at a particular layer, we create an image through an iterative optimization process.\n\n \n\nBecause InceptionV1 is a [convolutional network](http://colah.github.io/posts/2014-07-Conv-Nets-Modular/), there is not just one activation vector per layer per image.\n This means that the same neurons are run on each patch of the previous layer.\n Thus, when we pass an entire image through the network, each neuron will be evaluated hundreds of times, once for each overlapping patch of the image. We can consider the vectors of how much each neuron fired for each patch separately.\n\n \n\n\n\n\n\n\n\nThe result is a grid of feature visualizations, one for each patch. This shows us how the network sees different parts of the input image.\n\n \n\n\nAggregating Multiple Images\n---------------------------\n\n\nActivation grids show how the network sees a single image, but what if we want to see more? 
What if we want to understand how it reacts to millions of images?\n\n \n\nOf course, we could look at individual activation grids for those images one by one. But looking at millions of examples doesn’t scale, and human brains aren’t good at comparing lots of examples without structure. In the same way that we need a tool like a histogram in order to understand millions of numbers, we need a way to aggregate and organize activations if we want to see meaningful patterns in millions of them.\n\n \n\nLet’s start by collecting activations from one million images. We’ll randomly select one spatial activation per image.We avoid the edges due to boundary effects. This gives us one million activation vectors. Each of the vectors is high-dimensional, perhaps 512 dimensions! With such a complex set of data, we need to organize and aggregate it if we want a big picture view.\n\n \n\nThankfully, we have modern dimensionality reduction techniques at our disposal. These algorithms, such as t-SNE and UMAP , can project high-dimensional data like our collection of activation vectors into useful 2D layouts, preserving some of the local structure of the original space. This takes care of organizing our activation vectors, but we also need to aggregate into a more manageable number of elements — one million dots would be hard to interpret. We’ll do this by drawing a grid over the 2D layout we created with dimensionality reduction. For each cell in our grid, we average all the activations that lie within the boundaries of that cell, and use feature visualization to create an iconic representation.\n\n \n\n\n\n\nA randomized set of one million images is fed through the network, collecting one random spatial activation per image.\nThe activations are fed through UMAP to reduce them to two dimensions. They are then plotted, with similar activations placed near each other.\nWe then draw a grid and average the activations that fall within a cell and run feature inversion on the averaged activation. 
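In code, the collection-and-averaging pipeline just described might look roughly like this (our sketch; it assumes the `umap-learn` package and a hypothetical `activations.npy` file holding one spatial activation vector per image):

```python
import numpy as np
import umap  # provided by the umap-learn package

# One randomly chosen spatial activation vector per image, e.g. shape (1_000_000, 512).
activations = np.load("activations.npy")

# 1. Reduce the activation vectors to 2-D (t-SNE is another option).
xy = umap.UMAP(n_components=2).fit_transform(activations)

# 2. Overlay a regular grid and average the activation vectors that fall in each cell.
grid = 40
cell = np.clip(((xy - xy.min(0)) / (xy.max(0) - xy.min(0)) * grid).astype(int), 0, grid - 1)
sums = np.zeros((grid, grid, activations.shape[1]))
counts = np.zeros((grid, grid))
for (gx, gy), h in zip(cell, activations):
    sums[gx, gy] += h
    counts[gx, gy] += 1
cell_means = sums / np.maximum(counts, 1)[..., None]

# 3. Each non-empty cell_means[gx, gy] would then be rendered with feature visualization.
```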
We also optionally size the grid cells according to the density of the number of activations that are averaged within.\n\n\n\n\n\n\n We perform feature visualization with the regularizations described in Feature Visualization (in particular, [transformation robustness](https://distill.pub/2017/feature-visualization/#regularizer-playground-robust)).\n However, we use a slightly non-standard objective.\n Normally, to visualize a direction in activation space, vvv, one maximizes the dot product with the activation vector hhh at a position: hx,y⋅vh\\_{x,y} \\cdot vhx,y​⋅v.\n We find it helpful to use an objective that emphasizes angle more heavily by multiplying the dot product by cosine similarity, leading to objectives of the following form:\n (hx,y⋅v)n+1(∣∣hx,y∣∣⋅∣∣v∣∣)n\\frac{(h\\_{x,y} \\cdot v)^{n+1}}{(||h\\_{x,y}|| \\cdot ||v||)^{n}}(∣∣hx,y​∣∣⋅∣∣v∣∣)n(hx,y​⋅v)n+1​.\n We also find that whitening the activation space to unstretch it can help improve feature visualization.\n We don’t yet fully understand this phenomenon.\n A reference implementation of this can be seen in the attached notebooks, and more general discussion can be found in this [github issue](https://github.com/tensorflow/lucid/issues/116).\n \n\n\n\n For each activation vector, we also compute an *attribution* vector.\n The attribution vector has an entry for each class, and approximates the amount that the activation vector influenced the logit for each class.\n Attribution vectors generally depend on the surrounding context.\n We follow Building Blocks in computing the attribution of the activation vector at a position, hx,yh\\_{x,y}hx,y​, to a class logit, logitc\\text{logit}\\_clogitc​ as hx,y⋅∇hx,ylogitch\\_{x,y} \\cdot \\nabla\\_{h\\_{x,y}} \\text{logit}\\_chx,y​⋅∇hx,y​​logitc​.\n That is, we estimate that the effect of a neuron on a logit is the rate at which increasing the neuron affects the logit.\n This is similar to Grad-CAM , but without the spatial averaging of the gradient.\n Instead, we reduce noise in the gradient by using a continuous relaxation of the gradient for max pooling in computing the gradient (as in ).\n (A detailed reference implementation can be found in [this notebook](https://colab.research.google.com/github/tensorflow/lucid/blob/master/notebooks/building-blocks/AttrSpatial.ipynb).)\n The attribution shown for cells in the activation atlas is the average of attribution vectors for activations in that cell.\n \n\n\n\n This average attribution can be thought of as showing what classes that cell tends to support, marginalizing over contexts.\n At early layers, the average attribution is very small and the top classes are fairly arbitrary because low-level visual features like textures tend to not be very discriminative without context.\n \n\n\nSo, how well does this work? Well, let’s try applying it to InceptionV1 at [layer mixed4c](https://distill.pub/2017/feature-visualization/appendix/googlenet/4c.html).\n\n\n \n\n\n\nThis atlas can be a bit overwhelming at first glance — there’s a lot going on! This diversity is a reflection of the variety of abstractions and concepts the model has developed. Let’s take a tour to examine this atlas in more depth.\n\n \n\nIf we look at the top-left of the atlas, we see images which look like animal heads. There is some differentiation between different types of animals, but it seems to be more a collection of elements of generic mammals — eyes, fur, noses — rather than a collection of different classes of animals. 
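As a rough sketch of the attribution computation described above (ours and schematic; a tiny convolutional stand-in replaces InceptionV1, and the class index and position are arbitrary):

```python
import torch
import torch.nn as nn

# "features" stands in for a mid-level layer, "head" for the rest of the network up to the logits.
features = nn.Sequential(nn.Conv2d(3, 512, kernel_size=3, padding=1), nn.ReLU())
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, 1000))

img = torch.randn(1, 3, 64, 64)
h = features(img)            # activations, shape (1, 512, 64, 64)
h.retain_grad()
logits = head(h)

c, x, y = 207, 10, 20        # class index and spatial position
logits[0, c].backward()

# Attribution of the activation vector at (x, y) to class c:  h_{x,y} . d logit_c / d h_{x,y}
attribution = (h[0, :, x, y] * h.grad[0, :, x, y]).sum()
print(attribution.item())
```

Repeating this for every class gives the attribution vector at that position; averaging those vectors over all activations that fall in a grid cell gives the per-cell class labels shown in the atlas.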
We’ve also added labels that show which class each averaged activation most contributes to. Please note, in some areas of a layer this early in the network these attribution labels can be somewhat chaotic. In early layers the attribution vectors have a small magnitude since they don’t have a consistent effect on the output.\n\n\n\nAs we move further down we start to see different types of fur and the backs of four-legged animals.\n\n \n\n\nBelow this, we find different animal legs and feet resting on different types of ground.\n \n\n\nBelow the feet we start to lose any identifiable parts of animals, and see isolated grounds and floors. We see attribution toward environments like “sandbar” and also toward things that are found on the ground, like “doormat” or “ant”.\n\n \n\n\nThese sandy, rocky backgrounds slowly blend into beaches and bodies of water. Here we see lakes and oceans, both above and below water. Though the network does have certain classes like “seashore”, we see attribution toward many sea animals, without any visual references to the animals themselves. While not unexpected, it is reassuring to see that the activations that are used to identify the sea for the class “seashore” are the same ones used when classifying “starfish” or “sea lion”. There is also no real distinction at this point between lakes and ocean — “lakeside” and “hippopotamus” attributions are intermingled with “starfish” and “stingray”.\n\n \n\n\nNow let’s jump to the other side of the atlas, where we can see many variations of text detectors. These will be useful when identifying classes such as “menu”, “web site” or “book jacket”.\n\n \n\n\nMoving upward, we see many variations of people. There are very few classes that specifically identify people in ImageNet, but people are present in lots of the images. We see attribution toward things people use (“hammer”, “flute”), clothes that people wear (“bow tie”, “maillot”) and activities that people participate in (“basketball”). There is a uniformity to the skin color in these visualizations which we suspect is a reflection of the distribution of the data used for training. (You can browse the ImageNet training data by category online: [swimming trunks](http://image-net.org/synset?wnid=n04371430#), [diaper](http://image-net.org/synset?wnid=n03188531), [band aid](http://image-net.org/synset?wnid=n02786058), [lipstick](http://image-net.org/synset?wnid=n03676483), etc.)\n\n \n\n\nAnd finally, moving back to the left, we can see round food and fruit organized mostly by colors — we see attribution toward “lemon”, “orange” and “fig”.\n\n \n\n\nWe can also trace curved paths through this manifold that we’ve created. Not only are regions important, but certain movements through the space seem to correspond to human interpretable qualities. With the fruit, we can trace a path that seems to correlate with the size and number of fruits in the frame.\n\n \n\n\nSimilarly, with people, we can trace a path that seems to correspond to how many people are in the frame, whether it’s a single person or a crowd.\n\n \n\n\nWith the ground detectors, we can trace a path from water to beach to rocky cliffs.\n\n \n\n\nIn the plants region, we can trace a path that seems to correspond to how blurry the plant is. This could possibly be used to determine relative size of objects because of the typical focal lengths of cameras. 
Close up photos of small insects have more opportunity for blurry background foliage than photos of larger animals, like monkeys.\n\n \n\n\nIt is important to note that these paths are constructed after the fact in the low-dimensional projection. They are smooth paths in this reduced projection but we don’t necessarily know how the paths operate in the original higher-dimensional activation space.\n\n\n\n\nLooking at Multiple Layers\n--------------------------\n\n\nIn the previous section we focused on one layer of the network, mixed4c, which is in the middle of our network. Convolutional networks are generally deep, consisting of many layers that progressively build up more powerful abstractions. In order to get a holistic view, we must look at how the model’s abstractions develop over several layers.\n\n \n\n\n\nIn early layers (e.g. [mixed3b](https://distill.pub/2017/feature-visualization/appendix/googlenet/3b.html)), the network seems to represent textures and simple patterns.\n By mid layers (e.g. [mixed4c](https://distill.pub/2017/feature-visualization/appendix/googlenet/4c.html)), icons evoke individual concepts like eyes, leaves, and water that are shared among many classes.\n In the final layers (e.g. [mixed5b](https://distill.pub/2017/feature-visualization/appendix/googlenet/5b.html)), abstractions become more closely aligned with the output classes.\n\nTo start, let’s compare three layers from different areas of the network to try to get a sense for the different personalities of each — one very early layer ([mixed3b](https://distill.pub/2017/feature-visualization/appendix/googlenet/3b.html)), one layer from the middle ([mixed4c](https://distill.pub/2017/feature-visualization/appendix/googlenet/4c.html)), and the final layer ([mixed5b](https://distill.pub/2017/feature-visualization/appendix/googlenet/5b.html)) before the logits. We’ll focus on areas of each layer that contribute to the classification of “cabbage”.\n\n \n\n\n\n#### Mixed3b\n\n\n#### Mixed4c\n\n\n#### Mixed5b\n\n\n\n\n\n\nAs you move through the network, the later layers seem to get much more specific and complex. This is to be expected, as each layer builds its activations on top of the preceding layer’s activations. The later layers also tend to have larger receptive fields than the ones that precede them (meaning they are shown larger subsets of the image), so the concepts seem to encompass more of the whole object.\n\n\n \n\nThere is another phenomenon worth noting: not only are concepts being refined, but new concepts are appearing out of combinations of old ones. Below, you can see how sand and water are distinct concepts in a middle layer, mixed4c, both with strong attributions to the classification of “sandbar”.
Contrast this with a later layer, mixed5b, where the two ideas seem to be fused into one activation.\n\n \n\n\n#### Mixed4c\n\n\n#### Mixed5b\n\n\n\n\n\n\nFinally, if we zoom out a little, we can see how the broader shape of the activation space changes from layer to layer. By looking at similar regions in several consecutive layers, we can see concepts getting refined and differentiated — In mixed4a we see very vague, generic blob, which gets refined into much more specific “peninsulas” by mixed4e.\n\n\n\n\n\n\n\n#### Mixed4a\n\n\n\n In mixed4a there is a vague “mammalian” area.\n \n\n\n\n\n\n#### Mixed4b\n\n\n\n By mixed4b, animals and people have been disentangled, with some fruit and food emerging in the middle.\n \n\n\n\n\n\n#### Mixed4c\n\n\n\n All the concepts are further refined and differentiated into small “peninsulas”.\n \n\n\n\n\n\n#### Mixed4d\n\n\n\n The specialization continues in mixed4d.\n \n\n\n\n\n\n#### Mixed4e\n\n\n\n And further still in mixed4e.\n \n\n\n\n\nBelow you can browse many more of the layers of InceptionV1. You can compare the\n [curved edge detectors of mixed4a](#) with the\n [bowls and cups of mixed5b](#). Mixed4b has some\n [interesting text and pattern detectors](#), whereas mixed5a appears to use those to differentiate\n [menus from crossword puzzles from rulers](#). In early layers, like mixed4b, you’ll see things that have similar textures near each other, like\n [fabrics](#). In later layers, you’ll see\n [specific types of clothing](#).\n\n \n\n\n\n\n\n\nFocusing on a Single Classification\n-----------------------------------\n\n\nLooking at an atlas of all activations can be a little overwhelming, especially when you’re trying to understand how the network goes about ranking one particular class. For instance, let’s investigate how the network classifies a “fireboat”.\n\n \n\n\n\nAn image labeled “fireboat” from ImageNet.\n\nWe’ll start by looking at an atlas for the last layer, mixed5b. Instead of showing all the activations, however, we’ll calculate the amount that each activation contributes toward a classification of “fireboat” and then map that value to the opacity of the activation icon.In the case of mixed5b, determining this contribution is fairly straightforward because the relationship between activations at mixed5b and the logit values is linear. When there are multiple layers between our present one and the output — and as a result, the relationship is non-linear — it’s a little less clear what to do. In this article, we take the simple approach of forming a linear approximation of these future layers and use it to approximate the effect of our activations. The areas that contribute a lot toward a classification of “fireboat” will be clearly visible, whereas the areas that contribute very little (or even contribute negatively) will be completely transparent.\n\n \n\n\nThe layer we just looked at, mixed5b, is located just before the final classification layer so it seems reasonable that it would be closely aligned with the final classes. Let’s look at a layer a little earlier in the network, say mixed4d, and see how it differs.\n\n \n\n\nHere we see a much different pattern. If we look at some more input examples, this seems entirely reasonable. It’s almost as if we can see a collection of the component concepts the network will use in later layers to classify “fireboat”. 
Windows + crane + water = “fireboat”.\n\n \n\n\n\n\n\n\nOne of the clusters, the one with windows, has strong attribution to “fireboat”, but taken on its own, it has an even stronger attribution toward “streetcar”. So, let’s go back to the atlas at mixed4d, but isolate “streetcar” and compare it to the patterns seen for “fireboat”. Let’s look more closely at the four highlighted areas: the three areas we highlighted for fireboat plus one additional area that is highly activated for streetcars.\n\n \n\n\nIf we zoom in, we can get a better look at what distinguishes the two classifications at this layer. (We’ve cherry-picked these examples for brevity, but you can explore all the layers and activations in detail in a explorable playground below.)\n\n \n\n\nIf we look at a couple of input examples, we can see how buildings and water backgrounds are an easy way to differentiate between a “fireboat” and a “streetcar”.\n\n \n\n\n\n\nImages from ImageNet\n\nBy isolating the activations that contribute strongly to one class and comparing it to other class activations, we can see which activations are conserved among classes and which are recombined to form more complex activations in later layers. Below you can explore the activation patterns of many classes in ImageNet through several layers of InceptionV1. You can even explore negative attributions, which we ignored in this discussion.\n\n \n\n\n\n\n\n\nFurther Isolating Classes\n-------------------------\n\n\nHighlighting the class-specific activations in situ of a full atlas is helpful for seeing how that class relates to the full space of what a network “can see.” However, if we want to really isolate the activations that contribute to a specific class we can remove all the other activations rather than just dimming them, creating what we’ll call a *class activation atlas*. Similar to the general atlas, we run dimensionality reductionFor class activations we generally have better results from using t-SNE for the dimensionality reduction step rather than UMAP. We suspect it is because the data is much more sparse. over the class-specific activation vectors in order to arrange the feature visualizations shown in the class activation atlas.\n\n \n\n\nA class activation atlas gives us a much clearer view of which detectors the network is using to rank a specific class. In the “snorkel” example we can clearly see ocean, underwater, and colorful masks.\n\n \n\nIn the previous example, we are only showing those activations whose strongest attribution is toward the class in question. This will show us activations that contribute mostly to our class in question, even if their overall strength is low (like in background detectors). In some cases, though, there are strong correlations that we’d like to see (like fish with snorkelers). These activations on their own might contribute more strongly to a different class than the one we’re interested in, but their existence can also contribute strongly to our class of interest. For these we need to choose a different filtering method.\n\n\n\nUsing the magnitude filtering method, let’s try to compare two related classes and see if we can more easily see what distinguishes them. (We could have instead used rank, or a combination of the two, but magnitude will suffice to show us a good variety of concepts).\n\n \n\n\n\n\nIt can be a little hard to immediately understand all the differences between classes. To help make the comparison easier, we can combine the two views into one. 
We’ll plot the difference between the attributions of the “snorkel” and “scuba diver” horizontally, and use t-SNE to cluster similar activations vertically.

In this comparison we can see some bird-like creatures and clear tubes on the left, implying a correlation with “snorkel”, and some shark-like creatures and something round, shiny, and metallic on the right, implying a correlation with “scuba diver” (this activation also has a strong attribution toward the class “steam locomotive”). Let’s take an image from the ImageNet dataset labeled as “snorkel” and add something that resembles this icon to see how it affects the classification scores.

The failure mode here seems to be that the model is using its detectors for the class “steam locomotive” to identify air tanks to help classify “scuba diver”. We’ll call these “multi-use” features — detectors that react to very different concepts that are nonetheless visually similar. Let’s look at the differences between a “grey whale” and a “great white shark” to see another example of this issue.

In this example we see another detector that seems to be playing two roles: detecting the red stitching on a baseball and a shark’s white teeth and pink inner mouth. This detector also shows up in the [activation atlas at layer mixed5b filtered to “great white shark”](#focus-playground), and its attribution points toward all sorts of balls, the top one being “baseball”.

Let’s add a picture of a baseball to a picture of a “grey whale” from ImageNet and see how it affects the classification.

The results follow the pattern of the previous examples pretty closely. Adding a small baseball changes the top classification to “great white shark”, and as the baseball gets bigger it overpowers the classification, so the top slot goes to “baseball”.

Let’s look at one more example: “frying pan” and “wok”.

One difference stands out here — the type of related foods present. On the right we can clearly see something resembling noodles (which have a strong attribution toward the class “carbonara”). Let’s take a picture from ImageNet labeled as “frying pan” and add an inset of some noodles.

Here the patch was not as effective at lowering the initial classification, which makes sense since the noodle-like icons were plotted closer to the center of the visualization and thus have less of a difference in attribution. We suspect that the training set simply contained more images of woks with noodles than frying pans with noodles.

### Testing dozens of patches on thousands of images

So far we’ve only shown single examples of these patches. Below we show the results of ten sample patches (each set includes the one example we explored above), run on 1,000 images from the ImageNet training set for the class in question. While they aren’t effective in all cases, they do flip the image classification to the target class in about 2 in 5 images. The success rate reaches about 1 in 2 images if we also allow the patch to be positioned in the best of the four corners of the image (top left, top right, bottom left, bottom right) at the most effective size.
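As a rough illustration of the evaluation just described, the sketch below pastes a patch into each corner of an image at a few sizes and measures how often the top-1 prediction flips to the target class. The `classify` function and all other names here are stand-ins we introduce for illustration; this is not the notebook used for the article.

```python
import numpy as np

def paste_patch(image, patch, corner, scale):
    """Return a copy of `image` with `patch` scaled down and pasted into a corner.

    image, patch: HxWx3 float arrays; corner: one of "tl", "tr", "bl", "br".
    """
    out = image.copy()
    h = max(1, int(patch.shape[0] * scale))
    w = max(1, int(patch.shape[1] * scale))
    # Nearest-neighbour resize, to keep the sketch dependency-free.
    rows = np.arange(h) * patch.shape[0] // h
    cols = np.arange(w) * patch.shape[1] // w
    small = patch[rows][:, cols]
    H, W = image.shape[:2]
    y0 = 0 if corner in ("tl", "tr") else H - h
    x0 = 0 if corner in ("tl", "bl") else W - w
    out[y0:y0 + h, x0:x0 + w] = small
    return out

def flip_rate(images, patch, target_class, classify, scales=(0.2, 0.3, 0.4)):
    """Fraction of images whose top-1 prediction flips to `target_class`
    for at least one corner/scale combination (the "best placement" variant)."""
    flips = 0
    for img in images:
        if any(classify(paste_patch(img, patch, c, s)) == target_class
               for c in ("tl", "tr", "bl", "br") for s in scales):
            flips += 1
    return flips / len(images)
```

Measuring the best corner and size per image, as `flip_rate` does, corresponds to the roughly 1-in-2 success rate quoted above; fixing a single placement corresponds to the roughly 2-in-5 rate.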
To ensure our attack isn’t just blocking out evidence for the original class, we also compare each attack to a random noise image patch.

Our “attacks” can be seen as part of a larger trend of researchers exploring input attacks on models beyond traditional epsilon-ball adversarial examples. In many ways, our attacks are most similar to adversarial patches, which also add a small patch to the input image. From this perspective, adversarial patches are far more effective, working much more reliably. Instead, we see our attacks as interesting because they are synthesized by humans from their understanding of the model, and seem to attack the model at a higher level of abstraction.

We also want to emphasize that not all class comparisons reveal these types of patches, that not all icons in the visualization have the same (or any) effectiveness, and that we’ve only tested them on one model. If we wanted to find these patches more systematically, a different approach would most likely be more effective. However, the class activation atlas technique is what revealed the existence of these patches before we knew to look for them. If you’d like to explore your own comparisons and search for your own patches, we’ve provided a notebook to get you started:

Conclusion and Future Work
--------------------------

Activation atlases give us a new way to peer into convolutional vision networks. They give us a global, hierarchical, and human-interpretable overview of concepts within the hidden layers. Not only does this allow us to better see the inner workings of these complicated systems, it could also enable new interfaces for working with images.

### Surfacing Inner Properties of Models

The vast majority of neural network research focuses on quantitative evaluations of network behavior. How accurate is the model? What’s the precision-recall curve?

While these questions can describe how the network behaves in specific situations, they don’t give us a great understanding of *why* it behaves the way it does. To truly understand why a network behaves the way it does, we would need to fully understand the rich inner world of the network — its hidden layers. For example, better understanding how InceptionV1 builds up a classifier for a fireboat from component parts in mixed4d can help us build confidence in our models and can surface places where they aren’t doing what we want.

Engaging with this inner world also invites us to do deep learning research in a new way.
Normally, each neural network experiment gives only a few bits of feedback — whether the loss went up or down — to inform the next round of experiments. We design architectures by almost blind trial and error, guided by vague intuitions that we build up over years. In the future, we hope that researchers will get rich feedback on what each layer in their model is doing, in a way that will make our current approach seem like stumbling in the dark.

Activation atlases, as they presently stand, are inadequate to really help researchers iterate on models, in part because they aren’t comparable. If you look at atlases for two slightly different models, it’s hard to take away anything. In future work, we will explore how similar visualizations can compare models, showing similarities and differences beyond error rates.

### New interfaces

Machine learning models are usually deployed as black boxes that automate a specific task, executing it on their own. But there’s a growing sense that there might be an alternate way for us to relate to them: that instead of increasingly automating a task, they could be used more directly by a person. One vision of this augmentation that we find particularly compelling is the idea that the internal representations neural networks learn can be repurposed as tools. Already, we’ve seen exciting demonstrations of this in images and music.

We think of activation atlases as revealing a machine-learned alphabet for images — a collection of simple, atomic concepts that are combined and recombined to form much more complex visual ideas. In the same way that we use word processors to turn letters into words, and words into sentences, we can imagine a tool that would allow us to create images from a machine-learned language system for images. Similar to GAN painting, imagine using something like an activation atlas as a palette — one could dip a brush into a “tree” activation and paint with it. A palette of concepts rather than colors.

While classification models are not generally thought of as a means of generating images, techniques like deep dream have shown that this is entirely possible. In this particular instance, we imagine constructing a grid of activations by selecting them from an atlas (or some derivation of one), then optimizing an output image to correspond to the user’s constructed activation matrix.

Such a tool would not necessarily be limited to targeting realistic images either. Techniques like style transfer have shown that we can use these vision networks to create nuanced visual expression outside the explicit distribution of visual data they were trained on. We speculate that activation atlases could help in manipulating artistic styles without having to find an existing reference image, or in guiding and modifying automated style transfer techniques.

We could also use these atlases to query large image datasets. In the same way that we probe large corpora of text with words, we could use activation atlases to find types of images in large collections of images. Using words to search for something like a “tree” is quite powerful, but as you get more specific, human language is often ill-suited to describing specific visual characteristics. In contrast, the hidden layers of neural networks are a language optimized for the sole purpose of representing visual concepts.
Instead of using the proverbial thousand words to uniquely specify the image one is seeking, we can imagine someone using the language of the activation atlas.\n\n \n\nAnd lastly, we can also liken activation atlases to histograms. In the same way that traditional histograms give us good summaries of large datasets, activation atlases can be used to summarize large numbers of images.\n\n \n\nIn the examples in this article we used the same dataset for training the model as we did for collecting the activations. But, if we use a different dataset to collect the activations, we could use the atlas as a way of inspecting an unknown dataset. An activation atlas could show us a histogram of *learned concepts* that exist within the images. Such a tool could show us the semantics of the data and not just visual similarities, like showing histograms of common pixel values.\n\n \n\nWhile we are excited about the potential of activation atlases, we are even more excited at the possibility of developing similar techniques for other types of models. Imagine having an array of machine learned, but human interpretable, languages for images, audio and text.", "date_published": "2019-03-06T20:00:00Z", "authors": ["Shan Carter", "Zan Armstrong", "Ludwig Schubert", "Ian Johnson", "Chris Olah"], "summaries": ["By using feature inversion to visualize millions of activations from an image classification network, we create an explorable activation atlas of features the network has learned and what concepts it typically represents."], "doi": "10.23915/distill.00015", "journal_ref": "distill-pub", "bibliography": [{"link": "https://www.researchgate.net/profile/Aaron_Courville/publication/265022827_Visualizing_Higher-Layer_Features_of_a_Deep_Network/links/53ff82b00cf24c81027da530.pdf", "title": "Visualizing higher-layer features of a deep network"}, {"link": "https://distill.pub/2017/feature-visualization", "title": "Feature Visualization"}, {"link": "https://arxiv.org/pdf/1312.6034.pdf", "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps"}, {"link": "https://arxiv.org/pdf/1412.1897.pdf", "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images"}, {"link": "https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html", "title": "Inceptionism: Going deeper into neural networks"}, {"link": "https://arxiv.org/pdf/1612.00005.pdf", "title": "Plug & play generative networks: Conditional iterative generation of images in latent space"}, {"link": "https://arxiv.org/pdf/1412.0035v1.pdf", "title": "Understanding deep image representations by inverting them"}, {"link": "https://doi.org/10.23915/distill.00010", "title": "The Building Blocks of Interpretability"}, {"link": "https://cs.stanford.edu/people/karpathy/cnnembed/", "title": "t-SNE visualization of CNN codes"}, {"link": "https://arxiv.org/pdf/1602.03616.pdf", "title": "Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks"}, {"link": "http://www.image-net.org/papers/imagenet_cvpr09.pdf", "title": "Imagenet: A large-scale hierarchical image database"}, {"link": "https://arxiv.org/pdf/1409.4842.pdf", "title": "Going deeper with convolutions"}, {"link": "http://www.jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf", "title": "Visualizing data using t-SNE"}, {"link": "https://arxiv.org/pdf/1610.02391.pdf", "title": "Grad-cam: Why did you say that? 
visual explanations from deep networks via gradient-based localization"}, {"link": "http://arxiv.org/pdf/1712.09665.pdf", "title": "Adversarial Patch"}, {"link": "https://arxiv.org/pdf/1312.6199.pdf", "title": "Intriguing properties of neural networks"}, {"link": "https://distill.pub/2017/aia/", "title": "Using Artificial Intelligence to Augment Human Intelligence"}, {"link": "http://colah.github.io/posts/2015-01-Visualizing-Representations/", "title": "Visualizing Representations: Deep Learning and Human Beings"}, {"link": "https://medium.com/@enjalot/machine-learning-for-visualization-927a9dff1cab", "title": "Machine Learning for Visualization"}, {"link": "https://magenta.tensorflow.org/composing-palettes", "title": "ML as Collaborator: Composing Melodic Palettes with Latent Loops"}, {"link": "https://gandissect.csail.mit.edu/", "title": "GAN Dissection: Visualizing and Understanding Generative Adversarial Networks"}, {"link": "https://arxiv.org/pdf/1508.06576.pdf", "title": "A neural algorithm of artistic style"}, {"link": "https://tinlizzie.org/histograms/", "title": "Exploring Histograms"}]} {"id": "eb5c12bc89464f38db9f304e78f2c702", "title": "AI Safety Needs Social Scientists", "url": "https://distill.pub/2019/safety-needs-social-scientists", "source": "distill", "source_type": "blog", "text": "The goal of long-term artificial intelligence (AI) safety is to ensure that advanced AI systems are reliably aligned with human values — that they reliably do things that people want them to do.Roughly by human values we mean whatever it is that causes people to choose one option over another in each case, suitably corrected by reflection, with differences between groups of people taken into account. There are a lot of subtleties in this notion, some of which we will discuss in later sections and others of which are beyond the scope of this paper. Since it is difficult to write down precise rules describing human values, one approach is to treat aligning with human values as another learning problem. We ask humans a large number of questions about what they want, train an ML model of their values, and optimize the AI system to do well according to the learned values.\n \n\n\n If humans reliably and accurately answered all questions about their values, the only uncertainties in this scheme would be on the machine learning (ML) side. If the ML works, our model of human values would improve as data is gathered, and broaden to cover all the decisions relevant to our AI system as it learns. Unfortunately, humans have limited knowledge and reasoning ability, and exhibit a variety of cognitive and ethical biases. If we learn values by asking humans questions, we expect different ways of asking questions to interact with human biases in different ways, producing higher or lower quality answers. Direct questions about preferences (“Do you prefer AAA or BBB?“) may be less accurate than questions which target the reasoning behind these preferences (“Do you prefer AAA or BBB in light of argument SSS?“). Different people may vary significantly in their ability to answer questions well, and disagreements will persist across people even setting aside answer quality. Although we have candidates for ML methods which try to learn from human reasoning, we do not know how they behave with real people in realistic situations.\n \n\n\n We believe the AI safety community needs to invest research effort in the human side of AI alignment. 
Many of the uncertainties involved are empirical, and can only be answered by experiment. They relate to the psychology of human rationality, emotion, and biases. Critically, we believe investigations into how people interact with AI alignment algorithms should not be held back by the limitations of existing machine learning. Current AI safety research is often limited to simple tasks in video games, robotics, or gridworlds, but problems on the human side may only appear in more realistic scenarios such as natural language discussion of value-laden questions. This is particularly important since many aspects of AI alignment change as ML systems [increase in capability](#harder).\n \n\n\n To avoid the limitations of ML, we can instead conduct experiments consisting entirely of people, replacing ML agents with people playing the role of those agents. This is a variant of the “Wizard of Oz” technique from the human-computer interaction (HCI) community, though in our case the replacements will not be secret. These experiments will be motivated by ML algorithms but will not involve any ML systems or require an ML background. In all cases, they will require careful experimental design to build constructively on existing knowledge about how humans think. Most AI safety researchers are focused on machine learning, which we do not believe is sufficient background to carry out these experiments. To fill the gap, we need social scientists with experience in human cognition, behavior, and ethics, and in the careful design of rigorous experiments. Since the questions we need to answer are interdisciplinary and somewhat unusual relative to existing research, we believe many fields of social science are applicable, including experimental psychology, cognitive science, economics, political science, and social psychology, as well as adjacent fields like neuroscience and law.\n \n\n\n This paper is a call for social scientists in AI safety. We believe close collaborations between social scientists and ML researchers will be necessary to improve our understanding of the human side of AI alignment, and hope this paper sparks both conversation and collaboration. We do not claim novelty: previous work mixing AI safety and social science includes the Factored Cognition project at Ought, accounting for hyperbolic discounting and suboptimal planning when learning human preferences, and comparing different methods of gathering demonstrations from fallible human supervisors. Other areas mixing ML and social science include computational social science and fairness. Our main goal is to enlarge these collaborations and emphasize their importance to long-term AI safety, particularly for tasks which current ML cannot reach.\n \n\n\nAn overview of AI alignment\n---------------------------\n\n\n\n Before discussing how social scientists can help with AI safety and the AI alignment problem, we provide some background. We do not attempt to be exhaustive: the goal is to provide sufficient background for the remaining sections on social science experiments. 
Throughout, we will speak primarily about aligning to the values of an individual human rather than a group: this is because the problem is already hard for a single person, not because the group case is unimportant.\n \n\n\n AI alignment (or value alignment) is the task of ensuring that artificial intelligence systems reliably do what humans want.We distinguish between training AI systems to identify actions that humans consider good and training AI systems to identify actions that are “good” in some objective and universal sense, even if most current humans do not consider them so. Whether there are actions that are good in this latter sense is a subject of debate. Regardless of what position one takes on this philosophical question, this sense of good is not yet available as a target for AI training. Here we focus on the machine learning approach to AI: gathering a large amount of data about what a system should do and using learning algorithms to infer patterns from that data that generalize to other situations. Since we are trying to behave in accord with people’s values, the most important data will be data from humans about their values. Within this frame, the AI alignment problem breaks down into a few interrelated subproblems:\n \n\n\n\n\n1. Have a satisfactory definition of human values.\n2. Gather data about human values, in a manner compatible with the definition.\n3. Find reliable ML algorithms that can learn and generalize from this data.\n\n\n\n We have significant uncertainty about all three of these problems. We will leave the third problem to other ML papers and focus on the first two, which concern uncertainties about people.\n \n\n\n### Learning values by asking humans questions\n\n\n\n We start with the premise that human values are too complex to describe with simple rules. By “human values” we mean our full set of detailed preferences, not general goals such as “happiness” or “loyalty”. One source of complexity is that values are entangled with a large number of facts about the world, and we cannot cleanly separate facts from values when building ML models. For example, a rule that refers to “gender” would require an ML model that accurately recognizes this concept, but Buolamwini and Gebru found that several commercial gender classifiers with a 1% error rate on white men failed to recognize black women up to 34% of the time. Even where people have correct intuition about values, we may be unable to specify precise rules behind these intuitions. Finally, our values may vary across cultures, legal systems, or situations: no learned model of human values will be universally applicable.\n \n\n\n If humans can’t reliably report the reasoning behind their intuitions about values, perhaps we can make value judgements in specific cases. To realize this approach in an ML context, we ask humans a large number of questions about whether an action or outcome is better or worse, then train on this data. “Better or worse” will include both factual and value-laden components: for an AI system trained to say things, “better” statements might include “rain falls from clouds”, “rain is good for plants”, “many people dislike rain”, etc. If the training works, the resulting ML system will be able to replicate human judgement about particular situations, and thus have the same “fuzzy access to approximate rules” about values as humans. We also train the ML system to come up with proposed actions, so that it knows both how to perform a task and how to judge its performance. 
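As a minimal sketch of what “train on this data” might look like in the simplest case, the snippet below fits a linear preference model to pairwise human comparisons, so that options humans tend to judge better receive higher scores (a Bradley-Terry-style model). The feature representation and every name here are our own illustrative assumptions, not a system described in this article.

```python
import numpy as np

def train_reward_model(features_a, features_b, prefer_a, lr=0.1, steps=2000):
    """Fit weights w so that phi(x) @ w predicts human preference judgments.

    features_a, features_b: [n_pairs, d] feature arrays for the two options.
    prefer_a: 1.0 where the human preferred option A, 0.0 where they preferred B.
    """
    n, d = features_a.shape
    w = np.zeros(d)
    for _ in range(steps):
        diff = (features_a - features_b) @ w          # score(A) - score(B)
        p_a = 1.0 / (1.0 + np.exp(-diff))             # model's P(human prefers A)
        grad = (features_a - features_b).T @ (p_a - prefer_a) / n
        w -= lr * grad                                # logistic-loss gradient step
    return w

# Toy usage: simulated humans prefer whichever option has the larger first feature.
rng = np.random.default_rng(0)
A, B = rng.normal(size=(500, 5)), rng.normal(size=(500, 5))
labels = (A[:, 0] > B[:, 0]).astype(float)
w = train_reward_model(A, B, labels)   # w[0] comes out clearly positive
```

The learned scorer can then be used both to judge proposed actions and, in a full system, as a training signal for the model that proposes them.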
This approach works at least in simple cases, such as Atari games and simple robotics tasks and language-specified goals in gridworlds. The questions we ask change as the system learns to perform different types of actions, which is necessary as the model of what is better or worse will only be accurate if we have applicable data to generalize from.\n \n\n\n In practice, data in the form of interactive human questions may be quite limited, since people are slow and expensive relative to computers on many tasks. Therefore, we can augment the “train from human questions” approach with static data from other sources, such as books or the internet. Ideally, the static data can be treated only as information about the world devoid of normative content: we can use it to learn patterns about the world, but the human data is needed to distinguish good patterns from bad.\n \n\n\n### Definitions of alignment: reasoning and reflective equilibrium\n\n\n\n So far we have discussed asking humans direct questions about whether something is better or worse. Unfortunately, we do not expect people to provide reliably correct answers in all cases, for several reasons:\n \n\n\n\n\n1. **Cognitive and ethical biases:**\n Humans exhibit a variety of biases which interfere with reasoning, including cognitive biases and ethical biases such as in-group bias. In general, we expect direct answers to questions to reflect primarily Type 1 thinking (fast heuristic judgment), while we would like to target a combination of Type 1 and Type 2 thinking (slow, deliberative judgment).\n2. **Lack of domain knowledge:**\n We may be interested in questions that require domain knowledge unavailable to people answering the questions. For example, a correct answer to whether a particular injury constitutes medical malpractice may require detailed knowledge of medicine and law. In some cases, a question might require so many areas of specialized expertise that no one person is sufficient, or (if AI is sufficiently advanced) deeper expertise than any human possesses.\n3. **Limited cognitive capacity:**\n Some questions may require too much computation for a human to reasonably evaluate, especially in a short period of time. This includes synthetic tasks such as chess and Go (where AIs already surpass human ability), or large real world tasks such as “design the best transit system”.\n4. **“Correctness” may be local:**\n For questions involving a community of people, “correct” may be a function of complex processes or systems. For example, in a trust game, the correct action for a trustee in one community may be to return at least half of the money handed over by the investor, and the “correctness” of this answer could be determined by asking a group of participants in a previous game “how much should the trustee return to the investor” but not by asking them “how much do most trustees return?” The answer may be different in other communities or cultures.\n\n\n\n In these cases, a human may be unable to provide the right answer, but we still believe the right answer exists as a meaningful concept. We have many conceptual biases: imagine we point out these biases in a way that helps the human to avoid them. Imagine the human has access to all the knowledge in the world, and is able to think for an arbitrarily long time. We could define alignment as “the answer they give then, after these limitations have been removed”; in philosophy this is known as “reflective equilibrium”. 
We discuss a particular algorithm that tries to approximate it in [the next section](#debate).\n \n\n\n However, the behavior of reflective equilibrium with actual humans is subtle; as Sugden states, a human is not “a neoclassically rational entity encased in, and able to interact with the world only through, an error-prone psychological shell.” Our actual moral judgments are made via a messy combination of many different brain areas, where reasoning plays a “restricted but significant role”. A reliable solution to the alignment problem that uses human judgment as input will need to engage with this complexity, and ask how specific alignment techniques interact with actual humans.\n \n\n\n### Disagreements, uncertainty, and inaction: a hopeful note\n\n\n\n A solution to alignment does not mean knowing the answer to every question. Even at reflective equilibrium, we expect disagreements will persist about which actions are good or bad, across both different individuals and different cultures. Since we lack perfect knowledge about the world, reflective equilibrium will not eliminate uncertainty about either future predictions or values, and any real ML system will be at best an approximation of reflective equilibrium. In these cases, we consider an AI aligned if it recognizes what it does not know and chooses actions which work however that uncertainty plays out.\n \n\n\n Admitting uncertainty is not always enough. If our brakes fail while driving a car, we may be uncertain whether to dodge left or right around an obstacle, but we have to pick one — and fast. For long-term safety, however, we believe a safe fallback usually exists: inaction. If an ML system recognizes that a question hinges on disagreements between people, it can either choose an action which is reasonable regardless of the disagreement or fall back to further human deliberation. If we are about to make a decision that might be catastrophic, we can delay and gather more data. Inaction or indecision may not be optimal, but it is hopefully safe, and matches the default scenario of not having any powerful AI system.\n \n\n\n### Alignment gets harder as ML systems get smarter\n\n\n\n Alignment is already a problem for present-day AI, due to biases reflected in training data and mismatch between human values and easily available data sources (such as training news feeds based on clicks and likes instead of deliberate human preferences). However, we expect the alignment problem to get harder as AI systems grow more advanced, for two reasons. First, advanced systems will apply to increasingly consequential tasks: hiring, medicine, scientific analysis, public policy, etc. Besides raising the stakes, these tasks require more reasoning, leading to more complex alignment algorithms.\n \n\n\n Second, advanced systems may be capable of answers that sound plausible but are wrong in nonobvious ways, even if an AI is better than humans only in a limited domain (examples of which already exist). This type of misleading behavior is not the same as intentional deception: an AI system trained from human data might have no notion of truth separate from what answers humans say are best. Ideally, we want AI alignment algorithms to reveal misleading behavior as part of the training process, surfacing failures to humans and helping us provide more accurate data. 
As with human-to-human deception, misleading behavior might take advantage of our biases in complicated ways, such as learning to express policy arguments in coded racial language to sound more convincing.\n \n\n\nDebate: learning human reasoning\n--------------------------------\n\n\n\n Before we discuss social science experiments for AI alignment in detail, we need to describe a particular method for AI alignment. Although the need for social science experiments applies even to direct questioning, this need intensifies for methods which try to get at reasoning and reflective equilibrium. As discussed above, it is unclear whether reflective equilibrium is a well defined concept when applied to humans, and at a minimum we expect it to interact with cognitive and ethical biases in complex ways. Thus, for the remainder of this paper we focus on a specific proposal for learning reasoning-oriented alignment, called debate. Alternatives to debate include iterated amplification and recursive reward modeling; we pick just one in the interest of depth over breadth.\n \n\n\n We describe the debate approach to AI alignment in the question answering setting. Given a question, we have two AI agents engage in a debate about the correct answer, then show the transcript of the debate to a human to judge. The judge decides which debater gave the most true, useful information, and declares that debater the winner.We can also allow ties. Indeed, if telling the truth is the winning strategy ties will be common with strong play, as disagreeing with a true statement would lose. This defines a two player zero sum game between the debaters, where the goal is to convince the human that one’s answer is correct. Arguments in a debate can consist of anything: reasons for an answer, rebuttals of reasons for the alternate answer, subtleties the judge might miss, or pointing out biases which might mislead the judge. Once we have defined this game, we can train AI systems to play it similarly to how we train AIs to play other games such as Go or Dota 2. Our hope is that the following hypothesis holds:\n \n\n\n**Hypothesis:** Optimal play in the debate game (giving the argument most convincing to a human) results in true, useful answers to questions.\n \n\n\n### An example of debate\n\n\n\n Imagine we’re building a personal assistant that helps people decide where to go on vacation. The assistant has knowledge of people’s values, and is trained via debate to come up with convincing arguments that back up vacation decisions. As the human judge, you know what destinations you intuitively think are better, but have limited knowledge about the wide variety of possible vacation destinations and their advantages and disadvantages. A debate about the question “Where should I go on vacation?” might open as follows:\n \n\n\n1. Where should I go on vacation?\n2. Alaska.\n3. Bali.\n\n\n\n If you are able to reliably decide between these two destinations, we could end here. Unfortunately, Bali has a hidden flaw:\n \n\n\n3. Bali is out since your passport won’t arrive in time.\n\n\n\n At this point it looks like Red wins, but Blue has one more countermove:\n \n\n\n4. Expedited passport service only takes two weeks.\n\n\n\n Here Red fails to think of additional points, and loses to Blue and Bali. Note that a debate does not need to cover all possible arguments. There are many other ways the debate could have gone, such as:\n \n\n\n1. Alaska.\n2. Bali.\n3. Bali is way too hot.\n4. You prefer too hot to too cold.\n5. 
Alaska is pleasantly warm in the summer.
6. It's January.

This debate is also a loss for Red (arguably a worse loss). Say we believe Red is very good at debate, and is able to predict in advance which debates are more likely to win. If we see only the first debate about passports and decide in favor of Bali, we can take that as evidence that any other debate would have also gone for Bali, and thus that Bali is the correct answer. A larger portion of this hypothetical debate tree is shown below:

[Figure: a hypothetical debate tree rooted at “Where should I go on vacation?”, with candidate answers (Bali, Alaska, Ohio) branching into arguments and counterarguments such as “You don’t have a passport” / “Use expedited service”, “It’s too hot” / “Not always too hot”, and “You hate jet lag” / “Medication can help with jetlag”.]

[1](#figure-debate-tree)
A hypothetical partial debate tree for the question “Where should I go on vacation?” A single debate would explore only one of these paths, but a single path chosen by good debaters is evidence that other paths would not change the result of the game.

If trained debaters are bad at predicting which debates will win, answer quality will degrade since debaters will be unable to think of important arguments and counterarguments. However, as long as the two sides are reasonably well matched, we can hope that at least the results are not malicious: that misleading behavior is still a losing strategy. Let’s set aside the ability of the debaters for now, and turn to the ability of the judge.

### Are people good enough as judges?

> “In fact, almost everything written at a practical level about the Turing test is about how to make good bots, with a small remaining fraction about how to be a good judge.”
> Brian Christian, The Most Human Human

As with learning by asking humans direct questions, whether debate produces aligned behavior depends on the reasoning abilities of the human judge. Unlike direct questioning, debate has the potential to give correct answers beyond what the judge could provide without assistance. This is because a sufficiently strong judge could follow along with arguments the judge could not come up with on their own, checking complex reasoning for both self consistency and consistency with human-checkable facts. A judge who is biased but willing to adjust once those biases are revealed could result in unbiased debates, or a judge who is able to check facts but does not know where to look could be helped along by honest debaters.
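The “one good path stands in for the whole tree” intuition above is essentially a minimax argument: debate is a zero-sum game over an argument tree, with the judge scoring only completed transcripts. Here is a toy sketch of that view (our illustration only, not the training procedure used for debate agents):

```python
def debate_value(node, judge, blue_to_move):
    """node: a leaf transcript (string) the judge scores, or a dict mapping each
    available argument to the subtree it leads to. Blue (arguing for Bali)
    maximizes the judge's score; Red minimizes it."""
    if not isinstance(node, dict):                       # leaf: a finished transcript
        return judge(node)
    values = [debate_value(child, judge, not blue_to_move)
              for child in node.values()]
    return max(values) if blue_to_move else min(values)

# A tiny fragment of the vacation tree above. Red opens with an attack on Bali,
# Blue rebuts, and the judge scores the finished transcript (+1 = sides with Bali).
tree = {
    "Bali is out: your passport won't arrive in time.": {
        "Expedited passport service only takes two weeks.": "bali",
        "(Blue offers no rebuttal)": "alaska",
    },
    "Bali is way too hot.": {
        "You prefer too hot to too cold.": "bali",
    },
}
judge = lambda transcript: +1 if transcript == "bali" else -1
print(debate_value(tree, judge, blue_to_move=False))   # +1: Bali wins either way
```

With optimal play on both sides, the value of the single path that actually gets played equals the value of the whole tree, which is why seeing one well-played debate can stand in for the rest.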
If the hypothesis holds, a misleading debater would not be able to counter the points of an honest debater, since the honest points would appear more consistent to the judge.\n \n\n\n On the other hand, we can also imagine debate going the other way: amplifying biases and failures of reason. A judge with an ethical bias who is happy to accept statements reinforcing that bias could result in even more biased debates. A judge with too much confirmation bias might happily accept misleading sources of evidence, and be unwilling to accept arguments showing why that evidence is wrong. In this case, an optimal debate agent might be quite malicious, taking advantage of biases and weakness in the judge to win with convincing but wrong arguments.The difficulties that cognitive biases, prejudice, and social influence introduce to persuasion ‒ as well as methods for reducing these factors ‒ are being increasingly explored in psychology, communication science, and neuroscience.\n\n\n\n In both these cases, debate acts as an amplifier. For strong judges, this amplification is positive, removing biases and simulating extra reasoning abilities for the judge. For weak judges, the biases and weaknesses would themselves be amplified. If this model holds, debate would have threshold behavior: it would work for judges above some threshold of ability and fail below the threshold.The threshold model is only intuition, and could fail for a variety of reasons: the intermediate region could be very large, or the threshold could differ widely per question so that even quite strong judges are insufficient for many questions. Assuming the threshold exists, it is unclear whether people are above or below it. People are capable of general reasoning, but our ability is limited and riddled with cognitive biases. People are capable of advanced ethical sentiment but also full of biases, both conscious and unconscious.\n \n\n\n Thus, if debate is the method we use to align an AI, we need to know if people are strong enough as judges. In other words, whether the human judges are sufficiently good at discerning whether a debater is telling the truth or not. This question depends on many details: the type of questions under consideration, whether judges are trained or not, and restrictions on what debaters can say. We believe experiment will be necessary to determine whether people are sufficient judges, and which form of debate is most truth-seeking.\n \n\n\n### From superforecasters to superjudges\n\n\n\n An analogy with the task of probabilistic forecasting is useful here. Tetlock’s “Good Judgment Project” showed that some amateurs were significantly better at forecasting world events than both their peers and many professional forecasters. These “superforecasters” maintained their prediction accuracy over years (without regression to the mean), were able to make predictions with limited time and information, and seem to be less prone to cognitive biases than non-superforecasters (, p. 234-236). The superforecasting trait was not immutable: it was traceable to particular methods and thought processes, improved with careful practice, and could be amplified if superforecasters were collected into teams. For forecasters in general, brief probabilistic training significantly improved forecasting ability even 1-2 years after the training. We believe a similar research program is possible for debate and other AI alignment algorithms. 
In the best case, we would be able to find, train, or assemble “superjudges”, and have high confidence that optimal debate with them as judges would produce aligned behavior.\n \n\n\n In the forecasting case, much of the research difficulty lay in assembling a large corpus of high quality forecasting questions. Similarly, measuring how good people are as debate judges will not be easy. We would like to apply debate to problems where there is no other source of truth: if we had that source of truth, we would train ML models on it directly. But if there is no source of truth, there is no way to measure whether debate produced the correct answer. This problem can be avoided by starting with simple, verifiable domains, where the experimenters know the answer but the judge would not. “Success” then means that the winning debate argument is telling the externally known truth. The challenge gets harder as we scale up to more complex, value-laden questions, as we discuss in detail later.\n \n\n\n### Debate is only one possible approach\n\n\n\n As mentioned, debate is not the only scheme trying to learn human reasoning. Debate is a modified version of iterated amplification, which uses humans to break down hard questions into easier questions and trains ML models to be consistent with this decomposition. Recursive reward modeling is a further variant. Inverse reinforcement learning, inverse reward design, and variants try to back out goals from human actions, taking into account limitations and biases that might affect this reasoning. The need to study how humans interact with AI alignment applies to any of these approaches. Some of this work has already begun: Ought’s Factored Cognition project uses teams of humans to decompose questions and reassemble answers, mimicking iterated amplification. We believe knowledge gained about how humans perform with one approach is likely to partially generalize to other approaches: knowledge about how to structure truth-seeking debates could inform how to structure truth-seeking amplification, and vice versa.\n \n\n\n### Experiments needed for debate\n\n\n\n To recap, in debate we have two AI agents engaged in debate, trying to convince a human judge. The debaters are trained only to win the game, and are not motivated by truth separate from the human’s judgments. On the human side, we would like to know whether people are strong enough as judges in debate to make this scheme work, or how to modify debate to fix it if it doesn’t. Unfortunately, actual debates in natural language are well beyond the capabilities of present AI systems, so previous work on debate and similar schemes has been restricted to synthetic or toy tasks.\n \n\n\n Rather than waiting for ML to catch up to natural language debate, we propose simulating our eventual setting (two AI debaters and one human judge) with all human debates: two human debaters and one human judge. Since an all human debate doesn’t involve any machine learning, it becomes a pure social science experiment: motivated by ML considerations but not requiring ML expertise to run. 
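For concreteness, here is a minimal sketch, under our own assumptions about what gets recorded per trial (the article does not prescribe a data format), of the bookkeeping such an all-human debate experiment might involve. The key statistic is simply how often the judge sides with the honest debater.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class DebateTrial:
    task_id: str        # e.g. an image ID or question ID
    judge_id: str
    honest_answer: str  # ground truth known to the experimenters
    judged_answer: str  # answer the judge declared the winner

def honest_win_rate(trials):
    """Fraction of trials in which the judge sided with the honest debater."""
    return sum(t.judged_answer == t.honest_answer for t in trials) / len(trials)

def win_rate_by_judge(trials):
    """The same statistic broken out per judge, useful for spotting strong judges."""
    by_judge = defaultdict(list)
    for t in trials:
        by_judge[t.judge_id].append(t.judged_answer == t.honest_answer)
    return {j: sum(v) / len(v) for j, v in by_judge.items()}
```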
This lets us focus on the component of AI alignment uncertainty specific to humans.

[Figure: on the left, two human debaters with a human judge; on the right, two machine debaters with a human judge; lessons from the human-only setting are applied to the machine setting.]

[2](#figure-debate-experiments)
Our goal is ML+ML+human debates, but ML is currently too primitive to do many interesting tasks. Therefore, we propose replacing ML debaters with human debaters, learning how to best conduct debates in this human-only setting, and eventually applying what we learn to the ML+ML+human case.

To make human+human+human debate experiments concrete, we must choose who to use as judges and debaters and which tasks to consider. We also can choose to structure the debate in various ways, some of which overlaps with the choice of judge since we can instruct a judge to penalize deviations from a given format. By task we mean the questions our debates will try to resolve, together with any information provided to the debaters or to the judge. Such an experiment would then try to answer the following question:

**Question:** For a given task and judge, is the winning debate strategy honest?

The “winning strategy” proviso is important: an experiment that picked debaters at random might conclude that honest behavior won, missing the fact that more practiced debaters would learn to successfully lie. We can try to solve this by training debaters, letting them practice against each other, filtering out debaters that win more often, and so on, but we will still be left with uncertainty about whether a better strategy exists. Even assuming we can find or train strong debaters, the choice of task and judge is quite tricky if we want an informative proxy for our eventual AI+AI+human setting. Here are some desiderata constraining our choice of task:

1. **True answers are known:**
 Although our eventual goal is debates where no source of truth is available, to conduct a useful experiment we must be able to measure success. This means we must know what the correct answer is separate from debate, in order to compare with the results of debate.
2. **False answers are plausible:**
 If the truth is obvious or no plausible counternarrative exists, honest debates will win for uninteresting and uninformative reasons regardless of judge ability. In particular, the judge shouldn't know the answer upfront.
3. **Debaters know more than the judge:**
 Debate can produce interesting results only when the debaters know more than the judge; otherwise asking direct questions is enough.
4. **Definitive argument longer than debate limit:**
 If one debater can write out a full proof of their answer (ignoring their opponent's moves), the task won't be a good test of interactive debate.
5. 
**Some checkable facts:**\n There must be some facts which the judge is able to check, either because they can recognize them as true once presented or look them up.It is impossible to usefully debate a question where the judge has nothing to check: consider debating the result of a coin flip shown to the two debaters but not the judge.\n6. **No “tells”:**\n Human tells of deception could result in honest debaters winning for reasons that wouldn’t apply to an AI. These tells include tone of voice, eye contact, or additional time required to construct plausible lies. These tells can be reduced by showing judges completed debate transcripts instead of engaging in interactive debates, but others might remain.\n7. **Available data:**\n We need a large enough pool of questions, judges, and debaters to achieve statistical significance. This is made more difficult because we may have a large number of hypotheses to test, in the form of many variations on debate or interventions to improve judging.\n8. **Known biases (optional):**\n We are specifically interested in debate tasks which test specific types of cognitive or ethical biases.\n Are judges with some racial or gender bias able to set those biases aside if they are highlighted, or does debate amplify bias? Do debates about statistical or probabilistic questions make it too easy to lie with statistics?\n9. **Realistic tasks (ideally):**\n If possible, we would like to try debate with interesting, real world tasks that reflect the types of questions we would like to apply AI to in the future, including science, mathematics, ethics, etc.\n\n\n\n It may not be possible to meet all of these criteria with a single experiment. Several of the criteria are in tension: (1) and (2) are essentially “not too hard” and “not too easy”, and any restriction on the types of questions may make it difficult to find large numbers of questions, judges, or debaters. Realistic tasks are much harder than synthetic tasks, which easily fulfill many of the criteria as discussed below. Thus, we may need to begin with synthetic tasks and move up towards realistic tasks over time.\n We turn next to a few examples of experiments to see how many criteria we can meet simultaneously.\n \n\n\n### Synthetic experiments: single pixel image debate\n\n\n\n As a first prototype of a human+human+human debate experiment, we previously built a [prototype website](https://debate-game.openai.com) where two debaters argue over the contents of an image. We choose an image of a cat or dog, and show the image to the two debaters but not the judge. One debater is honest and argues for the true contents of the image; the other debater lies. The debaters can talk to the judge and illustrate their points by drawing rectangles on the image, but the judge sees only the rectangles. At the end of the debate, each debater is allowed to reveal a single pixel to the judge, which is the only part of the debate which cannot be a lie.\n \n\n\n\n\n[3](#dog-debate)\n An example debate with two human debaters and a human judge. Only the debaters can see the image. Red is arguing that the image is a dog, Blue is arguing for cat. [Image credit: Wikipedia, CC-BY-SA.](https://commons.wikimedia.org/wiki/File:Beagle_puppy_sitting_on_grass.jpg)\n\n\n![A picture of a puppy with long floppy ears next to an animated transcript of a debate between two fictional agents arguing whether the picture shows a dog or a cat. A transcript of the debate follows: Red: It's a dog. Here's a long, floppy ear. 
[Red highlights a rectangle containing a floppy dog ear] Blue: No, it's a cat. Here's one of its pointy ears. [Blue highlights a part of the floppy dog ear that looks like a pointy cat ear.] Red: that does look like an ear sloped to the right, but if it really was then part of the head would be here. Instead, there's brick. [Red highlights bricks in the background of the picture.] Blue: The ear is pointing out from behind some bricks. Red: The dog is in front of the bricks. If it was behind, there would be an edge here [Red highlights an area where the bricks would be in front of the animal head, but that area just shows the dog ear], but the rectangle is all the same color. Blue: I resign.](debate-animation.gif)\n\n\n\n In informal play, the honest debater wins most of the time. Although the judge only gets to check a single pixel, the honest player can try to pin down the liar towards disagreements on smaller and smaller regions of the image until one pixel is enough to reveal the lie. However, it is hard to know whether this is a real result or an artifact that would not carry across to training an AI. There are many ways to ruin the game, such as the judge asking each player to spend 15 seconds naming as many details about the image as possible. Worse, this trick ruins the game in favor of the honest player, who can easily name many true details while a human liar may have difficulty lying fluently at speed. Therefore, we are wary of trying to perform a full version of this experiment without more careful experimental design building on the experimental knowledge base of existing social science research.\n \n\n\n As a synthetic experiment, single pixel debate satisfies many of our experimental criteria for all human debate. The true answer is obvious to the debaters, and they know more than the judge since only the debaters see the image. Anything is plausible to the judge who sees only one or two pixels over the course of the debate. Since the argument bottoms out at single pixels, the “full argument” would be a huge list of pixels and how they relate, forcing the debaters to pin each other down and focus in on disagreements. The single pixels constitute the checkable facts, and we have an endless supply of questions in the form of images chosen at random. Less fortunately, it is easy for the judge to force “tells” which reveal who is lying, the task has no obvious relation to biases, and is quite unrealistic.\n \n\n\n### Realistic experiments: domain expert debate\n\n\n\n For a more interesting task, we can find two debaters who are experts in a domain, pick a question in their area of expertise, and use a layperson as the judge. The debaters could be experts in some area of science, law, or ethics, but “domain expertise” could also mean knowledge about hobbies, local customs, sports, or any other subject the judge does not know. We again choose one of the debaters to lie and one to tell the truth. To guarantee a source of truth, we can choose a question with an agreed upon answer, either between the two debaters or more broadly in their field. This is only approximate truth, but is good enough for informative experiments. 
We also specify what facts the judge can access: a debate about physics might allow the debaters to quote a sentence or paragraph from Wikipedia, perhaps with restrictions on what pages are allowed.

Expert debate satisfies most of our desiderata, and it is likely possible to target specific biases (such as race or gender bias) by picking domain areas that overlap with these biases. It may be quite difficult or expensive to find suitable debaters, but this may be solvable either by throwing resources at the problem (ML is a well funded field), enlarging the kinds of domain expertise considered (soccer, football, cricket), or by making the experiments interesting enough that volunteers are available. However, even if domain experts can be found, there is no guarantee that they will be experts in debate viewed as a game. With the possible exception of law, politics, or philosophy, domain experts may not be trained to construct intentionally misleading but self consistent narratives: they may be experts only in trying to tell the truth.

We’ve tried a few informal expert debates using theoretical computer science questions, and the main lesson is that the structure of the debate matters a great deal. The debaters were allowed to point to a small snippet of a mathematical definition on Wikipedia, but not to any page that directly answered the question. To reduce tells, we first tried to write a full debate transcript with only minimal interaction with a layperson, then showed the completed transcript to several more laypeople judges. Unfortunately, even the layperson present when the debate was conducted picked the lying debater as honest, due to a misunderstanding of the question (which was whether the complexity classes P and BPP are probably equal). As a result, throughout the debate the honest debater did not understand what the judge was thinking, and failed to correct an easy but important misunderstanding. We fixed this in a second debate by letting a judge ask questions throughout, but still showing the completed transcript to a second set of judges to reduce tells. See [the appendix](#quantum) for the transcript of this second debate.

### Other tasks: bias tests, probability puzzles, etc.

Synthetic image debates and expert debates are just two examples of possible tasks. More thought will be required to find tasks that satisfy all our criteria, and these criteria will change as experiments progress. Pulling from existing social science research will be useful, as there are many cognitive tasks with existing research results. If we can map these tasks to debate, we can compare debate directly against baselines in psychology and other fields.

For example, Bertrand and Mullainathan sent around 5000 resumes in response to real employment ads, randomizing the resumes between White and African American sounding names. With otherwise identical resumes, the choice of name significantly changed the probability of a response. This experiment corresponds to the direct question “Should we call back given this resume?” What if we introduce a few steps of debate? An argument against a candidate based on name or implicit inferences from that name might come across as obviously racist, and convince at least some judges away from discrimination. Unfortunately, such an experiment would necessarily differ from Bertrand et al.’s original, where employers did not realize they were part of an experiment.
Note that this experiment works even though the source of truth is partial: we do not know whether a particular resume should be hired or not, but most would agree that the answer should not depend on the candidate’s name.\n \n\n\n For biases affecting probabilistic reasoning and decision making, there is a long literature exploring how people decide between gambles such as “Would you prefer $2 with certainty or $1 40% of the time and $3 otherwise?”. For example, Erev et al. constructed an 11-dimensional space of gambles sufficient to reproduce 14 known cognitive biases, from which new instances can be algorithmically generated. Would debates about gambles reduce cognitive biases? One difficulty here is that simple gambles might fail the “definitive argument longer than debate limit” criterion if an expected utility calculation is sufficient to prove the answer, making it difficult for a lying debater to meaningfully compete.
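To make that last point concrete, here is a minimal sketch of the expected-value comparison for the example gamble quoted above (the payoffs and probabilities are just that quoted example, and the calculation assumes a risk-neutral comparison rather than any particular descriptive model of choice):

```python
# Option A pays $2 with certainty.
# Option B pays $1 with probability 0.4 and $3 otherwise.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

option_a = [(1.0, 2.0)]
option_b = [(0.4, 1.0), (0.6, 3.0)]

print(expected_value(option_a))  # 2.0
print(expected_value(option_b))  # 0.4 * 1 + 0.6 * 3 = 2.2
```

Because a two-line calculation like this settles the comparison, a dishonest debater has little room to maneuver, which is exactly why such gambles may fail the “definitive argument longer than debate limit” criterion.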
Interestingly, Chen et al. used a similar setup to human+human+human debate to improve the quality of human data collected in a synthetic “Relation Extraction” task. People were first asked for direct answers, then pairs of people who disagreed were asked to discuss and possibly update their answers. Here the debaters and judges are the same, but the overall goal of extracting higher quality information from humans is shared with debate.\n \n\n\nQuestions social science can help us answer\n-------------------------------------------\n\n\n\n We’ve laid out the general program for learning AI goals by asking humans questions, and discussed how to use debate to strengthen what we can learn by targeting the reasoning behind conclusions. Whether we use direct questions or something like debate, any intervention that gives us higher quality answers is more likely to produce aligned AI. The quality of those answers depends on the human judges, and social science research can help to measure answer quality and improve it. Let’s go into more detail about what types of questions we want to answer, and what we hope to do with that information. Although we will frame these questions as they apply to debate, most of them apply to any other method which learns goals from humans.\n \n\n\n1. **How skilled are people as judges by default?**\n If we ran debate using a person chosen at random as the judge, and gave them no training, would the result be aligned? A person picked at random might be vulnerable to convincing fallacious reasoning, leading AI to employ such reasoning. Note that the debaters are not chosen at random: once the judge is fixed, we care about debaters who either learn to help the judge (in the good case) or to exploit the judge’s weaknesses (in the bad case).\n2. **Can we distinguish good judges from bad judges?**\n People likely differ in the ability to judge debates. There are many filters we could use to identify good judges: comparing their verdicts to those of other judges, to people given more time to think, or to known expert judgment. (Note that domain expertise may be quite different from what makes a good judge of debate: although there is evidence that domain expertise reduces bias, “expert” political forecasters may actually be worse than non-experts; see *Expert Political Judgment*, chapter 3.) Ideally we would like filters that do not require an independent source of truth, though at experiment time we will need a source of truth to know whether a filter works. It is not obvious a priori that good filters exist, and any filter would need careful scrutiny to ensure it does not introduce bias into our choice of judges.\n3. **Does judge ability generalize across domains?**\n If judge ability in one domain fails to transfer to other domains, we will have low confidence that it transfers to new questions and arguments arising from highly capable AI debaters. This generalization is necessary to trust debate as a method for alignment, especially once we move to questions where no independent source of truth is available. We emphasize that judge ability is not the same as knowledge: there is evidence that expertise often fails to generalize across domains, but argument evaluation could transfer where expertise does not.\n4. **Can we train people to be better judges?**\n Peer review, practice, debiasing, formal training such as argument mapping, expert panels, tournaments, and other interventions may make people better at judging debates. Which mechanisms work best?\n5. **What questions are people better at answering?**\n If we know that humans are bad at answering certain types of questions, we can switch to reliable formulations. For example, phrasing questions in frequentist terms may reduce known cognitive biases. Graham et al. argue that different political views follow from different weights placed on fundamental moral considerations, and similar analysis could help understand where we can expect moral disagreements to persist after reflective equilibrium. In cases where reliable answers are unavailable, we need to ensure that trained models know their own limits, and express uncertainty or disagreement as required.\n6. **Are there ways to restrict debate to make it easier to judge?**\n People might be better at judging debates formulated in terms of calm, factual statements, and worse at judging debates designed to trigger strong emotions. Or, counterintuitively, it could be the other way around. If we know which styles of debate people are better at judging, we may be able to restrict AI debaters to these styles.\n7. **How can people work together to improve quality?**\n If individuals are insufficient judges, are teams of judges better? Majority vote is the simplest option, but perhaps several people talking through an answer together is stronger, either actively or after the fact through peer review. Condorcet’s jury theorem implies that majority votes can amplify weakly good judgments to strong judgments (or weakly bad judgments to worse), as the sketch below illustrates, but aggregation may be more complex in cases of probabilistic judgment. Teams could be informal or structured; see the Delphi technique for an example of structured teams applied to forecasting.
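As a minimal sketch of the amplification effect mentioned in the last item (the judge accuracies and jury sizes below are arbitrary illustrative numbers, and the independence assumption is exactly what real juries tend to violate):

```python
from math import comb

def majority_correct(n, p):
    """Probability that a majority of n independent judges is correct,
    when each judge is correct with probability p (n odd, so no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for p in (0.6, 0.4):  # weakly good vs. weakly bad judges
    print(p, [round(majority_correct(n, p), 3) for n in (1, 11, 101)])
# For p = 0.6 the majority's accuracy rises toward 1 as the jury grows;
# for p = 0.4 it falls toward 0.
```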
We believe these questions require social science experiments to satisfactorily answer.\n \n\n\n Given our lack of experience outside of ML, we are not able to precisely articulate all of the different experiments we need. The only way to fix this is to talk to more people with different backgrounds and expertise. We have started this process, but are eager for more conversations with social scientists about what experiments could be run, and encourage other AI safety efforts to engage similarly.\n \n\n\nReasons for optimism\n--------------------\n\n\n\n We believe that understanding how humans interact with long-term AI alignment is difficult but possible. However, this would be a new research area, and we want to be upfront about the uncertainties involved. In this section and the next, we discuss some reasons for optimism and pessimism about whether this research will succeed. We focus on issues specific to human uncertainty and associated social science research; for similar discussion on ML uncertainty in the case of debate we refer to our previous work.\n \n\n\n### Engineering vs. science\n\n\n\n Most social science seeks to understand humans “in the wild”: results that generalize to people going about their everyday lives. With limited control over these lives, differences between laboratory and real life are bad from the scientific perspective. In contrast, AI alignment seeks to extract the best version of what humans want: our goal is engineering rather than science, and we have more freedom to intervene. If judges in debate need training to perform well, we can provide that training. If some people still do not provide good data, we can remove them from experiments (as long as this filter does not create too much bias). This freedom to intervene means that some of the difficulty in understanding and improving human reasoning may not apply. However, science is still required: once our interventions are in place, we need to correctly know whether our methods work. Since our experiments will be an imperfect model of the final goal, careful design will be necessary to minimize this mismatch, just as is required by existing social science.\n \n\n\n### We don’t need to answer all questions\n\n\n\n Our most powerful intervention is to give up: to recognize that we are unable to answer some types of questions, and instead prevent AI systems from pretending to answer. Humans might be good judges on some topics but not others, or with some types of reasoning but not others; if we discover this, we can adjust our goals appropriately. Giving up on some types of questions is achievable either on the ML side, using careful uncertainty modeling to know when we do not know, or on the human side by training judges to understand their own areas of uncertainty. Although we will attempt to formulate ML systems that automatically detect areas of uncertainty, any information we can gain on the social science side about human uncertainty can be used both to augment ML uncertainty modeling and to test whether ML uncertainty modeling works.\n \n\n\n### Relative accuracy may be enough\n\n\n\n Say we have a variety of different ways to structure debate with humans. Ideally, we would like to achieve results of the form “debate structure *A* is truth-seeking with 90% confidence”. Unfortunately, we may be unconfident that an absolute result of this form will generalize to advanced AI systems: it may hold for an experiment with simple tasks but break down later on. However, even if we can’t achieve such absolute results, we can still hope for relative results of the form “debate structure *A* is reliably better than debate structure *B*”. Such a result may be more likely to generalize into the future, and assuming it does we will know to use structure *A* rather than *B*.\n \n\n\n### We don’t need to pin down the best alignment scheme\n\n\n\n As the AI safety field progresses to increasingly advanced ML systems, we expect research on the ML side and the human side to merge. Starting social science experiments prior to this merging will give the field a head start, but we can also take advantage of the expected merging to make our goals easier. 
If social science research narrows the design space of human-friendly AI alignment algorithms but does not produce a single best scheme, we can test the smaller design space once the machines are ready.\n \n\n\n### A negative result would be important!\n\n\n\n If we test an AI alignment scheme from the social science perspective and it fails, we’ve learned valuable information. There are a variety of proposed alignment schemes, and learning early which don’t work gives us more time to switch to others, or to intervene on a policy level to slow down dangerous development. In fact, given our belief that AI alignment is harder for more advanced agents, a negative result might be easier to believe and thus more valuable than a less trustworthy positive result.\n \n\n\nReasons to worry\n----------------\n\n\n\n We turn next to reasons social science experiments about AI alignment might fail to produce useful results. We emphasize that useful results might be both positive and negative, so these are not reasons why alignment schemes might fail. Our primary worry is one-sided: that experiments would say an alignment scheme works when in fact it does not, though errors in the other direction are also undesirable.\n \n\n\n### Our desiderata are conflicting\n\n\n\n As mentioned before, some of our criteria when picking experimental tasks are in conflict. We want tasks that are sufficiently interesting (not too easy), have a source of verifiable ground truth, are not too hard, and so on. “Not too easy” and “not too hard” are in obvious conflict, but there are other more subtle difficulties. Domain experts with the knowledge to debate interesting tasks may not be the same people capable of lying effectively, and both restrictions make it hard to gather large volumes of data. Lying effectively is required for a meaningful experiment, since a trained AI may have no trouble lying unless lying is a poor strategy to win debates. Experiments to test whether ethical biases interfere with judgment may make it more difficult to find tasks with reliable ground truth, especially on subjects with significant disagreement across people. The natural way out is to use many different experiments to cover different aspects of our uncertainty, but this would take more time and might fail to notice interactions between desiderata.\n \n\n\n### We want to measure judge quality given optimal debaters\n\n\n\n For debate, our end goal is to understand if the judge is capable of determining who is telling the truth. However, we specifically care whether the judge performs well given that the debaters are performing well. Thus our experiments have an inner/outer optimization structure: we first train the debaters to debate well, then measure how well the judges perform. This increases time and cost: if we change the task, we may need to find new debaters or retrain existing debaters. Worse, the human debaters may be bad at performing the task, either out of inclination or ability. Poor performance is particularly bad if it is one-sided and applies only to lying: a debater might be worse at lying out of inclination or lack of practice, and thus a win for the honest debater might be misleading.\n \n\n\n### ML algorithms will change\n\n\n\n It is unclear when or if ML systems will reach various levels of capability, and the algorithms used to train them will evolve over time. The AI alignment algorithms of the future may be similar to the proposed algorithms of today, or they may be very different. 
However, we believe that knowledge gained on the human side will partially transfer: results about debate will teach us about how to gather data from humans even if debate is superseded. The algorithms may change; humans will not.\n \n\n\n### Need strong out-of-domain generalization\n\n\n\n Regardless of how carefully designed our experiments are, human+human+human debate will not be a perfect match to AI+AI+human debate. We are seeking research results that generalize to the setting where we replace the human debaters (or similar) with AIs of the future, which is a hard ask. This problem is fundamental: we do not have the advanced AI systems of the future to play with, and want to learn about human uncertainty starting now.\n \n\n\n### Lack of philosophical clarity\n\n\n\n Any AI alignment scheme will be both an algorithm for training ML systems and a proposed definition of what it means to be aligned. However, we do not expect humans to conform to any philosophically consistent notion of values, and concepts like reflective equilibrium must be treated with caution in case they break down when applied to real human judgement. Fortunately, algorithms like debate need not presuppose philosophical consistency: a back and forth conversation to convince a human judge makes sense even if the human is leaning on heuristics, intuition, and emotion. It is not obvious that debate works in this messy setting, but there is hope if we take advantage of inaction bias, uncertainty modeling, and other escape hatches. We believe lack of philosophical clarity is an argument for investing in social science research: if humans are not simple, we must engage with their complexity.\n \n\n\nThe scale of the challenge\n--------------------------\n\n\n\n Long-term AI safety is particularly important if we develop artificial general intelligence (AGI), which the OpenAI Charter defines as highly autonomous systems that outperform humans at most economically valuable work. If we want to train an AGI with reward learning from humans, it is unclear how many samples will be required to align it. As much as possible, we can try to replace human samples with knowledge about the world gained by reading language, the internet, and other sources of information. But it is likely that a fairly large number of samples from people will still be required. Since more samples means less noise and more safety, if we are uncertain about how many samples we need then we will want a lot of samples.\n \n\n\n A lot of samples would mean recruiting a lot of people. We cannot rule out needing to involve thousands to tens of thousands of people for millions to tens of millions of short interactions: answering questions, judging debates, etc. We may need to train these people to be better judges, arrange for peers to judge each other’s reasoning, determine who is doing better at judging and give them more weight or a more supervisory role, and so on. Many researchers would be required on the social science side to extract the highest quality information from the judges.\n \n\n\n A task of this scale would be a large interdisciplinary project, requiring close collaborations in which people of different backgrounds fill in each other’s missing knowledge. 
If machine learning reaches this scale, it is important to get a head start on the collaborations soon.\n \n\n\nConclusion: how you can help\n----------------------------\n\n\n\n We have argued that the AI safety community needs social scientists to tackle a major source of uncertainty about AI alignment algorithms: will humans give good answers to questions? This uncertainty is difficult to tackle with conventional machine learning experiments, since today’s machine learning is too primitive. We are still in the early days of performance on natural language and other tasks, and problems with human reward learning may only show up on tasks we cannot yet tackle.\n \n\n\n Our proposed solution is to replace machine learning with people, at least until ML systems can participate in the complexity of debates we are interested in. If we want to understand a game played with ML and human participants, we replace the ML participants with people, and see how the all-human game plays out. For the specific example of debate, we start with debates with two ML debaters and a human judge, then switch to two human debaters and a human judge. The result is a pure human experiment, motivated by machine learning but available to anyone with a solid background in experimental social science. It won’t be an easy experiment, which is all the more reason to start soon.\n \n\n\n If you are a social scientist interested in these questions, please talk to AI safety researchers! We are interested in both conversation and close collaboration. There are many institutions engaged with safety work using reward learning, including our own institution [OpenAI](https://openai.com), as well as [DeepMind](https://deepmind.com) and [Berkeley’s CHAI](https://humancompatible.ai). The AI safety organization [Ought](https://ought.org) is already exploring similar questions, asking how iterated amplification behaves with humans.\n \n\n\n If you are a machine learning researcher interested in or already working on safety, please think about how alignment algorithms will work once we advance to tasks beyond the abilities of current machine learning. If your preferred alignment scheme uses humans in an important way, can you simulate the future by replacing some or all ML components with people? 
If you can imagine these experiments but don’t feel you have the expertise to perform them, find someone who does.", "date_published": "2019-02-19T20:00:00Z", "authors": ["Geoffrey Irving", "Amanda Askell"], "summaries": ["If we want to train AI to do what humans want, we need to study humans."], "doi": "10.23915/distill.00014", "journal_ref": "distill-pub", "bibliography": [{"link": "http://arxiv.org/pdf/1706.03741.pdf", "title": "Deep reinforcement learning from human preferences"}, {"link": "https://doi.org/10.1126/science.185.4157.1124", "title": "Judgment under uncertainty: heuristics and biases"}, {"link": "https://doi.org/10.1146/annurev.psych.53.100901.135109", "title": "Intergroup bias"}, {"link": "http://arxiv.org/pdf/1805.00899.pdf", "title": "AI safety via debate"}, {"link": "http://arxiv.org/pdf/1810.08575.pdf", "title": "Supervising strong learners by amplifying weak experts"}, {"link": "http://arxiv.org/pdf/1811.06521.pdf", "title": "Reward learning from human preferences and demonstrations in Atari"}, {"link": "http://arxiv.org/pdf/1711.09883.pdf", "title": "AI safety gridworlds"}, {"link": "http://doi.acm.org/10.1145/800045.801609", "title": "An empirical methodology for writing user-friendly natural language computer applications"}, {"link": "https://ought.org/presentations/factored-cognition-2018-05", "title": "Factored Cognition"}, {"link": "http://arxiv.org/pdf/1512.05832.pdf", "title": "Learning the Preferences of Ignorant, Inconsistent Agents"}, {"link": "http://arxiv.org/pdf/1610.00850.pdf", "title": "Comparing human-centric and robot-centric sampling for robot deep learning from demonstrations"}, {"link": "https://dirichlet.net/pdf/wallach15computational.pdf", "title": "Computational Social Science: Towards a collaborative future"}, {"link": "https://shiraamitchell.github.io/fairness", "title": "Mirror Mirror: Reflections on Quantitative Fairness"}, {"link": "https://plato.stanford.edu/archives/win2016/entries/moral-anti-realism", "title": "Moral Anti-Realism"}, {"link": "http://proceedings.mlr.press/v81/buolamwini18a.html", "title": "Gender shades: Intersectional accuracy disparities in commercial gender classification"}, {"link": "http://arxiv.org/pdf/1810.04303.pdf", "title": "Batch active preference-based learning of reward functions"}, {"link": "http://arxiv.org/pdf/1806.01946.pdf", "title": "Learning to understand goal specifications by modelling reward"}, {"link": "https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf", "title": "Improving language understanding by generative pre-training"}, {"link": "http://dx.doi.org/10.1037/h0099210", "title": "Thinking, fast and slow"}, {"link": "https://doi.org/10.1016/S0004-3702(01)00129-1", "title": "Deep Blue"}, {"link": "http://arxiv.org/pdf/1712.01815.pdf", "title": "Mastering chess and shogi by self-play with a general reinforcement learning algorithm"}, {"link": "https://www.monash.edu/business/economics/research/publications/publications2/0718Deviantbicchieri.pdf", "title": "Deviant or Wrong? 
The Effects of Norm Information on the Efficacy of Punishment"}, {"link": "https://www.ssoar.info/ssoar/bitstream/handle/document/42104/ssoar-2010-henrich_et_al-The_weirdest_people_in_the.pdf", "title": "The weirdest people in the world?"}, {"link": "https://en.wikipedia.org/wiki/A_Theory_of_Justice", "title": "A theory of justice"}, {"link": "https://ueaeprints.uea.ac.uk/54622/1/psychology_of_inner_agent_1506_01.pdf", "title": "Looking for a psychology for the inner rational agent"}, {"link": "https://static.squarespace.com/static/54763f79e4b0c4e55ffb000c/t/5477ccf2e4b07347e76a53c9/1417137394112/how-and-where-does-moral-judgment-work.pdf", "title": "How (and where) does moral judgment work?"}, {"link": "http://arxiv.org/pdf/1811.07871.pdf", "title": "Scalable agent alignment via reward modeling: a research direction"}, {"link": "https://blog.openai.com/openai-five", "title": "OpenAI Five"}, {"link": "https://books.google.com/books?id=uo2DW4XC7GgC", "title": "The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive"}, {"link": "http://www.betsylevypaluck.com/s/Paluck2016.pdf", "title": "How to overcome prejudice"}, {"link": "https://doi.org/10.1111/pops.12394", "title": "The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics"}, {"link": "https://doi.org/10.1146/annurev-psych-122216-011821", "title": "Persuasion, influence, and value: Perspectives from communication and social neuroscience"}, {"link": "https://doi.org/10.1177%2F1745691615577794", "title": "Identifying and cultivating superforecasters as a method of improving probabilistic predictions"}, {"link": "http://arxiv.org/pdf/1606.03137.pdf", "title": "Cooperative inverse reinforcement learning"}, {"link": "http://arxiv.org/pdf/1711.02827.pdf", "title": "Inverse reward design"}, {"link": "https://en.wikisource.org/wiki/The_Art_of_Being_Right", "title": "The art of being right"}, {"link": "https://www.nber.org/papers/w9873.pdf", "title": "Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination"}, {"link": "https://doi.org/10.2307/1914185", "title": "Prospect theory: An analysis of decisions under risk"}, {"link": "https://doi.org/10.1037/rev0000062", "title": "From anomalies to forecasts: Toward a descriptive model of decisions under risk, under ambiguity, and from experience"}, {"link": "http://arxiv.org/pdf/1810.10733.pdf", "title": "Cicero: Multi-Turn, Contextual Argumentation for Accurate Crowdsourcing"}, {"link": "https://doi.org/10.1037/0033-295X.114.3.704", "title": "The rationality of informal argumentation: A Bayesian approach to reasoning fallacies"}, {"link": "https://doi.org/10.1046/j.1365-2753.2001.00284.x", "title": "Rationality in medical decision making: a review of the literature on doctors’ decision-making biases"}, {"link": "https://press.princeton.edu/titles/11152.html", "title": "Expert political judgment: How good is it? 
How can we know?"}, {"link": "https://learnlab.org/uploads/mypslc/publications/chi%20two%20approaches%20chapter%202006.pdf", "title": "Two approaches to the study of experts’ characteristics"}, {"link": "https://doi.org/10.1002/9780470752937.ch16", "title": "Debiasing"}, {"link": "https://link.springer.com/article/10.1007/s11409-012-9092-1", "title": "An evaluation of argument mapping as a method of enhancing critical thinking performance in e-learning environments"}, {"link": "https://doi.org/10.1177%2F0963721414534257", "title": "Forecasting tournaments: Tools for increasing transparency and improving the quality of debate"}, {"link": "https://doi.org/10.1080/14792779143000033", "title": "How to make cognitive illusions disappear: Beyond \"heuristics and biases\""}, {"link": "https://www-bcf.usc.edu/~jessegra/papers/GrahamHaidtNosek.2009.Moral%20foundations%20of%20liberals%20and%20conservatives.JPSP.pdf", "title": "Liberals and conservatives rely on different sets of moral foundations"}, {"link": "https://doi.org/10.1080/02699931003593942", "title": "Negative emotions can attenuate the influence of beliefs on logical reasoning"}, {"link": "https://doi.org/10.1111/1467-9760.00128", "title": "Epistemic democracy: Generalizing the Condorcet jury theorem"}, {"link": "https://www.princeton.edu/~ppettit/papers/Aggregating_EconomicsandPhilosophy_2002.pdf", "title": "Aggregating sets of judgments: An impossibility result"}, {"link": "http://www.sciencedirect.com/science/article/pii/S0169207099000187", "title": "The Delphi technique as a forecasting tool: issues and analysis"}, {"link": "https://blog.openai.com/charter", "title": "OpenAI Charter"}]} {"id": "65078317b2524afc33ae88f1a21aca27", "title": "Distill Update 2018", "url": "https://distill.pub/2018/editorial-update", "source": "distill", "source_type": "blog", "text": "![](readers-cropped.jpg)\n\n\n#### [Things that Worked Well](#successes)\n\n\n* [Interfaces for Ideas](#interfaces)\n* [Engagement as a Spectrum](#engagement)\n* [Software Engineering Best Practices for Scientific Publishing](#best-practices)\n\n\n\n\n#### [Challenges & Improvements](#challenges)\n\n\n* [The Distill Prize](#prize)\n* [A Small Community](#small-community)\n* [Review Process](#review-process)\n\n\n\n\n#### [Other Changes](#other-changes)\n\n\n* [Dual Submission Policy](#dual-submission)\n* [Supporting Authors](#authors)\n* [Growing Distill’s Team](#team)\n* [Growing Distill’s Scope](#scope)\n\n\n\n\n\n\n\n\n A little over a year ago, we formally launched Distill as an open-access scientific journal.Distill operated informally for several months before launching itself as a journal.\n\n\n\n\n It’s been an exciting ride since then! To give some very concrete metrics, Distill has had over a million unique readers, and more than 2.9 million views. Distill papers have been cited 23 times on average. We measure this by [Google Scholar citations.](https://www.google.com/url?q=https://scholar.google.com/scholar?q%3Dsite%253Adistill.pub&sa=D&ust=1531418430046000&usg=AFQjCNEQJlGiVmOIx5BAiIutl-svxO7d1Q) This would [place Distill in the top 2%](http://mdanderson.libanswers.com/faq/26159) of academic journals indexed by Journal Citation Reports by this metric. That said, this is a somewhat unfair comparison given Distill’s smaller size; it is easier to publish mostly impactful papers when you only publish a small number of them. 
More importantly, we’ve published several new papers with a strong emphasis on clarity and reproducibility, which we think is helping to encourage a new style of scientific communication.\n \n\n\n\n Despite this, there are a couple ways we think we’ve fallen short or could be doing better. To that end, we’ve been reflecting a lot on what we can improve. In particular, we plan to make the following changes:\n \n\n\n* Separate mentoring from evaluating articles.\n* Clarify and streamline the Distill review process, including a new [reviewer worksheet](https://docs.google.com/document/d/16BNLdSmoUc1zqydECJ8trYkVNDnKVlxT57EEXhCxWSc/edit#).\n* Prioritize creating resources to help everyone over mentoring individuals, starting with the creation of a Slack workspace ([join here](http://slack.distill.pub)).\n* Simplify our process for bringing in acting editors to resolve conflicts of interest.\n* Clarify our policy on dual submissions.\n* Grow the Distill editorial team and create a pathway to entering more fields. [Arvind Satyanarayan](http://arvindsatya.com/) is joining the editorial team to oversee an HCI + AI portfolio.\n\n\n\n\n\n---\n\n\n\n[1](#successes)\n\nThings that Worked Well\n-----------------------\n\n\n### Interfaces for Ideas\n\n\nIt’s tempting to think of explanations as a layer of polish on top of ideas. We believe that the best explanations are often something much deeper: they are interfaces to ideas, a way of thinking and interacting with a concept. Building on this, we’ve seen several Distill articles create visualizations that reify those ways of thinking and interacting with ideas.\n\n\nOne of the articles that best exemplifies this type of contribution is Gabriel Goh’s [Why Momentum Really Works](http://distill.pub/2017/momentum). Gabe, and other optimization researchers, have a perspective on this problem that may be unfamiliar to practitioners. It involves a mathematical formalism of the spectrum of eigenvalues of the optimization problem, as well as a more informal way of interpreting and thinking about them. In our opinion, traditional academic publications don’t emphasize articulating the intuition that accompanies technical ideas — the kind that teachers share with students at whiteboards.\n\n\nIn contrast, this diagram, taken from the article, not only conveys the formalism but also shares some of the author’s intuition. Bolstered by the interactivity, it invites readers to step into a way of thinking. Perhaps most interestingly, by reifying a mental model into a computationally-driven interface, the author discovered places where their thinking was incomplete — specifically, introducing momentum flattens the spectrum of eigenvalues in surprising ways.\n\n\n\n[1](#figure-momentum)\n\n\n\n\n\n\nThis kind of interface design and this line of thinking isn’t new to Distill, but we’ve been able to help push it forward over the last year, and we’re excited to see where it will go.\n\n\n### Engagement as a Spectrum\n\n\nOne thing we’ve found particularly exciting is how articles can make engaging deeply with ideas an easier and smoother process. Normally, there’s a huge jump from reading a paper to testing and building on it. 
But we’re starting to see papers where engagement is a continuous spectrum:\n\n\n\n[2](#figure-continuum)\n\n\n![Reading ↔ Interactive Diagrams ↔ In-browser Notebooks ↔ New Experiments](continuum.svg)\n\n\n[Several](https://distill.pub/2017/feature-visualization) [recent](https://distill.pub/2018/building-blocks) [articles](https://distill.pub/2018/differentiable-parameterizations) have explored this idea.\n Not only are important concepts accompanied by interactive diagrams, but also by a notebook that can reproduce each diagram.\n But including a notebook does more than allow readers to test an idea — it lets them dive into new research without any setup.\n Interesting new experiments can sometimes be run in mere minutes.\n \n\n\n\n[3](#figure-spectrum)\n\n\n![](spectrum2.png)\n\nAs of today, over 6,000 readers have opened the notebooks. This represents around 3% of readers who viewed the articles with these notebook links.\n\n\nReproducibility has long been recognized as a critical component of maintaining scientific hygiene, and authors have increasingly taken to open-sourcing their contributions and even putting intermediary artifacts (e.g., data analysis scripts) on GitHub. In-browser notebooks allow authors to go one step further, engaging in what we call “active reproducibility”: not only making it *technically possible* to reproduce their work, but making it convenient to do so. We hope to see more authors invest effort in this regard.\n\n\n### Software Engineering Best Practices for Scientific Publishing\n\n\nOver the past year, we’ve also seen several advantages to using software engineering best practices to operate a scientific journal. Every Distill article is housed within a GitHub repository, and peer review is conducted through the issue tracker. We’ve found this setup gives readers greater transparency into the publication process. For instance, readers can see how Gabriel Goh’s momentum article [has been updated](https://github.com/distillpub/post--momentum/compare/95506b079372cee3aa7fbc9bd29ee078aaff12e7...bf84bcf41f019658096c0130b9695b6422fe2766) from when it was first published, step through [early development](https://github.com/distillpub/post--momentum/commits/master?after=bf84bcf41f019658096c0130b9695b6422fe2766+209) to see the genesis of the article’s ideas, and [read the back-and-forth](https://github.com/distillpub/post--momentum/issues/29) of the review process. More excitingly, we’ve seen readers engage in ongoing, post-publication peer review, including sending pull requests to [fix typos](https://github.com/distillpub/post--momentum/pull/40), making [more](https://github.com/distillpub/post--momentum/pull/42) [thorough](https://github.com/distillpub/post--momentum/pull/43) [editing](https://github.com/distillpub/post--momentum/pull/66) [passes](https://github.com/distillpub/post--momentum/pull/67), and even sparking [discussions with the author](https://github.com/distillpub/post--momentum/issues/51).\n\n\nAuthors also benefit from this setup as Distill provides continuous integration for scientific papers. Prior to publication, draft articles are automatically built and served from password-protected URLs. 
Authors are free to share these addresses to solicit initial feedback, and can be confident that their readers will always see the most up-to-date version of the draft.\n\n\n\n\n---\n\n\n\n[2](#challenges)\n\nChallenges & Improvements\n-------------------------\n\n\n### The Distill Prize\n\n\nWe have, unfortunately, not yet awarded the 2018 Distill prize. We received 59 submissions, several of which were lecture series consisting of many hours of content. We did not anticipate content of this kind and did not (and do not) have a good process in place for evaluating it. Part of the issue has been perfectionism in how we evaluate the content. In the future, we aim to better balance this with expeditious review. \n\n\nWe endeavor to award the Distill prize by Thanksgiving 2018, and to conduct the process in a more timely fashion moving forward.\n\n\n### A Small Community\n\n\nDistill aims to cultivate a new style of scientific communication and build an ecosystem to support it. We hope to grow this community over time but, for the moment, we have a relatively small pool of potential authors and editors compared to more traditional academic venues. \n\n\nOne consequence of this has been that Distill has a low publication volume (12 papers as of this writing). We’ve grappled a lot with this: Should we change our standards? Is this something we should be concerned about?\n\n\nAt the end of the day, we believe that, as long as our content is outstanding, it’s fine to be a smaller “bespoke” journal for the foreseeable future. We believe Distill primarily serves the community by legitimizing non-traditional research artifacts, and providing an example of what is possible. That requires quality of publications, not quantity.\n\n\nAnother challenge of being in a small community is that there are often social ties between members of our community. This is great, but it means that we have to navigate a lot of potential conflicts of interest. These kinds of challenges around conflicts of interest are typical of a young field, and we expect this issue to become less of a problem over the coming years as our editorial team grows. However, in the meantime, we often need to bring in independent acting editors to resolve conflicts of interest.\n\n\nOur previous process for getting independent editors was to go through members of Distill’s steering committee. (We’re very grateful to Ian Goodfellow for his patient assistance with this.) However, since this is turning out to be a pretty common situation, we’d like a mechanism that doesn’t require us to bug our steering committee:\n\n\n* In the event of a conflict of interest, Distill editors will select a member of the research community to serve as a temporary “acting editor” for an article. The acting editor should be a member of the relevant research community, and at arm’s length from the authors. The use and identity of an acting editor will be noted in the review process log, and made public if the article is published.\n\n\n### Review Process\n\n\nWhen we started Distill, we adopted a pretty radical review process. We knew that most researchers didn’t have the full skill set — especially the design skills — needed to write the kind of articles Distill aspires to. For that reason, we provided extensive mentorship and assistance to help authors improve articles.\n\n\nUnfortunately, while this has led to some articles we’re really proud of, we’ve found ourselves struggling with it. 
Distill editors are volunteers who do their work on top of their normal role as researchers, and the kind of mentorship we’ve been trying to do can take anywhere from 20 to 80 hours per article. As a result, editors have ended up severely over-capacity. We’ve also seen editors end up in paralyzing dual roles of simultaneously mentoring authors and needing to make editorial decisions about their work.\n\n\nFor some authors, the result has been a slow and indecisive review process.\n\n\nOur authors and editors deserve better than that. To that end, Distill is implementing the following policy changes to our review process:\n\n\n* Distill’s review process will no longer involve mentorship. Instead, we’re creating alternative channels for supporting authors, discussed below.\n* Distill will only consider complete article submissions and will evaluate them as is.\n* We’re creating a [public reviewer worksheet](https://docs.google.com/document/d/16BNLdSmoUc1zqydECJ8trYkVNDnKVlxT57EEXhCxWSc/edit#) that more clearly defines our review criteria.\n\n\nOur new policy is described in detail on the [article submission page](https://distill.pub/journal/). Our old policy is archived [here](https://github.com/distillpub/pipeline/blob/b4c53b7f9b5bc50b4442344cc065c5984a7871d9/pages/journal/index.html). By default, this policy change does not apply to articles submitted under our previous policy.\n\n\nPart of Distill’s role is to enable authors to experiment and to provide a home for unusual types of academic artifacts. Formalizing our review process doesn’t change this; we are still open to submissions that our typical review process doesn’t anticipate, and will adapt our process if necessary. In particular, in the next year, we hope to increase the number of short articles focused on a narrow topic. \n\n\nWe’re also interested in finding ways to support communication of early-stage research results. We’re concerned that our current expectations may incentivize authors to not share results until they have reached a high level of maturity, and this may not be the best thing for the field. We’re not implementing any policy changes regarding this at the moment, but are actively considering options. [Please reach out](mailto:editors@distill.pub) if you have ideas you’d like us to consider!\n\n\n\n\n---\n\n\n\n[3](#other-changes)\n\nOther Changes\n-------------\n\n\n### Dual Submission Policy\n\n\nIn order for Distill to be effective in legitimizing non-traditional publishing, it must be perceived as a primary academic publication. This means it’s important for Distill to follow typical “dual publication” norms. It’s also important for us to avoid the perception that Distill is an “accompanying blog post” for something like an arXiv paper.\n\n\nThe result is that Distill can only consider articles that are substantially different from those formally published elsewhere, and is cautious of articles informally published elsewhere. Below we provide guidance for particular cases:\n\n\n* **No Prior Publication / Low-Profile Informal Publication:** No concerns!\n* **ArXiv Paper:** We completely understand that researchers sometimes need to quickly get results out, and arXiv is a great vehicle for doing so. We’re happy to publish your paper as long as there’s a clear understanding that Distill is the formal publication. The Distill paper will almost certainly be more developed anyway. :)\n* **Previous Workshop / Conference Papers:** Distill is happy to publish more developed and polished “journal versions” of papers. 
(This is a normal pattern in scientific publishing, although less common in Machine Learning.) These must substantively advance on the previous publications, through some combination of improving exposition, better surfacing of underlying insights and ways of thinking, consolidating a sequence of papers, or expanding with better experiments.\n* **High-Profile Informal Publication:** We see this as being very similar to publication in a workshop or conference, and have the same expectations as above.\n\n\n### Supporting Authors\n\n\nOver the last year, we’ve put a lot of energy into mentoring individuals on writing Distill articles. This has often involved editors volunteering tens of hours of mentorship, per submitted article. While this has been rewarding, it isn’t scalable.\n\n\nIn the next year, we plan to focus more on scalable ways of helping people by:\n\n\n* Continuing our work on the [**Distill Template**](https://github.com/distillpub/template), which provides many of the basic tools needed for writing beautiful web-first academic papers. We’ll be starting work on version 3 of the template, and plan to make it even easier to use by making more opinionated decisions about author workflow.\n* Writing a [**Distill Style Guide**](https://github.com/distillpub/template/wiki#style-guide) describing the best practices we’ve discovered. We see many of the same issues in the articles we edit, and hope that by consolidating the solutions in a style guide we can help all authors avoid those pitfalls.\n* Sharing our [**Distill Reviewer Worksheet**](https://docs.google.com/document/d/16BNLdSmoUc1zqydECJ8trYkVNDnKVlxT57EEXhCxWSc/edit#) so that authors can use it to self-evaluate their article and look for areas to improve.\n* Starting a [**Distill Community Slack workspace**](http://slack.distill.pub) where people can seek advice, mentorship, and co-authors.\n\n\n### Growing Distill’s Team\n\n\nWe believe that growing Distill’s editorial team is one of the most important ingredients for its long-term success. Growing the circle of people involved in Distill’s day-to-day operations makes it more robust, better able to scale, and reduces conflict of interest issues.\n\n\nAs important as it is to expand our editors, it’s equally important to make sure we pick the right editors. This means building up a team deeply aligned with Distill’s unusual values and mission. To achieve this, we plan to use the following evaluation process for potential editors:\n\n\n* Write an outstanding Distill paper, demonstrating to us that they deeply understand Distill’s mission and have the technical skills needed to evaluate others’ work.\n* Interviews with existing editors discussing Distill’s mission and the role of editors.\n\n\nBeing a Distill editor means taking on ownership and responsibility for the success of Distill and for publication decisions within your subject matter portfolio. Distill editors are volunteer positions with no compensation — except playing a critical role in advancing a new kind of scientific publishing.\n\n\nAs a first step in this direction, we’re pleased that [Arvind Satyanarayan](http://arvindsatya.com) has joined the Distill editorial team. Arvind comes from the data visualization and human-computer interaction (HCI) communities and will initially focus on articles at the intersection of these fields with machine learning. If Distill finds the right additional editor, we would be happy to expand our coverage of HCI more broadly. 
\n\n\n### Growing Distill’s Scope\n\n\nIn the long-run, we believe Distill should be open to expanding to other disciplines, with new editors taking on different topic portfolios.\n\n\nWe had previously believed that, in exploring a new kind of publishing, Distill would be best served by focusing on a single “vertical” (machine learning) where it had editorial expertise. We still believe that Distill should only operate in areas where it has expert editors, but we also think that isn’t the full story. Focusing only on machine learning has exacerbated our small community issues, by restricting us to the intersection of machine learning and this style of communication.\n\n\nWe now believe that the right strategy for Distill is to expand to other disciplines, slowly and cautiously, if and when we find the right editors. These new topics would become part of a single cross-disciplinary Distill journal, like PLoS or Nature. We do not plan to subdivide or franchise at this point. \n\n\nIn considering editors for new topics, Distill will have the same expectations we have for all editors (described above) with two modifications:\n\n\n* Although Distill does not normally review papers outside its existing topic portfolio, we will make an exception to review papers from potential editorial candidates. The existing editorial team would evaluate exposition while soliciting a third party editor to help us evaluate scientific merit, following Distill’s regular review process. Because this type of review is especially difficult and expensive, we will only move forward if the submission plausibly appears to be a very strong article.\n* There needs to be a second editor who can share responsibility for the new topic. This can either be an existing editor expanding to another topic, or someone applying along with the new editor. Having a second editor is important so that editors have someone to talk over difficult cases with, and so that there isn’t a single point of failure.\n\n\nConclusion\n----------\n\n\n\n Distill is a young journal exploring a new style of scientific communication. We have learned a lot of valuable lessons in our first year, but we still have a lot of room to grow. We hope that you will join us in pushing the boundaries of what a scientific paper can be!\n \n\n\n\n Distill is grateful to all the members of the research community who have supported it to date — our authors, reviewers, editors, members of the steering committee, every one providing feedback on GitHub, and, of course, our readers. We’re glad to have you with us!", "date_published": "2018-08-14T20:00:00Z", "authors": ["Distill Editors"], "summaries": ["An Update from the Editorial Team"], "doi": "10.23915/distill.00013", "journal_ref": "distill-pub", "bibliography": []} {"id": "f2c26f7be5a4bf71c811540aa79a57f3", "title": "Differentiable Image Parameterizations", "url": "https://distill.pub/2018/differentiable-parameterizations", "source": "distill", "source_type": "blog", "text": "Neural networks trained to classify images have a remarkable — and surprising! 
— capacity to generate images.\n Techniques such as DeepDream, style transfer, and feature visualization leverage this capacity as a powerful tool for exploring the inner workings of neural networks, and fuel a small artistic movement based on neural art.\n \n\n\n\n All these techniques work in roughly the same way.\n Neural networks used in computer vision have a rich internal representation of the images they look at.\n We can use this representation to describe the properties we want an image to have (e.g. style), and then optimize the input image to have those properties.\n This kind of optimization is possible because the networks are differentiable with respect to their inputs: we can slightly tweak the image to better fit the desired properties, and then iteratively apply such tweaks in gradient descent.\n \n\n\n\n Typically, we parameterize the input image as the RGB values of each pixel, but that isn’t the only way.\n As long as the mapping from parameters to images is differentiable, we can still optimize alternative parameterizations with gradient descent.\n \n\n\n\n\n\n[1](#figure-differentiable-parameterizations):\n As long as an image parameterization is differentiable, we can backpropagate through it.\n (Diagram: Parameters → Mapping → image/RGB space → Loss Function.)\n\n\n\n\n Differentiable image parameterizations invite us to ask “what kind of image generation process can we backpropagate through?”\n The answer is quite a lot, and some of the more exotic possibilities can create a wide range of interesting effects, including 3D neural art, images with transparency, and aligned interpolation.\n Previous work using specific unusual image parameterizations has shown exciting results — we think that zooming out and looking at this area as a whole suggests there’s even more potential.\n 
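As a minimal sketch of what "backpropagating through a parameterization" means (a made-up one-dimensional toy, not the article's setup): instead of optimizing a value x directly, we optimize a parameter theta that generates x, pushing the loss gradient through the generator via the chain rule.

```python
import math

# Toy illustration: optimize a parameter theta that *generates* the input x,
# rather than optimizing x directly.  g(theta) = tanh(theta) stands in for the
# "image parameterization" and L(x) = (x - 0.5)**2 stands in for the loss.

def g(theta):            # parameterization: theta -> "image" x
    return math.tanh(theta)

def dg(theta):           # derivative of the parameterization
    return 1.0 - math.tanh(theta) ** 2

def dL(x):               # derivative of the loss L(x) = (x - 0.5)^2
    return 2.0 * (x - 0.5)

theta, lr = 0.0, 0.5
for _ in range(200):
    x = g(theta)
    theta -= lr * dL(x) * dg(theta)   # chain rule: dL/dtheta = L'(g(theta)) * g'(theta)

print(round(g(theta), 3))  # ~0.5: the generated x minimizes the loss
```

Replacing the scalar toy with an image tensor and an automatic-differentiation framework gives the structure shown in the diagram above.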
### Why Does Parameterization Matter?\n\n\n\n It may seem surprising that changing the parameterization of an optimization problem can significantly change the result, despite the objective function that is actually being optimized remaining the same.\n We see four reasons why the choice of parameterization can have a significant effect:\n\n\n\n**(1) - Improved Optimization** -\nTransforming the input to make an optimization problem easier — a technique called “preconditioning” — is a staple of optimization.\n \n Preconditioning is most often presented as a transformation of the gradient\n (usually multiplying it by a positive definite “preconditioner” matrix).\n However, this is equivalent to optimizing an alternate parameterization of the input.\n \nWe find that simple changes in parameterization make image optimization for neural art and visualization much easier.\n\n\n\n**(2) - Basins of Attraction** -\nWhen we optimize the input to a neural network, there are often many different solutions, corresponding to different local minima.\n \n Training deep neural networks is characterized by complex optimization landscapes, which may have many equally good local minima for a given objective.\n (Note that finding the global minimum is not always desirable, as it may result in an overfitted model.)\n Thus, it’s probably not surprising that optimizing the input to a neural network would also have many local minima.\n \nThe probability of our optimization process falling into any particular local minimum is controlled by its basin of attraction (i.e., the region of the optimization landscape under the influence of the minimum).\nChanging the parameterization of an optimization problem is known to change the sizes of different basins of attraction, influencing the likely result.\n\n\n\n**(3) - Additional Constraints** -\nSome parameterizations cover only a subset of possible inputs, rather than the entire space.\nAn optimizer working in such a parameterization will still find solutions that minimize or maximize the objective function, but they’ll be subject to the constraints of the parameterization.\nBy picking the right parameterization, one can impose constraints ranging from simple ones (e.g., the boundary of the image must be black) to rich, subtle ones.\n\n\n\n**(4) - Implicitly Optimizing other Objects** -\n A parameterization may internally use a different kind of object than the one it outputs and we optimize for.\n For example, while the natural input to a vision network is an RGB image, we can parameterize that image as a rendering of a 3D object and, by backpropagating through the rendering process, optimize that instead.\n Because the 3D object has more degrees of freedom than the image, we generally use a *stochastic* parameterization that produces images rendered from different perspectives.\n\n\n\nIn the rest of the article we give concrete examples where such approaches are beneficial and lead to surprising and interesting visual results.\n\n\n\n\n\n\n---\n\n\n\n[1](#section-aligned-interpolation)\n\n\n\n[Aligned Feature Visualization Interpolation](#section-aligned-interpolation)\n-----------------------------------------------------------------------------\n\n\n\n\n Feature visualization is most often used to visualize individual neurons,\n but it can also be used to [visualize combinations of neurons](https://distill.pub/2017/feature-visualization/#interaction), in order to study how they interact.\n Instead of optimizing an image to make a single neuron fire, one optimizes it to make multiple neurons fire.\n\n\n\n\n When we want to really understand the interaction between two neurons,\n we can go a step further and create multiple visualizations,\n gradually shifting the objective from optimizing one neuron to putting more weight on the other neuron firing.\n This is in some ways similar to interpolation in the latent spaces of generative models like GANs.\n\n\n\n\n However, there is a small challenge: feature visualization is stochastic.\n Even if you optimize for the exact same objective, the visualization will be laid out differently each time.\n Normally, this isn’t a problem, but it does detract from the interpolation visualizations.\n If we make them naively, the resulting visualizations will be *unaligned*:\n visual landmarks such as eyes appear in different locations in each image.\n This lack of alignment can make it harder to see the difference due to slightly different objectives,\n because they’re swamped by the much larger differences in layout.\n\n\n\n\n We can see the issue with independent optimization if we look at the interpolated frames as an animation:\n\n\n\n\n\n[2](#figure-aligned-interpolation-comparison)\n\n\n\n\n\n\n How can we achieve this aligned interpolation, where visual landmarks do not move between frames?\n There are a number of possible approaches one could try.\n One could, for example, explicitly penalize differences between adjacent frames; 
our final result and our colab notebook use this technique in combination with a shared parameterization.\n The approach we focus on here, however, is the *shared parameterization* itself: each frame is parameterized as a combination of its own unique parameterization and a single shared one.\n \n\n\n\n\n[3](#figure-aligned-interpolation-examples)\n\n\n\n\n\n By partially sharing a parameterization between frames, we encourage the resulting visualizations to naturally align.\n Intuitively, the shared parameterization provides a common reference for the displacement of visual landmarks, while the unique one gives each frame its own visual appeal based on its interpolation weights.\n \n Concretely, we combine a usually lower-resolution shared parameterization $P_{\text{shared}}$ and full-resolution independent parameterizations $P_{\text{unique}}^i$ that are unique to each frame $i$ of the visualization.\n Each individual frame $i$ is then parameterized as a combination $P^i$ of the two, $P^i = \sigma(P_{\text{unique}}^i + P_{\text{shared}})$, where $\sigma$ is the logistic sigmoid function.\n \n This parameterization doesn’t change the objective, but it does enlarge the **(2) basins of attraction** where the visualizations are aligned.\n \n We can explicitly visualize how a shared parameterization affects the basins of attraction in a toy example.\n Let’s consider optimizing two variables $x$ and $y$ to both minimize $L(z) = (z^2 - 1)^2$.\n Since $L(z)$ has two basins of attraction, $z = 1$ and $z = -1$, the pair of optimization problems has four solutions:\n $(x, y) = (1, 1)$, $(x, y) = (-1, 1)$, $(x, y) = (1, -1)$, or $(x, y) = (-1, -1)$.\n Let’s consider randomly initializing $x$ and $y$, and then optimizing them.\n Normally, the optimization problems are independent, so $x$ and $y$ are equally likely to come to unaligned solutions (where they have different signs) as aligned ones.\n But if we add a shared parameterization, the problems become coupled and the basin of attraction where they’re aligned becomes bigger.\n \n\n![](images/diagrams/basin-alignment.png)
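A minimal sketch of this toy example (plain gradient descent with hand-derived gradients; the learning rate, step count, trial count, and uniform initialization range are arbitrary choices, and unlike the article's image parameterization the sigmoid is omitted so that both minima stay reachable):

```python
import random

def dL(z):                                  # derivative of L(z) = (z**2 - 1)**2
    return 4.0 * z * (z * z - 1.0)

def aligned_fraction(shared, trials=2000, steps=300, lr=0.01):
    """Fraction of random runs in which x and y end at minima with the same sign."""
    aligned = 0
    for _ in range(trials):
        ux = random.uniform(-2.0, 2.0)      # frame-specific ("unique") parameters
        uy = random.uniform(-2.0, 2.0)
        s = random.uniform(-2.0, 2.0) if shared else 0.0   # shared parameter
        for _ in range(steps):
            x, y = ux + s, uy + s           # the shared parameter couples x and y
            gx, gy = dL(x), dL(y)
            ux -= lr * gx
            uy -= lr * gy
            if shared:
                s -= lr * (gx + gy)         # it receives gradients from both problems
        aligned += (ux + s) * (uy + s) > 0
    return aligned / trials

print(aligned_fraction(shared=False))       # close to 0.5: aligned and unaligned equally likely
print(aligned_fraction(shared=True))        # noticeably above 0.5: the aligned basin grows
```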
This is an initial example of how differentiable parameterizations in general can be a useful additional tool in visualizing neural networks.

---

[2](#section-styletransfer)

[Style Transfer with non-VGG architectures](#section-styletransfer)
-------------------------------------------------------------------

Neural style transfer has a mystery: despite its remarkable success, almost all style transfer is done with variants of the **VGG architecture**.
This isn’t because no one is interested in doing style transfer on other architectures, but because attempts to do so consistently work poorly.
Examples of experiments performed with different architectures can be found on [Medium](https://medium.com/mlreview/getting-inception-architectures-to-work-with-style-transfer-767d53475bf8), [Reddit](https://www.reddit.com/r/MachineLearning/comments/7rrrk3/d_eat_your_vggtables_or_why_does_neural_style/) and [Twitter](https://twitter.com/hardmaru/status/954173051330904065).

Several hypotheses have been proposed to explain why VGG works so much better than other models.
One suggested explanation is that VGG’s large size causes it to capture information that other models discard.
This extra information, the hypothesis goes, isn’t helpful for classification, but it does cause the model to work better for style transfer.
An alternate hypothesis is that other models downsample more aggressively than VGG, losing spatial information.
We suspect that there may be another factor: most modern vision models have checkerboard artifacts in their gradients, which could make optimization of the stylized image more difficult.

In previous work we found that a [decorrelated parameterization can significantly improve optimization](//distill.pub/2017/feature-visualization/#preconditioning).
We find the same approach also improves style transfer, allowing us to use a model that did not otherwise produce visually appealing style transfer results:

[4](#figure-style-transfer-examples):
Move the slider under “final image optimization” to compare optimization in pixel space with optimization in a decorrelated space. Both images were created with the same objective and differ only in their parameterization.

Let’s consider this change in a bit more detail. Style transfer involves three images: a content image, a style image, and the image we optimize.
All three of these feed into the CNN, and the style transfer objective is based on the differences in how these images activate the CNN.
The only change we make is how we parameterize the optimized image. Instead of parameterizing it in terms of pixels (which are highly correlated with their neighbors), we use a scaled Fourier transform.

[5](#figure-style-transfer-diagram):
Parameterizing the learned image in a decorrelated space (via an inverse 2D FFT) makes style transfer more robust to the choice of model. The content objective aims to get neurons to fire in the same positions as they did for the content image; the style objective aims to create similar patterns of neuron activation as in the style image, without regard to position.

Our exact implementation can be found in the accompanying notebook.
Note that it also uses [transformation robustness](https://distill.pub/2017/feature-visualization/#regularizer-playground-robust), which not all implementations of style transfer use.
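To give a rough sense of what such a decorrelated parameterization looks like, here is a minimal, single-channel NumPy sketch. The function and variable names are our own, and the frequency scaling is a simplification of what the notebook actually does:

```python
import numpy as np

def image_from_spectrum(spectrum_params, size):
    """Decode an image from parameters that live in (scaled) Fourier space.

    `spectrum_params` is a complex [size, size] array: these are the values
    the optimizer would update, instead of updating raw pixels directly.
    """
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    freqs = np.sqrt(fx**2 + fy**2)
    # Scale each coefficient roughly by 1/frequency, so that all spatial
    # frequencies start out with comparable energy; this is the decorrelation step.
    scale = 1.0 / np.maximum(freqs, 1.0 / size)
    return np.fft.ifft2(spectrum_params * scale).real

rng = np.random.default_rng(0)
params = 0.01 * (rng.normal(size=(128, 128)) + 1j * rng.normal(size=(128, 128)))
image = image_from_spectrum(params, size=128)
```

In a real setup the decoding would be written in a framework such as TensorFlow, so that gradients of the style transfer loss with respect to the image become gradients with respect to the spectrum coefficients.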
---

[3](#section-xy2rgb)

[Compositional Pattern Producing Networks](#section-xy2rgb)
-----------------------------------------------------------

So far, we’ve explored image parameterizations that are relatively close to how we normally think of images, using pixels or Fourier components.
In this section, we explore the possibility of **(3) adding additional constraints** to the optimization process by using a different parameterization.
More specifically, we parameterize our image as a neural network — in particular, a Compositional Pattern Producing Network (CPPN).

CPPNs are neural networks that map $(x, y)$ positions to image colors:

$$(x, y)\ \xrightarrow{\ \text{CPPN}\ }\ (r, g, b)$$

By applying the CPPN to a grid of positions, one can make images of arbitrary resolution.
The parameters of the CPPN network — the weights and biases — determine what image is produced.
Depending on the architecture chosen for the CPPN, pixels in the resulting image are constrained to share, to some degree, the color of their neighbors.

Random parameters can produce aesthetically interesting images, but we can produce more interesting images by learning the parameters of the CPPN.
Often this is done by evolution; here we explore backpropagating some objective function, such as a feature visualization objective.
This is easy to do because the CPPN, like the convolutional neural network, is differentiable, so the objective can also be backpropagated through the CPPN to update its parameters accordingly.
That is to say, CPPNs are a differentiable image parameterization — a general tool for parameterizing images in any neural art or visualization task.

[6](#figure-xy2rgb-diagram):
CPPNs are a differentiable image parameterization. We can use them for neural art or visualization tasks by backpropagating past the image, through the CPPN to its parameters.

Using CPPNs as an image parameterization can add an interesting artistic quality to neural art, vaguely reminiscent of light-paintings.
(Light-painting is an artistic medium where images are created by manipulating colorful light beams with prisms and mirrors; notable examples of this technique are the [work of Stephen Knapp](http://www.lightpaintings.com/). The metaphor is admittedly fragile: for example, light composition is an additive process, while CPPNs can have negative-weighted connections between layers.)
At a more theoretical level, CPPNs can be seen as constraining the compositional complexity of the resulting images.
When used to optimize a feature visualization objective, they produce distinctive images:

[7](#figure-xy2rgb-examples):
A Compositional Pattern Producing Network (CPPN) is used as a differentiable parameterization for visualizing features at different layers.

The visual quality of the generated images is heavily influenced by the architecture of the chosen CPPN.
Not only does the shape of the network (the number of layers and filters) play a role, but so do the chosen activation functions and normalization.
For example, deeper networks produce more fine-grained details than shallow ones.
We encourage readers to experiment with generating different images by changing the architecture of the CPPN; this can easily be done by changing the code in the supplementary notebook.
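As a concrete reference point, here is a minimal NumPy sketch of a CPPN forward pass. It is our own illustration rather than the notebook’s code, and the layer sizes and activation choices are arbitrary:

```python
import numpy as np

def render_cppn(weights, size=128):
    """Render an image by applying a small fully connected network to every (x, y)."""
    # Coordinate grid in [-1, 1] x [-1, 1]: each pixel is just an (x, y) input.
    xs = np.linspace(-1, 1, size)
    coords = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    h = coords
    for w in weights[:-1]:
        h = np.tanh(h @ w)                          # the activation strongly shapes the patterns
    rgb = 1.0 / (1.0 + np.exp(-(h @ weights[-1])))  # sigmoid keeps colors in [0, 1]
    return rgb.reshape(size, size, 3)

rng = np.random.default_rng(0)
dims = [2] + [24] * 7 + [3]                         # 8 layers mapping (x, y) -> (r, g, b)
weights = [rng.normal(scale=1.0 / np.sqrt(d_in), size=(d_in, d_out))
           for d_in, d_out in zip(dims[:-1], dims[1:])]
image = render_cppn(weights)                        # any resolution: just change `size`
```

Written in a differentiable framework, the same forward pass lets a feature visualization objective be backpropagated through the rendered image and into `weights`.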
The evolution of the patterns generated by the CPPN over the course of training is an artistic artifact in itself.
To maintain the light-painting metaphor, the optimization process corresponds to iteratively adjusting the directions and shapes of the beams.
Because each change has a more global effect than it would under, say, a pixel parameterization, only the major patterns are visible at the beginning of the optimization.
By iteratively adjusting the weights, our imaginary beams are positioned in such a way that fine details emerge.

[8](#figure-xy2rgb-training):
Output of CPPNs during training. *Control each video by hovering, or tapping it if you are on a mobile device.*

By playing with this metaphor, we can also create a new kind of animation that morphs one of the above images into a different one.
Intuitively, we start from one of the light-paintings and we move the beams to create a different one.
In practice, this is achieved by interpolating between the CPPN weights of the two patterns and rendering an image from each interpolated set of weights to produce the intermediate frames.
As before, changes in the parameters have a global effect and create visually appealing intermediate frames.

[9](#figure-xy2rgb-interpolation):
Interpolating CPPN weights between two learned points.

In this section we presented a parameterization that goes beyond a standard image representation.
A neural network, a CPPN in this case, can itself be used to parameterize an image that is optimized for a given objective function.
More specifically, we combined a feature-visualization objective with a CPPN parameterization to create infinite-resolution images of a distinctive visual style.

---

[4](#section-rgba)

[Generation of Semi-Transparent Patterns](#section-rgba)
--------------------------------------------------------

The neural networks used in this article were trained to receive 2D RGB images as input.
Is it possible to use the same network to synthesize artifacts that go **(4) beyond this domain**?
It turns out that we can do so by making our differentiable parameterization define a *family* of images instead of a single image, and then sampling one or a few images from that family at each optimization step.
This is important because many of the objects we’ll explore optimizing have more degrees of freedom than the images going into the network.

To be concrete, let’s consider the case of semi-transparent images. These images have, in addition to the RGB channels, an alpha channel that encodes each pixel’s opacity (in the range $[0, 1]$). In order to feed such images into a neural network trained on RGB images, we need to somehow collapse the alpha channel.
One way to achieve this is to overlay the RGBA image $I$ on top of a background image $BG$ using the standard alpha blending formula (in our experiments we use [gamma-corrected blending](https://en.wikipedia.org/wiki/Alpha_compositing#Composing_alpha_blending_with_gamma_correction)):

$$O_{rgb} = I_{rgb} \ast I_a + BG_{rgb} \ast (1 - I_a),$$

where $I_a$ is the alpha channel of the image $I$.

If we used a static background $BG$, such as black, the transparency would merely indicate pixel positions at which that background contributes directly to the optimization objective.
In fact, this is equivalent to optimizing an RGB image and making it transparent in areas where its color matches the background!
Intuitively, we’d like transparent areas to correspond to something like “the content of this area could be anything.”
Building on this intuition, we use a different random background at every optimization step.
We tried both sampling from real images and using different types of noise; as long as they were sufficiently randomized, the different distributions did not meaningfully influence the resulting optimization.
Thus, for simplicity, we use smooth 2D Gaussian noise.

[10](#figure-rgba-diagram):
Adding an alpha channel to the image parameterization allows it to represent transparency.
Transparent areas are blended with a random background at each step of the optimization.

By default, optimizing our semi-transparent image will make the image fully opaque, so that the network can always get its optimal input.
To avoid this, we need to replace our objective with one that encourages some transparency.
We find it effective to replace the original objective with

$$\text{obj}_{\text{new}} = \text{obj}_{\text{old}} \ast (1 - \text{mean}(I_a)).$$

This new objective automatically balances the original objective $\text{obj}_{\text{old}}$ against reducing the mean opacity.
If the image becomes very transparent, it will focus on the original objective. If it becomes too opaque, it will temporarily stop caring about the original objective and focus on decreasing the average opacity.
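These two pieces, the random-background compositing and the transparency-weighted objective, are compact enough to sketch directly. The snippet below is our own illustration: plain (non-gamma-corrected) blending and a uniform-noise background stand in for the gamma-corrected blending and smooth Gaussian noise described above.

```python
import numpy as np

def composite_on_random_background(rgba, rng):
    """Collapse an [H, W, 4] RGBA image to RGB by blending it over a random background."""
    rgb, alpha = rgba[..., :3], rgba[..., 3:4]
    # A fresh random background at every optimization step, so that "transparent"
    # cannot simply mean "matches one particular fixed background color".
    background = rng.uniform(size=rgb.shape)
    return rgb * alpha + background * (1.0 - alpha)

def transparency_weighted_objective(obj_old, alpha):
    """obj_new = obj_old * (1 - mean(alpha)): the original objective is rewarded
    only to the extent that the image stays partially transparent."""
    return obj_old * (1.0 - alpha.mean())

rng = np.random.default_rng(0)
rgba = rng.uniform(size=(64, 64, 4))                    # the parameterized image being optimized
rgb_input = composite_on_random_background(rgba, rng)   # what actually gets fed to the network
```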
[11](#figure-rgba-examples):
Examples of the optimization of semi-transparent images for different layers and units.

It turns out that the generation of semi-transparent images is useful in feature visualization.
Feature visualization aims to understand what neurons in a vision model are looking for, by creating images that maximally activate them.
Unfortunately, there is no way for these visualizations to distinguish the areas of an image that strongly influence a neuron’s activation from those that only marginally do so.
This issue does not occur when optimizing for the activation of entire channels, because in that case every pixel has multiple neurons that are close to centered on it.
As a consequence, the entire input image gets filled with copies of what those neurons care about strongly.

Ideally, we would like a way for our visualizations to make this distinction in importance — one natural way to represent that a part of the image doesn’t matter is for it to be transparent.
Thus, if we optimize an image with an alpha channel and encourage the overall image to be transparent, parts of the image that are unimportant according to the feature visualization objective should become transparent.

---

[5](#section-featureviz-3d)

[Efficient Texture Optimization through 3D Rendering](#section-featureviz-3d)
-----------------------------------------------------------------------------

In the previous section, we were able to use a neural network for RGB images to create a semi-transparent RGBA image.
Can we push this further, creating **(4) other kinds of objects** even further removed from the RGB input?
In this section we explore optimizing **3D objects** for a feature-visualization objective.
We use a 3D rendering process to turn them into 2D RGB images that can be fed into the network, and backpropagate through the rendering process to optimize the texture of the 3D object.

Our technique is similar to the approach that Athalye et al. used for the creation of real-world adversarial examples, in that we rely on backpropagating the objective function through randomly sampled views of the 3D model.
We differ from existing approaches to artistic texture generation in that we do not modify the geometry of the object during backpropagation.
By disentangling the generation of the texture from the positions of the mesh vertices, we can create very detailed textures for complex objects.

Before we can describe our approach, we first need to understand how a 3D object is stored and rendered on screen. The object’s geometry is usually saved as a collection of interconnected triangles called a **triangle mesh** or, simply, a mesh. To render a realistic model, a **texture** is painted over the mesh. The texture is saved as an image that is applied to the model using the so-called **UV-mapping**. Every vertex $c_i$ in the mesh is associated with a coordinate $(u_i, v_i)$ in the texture image. The model is then rendered, i.e., drawn on screen, by coloring every triangle with the region of the image that is delimited by the $(u, v)$ coordinates of its vertices.

A simple, naive way to create the 3D object’s texture would be to optimize an image the normal way and then use it as a texture to paint the object.
However, this approach generates a texture that does not take the underlying UV-mapping into account and therefore creates a variety of visual artifacts in the rendered object.
First, **seams** are visible on the rendered texture, because the optimization is unaware of the underlying UV-mapping and therefore does not optimize the texture consistently across the split patches of the texture.
Second, the generated patterns are **randomly oriented** on different parts of the object (see, e.g., the vertical and wiggly patterns), because they are not consistently oriented in the underlying UV-mapping.
Finally, the generated patterns are **inconsistently scaled**, because the UV-mapping does not enforce a consistent scale between triangles on the mesh and the triangles they map to in the texture.

[13](#figure-featureviz-3d-explanation):
3D model of the famous Stanford Bunny.
You can interact with the model by rotating and zooming. Moreover, you can unfold the object to its two-dimensional texture representation. This unfolding reveals the UV-mapping used to store the texture in the texture image. Note how the render-based optimized texture is divided into several patches, which allows for a complete and undistorted coverage of the object.

We take a different approach.
Instead of optimizing the texture directly, we optimize the texture *through* renderings of the 3D object, like those the user would eventually see.
The following diagram presents an overview of the proposed pipeline:

[14](#figure-featureviz-3d-diagram):
At every optimization step, the 3D model is rendered from a random angle aimed at the center of the object. We optimize a texture by backpropagating through the rendering process. This is possible because we know how pixels in the rendered image correspond to pixels in the texture.

We start the process by randomly initializing the texture with a Fourier parameterization.
At every training iteration we sample a random camera position, oriented towards the center of the bounding box of the object, and render the textured object as an image.
We then backpropagate the gradient of the desired objective function, i.e., the feature of interest in the neural network, to the rendered image.

However, an update of the rendered image does not correspond to an update of the texture that we aim to optimize. Hence, we need to propagate the changes further, to the object’s texture.
This propagation is easily implemented by applying a reverse UV-mapping, since for each pixel on screen we know its coordinate in the texture.
Because the texture itself is modified, the images rendered in subsequent iterations incorporate the changes applied in previous ones.

[15](#figure-featureviz-3d-examples):
Textures are generated by optimizing for a feature visualization objective function.
Seams in the textures are hardly visible and the patterns are correctly oriented.

The resulting textures are consistently optimized along the cuts, hence removing the seams and enforcing a uniform orientation on the rendered object.
Moreover, since the optimization is disentangled from the geometry of the object, the resolution of the texture can be arbitrarily high.
In the next section we will see how this framework can be reused to perform artistic style transfer onto the object’s texture.

---

[6](#section-style-transfer-3d)

[Style Transfer for Textures through 3D Rendering](#section-style-transfer-3d)
------------------------------------------------------------------------------

Now that we have established a framework for efficient backpropagation into the UV-mapped texture, we can use it to adapt existing style transfer techniques to 3D objects.
As in the 2D case, we aim to redraw the original object’s texture in the style of a user-provided image.
The following diagram presents an overview of the approach:

[16](#figure-style-transfer-3d-diagram):
At every optimization step, the 3D model is rendered from a random angle with two different textures: the original one and the learned one. The content objective aims to get neurons to fire in the same positions as they did for a random view of the 3D model with the original texture. The style objective aims to create similar patterns of neuron activation as in the style image, without regard to position. By backpropagating through the rendering process, we can optimize the texture.
The algorithm works in a similar way to the one presented in the previous section, starting from a randomly initialized texture.
At each iteration, we sample a random viewpoint oriented toward the center of the bounding box of the object and render two images of it: one with the original texture, the *content image*, and one with the texture we are currently optimizing, the *learned image*.

After the *content image* and *learned image* are rendered, we optimize for the style-transfer objective function introduced by Gatys et al. and propagate the changes back into the UV-mapped texture as described in the previous section.
The procedure is then iterated until the desired blend of content and style is obtained in the target texture.

[17](#figure-style-transfer-3d-examples):
Style transfer onto various 3D models. Note that visual landmarks in the content texture, such as eyes, show up correctly in the generated texture.

Because every view is optimized independently, the optimization is forced to try to add all of the style’s elements at every iteration.
For example, if we use Van Gogh’s “Starry Night” as the style image, stars will be added in every single view.
We found we obtain more pleasing results, such as those presented above, by introducing a sort of “memory” of the style of previous views. To this end, we maintain moving averages of the style-representing Gram matrices over the recently sampled viewpoints. On each optimization iteration we compute the style loss against those averaged matrices, instead of the ones computed for that particular view.
(We use TensorFlow’s `tf.stop_gradient` method to substitute the current Gram matrices with their moving averages on the forward pass, while still propagating the correct gradients to the current Gram matrices.
An alternative approach, such as the one employed by , would require sampling multiple viewpoints of the scene at each step, increasing memory requirements.
In contrast, our substitution trick can also be used to apply style transfer to high-resolution (>10M pixel) images on a single consumer-grade GPU.)
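A minimal sketch of this substitution trick might look as follows. This is our own illustration of the idea, assuming eager TensorFlow and statically known activation shapes; it is not the article’s exact implementation:

```python
import tensorflow as tf

def gram_matrix(features):
    """Gram matrix of a [height, width, channels] activation tensor."""
    h, w, c = features.shape                      # assumes statically known shapes
    flat = tf.reshape(features, [h * w, c])
    return tf.matmul(flat, flat, transpose_a=True) / float(h * w)

def substituted_gram(current_gram, running_avg, decay=0.95):
    """Return (value to use in the style loss, updated moving average).

    The forward value equals the moving average over recent views, but the
    gradient of the style loss still flows into the current view's Gram
    matrix, because tf.stop_gradient blocks every other path.
    """
    running_avg = decay * running_avg + (1.0 - decay) * tf.stop_gradient(current_gram)
    substituted = current_gram + tf.stop_gradient(running_avg - current_gram)
    return substituted, running_avg
```

The style loss is then computed between `substituted` and the style image’s Gram matrix, exactly as in ordinary style transfer.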
The resulting textures combine elements of the desired style, while preserving the characteristics of the original texture.
Take as an example the model created by using Van Gogh’s “Starry Night” as the style image.
The resulting texture contains the rhythmic and vigorous brush strokes that characterize Van Gogh’s work.
However, despite the style image’s primarily cold tones, the resulting fur has a warm orange undertone, preserved from the original texture.
Even more interesting is how the eyes of the bunny are preserved when different styles are transferred.
For example, when the style comes from Van Gogh’s painting, the eyes are transformed into a star-like swirl, while if Kandinsky’s work is used, they become abstract patterns that still resemble the original eyes.

![](images/printed_bunny_extended.jpg)

[18](#figure-style-transfer-3d-picture):
3D print of a style transfer of “[La grand parade sur fond rouge](https://www.wikiart.org/en/fernand-leger/the-large-one-parades-on-red-bottom-1953)” (Fernand Leger, 1953) onto the “[Stanford Bunny](http://alice.loria.fr/index.php/software/7-data/37-unwrapped-meshes.html)” by Greg Turk & Marc Levoy.

Textured models produced with the presented method can easily be used with popular 3D modeling software or game engines. To show this, we 3D printed one of the designs as a real-world physical artifact. (We used the [Full-color Sandstone](https://www.shapeways.com/materials.sandstone) material.)

---

[Conclusions](#conclusions)
---------------------------

For the creative artist or researcher, there’s a large space of ways to parameterize images for optimization.
This opens up not only dramatically different image results, but also animations and 3D objects!
We think the possibilities explored in this article only scratch the surface.
For example, one could explore extending the optimization of 3D object textures to optimizing the material or reflectance — or even go in the direction of Kato et al.
and optimize the mesh vertex positions.\n \n\n\n\n This article focused on *differentiable* image parameterizations, because they are easy to optimize and cover a wide range of possible applications.\n But it’s certainly possible to optimize image parameterizations that aren’t differentiable, or are only partly differentiable, using reinforcement learning or evolutionary strategies .\n Using non-differentiable parameterizations could open up exciting possibilities for image or scene generation.", "date_published": "2018-07-25T20:00:00Z", "authors": ["Alexander Mordvintsev", "Nicola Pezzotti", "Ludwig Schubert", "Chris Olah"], "summaries": ["A powerful, under-explored tool for neural network visualizations and art."], "doi": "10.23915/distill.00012", "journal_ref": "distill-pub", "bibliography": [{"link": "https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html", "title": "Inceptionism: Going deeper into neural networks"}, {"link": "http://arxiv.org/pdf/1508.06576.pdf", "title": "A Neural Algorithm of Artistic Style"}, {"link": "https://distill.pub/2017/feature-visualization", "title": "Feature Visualization"}, {"link": "http://arxiv.org/pdf/1412.1897.pdf", "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images"}, {"link": "https://arxiv.org/pdf/1707.07397.pdf", "title": "Synthesizing robust adversarial examples"}, {"link": "http://arxiv.org/pdf/1412.0233.pdf", "title": "The loss surfaces of multilayer networks"}, {"link": "http://arxiv.org/pdf/1409.1556.pdf", "title": "Very deep convolutional networks for large-scale image recognition"}, {"link": "http://distill.pub/2016/deconv-checkerboard/", "title": "Deconvolution and checkerboard artifacts"}, {"link": "http://arxiv.org/pdf/1711.10925.pdf", "title": "Deep Image Prior"}, {"link": "http://eplex.cs.ucf.edu/papers/stanley_gpem07.pdf", "title": "Compositional pattern producing networks: A novel abstraction of development"}, {"link": "http://blog.otoro.net/2015/06/19/neural-network-generative-art/", "title": "Neural Network Generative Art in Javascript"}, {"link": "https://cs.stanford.edu/people/karpathy/convnetjs/demo/image_regression.html", "title": "Image Regression"}, {"link": "http://blog.otoro.net/2016/04/01/generating-large-images-from-latent-vectors", "title": "Generating Large Images from Latent Vectors"}, {"link": "http://www.karlsims.com/papers/siggraph91.html", "title": "Artificial Evolution for Computer Graphics"}, {"link": "http://arxiv.org/pdf/1711.07566.pdf", "title": "Neural 3D Mesh Renderer"}, {"link": "http://graphics.stanford.edu/data/3Dscanrep", "title": "The Stanford Bunny"}, {"link": "http://www.jmlr.org/papers/volume15/wierstra14a/wierstra14a.pdf", "title": "Natural evolution strategies."}, {"link": "https://arxiv.org/pdf/1703.03864", "title": "Evolution strategies as a scalable alternative to reinforcement learning"}, {"link": "https://arxiv.org/pdf/1603.04467.pdf", "title": "Tensorflow: Large-scale machine learning on heterogeneous distributed systems"}]} {"id": "60c80cf91136c14dd9381b0cec55967f", "title": "Feature-wise transformations", "url": "https://distill.pub/2018/feature-wise-transformations", "source": "distill", "source_type": "blog", "text": "Many real-world problems require integrating multiple sources of information.\n Sometimes these problems involve multiple, distinct modalities of\n information — vision, language, audio, etc. 
— as is required\n to understand a scene in a movie or answer a question about an image.\n Other times, these problems involve multiple sources of the same\n kind of input, i.e. when summarizing several documents or drawing one\n image in the style of another.\n \n\n\n\n Video and audio must be understood in the context of each other to understand the scene. Credit: still frame from the movie Charade. “Are you sure there’s no mistake?” An image needs to be processed in the context of a question being asked. Credit: image-question pair from the CLEVR dataset. Are there an equalnumber of largethings and metalspheres?\n\n\n When approaching such problems, it often makes sense to process one source\n of information *in the context of* another; for instance, in the\n right example above, one can extract meaning from the image in the context\n of the question. In machine learning, we often refer to this context-based\n processing as *conditioning*: the computation carried out by a model\n is conditioned or *modulated* by information extracted from an\n auxiliary input.\n \n\n\n\n Finding an effective way to condition on or fuse sources of information\n is an open research problem, and\n \n in this article, we concentrate on a specific family of approaches we call\n *feature-wise transformations*.\n \n We will examine the use of feature-wise transformations in many neural network\n architectures to solve a surprisingly large and diverse set of problems;\n \n their success, we will argue, is due to being flexible enough to learn an\n effective representation of the conditioning input in varied settings.\n In the language of multi-task learning, where the conditioning signal is\n taken to be a task description, feature-wise transformations\n learn a task representation which allows them to capture and leverage the\n relationship between multiple sources of information, even in remarkably\n different problem settings.\n \n\n\n\n\n---\n\n\nFeature-wise transformations\n----------------------------\n\n\n\n To motivate feature-wise transformations, we start with a basic example,\n where the two inputs are images and category labels, respectively. For the\n purpose of this example, we are interested in building a generative model of\n images of various classes (puppy, boat, airplane, etc.). The model takes as\n input a class and a source of random noise (e.g., a vector sampled from a\n normal distribution) and outputs an image sample for the requested class.\n \n\n\n\n A decoder-basedgenerative model maps a source of noise to a sample in the context of the “puppy” class. noise“puppy”decoder\n\n\n Our first instinct might be to build a separate model for each\n class. For a small number of classes this approach is not too bad a solution,\n but for thousands of classes, we quickly run into scaling issues, as the number\n of parameters to store and train grows with the number of classes.\n We are also missing out on the opportunity to leverage commonalities between\n classes; for instance, different types of dogs (puppy, terrier, dalmatian,\n etc.) share visual traits and are likely to share computation when\n mapping from the abstract noise vector to the output image.\n \n\n\n\n Now let’s imagine that, in addition to the various classes, we also need to\n model attributes like size or color. In this case, we can’t\n reasonably expect to train a separate network for *each* possible\n conditioning combination! 
Let’s examine a few simple options.\n \n\n\n\n A quick fix would be to concatenate a representation of the conditioning\n information to the noise vector and treat the result as the model’s input.\n This solution is quite parameter-efficient, as we only need to increase\n the size of the first layer’s weight matrix. However, this approach makes the implicit\n assumption that the input is where the model needs to use the conditioning information.\n Maybe this assumption is correct, or maybe it’s not; perhaps the\n model does not need to incorporate the conditioning information until late\n into the generation process (e.g., right before generating the final pixel\n output when conditioning on texture). In this case, we would be forcing the model to\n carry this information around unaltered for many layers.\n \n\n\n\n Because this operation is cheap, we might as well avoid making any such\n assumptions and concatenate the conditioning representation to the input of\n *all* layers in the network. Let’s call this approach\n *concatenation-based conditioning*.\n \n\n\n\nConcatenation-based conditioning simply concatenates the conditioning representation to the input. The result is passed through a linear layer to produce the output. input conditioningrepresentationconcatenatelinear output \n\n\n Another efficient way to integrate conditioning information into the network\n is via *conditional biasing*, namely, by adding a *bias* to\n the hidden layers based on the conditioning representation.\n \n\n\n\nConditional biasing first maps the conditioning representation to a bias vector. The bias vector is then added to the input. inputoutputconditioningrepresentationlinear\n\n\n Interestingly, conditional biasing can be thought of as another way to\n implement concatenation-based conditioning. Consider a fully-connected\n linear layer applied to the concatenation of an input\n x\\mathbf{x}x and a conditioning representation\n z\\mathbf{z}z:\n \n The same argument applies to convolutional networks, provided we ignore\n the border effects due to zero-padding.\n \n\n\n\n\nConcatenation-based conditioning is equivalent to conditional biasing. We can decompose the matrix- vector product into two matrix- vector subproducts. We can then add the resulting two vectors. The z-dependent vector is a conditional bias. Wxzxzconditional bias\n\n\n Yet another efficient way to integrate class information into the network is\n via *conditional scaling*, i.e., scaling hidden layers\n based on the conditioning representation.\n \n\n\n\nConditional scaling first maps the conditioning representation to a scaling vector. The scaling vector is then multiplied with the input. inputoutputconditioningrepresentationlinear\n\n\n A special instance of conditional scaling is feature-wise sigmoidal gating:\n we scale each feature by a value between 000 and\n 111 (enforced by applying the logistic function), as a\n function of the conditioning representation. Intuitively, this gating allows\n the conditioning information to select which features are passed forward\n and which are zeroed out.\n \n\n\n\n Given that both additive and multiplicative interactions seem natural and\n intuitive, which approach should we pick? One argument in favor of\n *multiplicative* interactions is that they are useful in learning\n relationships between inputs, as these interactions naturally identify\n “matches”: multiplying elements that agree in sign yields larger values than\n multiplying elements that disagree. 
This property is why dot products are\n often used to determine how similar two vectors are.\n \n Multiplicative interactions alone have had a history of success in various\n domains — see [Bibliographic Notes](#bibliographic-notes).\n \n One argument in favor of *additive* interactions is that they are\n more natural for applications that are less strongly dependent on the\n joint values of two inputs, like feature aggregation or feature detection\n (i.e., checking if a feature is present in either of two inputs).\n \n\n\n\n In the spirit of making as few assumptions about the problem as possible,\n we may as well combine *both* into a\n conditional *affine transformation*.\n \n An affine transformation is a transformation of the form\n y=m∗x+by = m \\* x + by=m∗x+b.\n \n\n\n\n\n All methods outlined above share the common trait that they act at the\n *feature* level; in other words, they leverage *feature-wise*\n interactions between the conditioning representation and the conditioned\n network. It is certainly possible to use more complex interactions,\n but feature-wise interactions often strike a happy compromise between\n effectiveness and efficiency: the number of scaling and/or shifting\n coefficients to predict scales linearly with the number of features in the\n network. Also, in practice, feature-wise transformations (often compounded\n across multiple layers) frequently have enough capacity to model complex\n phenomenon in various settings.\n \n\n\n\n Lastly, these transformations only enforce a limited inductive bias and\n remain domain-agnostic. This quality can be a downside, as some problems may\n be easier to solve with a stronger inductive bias. However, it is this\n characteristic which also enables these transformations to be so widely\n effective across problem domains, as we will later review.\n \n\n\n### Nomenclature\n\n\n\n To continue the discussion on feature-wise transformations we need to\n abstract away the distinction between multiplicative and additive\n interactions. Without losing generality, let’s focus on feature-wise affine\n transformations, and let’s adopt the nomenclature of Perez et al.\n , which formalizes conditional affine\n transformations under the acronym *FiLM*, for Feature-wise Linear\n Modulation.\n \n Strictly speaking, *linear* is a misnomer, as we allow biasing, but\n we hope the more rigorous-minded reader will forgive us for the sake of a\n better-sounding acronym.\n \n\n\n\n\n We say that a neural network is modulated using FiLM, or *FiLM-ed*,\n after inserting *FiLM layers* into its architecture. These layers are\n parametrized by some form of conditioning information, and the mapping from\n conditioning information to FiLM parameters (i.e., the shifting and scaling\n coefficients) is called the *FiLM generator*.\n In other words, the FiLM generator predicts the parameters of the FiLM\n layers based on some auxiliary input.\n Note that the FiLM parameters are parameters in one network but predictions\n from another network, so they aren’t learnable parameters with fixed\n weights as in the fully traditional sense.\n For simplicity, you can assume that the FiLM generator outputs the\n concatenation of all FiLM parameters for the network architecture.\n \n\n\n\n The FiLM generator processes the conditioning information and produces parameters that describe how the target network should alter its computation. Here, the FiLM-ed network’s computation is conditioned by two FiLM layers. 
As the name implies, a FiLM layer applies a feature-wise affine transformation to its input. By *feature-wise*, we mean that scaling and shifting are applied element-wise, or in the case of convolutional networks, feature-map-wise.
To expand a little more on the convolutional case, feature maps can be thought of as the same feature detector being evaluated at different spatial locations, in which case it makes sense to apply the same affine transformation to all spatial locations.
In other words, assuming $\mathbf{x}$ is a FiLM layer’s input, $\mathbf{z}$ is a conditioning input, and $\gamma$ and $\beta$ are $\mathbf{z}$-dependent scaling and shifting vectors,

$$\textrm{FiLM}(\mathbf{x}) = \gamma(\mathbf{z}) \odot \mathbf{x} + \beta(\mathbf{z}).$$

You can interact with the following fully-connected and convolutional FiLM layers to get an intuition of the sort of modulation they allow:

In a fully-connected network, FiLM applies a different affine transformation to each feature: each feature is scaled by the corresponding $\gamma$ parameter and then shifted by the corresponding $\beta$ parameter. In a convolutional network, FiLM applies a different affine transformation to each channel, consistent across spatial locations.

In addition to being a good abstraction of conditional feature-wise transformations, the FiLM nomenclature lends itself well to the notion of a *task representation*. From the perspective of multi-task learning, we can view the conditioning signal as the task description. More specifically, we can view the concatenation of all FiLM scaling and shifting coefficients as both an instruction on *how to modulate* the conditioned network and a *representation* of the task at hand. We will explore and illustrate this idea later on.

---

Feature-wise transformations in the literature
----------------------------------------------

Feature-wise transformations find their way into methods applied to many problem settings, but because of their simplicity, their effectiveness is seldom highlighted; attention tends to go to each paper’s other novel contributions. Below are a few notable examples of feature-wise transformations in the literature, grouped by application domain. The diversity of these applications underscores the flexible, general-purpose ability of feature-wise interactions to learn effective task representations.

Visual question-answering+

Perez et al. use FiLM layers to build a visual reasoning model trained on the CLEVR dataset to answer multi-step, compositional questions about synthetic images.

The linguistic pipeline acts as the FiLM generator. FiLM layers added to each residual block modulate the visual pipeline.

The model’s linguistic pipeline is a FiLM generator which extracts a question representation that is linearly mapped to FiLM parameter values. Using these values, FiLM layers inserted within each residual block condition the visual pipeline. The model is trained end-to-end on image-question-answer triples.
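For reference, the modulation these FiLM layers perform is only a few lines of code. The sketch below is our own minimal illustration, not the authors’ implementation; the names, shapes, and the single linear layer standing in for the FiLM generator are all assumptions made for the example:

```python
import numpy as np

def film_generator(z, n_features, weights):
    """Map a conditioning vector z to per-feature scaling (gamma) and shifting (beta)."""
    out = z @ weights                       # a single linear layer stands in for the generator
    return out[:n_features], out[n_features:]

def film_layer(x, gamma, beta):
    """FiLM(x) = gamma * x + beta, applied feature-map-wise to an [H, W, C] activation."""
    return gamma * x + beta                 # broadcasting applies the same affine map everywhere

rng = np.random.default_rng(0)
features = rng.normal(size=(8, 8, 16))      # activations with 16 feature maps
z = rng.normal(size=32)                     # e.g. a question embedding
w = 0.1 * rng.normal(size=(32, 2 * 16))     # parameters of the FiLM generator
gamma, beta = film_generator(z, n_features=16, weights=w)
modulated = film_layer(features, gamma, beta)   # same shape as `features`
```

In the visual reasoning model above, the generator is a recurrent network over the question rather than a single linear layer, and the FiLM layers sit inside residual blocks, but the modulation itself is exactly this feature-wise affine transformation.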
Strub et al.\n later on improved on the model by\n using an attention mechanism to alternate between attending to the language\n input and generating FiLM parameters layer by layer. This approach was\n better able to scale to settings with longer input sequences such as\n dialogue and was evaluated on the GuessWhat?! \n and ReferIt datasets.\n \n\n\n\n de Vries et al. leverage FiLM\n to condition a pre-trained network. Their model’s linguistic pipeline\n modulates the visual pipeline via conditional batch normalization,\n which can be viewed as a special case of FiLM. The model learns to answer natural language questions about\n real-world images on the GuessWhat?! \n and VQAv1 datasets.\n \n\n\n\n The linguistic pipeline acts as the FiLM generator and also directly passes the question representation to the rest of the network. FiLM layers modulate the pre- trained visual pipeline by making the batch normalization parameters query-dependent. sub-networknormalizationFiLM layersub-networknormalizationFiLM layer… conditional batch normalization conditional batch normalization MLPFiLM parameters…IsLSTMtheLSTMumbrellaLSTMupsideLSTMdownLSTM\n\n\n The visual pipeline consists of a pre-trained residual network that is\n fixed throughout training. The linguistic pipeline manipulates the visual\n pipeline by perturbing the residual network’s batch normalization\n parameters, which re-scale and re-shift feature maps after activations\n have been normalized to have zero mean and unit variance. As hinted\n earlier, conditional batch normalization can be viewed as an instance of\n FiLM where the post-normalization feature-wise affine transformation is\n replaced with a FiLM layer.\n \n\n\nStyle transfer+\n\n Dumoulin et al. use\n feature-wise affine transformations — in the form of conditional\n instance normalization layers — to condition a style transfer\n network on a chosen style image. Like conditional batch normalization\n discussed in the previous subsection,\n conditional instance normalization can be seen as an instance of FiLM\n where a FiLM layer replaces the post-normalization feature-wise affine\n transformation. For style transfer, the network models each style as a separate set of\n instance normalization parameters, and it applies normalization with these\n style-specific parameters.\n \n\n\n\n The FiLM generator predicts parameters describing the target style. The style transfer network is conditioned by making the instance normalization parameters style-dependent. FiLM generatorFiLM parameterssub-networknormalizationFiLM layersub-networknormalizationFiLM layer… conditional instance normalization conditional instance normalization \n\n\n Dumoulin et al. use a simple\n embedding lookup to produce instance normalization parameters, while\n Ghiasi et al. further\n introduce a *style prediction network*, trained jointly with the\n style transfer network to predict the conditioning parameters directly from\n a given style image. In this article we opt to use the FiLM nomenclature\n because it is decoupled from normalization operations, but the FiLM\n layers used by Perez et al. were\n themselves heavily inspired by the conditional normalization layers used\n by Dumoulin et al. .\n \n\n\n\n Yang et al. use a related\n architecture for video object segmentation — the task of segmenting a\n particular object throughout a video given that object’s segmentation in the\n first frame. 
Their model conditions an image segmentation network over a\n video frame on the provided first frame segmentation using feature-wise\n scaling factors, as well as on the previous frame using position-wise\n biases.\n \n\n\n\n So far, the models we covered have two sub-networks: a primary\n network in which feature-wise transformations are applied and a secondary\n network which outputs parameters for these transformations. However, this\n distinction between *FiLM-ed network* and *FiLM generator*\n is not strictly necessary. As an example, Huang and Belongie\n propose an alternative\n style transfer network that uses adaptive instance normalization layers,\n which compute normalization parameters using a simple heuristic.\n \n\n\n\n The model processes content and style images up to the adaptive instance normalization layer. FiLM parameters are computed as the spatial mean and standard deviation statistics of the style feature maps. The FiLM-ed feature maps are fed to the remainder of the network to produce the stylized image. sub-networksub-networkγ, β acrossspatial axesFiLMparametersnormalizationFiLM layer adaptive instance normalization sub-network\n\n\n Adaptive instance normalization can be interpreted as inserting a FiLM\n layer midway through the model. However, rather than relying\n on a secondary network to predict the FiLM parameters from the style\n image, the main network itself is used to extract the style features\n used to compute FiLM parameters. Therefore, the model can be seen as\n *both* the FiLM-ed network and the FiLM generator.\n \n\n\nImage recognition+\n\n As discussed in previous subsections, there is nothing preventing us from considering a\n neural network’s activations *themselves* as conditioning\n information. This idea gives rise to\n *self-conditioned* models.\n \n\n\n\n The FiLM generator predicts FiLM parameters conditioned on the network’s internal activations. An arbitrary input vector (or feature map) modulates downstream activations. inputsub-networkFiLM layerFiLM generatoroutput\n\n\n Highway Networks are a prime\n example of applying this self-conditioning principle. They take inspiration\n from the LSTMs’ heavy use of\n feature-wise sigmoidal gating in their input, forget, and output gates to\n regulate information flow:\n \n\n\n\ninputsub-networksigmoidal layer1 - xoutput\n\n\n The ImageNet 2017 winning model also\n employs feature-wise sigmoidal gating in a self-conditioning manner, as a\n way to “recalibrate” a layer’s activations conditioned on themselves.\n \n\n\n\n The squeeze-and-excitation block uses sigmoidal gating. First, the network maps input feature maps to a gating vector. The gating vector is then multiplied with the input feature maps. inputoutputglobal poolingReLU layersigmoidal layer\n\nNatural language processing+\n\n For statistical language modeling (i.e., predicting the next word\n in a sentence), the LSTM \n constitutes a popular class of recurrent network architectures. The LSTM\n relies heavily on feature-wise sigmoidal gating to control the\n information flow in and out of the memory or context cell\n c\\mathbf{c}c, based on the hidden states\n h\\mathbf{h}h and inputs x\\mathbf{x}x at\n every timestep t\\mathbf{t}t.\n \n\n\n\nct-1tanhcthtsigmoidsigmoidtanhsigmoidlinearlinearlinearlinearht-1xtconcatenate\n\n\n Also in the domain of language modeling, Dauphin et al. 
use sigmoidal\n gating in their proposed *gated linear unit*, which uses half of the\n input features to apply feature-wise sigmoidal gating to the other half.\n Gehring et al. adopt this\n architectural feature, introducing a fast, parallelizable model for machine\n translation in the form of a fully convolutional network.\n \n\n\n\n The gated linear unit activation function uses sigmoidal gating. Half of the input features go through a sigmoid function to produce a gating vector. The gating vector is then multiplied with the second half of the features. inputsigmoidoutput\n\n\n The Gated-Attention Reader \n uses feature-wise scaling, extracting information\n from text by conditioning a document-reading network on a query. Its\n architecture consists of multiple Gated-Attention modules, which involve\n element-wise multiplications between document representation tokens and\n token-specific query representations extracted via soft attention on the\n query representation tokens.\n \n\n\n\n Dhingra et al. use conditional scaling to integrate query information into a document processing network. Applying soft attention to the query representation tokens produces the scaling vector. The scaling vector is then multiplied with the input document representation tokendocumentrepresentationtokenoutputtokenqueryrepresentationtokenssoft attention\n\nReinforcement learning+\n\n The Gated-Attention architecture \n uses feature-wise sigmoidal gating to fuse linguistic and visual\n information in an agent trained to follow simple “go-to” language\n instructions in the VizDoom 3D\n environment.\n \n\n\n\n Chaplot et al. use sigmoidal gating as a multimodal fusion mechanism. An instruction representation is mapped to a scaling vector via a sigmoid layer. The scaling vector is then multiplied with the input feature maps. A policy network uses the result to decide the next action. inputoutputinstructionrepresentationsigmoid\n\n\n Bahdanau et al. use FiLM\n layers to condition Neural Module Network\n and LSTM -based policies to follow\n basic, compositional language instructions (arranging objects and going\n to particular locations) in a 2D grid world. They train this policy\n in an adversarial manner using rewards from another FiLM-based network,\n trained to discriminate between ground-truth examples of achieved\n instruction states and failed policy trajectories states.\n \n\n\n\n Outside instruction-following, Kirkpatrick et al.\n also use\n game-specific scaling and biasing to condition a shared policy network\n trained to play 10 different Atari games.\n \n\n\nGenerative modeling+\n\n The conditional variant of DCGAN ,\n a well-recognized network architecture for generative adversarial networks\n , uses concatenation-based\n conditioning. The class label is broadcasted as a feature map and then\n concatenated to the input of convolutional and transposed convolutional\n layers in the discriminator and generator networks.\n \n\n\n\nConcatenation-based conditioning is used in the class-conditional DCGAN model. Each convolutional layer is concatenated with the broadcased label along the channel axis. The resulting stack of feature maps is then convolved to produce the conditioned output. 
inputconcatenateinputclass labelconvolutionoutputbroadcastclass label\n\n\n For convolutional layers, concatenation-based conditioning requires the\n network to learn redundant convolutional parameters to interpret each\n constant, conditioning feature map; as a result, directly applying a\n conditional bias is more parameter efficient, but the two approaches are\n still mathematically equivalent.\n \n\n\n\n PixelCNN \n and WaveNet  — two recent\n advances in autoregressive, generative modeling of images and audio,\n respectively — use conditional biasing. The simplest form of\n conditioning in PixelCNN adds feature-wise biases to all convolutional layer\n outputs. In FiLM parlance, this operation is equivalent to inserting FiLM\n layers after each convolutional layer and setting the scaling coefficients\n to a constant value of 1.\n \n The authors also describe a location-dependent biasing scheme which\n cannot be expressed in terms of FiLM layers due to the absence of the\n feature-wise property.\n \n\n\n\n\n PixelCNN uses conditional biasing. The model first maps a high-level image description to a bias vector. Then, it adds the bias vector to the input stack of feature maps to condition convolutional layers. inputoutputimagedescriptionlinear\n\n\n WaveNet describes two ways in which conditional biasing allows external\n information to modulate the audio or speech generation process based on\n conditioning input:\n \n\n\n1. **Global conditioning** applies the same conditional bias\n to the whole generated sequence and is used e.g. to condition on speaker\n identity.\n2. **Local conditioning** applies a conditional bias which\n varies across time steps of the generated sequence and is used e.g. to\n let linguistic features in a text-to-speech model influence which sounds\n are produced.\n\n\n\n As in PixelCNN, conditioning in WaveNet can be viewed as inserting FiLM\n layers after each convolutional layer. The main difference lies in how\n the FiLM-generating network is defined: global conditioning\n expresses the FiLM-generating network as an embedding lookup which is\n broadcasted to the whole time series, whereas local conditioning expresses\n it as a mapping from an input sequence of conditioning information to an\n output sequence of FiLM parameters.\n \n\n\nSpeech recognition+\n\n Kim et al. modulate a deep\n bidirectional LSTM using a form\n of conditional normalization. As discussed in the\n *Visual question-answering* and *Style transfer* subsections,\n conditional normalization can be seen as an instance of FiLM where\n the post-normalization feature-wise affine transformation is replaced\n with a FiLM layer.\n \n\n\n\n Kim et al. achieve speaker adaptation by adapting the usual LSTM architecture to condition its various gates on an utterance summarization. ct-1tanhcthtsigmoidsigmoidtanhsigmoidht-1 utterance summarization xt Each gate uses FiLM to condition on the utterancesummarization. linearnormalizationFiLMlinearnormalizationFiLMht-1xtFiLM generator utterance summarization \n\n\n The key difference here is that the conditioning signal does not come from\n an external source but rather from utterance\n summarization feature vectors extracted in each layer to adapt the model.\n \n\n\nDomain adaptation and few-shot learning+\n\n For domain adaptation, Li et al. \n find it effective to update the per-channel batch normalization\n statistics (mean and variance) of a network trained on one domain with that\n network’s statistics in a new, target domain. 
As discussed in the\n *Style transfer* subsection, this operation is akin to using the network as\n both the FiLM generator and the FiLM-ed network. Notably, this approach,\n along with Adaptive Instance Normalization, has the particular advantage of\n not requiring any extra trainable parameters.\n \n\n\n\n For few-shot learning, Oreshkin et al.\n explore the use of FiLM layers to\n provide more robustness to variations in the input distribution across\n few-shot learning episodes. The training set for a given episode is used to\n produce FiLM parameters which modulate the feature extractor used in a\n Prototypical Networks \n meta-training procedure.\n \n\n\n\n\n---\n\n\nRelated ideas\n-------------\n\n\n\n Aside from methods which make direct use of feature-wise transformations,\n the FiLM framework connects more broadly with the following methods and\n concepts.\n \n\n\n\nexpand all\n\nZero-shot learning+\n\n The idea of learning a task representation shares a strong connection with\n zero-shot learning approaches. In zero-shot learning, semantic task\n embeddings may be learned from external information and then leveraged to\n make predictions about classes without training examples. For instance, to\n generalize to unseen object categories for image classification, one may\n construct semantic task embeddings from text-only descriptions and exploit\n objects’ text-based relationships to make predictions for unseen image\n categories. Frome et al. , Socher et\n al. , and Norouzi et al.\n are a few notable exemplars\n of this idea.\n \n\n\nHyperNetworks+\n\n The notion of a secondary network predicting the parameters of a primary\n network is also well exemplified by HyperNetworks , which predict weights for entire layers\n (e.g., a recurrent neural network layer). From this perspective, the FiLM\n generator is a specialized HyperNetwork that predicts the FiLM parameters of\n the FiLM-ed network. The main distinction between the two resides in the\n number and specificity of predicted parameters: FiLM requires predicting far\n fewer parameters than Hypernetworks, but also has less modulation potential.\n The ideal trade-off between a conditioning mechanism’s capacity,\n regularization, and computational complexity is still an ongoing area of\n investigation, and many proposed approaches lie on the spectrum between FiLM\n and HyperNetworks (see [Bibliographic Notes](#bibliographic-notes)).\n \n\n\nAttention+\n\n Some parallels can be drawn between attention and FiLM, but the two operate\n in different ways which are important to disambiguate.\n \n\n\n\nAttention computes a probability distribution over locations. Attention pools over locations. Attention summarizes the input into a vector. FiLM computes a scaling vector applied to the feature axis. FiLM conserves input dimensions. αΣ…(β omitted for clarity)γ\n\n\n This difference stems from distinct intuitions underlying attention and\n FiLM: the former assumes that specific spatial locations or time steps\n contain the most useful information, whereas the latter assumes that\n specific features or feature maps contain the most useful information.\n \n\n\nBilinear transformations+\n\n With a little bit of stretching, FiLM can be seen as a special case of a\n bilinear transformation\n with low-rank weight\n matrices. 
A bilinear transformation defines the relationship between two inputs $\mathbf{x}$ and $\mathbf{z}$ and the $k^{th}$ output feature $y_k$ as

$$y_k = \mathbf{x}^T W_k \mathbf{z}.$$

Note that for each output feature $y_k$ we have a separate matrix $W_k$, so the full set of weights forms a multi-dimensional array.

*Each element $y_k$ of the output vector $\mathbf{y}$ is the result of a distinct vector-matrix-vector product. This enables multiplicative interactions between any pair of elements of $\mathbf{x}$ and $\mathbf{z}$.*

If we view $\mathbf{z}$ as the concatenation of the scaling and shifting vectors $\gamma$ and $\beta$, and if we augment the input $\mathbf{x}$ with a 1-valued feature (as is commonly done to turn a linear transformation into an affine transformation), we can represent FiLM using a bilinear transformation by zeroing out the appropriate weight matrix entries:

*FiLM computes elements of the output vector as $y_k = \gamma_k x_k + \beta_k$. This can be expressed as a dot product between a 1-augmented $\mathbf{x}$ and a sparse vector containing $\gamma_k$ and $\beta_k$. (Shaded cells have a zero value.) The sparse vector is given by multiplying a low-rank weight matrix with the concatenation of $\gamma$ and $\beta$. (Shaded cells again have a zero value.)*

For some applications of bilinear transformations, see the [Bibliographic Notes](#bibliographic-notes).

---

Properties of the learned task representation
---------------------------------------------

As hinted earlier, in adopting the FiLM perspective we implicitly introduce a notion of *task representation*: each task — be it a question about an image or a painting style to imitate — elicits a different set of FiLM parameters via the FiLM generator which can be understood as its representation in terms of how to modulate the FiLM-ed network. To help better understand the properties of this representation, let’s focus on two FiLM-ed models used in fairly different problem settings:

* The visual reasoning model of Perez et al., which uses FiLM to modulate a visual processing pipeline based off an input question. (The linguistic pipeline acts as the FiLM generator; each residual block of the visual pipeline has a FiLM layer added to it.)
* The artistic style transfer model of Ghiasi et al., which uses FiLM to modulate a feed-forward style transfer network based off an input style image. (The FiLM generator predicts parameters describing the target style; the style transfer network is conditioned by making the instance normalization parameters style-dependent.)

As a starting point, can we discern any pattern in the FiLM parameters as a function of the task description? One way to visualize the FiLM parameter space is to plot $\gamma$ against $\beta$, with each point corresponding to a specific task description and a specific feature map.
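For concreteness, here is a minimal matplotlib sketch of this kind of scatter plot. The `gamma` and `beta` arrays are placeholders standing in for FiLM parameters collected from a FiLM generator (one row per task description, one column per feature map), and placing β on the x-axis and γ on the y-axis is an arbitrary choice rather than the convention of the figures below.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder FiLM parameters: one row per task description, one column per feature map.
rng = np.random.default_rng(0)
gamma = 1.0 + 0.3 * rng.standard_normal((256, 16))
beta = 0.2 * rng.standard_normal((256, 16))

num_maps = gamma.shape[1]
colors = plt.cm.tab20(np.linspace(0, 1, num_maps))
for m in range(num_maps):
    # One color per feature map; each point is one task description.
    plt.scatter(beta[:, m], gamma[:, m], s=4, color=colors[m])
plt.xlabel("β")
plt.ylabel("γ")
plt.title("FiLM parameters, colored by feature map")
plt.show()
```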
If we color-code each point according to the feature map it belongs to we observe the following:

*FiLM parameters for 256 tasks and for 16 feature maps, chosen randomly. Left: visual reasoning model; right: style transfer model. Each panel plots $\gamma$ against $\beta$, colored by feature map.*

The plots above allow us to make several interesting observations. First, FiLM parameters cluster by feature map in parameter space, and the cluster locations are not uniform across feature maps. The orientation of these clusters is also not uniform across feature maps: the main axis of variation can be $\gamma$-aligned, $\beta$-aligned, or diagonal at varying angles. These findings suggest that the affine transformation in FiLM layers is not modulated in a single, consistent way, i.e., using $\gamma$ only, $\beta$ only, or $\gamma$ and $\beta$ together in some specific way. Maybe this is due to the affine transformation being overspecified, or maybe this shows that FiLM layers can be used to perform modulation operations in several distinct ways.

Nevertheless, the fact that these parameter clusters are often somewhat “dense” may help explain why the style transfer model of Ghiasi et al. is able to perform style interpolations: any convex combination of FiLM parameters is likely to correspond to a meaningful parametrization of the FiLM-ed network.

*Style interpolation: pastiches produced for a fixed content image as the FiLM parameters are interpolated (with weight w) between Style 1 and Style 2.*

To some extent, the notion of interpolating between tasks using FiLM parameters can be applied even in the visual question-answering setting. Using the model trained in Perez et al., we interpolated between the model’s FiLM parameters for two pairs of CLEVR questions. Here we visualize the input locations responsible for the globally max-pooled features fed to the visual pipeline’s output classifier:

*Interpolations between “What shape is the red thing left of the sphere?” and “What shape is the red thing right of the sphere?”, and between “How many brown things are there?” and “How many yellow things are there?”*

The network seems to be softly switching where in the image it is looking, based on the task description. It is quite interesting that these semantically meaningful interpolation behaviors emerge, as the network has not been trained to act this way.

Despite these similarities across problem settings, we also observe qualitative differences in the way in which FiLM parameters cluster as a function of the task description. Unlike the style transfer model, the visual reasoning model sometimes exhibits several FiLM parameter sub-clusters for a given feature map.

*FiLM parameters of the visual reasoning model for 256 questions chosen randomly. Left: feature map 26 of the first FiLM layer; right: feature map 76 of the first FiLM layer.*

At the very least, this may indicate that FiLM learns to operate in ways that are problem-specific, and that we should not expect to find a unified and problem-independent explanation for FiLM’s success in modulating FiLM-ed networks. Perhaps the compositional or discrete nature of visual reasoning requires the model to implement several well-defined modes of operation which are less necessary for style transfer.

Focusing on individual feature maps which exhibit sub-clusters, we can try to infer how questions regroup by color-coding the scatter plots by question type.

*FiLM parameters of the visual reasoning model for 256 questions chosen randomly, colored by question type. Left: feature map 26 of the first FiLM layer; right: feature map 76 of the first FiLM layer.*

Sometimes a clear pattern emerges, as in the right plot, where color-related questions concentrate in the top-right cluster — we observe that questions either are of type *Query color* or *Equal color*, or contain concepts related to color. Sometimes it is harder to draw a conclusion, as in the left plot, where question types are scattered across the three clusters.

In cases where question types alone cannot explain the clustering of the FiLM parameters, we can turn to the conditioning content itself to gain an understanding of the mechanism at play. Let’s take a look at two more plots: one for feature map 26 as in the previous figure, and another for a different feature map, also exhibiting several subclusters. This time we regroup points by the words which appear in their associated question.

*FiLM parameters of the visual reasoning model for 256 questions chosen randomly, colored by words in the question. Feature map 26 (left) suggests an object position separation mechanism; feature map 92 (right) suggests an object material separation mechanism.*

In the left plot, the left subcluster corresponds to questions involving objects positioned *in front* of other objects, while the right subcluster corresponds to questions involving objects positioned *behind* other objects. In the right plot we see some evidence of separation based on object material: the left subcluster corresponds to questions involving *matte* and *rubber* objects, while the right subcluster contains questions about *shiny* or *metallic* objects.

The presence of sub-clusters in the visual reasoning model also suggests that question interpolations may not always work reliably, but these sub-clusters don’t preclude one from performing arithmetic on the question representations, as Perez et al. report.

*The model incorrectly answers a question which involves an unseen combination of concepts: for Q, “What is the blue big cylinder made of?”, it answers “Rubber” ✘. Rather than using the FiLM parameters the FiLM generator produces for Q, we can use those produced by combining questions with familiar combinations of concepts (Q_A: “What is the blue big sphere made of?”, Q_B: “What is the green big cylinder made of?”, Q_C: “What is the green big sphere made of?”) as Q_A + Q_B - Q_C, which corrects the model’s answer to “Metal” ✔.*

Perez et al. report that this sort of task analogy is not always successful in correcting the model’s answer, but it does point to an interesting fact about FiLM-ed networks: sometimes the model makes a mistake not because it is incapable of computing the correct output, but because it fails to produce the correct FiLM parameters for a given task description. The reverse can also be true: if the set of tasks the model was trained on is insufficiently rich, the computational primitives learned by the FiLM-ed network may be insufficient to ensure good generalization. For instance, a style transfer model may lack the ability to produce zebra-like patterns if there are no stripes in the styles it was trained on. This could explain why Ghiasi et al. report that their style transfer model’s ability to produce pastiches for new styles degrades if it has been trained on an insufficiently large number of styles.
Note however that in that case the FiLM generator’s failure to generalize could also play a role, and further analysis would be needed to draw a definitive conclusion.

This points to a separation between the various computational primitives learned by the FiLM-ed network and the “numerical recipes” learned by the FiLM generator: the model’s ability to generalize depends both on its ability to parse new forms of task descriptions and on it having learned the required computational primitives to solve those tasks. We note that this multi-faceted notion of generalization is inherited directly from the multi-task point of view adopted by the FiLM framework.

Let’s now turn our attention back to the overall structural properties of FiLM parameters observed thus far. The existence of this structure has already been explored, albeit more indirectly, by Ghiasi et al. as well as Perez et al., who applied t-SNE on the FiLM parameter values.

*t-SNE projection of FiLM parameters for many task descriptions. Left: visual reasoning model, colored by question type. Right: style transfer model, colored by artist name.*

The projection on the left is inspired by a similar projection done by Perez et al. for their visual reasoning model trained on CLEVR and shows how questions group by question type. The projection on the right is inspired by a similar projection done by Ghiasi et al. for their style transfer network. The projection does not cluster artists as neatly as the projection on the left, but this is to be expected, given that an artist’s style may vary widely over time. However, we can still detect interesting patterns in the projection: note for instance the isolated cluster (circled in the figure) in which paintings by Ivan Shishkin and Rembrandt are aggregated. While these two painters exhibit fairly different styles, the cluster is a grouping of their sketches.

*Sketches by Rembrandt and Shishkin found in the same t-SNE cluster: Rembrandt’s “Woman with a Pink” and Shishkin’s “Woman with a boy in the forest.”*

To summarize, the way neural networks learn to use FiLM layers seems to vary from problem to problem, input to input, and even from feature to feature; there does not seem to be a single mechanism by which the network uses FiLM to condition computation. This flexibility may explain why FiLM-related methods have been successful across such a wide variety of domains.

---

Discussion
----------

Looking forward, there are still many unanswered questions. Do these experimental observations on FiLM-based architectures generalize to other related conditioning mechanisms, such as conditional biasing, sigmoidal gating, HyperNetworks, and bilinear transformations? When do feature-wise transformations outperform methods with stronger inductive biases and vice versa? Recent work combines feature-wise transformations with stronger inductive bias methods, which could be an optimal middle ground. Also, to what extent are FiLM’s task representation properties inherent to FiLM, and to what extent do they emerge from other features of neural networks (e.g., non-linearities, FiLM generator depth, etc.)?
If you are interested in exploring these or other\n questions about FiLM, we recommend looking into the code bases for\n FiLM models for [visual reasoning](https://github.com/ethanjperez/film)\n and [style transfer](https://github.com/tensorflow/magenta/tree/master/magenta/models/arbitrary_image_stylization)\n which we used as a\n starting point for our experiments here.\n \n\n\n\n Finally, the fact that changes on the feature level alone are able to\n compound into large and meaningful modulations of the FiLM-ed network is\n still very surprising to us, and hopefully future work will uncover deeper\n explanations. For now, though, it is a question that\n evokes the even grander mystery of how neural networks in general compound\n simple operations like matrix multiplications and element-wise\n non-linearities into semantically meaningful transformations.", "date_published": "2018-07-09T20:00:00Z", "authors": ["Vincent Dumoulin", "Ethan Perez", "Nathan Schucher", "Florian Strub", "Harm de Vries", "Aaron Courville", "Yoshua Bengio"], "summaries": ["A simple and surprisingly effective family of conditioning mechanisms."], "doi": "10.23915/distill.00011", "journal_ref": "distill-pub", "bibliography": [{"link": "http://arxiv.org/pdf/1709.07871.pdf", "title": "FiLM: Visual Reasoning with a General Conditioning Layer"}, {"link": "https://arxiv.org/pdf/1707.03017.pdf", "title": "Learning visual reasoning without strong priors"}, {"link": "https://arxiv.org/pdf/1612.06890.pdf", "title": "CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning"}, {"link": "http://arxiv.org/pdf/1611.08481.pdf", "title": "GuessWhat?! Visual object discovery through multi-modal dialogue"}, {"link": "http://tamaraberg.com/papers/referit.pdf", "title": "ReferItGame: Referring to objects in photographs of natural scenes"}, {"link": "https://arxiv.org/pdf/1707.00683.pdf", "title": "Modulating early visual processing by language"}, {"link": "https://arxiv.org/pdf/1505.00468.pdf", "title": "VQA: visual question answering"}, {"link": "https://arxiv.org/pdf/1610.07629.pdf", "title": "A learned representation for artistic style"}, {"link": "https://arxiv.org/pdf/1705.06830.pdf", "title": "Exploring the structure of a real-time, arbitrary neural artistic stylization network"}, {"link": "http://arxiv.org/pdf/1802.01218.pdf", "title": "Efficient video object segmentation via network modulation"}, {"link": "https://arxiv.org/pdf/1703.06868.pdf", "title": "Arbitrary style transfer in real-time with adaptive instance normalization"}, {"link": "http://arxiv.org/pdf/1505.00387.pdf", "title": "Highway networks"}, {"link": "http://dx.doi.org/10.1162/neco.1997.9.8.1735", "title": "Long short-term memory"}, {"link": "https://arxiv.org/pdf/1709.01507.pdf", "title": "Squeeze-and-Excitation networks"}, {"link": "http://arxiv.org/pdf/1707.05589.pdf", "title": "On the state of the art of evaluation in neural language models"}, {"link": "https://arxiv.org/pdf/1612.08083.pdf", "title": "Language modeling with gated convolutional networks"}, {"link": "https://arxiv.org/pdf/1705.03122.pdf", "title": "Convolution sequence-to-sequence learning"}, {"link": "https://arxiv.org/pdf/1606.01549.pdf", "title": "Gated-attention readers for text comprehension"}, {"link": "https://arxiv.org/pdf/1706.07230.pdf", "title": "Gated-attention architectures for task-oriented language grounding"}, {"link": "https://arxiv.org/pdf/1605.02097.pdf", "title": "Vizdoom: A doom-based AI research platform for visual reinforcement learning"}, 
{"link": "http://arxiv.org/pdf/1806.01946.pdf", "title": "Learning to follow language instructions with adversarial reward induction"}, {"link": "http://arxiv.org/pdf/1511.02799.pdf", "title": "Neural module networks"}, {"link": "http://www.pnas.org/content/114/13/3521.abstract", "title": "Overcoming catastrophic forgetting in neural networks"}, {"link": "https://arxiv.org/pdf/1511.06434.pdf", "title": "Unsupervised representation learning with deep convolutional generative adversarial networks"}, {"link": "https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf", "title": "Generative adversarial nets"}, {"link": "https://arxiv.org/pdf/1606.05328.pdf", "title": "Conditional image generation with PixelCNN decoders"}, {"link": "https://arxiv.org/pdf/1609.03499.pdf", "title": "WaveNet: A generative model for raw audio"}, {"link": "https://arxiv.org/pdf/1707.06065.pdf", "title": "Dynamic layer normalization for adaptive neural acoustic modeling in speech recognition"}, {"link": "http://www.sciencedirect.com/science/article/pii/S003132031830092X", "title": "Adaptive batch normalization for practical domain adaptation"}, {"link": "http://arxiv.org/pdf/1805.10123.pdf", "title": "TADAM: Task dependent adaptive metric for improved few-shot learning"}, {"link": "http://arxiv.org/pdf/1703.05175.pdf", "title": "Prototypical networks for few-shot learning"}, {"link": "https://papers.nips.cc/paper/5204-devise-a-deep-visual-semantic-embedding-model", "title": "Devise: A deep visual-semantic embedding model"}, {"link": "http://arxiv.org/pdf/1301.3666.pdf", "title": "Zero-shot learning through cross-modal transfer"}, {"link": "http://arxiv.org/pdf/1312.5650.pdf", "title": "Zero-shot learning by convex combination of semantic embeddings"}, {"link": "https://arxiv.org/pdf/1609.09106.pdf", "title": "HyperNetworks"}, {"link": "http://dx.doi.org/10.1162/089976600300015349", "title": "Separating style and content with bilinear models"}, {"link": "http://www.jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf", "title": "Visualizing data using t-SNE"}, {"link": "http://arxiv.org/pdf/1803.06092.pdf", "title": "A dataset and architecture for visual reasoning with a working memory"}, {"link": "http://dl.acm.org/citation.cfm?id=1623264.1623282", "title": "A parallel computation that assigns canonical object-based frames of reference"}, {"link": "https://doi.org/10.1007/978-1-4612-4320-5_2", "title": "The correlation theory of brain function"}, {"link": "http://dl.acm.org/citation.cfm?id=3104482.3104610", "title": "Generating text with recurrent neural networks"}, {"link": "http://www.cs.toronto.edu/~tang/papers/robm.pdf", "title": "Robust boltzmann machines for recognition and denoising"}, {"link": "http://doi.acm.org/10.1145/1553374.1553505", "title": "Factored conditional restricted Boltzmann machines for modeling motion style"}, {"link": "http://www.cs.toronto.edu/~osindero/PUBLICATIONS/RossOsinderoZemel_ICML06.pdf", "title": "Combining discriminative features to infer complex trajectories"}, {"link": "http://dx.doi.org/10.1162/NECO_a_00312", "title": "Learning where to attend with deep architectures for image tracking"}, {"link": "https://ieeexplore.ieee.org/document/5995496/", "title": "Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis"}, {"link": "https://link.springer.com/chapter/10.1007/978-3-642-15567-3_11", "title": "Convolutional learning of spatio-temporal features"}, {"link": 
"https://www.iro.umontreal.ca/~memisevr/pubs/pami_relational.pdf", "title": "Learning to relate images"}, {"link": "https://papers.nips.cc/paper/6976-incorporating-side-information-by-adaptive-convolution.pdf", "title": "Incorporating side information by adaptive convolution"}, {"link": "http://papers.nips.cc/paper/6654-learning-multiple-visual-domains-with-residual-adapters.pdf", "title": "Learning multiple visual domains with residual adapters"}, {"link": "https://arxiv.org/pdf/1506.00511.pdf", "title": "Predicting deep zero-shot convolutional neural networks using textual descriptions"}, {"link": "https://arxiv.org/pdf/1706.05064.pdf", "title": "Zero-shot task generalization with multi-task deep reinforcement learning"}, {"link": "https://papers.nips.cc/paper/1290-separating-style-and-content.pdf", "title": "Separating style and content"}, {"link": "http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.65.3338&rep=rep1&type=pdf", "title": "Facial expression space learning"}, {"link": "http://www.wwwconference.org/www2009/proceedings/pdf/p691.pdf", "title": "Personalized recommendation on dynamic content using predictive bilinear models"}, {"link": "http://wwwconference.org/proceedings/www2011/proceedings/p537.pdf", "title": "Like like alike: joint friendship and interest propagation in social networks"}, {"link": "https://datajobs.com/data-science-repo/Recommender-Systems-[Netflix].pdf", "title": "Matrix factorization techniques for recommender systems"}, {"link": "http://vis-www.cs.umass.edu/bcnn/docs/bcnn_iccv15.pdf", "title": "Bilinear CNN models for fine-grained visual recognition"}, {"link": "https://arxiv.org/pdf/1604.06573.pdf", "title": "Convolutional two-stream network fusion for video action recognition"}, {"link": "https://arxiv.org/pdf/1606.01847.pdf", "title": "Multimodal compact bilinear pooling for visual question answering and visual grounding"}]} {"id": "16581d92f7140bdd597b913e7ae78c38", "title": "The Building Blocks of Interpretability", "url": "https://distill.pub/2018/building-blocks", "source": "distill", "source_type": "blog", "text": "With the growing success of neural networks, there is a corresponding need to be able to explain their decisions — including building confidence about how they will behave in the real-world, detecting model bias, and for scientific curiosity.\n\n In order to do so, we need to both construct deep abstractions and reify (or instantiate) them in rich interfaces .\n\n With a few exceptions , existing work on interpretability fails to do these in concert.\n \n\n\n\n The machine learning community has primarily focused on developing powerful methods, such as [feature visualization](https://distill.pub/2017/feature-visualization/) , attribution , and dimensionality reduction , for reasoning about neural networks. \n\n However, these techniques have been studied as isolated threads of research, and the corresponding work of reifying them has been neglected.\n\n On the other hand, the human-computer interaction community has begun to explore rich user interfaces for neural networks , but they have not yet engaged deeply with these abstractions. \n\n To the extent these abstractions have been used, it has been in fairly standard ways.\n\n As a result, we have been left with impoverished interfaces (e.g., saliency maps or correlating abstract neurons) that leave a lot of value on the table. 
\n\n Worse, many interpretability techniques have not been fully actualized into abstractions because there has not been pressure to make them generalizable or composable.\n \n\n\n\n In this article, we treat existing interpretability methods as fundamental and composable building blocks for rich user interfaces.\n\n We find that these disparate techniques now come together in a unified grammar, fulfilling complementary roles in the resulting interfaces.\n\n Moreover, this grammar allows us to systematically explore the space of interpretability interfaces, enabling us to evaluate whether they meet particular goals.\n\n We will present interfaces that show *what* the network detects and explain *how* it develops its understanding, while keeping the amount of information *human-scale*.\n\n For example, we will see how a network looking at a labrador retriever detects floppy ears and how that influences its classification.\n \n\n\n\n\n Our interfaces are speculative and one might wonder how reliable they are. \n\n Rather than address this point piecemeal, we dedicate a section to it at the end of the article.\n \n\n In this article, we use GoogLeNet, an image classification model, to demonstrate our interface ideas because its neurons seem unusually semantically meaningful.We’re actively investigating why this is, and hope to uncover principles for designing interpretable models. In the meantime, while we demonstrate our techniques on GoogLeNet, we provide code for you to try them on other models.\n\n Although here we’ve made a specific choice of task and network, the basic abstractions and patterns for combining them that we present can be applied to neural networks in other domains.\n \n\n\nMaking Sense of Hidden Layers\n-----------------------------\n\n\n\n Much of the recent work on interpretability is concerned with a neural network’s input and output layers.\n Arguably, this focus is due to the clear meaning these layers have: in computer vision, the input layer represents values for the red, green, and blue color channels for every pixel in the input image, while the output layer consists of class labels and their associated probabilities.\n \n\n\n\n However, the power of neural networks lies in their hidden layers — at every layer, the network discovers a new representation of the input.\n\n In computer vision, we use neural networks that run the same feature detectors at every position in the image.\n\n We can think of each layer’s learned representation as a three-dimensional cube. Each cell in the cube is an *activation*, or the amount a neuron fires.\n\n The x- and y-axes correspond to positions in the image, and the z-axis is the channel (or detector) being run.\n \n\n\n\n\n\n The cube of activations that a neural network for computer vision develops at each hidden layer. 
\n \n Different slices of the cube allow us to target the activations of individual neurons, spatial positions, or channels.\n \n\n\n\n To make a semantic dictionary, we pair every neuron activation with a visualization of that neuron and sort them by the magnitude of the activation.\n\n This marriage of activations and feature visualization changes our relationship with the underlying mathematical object.\n\n Activations now map to iconic representations, instead of abstract indices, with many appearing to be similar to salient human ideas, such as “floppy ear,” “dog snout,” or “fur.”\n \n\n\n\n We use optimization-based feature visualization to avoid spurious correlation, but one could use other methods.\n \n\n Semantic dictionaries are powerful not just because they move away from meaningless indices, but because they express a neural network’s learned abstractions with canonical examples.\n \n With image classification, the neural network learns a set of visual abstractions and thus images are the most natural symbols to represent them.\n\n Were we working with audio, the more natural symbols would most likely be audio clips.\n\n This is important because when neurons appear to correspond to human ideas, it is tempting to reduce them to words.\n\n Doing so, however, is a lossy operation — even for familiar abstractions, the network may have learned a deeper nuance.\n\n For instance, GoogLeNet has multiple floppy ear detectors that appear to detect slightly different levels of droopiness, length, and surrounding context to the ears.\n\n There also may exist abstractions which are visually familiar, yet that we lack good natural language descriptions for: for example, take the particular column of shimmering light where sun hits rippling water.\n\n Moreover, the network may learn new abstractions that appear alien to us — here, natural language would fail us entirely!\n\n In general, canonical examples are a more natural way to represent the foreign abstractions that neural networks learn than native human language.\n \n\n\n\n By bringing meaning to hidden layers, semantic dictionaries set the stage for our existing interpretability techniques to be composable building blocks.\n\n As we shall see, just like their underlying vectors, we can apply dimensionality reduction to them.\n\n In other cases, semantic dictionaries allow us to push these techniques further.\n\n For example, besides the one-way attribution that we currently perform with the input and output layers, semantic dictionaries allow us to attribute to-and-from specific hidden layers.\n\n In principle, this work could have been done without semantic dictionaries but it would have been unclear what the results meant.\n \n\n\n\n While we introduce semantic dictionaries in terms of neurons, they can be used with any basis of activations. We will explore\n this more later.\n \nWhat Does the Network See?\n--------------------------\n\n\n\n\n Applying this technique to all the activation vectors allows us to not only see what the network detects at each position, but also what the network understands of the input image as a whole.\n \n\n\n\n\n And, by working across layers (eg. 
“mixed3a”, “mixed4d”), we can observe how the network’s understanding evolves: from detecting edges in earlier layers, to more sophisticated shapes and object parts in the latter.\n \n\n\n\n\n These visualizations, however, omit a crucial piece of information: the magnitude of the activations.\n \n By scaling the area of each cell by the magnitude of the activation vector, we can indicate how strongly the network detected features at that position:\n \n\n\n\n\nHow Are Concepts Assembled?\n---------------------------\n\n\n\n Feature visualization helps us answer *what* the network detects, but it does not answer *how* the network assembles these individual pieces to arrive at later decisions, or *why* these decisions were made.\n \n\n\n\n Attribution is a set of techniques that answers such questions by explaining the relationships between neurons.\n\n There are a wide variety of approaches to attribution but, so far, there doesn’t seem to be a clear right answer.\n\n In fact, there’s reason to think that all our present answers aren’t quite right .\n\n We think there’s a lot of important research to be done on attribution methods, but for the purposes of this article the exact approach taken to attribution doesn’t matter.\n\n We use a fairly simple method, linearly approximating the relationshipWe do attribution by linear approximation in all of our interfaces. That is, we estimate the effect of a neuron on the output is its activation times the rate at which increasing its activation increases the output. When we talk about a linear combination of activations, the attribution can be thought of as the linear combination of the attributions of the units, or equivalently as the dot product between the activation of that combination and the gradient. \n \n For spatial attribution, we do an additional trick. GoogLeNet’s strided max pooling introduces a lot of noise and checkerboard patterns to it’s gradients. To avoid our interface demonstrations being dominated by this noise, we (a) do a relaxation of the gradient of max pooling, distributing gradient to inputs proportional to their activation instead of winner takes all and (b) cancel out the checkerboard patterns. \n \nThe notebooks attached to diagrams provide reference implementations., but could easily substitute in essentially any other technique.\n\n Future improvements to attribution will, of course, correspondingly improve the interfaces built on top of them.\n \n\n\n### Spatial Attribution with Saliency Maps\n\n\n\n The most common interface for attribution is called a *saliency map* — a simple heatmap that highlights pixels of the input image that most caused the output classification.\n\n We see two weaknesses with this current approach.\n \n\n\n\n First, it is not clear that individual pixels should be the primary unit of attribution.\n\n The meaning of each pixel is extremely entangled with other pixels, is not robust to simple visual transforms (e.g., brightness, contrast, etc.), and is far-removed from high-level concepts like the output class.\n\n Second, traditional saliency maps are a very limited type of interface — they only display the attribution for a single class at a time, and do not allow you to probe into individual points more deeply.\n\n As they do not explicitly deal with hidden layers, it has been difficult to fully explore their design space.\n \n\n\n\n We instead treat attribution as another user interface building block, and apply it to the hidden layers of a neural network. 
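As a rough illustration of the linear-approximation attribution described above, here is a minimal PyTorch-style sketch. It is not the implementation used for these diagrams (which, as noted, also relaxes the max-pooling gradient and cancels checkerboard patterns); `features` and `classifier` are hypothetical stand-ins for the two halves of a model split at the hidden layer of interest.

```python
import torch

def hidden_layer_attribution(features, classifier, image, class_idx):
    """Attribution of one output class to a hidden layer by linear approximation:
    attribution = activation * d(class logit) / d(activation)."""
    acts = features(image)                    # hidden-layer activations, e.g. shape [1, C, H, W]
    acts.retain_grad()                        # keep the gradient of this non-leaf tensor
    logit = classifier(acts)[0, class_idx]    # scalar logit for the class of interest
    logit.backward()
    return (acts * acts.grad).detach()        # same shape as the activations

# Summing the result over channels gives a spatial saliency map for the hidden layer;
# summing over spatial positions instead gives a per-channel attribution.
```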
\n\n In doing so, we change the questions we can pose.\n\n Rather than asking whether the color of a particular pixel was important for the “labrador retriever” classification, we instead ask whether the *high-level idea* detected at that position (such as “floppy ear”) was important.\n\n This approach is similar to what Class Activation Mapping (CAM) methods do but, because they interpret their results back onto the input image, they miss the opportunity to communicate in terms of the rich behavior of a network’s hidden layers.\n \n\n\n\n\n The above interface affords us a more flexible relationship with attribution.\n\n To start, we perform attribution from each spatial position of each hidden layer shown to all 1,000 output classes.\n\n In order to visualize this thousand-dimensional vector, we use dimensionality reduction to produce a multi-directional saliency map.\n\n Overlaying these saliency maps on our magnitude-sized activation grids provides an information scent over attribution space.\n\n The activation grids allow us to anchor attribution to the visual vocabulary our semantic dictionaries first established.\n\n On hover, we update the legend to depict attribution to the output classes (i.e., which classes does this spatial position most contribute to?).\n \n\n\n\n Perhaps most interestingly, this interface allows us to interactively perform attribution *between hidden layers*.\n\n On hover, additional saliency maps mask the hidden layers, in a sense shining a light into their black boxes.\n\n This type of layer-to-layer attribution is a prime example of how carefully considering interface design drives the generalization of our existing abstractions for interpretability.\n \n\n\n\n With this diagram, we have begun to think of attribution in terms of higher-level concepts.\n\n However, at a particular position, many concepts are being detected together and this interface makes it difficult to split them apart. 
\n \n By continuing to focus on spatial positions, these concepts remain entangled.\n \n\n\n### Channel Attribution\n\n\n\n Saliency maps implicitly slice our cube of activations by applying attribution to the spatial positions of a hidden layer.\n\n This aggregates over all channels and, as a result, we cannot tell which specific detectors *at each position* most contributed to the final output classification.\n \n\n\n\n An alternate way to slice the cube is by channels instead of spatial locations.\n\n Doing so allows us to perform *channel attribution*: how much did each detector contribute to the final output?\n\n (This approach is similar to contemporaneous work by Kim et al., who do attribution to learned combination of channels.)\n \n\n\n\n\n This diagram is analogous to the previous one we saw: we conduct layer-to-layer attribution but this time over channels rather than spatial positions.\n\n Once again, we use the icons from our semantic dictionary to represent the channels that most contribute to the final output classification.\n\n Hovering over an individual channel displays a heatmap of its activations overlaid on the input image.\n\n The legend also updates to show its attribution to the output classes (i.e., what are the top classes this channel supports?).\n\n Clicking a channel allows us to drill into the layer-to-layer attributions, identifying the channels at lower layers that most contributed as well as the channels at higher layers that are most supported.\n \n\n\n\n While these diagrams focus on layer-to-layer attribution, it can still be valuable to focus on a single hidden layer.\n\n For example, the teaser figure allows us to evaluate hypotheses for why one class succeeded over the other.\n \n\n\n\n\n\n Attribution to spatial locations and channels can reveal powerful things about a model, especially when we combine them together.\n\n Unfortunately, this family of approaches is burdened by two significant problems.\n\n On the one hand, it is very easy to end up with an overwhelming amount of information: it would take hours of human auditing to understand the long-tail of channels that slightly impact the output.\n\n On the other hand, both the aggregations we have explored are extremely lossy and can miss important parts of the story.\n\n And, while we could avoid lossy aggregation by working with individual neurons, and not aggregating at all, this explodes the first problem combinatorially.\n \n\n\nMaking Things Human-Scale\n-------------------------\n\n\n\n In previous sections, we’ve considered three ways of slicing the cube of activations: into spatial activations, channels, and individual neurons.\n Each of these has major downsides.\n If one only uses spatial activations or channels, they miss out on very important parts of the story.\n For example it’s interesting that the floppy ear detector helped us classify an image as a Labrador retriever, but it’s much more interesting when that’s combined with the locations that fired to do so.\n One can try to drill down to the level of neurons to tell the whole story, but the tens of thousands of neurons are simply too much information.\n Even the hundreds of channels, before being split into individual neurons, can be overwhelming to show users!\n \n\n\n\n If we want to make useful interfaces into neural networks, it isn’t enough to make things meaningful.\n We need to make them human scale, rather than overwhelming dumps of information.\n The key to doing so is finding more meaningful ways of breaking up our 
activations.\n There is good reason to believe that such decompositions exist.\n Often, many channels or spatial positions will work together in a highly correlated way and are most useful to think of as one unit.\n Other channels or positions will have very little activity, and can be ignored for a high-level overview.\n So, it seems like we ought to be able to find better decompositions if we had the right tools.\n \n\n\n\n There is an entire field of research, called matrix factorization, that studies optimal strategies for breaking up matrices.\n By flattening our cube into a matrix of spatial locations and channels, we can apply these techniques to get more meaningful groups of neurons.\n These groups will not align as naturally with the cube as the groupings we previously looked at.\n Instead, they will be combinations of spatial locations and channels.\n Moreover, these groups are constructed to explain the behavior of a network on a particular image.\n It would not be effective to reuse the same groupings on another image; each image requires calculating a unique set of groups.\n \n\n\n\n\n\n In addition to naturally slicing a hidden layer’s cube of activations into neurons, spatial locations, or channels, we can also consider more arbitrary groupings of locations and channels.\n \n\n\n The groups that come out of this factorization will be the atoms of the interface a user works with. Unfortunately, any grouping is inherently a tradeoff between reducing things to human scale and, because any aggregation is lossy, preserving information. Matrix factorization lets us pick what our groupings are optimized for, giving us a better tradeoff than the natural groupings we saw earlier.\n \n\n\n\n The goals of our user interface should influence what we optimize our matrix factorization to prioritize. For example, if we want to prioritize what the network detected, we would want the factorization to fully describe the activations. If we instead wanted to prioritize what would change the network’s behavior, we would want the factorization to fully describe the gradient. Finally, if we want to prioritize what caused the present behavior, we would want the factorization to fully describe the attributions. Of course, we can strike a balance between these three objectives rather than optimizing one to the exclusion of the others.\n \n\n\n\n In the following diagram, we’ve constructed groups that prioritize the activations, by factorizing the activations\n Most matrix factorization algorithms and libraries are set up to minimize the mean squared error of the reconstruction of a matrix you give them. There are ways to hack such libraries to achieve more general objectives through clever manipulations of the provided matrix, as we will see below. More broadly, matrix factorization is an optimization problem, and with custom tools you can achieve all sorts of custom factorizations.\n with non-negative matrix factorization\n As the name suggests, non-negative matrix factorization (NMF) constrains its factors to be positive. This is fine for the activations of a ReLU network, which must be positive as well. Our experience is that the groups we get from NMF seem more independent and semantically meaningful than those without this constraint. 
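As a rough sketch of this kind of factorization (under assumptions, not the exact setup behind the diagram below), one can flatten a layer’s activations for a single image into a matrix of spatial positions by channels and hand it to an off-the-shelf NMF implementation; the activation array and the number of groups here are placeholders.

```python
import numpy as np
from sklearn.decomposition import NMF

# Placeholder activations for one image at one hidden layer: [height, width, channels].
# Post-ReLU activations are non-negative, as NMF requires.
acts = np.random.rand(14, 14, 528).astype(np.float32)

h, w, c = acts.shape
flat = acts.reshape(h * w, c)                 # cube -> (spatial positions, channels) matrix

n_groups = 6                                  # how many neuron groups to recover
nmf = NMF(n_components=n_groups, init="nndsvd", max_iter=500)
spatial_factors = nmf.fit_transform(flat)     # (h*w, n_groups): where each group is active
channel_factors = nmf.components_             # (n_groups, c): which channels form each group

# Each group pairs a spatial map (which can be overlaid on the image) with a channel
# direction (which can be rendered with feature visualization).
group_maps = spatial_factors.reshape(h, w, n_groups)
```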
Because of this constraint, groups obtained with NMF are less efficient at representing the activations than unconstrained factorizations would be.

Notice how the overwhelmingly large number of neurons has been reduced to a small set of groups, concisely summarizing the story of the neural network.

This figure only focuses on a single layer but, as we saw earlier, it can be useful to look across multiple layers to understand how a neural network assembles together lower-level detectors into higher-level concepts.

The groups we constructed before were optimized to understand a single layer independent of the others. To understand multiple layers together, we would like each layer’s factorization to be “compatible” — to have the groups of earlier layers naturally compose into the groups of later layers. This is also something we can optimize the factorization for. We formalize this “compatibility” in the following manner, although we’re not confident it’s the best formalization and won’t be surprised if it is superseded in future work: consider the attribution from every neuron in the layer to the set of *N* groups we want it to be compatible with. The basic idea is to split each entry in the activation matrix into *N* entries on the channel dimension, spreading the values proportional to the absolute value of its attribution to the corresponding group. Any factorization of this matrix induces a factorization of the original matrix by collapsing the duplicated entries in the column factors. However, the resulting factorization tries to create separate factors when the activation of the same channel has different attributions in different places.

In this section, we recognize that the way in which we break apart the cube of activations is an important interface decision. Rather than resigning ourselves to the natural slices of the cube of activations, we construct more optimal groupings of neurons. These improved groupings are both more meaningful and more human-scale, making it less tedious for users to understand the behavior of the network.

Our visualizations have only begun to explore the potential of alternate bases in providing better atoms for understanding neural networks. For example, while we focus on creating smaller numbers of directions to explain individual examples, there’s recently been exciting work finding “globally” meaningful directions — such bases could be especially helpful when trying to understand multiple examples at a time, or in comparing models. The recent [NIPS disentangling workshop](https://sites.google.com/corp/view/disentanglenips2017) provides other promising directions.
We’re excited to see a venue for this developing area of research.\n\n\n\nThe Space of Interpretability Interfaces\n----------------------------------------\n\n\n\n The interface ideas presented in this article combine building blocks such as feature visualization and attribution.\n\n Composing these pieces is not an arbitrary process, but rather follows a structure based on the goals of the interface.\n\n For example, should the interface emphasize *what* the network recognizes, prioritize *how* its understanding develops, or focus on making things *human-scale*.\n\n To evaluate such goals, and understand the tradeoffs, we need to be able to *systematically* consider possible alternatives.\n \n\n\nWe can think of an interface as a union of individual elements.\n\n\n\n\n\n Each element displays a specific type of *content* (e.g., activations or attribution) using a particular style of *presentation* (e.g., feature visualization or traditional information visualization).\n\n This content lives on substrates defined by how given *layers* of the network are broken apart into *atoms*, and may be *transformed* by a series of operations (e.g., to filter it or project it onto another substrate).\n\n For example, our semantic dictionaries use feature visualization to display the activations of a hidden layer's neurons.\n \n\n\n\n One way to represent this way of thinking is with a formal grammar, but we find it helpful to think about the space visually.\n\n We can represent the network’s substrate (which layers we display, and how we break them apart) as a grid, with the content and style of presentation plotted on this grid as points and connections.\n \n\n\n\n![](images/design_space/empty.svg)\n\n\n This setup gives us a framework to begin exploring the space of interpretability interfaces step by step.\n\n For instance, let us consider our teaser figure again.\n\n Its goal is to help us compare two potential classifications for an input image.\n \n\n\n\n![](images/design_space/teaser.svg)\n\n\n**1. Feature visualization**\n![](images/teaser-1.png)\n\n To understand a classification, we focus on the channels of the `mixed4d` layer. Feature visualization makes these channels meaningful.\n \n\n\n**2. Filter by output attribution**\n![](images/teaser-2.png)\n\n Next, we filter for specific classes by calculating the output attribution.\n \n\n\n**3. Drill down on hover**\n![](images/teaser-3.png)\n\n Hovering over channels, we get a heatmap of spatial activations.\n \n\n\n\n\n figure .teaser-thumb {\n width: 100%;\n border: 30px solid rgb(248, 248, 248);\n border-radius: 10px;\n box-sizing: border-box;\n margin: 12px 0 10px;\n }\n \n\n In this article, we have only scratched the surface of possibilities.\n\n There are lots of combinations of our building blocks left to explore, and the design space gives us a way to do so systematically.\n \n\n\n\n\n Moreover, each building block represents a broad class of techniques.\n\n Our interfaces take only one approach but, as we saw in each section, there are a number of alternatives for feature visualization, attribution, and matrix factorization.\n\n An immediate next step would be to try using these alternate techniques, and research ways to improve them.\n \n\n\n\n Finally, this is not the complete set of building blocks; as new ones are discovered, they expand the space.\n\n For example, Koh & Liang. 
suggest ways of understanding the influence of dataset examples on model behavior .\n\n We can think of dataset examples as another substrate in our design space, thus becoming another building block that fully composes with the others.\n\n In doing so, we can now imagine interfaces that not only allow us to inspect the influence of dataset examples on the final output classification (as Koh & Liang proposed), but also how examples influence the features of hidden layers, and how they influence the relationship between these features and the output.\n\n For example, if we consider our “Labrador retriever” image, we can not only see which dataset examples most influenced the model to arrive at this classification, but also which dataset examples most caused the “floppy ear” detectors to fire, and which dataset examples most caused these detectors to increase the “Labrador retriever” classification.\n \n\n\n\n![](images/design_space/dataset.svg)\n\n\n\n\n\n\n\n\n\n\n\n\n A new substrate.\n \n\n An interface to understand how dataset examples influence the output classification, as presented by Koh & Liang\n\n\n An interface showing how examples influence the channels of hidden layers.\n \n\n An interface for identifying which dataset examples most caused particular detectors to increase the output classification.\n \n\n\n\n Beyond interfaces for analyzing model behavior, if we add model *parameters* as a substrate, the design space now allows us to consider interfaces for *taking action* on neural networks.Note that essentially all our interpretability techniques are differentiable, so you can backprop through them.\n\n While most models today are trained to optimize simple objective functions that one can easily describe, many of the things we’d like models to do in the real world are subtle, nuanced, and hard to describe mathematically.\n \n An extreme example of the subtle objective problem is something like “creating interesting art”, but much more mundane examples arise more or less whenever humans are involved.\n \n One very promising approach to training models for these subtle objectives is learning from human feedback .\n \n However, even with human feedback, it may still be hard to train models to behave the way we want if the problematic aspect of the model doesn’t surface strongly in the training regime where humans are giving feedback.\n \n \n There are lots of reasons why problematic behavior may not surface or may be hard for an evaluator to give feedback on.\n\n For example, discrimination and bias may be subtly present throughout the model’s behavior, such that it’s hard for a human evaluator to critique.\n \n Or the model may be making a decision in a way that has problematic consequences, but those consequences never play out in the problems we’re training it on.\n \n \n Human feedback on the model’s decision making process, facilitated by interpretability interfaces, could be a powerful solution to these problems.\n \n It might allow us to train models not just to make the *right decisions*, but to make them *for the right reasons*.\n\n (There is however a danger here: we are optimizing our model to look the way we want in our interface — if we aren’t careful, this may lead to the model fooling us!Related ideas have occasionally been discussed under the term “cognitive steganography.”)\n \n\n\n\n\n Another exciting possibility is interfaces for comparing multiple models.\n\n For instance, we might want to see how a model evolves during training, or how it changes when you 
transfer it to a new task. \n\n Or, we might want to understand how a whole family of models compares to each other.\n\n Existing work has primarily focused on comparing the output behavior of models but more recent work is starting to explore comparing their internal representations as well.\n\n One of the unique challenges of this work is that we may want to align the atoms of each model; if we have completely different models, can we find the most analogous neurons between them?\n\n Zooming out, can we develop interfaces that allow us to evaluate large spaces of models at once?\n \n\n\nHow Trustworthy Are These Interfaces?\n-------------------------------------\n\n\n\n In order for interpretability interfaces to be effective, we must trust the story they are telling us. \n\n We perceive two concerns with the set of building blocks we currently use.\n\n First, do neurons have a relatively consistent meaning across different input images, and is that meaning accurately reified by feature visualization?\n\n Semantic dictionaries, and the interfaces that build on top of them, are premised off this question being true.\n\n Second, does attribution make sense and do we trust any of the attribution methods we presently have?\n \n\n\n\n Much prior research has found that directions in neural networks are semantically meaningful.\n\n One particularly striking example of this is “semantic arithmetic” (eg. “king” - “man” + “woman” = “queen”). \n\n We explored this question, in depth, for GoogLeNet in our previous article and found that many of its neurons seem to correspond to meaningful ideas.We validated this in a number of ways: we visualized them without a generative model prior, so that the content of the visualizations\n was causally linked to the neuron firing; we inspected the spectrum of examples that cause the neuron to fire; and used diversity\n visualizations to try to create different inputs that cause the neuron to fire. \n \nFor more details, see\n [the article’s appendix](https://distill.pub/2017/feature-visualization/appendix/) and the guided tour in\n [@ch402′s Twitter thread](https://twitter.com/ch402/status/927968700384153601). We’re actively investigating why GoogLeNet’s neurons seem more meaningful.\n\n Besides these neurons, however, we also found many neurons that do not have as clean a meaning including “poly-semantic” neurons that respond to a mixture of salient ideas (e.g., “cat” and “car”).\n\n There are natural ways that interfaces could respond to this: we could use diversity visualizations to reveal the variety of meanings the neuron can take, or rotate our semantic dictionaries so their components are more disentangled.\n\n Of course, just like our models can be fooled, the features that make them up can be too — including with adversarial examples .\n\n In our view, features do not need to be flawless detectors for it to be useful for us to think about them as such. \n \n In fact, it can be interesting to identify when a detector misfires. \n \n\n\n\n With regards to attribution, recent work suggests that many of our current techniques are unreliable.\n\n One might even wonder if the idea is fundamentally flawed, since a function’s output could be the result of non-linear interactions between its inputs. 
\n\n One way these interactions can pan out is as attribution being “path-dependent”.\n\n A natural response to this would be for interfaces to explicitly surface this information: how path-dependent is the attribution?\n\n A deeper concern, however, would be whether this path-dependency dominates the attribution. \n\n Clearly, this is not a concern for attribution between adjacent layers because of the simple (essentially linear) mapping between them. \n \n While there may be technicalities about correlated inputs, we believe that attribution is on firm grounding here.\n\n And even with layers further apart, our experience has been that attribution between high-level features at the output is much more consistent than attribution to the input — we believe that path-dependence is not a dominating concern here. \n \n\n\n\n Model behavior is extremely complex, and our current building blocks force us to show only specific aspects of it. \n \n An important direction for future interpretability research will be developing techniques that achieve broader coverage of model behavior. \n \n But, even with such improvements, we anticipate that a key marker of trustworthiness will be interfaces that do not mislead. \n \n Interacting with the explicit information displayed should not cause users to implicitly draw incorrect assessments about the model (we see a similar principle articulated by Mackinlay for data visualization).\n\n Undoubtedly, the interfaces we present in this article have room to improve in this regard. \n\n Fundamental research, at the intersection of machine learning and human-computer interaction, is necessary to resolve these issues.\n \n\n\n\n Trusting our interfaces is essential for many of the ways we want to use interpretability.\n\n This is both because the stakes can be high (as in safety and fairness) and also because ideas like training models with interpretability feedback put our interpretability techniques in the middle of an adversarial setting.\n \n\n\nConclusion & Future Work\n------------------------\n\n\n\n There is a rich design space for interacting with enumerative algorithms, and we believe an equally rich space exists for interacting with neural networks.\n\n We have a lot of work left ahead of us to build powerful and trustworthy interfaces for interpretability.\n\n But, if we succeed, interpretability promises to be a powerful tool in enabling meaningful human oversight and in building fair, safe, and aligned AI systems.", "date_published": "2018-03-06T20:00:00Z", "authors": ["Chris Olah", "Arvind Satyanarayan", "Ian Johnson", "Shan Carter", "Ludwig Schubert", "Katherine Ye", "Alexander Mordvintsev"], "summaries": ["Interpretability techniques are normally studied in isolation. 
We explore the powerful interfaces that arise when you combine them -- and the rich structure of this combinatorial space."], "doi": "10.23915/distill.00010", "journal_ref": "distill-pub", "bibliography": [{"link": "http://cognitivemedium.com/tat/index.html", "title": "Thought as a Technology"}, {"link": "http://colah.github.io/posts/2015-01-Visualizing-Representations/", "title": "Visualizing Representations: Deep Learning and Human Beings"}, {"link": "http://yosinski.com/media/papers/Yosinski__2015__ICML_DL__Understanding_Neural_Networks_Through_Deep_Visualization__.pdf", "title": "Understanding neural networks through deep visualization"}, {"link": "https://distill.pub/2017/aia/", "title": "Using Artificial Intelligence to Augment Human Intelligence"}, {"link": "https://www.researchgate.net/profile/Aaron_Courville/publication/265022827_Visualizing_Higher-Layer_Features_of_a_Deep_Network/links/53ff82b00cf24c81027da530.pdf", "title": "Visualizing higher-layer features of a deep network"}, {"link": "https://distill.pub/2017/feature-visualization", "title": "Feature Visualization"}, {"link": "https://arxiv.org/pdf/1312.6034.pdf", "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps"}, {"link": "https://arxiv.org/pdf/1412.1897.pdf", "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images"}, {"link": "https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html", "title": "Inceptionism: Going deeper into neural networks"}, {"link": "https://arxiv.org/pdf/1612.00005.pdf", "title": "Plug & play generative networks: Conditional iterative generation of images in latent space"}, {"link": "https://arxiv.org/pdf/1311.2901.pdf", "title": "Visualizing and understanding convolutional networks"}, {"link": "https://arxiv.org/pdf/1412.6806.pdf", "title": "Striving for simplicity: The all convolutional net"}, {"link": "https://arxiv.org/pdf/1610.02391.pdf", "title": "Grad-cam: Why did you say that? 
visual explanations from deep networks via gradient-based localization"}, {"link": "https://arxiv.org/pdf/1704.03296.pdf", "title": "Interpretable Explanations of Black Boxes by Meaningful Perturbation"}, {"link": "https://arxiv.org/pdf/1705.05598.pdf", "title": "PatternNet and PatternLRP--Improving the interpretability of neural networks"}, {"link": "https://arxiv.org/pdf/1711.00867.pdf", "title": "The (Un)reliability of saliency methods"}, {"link": "https://arxiv.org/pdf/1703.01365.pdf", "title": "Axiomatic attribution for deep networks"}, {"link": "http://www.jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf", "title": "Visualizing data using t-SNE"}, {"link": "https://arxiv.org/pdf/1606.07461.pdf", "title": "LSTMVis: A tool for visual analysis of hidden state dynamics in recurrent neural networks"}, {"link": "https://arxiv.org/pdf/1704.01942.pdf", "title": "ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models"}, {"link": "https://arxiv.org/pdf/1710.06501.pdf", "title": "Do convolutional neural networks learn class hierarchy?"}, {"link": "https://arxiv.org/pdf/1409.4842.pdf", "title": "Going deeper with convolutions"}, {"link": "http://distill.pub/2016/deconv-checkerboard", "title": "Deconvolution and Checkerboard Artifacts"}, {"link": "http://cnnlocalization.csail.mit.edu/Zhou_Learning_Deep_Features_CVPR_2016_paper.pdf", "title": "Learning deep features for discriminative localization"}, {"link": "http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.31.5407&rep=rep1&type=pdf", "title": "Information foraging"}, {"link": "https://arxiv.org/pdf/1711.11279.pdf", "title": "TCAV: Relative concept importance testing with Linear Concept Activation Vectors"}, {"link": "http://papers.nips.cc/paper/7188-svcca-singular-vector-canonical-correlation-analysis-for-deep-learning-dynamics-and-interpretability.pdf", "title": "SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability"}, {"link": "https://arxiv.org/pdf/1703.04730.pdf", "title": "Understanding Black-box Predictions via Influence Functions"}, {"link": "http://erichorvitz.com/steering_classification_2010.pdf", "title": "Interactive optimization for steering machine classification"}, {"link": "http://perer.org/papers/adamPerer-Prospector-CHI2016.pdf", "title": "Interacting with predictions: Visual inspection of black-box machine learning models"}, {"link": "https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/amershi.CHI2015.ModelTracker.pdf", "title": "Modeltracker: Redesigning performance analysis tools for machine learning"}, {"link": "https://arxiv.org/pdf/1704.05796.pdf", "title": "Network dissection: Quantifying interpretability of deep visual representations"}, {"link": "https://arxiv.org/pdf/1312.6199.pdf", "title": "Intriguing properties of neural networks"}, {"link": "https://arxiv.org/pdf/1301.3781.pdf", "title": "Efficient estimation of word representations in vector space"}, {"link": "https://arxiv.org/pdf/1511.06434.pdf", "title": "Unsupervised representation learning with deep convolutional generative adversarial networks"}, {"link": "https://arxiv.org/pdf/1511.05122.pdf", "title": "Adversarial manipulation of deep representations"}, {"link": "http://www2.parc.com/istl/groups/uir/publications/items/UIR-1986-02-Mackinlay-TOG-Automating.pdf", "title": "Automating the design of graphical presentations of relational information"}]} {"id": "d5e3298bed72334acaa6805ec946526e", "title": "Using Artificial Intelligence to Augment Human 
Intelligence", "url": "https://distill.pub/2017/aia", "source": "distill", "source_type": "blog", "text": "What are computers for?\n-------------------------\n\n\n\n Historically, different answers to this question – that is,\n different visions of computing – have helped inspire and\n determine the computing systems humanity has ultimately\n built. Consider the early electronic computers. ENIAC, the\n world’s first general-purpose electronic computer, was\n commissioned to compute artillery firing tables for the United\n States Army. Other early computers were also used to solve\n numerical problems, such as simulating nuclear explosions,\n predicting the weather, and planning the motion of rockets. The\n machines operated in a batch mode, using crude input and output\n devices, and without any real-time interaction. It was a vision\n of computers as number-crunching machines, used to speed up\n calculations that would formerly have taken weeks, months, or more\n for a team of humans.\n \n\n\n\n In the 1950s a different vision of what computers are for began to\n develop. That vision was crystallized in 1962, when Douglas\n Engelbart proposed that computers could be used as a way\n of augmenting human\n intellect. In this view, computers weren’t primarily\n tools for solving number-crunching problems. Rather, they were\n real-time interactive systems, with rich inputs and outputs, that\n humans could work with to support and expand their own\n problem-solving process. This vision of intelligence augmentation\n (IA) deeply influenced many others, including researchers such as\n Alan Kay at Xerox PARC, entrepreneurs such as Steve Jobs at Apple,\n and led to many of the key ideas of modern computing systems. Its\n ideas have also deeply influenced digital art and music, and\n fields such as interaction design, data visualization,\n computational creativity, and human-computer interaction.\n \n\n\n\n Research on IA has often been in competition with research on\n artificial intelligence (AI): competition for funding, competition\n for the interest of talented researchers. Although there has\n always been overlap between the fields, IA has typically focused\n on building systems which put humans and machines to work\n together, while AI has focused on complete outsourcing of\n intellectual tasks to machines. In particular, problems in AI are\n often framed in terms of matching or surpassing human performance:\n beating humans at chess or Go; learning to recognize speech and\n images or translating language as well as humans; and so on.\n \n\n\n\n This essay describes a new field, emerging today out of a\n synthesis of AI and IA. For this field, we suggest the\n name *artificial intelligence augmentation* (AIA): the use\n of AI systems to help develop new methods for intelligence\n augmentation. This new field introduces important new fundamental\n questions, questions not associated to either parent field. We\n believe the principles and systems of AIA will be radically\n different to most existing systems.\n \n\n\n\n Our essay begins with a survey of recent technical work hinting at\n artificial intelligence augmentation, including work\n on *generative interfaces* – that is, interfaces\n which can be used to explore and visualize generative machine\n learning models. 
Such interfaces develop a kind of cartography of\n generative models, ways for humans to explore and make meaning\n from those models, and to incorporate what those models\n “know” into their creative work.\n \n\n\n\n Our essay is not just a survey of technical work. We believe now\n is a good time to identify some of the broad, fundamental\n questions at the foundation of this emerging field. To what\n extent are these new tools enabling creativity? Can they be used\n to generate ideas which are truly surprising and new, or are the\n ideas cliches, based on trivial recombinations of existing ideas?\n Can such systems be used to develop fundamental new interface\n primitives? How will those new primitives change and expand the\n way humans think?\n \n\n\n\n Using generative models to invent meaningful creative operations\n------------------------------------------------------------------\n\n\n\n Let’s look at an example where a machine learning model makes a\n new type of interface possible. To understand the interface,\n imagine you’re a type designer, working on creating a new\n fontWe shall egregiously abuse the distinction between\n a font and a typeface. Apologies to any type designers who may be\n reading.. After sketching some initial designs, you\n wish to experiment with bold, italic, and condensed variations.\n Let’s examine a tool to generate and explore such variations, from\n any initial design. For reasons that will soon be explained the\n quality of results is quite crude; please bear with us.\n \n\n\n\n\n\n\n Of course, varying the bolding (i.e., the weight), italicization\n and width are just three ways you can vary a font. Imagine that\n instead of building specialized tools, users could build their own\n tool merely by choosing examples of existing fonts. For instance,\n suppose you wanted to vary the degree of serifing on a font. In\n the following, please select 5 to 10 sans-serif fonts from the top\n box, and drag them to the box on the left. Select 5 to 10 serif\n fonts and drag them to the box on the right. As you do this, a\n machine learning model running in your browser will automatically\n infer from these examples how to interpolate your starting font in\n either the serif or sans-serif direction:\n \n\n\n\n\n\n In fact, we used this same technique to build the earlier bolding\n italicization, and condensing tool. To do so, we used the\n following examples of bold and non-bold fonts, of italic and\n non-italic fonts, and of condensed and non-condensed fonts:\n \n\n\n\n\n\n To build these tools, we used what’s called a *generative\n model*; the particular model we use was trained\n by James Wexler. To\n understand generative models, consider that *a priori*\n describing a font appears to require a lot of data. For\n instance, if the font is 646464 by 646464 pixels, then we’d expect\n to need 64×64=4,09664 \\times 64 = 4,09664×64=4,096 parameters to describe a single\n glyph. But we can use a generative model to find a much simpler\n description.\n \n\n\n\n We do this by building a neural network which takes a small number\n of input variables, called *latent variables*, and produces\n as output the entire glyph. 
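 To make the shape of this computation concrete, here is a minimal, untrained stand-in for such a decoder (illustrative layer sizes only; the actual model described next was trained on real fonts, not initialized at random):

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(inputs, weights, biases):
    # One fully connected layer.
    return inputs @ weights + biases

def decode(latent, params):
    """Map a low-dimensional latent vector to a flattened 64x64 glyph."""
    hidden = np.tanh(dense(latent, params["w1"], params["b1"]))
    # Sigmoid squashes each output into [0, 1], interpreted as pixel intensity.
    pixels = 1.0 / (1.0 + np.exp(-dense(hidden, params["w2"], params["b2"])))
    return pixels.reshape(64, 64)

params = {
    "w1": rng.normal(scale=0.1, size=(40, 256)), "b1": np.zeros(256),
    "w2": rng.normal(scale=0.1, size=(256, 64 * 64)), "b2": np.zeros(64 * 64),
}

latent = rng.normal(size=40)    # one choice of latent variables ...
glyph = decode(latent, params)  # ... gives one (here untrained, so noisy) glyph
print(glyph.shape)              # (64, 64)
```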
For the particular model we use, we\n have 404040 latent space dimensions, and map that into the\n 4,0964,0964,096-dimensional space describing all the pixels in the glyph.\n In other words, the idea is to map a low-dimensional space into a\n higher-dimensional space:\n \n\n\n\n\n\n The generative model we use is a type of neural network known as\n a variational autoencoder\n (VAE). For our purposes, the details of the generative\n model aren’t so important. The important thing is that by\n changing the latent variables used as input, it’s possible to get\n different fonts as output. So one choice of latent variables will\n give one font, while another choice will give a different font:\n \n\n\n\n\n\n You can think of the latent variables as a compact, high-level\n representation of the font. The neural network takes that\n high-level representation and converts it into the full pixel\n data. It’s remarkable that just 404040 numbers can capture the\n apparent complexity in a glyph, which originally required 4,0964,0964,096\n variables.\n \n\n\n\n The generative model we use is learnt from a training set of more\n than 505050 thousand\n fonts Bernhardsson\n scraped from the open web. During training, the weights and\n biases in the network are adjusted so that the network can output\n a close approximation to any desired font from the training set,\n provided a suitable choice of latent variables is made. In some\n sense, the model is learning a highly compressed representation of\n all the training fonts.\n \n\n\n\n In fact, the model doesn’t just reproduce the training fonts. It\n can also generalize, producing fonts not seen in training. By\n being forced to find a compact description of the training\n examples, the neural net learns an abstract, higher-level model of\n what a font is. That higher-level model makes it possible to\n generalize beyond the training examples already seen, to produce\n realistic-looking fonts.\n \n\n\n\n Ideally, a good generative model would be exposed to a relatively\n small number of training examples, and use that exposure to\n generalize to the space of all possible human-readable fonts.\n That is, for any conceivable font – whether existing or\n perhaps even imagined in the future – it would be possible\n to find latent variables corresponding exactly to that font. Of\n course, the model we’re using falls far short of this ideal\n – a particularly egregious failure is that many fonts\n generated by the model omit the tail on the capital\n “Q” (you can see this in the examples above). Still,\n it’s useful to keep in mind what an ideal generative model would\n do.\n \n\n\n\n Such generative models are similar in some ways to how scientific\n theories work. Scientific theories often greatly simplify the\n description of what appear to be complex phenomena, reducing large\n numbers of variables to just a few variables from which many\n aspects of system behavior can be deduced. Furthermore, good\n scientific theories sometimes enable us to generalize to discover\n new phenomena.\n \n\n\n\n As an example, consider ordinary material objects. Such objects\n have what physicists call a *phase* – they may be a\n liquid, a solid, a gas, or perhaps something more exotic, like a\n superconductor\n or [Bose-Einstein\n condensate](https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein_condensate). *A priori*, such systems seem immensely\n complex, involving perhaps 102310^{23}1023 or so molecules. 
But the\n laws of thermodynamics and statistical mechanics enable us to find\n a simpler description, reducing that complexity to just a few\n variables (temperature, pressure, and so on), which encompass much\n of the behavior of the system. Furthermore, sometimes it’s\n possible to generalize, predicting unexpected new phases of\n matter. For example, in 1924, physicists used thermodynamics and\n statistical mechanics to predict a remarkable new phase of matter,\n Bose-Einstein condensation, in which a collection of atoms may all\n occupy identical quantum states, leading to surprising large-scale\n quantum interference effects. We’ll come back to this predictive\n ability in our later discussion of creativity and generative\n models.\n \n\n\n\n Returning to the nuts and bolts of generative models, how can we\n use such models to do example-based reasoning like that in the\n tool shown above? Let’s consider the case of the bolding tool. In\n that instance, we take the average of all the latent vectors for\n the user-specified bold fonts, and the average for all the\n user-specified non-bold fonts. We then compute the difference\n between these two average vectors:\n \n\n\n\n\n\n We’ll refer to this as the *bolding vector*. To make some\n given font bolder, we simply add a little of the bolding vector to\n the corresponding latent vector, with the amount of bolding vector\n added controlling the boldness of the resultIn\n practice, sometimes a slightly different procedure is used. In\n some generative models the latent vectors satisfy some constraints\n – for instance, they may all be of the same length. When\n that’s the case, as in our model, a more sophisticated\n “adding” operation must be used, to ensure the length\n remains the same. But conceptually, the picture of adding the\n bolding vector is the right way to think.:\n \n\n\n\n\n\n This technique was introduced\n by Larsen *et al*, and\n vectors like the bolding vector are sometimes called\n *attribute vectors*. The same idea is use to implement all\n the tools we’ve shown. That is, we use example fonts to creating\n a bolding vector, an italicizing vector, a condensing vector, and\n a user-defined serif vector. The interface thus provides a way of\n exploring the latent space in those four directions.\n \n\n\n\n The tools we’ve shown have many drawbacks. Consider the following\n example, where we start with an example glyph, in the middle, and\n either increase or decrease the bolding (on the right and left,\n respectively):\n \n\n\n\n\n\n Examining the glyphs on the left and right we see many unfortunate\n artifacts. Particularly for the rightmost glyph, the edges start to get\n rough, and the serifs begin to disappear. A better generative\n model would reduce those artifacts. That’s a good long-term\n research program, posing many intriguing problems. But even with\n the model we have, there are also some striking benefits to the\n use of the generative model.\n \n\n\n\n To understand these benefits, consider a naive approach to\n bolding, in which we simply add some extra pixels around a glyph’s\n edges, thickening it up. While this thickening perhaps matches a\n non-expert’s way of thinking about type design, an expert does\n something much more involved. In the following we show the\n results of this naive thickening procedure versus what is actually\n done, for Georgia and Helvetica:\n \n\n\n\n\n\n As you can see, the naive bolding procedure produces quite\n different results, in both cases. 
For example, in Georgia, the\n left stroke is only changed slightly by bolding, while the right\n stroke is greatly enlarged, but only on one side. In both\n fonts, bolding doesn’t change the height of the font, while the\n naive approach does.\n \n\n\n\n As these examples show, good bolding is *not* a trivial\n process of thickening up a font. Expert type designers have many\n heuristics for bolding, heuristics inferred from much previous\n experimentation, and careful study of historical\n examples. Capturing all those heuristics in a conventional program\n would involve immense work. The benefit of using the generative\n model is that it automatically learns many such heuristics.\n \n\n\n\n For example, a naive bolding tool would rapidly fill in the\n enclosed negative space in the enclosed upper region of the letter\n “A”. The font tool doesn’t do this. Instead, it goes\n to some trouble to preserve the enclosed negative space, moving\n the A’s bar down, and filling out the interior strokes more slowly\n than the exterior. This principle is evident in the examples\n shown above, especially Helvetica, and it can also be seen in the\n operation of the font tool:\n \n\n\n\n\n\n The heuristic of preserving enclosed negative space is not *a\n priori* obvious. However, it’s done in many professionally\n designed fonts. If you examine examples like those shown above\n it’s easy to see why: it improves legibility. During training,\n our generative model has automatically inferred this principle\n from the examples it’s seen. And our bolding interface then makes\n this available to the user.\n \n\n\n\n In fact, the model captures many other heuristics. For instance,\n in the above examples the heights of the fonts are (roughly)\n preserved, which is the norm in professional font design. Again,\n what’s going on isn’t just a thickening of the font, but rather\n the application of a more subtle heuristic inferred by the\n generative model. Such heuristics can be used to create fonts\n with properties which would otherwise be unlikely to occur to\n users. Thus, the tool expands ordinary people’s ability to\n explore the space of meaningful fonts.\n \n\n\n\n The font tool is an example of a kind of cognitive technology. In\n particular, the primitive operations it contains can be\n internalized as part of how a user thinks. In this it resembles a\n program such as *Photoshop* or a spreadsheet or 3D graphics\n programs. Each provides a novel set of interface primitives,\n primitives which can be internalized by the user as fundamental\n new elements in their thinking. This act of internalization of new\n primitives is fundamental to much work on intelligence\n augmentation.\n \n\n\n\n The ideas shown in the font tool can be extended to other domains.\n Using the same interface, we can use a generative model to\n manipulate images of human faces using qualities such as\n expression, gender, or hair color. Or to manipulate sentences\n using length, sarcasm, or tone. 
Or to manipulate molecules using chemical properties:\n \n\n\n Images from *Sampling Generative Networks* by White.\n \n\n Sentence from *Pride and Prejudice* by Jane Austen. Interpolated by the authors. Inspired by experiments done by the novelist Robin Sloan.\n \n\n Images from *Automatic chemical design using a data-driven continuous representation of molecules* by Gómez-Bombarelli *et al*.\n \n\n\n\n Such generative interfaces provide a kind of cartography of generative models, ways for humans to explore and make meaning using those models.\n \n\n\n\n We saw earlier that the font model automatically infers relatively deep principles about font design, and makes them available to users. While it’s great that such deep principles can be inferred, sometimes such models infer other things that are wrong, or undesirable. For example, White points out that adding a smile vector in some face models will make faces not just smile more, but also appear more feminine. Why? Because in the training data more women than men were smiling. So these models may not just learn deep facts about the world, they may also internalize prejudices or erroneous beliefs. Once such a bias is known, it is often possible to make corrections. But finding those biases requires careful auditing of the models, and it is not yet clear how we can ensure such audits are exhaustive.\n \n\n\n\n More broadly, we can ask why attribute vectors work, when they work, and when they fail. At the moment, the answers to these questions are poorly understood.\n \n\n\n\n For an attribute vector to work, we must be able to take any starting font and construct the corresponding bold version by adding the *same* vector in the latent space. However, *a priori* there is no reason to expect that displacing by a single constant vector will work. It may be that we should displace in many different ways. For instance, the heuristics used to bold serif and sans-serif fonts are quite different, and so it seems likely that very different displacements would be involved:\n \n\n\n\n Of course, we could do something more sophisticated than using a single constant attribute vector. Given pairs of example fonts (unbold, bold) we could train a machine learning algorithm to take as input the latent vector for the unbolded version and output the latent vector for the bolded version. With additional training data about font weights, the machine learning algorithm could learn to generate fonts of arbitrary weight. Attribute vectors are just an extremely simple approach to this kind of operation.\n \n\n\n\n For these reasons, it seems unlikely that attribute vectors will last as an approach to manipulating high-level features. Over the next few years much better approaches will be developed. However, we can still expect interfaces offering operations broadly similar to those sketched above, allowing access to high-level and potentially user-defined concepts. 
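 To emphasize how simple the basic attribute-vector recipe is, here is a minimal sketch (the `encode` and `decode` functions stand in for the generative model’s encoder and decoder, and we ignore the length-normalization caveat from the earlier footnote):

```python
import numpy as np

def attribute_vector(encode, positive_examples, negative_examples):
    """Average latent vector of the 'has the attribute' examples minus the
    average latent vector of the 'lacks the attribute' examples."""
    positive = np.mean([encode(x) for x in positive_examples], axis=0)
    negative = np.mean([encode(x) for x in negative_examples], axis=0)
    return positive - negative

def apply_attribute(encode, decode, glyph, direction, strength):
    """Move a glyph's latent vector along an attribute direction and decode.
    strength > 0 adds the attribute; strength < 0 removes it."""
    latent = encode(glyph) + strength * direction
    return decode(latent)

# Usage sketch (bold_fonts, regular_fonts and my_glyph are assumed to exist):
# bolding = attribute_vector(encode, bold_fonts, regular_fonts)
# bolder_glyph = apply_attribute(encode, decode, my_glyph, bolding, strength=0.5)
```

 Everything interesting lives in the trained generative model; the interface logic itself is a handful of vector operations.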
That interface pattern doesn’t\n depend on the technical details of attribute vectors.\n \n\n\n\n Interactive Generative Adversarial Models\n-------------------------------------------\n\n\n\n Let’s look at another example using machine learning models to\n augment human creativity. It’s the interactive generative\n adversarial networks, or iGANs, introduced\n by Zhu *et al* in 2016.\n \n\n\n\n One of the examples of Zhu *et al* is the use of iGANs in\n an interface to generate images of consumer products such as\n shoes. Conventionally, such an interface would require the\n programmer to write a program containing a great deal of knowledge\n about shoes: soles, laces, heels, and so on. Instead of doing\n this, Zhu *et al* train a generative model using 505050\n thousand images of shoes, downloaded from Zappos. They then use\n that generative model to build an interface that lets a user\n roughly sketch the shape of a shoe, the sole, the laces, and so\n on:\n \n\n\n\n\nExcerpted from Zhu *et\n al*.\n\n\n The visual quality is low, in part because the generative model\n Zhu *et al* used is outdated by modern (2017) standards\n – with more modern models, the visual quality would be much\n higher.\n \n\n\n\n But the visual quality is not the point. Many interesting things\n are going on in this prototype. For instance, notice how the\n overall shape of the shoe changes considerably when the sole is\n filled in – it becomes narrower and sleeker. Many small\n details are filled in, like the black piping on the top of the\n white sole, and the red coloring filled in everywhere on the\n shoe’s upper. These and other facts are automatically deduced\n from the underlying generative model, in a way we’ll describe\n shortly.\n \n\n\n\n The same interface may be used to sketch landscapes. The only\n difference is that the underlying generative model has been\n trained on landscape images rather than images of shoes. In this\n case it becomes possible to sketch in just the colors associated\n to a landscape. For example, here’s a user sketching in some green\n grass, the outline of a mountain, some blue sky, and snow on the\n mountain:\n \n\n\n\n\nExcerpted from Zhu *et\n al*.\n\n\n The generative models used in these interfaces are different than\n for our font model. Rather than using variational autoencoders,\n they’re based on generative\n adversarial networks (GANs). But the underlying idea is\n still to find a low-dimensional latent space which can be used to\n represent (say) all landscape images, and map that latent space to\n a corresponding image. Again, we can think of points in the\n latent space as a compact way of describing landscape images.\n \n\n\n\n Roughly speaking, the way the iGANs works is as follows. Whatever\n the current image is, it corresponds to some point in the latent\n space:\n \n\n\n\n\n\n Suppose, as happened in the earlier video, the user now sketches\n in a stroke outlining the mountain shape. We can think of the\n stroke as a constraint on the image, picking out a subspace of the\n latent space, consisting of all points in the latent space whose\n image matches that outline:\n \n\n\n\n\n\n The way the interface works is to find a point in the latent space\n which is near to the current image, so the image is not changed\n too much, but also coming close to satisfying the imposed\n constraints. This is done by optimizing an objective function\n which combines the distance to each of the imposed constraints, as\n well as the distance moved from the current point. 
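 In pseudocode, that optimization might look something like the following sketch (hypothetical names and a simple squared-error constraint penalty; the actual system of Zhu *et al* differs in its details):

```python
import torch

def edit_latent(generator, z_current, constraints, steps=200, lr=0.05, stay_close=1.0):
    """Gradient-descend on a latent vector so the generated image comes close to
    satisfying the user's constraints while staying near the current point.

    `generator` maps a latent vector to an image tensor; each constraint is a
    (mask, target) pair asking the masked pixels to match the target values.
    """
    z = z_current.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        image = generator(z)
        loss = stay_close * torch.sum((z - z_current) ** 2)       # don't move too far
        for mask, target in constraints:
            loss = loss + torch.sum(mask * (image - target) ** 2)  # satisfy each stroke
        loss.backward()
        optimizer.step()
    return z.detach()
```

 The `stay_close` weight is the knob trading off fidelity to the current image against fidelity to the user’s strokes.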
If there’s\n just a single constraint, say, corresponding to the mountain\n stroke, this looks something like the following:\n \n\n\n\n\n\n We can think of this, then, as a way of applying constraints to\n the latent space to move the image around in meaningful ways.\n \n\n\n\n The iGANs have much in common with the font tool we showed\n earlier. Both make available operations that encode much subtle\n knowledge about the world, whether it be learning to understand\n what a mountain looks like, or inferring that enclosed negative\n space should be preserved when bolding a font. Both the iGANs and\n the font tool provide ways of understanding and navigating a\n high-dimensional space, keeping us on the natural space of fonts\n or shoes or landscapes. As Zhu *et al* remark:\n \n\n\n\n> \n> [F]or most of us, even a simple image manipulation in Photoshop\n> presents insurmountable difficulties… any less-than-perfect\n> edit immediately makes the image look completely unrealistic. To\n> put another way, classic visual manipulation paradigm does not\n> prevent the user from “falling off” the manifold of\n> natural images.\n> \n\n\n\n Like the font tool, the iGANs is a cognitive technology. Users\n can internalize the interface operations as new primitive elements\n in their thinking. In the case of shoes, for example, they can\n learn to think in terms of the difference they want to apply,\n adding a heel, or a higher top, or a special highlight. This is\n richer than the traditional way non-experts think about shoes\n (“Size 11, black” *etc*). To the extent that\n non-experts do think in more sophisticated ways –\n “make the top a little higher and sleeker” –\n they get little practice in thinking this way, or seeing the\n consequences of their choices. Having an interface like this\n enables easier exploration, the ability to develop idioms and the\n ability to plan, to swap ideas with friends, and so on.\n \n\n\n\n Two models of computation\n---------------------------\n\n\n\n Let’s revisit the question we began the essay with, the question\n of what computers are for, and how this relates to intelligence\n augmentation.\n \n\n\n\n One common conception of computers is that they’re problem-solving\n machines: “computer, what is the result of firing this\n artillery shell in such-and-such a wind [and so on]?”;\n “computer, what will the maximum temperature in Tokyo be in\n 5 days?”; “computer, what is the best move to take\n when the Go board is in this position?”; “computer,\n how should this image be classified?”; and so on.\n \n\n\n\n This is a conception common to both the early view of computers as\n number-crunchers, and also in much work on AI, both historically\n and today. It’s a model of a computer as a way of outsourcing\n cognition. In speculative depictions of possible future AI,\n this *cognitive outsourcing* model often shows up in the\n view of an AI as an oracle, able to solve some large class of\n problems with better-than-human performance.\n \n\n\n\n But a very different conception of what computers are for is\n possible, a conception much more congruent with work on\n intelligence augmentation.\n \n\n\n\n To understand this alternate view, consider our subjective\n experience of thought. For many people, that experience is verbal:\n they think using language, forming chains of words in their heads,\n similar to sentences in speech or written on a page. For other\n people, thinking is a more visual experience, incorporating\n representations such as graphs and maps. 
Still other people mix\n mathematics into their thinking, using algebraic expressions or\n diagrammatic techniques, such as Feynman diagrams and Penrose\n diagrams.\n \n\n\n\n In each case, we’re thinking using representations invented by\n other people: words, graphs, maps, algebra, mathematical diagrams,\n and so on. We internalize these cognitive technologies as we grow\n up, and come to use them as a kind of substrate for our thinking.\n \n\n\n\n For most of history, the range of available cognitive technologies\n has changed slowly and incrementally. A new word will be\n introduced, or a new mathematical symbol. More rarely, a radical\n new cognitive technology will be developed. For example, in 1637\n Descartes published his “Discourse on Method”,\n explaining how to represent geometric ideas using algebra, and\n vice versa:\n \n\n\n\n\n This enabled a radical change and expansion in how we think about\n both geometry and algebra.\n \n\n\n\n Historically, lasting cognitive technologies have been invented\n only rarely. But modern computers are a meta-medium enabling the\n rapid invention of many new cognitive technologies. Consider a\n relatively banal example, such\n as *Photoshop*. Adept *Photoshop* users routinely\n have formerly impossible thoughts such as: “let’s apply the\n clone stamp to the such-and-such layer.”. That’s an\n instance of a more general class of thought: “computer, [new\n type of action] this [new type of representation for a newly\n imagined class of object]”. When that happens, we’re using\n computers to expand the range of thoughts we can think.\n \n\n\n\n It’s this kind of *cognitive transformation* model which\n underlies much of the deepest work on intelligence augmentation.\n Rather than outsourcing cognition, it’s about changing the\n operations and representations we use to think; it’s about\n changing the substrate of thought itself. And so while cognitive\n outsourcing is important, this cognitive transformation view\n offers a much more profound model of intelligence augmentation.\n It’s a view in which computers are a means to change and expand\n human thought itself.\n \n\n\n\n Historically, cognitive technologies were developed by human\n inventors, ranging from the invention of writing in Sumeria and\n Mesoamerica, to the modern interfaces of designers such as Douglas\n Engelbart, Alan Kay, and others.\n \n\n\n\n Examples such as those described in this essay suggest that AI\n systems can enable the creation of new cognitive technologies.\n Things like the font tool aren’t just oracles to be consulted when\n you want a new font. Rather, they can be used to explore and\n discover, to provide new representations and operations, which can\n be internalized as part of the user’s own thinking. And while\n these examples are in their early stages, they suggest AI is not\n just about cognitive outsourcing. A different view of AI is\n possible, one where it helps us invent new cognitive technologies\n which transform the way we think.\n \n\n\n\n In this essay we’ve focused on a small number of examples, mostly\n involving exploration of the latent space. There are many other\n examples of artificial intelligence augmentation. 
To give some\n flavor, without being comprehensive:\n the sketch-rnn system, for neural\n network assisted drawing;\n the Wekinator, which enables\n users to rapidly build new musical instruments and artistic\n systems; TopoSketch, for developing\n animations by exploring latent spaces; machine learning models for\n designing overall typographic\n layout; and a generative model which enables\n interpolation between musical\n phrases. In each case, the systems use machine learning\n to enable new primitives which can be integrated into the user’s\n thinking. More broadly, artificial intelligence augmentation will\n draw on fields such as computational\n creativity and interactive machine\n learning.\n \n\n\n\n Finding powerful new primitives of thought\n--------------------------------------------\n\n\n\n We’ve argued that machine learning systems can help create\n representations and operations which serve as new primitives in\n human thought. What properties should we look for in such new\n primitives? This is too large a question to be answered\n comprehensively in a short essay. But we will explore it briefly.\n \n\n\n\n Historically, important new media forms often seem strange when\n introduced. Many such stories have passed into popular culture:\n the near riot at the premiere of Stravinsky and Nijinksy’s\n “Rite of Spring”; the consternation caused by the\n early cubist paintings, leading\n *The New York Times* to\n comment: “What do they mean? Have those\n responsible for them taken leave of their senses? Is it art or\n madness? Who knows?”\n \n\n\n\n Another example comes from physics. In the 1940s, different\n formulations of the theory of quantum electrodynamics were\n developed independently by the physicists Julian Schwinger,\n Shin’ichirō Tomonaga, and Richard Feynman. In their work,\n Schwinger and Tomonaga used a conventional algebraic approach,\n along lines similar to the rest of physics. Feynman used a more\n radical approach, based on what are now known as Feynman diagrams,\n for depicting the interaction of light and matter:\n \n\n\n\n![](images/feynmann-diagram.svg)\nImage by [Joel\n Holdsworth](https://commons.wikimedia.org/w/index.php?curid=1764161)), licensed under a Creative Commons\n Attribution-Share Alike 3.0 Unported license\n \n\n\n Initially, the Schwinger-Tomonaga approach was easier for other\n physicists to understand. When Feynman and Schwinger presented\n their work at a 1948 workshop, Schwinger was immediately\n acclaimed. By contrast, Feynman left his audience mystified. As\n James Gleick put it in his biography of\n Feynman:\n \n\n\n\n> \n> It struck Feynman that everyone had a favorite principle or\n> theorem and he was violating them all… Feynman knew he had\n> failed. At the time, he was in anguish. Later he said simply:\n> “I had too much stuff. My machines came from too far\n> away.”\n> \n\n\n\n Of course, strangeness for strangeness’s sake alone is not\n useful. But these examples suggest that breakthroughs in\n representation often appear strange at first. Is there any\n underlying reason that is true?\n \n\n\n\n Part of the reason is because if some representation is truly new,\n then it will appear different than anything you’ve ever seen\n before. Feynman’s diagrams, Picasso’s paintings, Stravinsky’s\n music: all revealed genuinely new ways of making meaning. Good\n representations sharpen up such insights, eliding the familiar to\n show that which is new as vividly as possible. 
But because of\n that emphasis on unfamiliarity, the representation will seem\n strange: it shows relationships you’ve never seen before. In some\n sense, the task of the designer is to identify that core\n strangeness, and to amplify it as much as possible.\n \n\n\n\n Strange representations are often difficult to understand. At\n first, physicists preferred Schwinger-Tomonaga to Feynman. But as\n Feynman’s approach was slowly understood by physicists, they\n realized that although Schwinger-Tomonaga and Feynman were\n mathematically equivalent, Feynman was more powerful. As Gleick\n puts it:\n \n\n\n\n> \n> Schwinger’s students at Harvard were put at a competitive\n> disadvantage, or so it seemed to their fellows elsewhere, who\n> suspected them of surreptitiously using the diagrams anyway. This\n> was sometimes true… Murray Gell-Mann later spent a semester\n> staying in Schwinger’s house and loved to say afterward that he\n> had searched everywhere for the Feynman diagrams. He had not\n> found any, but one room had been locked…\n> \n\n\n\n These ideas are true not just of historical representations, but\n also of computer interfaces. However, our advocacy of strangeness\n in representation contradicts much conventional wisdom about\n interfaces, especially the widely-held belief that they should be\n “user friendly”, i.e., simple and immediately useable\n by novices. That most often means the interface is cliched, built\n from conventional elements combined in standard ways. But while\n using a cliched interface may be easy and fun, it’s an ease\n similar to reading a formulaic romance novel. It means the\n interface does not reveal anything truly surprising about its\n subject area. And so it will do little to deepen the user’s\n understanding, or to change the way they think. For mundane tasks\n that is fine, but for deeper tasks, and for the longer term, you\n want a better interface.\n \n\n\n\n Ideally, an interface will surface the deepest principles\n underlying a subject, revealing a new world to the user. When you\n learn such an interface, you internalize those principles, giving\n you more powerful ways of reasoning about that world. Those\n principles are the diffs in your understanding. They’re all you\n really want to see, everything else is at best support, at worst\n unimportant dross. The purpose of the best interfaces isn’t to be\n user-friendly in some shallow sense. It’s to be user-friendly in\n a much stronger sense, reifying deep\n principles about the world, making them the working\n conditions in which users live and create. At that point what once\n appeared strange can instead becomes comfortable and familiar,\n part of the pattern of thoughtA powerful instance of\n these ideas is when an interface reifies general-purpose\n principles. An example is an\n interface one of us developed\n based on the principle of conservation of energy. Such\n general-purpose principles generate multiple unexpected\n relationships between the entities of a subject, and so are a\n particularly rich source of insights when reified in an\n interface..\n \n\n\n\n What does this mean for the use of AI models for intelligence\n augmentation?\n \n\n\n\n Aspirationally, as we’ve seen, our machine learning models will\n help us build interfaces which reify deep principles in ways\n meaningful to the user. 
For that to happen, the models have to\n discover deep principles about the world, recognize those\n principles, and then surface them as vividly as possible in an\n interface, in a way comprehensible by the user.\n \n\n\n\n Of course, this is a tall order! The examples we’ve shown are just\n barely beginning to do this. It’s true that our models do\n sometimes discover relatively deep principles, like the\n preservation of enclosed negative space when bolding a font. But\n this is merely implicit in the model. And while we’ve built a tool\n which takes advantage of such principles, it’d be better if the\n model automatically inferred the important principles learned, and\n found ways of explicitly surfacing them through the interface.\n (Encouraging progress toward this has been made\n by InfoGANs, which use\n information-theoretic ideas to find structure in the latent\n space.) Ideally, such models would start to get at true\n explanations, not just in a static form, but in a dynamic form,\n manipulable by the user. But we’re a long way from that point.\n \n\n\n\n Do these interfaces inhibit creativity?\n-----------------------------------------\n\n\n\n It’s tempting to be skeptical of the expressiveness of the\n interfaces we’ve described. If an interface constrains us to\n explore only the natural space of images, does that mean we’re\n merely doing the expected? Does it mean these interfaces can only\n be used to generate visual cliches? Does it prevent us from\n generating anything truly new, from doing truly creative work?\n \n\n\n\n To answer these questions, it’s helpful to identify two different\n modes of creativity. This two-mode model is over-simplified:\n creativity doesn’t fit so neatly into two distinct categories. Yet\n the model nonetheless clarifies the role of new interfaces in\n creative work.\n \n\n\n\n The first mode of creativity is the everyday creativity of a\n craftsperson engaged in their craft. Much of the work of a font\n designer, for example, consists of competent recombination of the\n best existing practices. Such work typically involves many\n creative choices to meet the intended design goals, but not\n developing key new underlying principles.\n \n\n\n\n For such work, the generative interfaces we’ve been discussing are\n promising. While they currently have many limitations, future\n research will identity and fix many deficiencies. This is\n happening rapidly with GANs: the original\n GANs had many limitations,\n but models soon appeared that were better adapted to\n images, improved the\n resolution, reduced artifactsSo much work has been\n done on improving resolution and reducing artifacts it seems\n unfair to single out any small set of papers, and to omit the many\n others., and so on. With enough iterations it’s\n plausible these generative interfaces will become powerful tools\n for craft work.\n \n\n\n\n The second mode of creativity aims toward developing new\n principles that fundamentally change the range of creative\n expression. One sees this in the work of artists such as Picasso\n or Monet, who violated existing principles of painting, developing\n new principles which enabled people to see in new ways.\n \n\n\n\n Is it possible to do such creative work, while using a generative\n interface? 
Don’t such interfaces constrain us to the space of\n natural images, or natural fonts, and thus actively prevent us\n from exploring the most interesting new directions in creative\n work?\n \n\n\n\n The situation is more complex than this.\n \n\n\n\n In part, this is a question about the power of our generative\n models. In some cases, the model can only generate recombinations\n of existing ideas. This is a limitation of an ideal GAN, since a\n perfectly trained GAN generator will reproduce the training\n distribution. Such a model can’t directly generate an image based\n on new fundamental principles, because such an image wouldn’t look\n anything like it’s seen in its training data.\n \n\n\n\n Artists such as [Mario\n Klingemann](http://quasimondo.com/) and [Mike\n Tyka](http://www.miketyka.com/) are now using GANs to create interesting\n artwork. They’re doing that using “imperfect” GAN\n models, which they seem to be able to use to explore interesting\n new principles; it’s perhaps the case that bad GANs may be more\n artistically interesting than ideal GANs. Furthermore, nothing\n says an interface must only help us explore the latent space.\n Perhaps operations can be added which deliberately take us out\n of the latent space, or to less probable (and so more\n surprising) parts of the space of natural images.\n \n\n\n\n Of course, GANs are not the only generative models. In a\n sufficiently powerful generative model, the generalizations\n discovered by the model may contain ideas going beyond what humans\n have discovered. In that case, exploration of the latent space may\n enable us to discover new fundamental principles. The model would\n have discovered stronger abstractions than human experts. Imagine\n a generative model trained on paintings up until just before the\n time of the cubists; might it be that by exploring that model it\n would be possible to discover cubism? It would be an analogue to\n something like the prediction of Bose-Einstein condensation, as\n discussed earlier in the essay. Such invention is beyond today’s\n generative models, but seems a worthwhile aspiration for future\n models.\n \n\n\n\n Our examples so far have all been based on generative models. But\n there are some illuminating examples which are not based on\n generative models. Consider the pix2pix system developed\n by Isola *et al*. This\n system is trained on pairs of images, e.g., pairs showing the\n edges of a cat, and the actual corresponding cat. Once trained,\n it can be shown a set of edges and asked to generate an image for\n an actual corresponding cat. 
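 In outline, the training setup pairs each constraint image with its target and plays a generator off against a discriminator, as discussed further below. The following is a deliberately tiny sketch with stand-in data and layer sizes, not the actual architecture or objective of Isola *et al*:

```python
import torch
import torch.nn as nn

generator = nn.Sequential(           # edge map in, image out
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
)
discriminator = nn.Sequential(       # judges (edge map, image) pairs
    nn.Conv2d(1 + 3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 3, stride=2, padding=1),
)
bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

edges, real = torch.randn(8, 1, 64, 64), torch.randn(8, 3, 64, 64)  # stand-in batch

for step in range(100):
    fake = generator(edges)

    # Discriminator: real (edge map, image) pairs -> 1, generated pairs -> 0.
    d_real = discriminator(torch.cat([edges, real], dim=1))
    d_fake = discriminator(torch.cat([edges, fake.detach()], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to fool the discriminator on its own (edge map, output) pairs.
    g_score = discriminator(torch.cat([edges, fake], dim=1))
    g_loss = bce(g_score, torch.ones_like(g_score))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```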
It often does this quite well:\n \n\n\n#### Input\n\n\n#### Output\n\n\n![](images/cat-sample-input.jpg)\n![](images/cat-sample-output.jpg)\n\n[Live demo by Christopher Hesse](https://affinelayer.com/pixsrv/)\n\n\n\n\n When supplied with unusual constraints, pix2pix can produce striking images:\n \n\n\n![](images/bread-cat-input.jpg)\n![](images/bread-cat-output.jpg)\n[Bread cat by Ivy Tsai](https://twitter.com/ivymyt/status/834174687282241537)\n\n\n![](images/cat-beholder-input.jpg)\n![](images/cat-beholder-output.jpg)\n[Cat beholder by Marc Hesse](https://affinelayer.com/pixsrv/beholder.jpg)\n\n\n![](images/spiral-cat-input.jpg)\n![](images/spiral-cat-output.jpg)\nSpiral cat\n\n\n\n This is perhaps not high creativity of a Picasso-esque level. But it is still surprising. It’s certainly unlike images most of us have ever seen before. How do pix2pix and its human user achieve this kind of result?\n \n\n\n\n Unlike our earlier examples, pix2pix is not a generative model. This means it does not have a latent space or a corresponding space of natural images. Instead, there is a neural network, called, confusingly, a generator – this is not meant in the same sense as our earlier generative models – that takes as input the constraint image, and produces as output the filled-in image.\n \n\n\n\n The generator is trained adversarially against a discriminator network, whose job is to distinguish between pairs of images generated from real data, and pairs of images generated by the generator.\n \n\n\n\n While this sounds similar to a conventional GAN, there is a crucial difference: there is no latent vector input to the generator (Isola *et al* experimented with adding such a latent vector, but found it made little difference to the resulting images). Rather, there is simply an input constraint. When a human inputs a constraint unlike anything seen in training, the network is forced to improvise, doing the best it can to interpret that constraint according to the rules it has previously learned. The creativity is the result of a forced merger of knowledge inferred from the training data, together with novel constraints provided by the user. As a result, even relatively simple ideas – like the bread- and beholder-cats – can result in striking new types of images, images not within what we would previously have considered the space of natural images.\n \n\n\nConclusion\n----------\n\n\n\n It is conventional wisdom that AI will change how we interact with computers. Unfortunately, many in the AI community greatly underestimate the depth of interface design, often regarding it as a simple problem, mostly about making things pretty or easy-to-use. In this view, interface design is a problem to be handed off to others, while the hard work is to train some machine learning system.\n \n\n\n\n This view is incorrect. At its deepest, interface design means developing the fundamental primitives human beings think and create with. This is a problem whose intellectual genesis goes back to the inventors of the alphabet, of cartography, and of musical notation, as well as modern giants such as Descartes, Playfair, Feynman, Engelbart, and Kay. 
It is one of the hardest,\n most important and most fundamental problems humanity grapples\n with.\n \n\n\n\n As discussed earlier, in one common view of AI our computers will\n continue to get better at solving problems, but human beings will\n remain largely unchanged. In a second common view, human beings\n will be modified at the hardware level, perhaps directly through\n neural interfaces, or indirectly through whole brain emulation.\n \n\n\n\n We’ve described a third view, in which AIs actually change\n humanity, helping us invent new cognitive technologies, which\n expand the range of human thought. Perhaps one day those\n cognitive technologies will, in turn, speed up the development of\n AI, in a virtuous feedback cycle:\n \n\n\n\n![](images/cycle.svg)\n\n\n It would not be a Singularity in machines. Rather, it would be a\n Singularity in humanity’s range of thought. Of course, this loop\n is at present extremely speculative. The systems we’ve described\n can help develop more powerful ways of thinking, but there’s at\n most an indirect sense in which those ways of thinking are being\n used in turn to develop new AI systems.\n \n\n\n\n Of course, over the long run it’s possible that machines will\n exceed humans on all or most cognitive tasks. Even if that’s the\n case, cognitive transformation will still be a valuable end, worth\n pursuing in its own right. There is pleasure and value involved\n in learning to play chess or Go well, even if machines do it\n better. And in activities such as story-telling the benefit often\n isn’t so much the artifact produced as the process of construction\n itself, and the relationships forged. There is intrinsic value in\n personal change and growth, apart from instrumental benefits.\n \n\n\n\n The interface-oriented work we’ve discussed is outside the\n narrative used to judge most existing work in artificial\n intelligence. It doesn’t involve beating some benchmark for a\n classification or regression problem. It doesn’t involve\n impressive feats like beating human champions at games such as\n Go. Rather, it involves a much more subjective and\n difficult-to-measure criterion: is it helping humans think and\n create in new ways?\n \n\n\n\n This creates difficulties for doing this kind of work,\n particularly in a research setting. Where should one publish?\n What community does one belong to? What standards should be\n applied to judge such work? What distinguishes good work from\n bad?\n \n\n\n\n We believe that over the next few years a community will emerge\n which answers these questions. It will run workshops and\n conferences. It will publish work in venues such as Distill. Its\n standards will draw from many different communities: from the\n artistic and design and musical communities; from the mathematical\n community’s taste in abstraction and good definition; as well as\n from the existing AI and IA communities, including work on\n computational creativity and human-computer interaction. The\n long-term test of success will be the development of tools which\n are widely used by creators. Are artists using these tools to\n develop remarkable new styles? Are scientists in other fields\n using them to develop understanding in ways not otherwise\n possible? 
These are great aspirations, and require an approach\n that builds on conventional AI work, but also incorporates very\n different norms.", "date_published": "2017-12-04T20:00:00Z", "authors": ["Shan Carter", "Michael Nielsen"], "summaries": ["By creating user interfaces which let us work with the representations inside machine learning models, we can give people new tools for reasoning."], "doi": "10.23915/distill.00009", "journal_ref": "distill-pub", "bibliography": [{"link": "https://pair-code.github.io/font-explorer/", "title": "deeplearn.js font demo"}, {"link": "https://erikbern.com/2016/01/21/analyzing-50k-fonts-using-deep-neural-networks.html", "title": "Analyzing 50k fonts using deep neural networks"}, {"link": "http://arxiv.org/pdf/1609.04468.pdf", "title": "Sampling Generative Networks"}, {"link": "https://vimeo.com/232545219", "title": "Writing with the Machine"}, {"link": "http://arxiv.org/pdf/1610.02415.pdf", "title": "Automatic chemical design using a data-driven continuous representation of molecules"}, {"link": "http://arxiv.org/pdf/1704.03477.pdf", "title": "A Neural Representation of Sketch Drawings"}, {"link": "http://www.jon.gold/2016/05/robot-design-school/", "title": "Taking The Robots To Design School, Part 1"}, {"link": "https://nips2017creativity.github.io/doc/Hierarchical_Variational_Autoencoders_for_Music.pdf", "title": "Hierarchical Variational Autoencoders for Music"}, {"link": "http://query.nytimes.com/mem/archive-free/pdf?res=9D02E2D71131E233A2575BC0A9669D946096D6CF&mcubz=3", "title": "Eccentric School of Painting Increased Its Vogue in the Current Art Exhibition — What Its Followers Attempt to Do"}, {"link": "http://cognitivemedium.com/tat/index.html", "title": "Thought as a Technology"}, {"link": "http://arxiv.org/pdf/1511.06434.pdf", "title": "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks"}, {"link": "http://arxiv.org/pdf/1611.07004.pdf", "title": "Image-to-Image Translation with Conditional Adversarial Networks"}]} {"id": "5445aad20630afcd10626117b6df4dcc", "title": "Sequence Modeling with CTC", "url": "https://distill.pub/2017/ctc", "source": "distill", "source_type": "blog", "text": "Introduction\n------------\n\n\nConsider speech recognition. We have a dataset of audio clips and\ncorresponding transcripts. Unfortunately, we don’t know how the characters in\nthe transcript align to the audio. This makes training a speech recognizer\nharder than it might at first seem.\n\n\nWithout this alignment, the simple approaches aren’t available to us. We\ncould devise a rule like “one character corresponds to ten inputs”. But\npeople’s rates of speech vary, so this type of rule can always be broken.\nAnother alternative is to hand-align each character to its location in the\naudio. From a modeling standpoint this works well — we’d know the ground truth\nfor each input time-step. However, for any reasonably sized dataset this is\nprohibitively time consuming.\n\n\nThis problem doesn’t just turn up in speech recognition. We see it in many\nother places. Handwriting recognition from images or sequences of pen strokes\nis one example. 
Action labelling in videos is another.\n\n\n\n\n![](assets/handwriting_recognition.svg)\n\n**Handwriting recognition:** The input can be\n (x,y)(x,y)(x,y) coordinates of a pen stroke or\n pixels in an image.\n \n\n\n![](assets/speech_recognition.svg)\n\n**Speech recognition:** The input can be a spectrogram or some\n other frequency based feature extractor.\n \n\n\nConnectionist Temporal Classification (CTC) is a way to get around not\nknowing the alignment between the input and the output. As we’ll see, it’s\nespecially well suited to applications like speech and handwriting\nrecognition.\n\n\n\n\n---\n\n\nTo be a bit more formal, let’s consider mapping input sequences\nX=[x1,x2,…,xT]X = [x\\_1, x\\_2, \\ldots, x\\_T]X=[x1​,x2​,…,xT​], such as audio, to corresponding output\nsequences Y=[y1,y2,…,yU]Y = [y\\_1, y\\_2, \\ldots, y\\_U]Y=[y1​,y2​,…,yU​], such as transcripts.\nWe want to find an accurate mapping from XXX’s to YYY’s.\n\n\nThere are challenges which get in the way of us\nusing simpler supervised learning algorithms. In particular:\n\n\n* Both XXX and YYY\n can vary in length.\n* The ratio of the lengths of XXX and YYY\n can vary.\n* We don’t have an accurate alignment (correspondence of the elements) of\n XXX and Y.Y.Y.\n\n\nThe CTC algorithm overcomes these challenges. For a given XXX\nit gives us an output distribution over all possible YYY’s. We\ncan use this distribution either to *infer* a likely output or to assess\nthe *probability* of a given output.\n\n\nNot all ways of computing the loss function and performing inference are\ntractable. We’ll require that CTC do both of these efficiently.\n\n\n**Loss Function:** For a given input, we’d like to train our\nmodel to maximize the probability it assigns to the right answer. To do this,\nwe’ll need to efficiently compute the conditional probability\np(Y∣X).p(Y \\mid X).p(Y∣X). The function p(Y∣X)p(Y \\mid X)p(Y∣X) should\nalso be differentiable, so we can use gradient descent.\n\n\n**Inference:** Naturally, after we’ve trained the model, we\nwant to use it to infer a likely YYY given an X.X.X.\nThis means solving\n\nY∗=argmaxYp(Y∣X).\n Y^\\* \\enspace =\\enspace {\\mathop{\\text{argmax}}\\limits\\_{Y}} \\enspace p(Y \\mid X).\nY∗=Yargmax​p(Y∣X).\n\nIdeally Y∗Y^\\*Y∗ can be found efficiently. With CTC we’ll settle\nfor an approximate solution that’s not too expensive to find.\n\n\nThe Algorithm\n-------------\n\n\nThe CTC algorithm can assign a probability for any YYY\ngiven an X.X.X. The key to computing this probability is how CTC\nthinks about alignments between inputs and outputs. We’ll start by looking at\nthese alignments and then show how to use them to compute the loss function and\nperform inference.\n\n\n### Alignment\n\n\nThe CTC algorithm is *alignment-free* — it doesn’t require an\nalignment between the input and the output. However, to get the probability of\nan output given an input, CTC works by summing over the probability of all\npossible alignments between the two. We need to understand what these\nalignments are in order to understand how the loss function is ultimately\ncalculated.\n\n\nTo motivate the specific form of the CTC alignments, first consider a naive\napproach. Let’s use an example. Assume the input has length six and Y=Y\n=Y= [c, a, t]. 
One way to align XXX and YYY\nis to assign an output character to each input step and collapse repeats.\n\n\n\n![](assets/naive_alignment.svg)\n\nThis approach has two problems.\n\n\n* Often, it doesn’t make sense to force every input step to align to\n some output. In speech recognition, for example, the input can have stretches\n of silence with no corresponding output.\n* We have no way to produce outputs with multiple characters in a row.\n Consider the alignment [h, h, e, l, l, l, o]. Collapsing repeats will\n produce “helo” instead of “hello”.\n\n\nTo get around these problems, CTC introduces a new token to the set of\nallowed outputs. This new token is sometimes called the *blank* token. We’ll\nrefer to it here as ϵ.\\epsilon.ϵ. The\nϵ\\epsilonϵ token doesn’t correspond to anything and is simply\nremoved from the output.\n\n\nThe alignments allowed by CTC are the same length as the input. We allow any\nalignment which maps to YYY after merging repeats and removing\nϵ\\epsilonϵ tokens:\n\n\n\n![](assets/ctc_alignment_steps.svg)\n\nIf YYY has two of the same character in a row, then a valid\nalignment must have an ϵ\\epsilonϵ between them. With this rule\nin place, we can differentiate between alignments which collapse to “hello” and\nthose which collapse to “helo”.\n\n\nLet’s go back to the output [c, a, t] with an input of length six. Here are\na few more examples of valid and invalid alignments.\n\n\n\n![](assets/valid_invalid_alignments.svg)\n\nThe CTC alignments have a few notable properties. First, the allowed\nalignments between XXX and YYY are monotonic.\nIf we advance to the next input, we can keep the corresponding output the\nsame or advance to the next one. A second property is that the alignment of\nXXX to YYY is many-to-one. One or more input\nelements can align to a single output element but not vice-versa. This implies\na third property: the length of YYY cannot be greater than the\nlength of X.X.X.\n\n\n### Loss Function\n\n\nThe CTC alignments give us a natural way to go from probabilities at each\ntime-step to the probability of an output sequence.\n\n\n\n![](assets/full_collapse_from_audio.svg)\n\nTo be precise, the CTC objective\nfor a single (X,Y)(X, Y)(X,Y) pair is:\n\n\n\n\n\n\np(Y∣X)=p(Y \\mid X) \\;\\; =p(Y∣X)=\n\n\n∑A∈AX,Y\\sum\\_{A \\in \\mathcal{A}\\_{X,Y}}A∈AX,Y​∑​\n\n\n∏t=1Tpt(at∣X)\\prod\\_{t=1}^T \\; p\\_t(a\\_t \\mid X)t=1∏T​pt​(at​∣X)\n\n\n\n\n The CTC conditional **probability**\n\n\n**marginalizes** over the set of valid alignments\n \n\n computing the **probability** for a single alignment step-by-step.\n \n\n\n\nModels trained with CTC typically use a recurrent neural network (RNN) to\nestimate the per time-step probabilities, pt(at∣X).p\\_t(a\\_t \\mid X).pt​(at​∣X).\nAn RNN usually works well since it accounts for context in the input, but we’re\nfree to use any learning algorithm which produces a distribution over output\nclasses given a fixed-size slice of the input.\n\n\nIf we aren’t careful, the CTC loss can be very expensive to compute. We\ncould try the straightforward approach and compute the score for each alignment\nsumming them all up as we go. The problem is there can be a massive number of\nalignments.\n\n For a YYY of length UUU without any repeat\n characters and an XXX of length TTT the size\n of the set is (T+UT−U).{T + U \\choose T - U}.(T−UT+U​). 
For $T=100$ and $U=50$ this number is almost $10^{40}$.

For most problems this would be too slow.

Thankfully, we can compute the loss much faster with a dynamic programming algorithm. The key insight is that if two alignments have reached the same output at the same step, then we can merge them.

![](assets/all_alignments.svg)

Summing over all alignments can be very expensive.

![](assets/merged_alignments.svg)

Dynamic programming merges alignments, so it's much faster.

Since we can have an $\epsilon$ before or after any token in $Y$, it's easier to describe the algorithm using a sequence which includes them. We'll work with the sequence
$$Z = [\epsilon, y_1, \epsilon, y_2, \ldots, \epsilon, y_U, \epsilon]$$
which is $Y$ with an $\epsilon$ at the beginning, end, and between every character.

Let's let $\alpha$ be the score of the merged alignments at a given node. More precisely, $\alpha_{s,t}$ is the CTC score of the subsequence $Z_{1:s}$ after $t$ input steps. As we'll see, we'll compute the final CTC score, $P(Y \mid X)$, from the $\alpha$'s at the last time-step. As long as we know the values of $\alpha$ at the previous time-step, we can compute $\alpha_{s,t}.$ There are two cases.

**Case 1:**

![](assets/cost_no_skip.svg)

In this case, we can't jump over $z_{s-1}$, the previous token in $Z$. The first reason is that the previous token can be an element of $Y$, and we can't skip elements of $Y$. Since every element of $Y$ in $Z$ is followed by an $\epsilon$, we can identify this when $z_s = \epsilon$. The second reason is that we must have an $\epsilon$ between repeat characters in $Y$. We can identify this when $z_s = z_{s-2}$.

To ensure we don't skip $z_{s-1}$, we can either be there at the previous time-step or have already passed through at some earlier time-step. As a result there are two positions we can transition from:
$$\alpha_{s,t} = (\alpha_{s-1,t-1} + \alpha_{s,t-1}) \cdot p_t(z_s \mid X).$$
The first factor is the CTC probability of the two valid subsequences after $t-1$ input steps; the second is the probability of the current character at input step $t$.

![](assets/cost_regular.svg)

**Case 2:**

In the second case, we're allowed to skip the previous token in $Z$. We have this case whenever $z_{s-1}$ is an $\epsilon$ between unique characters. As a result there are three positions we could have come from at the previous step:
$$\alpha_{s,t} = (\alpha_{s-2,t-1} + \alpha_{s-1,t-1} + \alpha_{s,t-1}) \cdot p_t(z_s \mid X).$$
As before, the first factor is the CTC probability of the three valid subsequences after $t-1$ input steps, and the second is the probability of the current character at input step $t$.

Below is an example of the computation performed by the dynamic programming algorithm.
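Before the worked example, here is a minimal sketch of this $\alpha$ recursion in plain Python/NumPy. The function name `ctc_forward`, the `(T, K)` matrix `probs` of per-time-step probabilities $p_t(k \mid X)$, and the choice of index 0 for the blank are illustrative assumptions rather than anything from the article; for simplicity it also works in probability space rather than the log space recommended in the Practitioner's Guide below.

```python
import numpy as np

def ctc_forward(probs, labels, blank=0):
    """Sum the probabilities of all alignments of `labels` given `probs`.

    probs:  array of shape (T, K); probs[t, k] = p_t(k | X).
    labels: output token indices without blanks, e.g. [c, a, t].
    blank:  index of the epsilon (blank) token in the alphabet.
    """
    T = probs.shape[0]
    # Z = [eps, y_1, eps, y_2, ..., eps, y_U, eps]
    z = [blank]
    for y in labels:
        z += [y, blank]
    S = len(z)

    # alpha[s, t] = CTC score of Z_{1:s} after t input steps.
    alpha = np.zeros((S, T))
    # Two valid starting nodes: the initial blank or the first label.
    alpha[0, 0] = probs[0, z[0]]
    if S > 1:
        alpha[1, 0] = probs[0, z[1]]

    for t in range(1, T):
        for s in range(S):
            a = alpha[s, t - 1]                  # stay on z_s
            if s > 0:
                a += alpha[s - 1, t - 1]         # advance from z_{s-1}
            # Case 2: we may also skip z_{s-1} when it is a blank
            # between two *different* labels.
            if s > 1 and z[s] != blank and z[s] != z[s - 2]:
                a += alpha[s - 2, t - 1]
            alpha[s, t] = a * probs[t, z[s]]

    # Two valid final nodes: the last label or the trailing blank.
    last = alpha[S - 1, T - 1]
    return last + (alpha[S - 2, T - 1] if S > 1 else 0.0)
```

The two `if` branches are exactly Case 1 and Case 2 above, and the diagram that follows visualizes the same table of $\alpha$ values.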
Every valid alignment has a path in this graph.

Output: $Y = $ [a, b]. Input: $X$.

![](assets/ctc_cost.svg)

Node $(s, t)$ in the diagram represents $\alpha_{s,t}$, the CTC score of the subsequence $Z_{1:s}$ after $t$ input steps.

There are two valid starting nodes and two valid final nodes since the $\epsilon$ at the beginning and end of the sequence is optional. The complete probability is the sum of the two final nodes.

Now that we can efficiently compute the loss function, the next step is to compute a gradient and train the model. The CTC loss function is differentiable with respect to the per time-step output probabilities since it's just sums and products of them. Given this, we can analytically compute the gradient of the loss function with respect to the (unnormalized) output probabilities and from there run backpropagation as usual.

For a training set $\mathcal{D}$, the model's parameters are tuned to minimize the negative log-likelihood
$$\sum_{(X, Y) \in \mathcal{D}} -\log p(Y \mid X)$$
instead of maximizing the likelihood directly.

### Inference

After we've trained the model, we'd like to use it to find a likely output for a given input. More precisely, we need to solve
$$Y^* = \mathop{\text{argmax}}_{Y} \; p(Y \mid X).$$
One heuristic is to take the most likely output at each time-step. This gives us the alignment with the highest probability:
$$A^* = \mathop{\text{argmax}}_{A} \; \prod_{t=1}^{T} p_t(a_t \mid X).$$
We can then collapse repeats and remove $\epsilon$ tokens to get $Y$.

For many applications this heuristic works well, especially when most of the probability mass is allotted to a single alignment. However, this approach can sometimes miss easy-to-find outputs with much higher probability. The problem is that it doesn't take into account the fact that a single output can have many alignments.

Here's an example. Assume the alignments [a, a, $\epsilon$] and [a, a, a] individually have lower probability than [b, b, b], but the sum of their probabilities is actually greater than that of [b, b, b]. The naive heuristic will incorrectly propose $Y = $ [b] as the most likely hypothesis. It should have chosen $Y = $ [a]. To fix this, the algorithm needs to account for the fact that [a, a, a] and [a, a, $\epsilon$] collapse to the same output.

We can use a modified beam search to solve this. Given limited computation, the modified beam search won't necessarily find the most likely $Y$. It does, at least, have the nice property that we can trade off more computation (a larger beam size) for an asymptotically better solution.

A regular beam search computes a new set of hypotheses at each input step. The new set of hypotheses is generated from the previous set by extending each hypothesis with all possible output characters and keeping only the top candidates.

![](assets/beam_search.svg)

A standard beam search algorithm with an alphabet of $\{\epsilon, a, b\}$ and a beam size of three.

We can modify the vanilla beam search to handle multiple alignments mapping to the same output.
In this case, instead of keeping a list of alignments in the beam, we store the output prefixes after collapsing repeats and removing $\epsilon$ characters. At each step of the search we accumulate scores for a given prefix based on all the alignments which map to it.

The CTC beam search algorithm with an output alphabet $\{\epsilon, a, b\}$ and a beam size of three.

A proposed extension can map to two output prefixes if the character is a repeat. This is shown at $T=3$ in the figure above, where 'a' is proposed as an extension to the prefix [a]. Both [a] and [a, a] are valid outputs for this proposed extension.

When we extend [a] to produce [a, a], we only want to include the part of the previous score for alignments which end in $\epsilon$. Remember, the $\epsilon$ is required between repeat characters. Similarly, when we don't extend the prefix and produce [a], we should only include the part of the previous score for alignments which don't end in $\epsilon$.

Given this, we have to keep track of two probabilities for each prefix in the beam: the probability of all alignments which end in $\epsilon$, and the probability of all alignments which don't end in $\epsilon$. When we rank the hypotheses at each step before pruning the beam, we'll use their combined scores.

The implementation of this algorithm doesn't require much code, but it is dense and tricky to get right. Check out this [gist](https://gist.github.com/awni/56369a90d03953e370f3964c826ed4b0) for an example implementation in Python.

In some problems, such as speech recognition, incorporating a language model over the outputs significantly improves accuracy. We can include the language model as a factor in the inference problem:
$$Y^* = \mathop{\text{argmax}}_{Y} \; p(Y \mid X) \cdot p(Y)^\alpha \cdot L(Y)^\beta.$$
Here $p(Y \mid X)$ is the CTC conditional probability, $p(Y)^\alpha$ is the language model probability, and $L(Y)^\beta$ is the "word" insertion bonus.

The function $L(Y)$ computes the length of $Y$ in terms of the language model tokens and acts as a word insertion bonus. With a word-based language model, $L(Y)$ counts the number of words in $Y$. If we use a character-based language model, then $L(Y)$ counts the number of characters in $Y$. The language model scores are only included when a prefix is extended by a character (or word) and not at every step of the algorithm. This causes the search to favor shorter prefixes, as measured by $L(Y)$, since they don't include as many language model updates. The word insertion bonus helps with this. The parameters $\alpha$ and $\beta$ are usually set by cross-validation.

The language model scores and word insertion term can be included in the beam search. Whenever we propose to extend a prefix by a character, we can include the language model score for the new character given the prefix so far.

Properties of CTC
-----------------

We mentioned a few important properties of CTC so far.
Here we’ll go\ninto more depth on what these properties are and what trade-offs they offer.\n\n\n### Conditional Independence\n\n\nOne of the most commonly cited shortcomings of CTC is the conditional\nindependence assumption it makes.\n\n\n\n\n![](assets/conditional_independence.svg)\n\n Graphical model for CTC.\n \n\nThe model assumes that every output is conditionally independent of\nthe other outputs given the input. This is a bad assumption for many\nsequence to sequence problems.\n\n\n\nSay we had an audio clip of someone saying “triple A”.\n Another valid transcription could\nbe “AAA”. If the first letter of the predicted transcription is ‘A’, then\nthe next letter should be ‘A’ with high probability and ‘r’ with low\nprobability. The conditional independence assumption does not allow for this.\n\n\n\n\n![](assets/triple_a.svg)\n\n If we predict an ‘A’ as the first letter then the suffix ‘AA’ should get much\n more probability than ‘riple A’. If we predict ‘t’ first, the opposite\n should be true.\n \n\n\nIn fact speech recognizers using CTC don’t learn a language model over the\noutput nearly as well as models which are conditionally dependent.\n However, a separate language model can\nbe included and usually gives a good boost to accuracy.\n\n\nThe conditional independence assumption made by CTC isn’t always a bad\nthing. Baking in strong beliefs over output interactions makes the model less\nadaptable to new or altered domains. For example, we might want to use a speech\nrecognizer trained on phone conversations between friends to transcribe\ncustomer support calls. The language in the two domains can be quite different\neven if the acoustic model is similar. With a CTC acoustic model, we can easily\nswap in a new language model as we change domains.\n\n\n### Alignment Properties\n\n\nThe CTC algorithm is *alignment-free*. The objective function\nmarginalizes over all alignments. While CTC does make strong assumptions about\nthe form of alignments between XXX and YYY, the\nmodel is agnostic as to how probability is distributed amongst them. In some\nproblems CTC ends up allocating most of the probability to a single alignment.\nHowever, this isn’t guaranteed.\n\nWe could force the model to choose a single\nalignment by replacing the sum with a max in the objective function,\np(Y∣X)=maxA∈AX,Y∏t=1Tp(at∣X).\np(Y \\mid X) \\enspace = \\enspace \\max\\_{A \\in \\mathcal{A}\\_{X,Y}} \\enspace \\prod\\_{t=1}^T \\; p(a\\_t \\mid X).\np(Y∣X)=A∈AX,Y​max​t=1∏T​p(at​∣X).\n\n\nAs mentioned before, CTC only allows *monotonic* alignments. In\nproblems such as speech recognition this may be a valid assumption. For other\nproblems like machine translation where a future word in a target sentence\ncan align to an earlier part of the source sentence, this assumption is a\ndeal-breaker.\n\n\nAnother important property of CTC alignments is that they are\n*many-to-one*. Multiple inputs can align to at most one output. In some\ncases this may not be desirable. We might want to enforce a strict one-to-one\ncorrespondence between elements of XXX and\nY.Y.Y. Alternatively, we may want to allow multiple output\nelements to align to a single input element. For example, the characters\n“th” might align to a single input step of audio. 
A character based CTC model\nwould not allow that.\n\n\nThe many-to-one property implies that the output can’t have more time-steps\nthan the input.\n\n If YYY has rrr consecutive\n repeats, then the length of YYY must be less than\n the length of XXX by 2r−1.2r - 1.2r−1.\n\nThis is usually not a problem for speech and handwriting recognition since the\ninput is much longer than the output. However, for other problems where\nYYY is often longer than XXX, CTC just won’t\nwork.\n\n\nCTC in Context\n--------------\n\n\nIn this section we’ll discuss how CTC relates to other commonly used\nalgorithms for sequence modeling.\n\n\n### HMMs\n\n\nAt a first glance, a Hidden Markov Model (HMM) seems quite different from\nCTC. But, the two algorithms are actually quite similar. Understanding the\nrelationship between them will help us understand what advantages CTC has over\nHMM sequence models and give us insight into how CTC could be changed for\nvarious use cases.\n\n\nLet’s use the same notation as before,\nXXX is the input sequence and YYY\nis the output sequence with lengths TTT and\nUUU respectively. We’re interested in learning\np(Y∣X).p(Y \\mid X).p(Y∣X). One way to simplify the problem is to apply\nBayes’ Rule:\np(Y∣X)∝p(X∣Y)p(Y).\np(Y \\mid X) \\; \\propto \\; p(X \\mid Y) \\; p(Y).\np(Y∣X)∝p(X∣Y)p(Y).\n\nThe p(Y)p(Y)p(Y) term can be any language model, so let’s focus on\np(X∣Y).p(X \\mid Y).p(X∣Y). Like before we’ll let\nA\\mathcal{A}A be a set of allowed alignments between\nXXX and Y.Y.Y. Members of\nA\\mathcal{A}A have length T.T.T.\nLet’s otherwise leave A\\mathcal{A}A unspecified for now. We’ll\ncome back to it later. We can marginalize over alignments to get\np(X∣Y)=∑A∈Ap(X,A∣Y).\np(X \\mid Y)\\; = \\; \\sum\\_{A \\in \\mathcal{A}} \\; p(X, A \\mid Y).\np(X∣Y)=A∈A∑​p(X,A∣Y).\nTo simplify notation, let’s remove the conditioning on YYY, it\nwill be present in every p(⋅).p(\\cdot).p(⋅). With two assumptions we can\nwrite down the standard HMM.\n\n\n\n\n\np(X)=p(X) \\quad =p(X)=\n\n The probability of the input\n \n\n\n∑A∈A∏t=1T\\sum\\_{A \\in \\mathcal{A}} \\; \\prod\\_{t=1}^T∑A∈A​∏t=1T​\n\n Marginalizes over alignments\n \n\n\np(xt∣at)⋅p(x\\_t \\mid a\\_t) \\quad \\cdotp(xt​∣at​)⋅\n\n The emission probability\n \n\n\np(at∣at−1)p(a\\_t \\mid a\\_{t-1})p(at​∣at−1​)\n\n The transition probability\n \n\n\n\nThe first assumption is the usual Markov property. The state\nata\\_tat​ is conditionally independent of all historic states given\nthe previous state at−1.a\\_{t-1}.at−1​. The second is that the observation\nxtx\\_txt​ is conditionally independent of everything given the\ncurrent state at.a\\_t.at​.\n\n\n\n![](assets/hmm.svg)\n\n The graphical model for an HMM.\n \n\nNow we can take just a few steps to transform the HMM into CTC and see how\nthe two models relate. First, let’s assume that the transition probabilities\np(at∣at−1)p(a\\_t \\mid a\\_{t-1})p(at​∣at−1​) are uniform. This gives\np(X)∝∑A∈A∏t=1Tp(xt∣at).\np(X) \\enspace \\propto \\enspace \\sum\\_{A \\in \\mathcal{A}} \\enspace \\prod\\_{t=1}^T \\; p(x\\_t \\mid a\\_t).\np(X)∝A∈A∑​t=1∏T​p(xt​∣at​).\nThere are only two differences from this equation and the CTC loss function.\nThe first is that we are learning a model of XXX given\nYYY as opposed to YYY given X.X.X.\nThe second is how the set A\\mathcal{A}A is produced. 
Let’s deal\nwith each in turn.\n\n\nThe HMM can be used with discriminative models which estimate p(a∣x).p(a \\mid x).p(a∣x).\nTo do this, we apply Bayes’ rule and rewrite the model as\np(X)∝∑A∈A∏t=1Tp(at∣xt)p(xt)p(at)\np(X) \\enspace \\propto \\enspace \\sum\\_{A \\in \\mathcal{A}} \\enspace \\prod\\_{t=1}^T \\; \\frac{p(a\\_t \\mid x\\_t)\\; p(x\\_t)}{p(a\\_t)}\np(X)∝A∈A∑​t=1∏T​p(at​)p(at​∣xt​)p(xt​)​\n∝∑A∈A∏t=1Tp(at∣xt)p(at). \n\\quad\\quad\\quad\\propto \\enspace \\sum\\_{A \\in \\mathcal{A}} \\enspace \\prod\\_{t=1}^T \\; \\frac{p(a\\_t \\mid x\\_t)}{p(a\\_t)}.\n∝A∈A∑​t=1∏T​p(at​)p(at​∣xt​)​.\n\n\n\nIf we assume a uniform prior over the states aaa and condition on all of\nXXX instead of a single element at a time, we arrive at\np(X)∝∑A∈A∏t=1Tp(at∣X).\np(X) \\enspace \\propto \\enspace \\sum\\_{A \\in \\mathcal{A}} \\enspace \\prod\\_{t=1}^T \\; p(a\\_t \\mid X).\np(X)∝A∈A∑​t=1∏T​p(at​∣X).\n\n\nThe above equation is essentially the CTC loss function, assuming the set\nA\\mathcal{A}A is the same. In fact, the HMM framework does not specify what\nA\\mathcal{A}A should consist of. This part of the model can be designed on a\nper-problem basis. In many cases the model doesn’t condition on YYY and the\nset A\\mathcal{A}A consists of all possible length TTT sequences from the\noutput alphabet. In this case, the HMM can be drawn as an *ergodic* state\ntransition diagram in which every state connects to every other state. The\nfigure below shows this model with the alphabet or set of unique hidden states\nas {a,b,c}.\\{a, b, c\\}.{a,b,c}.\n\n\nIn our case the transitions allowed by the model are strongly related to\nY.Y.Y. We want the HMM to reflect this. One possible model could\nbe a simple linear state transition diagram. The figure below shows this with\nthe same alphabet as before and Y=Y =Y= [a, b]. Another commonly\nused model is the *Bakis* or left-right HMM. In this model any\ntransition which proceeds from the left to the right is allowed.\n\n\n\n\n![](assets/ergodic_hmm.svg)\n\n**Ergodic HMM:** Any node can be either a starting or\n final state.\n \n\n\n![](assets/linear_hmm.svg)\n\n**Linear HMM:** Start on the left, end on the right.\n \n\n\n![](assets/ctc_hmm.svg)\n\n**CTC HMM:** The first two nodes are the starting\n states and the last two nodes are the final states.\n \n\n\nIn CTC we augment the alphabet with ϵ\\epsilonϵ and the HMM model allows a\nsubset of the left-right transitions. The CTC HMM has two start\nstates and two accepting states.\n\n\nOne possible source of confusion is that the HMM model differs for any unique\nY.Y.Y. This is in fact standard in applications such as speech recognition. The\nstate diagram changes based on the output Y.Y.Y. However, the functions which\nestimate the observation and transition probabilities are shared.\n\n\nLet’s discuss how CTC improves on the original HMM model. First, we can think\nof the CTC state diagram as a special case HMM which works well for many\nproblems of interest. Incorporating the blank as a hidden state in the HMM\nallows us to use the alphabet of YYY as the other hidden states. This model\nalso gives a set of allowed alignments which may be a good prior for some\nproblems.\n\n\nPerhaps most importantly, CTC is discriminative. 
It models p(Y∣X)p(Y \\mid\n X)p(Y∣X) directly, an idea that’s been important in the past with other\ndiscriminative improvements to HMMs.\nDiscriminative training let’s us apply powerful learning algorithms like the\nRNN directly towards solving the problem we care about.\n\n\n### Encoder-Decoder Models\n\n\nThe encoder-decoder is perhaps the most commonly used framework for sequence\nmodeling with neural networks. These models have an encoder\nand a decoder. The encoder maps the input sequence XXX into a\nhidden representation. The decoder consumes the hidden representation and\nproduces a distribution over the outputs. We can write this as\n\nH=encode(X)p(Y∣X)=decode(H).\n\\begin{aligned}\nH\\enspace &= \\enspace\\textsf{encode}(X) \\\\[.5em]\np(Y \\mid X)\\enspace &= \\enspace \\textsf{decode}(H).\n\\end{aligned}\nHp(Y∣X)​=encode(X)=decode(H).​\n\nThe encode(⋅)\\textsf{encode}(\\cdot)encode(⋅) and\ndecode(⋅)\\textsf{decode}(\\cdot)decode(⋅) functions are typically RNNs. The\ndecoder can optionally be equipped with an attention mechanism. The hidden\nstate sequence HHH has the same number of time-steps as the\ninput, T.T.T. Sometimes the encoder subsamples the input. If the\nencoder subsamples the input by a factor sss then\nHHH will have T/sT/sT/s time-steps.\n\n\nWe can interpret CTC in the encoder-decoder framework. This is helpful to\nunderstand the developments in encoder-decoder models that are applicable to\nCTC and to develop a common language for the properties of these\nmodels.\n\n\n**Encoder:** The encoder of a CTC model can be just about any\nencoder we find in commonly used encoder-decoder models. For example the\nencoder could be a multi-layer bidirectional RNN or a convolutional network.\nThere is a constraint on the CTC encoder that doesn’t apply to the others. The\ninput length cannot be sub-sampled so much that T/sT/sT/s\nis less than the length of the output.\n\n\n**Decoder:** We can view the decoder of a CTC model as a simple\nlinear transformation followed by a softmax normalization. This layer should\nproject all TTT steps of the encoder output\nHHH into the dimensionality of the output alphabet.\n\n\nWe mentioned earlier that CTC makes a conditional independence assumption over\nthe characters in the output sequence. This is one of the big advantages that\nother encoder-decoder models have over CTC — they can model the\ndependence over the outputs. However in practice, CTC is still more commonly\nused in tasks like speech recognition as we can partially overcome the\nconditional independence assumption by including an external language model.\n\n\nPractitioner’s Guide\n--------------------\n\n\nSo far we’ve mostly developed a conceptual understanding of CTC. Here we’ll go\nthrough a few implementation tips for practitioners.\n\n\n**Software:** Even with a solid understanding of CTC, the\nimplementation is difficult. The algorithm has several edge cases and a fast\nimplementation should be written in a lower-level programming language.\nOpen-source software tools make it much easier to get started:\n\n\n* Baidu Research has open-sourced\n [warp-ctc](https://github.com/baidu-research/warp-ctc). The\n package is written in C++ and CUDA. The CTC loss function runs on either\n the CPU or the GPU. 
Bindings are available for Torch, TensorFlow and [PyTorch](https://github.com/awni/warp-ctc).
* TensorFlow has built-in [CTC loss](https://www.tensorflow.org/api_docs/python/tf/nn/ctc_loss) and [CTC beam search](https://www.tensorflow.org/api_docs/python/tf/nn/ctc_beam_search_decoder) functions for the CPU.
* Nvidia also provides a GPU implementation of CTC in [cuDNN](https://developer.nvidia.com/cudnn) versions 7 and up.

**Numerical Stability:** Computing the CTC loss naively is numerically unstable. One method to avoid this is to normalize the $\alpha$'s at each time-step. The original publication has more detail on this, including the adjustments to the gradient. In practice this works well enough for medium-length sequences but can still underflow for long sequences. A better solution is to compute the loss function in log-space with the log-sum-exp trick. When computing the sum of two probabilities in log space, use the identity
$$\log(e^a + e^b) = \max\{a, b\} + \log(1 + e^{-|a-b|}).$$
Most programming languages have a stable function to compute $\log(1+x)$ when $x$ is close to zero.

Inference should also be done in log-space using the log-sum-exp trick.

**Beam Search:** There are a couple of good tips to know about when implementing and using the CTC beam search.

The correctness of the beam search can be tested as follows.

1. Run the beam search algorithm on an arbitrary input.
2. Save the inferred output $\bar{Y}$ and the corresponding score $\bar{c}$.
3. Compute the actual CTC score $c$ for $\bar{Y}$.
4. Check that $\bar{c} \approx c$ with the former being no greater than the latter. As the beam size increases, the inferred output $\bar{Y}$ may change, but the two numbers should grow closer.

A common question when using a beam search decoder is the size of the beam to use. There is a trade-off between accuracy and runtime. We can check if the beam size is in a good range. To do this, first compute the CTC score for the inferred output, $c_i$. Then compute the CTC score for the ground truth output, $c_g$. If the two outputs are not the same, we should have $c_g \le c_i$; if the ground truth instead scores noticeably higher than the inferred output, the beam search missed a higher-probability hypothesis and a larger beam may help.
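As a small, self-contained illustration of the log-space advice above, here is one way the quoted identity might be wrapped as a helper. The function name `log_add` and the toy numbers are purely illustrative; `math.log1p` is Python's stable $\log(1+x)$ routine of the kind mentioned in the text.

```python
import math

def log_add(a, b):
    """Stable log(exp(a) + exp(b)) using the identity quoted above."""
    if a == -math.inf:
        return b
    if b == -math.inf:
        return a
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# The probability of a single 400-step alignment with per-step probability
# 0.1 underflows in ordinary arithmetic (0.1 ** 400 == 0.0), but its log is
# just a sum of per-step log-probabilities.
path_a = 400 * math.log(0.1)
path_b = 400 * math.log(0.09)

# Merging two alignments that collapse to the same output stays in log
# space the whole time; exponentiating either value alone would give 0.0.
print(log_add(path_a, path_b))   # ≈ -921.0
```

A log-space forward pass or prefix beam search would call a helper like this every time it merges the scores of alignments that map to the same prefix.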